
Deploy to AWS with CDK

Learn how to deploy Pika to AWS using the AWS CDK (Cloud Development Kit), which provides type-safe infrastructure as code for your application.

By the end of this guide, you will:

  • Configure AWS credentials
  • Set up your CDK project
  • Deploy the Pika service stack
  • Deploy the Pika chat application
  • Configure environment variables
  • Verify your deployment

Prerequisites:

  • AWS account with appropriate permissions
  • AWS CLI installed and configured
  • Node.js 22+ installed
  • pnpm package manager
  • Pika project cloned locally

AWS CDK uses TypeScript to define cloud infrastructure, which is then synthesized into CloudFormation templates.

Pika typically consists of two stacks:

  1. Pika Service Stack (services/pika/) - Backend infrastructure

    • Lambda functions
    • DynamoDB tables
    • API Gateway
    • IAM roles and policies
  2. Pika Chat Stack (apps/pika-chat/infra/) - Frontend application

    • CloudFront distribution
    • S3 bucket for static assets
    • Lambda@Edge for SSR

Set up your AWS credentials for deployment.

aws configure
# You'll be prompted for:
# AWS Access Key ID: YOUR_ACCESS_KEY
# AWS Secret Access Key: YOUR_SECRET_KEY
# Default region name: us-east-1
# Default output format: json
# Configure a named profile
aws configure --profile pika-dev
# Set the profile for deployment
export AWS_PROFILE=pika-dev
aws sts get-caller-identity

You should see your AWS account information.

Install CDK and project dependencies.

npm install -g aws-cdk
# Verify installation
cdk --version
# From project root
pnpm install

Set up environment variables and configuration.

Location: services/pika/.env

# Deployment stage
STAGE=dev
# AWS configuration
AWS_REGION=us-east-1
AWS_ACCOUNT_ID=123456789012
# OpenAI API key (if using OpenAI models)
OPENAI_API_KEY=your-openai-api-key
# Other configuration
PIKA_SERVICE_NAME=pika

Location: services/pika/cdk.json

{
    "app": "npx ts-node --project tsconfig.json bin/pika.ts",
    "context": {
        "@aws-cdk/core:enableStackNameDuplicates": "true",
        "aws-cdk:enableDiffNoFail": "true",
        "@aws-cdk/core:stackRelativeExports": "true",
        "@aws-cdk/aws-ecr-assets:dockerIgnoreSupport": true,
        "@aws-cdk/aws-secretsmanager:parseOwnedSecretName": true,
        "@aws-cdk/aws-kms:defaultKeyPolicies": true
    }
}

Add AWS tags to all resources in your CDK stacks for better organization and cost tracking.

Location: pika-config.ts

export const pikaConfig: PikaConfig = {
    // ... existing config
    stackTags: {
        // Common tags applied to BOTH Pika service and Pika Chat stacks
        common: {
            'ManagedBy': 'Pika',
            'Project': 'MyCompany',
            'env': '{stage}', // Replaced with deployment stage
            'CostCenter': '12345'
        },
        // Tags specific to the Pika service stack (backend)
        pikaServiceTags: {
            'app': '{pika.projNameKebabCase}', // Pika service name
            'Tier': 'Backend'
        },
        // Tags specific to the Pika Chat stack (frontend)
        pikaChatTags: {
            'app': '{pikaChat.projNameKebabCase}', // Pika Chat app name
            'Tier': 'Frontend'
        },
        // Component tag names for cost tracking (optional but recommended)
        componentTagNames: ['component']
    }
};

The stackTags configuration has four sections:

  • common: Tags applied to both Pika service and Pika Chat stacks
  • pikaServiceTags: Additional tags only for the backend service stack (merged with common, overwrites on conflict)
  • pikaChatTags: Additional tags only for the frontend chat stack (merged with common, overwrites on conflict)
  • componentTagNames: Array of tag names to use for component identification (for cost tracking and analysis)
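
The merge behavior described above can be sketched as a plain object spread with the stack-specific tags applied last (a sketch only, not Pika's actual internals):

```typescript
type Tags = Record<string, string>;

// Stack-specific tags are spread after common tags, so on a key
// conflict the stack-specific value wins.
function mergeStackTags(common: Tags, stackSpecific: Tags): Tags {
    return { ...common, ...stackSpecific };
}

const commonTags: Tags = { ManagedBy: 'Pika', Tier: 'Shared' };
const serviceTags = mergeStackTags(commonTags, { Tier: 'Backend' });
// serviceTags is { ManagedBy: 'Pika', Tier: 'Backend' }: the
// conflicting 'Tier' key took the stack-specific value.
```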

Tags are interpolated at CDK synth time:

Basic Placeholders:

  • {stage} - Deployment stage (e.g., 'dev', 'prod')
  • {timestamp} - Current timestamp in ISO 8601 format
  • {accountId} - AWS account ID where stack is deployed
  • {region} - AWS region where stack is deployed

Pika Project Name Placeholders:

  • {pika.projNameL} - Pika project name (lowercase)
  • {pika.projNameKebabCase} - Pika project name (kebab-case)
  • {pika.projNameTitleCase} - Pika project name (TitleCase)
  • {pika.projNameCamel} - Pika project name (camelCase)
  • {pika.projNameHuman} - Pika project name (human-readable)

Pika Chat Project Name Placeholders:

  • {pikaChat.projNameL} - Pika Chat project name (lowercase)
  • {pikaChat.projNameKebabCase} - Pika Chat project name (kebab-case)
  • {pikaChat.projNameTitleCase} - Pika Chat project name (TitleCase)
  • {pikaChat.projNameCamel} - Pika Chat project name (camelCase)
  • {pikaChat.projNameHuman} - Pika Chat project name (human-readable)
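
A rough sketch of how synth-time interpolation of the basic placeholders could work (the function name and context shape here are illustrative, not Pika's actual API):

```typescript
interface TagContext {
    stage: string;
    accountId: string;
    region: string;
}

// Replace each basic placeholder with its concrete deployment value.
// {timestamp} resolves to the current time in ISO 8601 format.
function interpolateTagValue(value: string, ctx: TagContext): string {
    return value
        .replace(/\{stage\}/g, ctx.stage)
        .replace(/\{accountId\}/g, ctx.accountId)
        .replace(/\{region\}/g, ctx.region)
        .replace(/\{timestamp\}/g, new Date().toISOString());
}

const synthCtx: TagContext = { stage: 'dev', accountId: '123456789012', region: 'us-east-1' };
interpolateTagValue('{stage}', synthCtx);          // 'dev'
interpolateTagValue('{region}-{stage}', synthCtx); // 'us-east-1-dev'
```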

For the Pika service stack only, you can override the modifyStackTags method in custom-stack-defs.ts:

Location: services/pika/lib/stacks/custom-stack-defs.ts

modifyStackTags(tags: Record<string, string>, stage: string): Record<string, string> {
    return {
        ...tags,
        'CustomTag': 'CustomValue',
        'Stage': stage.toUpperCase(),
        'Owner': 'DevOps Team'
    };
}

The system automatically validates tags and warns about:

  • Keys exceeding 128 characters
  • Values exceeding 256 characters
  • Invalid characters (only letters, numbers, spaces, _ . : / = + - @ allowed)
  • Reserved aws: prefix
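
Those rules amount to a simple check you could run over any tag map yourself (a sketch of the constraints listed above, not Pika's actual validator):

```typescript
// Return warnings for a tag that breaks the AWS tag constraints
// listed above; an empty array means the tag is fine.
function validateTag(key: string, value: string): string[] {
    const warnings: string[] = [];
    // Only letters, numbers, spaces, and _ . : / = + - @ are allowed
    const allowed = /^[A-Za-z0-9 _.:/=+\-@]*$/;
    if (key.length > 128) warnings.push('key exceeds 128 characters');
    if (value.length > 256) warnings.push('value exceeds 256 characters');
    if (!allowed.test(key) || !allowed.test(value)) warnings.push('invalid characters');
    if (key.toLowerCase().startsWith('aws:')) warnings.push('reserved aws: prefix');
    return warnings;
}

validateTag('Project', 'MyCompany'); // [] (valid)
validateTag('aws:stack', 'x');       // ['reserved aws: prefix']
validateTag('Bad#Key', 'v');         // ['invalid characters']
```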

Component tags help you understand cost breakdowns within each AWS service. While AWS Cost Explorer lets you filter by service (e.g., "show me all Lambda costs"), component tags let you see which specific Lambda functions or resources are driving those costs.

Why Component Tags Matter:

When you look at AWS Cost Explorer, you can filter by service (S3, Lambda, DynamoDB, etc.), but you can't easily see:

  • Which Lambda function is consuming the most cost
  • Which DynamoDB table has the highest throughput charges
  • Which inference profile is being used most heavily for Bedrock costs

Component tags solve this by tagging every infrastructure resource with a descriptive component name.

Configure Component Tag Names:

Add a componentTagNames array to your stack tags configuration:

export const pikaConfig: PikaConfig = {
    // ... existing config
    stackTags: {
        common: {
            'ManagedBy': 'Pika',
            'env': '{stage}'
        },
        // Tag names to use for component identification
        componentTagNames: ['component']
    }
};

How It Works:

With componentTagNames: ['component'], every infrastructure resource gets tagged with a component identifier:

// Examples of how resources are tagged:
component: ConverseLambda // The main conversation handler Lambda
component: ChatMessagesTable // DynamoDB table for messages
component: PikaFilesBucket // S3 bucket for file storage
component: OpenSearchDomain // OpenSearch domain for session insights
component: Claude4SonnetInferenceProfile // Bedrock inference profile
component: ECSCluster // ECS cluster for chat webapp
component: FargateService // Fargate service running the webapp

Cost Explorer Analysis:

Once deployed, you can use AWS Cost Explorer to:

  1. Filter by service: e.g., "AWS Lambda"
  2. Group by tag: Select your component tag name (e.g., component)
  3. See breakdown: View costs for each Lambda function separately:
    • ConverseLambda: $X
    • ChatbotApiLambda: $Y
    • KeyRotationLambda: $Z

This is particularly valuable for:

  • Bedrock costs: See which inference profiles (models) are most expensive
  • Lambda costs: Identify which functions need optimization
  • DynamoDB costs: Find which tables have high read/write costs
  • S3 costs: Determine which buckets consume the most storage

Using Multiple Tag Names:

You can specify multiple component tag names to support different organizational structures:

componentTagNames: ['component', 'resource-type', 'cost-center']

This creates three tags on each resource:

// All three tags get the same component value
component: ConverseLambda
resource-type: ConverseLambda
cost-center: ConverseLambda
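
The expansion is a simple fan-out: each configured tag name receives the same component identifier as its value. A sketch (the helper name is illustrative):

```typescript
// Build the tag map applied to one resource: every configured tag
// name gets the component identifier as its value.
function componentTagsFor(component: string, tagNames: string[]): Record<string, string> {
    const result: Record<string, string> = {};
    for (const name of tagNames) {
        result[name] = component;
    }
    return result;
}

const componentTags = componentTagsFor('ConverseLambda', ['component', 'resource-type', 'cost-center']);
// { component: 'ConverseLambda', 'resource-type': 'ConverseLambda', 'cost-center': 'ConverseLambda' }
```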

Tagging Custom Infrastructure:

If you add custom infrastructure in custom-stack-defs.ts, use the applyComponentTags helper to ensure consistent tagging:

// In services/pika/lib/stacks/custom-stack-defs.ts
addStackResoucesAfterWeCreateThePikaConstruct(): void {
    const myCustomLambda = new lambda.Function(this.stack, 'MyCustomFunction', {
        // ... lambda configuration
    });
    // Apply component tags automatically
    this.applyComponentTags(myCustomLambda, 'MyCustomLambda');
}

// In apps/pika-chat/infra/lib/stacks/custom-stack-defs.ts
addStackResoucesAfterWeCreateThePikaChatConstruct(): void {
    const myBucket = new s3.Bucket(this.stack, 'MyCustomBucket', {
        // ... bucket configuration
    });
    // Apply component tags automatically
    this.applyComponentTags(myBucket, 'MyCustomS3Bucket');
}

AWS Lambda has a 4KB total limit for all environment variables. To keep your tags from consuming too much of that space, Pika enforces a 500-byte limit for the tag environment variables (STACK_TAGS + COMPONENT_TAG_NAMES combined).

Why 500 bytes?

This conservative limit leaves 3.5KB for all other environment variables your application needs (API endpoints, configuration values, secrets, etc.).

Check Your Tag Size:

You can verify your tag configuration size before deployment:

// In pika-config.ts or a Node.js REPL
const stackTags = {
    common: { /* your tags */ },
    pikaServiceTags: { /* your tags */ },
    pikaChatTags: { /* your tags */ },
    componentTagNames: ['component']
};

// Calculate size for Pika service stack
const serviceStackTags = { ...stackTags.common, ...stackTags.pikaServiceTags };
const serviceSize = JSON.stringify(serviceStackTags).length +
    JSON.stringify(stackTags.componentTagNames || []).length;
console.log('Pika service stack tag size:', serviceSize, 'bytes');

// Calculate size for Pika Chat stack
const chatStackTags = { ...stackTags.common, ...stackTags.pikaChatTags };
const chatSize = JSON.stringify(chatStackTags).length +
    JSON.stringify(stackTags.componentTagNames || []).length;
console.log('Pika Chat stack tag size:', chatSize, 'bytes');

// Both should be under 500 bytes
if (serviceSize > 500 || chatSize > 500) {
    console.error('Tags exceed 500 byte limit!');
} else {
    console.log('Tags are within size limits');
}

What Happens if You Exceed the Limit?

If your tags exceed 500 bytes, CDK synthesis will fail with a clear error message before any resources are deployed:

Error: Tag environment variables exceed size limit.
Total size: 623 bytes, maximum: 500 bytes.
STACK_TAGS size: 589 bytes, COMPONENT_TAG_NAMES size: 34 bytes.
Please reduce the number or length of tags in your pika-config.ts stackTags configuration.

Tips to Reduce Tag Size:

  1. Use shorter tag keys: env instead of Environment
  2. Use shorter tag values: dev instead of development
  3. Remove unnecessary tags
  4. Reduce the number of componentTagNames entries
  5. Use placeholder interpolation for repeated values
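
Tips 1 and 2 are easy to quantify with the same JSON.stringify sizing shown earlier (the tag values here are illustrative):

```typescript
// Verbose keys and values cost bytes against the 500-byte budget;
// shorter equivalents carry the same information for far less.
const verbose = { Environment: 'development', Application: 'pika-service' };
const compact = { env: 'dev', app: 'pika' };

const verboseSize = JSON.stringify(verbose).length;
const compactSize = JSON.stringify(compact).length;
// compactSize is less than half of verboseSize for the same two tags
```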

Key points:

  • Tags are optional - if not configured, no tags are applied
  • Tags apply to all resources in each stack automatically
  • Use common for shared tags, and pikaServiceTags/pikaChatTags for stack-specific tags
  • Stack-specific tags overwrite common tags if there's a naming conflict
  • Component tags help you analyze costs by specific infrastructure component within each AWS service
  • Tag environment variables are limited to 500 bytes to leave room for other configuration

Bootstrap your AWS environment for CDK deployments (one-time setup per account/region).

cd services/pika
# Bootstrap with your account and region
cdk bootstrap aws://123456789012/us-east-1

This creates necessary resources for CDK deployments (S3 bucket, IAM roles, etc.).

Deploy the backend infrastructure.

cd services/pika
# See what will be created
pnpm cdk:diff
# Deploy with approval prompts
pnpm cdk:deploy
# Or deploy without prompts (use with caution)
pnpm cdk:deploy --require-approval never

You'll see progress as resources are created:

✨ Synthesis time: 5.32s
pika-dev: deploying...
pika-dev: creating CloudFormation changeset...
✅ pika-dev
✨ Deployment time: 425.67s
Outputs:
pika-dev.ConverseFunctionUrl = https://abc123.lambda-url.us-east-1.on.aws/
pika-dev.ApiGatewayUrl = https://xyz789.execute-api.us-east-1.amazonaws.com/dev/
Stack ARN:
arn:aws:cloudformation:us-east-1:123456789012:stack/pika-dev/guid

Save the function URLs and API endpoints - you'll need them for configuration.
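
To avoid copying outputs by hand, CDK can also write them to a file with `cdk deploy --outputs-file outputs.json`, and a script can then pick out the values it needs. A sketch (the stack and output names mirror the example output above but are illustrative):

```typescript
type StackOutputs = Record<string, Record<string, string>>;

// Example contents of outputs.json as written by `cdk deploy --outputs-file`
const outputsJson = `{
    "pika-dev": {
        "ConverseFunctionUrl": "https://abc123.lambda-url.us-east-1.on.aws/",
        "ApiGatewayUrl": "https://xyz789.execute-api.us-east-1.amazonaws.com/dev/"
    }
}`;

// Look up one output value from one stack; undefined if missing.
function getOutput(json: string, stack: string, key: string): string | undefined {
    const parsed: StackOutputs = JSON.parse(json);
    return parsed[stack]?.[key];
}

const apiUrl = getOutput(outputsJson, 'pika-dev', 'ApiGatewayUrl');
// 'https://xyz789.execute-api.us-east-1.amazonaws.com/dev/'
```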

Deploy the frontend application.

Location: apps/pika-chat/.env

# Stage
PUBLIC_STAGE=dev
# Backend API endpoint (from service stack output)
PUBLIC_API_URL=https://xyz789.execute-api.us-east-1.amazonaws.com/dev
# Converse function URL (from service stack output)
PUBLIC_CONVERSE_URL=https://abc123.lambda-url.us-east-1.on.aws/
# AWS region
PUBLIC_AWS_REGION=us-east-1
cd apps/pika-chat
# Preview changes
pnpm cdk:diff
# Deploy
pnpm cdk:deploy
✅ pika-chat-dev
Outputs:
pika-chat-dev.CloudFrontUrl = https://d123456.cloudfront.net
pika-chat-dev.S3BucketName = pika-chat-dev-bucket-abc123
Stack ARN:
arn:aws:cloudformation:us-east-1:123456789012:stack/pika-chat-dev/guid

Open the CloudFront URL in your browser to access your deployed Pika chat application.

Step 8: Configure Custom Domain (Optional)


Set up a custom domain for your application.

  • Domain registered (Route 53, Google Domains, etc.)
  • SSL certificate in ACM (must be in us-east-1 for CloudFront)

Location: apps/pika-chat/infra/lib/pika-chat-stack.ts

const distribution = new cloudfront.Distribution(this, 'Distribution', {
    // ... other config
    domainNames: ['chat.yourdomain.com'],
    certificate: acm.Certificate.fromCertificateArn(
        this,
        'Certificate',
        'arn:aws:acm:us-east-1:123456789012:certificate/your-cert-id'
    )
});

import * as route53 from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';

const zone = route53.HostedZone.fromLookup(this, 'Zone', {
    domainName: 'yourdomain.com'
});

new route53.ARecord(this, 'ChatARecord', {
    zone: zone,
    recordName: 'chat',
    target: route53.RecordTarget.fromAlias(
        new targets.CloudFrontTarget(distribution)
    )
});

Then redeploy the chat stack:
pnpm cdk:deploy

Test your deployed application.

# Test converse function
curl https://abc123.lambda-url.us-east-1.on.aws/health
# Test API Gateway
curl https://xyz789.execute-api.us-east-1.amazonaws.com/dev/health

In the browser:

  1. Open the CloudFront URL
  2. Verify authentication works
  3. Test sending a message
  4. Check that the agent responds correctly

Check the logs:
# View Lambda logs
aws logs tail /aws/lambda/pika-dev-converse --follow
# View API Gateway logs
aws logs tail /aws/apigateway/pika-dev --follow

If you're using Bedrock models, you can verify your inference profiles were created:

# List all application inference profiles
aws bedrock list-inference-profiles --type-equals APPLICATION
# Filter for your stack's profiles
aws bedrock list-inference-profiles --type-equals APPLICATION \
--query 'inferenceProfileSummaries[?contains(inferenceProfileName, `pika-dev`)]'

To update your deployment, redeploy the changed stack:
# Service stack
cd services/pika
pnpm cdk:deploy
# Chat stack
cd apps/pika-chat
pnpm cdk:deploy

To roll back a failed update:
# List stack history
aws cloudformation describe-stack-events --stack-name pika-dev
# Rollback to previous version
# In CloudFormation console: Stack Actions → Roll back

To tear down your deployment, destroy the stacks in reverse order:
# Delete chat stack first
cd apps/pika-chat
pnpm cdk:destroy
# Then delete service stack
cd services/pika
pnpm cdk:destroy
Security:

  • Use IAM Roles: Don't hardcode credentials
  • Principle of Least Privilege: Grant minimal necessary permissions
  • Secrets Management: Store secrets in Secrets Manager
  • Enable CloudTrail: Audit AWS API calls

Cost optimization:

  • Use On-Demand Pricing: Start with on-demand, optimize later
  • Monitor Costs: Set up billing alerts
  • Right-Size Resources: Adjust Lambda memory and timeout
  • Cleanup Unused Resources: Remove old stacks and resources

Deployment:

  • Use Stages: Separate dev, staging, and production
  • Preview Changes: Always run cdk:diff before deploy
  • Automated Testing: Test before production deployment
  • Gradual Rollout: Consider blue/green deployments

If cdk deploy fails, first verify your identity and permissions:
# Check permissions
aws iam get-user
# Ensure you have CloudFormation permissions
aws cloudformation describe-stacks
  • Check CloudFormation events in AWS Console
  • Increase Lambda timeout if needed
  • Review security group and VPC settings

If the chat app isn't loading:

  • Check CloudFront distribution status (wait for deployment)
  • Verify S3 bucket permissions
  • Check browser console for errors
  • Verify API URLs in environment variables

If you see permission errors at runtime:

  • Review IAM role policies in CloudFormation
  • Check Lambda execution role
  • Verify API Gateway permissions