Preview Environments with FastAPI and EKS
Published on October 12, 2025
Preview environments let you create an isolated, production‑like stack for every pull request so features can be tested with realistic data before merging. This post walks through a practical setup for a FastAPI app on EKS using RDS snapshots, Helm, AWS CDK, and GitHub Actions. All code is available at fastapi-preview-environments.
Architecture
- App: FastAPI + SQLAlchemy + Pydantic
- Database: RDS PostgreSQL restored from the latest automated snapshot of a staging DB
- Orchestration: AWS CDK (Python) to discover the latest snapshot and create an instance
- Platform: EKS + ALB Ingress + External DNS
- Secrets: External Secrets syncs DB creds into Kubernetes
- Packaging: Docker image built from python:3.12-slim, with uv for installs
Required Kubernetes components:
- External DNS: Automatically adds DNS records per preview environment
- ALB Ingress: Provides ingress with support for automated certificate discovery
Here's the flow:
- Developer adds the preview label to the PR
- GitHub Actions workflow triggers
- CDK provisions a new RDS instance from the latest staging snapshot
- Docker image is built and pushed to ECR
- Helm deploys the app to a new namespace with the database connection
- ALB ingress exposes the app at preview-{PR_NUMBER}.example.com
Code Structure
- app/: FastAPI application, SQLAlchemy models, Pydantic schemas, DB wiring
- cdk/: CDK app and stack that restores an RDS instance from the latest snapshot
- helm/fastapi-preview-environment/: Helm chart with Deployment, Service, Ingress, HPA, ExternalSecret
- Dockerfile: container image for the app
- pyproject.toml: dependencies for uv install in Docker
The Components
FastAPI Application
This project contains a standard FastAPI service with a health check that verifies both API and database connectivity.
# app/main.py
from fastapi import Depends, FastAPI, Response, status
from sqlalchemy import text
from sqlalchemy.orm import Session

from app.db import get_db  # session dependency from the app's DB wiring

app = FastAPI()


@app.get("/health")
def health_check(response: Response, db: Session = Depends(get_db)):
    health_status = {"status": "healthy", "checks": {"api": "ok", "database": "ok"}}
    try:
        db.execute(text("SELECT 1"))
    except Exception as e:
        health_status["status"] = "unhealthy"
        health_status["checks"]["database"] = f"failed: {str(e)}"
        response.status_code = status.HTTP_503_SERVICE_UNAVAILABLE
    return health_status
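The get_db dependency comes from the app's DB wiring. Here is a minimal sketch of app/db.py, assuming the chart surfaces the connection settings as DB_* environment variables (the exact variable names depend on the Deployment template):

# app/db.py (sketch; assumes DB_* env vars are injected by the chart)
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

DATABASE_URL = (
    f"postgresql://{os.environ['DB_USERNAME']}:{os.environ['DB_PASSWORD']}"
    f"@{os.environ['DB_HOST']}:{os.environ.get('DB_PORT', '5432')}/{os.environ['DB_NAME']}"
)

# pool_pre_ping drops stale connections, useful for freshly restored DBs
engine = create_engine(DATABASE_URL, pool_pre_ping=True)
SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)


def get_db():
    # Yield one session per request and always close it afterwards
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()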
Containerizing with Docker
The Dockerfile installs dependencies via uv and runs Uvicorn:
FROM python:3.12-slim
WORKDIR /app
RUN apt-get update && apt-get install -y gcc postgresql-client curl \
&& curl -LsSf https://astral.sh/uv/install.sh | sh \
&& rm -rf /var/lib/apt/lists/*
ENV PATH="/root/.local/bin:$PATH"
COPY pyproject.toml .
RUN uv pip install --system --no-cache -r pyproject.toml
COPY app/ ./app/
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Provisioning DBs via CDK
AWS CDK provisions the preview stack. For each deployment, it restores a fresh RDS instance from the latest staging snapshot. The flow is simple: first look up the latest snapshot, next create the instance from that snapshot, and finally expose connection details for downstream steps.
To start, the stack uses an AwsCustomResource to look up the most recent automated RDS snapshot from the staging database.
# Look up the latest automated RDS snapshot of the staging database
from aws_cdk import custom_resources as cr

get_latest_snapshot = cr.AwsCustomResource(
    self,
    "GetLatestSnapshot",
    on_create=cr.AwsSdkCall(
        service="RDS",
        action="describeDBSnapshots",
        parameters={
            "DBInstanceIdentifier": staging_db_name,
            "SnapshotType": "automated",
        },
        physical_resource_id=cr.PhysicalResourceId.of("latest-snapshot-lookup"),
    ),
    policy=cr.AwsCustomResourcePolicy.from_sdk_calls(
        resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE
    ),
)

snapshot_identifier = get_latest_snapshot.get_response_field(
    "DBSnapshots.0.DBSnapshotIdentifier"
)
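One caveat: the RDS API doesn't guarantee the ordering of DBSnapshots, so index 0 isn't necessarily the newest snapshot. If that ever bites, an explicit sort is safer. A boto3 sketch of the same lookup (the staging identifier is illustrative):

# Sketch: select the newest automated snapshot explicitly with boto3
import boto3

rds_client = boto3.client("rds")
snapshots = rds_client.describe_db_snapshots(
    DBInstanceIdentifier="staging-db",  # hypothetical staging identifier
    SnapshotType="automated",
)["DBSnapshots"]

# Only completed snapshots carry SnapshotCreateTime; sort on it explicitly
completed = [s for s in snapshots if "SnapshotCreateTime" in s]
latest = max(completed, key=lambda s: s["SnapshotCreateTime"])
print(latest["DBSnapshotIdentifier"])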
Then, the stack creates a new RDS instance from that snapshot inside the VPC with the right sizing and network settings.
from aws_cdk import aws_ec2 as ec2, aws_rds as rds

database = rds.DatabaseInstanceFromSnapshot(
    self,
    "PostgresDatabase",
    snapshot_identifier=snapshot_identifier,
    engine=rds.DatabaseInstanceEngine.postgres(
        version=rds.PostgresEngineVersion.VER_17_6
    ),
    instance_type=ec2.InstanceType.of(
        ec2.InstanceClass.BURSTABLE3,
        ec2.InstanceSize.SMALL,
    ),
    vpc=vpc,
    vpc_subnets=subnets,
    security_groups=[db_security_group],
    publicly_accessible=False,
)
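One addition worth considering (not in the original stack): preview databases are disposable, so telling CloudFormation to delete the instance outright avoids orphaned instances and final snapshots piling up:

from aws_cdk import RemovalPolicy

# Delete the instance when the stack is destroyed instead of retaining it
database.apply_removal_policy(RemovalPolicy.DESTROY)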
Finally, it exposes the database host and port as CloudFormation outputs so CI/CD and Helm can consume them.
from aws_cdk import CfnOutput

# Output database connection details
CfnOutput(
    self,
    "DatabaseHost",
    value=database.db_instance_endpoint_address,
    description="Database endpoint address",
    export_name=f"{environment}-db-host",
)
CfnOutput(
    self,
    "DatabasePort",
    value=database.db_instance_endpoint_port,
    description="Database port",
    export_name=f"{environment}-db-port",
)
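Because the outputs use export_name, another CDK stack in the same account and region could also import them directly rather than scraping CloudFormation:

from aws_cdk import Fn

# Resolve the exported host at deploy time via Fn::ImportValue
db_host = Fn.import_value(f"{environment}-db-host")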
Helm Values (values-preview.yaml)
We keep a separate values-preview.yaml, and CI substitutes the branch name into it so each PR spins up its own Helm release.
Example branch slug + templating in CI:
# Derive a slug from the PR branch (e.g., feature/cool-thing -> feature-cool-thing);
# the sed trims the dash that tr makes out of the trailing newline
BRANCH_SLUG=$(echo "${GITHUB_HEAD_REF:-$GITHUB_REF_NAME}" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-//; s/-$//')

# Render the values file, substituting only the CI-provided variables so the
# External Secrets placeholders survive (db_host is exported from the CDK
# outputs in the full workflow below)
export branch_name="$BRANCH_SLUG"
envsubst '${branch_name} ${db_host}' < helm/fastapi-preview-environment/values-preview.yaml > values.rendered.yaml

# Deploy with the rendered values
helm upgrade --install \
  "backend-preview-$BRANCH_SLUG" \
  helm/fastapi-preview-environment \
  -f values.rendered.yaml
Example values-preview.yaml:
# Basic release identity
nameOverride: backend-preview-${branch_name}
fullnameOverride: backend-preview-${branch_name} # unique per PR

replicaCount: 1

image:
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80
  targetPort: 8000

ingress:
  enabled: true
  className: alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
  hosts:
    - host: backend-preview-${branch_name}.example.com # per-PR host
      paths:
        - path: /
          pathType: Prefix

# Database settings exposed via CDK outputs + External Secrets
env:
  # db_username/db_password are synced by the chart's ExternalSecret (backed
  # by SSM/Secrets Manager); the restricted envsubst call leaves them as-is
  db_host: ${db_host}
  db_port: 5432
  db_name: backend
  db_username: ${db_username}
  db_password: ${db_password}

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
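The chart also ships an ExternalSecret (per the code structure above) that syncs the DB credentials into the preview namespace. A minimal sketch of what that template might look like, assuming a ClusterSecretStore named aws-secrets, a Secrets Manager entry at staging/db, and the chart's fullname helper (all three are assumptions):

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ include "fastapi-preview-environment.fullname" . }}-db
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets                # assumed store name
  target:
    name: {{ include "fastapi-preview-environment.fullname" . }}-db
  data:
    - secretKey: db_username
      remoteRef:
        key: staging/db              # assumed Secrets Manager path
        property: username
    - secretKey: db_password
      remoteRef:
        key: staging/db
        property: password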
CI/CD with GitHub Actions
- Trigger: The workflow runs on pull requests labeled preview.
- Provision: Checks out the code, configures AWS credentials, and runs cdk deploy to create the RDS instance from the latest snapshot.
- Build: Builds the Docker image, tags it (for example, with the Git SHA), and pushes it to Amazon ECR.
- Deploy: Runs helm upgrade --install, overriding values such as ingress_host and db_host per PR.
The full pipeline:
name: Preview Environments

on:
  pull_request:
    types: [labeled, synchronize, reopened]

env:
  AWS_REGION: us-east-1            # set your region
  EKS_CLUSTER: my-eks-cluster      # set your EKS cluster name
  ECR_REPOSITORY: fastapi-preview  # set your ECR repo name

jobs:
  preview:
    name: Preview
    # Only run for PRs carrying the preview label
    if: contains(github.event.pull_request.labels.*.name, 'preview')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v5

      - name: Setup Node (CDK CLI)
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Setup Python (CDK app)
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install CDK CLI
        run: npm i -g aws-cdk@2

      - name: Install CDK dependencies
        run: pip install -r cdk/requirements.txt

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Compute variables
        id: vars
        shell: bash
        run: |
          BRANCH_SLUG=$(echo "${GITHUB_HEAD_REF:-$GITHUB_REF_NAME}" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//')
          echo "BRANCH_SLUG=$BRANCH_SLUG" >> $GITHUB_ENV
          echo "PR_NUMBER=${{ github.event.number }}" >> $GITHUB_ENV
          echo "IMAGE_TAG=$BRANCH_SLUG-${{ github.sha }}" >> $GITHUB_ENV
          echo "RELEASE=backend-preview-$BRANCH_SLUG" >> $GITHUB_ENV
          echo "NAMESPACE=preview-$BRANCH_SLUG" >> $GITHUB_ENV

      - name: Build and push image
        env:
          ECR_REGISTRY: ${{ steps.ecr.outputs.registry }}
        run: |
          docker build -t $ECR_REGISTRY/${{ env.ECR_REPOSITORY }}:$IMAGE_TAG .
          docker push $ECR_REGISTRY/${{ env.ECR_REPOSITORY }}:$IMAGE_TAG

      - name: CDK deploy (provision DB from snapshot)
        working-directory: cdk
        run: |
          cdk deploy --require-approval never \
            -c branch=$BRANCH_SLUG \
            -c image=$IMAGE_TAG

      - name: Fetch DB outputs
        id: db
        run: |
          STACK_NAME=fastapi-preview-$BRANCH_SLUG
          DB_HOST=$(aws cloudformation describe-stacks --stack-name "$STACK_NAME" \
            --query "Stacks[0].Outputs[?OutputKey=='DatabaseHost'].OutputValue" --output text)
          DB_PORT=$(aws cloudformation describe-stacks --stack-name "$STACK_NAME" \
            --query "Stacks[0].Outputs[?OutputKey=='DatabasePort'].OutputValue" --output text)
          echo "DB_HOST=$DB_HOST" >> $GITHUB_ENV
          echo "DB_PORT=$DB_PORT" >> $GITHUB_ENV

      - name: Render Helm values
        run: |
          export branch_name="$BRANCH_SLUG"
          export db_host="$DB_HOST"
          envsubst '${branch_name} ${db_host}' < helm/fastapi-preview-environment/values-preview.yaml > values.rendered.yaml

      - name: Configure kubectl for EKS
        run: aws eks update-kubeconfig --name $EKS_CLUSTER --region $AWS_REGION

      - name: Deploy with Helm
        run: |
          helm upgrade --install "$RELEASE" helm/fastapi-preview-environment \
            -n "$NAMESPACE" --create-namespace -f values.rendered.yaml \
            --set image.repository=${{ steps.ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }} \
            --set image.tag=$IMAGE_TAG
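With the release rolled out, a quick smoke test exercises DNS, the ALB, and the database path in one shot (this assumes the Deployment name matches the release name, which the fullnameOverride above arranges):

# Wait for the rollout, then hit the health endpoint through the ALB
kubectl rollout status deploy/$RELEASE -n $NAMESPACE --timeout=300s
curl -fsS https://backend-preview-$BRANCH_SLUG.example.com/health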
The Good Parts
- Fast feedback loops: Developers test PRs in isolation without waiting for staging.
- Realistic data: Testing with production-like data catches bugs that unit tests miss and reveals issues at real data volumes and edge cases.
- Shareable URLs: Product managers and QA can validate features without running anything locally.
- Automatic cleanup: When the PR closes, the namespace and RDS instance are destroyed by a cleanup workflow (for example, helm uninstall and cdk destroy); a sketch follows this list.
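A minimal sketch of that cleanup workflow, mirroring the names from the deploy job (illustrative, not the repo's exact file):

name: Preview Cleanup

on:
  pull_request:
    types: [closed]

jobs:
  cleanup:
    if: contains(github.event.pull_request.labels.*.name, 'preview')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Compute slug
        run: |
          BRANCH_SLUG=$(echo "$GITHUB_HEAD_REF" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//')
          echo "BRANCH_SLUG=$BRANCH_SLUG" >> $GITHUB_ENV
      - name: Remove Helm release and namespace
        run: |
          aws eks update-kubeconfig --name my-eks-cluster --region us-east-1
          helm uninstall backend-preview-$BRANCH_SLUG -n preview-$BRANCH_SLUG || true
          kubectl delete namespace preview-$BRANCH_SLUG --ignore-not-found
      - name: Destroy the database stack
        working-directory: cdk
        run: |
          npm i -g aws-cdk@2
          pip install -r requirements.txt
          cdk destroy --force -c branch=$BRANCH_SLUG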
The Not-So-Good Parts
- Cost: Each preview environment incurs cost. A db.t3.small RDS instance is roughly $30/month if left running 24/7. For short‑lived PRs (1–2 days), it’s more like $2–3 per environment, but it adds up.
- Slow initial deploys: The first time you add the preview label, it can take 10–15 minutes to provision the RDS instance. Subsequent pushes are faster since CDK won’t recreate the database.
Conclusion
On a recent team, preview environments helped six backend engineers ship twice as many PRs with higher confidence. It felt like overkill at first but quickly became essential. The combination of Helm and GitHub Actions is standard; the differentiator is per‑branch Helm releases and isolated data via RDS snapshots. This approach extends cleanly to other stacks (for example, Next.js or Express).