Static Site CI/CD: Astro + S3 + CloudFront with Dual Environment Deploys
I needed a portfolio site. The requirements were simple: fast, cheap to host, blog-friendly, and deployed automatically. I also wanted a dev environment so I could preview changes before they go live.
Here’s the full stack: Astro for the site, S3 + CloudFront for hosting, Terraform for infrastructure, and GitHub Actions for CI/CD. Two AWS accounts — one for dev, one for prod.
Why Astro
I considered Angular (my usual choice) and Next.js. But this is a content site with a blog, not an application. Astro ships zero JavaScript by default, renders everything to static HTML, and supports markdown blog posts out of the box.
The entire site builds in under 600ms. The production bundle is ~55KB.
Infrastructure
Each environment gets its own AWS account with identical infrastructure:
S3 Bucket (site content)
↓ OAC (Origin Access Control)
CloudFront Distribution (CDN + HTTPS)
↓
Route 53 (DNS)
↓
ACM Certificate (SSL)
The S3 bucket blocks all public access. CloudFront authenticates to S3 using Origin Access Control (OAC), not the legacy Origin Access Identity. This means the bucket never needs a public policy — CloudFront is the only thing that can read from it.
All defined in Terraform with environment-specific .tfvars:
# environments/prod.tfvars
environment = "production"
domain_name = "example.com"
subject_alternative_names = ["www.example.com"]
# environments/dev.tfvars
environment = "development"
domain_name = "dev.example.com"
subject_alternative_names = []
Same Terraform, different variables. Dev and prod are structurally identical.
CloudFront Function for Static Routing
Astro outputs files like /about/index.html. When someone visits /about, S3 doesn't know to serve the index.html inside that directory. Without a rewrite you get a 403 — the object /about doesn't exist, and because CloudFront lacks ListBucket permission, S3 masks the 404 as a 403.
A CloudFront Function handles this at the edge:
function handler(event) {
  var request = event.request;
  var uri = request.uri;

  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }
  return request;
}
This runs on every request with sub-millisecond latency. No Lambda@Edge needed.
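Because the function is plain JavaScript, the rewrite logic is easy to sanity-check locally before publishing it. A minimal sketch with a stubbed event object (the `rewrite` wrapper is mine, for testing only — CloudFront only ever calls `handler`):

```javascript
// Same rewrite logic as the CloudFront Function above.
function handler(event) {
  var request = event.request;
  var uri = request.uri;

  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }
  return request;
}

// Test helper: wrap a URI in the minimal event shape CloudFront passes in.
function rewrite(uri) {
  return handler({ request: { uri: uri } }).uri;
}

console.log(rewrite('/about'));     // → /about/index.html
console.log(rewrite('/about/'));    // → /about/index.html
console.log(rewrite('/style.css')); // → /style.css (has an extension, untouched)
```

The same three cases — trailing slash, extensionless path, real file — cover everything Astro emits.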
Caching Strategy
Two rules:
- HTML files: Cache-Control: no-cache, so the latest version is always fetched
- Everything else: Cache-Control: max-age=31536000, immutable, cached forever
Astro hashes asset filenames (e.g., about.1hXiVJE3.css), so when content changes, the filename changes, and CloudFront fetches the new version. HTML files always point to the latest hashed assets.
# Deploy assets with long cache (exclude HTML)
aws s3 sync dist/ s3://$BUCKET/ \
  --cache-control "public, max-age=31536000, immutable" \
  --exclude "*.html"

# Deploy HTML with no cache
aws s3 sync dist/ s3://$BUCKET/ \
  --exclude "*" \
  --include "*.html" \
  --cache-control "no-cache"

# Clean up orphaned files
aws s3 sync dist/ s3://$BUCKET/ --delete
Note the --exclude "*" --include "*.html" on the second command — without the explicit exclude, aws s3 sync includes all files by default and the --include flag does nothing. This is a common mistake that results in all files getting the wrong cache headers.
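The exclude/include split boils down to one decision per file. If you ever fold the deploy into a script that uploads objects individually, the rule can be expressed as a small helper (cacheControlFor is an illustrative name, not part of any CLI):

```javascript
// Pick the Cache-Control header for an object key, mirroring the
// two sync rules above: HTML is always revalidated, everything
// else (hashed assets) is cached forever.
function cacheControlFor(key) {
  return key.endsWith('.html')
    ? 'no-cache'
    : 'public, max-age=31536000, immutable';
}

console.log(cacheControlFor('about/index.html'));
// → no-cache
console.log(cacheControlFor('_astro/about.1hXiVJE3.css'));
// → public, max-age=31536000, immutable
```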
After deploying, invalidate CloudFront to clear any cached HTML:
aws cloudfront create-invalidation \
  --distribution-id $DISTRIBUTION_ID \
  --paths "/*"
CI/CD Pipeline
on:
  push:
    branches: [main]  # → deploy to prod
  pull_request:       # → deploy to dev

jobs:
  build:
    steps:
      - run: npm ci
      - run: npm run build
      - uses: actions/upload-artifact@v4

  deploy-dev:
    needs: build
    if: github.event_name == 'pull_request'
    steps:
      - uses: aws-actions/configure-aws-credentials@v4  # OIDC → management
      - uses: aws-actions/configure-aws-credentials@v4  # Chain → dev account
        with:
          role-chaining: true
      - run: aws s3 sync ...
      - run: aws cloudfront create-invalidation ...

  deploy-prod:
    needs: build
    if: github.ref == 'refs/heads/main'
    steps:
      # Same pattern, different account
Open a PR → site deploys to dev.example.com. Merge to main → deploys to example.com. No manual steps.
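The two `if:` conditions amount to a simple event-to-environment mapping. Sketched as a plain function (deployTargetFor is my own name, not part of the workflow syntax):

```javascript
// Map a GitHub Actions trigger to a deploy target, mirroring the
// workflow's `if:` conditions: PRs go to dev, pushes to main go to prod.
function deployTargetFor(eventName, ref) {
  if (eventName === 'pull_request') return 'dev';
  if (eventName === 'push' && ref === 'refs/heads/main') return 'prod';
  return null; // anything else: build only, no deploy
}

console.log(deployTargetFor('pull_request', 'refs/pull/7/merge')); // → dev
console.log(deployTargetFor('push', 'refs/heads/main'));           // → prod
```

In the real workflow the push trigger is already restricted to main by `branches: [main]`, so the `ref` check is belt-and-suspenders.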
Subdomain Delegation
The dev site uses dev.example.com. Since the prod account owns the example.com hosted zone, the dev account’s dev.example.com zone needs NS delegation.
In the prod account’s Route 53, add an NS record pointing dev.example.com to the dev account’s nameservers. This is a one-time setup.
Google Analytics with GDPR Consent
Analytics only loads after the user accepts cookies:
function loadGA() {
  if (!window.__GA_ID) return;
  var s = document.createElement('script');
  s.src = 'https://www.googletagmanager.com/gtag/js?id=' + window.__GA_ID;
  s.async = true;
  document.head.appendChild(s);
  // ... gtag config
}

// Only load if previously accepted
if (localStorage.getItem('cookie-consent') === 'accepted') {
  loadGA();
}
No consent → no tracking. Decline → no tracking. Accept → GA loads and consent is remembered. This isn’t a complete GDPR implementation (you’d want consent expiry and a way to withdraw consent), but it covers the baseline requirement of not tracking without permission.
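The banner side of this is symmetric: both buttons persist a choice, and only "accepted" triggers loadGA(). A sketch of the decision logic with an injectable storage object so it can be exercised outside a browser (handleConsent and shouldLoadAnalytics are illustrative names; in the real page, store is window.localStorage):

```javascript
// Persist the user's choice. Returns true when the caller should
// invoke loadGA() — i.e. only on acceptance.
function handleConsent(choice, store) {
  store.setItem('cookie-consent', choice);
  return choice === 'accepted';
}

// Mirrors the page-load check: load analytics only if previously accepted.
function shouldLoadAnalytics(store) {
  return store.getItem('cookie-consent') === 'accepted';
}

// A plain stub standing in for localStorage:
var store = {
  data: {},
  setItem: function (k, v) { this.data[k] = v; },
  getItem: function (k) { return k in this.data ? this.data[k] : null; }
};

console.log(shouldLoadAnalytics(store)); // → false (no choice made yet)
handleConsent('declined', store);
console.log(shouldLoadAnalytics(store)); // → false (declined stays untracked)
handleConsent('accepted', store);
console.log(shouldLoadAnalytics(store)); // → true
```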
Adding a Blog Post
Drop a markdown file in src/content/blog/:
---
title: "My New Post"
date: "2026-05-01"
description: "What this post is about"
tags: ["aws", "terraform"]
---
Post content here. Code blocks, links, images — all standard markdown.
Push. CI builds. Site deploys. Post is live. No CMS, no database, no build plugins.
Cost
- S3: ~$0.02/month (a few MB of static files)
- CloudFront: Free tier covers 1TB/month of transfer
- Route 53: $0.50/month per hosted zone ($1 for two environments)
- ACM: Free
- CloudFront invalidations: First 1,000/month free
Total: under $2/month for a fully deployed, SSL-secured, globally cached site with CI/CD and dual environments.
You could do this on Vercel or Netlify for free. But then you don’t own the infrastructure, you don’t learn the AWS patterns, and you can’t point to it in an interview and say “I built all of this.”