TL;DR #
- Terraform modules for VPC, ECS Fargate, RDS PostgreSQL, and S3
- GitHub Actions pipeline: `plan` on PRs, `apply` on merge to main
- Remote state in S3 with DynamoDB locking
- Sensible defaults for a small team, easy to extend
- Total infra cost for a small app: ~$50-80/month
Who this stack is for #
You’re running on AWS and managing infrastructure by clicking around the console. You want to move to infrastructure as code but don’t want to spend a week designing a module structure. You want something that works today and can grow with you.
Repository structure #
terraform/
  environments/
    production/
      main.tf            # Root module — wires everything together
      variables.tf       # Environment-specific variables
      terraform.tfvars   # Variable values (DO NOT commit secrets)
      backend.tf         # Remote state configuration
      outputs.tf         # Useful outputs
  modules/
    vpc/
      main.tf
      variables.tf
      outputs.tf
    ecs/
      main.tf
      variables.tf
      outputs.tf
    rds/
      main.tf
      variables.tf
      outputs.tf

Two levels: environments for per-env config, modules for reusable components. Start with one environment. Add `staging/` when you actually need it, not before.
Remote state setup #
Run this once manually to bootstrap the state backend:
# Create S3 bucket for state
aws s3api create-bucket \
  --bucket your-company-tf-state \
  --region us-east-1

# Enable versioning (so you can recover from bad applies)
aws s3api put-bucket-versioning \
  --bucket your-company-tf-state \
  --versioning-configuration Status=Enabled

# Create DynamoDB table for state locking
aws dynamodb create-table \
  --table-name your-company-tf-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1

Backend configuration #
environments/production/backend.tf:
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "your-company-tf-state"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "your-company-tf-locks"
    encrypt        = true
  }
}

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Environment = var.environment
      ManagedBy   = "terraform"
      Project     = var.project_name
    }
  }
}

VPC module #
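The module carves subnets out of the VPC CIDR with `cidrsubnet`. With the default `10.0.0.0/16` and two availability zones, the arithmetic works out like this (a worked example of the expressions in the module below, not extra configuration):

```hcl
# cidrsubnet(prefix, newbits, netnum): /16 plus 4 new bits = /20 subnets
# cidrsubnet("10.0.0.0/16", 4, 0) = "10.0.0.0/20"   # public,  us-east-1a
# cidrsubnet("10.0.0.0/16", 4, 1) = "10.0.16.0/20"  # public,  us-east-1b
# cidrsubnet("10.0.0.0/16", 4, 2) = "10.0.32.0/20"  # private, us-east-1a
# cidrsubnet("10.0.0.0/16", 4, 3) = "10.0.48.0/20"  # private, us-east-1b
```

Also note the single NAT gateway shared by all private subnets: cheaper than one per AZ, but a deliberate availability trade-off.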
modules/vpc/main.tf:
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = { Name = "${var.project_name}-vpc" }
}

resource "aws_subnet" "public" {
  count                   = length(var.availability_zones)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 4, count.index)
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = { Name = "${var.project_name}-public-${count.index}" }
}

resource "aws_subnet" "private" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 4, count.index + length(var.availability_zones))
  availability_zone = var.availability_zones[count.index]

  tags = { Name = "${var.project_name}-private-${count.index}" }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
  tags   = { Name = "${var.project_name}-igw" }
}

resource "aws_eip" "nat" {
  domain = "vpc"
  tags   = { Name = "${var.project_name}-nat-eip" }
}

resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id
  tags          = { Name = "${var.project_name}-nat" }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  tags   = { Name = "${var.project_name}-public-rt" }
}

resource "aws_route" "public_internet" {
  route_table_id         = aws_route_table.public.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.main.id
}

resource "aws_route_table_association" "public" {
  count          = length(aws_subnet.public)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  tags   = { Name = "${var.project_name}-private-rt" }
}

resource "aws_route" "private_nat" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main.id
}

resource "aws_route_table_association" "private" {
  count          = length(aws_subnet.private)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}

modules/vpc/variables.tf:
variable "project_name" {
  type = string
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "availability_zones" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b"]
}

modules/vpc/outputs.tf:
output "vpc_id" {
  value = aws_vpc.main.id
}

output "public_subnet_ids" {
  value = aws_subnet.public[*].id
}

output "private_subnet_ids" {
  value = aws_subnet.private[*].id
}

ECS Fargate module #
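The ECS module below owns the long-lived pieces: cluster, ALB, logging, and the execution role. It deliberately stops short of the task definition and service, which change with every deploy. For orientation, here is a hedged sketch of those two resources in case you want them in Terraform too — `var.image` and the CPU/memory values are illustrative, not variables the module as written defines:

```hcl
resource "aws_ecs_task_definition" "app" {
  family                   = "${var.project_name}-app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256 # illustrative sizing
  memory                   = 512
  execution_role_arn       = aws_iam_role.ecs_execution.arn

  container_definitions = jsonencode([{
    name         = "app"
    image        = var.image # hypothetical variable: your ECR image URI
    essential    = true
    portMappings = [{ containerPort = var.container_port }]
    logConfiguration = {
      logDriver = "awslogs"
      options = {
        "awslogs-group"         = aws_cloudwatch_log_group.ecs.name
        "awslogs-region"        = "us-east-1"
        "awslogs-stream-prefix" = "app"
      }
    }
  }])
}

resource "aws_ecs_service" "app" {
  name            = "${var.project_name}-app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [aws_security_group.ecs_tasks.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "app"
    container_port   = var.container_port
  }
}
```

Many teams instead register new task definitions from the app's CI pipeline and let Terraform own only the infrastructure that rarely changes; either split works.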
modules/ecs/main.tf:
resource "aws_ecs_cluster" "main" {
  name = "${var.project_name}-cluster"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

resource "aws_cloudwatch_log_group" "ecs" {
  name              = "/ecs/${var.project_name}"
  retention_in_days = 30
}

resource "aws_security_group" "ecs_tasks" {
  name_prefix = "${var.project_name}-ecs-"
  vpc_id      = var.vpc_id

  ingress {
    from_port       = var.container_port
    to_port         = var.container_port
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "alb" {
  name_prefix = "${var.project_name}-alb-"
  vpc_id      = var.vpc_id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_lb" "main" {
  name               = "${var.project_name}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "app" {
  name        = "${var.project_name}-tg"
  port        = var.container_port
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "ip"

  health_check {
    path                = "/health"
    healthy_threshold   = 2
    unhealthy_threshold = 3
    interval            = 30
    timeout             = 5
  }
}

# Redirects all HTTP traffic to HTTPS. Note: you still need an HTTPS (443)
# listener backed by an ACM certificate before the ALB can serve requests;
# that listener is omitted here because certificate setup varies.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_iam_role" "ecs_execution" {
  name = "${var.project_name}-ecs-execution"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_execution" {
  role       = aws_iam_role.ecs_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

modules/ecs/variables.tf:
variable "project_name" {
  type = string
}

variable "vpc_id" {
  type = string
}

variable "public_subnet_ids" {
  type = list(string)
}

variable "private_subnet_ids" {
  type = list(string)
}

variable "container_port" {
  type    = number
  default = 3000
}

modules/ecs/outputs.tf:
output "cluster_name" {
  value = aws_ecs_cluster.main.name
}

output "alb_dns_name" {
  value = aws_lb.main.dns_name
}

output "execution_role_arn" {
  value = aws_iam_role.ecs_execution.arn
}

RDS PostgreSQL module #
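The database password flows into the module below as a plain Terraform variable, which means it also passes through CI. If you'd rather fetch it at plan time instead, one alternative is SSM Parameter Store — a sketch, not part of this module, and the parameter name `/your-app/db-password` is made up:

```hcl
# Read a SecureString parameter you created out-of-band (hypothetical name)
data "aws_ssm_parameter" "db_password" {
  name            = "/your-app/db-password"
  with_decryption = true
}

# Then, inside the aws_db_instance resource, replace the variable with:
#   password = data.aws_ssm_parameter.db_password.value
```

Either way the password ends up in Terraform state, so treat the state bucket itself as sensitive.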
modules/rds/main.tf:
resource "aws_security_group" "rds" {
  name_prefix = "${var.project_name}-rds-"
  vpc_id      = var.vpc_id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = var.app_security_group_ids
  }
}

resource "aws_db_subnet_group" "main" {
  name       = "${var.project_name}-db-subnet"
  subnet_ids = var.private_subnet_ids
}

resource "aws_db_instance" "main" {
  identifier     = "${var.project_name}-db"
  engine         = "postgres"
  engine_version = "16.4"
  instance_class = var.instance_class

  allocated_storage     = 20
  max_allocated_storage = 100
  storage_type          = "gp3"
  storage_encrypted     = true

  db_name  = var.database_name
  username = var.database_username
  password = var.database_password

  multi_az = false # Set true for production when budget allows

  db_subnet_group_name   = aws_db_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.rds.id]

  backup_retention_period = 7
  backup_window           = "03:00-04:00"
  maintenance_window      = "Mon:04:00-Mon:05:00"

  skip_final_snapshot       = false
  final_snapshot_identifier = "${var.project_name}-db-final"
  deletion_protection       = true

  performance_insights_enabled = true
}

modules/rds/variables.tf:
variable "project_name" {
  type = string
}

variable "vpc_id" {
  type = string
}

variable "private_subnet_ids" {
  type = list(string)
}

variable "app_security_group_ids" {
  type = list(string)
}

variable "instance_class" {
  type    = string
  default = "db.t4g.micro" # ~$12/month, good enough to start
}

variable "database_name" {
  type    = string
  default = "app"
}

variable "database_username" {
  type    = string
  default = "app"
}

variable "database_password" {
  type      = string
  sensitive = true
}

modules/rds/outputs.tf:
output "endpoint" {
  value = aws_db_instance.main.endpoint
}

output "database_name" {
  value = aws_db_instance.main.db_name
}

Root module #
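The root module below passes an empty `app_security_group_ids` on the first apply and asks you to circle back. You can skip that two-pass dance by exporting the task security group from the ECS module and wiring it straight through — a sketch, where `ecs_tasks_security_group_id` is an output you would add, not one the ECS module above defines:

```hcl
# Addition to modules/ecs/outputs.tf (not shown above):
output "ecs_tasks_security_group_id" {
  value = aws_security_group.ecs_tasks.id
}

# Then, in the rds module call in environments/production/main.tf:
#   app_security_group_ids = [module.ecs.ecs_tasks_security_group_id]
```

Terraform resolves the inter-module dependency itself, so a single apply builds everything in the right order.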
environments/production/main.tf:
module "vpc" {
  source       = "../../modules/vpc"
  project_name = var.project_name
}

module "ecs" {
  source             = "../../modules/ecs"
  project_name       = var.project_name
  vpc_id             = module.vpc.vpc_id
  public_subnet_ids  = module.vpc.public_subnet_ids
  private_subnet_ids = module.vpc.private_subnet_ids
}

module "rds" {
  source                 = "../../modules/rds"
  project_name           = var.project_name
  vpc_id                 = module.vpc.vpc_id
  private_subnet_ids     = module.vpc.private_subnet_ids
  app_security_group_ids = [] # Add ECS task security group ID after first apply
  database_password      = var.database_password
}

environments/production/variables.tf:
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "environment" {
  type    = string
  default = "production"
}

variable "project_name" {
  type    = string
  default = "your-app-name"
}

variable "database_password" {
  type      = string
  sensitive = true
}

environments/production/terraform.tfvars:
project_name = "your-app-name"
aws_region   = "us-east-1"
environment  = "production"
# database_password passed via TF_VAR_database_password or -var flag

GitHub Actions pipeline #
Save as .github/workflows/terraform.yml:
name: Terraform
on:
  pull_request:
    paths:
      - 'terraform/**'
  push:
    branches: [main]
    paths:
      - 'terraform/**'
env:
  TF_WORKING_DIR: terraform/environments/production
  AWS_REGION: us-east-1
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: '1.7'
          terraform_wrapper: false # keep raw output for `terraform show` below
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Terraform Init
        working-directory: ${{ env.TF_WORKING_DIR }}
        run: terraform init
      - name: Terraform Format Check
        working-directory: ${{ env.TF_WORKING_DIR }}
        run: terraform fmt -check -recursive
      - name: Terraform Validate
        working-directory: ${{ env.TF_WORKING_DIR }}
        run: terraform validate
      - name: Terraform Plan
        working-directory: ${{ env.TF_WORKING_DIR }}
        run: terraform plan -no-color -out=tfplan
        env:
          TF_VAR_database_password: ${{ secrets.DB_PASSWORD }}
      - name: Post plan to PR
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const { execSync } = require('child_process');
            const plan = execSync(
              `cd ${{ env.TF_WORKING_DIR }} && terraform show -no-color tfplan`
            ).toString().slice(0, 60000);
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `### Terraform Plan\n\`\`\`\n${plan}\n\`\`\``
            });
  apply:
    needs: plan
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    environment: production # Requires manual approval in GitHub
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: '1.7'
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Terraform Init
        working-directory: ${{ env.TF_WORKING_DIR }}
        run: terraform init
      - name: Terraform Apply
        working-directory: ${{ env.TF_WORKING_DIR }}
        run: terraform apply -auto-approve
        env:
          TF_VAR_database_password: ${{ secrets.DB_PASSWORD }}

Set up the `production` environment in GitHub repo settings with required reviewers for the manual approval gate on applies.
Getting started checklist #
- Bootstrap S3 state bucket and DynamoDB lock table
- Copy the module structure into your repo
- Update `terraform.tfvars` with your project name
- Set `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `DB_PASSWORD` as GitHub secrets
- Create the `production` environment in GitHub with required reviewers
- Open a PR with the Terraform code to see the plan
- Merge to apply
When to outgrow this stack #
- Multiple environments: Copy the `production/` directory to `staging/`, change the backend key and tfvars
- Multiple services: Add more modules, or split into separate state files per service
- Team grows past 10: Consider Terragrunt for DRY multi-environment configs, or move to Terraform Cloud for collaboration features
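When you do copy `production/` to `staging/`, the backend block is the only structural change — same state bucket and lock table, different key, so the two environments never touch each other's state:

```hcl
# environments/staging/backend.tf — only the key differs from production
terraform {
  backend "s3" {
    bucket         = "your-company-tf-state"
    key            = "staging/terraform.tfstate" # was production/terraform.tfstate
    region         = "us-east-1"
    dynamodb_table = "your-company-tf-locks"
    encrypt        = true
  }
}
```

Everything else — modules, variables, the workflow pattern — carries over, with smaller instance sizes in the staging tfvars.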
Related reads #
- Terraform for Small Teams — principles behind this setup
- Secrets Management for Small Teams — how to handle that database password properly
- CI/CD for Small Teams — the app deployment pipeline that pairs with this infra