stop treating iac like serverless terraform—the nested-stack architecture your devops pipeline is missing

why “serverless-ifying terraform” ends in tears

if you’ve ever copy-pasted a single main.tf file, turned it into a repo, and let a ci/cd runner “do its thing”, you’ve likely felt the pain that follows: the plan/apply loop becomes painfully slow, drift-detection feels like archaeology, and promotions across dev → staging → prod are dominated by huge, opaque diffs. this serverless-style monolith treats iac as one giant chunk of code instead of an engineerable, layered stack.

meet the nested-stack mindset

the remedy is simple: break the monolith into nested, composable stacks—each as small as one resource or as large as an entire application layer. instead of shipping all the infra at once, you promote only the granular stack that changed.

what a nested stack looks like

  • a network stack (vpc, subnets, routing tables)
  • a data stack (rds clusters, dynamo tables, caches)
  • an app stack (task definitions, lambdas, api gw)
  • an observability stack (alarms, dashboards, log groups)

each of these is its own terraform module + workspace combo, versioned inside its own folder and referred to by higher-level parent stacks that simply call them as modules.
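as a minimal sketch of that parent-calls-children pattern (paths and variable names here are illustrative, not from the sample repo), a dev environment stack might just wire the child modules together and pass outputs downward:

```hcl
# envs/dev/main.tf — hypothetical parent stack for the dev environment
module "network" {
  source      = "../../stacks/network"
  environment = "dev"
}

module "data" {
  source     = "../../stacks/data"
  # child outputs flow explicitly into dependent stacks
  vpc_id     = module.network.vpc_id
  subnet_ids = module.network.subnet_ids
}
```

the explicit output-to-input wiring is the point: terraform can now order the stacks correctly, and a reviewer can see every cross-stack dependency in one file.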

building your first devops pipeline with nested stacks

here is a practical 4-step recipe that a beginner can implement today. the examples use github actions with a remote state backend, but you can swap in terraform cloud or other ci providers with minimal changes.

step 1 – folder layout that scales

.
└── infra/
    ├── bootstrap/
    │   └── main.tf            # single stack to create the remote state backend
    ├── stacks/
    │   ├── network/
    │   │   └── main.tf        # outputs "vpc_id", "subnet_ids"
    │   ├── data/
    │   │   └── main.tf        # reads network outputs via remote state
    │   └── web/
    │       └── main.tf        # reads network + data outputs via remote state
    └── pipelines/
        └── ci.yml

the key detail: state is scoped per stack, so updating the web layer never touches the data layer’s state.
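a minimal sketch of what that per-stack scoping can look like with an s3 backend (the bucket name and key are placeholders — use whatever your bootstrap stack created):

```hcl
# stacks/web/backend.tf — each stack gets its own state key
terraform {
  backend "s3" {
    bucket = "my-terraform-state"            # created by the bootstrap stack
    key    = "stacks/web/terraform.tfstate"  # unique per stack
    region = "us-east-1"
  }
}
```

because the key differs per stack, a `terraform apply` in `stacks/web/` locks and mutates only the web state file.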

step 2 – dry variables with shared maps

in each stack you can re-use shared settings via terragrunt or simple terraform maps:

# stacks/network/main.tf
terraform {
  required_version = ">=1.6.0"
}

variable "cidr_map" {
  type        = map(string)
  description = "vpc cidr block per environment"
  default = {
    dev  = "10.0.0.0/16"
    prod = "10.1.0.0/16"
  }
}

data "aws_availability_zones" "available" {}

# ... use variables and azs to build vpc
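to actually consume the network stack's outputs from the data stack, one common pattern is a terraform_remote_state data source — a sketch below, assuming the same placeholder bucket and keys as your bootstrapped backend:

```hcl
# stacks/data/main.tf — read the network stack's published outputs
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "stacks/network/terraform.tfstate"
    region = "us-east-1"
  }
}

# hypothetical consumer: place the db subnet group in the network stack's subnets
resource "aws_db_subnet_group" "this" {
  name       = "data-subnets"
  subnet_ids = data.terraform_remote_state.network.outputs.subnet_ids
}
```

this keeps the dependency explicit while leaving each stack's state fully isolated; swapping the data source config is all it takes to point at another environment.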

step 3 – ci targets a single stack

inside pipelines/ci.yml we trigger only on the files that actually changed:

# ci.yml
name: ci
on:
  push:
    paths:
      - 'infra/stacks/web/**'
jobs:
  plan-apply:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: infra/stacks/web
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -out=tfplan
      - run: terraform apply tfplan

net effect: 2-minute plans instead of 20-minute monsters.

winning the seo & discoverability game

because each nested stack lives in its own repo or subdirectory, your documentation becomes search-engine friendly. instead of a 1200-line readme, you now have:

  • a sphinx/mkdocs site auto-generated from per-directory doc folders
  • schema-registry-style markdown for every input and output variable (seo keywords distributed naturally!)
  • small, focused module pages whose diffs stay readable and which can rank for specific queries like “terraform example vpc subnet”, instead of one giant readme competing for every keyword at once. this boosts your overall devops and full-stack coding authority.

recap & next moves

stop treating your entire infrastructure like one serverless function you hope never gets cold. instead, treat it as the multi-layer application stack it really is. break it into nested stacks, wire them with explicit dependencies, and let your ci pipeline promote only what actually changed. your testing cycles shorten, code reviews become readable, and google starts pointing engineers straight to your well-structured modules.

clone the sample repo, give the layout a spin, and join us next week for “from monolith to micro-stacks: refactor stories”.
