This guide details how to deploy a prescriptive, multi-account AWS Landing Zone. The architecture is built on a Hub-Spoke model:
Hub: A central platform/network stack deploys a shared VPC (vpc-network-hub), a Transit Gateway (TGW), and a Network Firewall for centralized traffic inspection.
Spokes: Environment-specific stacks (e.g., envs/prod) deploy separate VPCs (vpc-prod) for workloads like EKS. These spokes connect to the central hub via the Transit Gateway, as confirmed by the TGW attachments.
0) Prereqs (local)
Terraform ≥ 1.9
AWS CLI with admin creds to your management account
A dedicated S3 bucket name and DynamoDB table name for state (choose unique names)
A GitHub repo (if using the provided CI with OIDC)
Step-by-Step: Create IAM User for Terraform
Step 1 — Sign in as root or admin
Go to AWS IAM Console.
Step 2 — Create user
Choose Users → Add users
Name: terraform-admin
Select Access key - Programmatic access
Step 3 — Attach permissions
You have two options:
Option 1 — Attach Managed Policy (recommended for setup)
Attach this AWS-managed policy:
AdministratorAccess
This gives full permissions. Good for bootstrap / backend creation.
You can later restrict this to least privilege.
Option 2 — Custom Policy (for tighter control)
For long-term usage, create a custom IAM policy like:
Policy name: TerraformLandingZonePolicy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "s3:*",
        "dynamodb:*",
        "iam:*",
        "kms:*",
        "sts:GetCallerIdentity",
        "cloudwatch:*",
        "logs:*",
        "organizations:*"
      ],
      "Resource": "*"
    }
  ]
}
This covers common Terraform actions for your Landing Zone setup.
Then attach this policy to the user.
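If you later want this user managed by Terraform itself (after bootstrapping with temporary admin credentials), a minimal sketch — the file name and resource labels here are illustrative, and the JSON above is assumed to be saved alongside the config:

```hcl
# Assumes the policy JSON above is saved as terraform-landing-zone-policy.json
resource "aws_iam_user" "terraform_admin" {
  name = "terraform-admin"
}

resource "aws_iam_policy" "landing_zone" {
  name   = "TerraformLandingZonePolicy"
  policy = file("${path.module}/terraform-landing-zone-policy.json")
}

resource "aws_iam_user_policy_attachment" "landing_zone" {
  user       = aws_iam_user.terraform_admin.name
  policy_arn = aws_iam_policy.landing_zone.arn
}
```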
Step 4 — Save access credentials
Once the IAM user is created, download or copy:
Access key ID
Secret access key
Use these to configure AWS CLI on your Terraform workstation:
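For example, storing them under a dedicated profile (profile name is illustrative):

```
$ aws configure --profile terraform-admin
AWS Access Key ID [None]: <access key ID>
AWS Secret Access Key [None]: <secret access key>
Default region name [None]: eu-west-1
Default output format [None]: json
```

You can then run `aws sts get-caller-identity --profile terraform-admin` to confirm the CLI is using the new user.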
1) Clone & layout
git clone https://github.com/dhanuka84/aws-landing-zone-terraform
cd aws-landing-zone-terraform/infra
tree -L 3
You should see: global/{backend,identity}, platform/{control-tower,org,network}, modules/*, envs/{nonprod,prod}.
Project Structure
.
├── aws-lz.drawio
├── AWS-LZ-Icons.png
├── aws-lz.jpg
├── aws_mind_map_arn_darkblue.html
├── aws_service_map.drawio
├── aws_service_map.jpg
├── infra
│   ├── envs
│   │   ├── nonprod
│   │   │   ├── main.tf
│   │   │   ├── nonprod.tfvars
│   │   │   └── providers.tf
│   │   └── prod
│   │       ├── main.tf
│   │       ├── prod.tfvars
│   │       ├── providers.tf
│   │       ├── secure-transport-policy.json
│   │       └── variables.tf
│   ├── global
│   │   ├── backend
│   │   │   ├── main.tf
│   │   │   ├── outputs.tf
│   │   │   ├── plan.tfplan
│   │   │   ├── terraform.tfstate
│   │   │   ├── terraform.tfstate.backup
│   │   │   ├── terraform.tfvars
│   │   │   └── variables.tf
│   │   └── identity
│   │       ├── backend.tf
│   │       ├── main.tf
│   │       └── plan.tfplan
│   ├── modules
│   │   ├── alb-waf
│   │   │   ├── main.tf
│   │   │   ├── outputs.tf
│   │   │   └── variables.tf
│   │   ├── apigw-waf
│   │   │   ├── main.tf
│   │   │   ├── outputs.tf
│   │   │   └── variables.tf
│   │   ├── security-baseline
│   │   │   └── main.tf
│   │   ├── _vendored
│   │   │   └── aft
│   │   │       ├── CODE_OF_CONDUCT.md
│   │   │       ├── CODEOWNERS
│   │   │       ├── CONTRIBUTING.md
│   │   │       ├── data.tf
│   │   │       ├── examples
│   │   │       ├── LICENSE
│   │   │       ├── locals.tf
│   │   │       ├── main.tf
│   │   │       ├── modules
│   │   │       ├── NOTICE
│   │   │       ├── outputs.tf
│   │   │       ├── providers.tf
│   │   │       ├── PYTHON_VERSION
│   │   │       ├── README.md
│   │   │       ├── SECURITY.md
│   │   │       ├── sources
│   │   │       ├── src
│   │   │       ├── variables.tf
│   │   │       ├── VERSION
│   │   │       └── versions.tf
│   │   └── vpc-spoke
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       └── variables.tf
│   ├── platform
│   │   ├── control-tower
│   │   │   ├── assume-role.json
│   │   │   ├── backend.tf
│   │   │   ├── main.tf
│   │   │   ├── organizations-read-access.json
│   │   │   ├── plan.tfplan
│   │   │   ├── trust-aft.json
│   │   │   └── variables.tf
│   │   ├── network
│   │   │   ├── backend.tf
│   │   │   ├── main.tf
│   │   │   ├── terraform.tfstate
│   │   │   └── terraform.tfstate.backup
│   │   └── org
│   │       ├── backend.tf
│   │       ├── data.tf
│   │       ├── main.tf
│   │       ├── plan.tfplan
│   │       ├── providers.tf
│   │       ├── terraform.tfstate
│   │       └── terraform.tfstate.backup
│   └── README.md
├── README.md
2) Bootstrap remote state (global/backend)
Edit infra/global/backend/terraform.tfvars:
region = "eu-west-1"
state_bucket = "my-company-tfstate-prod"
lock_table = "tf-state-locks"
Apply:
cd infra/global/backend
terraform init
terraform apply -auto-approve
This creates:
S3 bucket for state (versioned, encrypted)
DynamoDB table for state locks
If you plan to migrate to the new use_lockfile later, keep DynamoDB now; you can switch once everything is stable.
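When you do migrate, S3-native locking is enabled per backend block via `use_lockfile` (Terraform ≥ 1.10). A sketch, using the example values from this step — keep the DynamoDB table configured until every state file has been migrated:

```hcl
terraform {
  backend "s3" {
    bucket       = "my-company-tfstate-prod"
    key          = "global/backend/terraform.tfstate"
    region       = "eu-west-1"
    use_lockfile = true # S3-native lock files; replaces the dynamodb_table setting
  }
}
```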
3) CI OIDC role (global/identity)
Creates a GitHub OIDC provider + a deploy role.
Edit infra/global/identity/main.tf → set your repo in the trust condition:
"token.actions.githubusercontent.com:sub" : "repo:YOURORG/YOURREPO:ref:refs/heads/main"
Apply:
cd ../identity
terraform init
terraform apply -auto-approve
Record outputs/ARN for github-actions-oidc-deploy.
In GitHub repo → Settings → Secrets:
AWS_OIDC_ROLE_ARN = that role ARN
TF_STATE_BUCKET = your state bucket
TF_LOCK_TABLE = your DynamoDB table
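For reference, the trust relationship that global/identity builds looks roughly like this (resource labels are illustrative; the thumbprint is GitHub's published root CA thumbprint):

```hcl
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

data "aws_iam_policy_document" "github_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }
    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }
    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:YOURORG/YOURREPO:ref:refs/heads/main"]
    }
  }
}

resource "aws_iam_role" "github_actions_oidc_deploy" {
  name               = "github-actions-oidc-deploy"
  assume_role_policy = data.aws_iam_policy_document.github_trust.json
}
```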
4) Organizations (platform/org)
Control Tower needs an AWS Organization. If you already enabled Control Tower, org exists. These steps add OUs and an example SCP.
cd ../../platform/org
terraform init
terraform plan
terraform apply -auto-approve
This creates OUs: Platform, NonProduction, Production and attaches a sample guardrail SCP (deny EC2 public IPs) to NonProd/Prod.
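A sketch of what platform/org provisions for one OU and the guardrail — resource labels are illustrative, and it assumes the organization is available as aws_organizations_organization.this:

```hcl
# One of the three OUs created under the org root
resource "aws_organizations_organizational_unit" "production" {
  name      = "Production"
  parent_id = aws_organizations_organization.this.roots[0].id
}

# Sample guardrail SCP: deny launching EC2 instances with a public IP
resource "aws_organizations_policy" "deny_ec2_public_ip" {
  name = "deny-ec2-public-ip"
  type = "SERVICE_CONTROL_POLICY"
  content = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = "ec2:RunInstances"
      Resource = "arn:aws:ec2:*:*:network-interface/*"
      Condition = {
        Bool = { "ec2:AssociatePublicIpAddress" = "true" }
      }
    }]
  })
}

resource "aws_organizations_policy_attachment" "prod" {
  policy_id = aws_organizations_policy.deny_ec2_public_ip.id
  target_id = aws_organizations_organizational_unit.production.id
}
```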
5) Network Hub (platform/network)
Creates the Hub VPC, the TGW, the Network Firewall, and a sample PHZ; you'll also add the Hub↔TGW attachment plus the outputs consumed by the spoke envs.
Check infra/platform/network/main.tf includes:
resource "aws_ec2_transit_gateway_vpc_attachment" "hub" {
  transit_gateway_id = aws_ec2_transit_gateway.tgw.id
  vpc_id             = module.hub.vpc_id
  subnet_ids         = slice(module.hub.private_subnets, 0, 2)
}

output "transit_gateway_id"          { value = aws_ec2_transit_gateway.tgw.id }
output "hub_private_route_table_ids" { value = module.hub.private_route_table_ids }
output "hub_cidr"                    { value = var.hub_cidr }
Apply:
cd ../network
terraform init
terraform apply -auto-approve
6) Control Tower + AFT (platform/control-tower)
AFT needs code repos. Do not use new CodeCommit if your org/account never used it (AWS blocks first-time creation). Use GitHub/GitLab via CodeConnections.
In infra/platform/control-tower/main.tf:
module "aft" {
  source  = "aws-ia/control_tower_account_factory/aws"
  version = "~> 1.0"

  # Use an external VCS (recommended)
  vcs_provider = "github" # or "gitlab", "bitbucket", "githubenterprise"

  account_request_repo_name                     = "YOURORG/aft-account-request"
  account_customizations_repo_name              = "YOURORG/aft-account-customizations"
  account_provisioning_customizations_repo_name = "YOURORG/aft-account-provisioning-customizations"
  global_customizations_repo_name               = "YOURORG/aft-global-customizations"
}
Create those repos in your VCS first (empty is fine). Then:
cd ../control-tower
terraform init
terraform apply -auto-approve
Go to Developer Tools → CodeConnections in the AFT mgmt account and approve the pending GitHub connection.
If you insist on CodeCommit and you’re an existing CodeCommit customer in that org/account, you can revert to the CodeCommit example. Otherwise you’ll get OperationNotAllowedException on create.
7) Spoke Environments (envs/prod, envs/nonprod)
Each env builds a VPC, TGW attachment, endpoints, and optionally ALB+WAF, API Gateway+WAF, EKS.
7.1 Ensure the spoke module exports needed outputs
infra/modules/vpc-spoke/outputs.tf should be:
output "vpc_id" { value = module.vpc.vpc_id }
output "private_subnet_ids" { value = module.vpc.private_subnets }
output "public_subnet_ids" { value = module.vpc.public_subnets }
output "private_route_table_ids" { value = module.vpc.private_route_table_ids }
output "tgw_attachment_id" { value = aws_ec2_transit_gateway_vpc_attachment.this.id }
Some vpc module versions name this output private_route_tables_ids instead of private_route_table_ids. A reference to a module output that doesn't exist fails at plan time (so a ternary fallback won't help) — use whichever name your module version actually exports.
7.2 Wire TGW routes in the env
In infra/envs/prod/main.tf after module "spoke":
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = var.platform_state_bucket
    key    = "platform/network/terraform.tfstate"
    region = var.region
  }
}

resource "aws_route" "spoke_to_hub" {
  count                  = length(module.spoke.private_route_table_ids)
  route_table_id         = module.spoke.private_route_table_ids[count.index]
  destination_cidr_block = data.terraform_remote_state.network.outputs.hub_cidr
  transit_gateway_id     = data.terraform_remote_state.network.outputs.transit_gateway_id
}

resource "aws_route" "hub_to_spoke" {
  count                  = length(data.terraform_remote_state.network.outputs.hub_private_route_table_ids)
  route_table_id         = data.terraform_remote_state.network.outputs.hub_private_route_table_ids[count.index]
  destination_cidr_block = var.vpc_cidr
  transit_gateway_id     = data.terraform_remote_state.network.outputs.transit_gateway_id
}
Run it:
cd ../../envs/prod
terraform init \
-backend-config="bucket=your-unique-terraform-state-bucket" \
-backend-config="key=envs-prod/terraform.tfstate" \
-backend-config="region=eu-west-1" \
-backend-config="dynamodb_table=your-terraform-lock-table"
terraform validate
terraform plan
terraform apply
Repeat the same steps in envs/nonprod (adjusting the backend state key accordingly).
EKS Dependencies
https://dhanuka84.blogspot.com/p/aws-lz-mind-map.html
8) Verifications (quick checks)
TGW wiring: In VPC route tables (spoke private subnets), routes to Hub CIDR via TGW; in Hub private route tables, routes to Spoke CIDR via TGW.
Endpoints: Secrets Manager interface endpoints up in spokes; PHZ associated so private DNS resolves.
ALB / API: ALB created in public subnets; API Gateway/WAF stack in account.
Security baseline: GuardDuty admin set, SecurityHub admin set, Org CloudTrail exists.
9) Common errors & fixes (seen in practice)
CodeCommit CreateRepository 400: You’re not an existing CodeCommit customer in that org/account. Use external VCS via CodeConnections (recommended).
private_route_table_ids not found: Your vpc-spoke module wasn’t exporting it; add outputs in outputs.tf. If name differs in your vpc module version, adjust as shown above.
tgw_attachment_id referenced in env: Remove module-style outputs accidentally pasted into env files; keep outputs only inside modules.
Lock table deprecation warning: Keep DDB now; plan the use_lockfile migration later.
10) Diagram (vertical, AWS icons)
PNG preview: aws-lz.jpg in the repo root.
Editable source: aws-lz.drawio — open it in diagrams.net via File → Import.
11) Day-2 ops ideas
Add AWS Network Firewall rule groups and route inspection subnets.
Add centralized egress (NAT/egress VPC) via TGW and propagate routes.
Enable AWS Config aggregator organization-wide.
Onboard Account Vending in AFT using your customizations repos.
Add Cross-Account IAM Roles (break-glass, read-only, devops).
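As a starting point for the last item, a minimal sketch of a cross-account read-only role — the role name, variable, and MFA condition are illustrative choices, not part of this repo:

```hcl
variable "management_account_id" { type = string }

# Read-only role assumable from the management account, MFA required
resource "aws_iam_role" "cross_account_readonly" {
  name = "cross-account-readonly"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::${var.management_account_id}:root" }
      Condition = { Bool = { "aws:MultiFactorAuthPresent" = "true" } }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "readonly" {
  role       = aws_iam_role.cross_account_readonly.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```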
12) Clean-up (if needed)
Destroy in reverse order:
cd infra/envs/prod && terraform destroy -auto-approve
cd ../nonprod && terraform destroy -auto-approve
cd ../../platform/network && terraform destroy -auto-approve
cd ../org && terraform destroy -auto-approve
cd ../control-tower && terraform destroy -auto-approve # AFT/CT: follow docs carefully
cd ../../global/identity && terraform destroy -auto-approve
cd ../backend && terraform destroy -auto-approve
find . -type d -name ".terraform" -exec rm -rf {} +
