10/27/2025

Hands-On Guide: Building a Production-Ready Azure Landing Zone (Zero Trust Enhanced)



In today’s cloud-native world, a well-architected landing zone is the bedrock of a secure, scalable, and well-governed Azure environment.

This guide is Episode 2 in the blog post series (see Episode 1, “Building Your Production-Ready Azure Landing Zone”) and uses a real, working GitHub repository, azure_landing_zone_project (branch: zero-trust-imp), to deploy a production-ready landing zone built with Terraform, automated through Azure DevOps, and secured with a Zero Trust identity and network model.

1. The Architecture: What We’re Building

This architecture extends the standard Hub-and-Spoke model with modern security principles.

https://dhanuka84.blogspot.com/p/azure-landing-zone-zero-trust.html


Key Zero Trust Enhancements

  • Credential-less Identity (OIDC & UAMI):

    • CI/CD Pipeline: Uses Workload Identity Federation (OIDC). Azure DevOps pipelines authenticate to Azure using federated tokens, completely eliminating the need for static Service Principal secrets.

    • Application Workloads: Uses User-Assigned Managed Identities (UAMI). Applications inside AKS (e.g., uami-prod-api-workload) are granted a managed identity, which is then given RBAC roles (like "Key Vault Secrets User") to access other Azure services.

  • Defense-in-Depth Networking:

    • Policy-Driven Governance: A "Deny Public IP Creation" policy is applied at the Management Group level, enforcing a core security principle.

    • Secure Ingress: Public traffic is forced through an Application Gateway (WAF_v2), which provides Web Application Firewall (WAF) protection against OWASP Top 10 attacks.

    • Secure API Exposure: The App Gateway routes traffic to an internal-only API Management (APIM) instance, which securely exposes and manages APIs from the private AKS cluster.

    • Network Security Groups (NSGs):

      • AzureBastionSubnet has its own mandatory, hardened NSG.

      • All spoke subnets (snet-aks-nodes, snet-private-endpoints) get a default "deny-all-by-default, allow-vnet" NSG.

      • snet-app-gateway and snet-apim receive specific, required NSGs for their services.

    • DDoS Protection: A central DDoS Protection Plan is created in the hub and associated with both the hub and all spoke VNets.

    • Automated DNS: Spoke VNets automatically link to the central Private DNS Zones, ensuring Private Endpoints resolve correctly across the environment.
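The “Deny Public IP Creation” policy above can be sketched in Terraform. This is a minimal, illustrative example of a custom policy definition and assignment at the Management Group scope — the resource and management group names are assumptions, not the repository’s actual code, and argument names can differ between azurerm provider versions:

Terraform

# Illustrative sketch only: a custom policy that denies creation of
# public IP addresses, assigned at a management group.
resource "azurerm_policy_definition" "deny_public_ip" {
  name                = "deny-public-ip-creation"
  policy_type         = "Custom"
  mode                = "All"
  display_name        = "Deny Public IP Creation"
  management_group_id = azurerm_management_group.platform.id

  policy_rule = jsonencode({
    if = {
      field  = "type"
      equals = "Microsoft.Network/publicIPAddresses"
    }
    then = {
      effect = "deny"
    }
  })
}

resource "azurerm_management_group_policy_assignment" "deny_public_ip" {
  name                 = "deny-public-ip"
  management_group_id  = azurerm_management_group.platform.id
  policy_definition_id = azurerm_policy_definition.deny_public_ip.id
}

Because the assignment sits at the Management Group level, every subscription underneath inherits the deny effect.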

Core Architecture

  • Governance: Management Groups (Platform, Non-Production, Production).

  • Networking (Hub-and-Spoke):

    • Hub VNet: Hosts Azure Firewall, Bastion, DDoS Plan, and Private DNS Zones.

    • Spoke VNets: For Dev, QA, and Prod. All traffic is routed through the Hub Firewall via UDRs.

  • Subscriptions: Connectivity, Dev, QA, Prod.
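The “all traffic through the Hub Firewall via UDRs” point can be sketched as a route table with a default route pointing at the firewall’s private IP. Names and the IP variable here are assumptions for illustration:

Terraform

# Illustrative sketch: force all spoke egress through the hub firewall.
resource "azurerm_route_table" "spoke" {
  name                = "rt-spoke-prod"
  location            = var.location
  resource_group_name = var.resource_group_name

  route {
    name                   = "default-via-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = var.firewall_private_ip # e.g. the hub firewall's private IP
  }
}

# Associate the route table with a spoke subnet (e.g. the AKS nodes subnet).
resource "azurerm_subnet_route_table_association" "aks" {
  subnet_id      = azurerm_subnet.aks_nodes.id
  route_table_id = azurerm_route_table.spoke.id
}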


2. Modern Identity: OIDC and UAMI





We replace legacy Service Principals (SPNs) with modern, credential-less identities.

  • Workload Identity Federation (OIDC): For CI/CD automation. We create a federated identity in Entra ID that trusts tokens from our Azure DevOps pipeline, removing all secrets from DevOps.

  • User-Assigned Managed Identity (UAMI): For application workloads. We create a uami-prod-api-workload identity and assign it to our AKS pods. This identity is then granted RBAC roles.
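The OIDC federation trust for the pipeline can be sketched as a federated credential on the pipeline’s app registration. The issuer/subject values and the application reference below are placeholders, and argument names differ between azuread provider major versions — treat this as a hedged sketch, not the repository’s code:

Terraform

# Illustrative sketch: a federated credential that lets an Azure DevOps
# service connection exchange its OIDC token for this app's identity.
resource "azuread_application_federated_identity_credential" "ado" {
  application_object_id = azuread_application.pipeline.object_id
  display_name          = "ado-federation"
  audiences             = ["api://AzureADTokenExchange"]
  issuer                = "https://vstoken.dev.azure.com/<ORG_GUID>"
  subject               = "sc://<org>/<project>/<service-connection-name>"
}

With this trust in place, no client secret ever exists for the pipeline identity.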

RBAC assignments via Terraform (New Model):


https://dhanuka84.blogspot.com/p/azure-entra-id-zero-trust.html

We now assign roles to the Managed Identity's Principal ID, not a variable holding a secret-based SPN.

Terraform

# Create the Managed Identity for the application

resource "azurerm_user_assigned_identity" "api" {

  name                = "uami-prod-api-workload"

  location            = var.location

  resource_group_name = var.resource_group_name

}


# Grant the UAMI access to secrets in the Key Vault

module "rbac" {

  assignments = [

    {

      scope_id           = module.kv.id

      role_definition    = "Key Vault Secrets User"

      principal_objectId = azurerm_user_assigned_identity.api.principal_id

    }

  ]

}


This is a more secure, declarative, and manageable approach to identity.
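Under the hood, AKS pods assume the UAMI through workload identity federation: a federated credential on the UAMI trusts the cluster’s OIDC issuer for a specific Kubernetes service account. A minimal sketch — the cluster reference, namespace, and service-account names are assumptions:

Terraform

# Illustrative sketch: let pods running as the "api-workload" service
# account in the "prod-api" namespace exchange their projected token
# for the UAMI. Requires oidc_issuer_enabled = true on the cluster.
resource "azurerm_federated_identity_credential" "aks_workload" {
  name                = "fic-prod-api-workload"
  resource_group_name = var.resource_group_name
  parent_id           = azurerm_user_assigned_identity.api.id
  audience            = ["api://AzureADTokenExchange"]
  issuer              = azurerm_kubernetes_cluster.main.oidc_issuer_url
  subject             = "system:serviceaccount:prod-api:api-workload"
}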


3. Project Structure

infra/

├─ modules/

│  ├─ networking-hub/         # Hub VNet, Subnets, DNS Zones

│  ├─ azure-firewall/         # Azure Firewall

│  ├─ udr/                    # User Defined Routes

│  ├─ networking-spoke/       # Spoke VNet, Subnets, Peering, DNS Link, Default NSG

│  ├─ aks/                    # Kubernetes Service

│  ├─ acr/                    # Container Registry

│  ├─ keyvault/               # Key Vault

│  ├─ private-endpoint/       # Private Endpoints

│  ├─ rbac/                   # Role Assignments

│  ├─ nsg-bastion/      # NEW: Mandatory NSG for Bastion

│  ├─ nsg-apim/         # NEW: NSG for APIM

│  ├─ nsg-app-gateway/  # NEW: NSG for App Gateway

│  ├─ apim/             # NEW: APIM module

│  └─ app-gateway/      # NEW: App Gateway module

├─ platform/

│  ├─ mg/               # Manages MGs and "Deny Public IP" policy

│  └─ connectivity/     # Manages Hub, Firewall, Bastion NSG, DDoS

├─ envs/

│  ├─ dev/ qa/         # Non-Prod environments

│  └─ prod/             # Prod env deploys AppGW, APIM, AKS, UAMI, etc.

└─ pipelines/

   └─ azure-pipelines.yml # CI/CD Pipeline definition



4. Manual Step-by-Step Deployment (Local Machine)

This guide shows how to deploy the entire landing zone manually from your local machine using Terraform and Azure CLI.

Prerequisites

Clone the repository:
Bash
git clone https://github.com/dhanuka84/azure_landing_zone_project.git

cd azure_landing_zone_project

git checkout zero-trust-imp # Make sure you are on the correct branch


  1. Install Tools: Terraform and the Azure CLI (az).

  2. Azure Subscriptions: You need at least four subscriptions: one each for Connectivity, Dev, QA, and Prod.

Step 1 – Log in to Azure

You must authenticate your terminal to Azure. This single login will be used by Terraform to manage all resources.

Bash

az login


Note: If you have multiple tenants, ensure you are logged into the tenant that owns all your subscriptions (az login --tenant <TENANT_ID>).

Step 2 – Set Up Terraform Remote State (One-Time Setup)

Terraform needs a storage account to store its state files. Storage account names must be globally unique, so replace saterraformstate123 with a unique name of your own here and in the terraform init commands that follow.

Bash

az group create -n rg-tfstate -l westeurope

az storage account create -n saterraformstate123 -g rg-tfstate --sku Standard_LRS --encryption-services blob

az storage container create -n tfstate --account-name saterraformstate123


Step 3 – CRITICAL: Modify Provider Files for Local Use

The project is configured for OIDC (pipeline) authentication. You must change this to use your local Azure CLI authentication.

You need to comment out use_oidc = true in all 5 of the following files:

  1. infra/platform/mg/providers.tf

  2. infra/platform/connectivity/providers.tf

  3. infra/envs/dev/providers.tf

  4. infra/envs/qa/providers.tf

  5. infra/envs/prod/providers.tf

Example (e.g., in infra/envs/prod/providers.tf):

Terraform

provider "azurerm" {

  features {}

  

  # The pipeline will set ARM_USE_OIDC="true" and other ARM_* env vars

  # use_oidc = true # <--- COMMENT THIS LINE OUT

}


Step 4 – Deploy Platform (Management Groups)

We must deploy in order, starting with the Management Groups.

Bash

# 1. Navigate to the mg directory

cd infra/platform/mg


# 2. Initialize Terraform

# We must provide the backend config, as the backend block in providers.tf is left empty.

terraform init \

    -backend-config="resource_group_name=rg-tfstate" \

    -backend-config="storage_account_name=saterraformstate123" \

    -backend-config="container_name=tfstate" \

    -backend-config="key=platform_mg.tfstate" \

    -backend-config="use_azuread_auth=true"


# 3. Plan the deployment

terraform plan -out=tfplan


# 4. Apply the plan

terraform apply "tfplan"
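The -backend-config flags above populate a deliberately empty backend block in providers.tf, which looks roughly like this:

Terraform

terraform {
  backend "azurerm" {
    # Intentionally empty: resource_group_name, storage_account_name,
    # container_name, key, and use_azuread_auth are supplied at
    # `terraform init` time via -backend-config flags.
  }
}

Keeping the block empty lets every layer (mg, connectivity, each env) share one provider file while writing to its own state key.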


Step 5 – Deploy Platform (Connectivity Hub)

This step deploys the Hub VNet, Firewall, Bastion, DDoS Plan, and DNS zones.

Bash

# 1. Navigate to the connectivity directory

cd ../connectivity


# 2. Initialize Terraform (note the new 'key')

terraform init \

    -backend-config="resource_group_name=rg-tfstate" \

    -backend-config="storage_account_name=saterraformstate123" \

    -backend-config="container_name=tfstate" \

    -backend-config="key=connectivity.tfstate" \

    -backend-config="use_azuread_auth=true"


# 3. Plan the deployment

terraform plan -var-file=hub.tfvars -out=tfplan


# 4. Apply the plan

terraform apply "tfplan"


Step 6 – Deploy Environment Spokes

The environments (dev, qa, prod) depend on the platform components. The data.terraform_remote_state blocks in their configurations will automatically fetch the outputs (like the Firewall IP and DNS Zone IDs) from the state files you just created.
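That cross-state lookup can look roughly like the following — the backend values match this guide’s setup, but the output name (firewall_private_ip) is an assumption for illustration:

Terraform

# Illustrative sketch: a spoke reads the connectivity state to discover
# hub outputs such as the firewall's private IP.
data "terraform_remote_state" "connectivity" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "saterraformstate123"
    container_name       = "tfstate"
    key                  = "connectivity.tfstate"
  }
}

locals {
  firewall_private_ip = data.terraform_remote_state.connectivity.outputs.firewall_private_ip
}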

A. Deploy Dev Environment

Bash

# 1. Navigate to the dev directory

cd ../../envs/dev


# 2. Initialize Terraform (note the new 'key')

terraform init \

    -backend-config="resource_group_name=rg-tfstate" \

    -backend-config="storage_account_name=saterraformstate123" \

    -backend-config="container_name=tfstate" \

    -backend-config="key=dev.tfstate" \

    -backend-config="use_azuread_auth=true"


# 3. Plan the deployment

terraform plan -var-file=dev.tfvars -out=tfplan


# 4. Apply the plan

terraform apply "tfplan"


B. Deploy QA Environment

Bash

# 1. Navigate to the qa directory

cd ../qa


# 2. Initialize Terraform (note the new 'key')

terraform init \

    -backend-config="resource_group_name=rg-tfstate" \

    -backend-config="storage_account_name=saterraformstate123" \

    -backend-config="container_name=tfstate" \

    -backend-config="key=qa.tfstate" \

    -backend-config="use_azuread_auth=true"


# 3. Plan the deployment

terraform plan -var-file=qa.tfvars -out=tfplan


# 4. Apply the plan

terraform apply "tfplan"


C. Deploy Prod Environment

Bash

# 1. Navigate to the prod directory

cd ../prod


# 2. Initialize Terraform (note the new 'key')

terraform init \

    -backend-config="resource_group_name=rg-tfstate" \

    -backend-config="storage_account_name=saterraformstate123" \

    -backend-config="container_name=tfstate" \

    -backend-config="key=prod.tfstate" \

    -backend-config="use_azuread_auth=true"


# 3. Plan the deployment

terraform plan -var-file=prod.tfvars -out=tfplan


# 4. Apply the plan (Review carefully before typing 'yes')

terraform apply "tfplan"



Step 7 – Validate Deployment

After all steps are complete, you can inspect the resources in the Azure portal or via Azure CLI.

Bash

# Check the Hub VNet and Firewall

az network vnet list -g rg-platform-connectivity -o table

az network firewall show -n afw-hub -g rg-platform-connectivity


# Check the Prod App Gateway Public IP

az network public-ip show -g rg-prod-app-services -n pip-prod-app-gw --query ipAddress -o tsv


# Check the Prod AKS cluster

az aks get-credentials -n aks-prod-main -g rg-prod-app-services

kubectl get nodes


Confirm traffic flows through the firewall by inspecting the Route Tables (UDRs) associated with your spoke subnets.


Conclusion

You now have a production-ready, Zero Trust-aligned Azure Landing Zone implemented as code:

  • Governance through Management Groups and enforceable policies.

  • Secure Networking with Firewall, WAF, DDoS, and layered NSGs.

  • Credential-less Identity using OIDC for CI/CD and UAMI for workloads.

  • Isolated Dev/QA/Prod spokes with secure API exposure.

  • Automated CI/CD with Azure DevOps.

This architecture is scalable, auditable, and compliant — a true foundation for your enterprise workloads.


Local Validation (Dry Run)

If you only want to validate the Terraform syntax without deploying, you can use the "dry run" method.

Navigate to an environment directory:
Bash
cd azure_landing_zone_project/infra/envs/prod


In providers.tf, temporarily comment out the backend "azurerm" {} block.
Terraform
/*

backend "azurerm" {}

*/

Run terraform init. This will initialize a local backend and download providers.

Run terraform validate. This checks all syntax and module references.
Bash
# Clean up any old local state

rm -f .terraform.lock.hcl

rm -rf .terraform


# Initialize and validate

terraform init

terraform validate


Important: Undo your changes to providers.tf before committing.





