Deploy on AKS — End-to-End Lab
Build, push, and deploy a multi-tier application to AKS from scratch. Create a cluster, push images to ACR, deploy with Helm, configure Ingress, and verify everything works.
🎯 Lab Overview
This is a complete end-to-end deployment lab. By the end, you'll have:
- A running AKS cluster with 2 worker nodes
- An ACR registry attached to the cluster
- A sample app (API + Redis) deployed via Helm
- NGINX Ingress exposing the app externally
- A rolling upgrade and a one-command rollback with Helm
You need: an Azure subscription, the Azure CLI, Helm 3, and kubectl installed. Docker Desktop is optional — we build images in the cloud with ACR Tasks. If you don't have a subscription, a free trial account works; sign in with az login.
🔧 Step 1 — Set Up Variables & Resource Group
Start by defining all the variables you'll reuse throughout the lab. This avoids typos and makes cleanup easy.
```bash
# Define variables — adjust region and names as needed
RESOURCE_GROUP="rg-skilly-lab"
LOCATION="eastus"
CLUSTER_NAME="aks-skilly-lab"
ACR_NAME="acrskilly$RANDOM"   # Must be globally unique
NODE_COUNT=2

# Create resource group
az group create \
  --name $RESOURCE_GROUP \
  --location $LOCATION

# Verify
az group show --name $RESOURCE_GROUP --query "properties.provisioningState" -o tsv
# Output: Succeeded
```
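ACR names must be 5-50 alphanumeric characters and globally unique, which is why the snippet appends $RANDOM. A quick local sanity check (plain Bash, no Azure call, shown here as an optional extra) catches an invalid name before az acr create rejects it:

```shell
# Validate the generated registry name locally before calling Azure.
# ACR name rules: 5-50 characters, letters and digits only.
ACR_NAME="acrskilly$RANDOM"
if [[ "$ACR_NAME" =~ ^[a-zA-Z0-9]{5,50}$ ]]; then
  echo "valid: $ACR_NAME"
else
  echo "invalid: $ACR_NAME" >&2
fi
```

Since "acrskilly" plus up to five digits always fits the pattern, this prints the valid branch; the check earns its keep when you swap in your own name.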
🔧 Step 2 — Create ACR & AKS Cluster
```bash
# Create Azure Container Registry
az acr create \
  --resource-group $RESOURCE_GROUP \
  --name $ACR_NAME \
  --sku Basic

# Create AKS cluster with ACR integration (one command!)
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --node-count $NODE_COUNT \
  --node-vm-size Standard_B2s \
  --attach-acr $ACR_NAME \
  --generate-ssh-keys \
  --network-plugin azure \
  --enable-managed-identity \
  --enable-addons monitoring \
  --zones 1 2 3 \
  --tier free

# Get credentials for kubectl
az aks get-credentials \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --overwrite-existing

# Verify cluster is running
kubectl get nodes
# NAME                              STATUS   ROLES   AGE   VERSION
# aks-nodepool1-12345678-vmss0000   Ready    agent   2m    v1.29.x
# aks-nodepool1-12345678-vmss0001   Ready    agent   2m    v1.29.x
```
The --attach-acr flag grants the AKS managed identity the AcrPull role on your registry. Without this, pods will fail with ImagePullBackOff because the kubelet can't authenticate to ACR. This single flag replaces manual role assignments.
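If you ever need to wire this up after the fact (say, the registry was created after the cluster), the manual equivalent is an AcrPull role assignment against the kubelet identity. A sketch, assuming the variables from Step 1 and a cluster with managed identity:

```shell
# Manual alternative to --attach-acr: grant the kubelet identity AcrPull on the registry
ACR_ID=$(az acr show --name $ACR_NAME --query id -o tsv)
KUBELET_CLIENT_ID=$(az aks show \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --query identityProfile.kubeletidentity.clientId -o tsv)

az role assignment create \
  --assignee $KUBELET_CLIENT_ID \
  --role AcrPull \
  --scope $ACR_ID
```

In practice, `az aks update --attach-acr $ACR_NAME` does the same role assignment for you on an existing cluster.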
🔧 Step 3 — Build & Push a Sample Application
We'll use a simple Node.js API that stores page-view counts in Redis.
```bash
# Create app directory
mkdir -p skilly-app && cd skilly-app

# Create a simple Node.js app
cat > server.js <<'EOF'
const express = require("express");
const redis = require("redis");
const app = express();

const client = redis.createClient({ url: process.env.REDIS_URL || "redis://redis:6379" });
client.on("error", (err) => console.error("Redis error:", err)); // without a listener, socket errors crash the process
client.connect();

app.get("/", async (req, res) => {
  const views = await client.incr("page_views");
  res.json({
    message: "Hello from AKS!",
    views: views,
    hostname: require("os").hostname(),
    timestamp: new Date().toISOString()
  });
});

app.get("/health", (req, res) => res.status(200).json({ status: "ok" }));

app.listen(3000, () => console.log("Server running on port 3000"));
EOF

cat > package.json <<'EOF'
{
  "name": "skilly-app",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": { "express": "^4.18.2", "redis": "^4.6.10" }
}
EOF

# Create Dockerfile
cat > Dockerfile <<'EOF'
FROM node:20-alpine
WORKDIR /app
COPY package.json .
RUN npm install --production
COPY server.js .
EXPOSE 3000
USER node
CMD ["node", "server.js"]
EOF

# Build in the cloud using ACR Tasks (no local Docker needed!)
az acr build \
  --registry $ACR_NAME \
  --image skilly-app:v1 \
  .

# Verify image exists
az acr repository show-tags --name $ACR_NAME --repository skilly-app -o tsv
# Output: v1
```
az acr build sends your source context to Azure and builds the image in the cloud. You don't need Docker running locally. This is faster, works in CI/CD agents without Docker-in-Docker, and the image is already in ACR when the build finishes.
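One easy optimization, since az acr build uploads the whole directory as the build context: a .dockerignore keeps local clutter (stray node_modules from test runs, Git metadata) out of the upload. A minimal sketch:

```shell
# Exclude local artifacts from the ACR build context
cat > .dockerignore <<'EOF'
node_modules
npm-debug.log
.git
EOF
```

For this three-file app it barely matters, but it becomes significant once node_modules exists locally.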
🔧 Step 4 — Create a Helm Chart
```bash
# Scaffold a Helm chart
cd .. && helm create skilly-chart
cd skilly-chart

# Clean up defaults — we'll write our own templates
rm -rf templates/tests templates/hpa.yaml templates/serviceaccount.yaml
```
```yaml
# values.yaml
replicaCount: 3

image:
  repository: ""   # Will be set via --set
  tag: "v1"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80
  targetPort: 3000

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: skilly.local
      paths:
        - path: /
          pathType: Prefix

redis:
  enabled: true
  image: redis:7-alpine
  port: 6379

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 250m
    memory: 256Mi

readinessProbe:
  path: /health
  port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10

livenessProbe:
  path: /health
  port: 3000
  initialDelaySeconds: 15
  periodSeconds: 20
```

```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
  labels:
    app: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
      - name: app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: {{ .Values.service.targetPort }}
        env:
        - name: REDIS_URL
          value: "redis://{{ .Release.Name }}-redis:{{ .Values.redis.port }}"
        readinessProbe:
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            port: {{ .Values.readinessProbe.port }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
        livenessProbe:
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            port: {{ .Values.livenessProbe.port }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
        resources:
          {{- toYaml .Values.resources | nindent 10 }}
```
```yaml
# templates/redis.yaml
{{- if .Values.redis.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-redis
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-redis
    spec:
      containers:
      - name: redis
        image: {{ .Values.redis.image }}
        ports:
        - containerPort: {{ .Values.redis.port }}
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 100m
            memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-redis
spec:
  selector:
    app: {{ .Release.Name }}-redis
  ports:
  - port: {{ .Values.redis.port }}
    targetPort: {{ .Values.redis.port }}
{{- end }}
```

🔧 Step 5 — Install NGINX Ingress Controller
```bash
# Add the ingress-nginx repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install NGINX Ingress Controller
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.replicaCount=2 \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz

# Wait for the external IP
kubectl get svc -n ingress-nginx ingress-nginx-controller -w
# NAME                       TYPE           EXTERNAL-IP   PORT(S)
# ingress-nginx-controller   LoadBalancer   20.xx.xx.xx   80:31080,443:31443
```
🔧 Step 6 — Deploy the App with Helm
```bash
# Back to the directory that contains skilly-chart/
cd ..

# Get your ACR login server
ACR_LOGIN_SERVER=$(az acr show --name $ACR_NAME --query loginServer -o tsv)

# Deploy!
helm install skilly-demo ./skilly-chart \
  --namespace skilly \
  --create-namespace \
  --set image.repository=$ACR_LOGIN_SERVER/skilly-app \
  --set image.tag=v1

# Check everything is running
kubectl get all -n skilly
# NAME                        READY   STATUS    RESTARTS   AGE
# pod/skilly-demo-app-xxx     1/1     Running   0          30s
# pod/skilly-demo-app-yyy     1/1     Running   0          30s
# pod/skilly-demo-app-zzz     1/1     Running   0          30s
# pod/skilly-demo-redis-xxx   1/1     Running   0          30s
#
# NAME                        TYPE        CLUSTER-IP     PORT(S)
# service/skilly-demo-app     ClusterIP   10.0.xxx.xxx   80/TCP
# service/skilly-demo-redis   ClusterIP   10.0.xxx.xxx   6379/TCP
```
🔧 Step 7 — Test the Deployment
```bash
# Get the external IP of the ingress controller
EXTERNAL_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Test with curl (use Host header since we set skilly.local in the Ingress)
curl -H "Host: skilly.local" http://$EXTERNAL_IP/
# {"message":"Hello from AKS!","views":1,"hostname":"skilly-demo-app-xxx","timestamp":"2026-04-20T..."}

# Hit it a few more times — watch the view counter increase and the hostname rotate
for i in $(seq 1 5); do
  curl -s -H "Host: skilly.local" http://$EXTERNAL_IP/ | jq .
done
```
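To make the load balancing concrete, you can tally which pods answered. A small loop (assuming jq is installed, and the EXTERNAL_IP variable from above) counts responses per hostname:

```shell
# Tally responses by serving pod: expect a roughly even spread across 3 replicas
for i in $(seq 1 20); do
  curl -s -H "Host: skilly.local" http://$EXTERNAL_IP/
done | jq -r .hostname | sort | uniq -c
```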
```bash
# Check pod logs
kubectl logs -n skilly -l app=skilly-demo --tail=20

# Verify Helm release
helm list -n skilly
# NAME          NAMESPACE   REVISION   STATUS     CHART
# skilly-demo   skilly      1          deployed   skilly-chart-0.1.0
```

🔧 Step 8 — Upgrade the App (Rolling Update)
```bash
# Make a code change, rebuild with a new tag
cd skilly-app
# (edit server.js — change the message to "Hello from AKS v2!")
az acr build --registry $ACR_NAME --image skilly-app:v2 .
cd ..

# Upgrade the Helm release
helm upgrade skilly-demo ./skilly-chart \
  --namespace skilly \
  --set image.repository=$ACR_LOGIN_SERVER/skilly-app \
  --set image.tag=v2

# Watch the rolling update
kubectl rollout status deployment/skilly-demo-app -n skilly
# Waiting for deployment "skilly-demo-app" rollout to finish: 1 of 3 updated replicas are available...
# deployment "skilly-demo-app" successfully rolled out

# Verify the new version
curl -s -H "Host: skilly.local" http://$EXTERNAL_IP/ | jq .message
# "Hello from AKS v2!"
```
🔧 Step 9 — Rollback
```bash
# View release history
helm history skilly-demo -n skilly
# REVISION   STATUS       CHART                APP VERSION   DESCRIPTION
# 1          superseded   skilly-chart-0.1.0   1.0.0         Install complete
# 2          deployed     skilly-chart-0.1.0   1.0.0         Upgrade complete

# Rollback to revision 1
helm rollback skilly-demo 1 -n skilly

# Verify
curl -s -H "Host: skilly.local" http://$EXTERNAL_IP/ | jq .message
# "Hello from AKS!" (back to v1)
```
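Worth knowing: a rollback doesn't rewind history, it appends a new revision. Running helm history again should show something like the illustrative output below:

```shell
helm history skilly-demo -n skilly
# REVISION   STATUS       DESCRIPTION
# 1          superseded   Install complete
# 2          superseded   Upgrade complete
# 3          deployed     Rollback to 1
```

That means a future `helm rollback skilly-demo 2` is still possible; nothing is lost.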
🧹 Cleanup
```bash
# Option 1: Delete just the Helm releases
helm uninstall skilly-demo -n skilly
helm uninstall ingress-nginx -n ingress-nginx

# Option 2: Delete EVERYTHING (cluster, ACR, resource group)
az group delete --name $RESOURCE_GROUP --yes --no-wait
# This deletes the AKS cluster, ACR, VNet, NSGs — everything in one command
```
AKS worker nodes are VMs that cost money. If you're using a free trial or personal subscription, always delete the resource group when you're done with the lab. az group delete is the cleanest way — it removes every Azure resource in the group.
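If you want to keep the lab around for later but stop paying for compute in the meantime, you can also pause the cluster instead of deleting the group:

```shell
# Deallocate the control plane and node VMs; cluster configuration is preserved
az aks stop --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

# Bring it back later
az aks start --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
```

Note that other billed resources in the group (like the ACR registry) keep accruing their own charges while stopped.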
🏗️ What You Built
| Component | Azure Resource | Purpose |
|---|---|---|
| Container Registry | ACR (Basic SKU) | Store Docker images |
| Kubernetes Cluster | AKS (Free tier) | Run workloads |
| Load Balancer | Azure LB (auto-created) | Route external traffic to Ingress |
| Ingress Controller | NGINX (Helm chart) | HTTP routing |
| Application | 3-replica Deployment | Node.js API |
| Cache | Redis Deployment | Page view counter |
| Package Manager | Helm | Templated K8s manifests |
📝 Key Takeaways
- `az aks create --attach-acr` is the single most important flag — it wires up authentication between AKS and ACR
- `az acr build` removes the need for local Docker builds — build in the cloud, deploy from the cloud
- Helm gives you versioned releases with one-command rollback — essential for production
- NGINX Ingress Controller is the standard way to expose HTTP services on AKS
- Always use `az group delete` for cleanup — it's the surest way to ensure nothing is left behind