Intermediate Lesson 5 of 14

Deployments

Declarative updates for Pods. Learn rolling updates, rollbacks, and strategies to manage application lifecycle.

🧒 Simple Explanation (ELI5)

A Deployment is like a manager at a call center. You tell the manager "I need 5 agents handling calls at all times." If one agent takes a break, the manager brings in a replacement. If you want to train everyone on a new script (new version), the manager rotates one agent at a time — so calls are never dropped. If the new script is bad, the manager can instantly switch everyone back to the old one. That's what a Deployment does for your pods.

🤔 Why Use Deployments Instead of Pods?

💡
Key Insight

In production, you should never create pods directly. Always use a Deployment (or another controller like StatefulSet/DaemonSet). Bare pods aren't rescheduled if a node fails.

🔧 Technical Explanation

Deployment → ReplicaSet → Pods

A Deployment doesn't manage pods directly. The hierarchy is:

Deployment Hierarchy:
Deployment → manages → ReplicaSet → manages → Pod, Pod, Pod

The Deployment controller creates/updates ReplicaSets. The ReplicaSet controller ensures the right number of pods are running. On update, a new ReplicaSet is created and scaled up while the old one is scaled down.
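You can verify this ownership chain on a live cluster. A quick check, assuming the web-app Deployment created in the hands-on section below (requires cluster access):

```bash
# Each object records its owner in metadata.ownerReferences.
# A ReplicaSet is owned by the Deployment...
kubectl get rs -l app=web-app \
  -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}{"\n"}'
# -> Deployment

# ...and each Pod is owned by a ReplicaSet.
kubectl get pods -l app=web-app \
  -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}{"\n"}'
# -> ReplicaSet
```

This is also why deleting a Deployment cascades: the garbage collector removes the owned ReplicaSets, which in turn removes their Pods.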

Update Strategies

| Strategy | Behavior | Use When |
| --- | --- | --- |
| RollingUpdate (default) | Gradually replaces old pods with new ones. Controlled via maxSurge and maxUnavailable. | Most applications. Zero-downtime requirement. |
| Recreate | Kills all old pods first, then creates new ones. Brief downtime. | Apps that can't run two versions simultaneously (e.g., DB schema incompatibility). |

RollingUpdate Parameters
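Both parameters live under spec.strategy.rollingUpdate and accept either an absolute number or a percentage of desired replicas. A fragment for a 10-replica deployment (values are illustrative):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    # How many extra pods may exist above the desired count
    # during the update (10 + 2 = up to 12 pods).
    maxSurge: 2            # absolute number or percentage, e.g. "25%"
    # How many pods may be missing from the desired count
    # (10 - 1 = at least 9 available). 0 means no capacity loss.
    maxUnavailable: 1
```

Note that maxSurge and maxUnavailable cannot both be 0 — the rollout would have no room to make progress.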

⌨️ Hands-on: Deployments in Practice

Create a Deployment

yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.24
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "100m"
              memory: "64Mi"
            limits:
              cpu: "200m"
              memory: "128Mi"
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
bash
# Apply
kubectl apply -f deployment.yaml

# Check deployment status
kubectl get deployments
kubectl get rs    # see ReplicaSet
kubectl get pods  # see pods

# Detailed rollout status
kubectl rollout status deployment/web-app

Perform a Rolling Update

bash
# Update image (triggers rolling update)
kubectl set image deployment/web-app web=nginx:1.25

# Watch the rollout
kubectl rollout status deployment/web-app

# See the new ReplicaSet created
kubectl get rs

# Check rollout history
kubectl rollout history deployment/web-app

# View specific revision details
kubectl rollout history deployment/web-app --revision=2

Rollback

bash
# Rollback to previous version
kubectl rollout undo deployment/web-app

# Rollback to a specific revision
kubectl rollout undo deployment/web-app --to-revision=1

# Verify
kubectl rollout status deployment/web-app
kubectl describe deployment web-app | grep Image

Scaling

bash
# Scale up
kubectl scale deployment/web-app --replicas=5

# Scale down
kubectl scale deployment/web-app --replicas=2

# Verify
kubectl get pods

🐛 Debugging Scenarios

Scenario 1: Wrong Image Deployed

Symptom: After updating, new pods are in ImagePullBackOff or ErrImagePull. Old pods are still running (maxUnavailable=0 saved you).

bash
# Check what image was set
kubectl describe deployment web-app | grep Image

# See the failing pods
kubectl get pods
kubectl describe pod web-app-xxxx-yyyy | grep -A 5 "Events"

# Fix: rollback immediately
kubectl rollout undo deployment/web-app

# Or fix the image
kubectl set image deployment/web-app web=nginx:1.25

Scenario 2: Rollout Stuck

Symptom: kubectl rollout status hangs. New pods aren't becoming Ready.
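Common causes are a readiness probe that never passes, a crashing container, or insufficient cluster resources to schedule the surge pods. A typical triage sequence (the pod name is a placeholder):

```bash
# Which ReplicaSet is stuck? Compare DESIRED vs READY.
kubectl get rs -l app=web-app

# Why aren't the new pods Ready? Check events and probe failures.
kubectl get pods -l app=web-app
kubectl describe pod web-app-xxxx-yyyy

# Container crashing? Check current and previous logs.
kubectl logs web-app-xxxx-yyyy
kubectl logs web-app-xxxx-yyyy --previous

# If the new version is bad, abort by rolling back.
kubectl rollout undo deployment/web-app
```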

Scenario 3: All Pods Killed During Update (Using Recreate Strategy by Mistake)

Symptom: Complete downtime during update.

Root cause: strategy.type: Recreate instead of RollingUpdate.

Fix: Change strategy to RollingUpdate with maxUnavailable: 0 for zero downtime.
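One way to apply the fix in place is a merge patch on the live object (you can equally edit the YAML and re-apply):

```bash
# Switch the strategy and pin maxUnavailable to 0 for zero downtime.
kubectl patch deployment web-app --type merge -p \
  '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
```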

⚠️
Warning

Always set maxUnavailable: 0 and configure readiness probes if zero downtime is required. Without a readiness probe, K8s considers pods ready immediately — so traffic might hit a pod that hasn't finished starting.

🎯 Interview Questions

Beginner

Q: What is a Deployment in Kubernetes?

A Deployment is a Kubernetes resource that provides declarative updates for Pods. You describe a desired state (image, replicas, strategy) and the Deployment controller changes the actual state at a controlled rate. It manages ReplicaSets, which in turn manage Pods.

Q: What's the difference between a Pod and a Deployment?

A Pod is a single instance of a running process. A Deployment manages multiple pod replicas, ensures they're always running (self-healing), and handles rolling updates and rollbacks. Bare pods are ephemeral and aren't rescheduled if a node fails; a Deployment ensures failed pods are replaced.

Q: What is a ReplicaSet?

A ReplicaSet ensures a specified number of pod replicas are running at any time. Deployments create and manage ReplicaSets. You rarely create ReplicaSets directly — use Deployments instead, which add rolling update and rollback capabilities on top.

Q: How do you scale a Deployment?

kubectl scale deployment/my-app --replicas=5 or edit the YAML and kubectl apply. For automatic scaling, use a HorizontalPodAutoscaler (HPA) that scales based on CPU, memory, or custom metrics.
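The imperative autoscale command sketches the HPA setup (the deployment name and thresholds are placeholders):

```bash
# Create an HPA that targets ~80% average CPU utilization,
# scaling the deployment between 2 and 10 replicas.
kubectl autoscale deployment/my-app --cpu-percent=80 --min=2 --max=10

# Inspect current metrics and scaling targets
kubectl get hpa
```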

Q: How do you roll back a Deployment?

kubectl rollout undo deployment/my-app reverts to the previous revision. kubectl rollout undo deployment/my-app --to-revision=N goes to a specific revision. K8s tracks revision history via ReplicaSets — each update creates a new RS while keeping old ones.

Intermediate

Q: Explain the difference between RollingUpdate and Recreate strategies.

RollingUpdate: Gradually replaces pods — new pods start while old ones still serve traffic. Controlled by maxSurge and maxUnavailable. Zero downtime if configured properly. Recreate: Kills all existing pods before creating new ones. Causes downtime but ensures no two versions run simultaneously. Use Recreate only when old and new versions are incompatible.

Q: What are maxSurge and maxUnavailable?

maxSurge: Maximum number of pods that can be created above the desired count during a rolling update (absolute number or %). maxUnavailable: Maximum pods that can be unavailable during update. For zero downtime: set maxUnavailable=0, maxSurge=1. For faster rollouts: increase both.

Q: What happens to old ReplicaSets after an update?

Old ReplicaSets are scaled to 0 replicas but kept for rollback history. The number of revisions kept is controlled by spec.revisionHistoryLimit (default 10). Setting it to 0 disables rollback — not recommended.
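The limit is set on the Deployment spec (the value here is illustrative):

```yaml
spec:
  replicas: 3
  # Keep only the 5 most recent old ReplicaSets for rollback.
  # Default is 10; 0 disables rollback entirely.
  revisionHistoryLimit: 5
```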

Q: What is progressDeadlineSeconds?

The maximum time (default 600s) Kubernetes waits for a deployment to make progress. If no new pods become ready within this window, the deployment is marked as failed with a ProgressDeadlineExceeded condition. It doesn't roll back automatically; you need to intervene.
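It is configured per Deployment (the value here is illustrative):

```yaml
spec:
  # Mark the rollout as failed if no progress is made for 5 minutes.
  # Must be greater than minReadySeconds. Default: 600.
  progressDeadlineSeconds: 300
```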

Q: How do you pause and resume a rollout?

kubectl rollout pause deployment/my-app stops the rollout mid-way — useful for canary-style testing. Make changes while paused (they accumulate). kubectl rollout resume deployment/my-app continues. Changes are batched into a single rollout when resumed.
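A rough canary-style sequence using pause/resume (timing-sensitive, and the image tag is a placeholder):

```bash
# Start an update, then pause it once the first new pod is up
kubectl set image deployment/my-app web=my-app:v2
kubectl rollout pause deployment/my-app

# Observe the canary pod(s) running alongside the old version
kubectl get pods -l app=my-app

# If it looks healthy, continue the rollout
kubectl rollout resume deployment/my-app
# (To abort instead: resume first, then kubectl rollout undo —
#  Kubernetes refuses to roll back a paused deployment.)
```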

Scenario-Based

Q: You deployed a new image version and users report 500 errors. The rollout is still in progress. What do you do?

1) Immediately: kubectl rollout undo deployment/my-app to revert. 2) While reverting, check what went wrong: kubectl logs deployment/my-app, kubectl describe deployment. 3) Once stable, debug the new version in a staging environment. 4) Post-mortem: add better readiness probes so bad pods are never added to service endpoints. 5) Consider canary deployments or progressive delivery for safer rollouts.

Q: Your deployment has 10 replicas and you set maxSurge=2 and maxUnavailable=1. Describe the update process.

During update: max 12 pods total (10 + maxSurge 2), min 9 available (10 - maxUnavailable 1). Process: K8s creates 2 new pods (surge). As they become ready, 1 old pod is terminated (unavailable). Then 1-2 more new pods are created, old ones terminated — always keeping at least 9 available. This continues until all 10 pods run the new version.

Q: You need to update both the app image and a ConfigMap atomically. How?

ConfigMap changes don't trigger a rollout. Options: 1) Name ConfigMaps with a hash suffix (e.g., app-config-v2abc) and reference the new name in the deployment — changing the reference triggers a rollout. 2) Use kubectl rollout restart deployment/my-app after updating the ConfigMap. 3) Use tools like Helm or kustomize that handle this pattern. Option 1 is the most reliable for atomic, auditable changes.
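A sketch of option 1, deriving the suffix from the file contents (the file and resource names are hypothetical):

```bash
# Content-addressed name: same config always hashes to the same name
HASH=$(sha256sum app.properties | cut -c1-8)
kubectl create configmap "app-config-${HASH}" --from-file=app.properties

# Point the deployment at the new ConfigMap name; because the pod
# template changes, this triggers a normal rolling update.
# (edit the configMap reference in deployment.yaml, then:)
kubectl apply -f deployment.yaml
```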

Q: Your rollout is stuck at 50% — new pods are CrashLoopBackOff. Old pods still serve traffic. What's happening and how do you fix it?

The new ReplicaSet's pods are failing, so they never become ready. With maxUnavailable=0, old pods aren't removed. You're in a stalled state. Fix: 1) kubectl rollout undo to cancel the bad rollout. 2) Investigate crash: kubectl logs pod-name --previous. 3) Fix the issue (wrong config, missing env var, bad image). 4) Re-deploy. The progressDeadlineSeconds will eventually mark the deployment as failed, but it won't auto-rollback.

Q: How would you implement blue-green deployments in Kubernetes?

K8s doesn't have native blue-green. Approach: 1) Run two Deployments: app-blue (current) and app-green (new). 2) Both run simultaneously. 3) A Service's selector points to blue labels initially. 4) Test green independently. 5) Switch the Service selector to green. 6) If issues, switch back to blue. 7) Once stable, decommission blue. For more advanced patterns, use Argo Rollouts or Flagger for automated progressive delivery.
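The cutover itself is just a Service selector change. A sketch (labels, names, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue   # flip to "green" and re-apply to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
```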

🌍 Real-World Use Case

A SaaS company deploys their API service (120 replicas) multiple times daily. A setup like this typically relies on:

- RollingUpdate with maxUnavailable: 0 and a small maxSurge, so capacity never drops during a deploy
- Readiness probes, so new pods receive traffic only once they can actually serve it
- kubectl rollout undo (backed by revisionHistoryLimit) for fast, one-command rollbacks
- progressDeadlineSeconds plus monitoring, so stalled rollouts are caught instead of hanging silently

📝 Summary

- A Deployment manages ReplicaSets, which manage Pods; never run bare pods in production.
- RollingUpdate (default) replaces pods gradually; Recreate kills everything first and causes downtime.
- maxSurge and maxUnavailable trade rollout speed against availability; maxUnavailable: 0 plus a readiness probe gives zero downtime.
- kubectl rollout status / history / undo cover monitoring, auditing, and rollback.
- Old ReplicaSets (kept per revisionHistoryLimit) are the rollback history; progressDeadlineSeconds marks stalled rollouts as failed but never auto-rolls back.
