Beginner Lesson 3 of 14

Pods

The smallest deployable unit in Kubernetes. Understand pod lifecycle, multi-container patterns, and debugging.

🧒 Simple Explanation (ELI5)

A Pod is like an apartment. A container is a person living in the apartment. Most apartments have one person (one container), but sometimes roommates share the apartment — they share the same address (IP), the same kitchen (network), and the same storage closets (volumes). Just like an apartment has a lifecycle (built → occupied → vacated → demolished), pods are created, run, and eventually terminated.

🤔 Why Do We Need Pods?

Why not run containers directly? Because Kubernetes needs a higher-level abstraction:

- Tightly coupled containers (an app and its log shipper, for example) must be scheduled together on the same node.
- Containers need a shared network identity (one IP) and shared storage volumes.
- The scheduler, controllers, and kubelet all operate on one atomic unit that can be created, replicated, and restarted as a whole.

🔧 Technical Explanation

Pod Lifecycle Phases

| Phase | Meaning |
| --- | --- |
| Pending | Pod accepted but not yet running. Waiting for scheduling, image pull, or volume mount. |
| Running | Pod bound to a node, at least one container running. |
| Succeeded | All containers terminated successfully (exit code 0). Common for Jobs. |
| Failed | All containers terminated, at least one exited with an error. |
| Unknown | Pod state cannot be determined (usually a node communication failure). |
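
You can query a pod's phase directly from the status subresource; a quick sketch (the pod name `my-pod` is a placeholder):

```bash
# Print just the phase (Pending, Running, Succeeded, Failed, Unknown)
kubectl get pod my-pod -o jsonpath='{.status.phase}'

# Or list the phase of every pod in the namespace
kubectl get pods -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
```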

Container States Within a Pod

| State | Description |
| --- | --- |
| Waiting | Container not yet running (pulling image, applying secrets). |
| Running | Container executing without issues. |
| Terminated | Container finished execution (success or failure). |
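
Note that container states are distinct from the pod phase; they live under `status.containerStatuses`. One way to inspect them (again assuming a pod named `my-pod`):

```bash
# Show the state object of the first container (waiting/running/terminated)
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].state}'
```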

Restart Policies

The policy is set per pod (`spec.restartPolicy`) and applies to all of its containers.

| Policy | Behavior |
| --- | --- |
| Always (default) | Restart the container whenever it exits, regardless of exit code. |
| OnFailure | Restart only when the container exits with a non-zero code. Common for Jobs. |
| Never | Never restart the container. |
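
As a minimal illustration, a one-shot pod with `restartPolicy: Never` runs its command once and ends up in the `Succeeded` phase (the name and command here are just examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot
spec:
  restartPolicy: Never   # do not restart after the command exits
  containers:
    - name: task
      image: busybox
      command: ["sh", "-c", "echo done"]
```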

Multi-Container Pod Patterns

| Pattern | Use Case | Example |
| --- | --- | --- |
| Sidecar | Extend the main container's functionality | Log shipper, proxy (Envoy), file syncer |
| Ambassador | Proxy connections to external services | Local proxy to a remote database cluster |
| Init Container | Run setup tasks before the app starts | DB migration, config fetching, wait-for-dependency |

📊 Visual: Pod Structure

Single-Container Pod: one container (nginx) with its own IP (10.244.1.5) and volume (/data).

Multi-Container Pod (Sidecar): an app container plus a log-collector sidecar sharing one IP (10.244.1.8) and one volume (/var/log).

Pod Lifecycle: Pending → Running → Succeeded / Failed

⌨️ Hands-on: Working with Pods

Create a pod imperatively

```bash
# Run an nginx pod
kubectl run my-nginx --image=nginx:latest --port=80

# Check pod status
kubectl get pods

# Watch pod status in real time
kubectl get pods -w

# Get detailed info
kubectl describe pod my-nginx

# View pod logs
kubectl logs my-nginx

# Execute a command inside the pod
kubectl exec -it my-nginx -- /bin/bash

# Delete the pod
kubectl delete pod my-nginx
```

Create a pod declaratively (YAML)

```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
    tier: frontend
spec:
  restartPolicy: Always
  containers:
    - name: app
      image: nginx:1.25
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "64Mi"
          cpu: "100m"
        limits:
          memory: "128Mi"
          cpu: "250m"
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 5
```

```bash
# Apply the pod definition
kubectl apply -f pod.yaml

# Verify
kubectl get pods -o wide

# Check events
kubectl describe pod my-app | tail -20
```

Multi-container pod with sidecar

```yaml
# sidecar-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-collector
      image: busybox
      # tail -F keeps retrying until nginx has created the log file
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
  volumes:
    - name: shared-logs
      emptyDir: {}
```
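
After applying the manifest, you can read each container's log stream separately with `-c` (container names as defined in the manifest above):

```bash
kubectl apply -f sidecar-pod.yaml

# Logs from the main container
kubectl logs app-with-sidecar -c app

# Logs from the sidecar
kubectl logs app-with-sidecar -c log-collector
```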

Init container example

```yaml
# init-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: wait-for-db
      image: busybox
      command: ["sh", "-c", "until nslookup my-db-service; do echo waiting for DB; sleep 2; done"]
  containers:
    - name: app
      image: nginx:1.25
```
💡 Tip

Init containers run before app containers and must complete successfully. Use them for pre-flight checks like waiting for a database, running migrations, or downloading config files.

🐛 Debugging Scenarios

Scenario 1: Pod Keeps Restarting (CrashLoopBackOff)

Symptom: Pod shows CrashLoopBackOff with increasing restart count.

```bash
# Step 1: Check why the container is exiting
kubectl logs my-pod --previous

# Step 2: Check events for more clues
kubectl describe pod my-pod | grep -A 20 "Events"

# Step 3: Check the exit code
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'

# Common causes:
# - Exit code 1: application error (missing env var, config issue)
# - Exit code 137: OOMKilled (increase the memory limit)
# - Exit code 139: segfault (application bug)
# - Command exits immediately (entrypoint/CMD issue)
```

Scenario 2: Pod Stuck in Pending

Cause: Scheduler can't find a suitable node.
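
A typical first pass for a Pending pod, sketched with a placeholder pod name:

```bash
# Step 1: Read the scheduler's events (e.g. "Insufficient cpu", "didn't match node selector")
kubectl describe pod my-pod | grep -A 10 "Events"

# Step 2: Check node capacity and what is already allocated
kubectl get nodes
kubectl describe nodes | grep -A 5 "Allocated resources"

# Step 3: Check for taints that the pod doesn't tolerate
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

# Step 4: If the pod uses a PVC, make sure it is bound
kubectl get pvc
```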

Scenario 3: Pod in ImagePullBackOff

Cause: Kubernetes can't pull the container image.

```bash
# Check the exact error
kubectl describe pod my-pod | grep -A 5 "Events"

# Fix: if using a private registry, create a pull secret
kubectl create secret docker-registry my-reg-secret \
  --docker-server=myregistry.io \
  --docker-username=user \
  --docker-password=pass

# Then reference it in the pod spec:
# imagePullSecrets:
#   - name: my-reg-secret
```

🎯 Interview Questions

Beginner

Q: What is a Pod in Kubernetes?

A Pod is the smallest deployable unit in Kubernetes. It's a wrapper around one or more containers that share the same network namespace (same IP, same ports), storage volumes, and lifecycle. Kubernetes schedules, scales, and manages pods — not individual containers.

Q: Can a Pod have multiple containers? Why?

Yes. Multi-container pods are used when containers are tightly coupled and need to share resources. Common patterns: sidecar (log collector alongside app), ambassador (local proxy), init containers (setup tasks before app starts). They share the same IP and can communicate via localhost.

Q: What are the possible phases of a Pod?

Pending (waiting to be scheduled/started), Running (at least one container running), Succeeded (all containers exited with code 0), Failed (at least one container exited with an error), Unknown (state can't be determined, usually node communication issue).

Q: What is the default restart policy for a Pod?

Always. This means Kubernetes will always restart a container when it exits, regardless of the exit code. Other options: OnFailure (restart only on non-zero exit) and Never.

Q: What is an Init Container?

Init containers run before the app containers and must complete successfully. They're used for setup tasks: waiting for a dependency to be ready, running database migrations, downloading config files. Each init container runs sequentially and must exit with code 0 before the next one starts.

Intermediate

Q: What's the difference between liveness and readiness probes?

Liveness probe: "Is the container still alive?" If it fails, kubelet kills the container and restarts it. Used to detect deadlocks or hung processes. Readiness probe: "Is the container ready to accept traffic?" If it fails, the pod is removed from Service endpoints (no traffic routed to it) but not restarted. Used during startup or when a pod needs to temporarily stop serving.

Q: What happens when a pod gets OOMKilled?

The container exceeds its memory limit. The Linux kernel's OOM killer terminates the process (exit code 137). Kubernetes sees the container as failed and restarts it (if restartPolicy allows). Fix: increase memory limit, optimize application memory usage, or fix memory leaks. Check with kubectl describe pod — you'll see "OOMKilled" in the last state.
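
To confirm an OOMKill from the CLI (pod name is a placeholder), the termination reason is recorded in the container's last state:

```bash
# Prints "OOMKilled" if the container was killed for exceeding its memory limit
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```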

Q: How do containers in a pod communicate?

Containers in the same pod share the network namespace. They communicate via localhost on different ports. They can also share data through mounted volumes (emptyDir, etc.). They don't need Service or DNS for inter-container communication within the same pod.

Q: What are resource requests vs limits?

Requests: The guaranteed minimum resources the container needs. Used by the scheduler to place pods on nodes. Limits: The maximum resources the container can use. CPU is throttled; memory over-limit triggers OOMKill. Best practice: always set both. Requests should reflect actual needs; limits provide a safety ceiling.

Q: What is a static pod?

Static pods are managed directly by kubelet on a specific node, not by the API server. They're defined as YAML files in a directory kubelet watches (typically /etc/kubernetes/manifests/). The control plane components (apiserver, scheduler, controller-manager, etcd) are often run as static pods on master nodes. They can't be managed by kubectl — only by editing files on the node.

Scenario-Based

Q: A pod is in CrashLoopBackOff. Walk through your debugging process.

1) kubectl logs pod-name --previous to see logs from the crashed container. 2) kubectl describe pod pod-name — check Events section and container last state/exit code. 3) Exit code 1 = app error (check config, env vars). Exit code 137 = OOMKilled (increase memory limit). Exit code 139 = segfault (app bug). 4) Check if the container command is correct — sometimes the entrypoint exits immediately. 5) Try running the image locally: docker run -it image:tag to reproduce.

Q: Your pod is stuck in Pending for 10 minutes. What do you check?

1) kubectl describe pod — check Events for scheduler messages. 2) "Insufficient cpu/memory" → nodes are full; scale cluster or reduce requests. 3) "no nodes match node selector" → check nodeSelector labels vs node labels. 4) Taints without tolerations → add tolerations or remove taints. 5) Unbound PVC → check PersistentVolume availability and StorageClass. 6) Check for ResourceQuota limits on the namespace.

Q: You need to debug a running container but curl isn't installed. What do you do?

Use ephemeral debug containers (K8s 1.25+): kubectl debug -it pod-name --image=busybox --target=container-name. This attaches a debug container that shares the process namespace. Alternatively, run a debug pod in the same namespace: kubectl run debug --image=nicolaka/netshoot -it --rm — this image has curl, dig, netstat, and other network tools.

Q: A pod's readiness probe starts failing but liveness passes. What happens?

The pod is removed from all Service endpoints — no new traffic is routed to it. But the pod keeps running since liveness passes. This is the correct behavior when a pod is temporarily unable to serve (e.g., loading a cache, reconnecting to a DB). Once readiness passes again, the pod is added back to endpoints. It's a graceful way to handle temporary unavailability without restarting.
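
You can watch this happen live: when readiness fails, the pod's IP drops out of the Service's endpoints, and it reappears once the probe passes again (assuming a Service named `my-service`):

```bash
kubectl get endpoints my-service -w
```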

Q: Your sidecar container is consuming too many resources and affecting the main app. How do you fix it?

1) Set resource limits on the sidecar container independently. Each container in a pod can have its own requests/limits. 2) If the sidecar is a log collector overwhelming disk I/O, add rate limiting in its config. 3) Consider if the sidecar truly needs to be in the same pod — could it run as a DaemonSet instead? 4) Profile the sidecar to find the resource bottleneck (CPU, memory, network).

🌍 Real-World Use Case

An e-commerce platform uses the sidecar pattern extensively. Every application pod has:

- the main application container,
- an Envoy sidecar proxying service-to-service traffic,
- a fluentd sidecar shipping logs to a central store.

Resource requests/limits are set per container: the app gets 500m CPU/512Mi memory, Envoy gets 100m/128Mi, and fluentd gets 50m/64Mi.

📝 Summary

- A Pod is the smallest deployable unit: one or more containers sharing an IP, network namespace, and volumes.
- Pod phases: Pending → Running → Succeeded/Failed, with Unknown when the node can't be reached.
- Restart policies: Always (default), OnFailure, Never.
- Multi-container patterns: sidecar, ambassador, and init containers.
- Debug with kubectl logs (--previous), kubectl describe, and container exit codes (1 = app error, 137 = OOMKilled, 139 = segfault).