Beginner Lesson 1 of 14

What is Kubernetes

Understand why Kubernetes exists, what problems it solves, and how it differs from Docker.

🧒 Simple Explanation (ELI5)

Imagine you run a restaurant. Docker is like a food truck — it packages everything needed to cook a single dish (your app + dependencies) into one portable container. But what happens when you have 100 food trucks across a city? Who decides which truck goes where? Who replaces a broken-down truck? Who handles the dinner rush?

Kubernetes is the fleet manager. It decides where each truck goes, replaces broken ones automatically, adds more trucks when demand spikes, and makes sure customers always get their food. You tell Kubernetes "I need 5 trucks running the pizza app" and it handles everything else.

🤔 Why Do We Need Kubernetes?

Without Kubernetes, running containers in production means you have to manually:

- Decide which server each container runs on
- Restart containers when they crash
- Add or remove instances as load changes
- Route traffic to healthy instances
- Roll out new versions without downtime
- Distribute configuration and secrets to every host

Kubernetes automates all of this. It's the industry standard for container orchestration — used by Google, Microsoft, Amazon, and virtually every major tech company.

💡
Key Insight

Docker is about packaging apps into containers. Kubernetes is about managing those containers at scale. They're complementary, not competitors.

🔧 Technical Explanation

Kubernetes (often abbreviated K8s) is an open-source container orchestration platform originally designed by Google, now maintained by the Cloud Native Computing Foundation (CNCF).

Core Concepts

- Cluster: a set of machines (nodes) that run your containerized applications as one unit
- Node: a single machine (VM or physical server) in the cluster
- Pod: the smallest deployable unit, usually wrapping one container
- Control Plane: the components that make scheduling decisions and keep the cluster at its desired state
- Desired state: the configuration you declare in YAML manifests; Kubernetes continuously works to make reality match it

What Kubernetes Does

| Capability | Description |
|---|---|
| Self-healing | Restarts failed containers, replaces unresponsive nodes |
| Horizontal scaling | Adds or removes container instances based on load |
| Service discovery | Built-in DNS and load balancing for services |
| Rolling updates | Zero-downtime deployments with rollback capability |
| Secret management | Stores and injects sensitive data securely |
| Storage orchestration | Mounts persistent volumes from cloud or local storage |
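All of these capabilities flow from declaring what you want instead of doing it by hand. As a sketch (the Deployment name `pizza-app` is illustrative, echoing the food-truck analogy, and a running cluster with `kubectl` configured is assumed), asking for 5 replicas looks like this:

```shell
# Declare 5 replicas of an nginx-based app; Kubernetes keeps 5 running
# no matter what crashes. Names here are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pizza-app
spec:
  replicas: 5            # desired state: always 5 pods
  selector:
    matchLabels:
      app: pizza-app
  template:
    metadata:
      labels:
        app: pizza-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF

# Verify: the READY column should converge to 5/5
kubectl get deployment pizza-app
```

If a pod dies, the Deployment controller notices the count dropped below 5 and starts a replacement without any human intervention.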

📊 Visual: Docker vs Kubernetes

Docker Alone vs Kubernetes:

- Docker only: Containers 1-3 running; Container 4 crashed ❌ and needs a manual restart.
- With Kubernetes: Pods 1-3 running ✅; Pod 4 crashed and was automatically restarted ♻️ (self-healing).

⌨️ Hands-on: Your First Kubernetes Commands

If you have a cluster running (minikube, kind, or cloud), try these:

```bash
# Check cluster info
kubectl cluster-info

# See all nodes in the cluster
kubectl get nodes

# View all namespaces
kubectl get namespaces

# Run a simple pod
kubectl run hello --image=nginx --port=80

# Check the pod status
kubectl get pods

# View pod details
kubectl describe pod hello

# Delete the test pod
kubectl delete pod hello
```
💡
Tip

Don't have a cluster yet? Install minikube or kind locally. For a managed cloud option, AKS (Azure Kubernetes Service) offers a free control-plane tier to experiment with.

🐛 Debugging Scenario

Scenario: "Why not just use Docker Compose in production?"

A common beginner question. Docker Compose is excellent for local development, but here's what it lacks for production:

| Need | Docker Compose | Kubernetes |
|---|---|---|
| Multi-node deployment | ❌ Single host | ✅ Multi-node cluster |
| Self-healing | ❌ Manual restart | ✅ Auto-restart & reschedule |
| Scaling | ⚠️ Limited | ✅ HPA, VPA, cluster autoscaler |
| Rolling updates | ❌ Downtime on redeploy | ✅ Zero-downtime rolling update |
| Service discovery | ⚠️ Basic DNS | ✅ Built-in DNS + load balancing |
| Secret management | ⚠️ Env files | ✅ Encrypted secrets, RBAC |
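To make the scaling row concrete, here is a hedged side-by-side (the service/deployment name `web` is illustrative): Compose can scale replicas, but only on one host, while Kubernetes spreads them across the cluster:

```shell
# Docker Compose: scale the 'web' service to 3 containers -- all on this one host
docker compose up -d --scale web=3

# Kubernetes: scale the 'web' Deployment to 3 pods -- the scheduler places them
# on any healthy nodes in the cluster
kubectl scale deployment web --replicas=3
kubectl get pods -o wide   # shows which node each pod landed on
```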

Scenario: "My team deployed 50 containers manually and it keeps breaking"

This is exactly the problem Kubernetes solves. When your team manually manages containers:

- There is no single source of truth for what should be running where
- Crashed containers stay down until someone notices
- Every deployment is a hand-run sequence of steps, so mistakes and drift accumulate
- Scaling up means provisioning and wiring new servers by hand

Kubernetes handles all of this through its reconciliation loop: it continuously checks if reality matches the desired state and corrects any drift.
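You can watch the reconciliation loop in action with a small experiment on any cluster (the Deployment name `web` is illustrative):

```shell
# Create a throwaway Deployment with 3 replicas (illustrative name)
kubectl create deployment web --image=nginx --replicas=3
kubectl get pods -l app=web    # three pods running

# Simulate a crash by deleting one pod
kubectl delete pod "$(kubectl get pods -l app=web -o jsonpath='{.items[0].metadata.name}')"

# The controller sees actual (2) != desired (3) and creates a replacement
kubectl get pods -l app=web    # back to three; one has a fresh AGE

kubectl delete deployment web  # clean up
```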

🎯 Interview Questions

Beginner

Q: What is Kubernetes and why is it used?

Kubernetes is an open-source container orchestration platform that automates deploying, scaling, and managing containerized applications. It's used because manually managing containers across servers doesn't scale — Kubernetes automates scheduling, self-healing, load balancing, rolling updates, and secret management.

Q: What is the difference between Docker and Kubernetes?

Docker is a container runtime — it packages and runs individual containers. Kubernetes is an orchestration layer — it manages many containers across many machines. Docker builds and runs the container; Kubernetes decides where to run it, restarts it if it fails, scales it, and routes traffic to it. They work together, not as replacements.

Q: What does "K8s" mean?

K8s is a numeronym for Kubernetes — "K" + 8 letters + "s". It's shorthand used in the community. Kubernetes itself comes from the Greek word for "helmsman" or "pilot".

Q: What is a Kubernetes cluster?

A cluster is a set of machines (nodes) that run containerized applications managed by Kubernetes. It consists of a Control Plane (master components that make scheduling decisions) and Worker Nodes (where the actual application containers run).

Q: What is "desired state" in Kubernetes?

Desired state is what you declare in YAML manifests — for example, "I want 3 replicas of my app running image v2." Kubernetes continuously compares the actual state of the cluster to this desired state and takes corrective action (starts pods, kills pods, restarts crashed ones) to make them match.

Intermediate

Q: How does Kubernetes achieve self-healing?

Through the reconciliation loop. Controllers (like the Deployment controller) continuously watch the cluster state. If a pod crashes, the controller detects the mismatch between desired (3 replicas) and actual (2 running), and creates a new pod. If a node dies, pods are rescheduled to healthy nodes. Liveness and readiness probes provide additional health checking.
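As a sketch of the probes mentioned above (the pod name, path, and timings are illustrative), a liveness probe that restarts a container when its HTTP endpoint stops answering can be declared like this:

```shell
# Illustrative pod with a liveness probe; the kubelet restarts the container
# if GET / on port 80 fails repeatedly. Requires a running cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probed-nginx
spec:
  containers:
  - name: web
    image: nginx:1.25
    livenessProbe:
      httpGet:
        path: /              # endpoint the kubelet polls
        port: 80
      initialDelaySeconds: 5 # wait before the first check
      periodSeconds: 10      # then check every 10 seconds
EOF
```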

Q: What's the difference between imperative and declarative approaches in K8s?

Imperative: You tell K8s what to do step by step: kubectl run nginx --image=nginx. Declarative: You define the desired state in a YAML file and kubectl apply -f it. Kubernetes figures out the steps. Declarative is preferred for production because manifests are version-controlled, reproducible, and auditable.
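The two styles side by side, as a sketch (file and resource names are illustrative):

```shell
# Imperative: tell Kubernetes each step explicitly
kubectl run nginx --image=nginx

# Declarative: describe the end state in a file and let Kubernetes converge to it
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF
kubectl apply -f pod.yaml   # re-running apply is safe: it is idempotent
```

The declarative file can be committed to Git, reviewed, and replayed on any cluster, which is why it is preferred for production.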

Q: What container runtimes does Kubernetes support?

Kubernetes supports any CRI (Container Runtime Interface) compliant runtime. Common ones: containerd (the default in most distributions) and CRI-O (lightweight, designed for K8s). Docker was used historically via the dockershim adapter, which was deprecated in K8s 1.20 and removed in 1.24; Docker-built images still work fine with containerd because they follow the OCI image format.

Q: What are namespaces and why are they important?

Namespaces provide logical isolation within a cluster. They let you divide resources between teams, environments (dev/staging/prod), or projects. Resource quotas and RBAC policies can be scoped to namespaces. Default namespaces: default, kube-system, kube-public, kube-node-lease.
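A quick sketch of namespace isolation and a scoped quota (the namespace name `team-a` and the limit are illustrative):

```shell
# Create an isolated namespace for a team (illustrative name)
kubectl create namespace team-a

# Resources are scoped: this pod exists only inside team-a
kubectl run nginx --image=nginx -n team-a
kubectl get pods -n team-a

# A ResourceQuota caps what the namespace may consume
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "10"    # at most 10 pods in this namespace
EOF
```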

Q: Can Kubernetes run without Docker?

Yes. Since K8s 1.24 removed the dockershim, Docker Engine is no longer supported as a runtime out of the box (it can still be attached via the external cri-dockerd adapter). Kubernetes talks to runtimes through the CRI (Container Runtime Interface) and typically runs with containerd or CRI-O. However, you can still build container images with Docker — the images are OCI-compatible and work with any runtime.
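You can check which runtime your own nodes use; on most recent clusters the CONTAINER-RUNTIME column shows something like containerd://1.7.x:

```shell
# The wide output includes each node's container runtime version
kubectl get nodes -o wide
```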

Scenario-Based

Q: Your manager asks "Why can't we just use Docker Compose in production?" How do you respond?

Docker Compose is single-host — it can't distribute containers across machines. In production you need multi-node deployment, automatic failover, scaling, rolling updates, and proper secret management. Kubernetes provides all of this plus a declarative model that integrates with CI/CD. Compose is great for local dev, but Kubernetes is designed for production resilience.

Q: A server hosting 20 containers goes down at 2 AM. How does Kubernetes handle this vs manual management?

Without K8s: pager goes off, engineer SSHs in, manually restarts containers or moves them to another server. With K8s: the node controller detects the node is unhealthy, marks pods as terminated, scheduler automatically redistributes workloads to healthy nodes. Self-healing happens in seconds/minutes — no human intervention needed.
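You can rehearse this scenario safely on a multi-node cluster (the node name `node-2` is illustrative): draining a node evicts its pods, and the scheduler immediately places them elsewhere:

```shell
# Mark a node unschedulable and evict its pods (illustrative node name)
kubectl drain node-2 --ignore-daemonsets --delete-emptydir-data

# Pods from node-2 are rescheduled onto the remaining healthy nodes
kubectl get pods -o wide

# Bring the node back into rotation once it is healthy again
kubectl uncordon node-2
```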

Q: Your app gets 10x traffic during a flash sale. How does Kubernetes help?

Configure a Horizontal Pod Autoscaler (HPA) to scale pods based on CPU/memory or custom metrics. If nodes run out of capacity, the Cluster Autoscaler adds more nodes. When traffic drops, both scale back down. This is reactive (metric-based) — for predictable events, you can also pre-scale by updating replica counts before the sale.
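A minimal HPA setup as a sketch (the deployment name and thresholds are illustrative; the cluster needs metrics-server for CPU metrics):

```shell
# Scale 'web' between 2 and 10 pods, targeting 70% average CPU
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10

# Watch the autoscaler react as load changes
kubectl get hpa web --watch
```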

Q: You deployed a bad version and users are seeing errors. What's your rollback strategy?

Kubernetes tracks deployment revision history. Run kubectl rollout undo deployment/myapp to instantly roll back to the previous working version. For more control: kubectl rollout undo deployment/myapp --to-revision=3. The rolling update strategy ensures zero downtime during both deploy and rollback.
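The rollback flow described above, end to end (the deployment name `myapp` is illustrative):

```shell
# Inspect the revision history of a Deployment
kubectl rollout history deployment/myapp

# Roll back to the previous revision...
kubectl rollout undo deployment/myapp

# ...or to a specific one
kubectl rollout undo deployment/myapp --to-revision=3

# Follow the rolling rollback until it completes
kubectl rollout status deployment/myapp
```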

Q: Your team is debating between Kubernetes and AWS ECS. What factors do you consider?

Consider: Portability — K8s runs anywhere (cloud, on-prem, hybrid); ECS is AWS-only. Complexity — ECS is simpler if you're AWS-only; K8s has a steeper learning curve but more flexibility. Ecosystem — K8s has a massive ecosystem (Helm, operators, service mesh); ECS is more limited. Team skill — if the team knows K8s, use it. Vendor lock-in — K8s minimizes it; ECS creates AWS dependency. For multi-cloud or hybrid: K8s. For simple AWS-only workloads: ECS can be sufficient.

🌍 Real-World Use Case

A fintech startup runs 30 microservices processing payment transactions. Before Kubernetes, they deployed containers manually to EC2 instances with shell scripts. Problems:

- Deployments were slow, error-prone, and slightly different on every host
- Crashed containers stayed down until an engineer noticed and restarted them
- Traffic spikes meant provisioning new instances by hand
- Rollbacks meant re-running old scripts and hoping nothing had drifted

After migrating to Kubernetes (AKS):

- Deployments became declarative manifests in Git, applied through CI/CD
- Failed containers restart automatically; unhealthy nodes are drained and workloads rescheduled
- The Horizontal Pod Autoscaler absorbs traffic spikes without manual intervention
- Rolling updates and kubectl rollout undo make releases and rollbacks low-risk

📝 Summary

- Kubernetes is an open-source container orchestration platform that automates deploying, scaling, and managing containerized applications.
- Docker packages applications into containers; Kubernetes manages those containers across many machines. They are complementary, not competitors.
- Core capabilities: self-healing, horizontal scaling, service discovery, rolling updates, secret management, and storage orchestration.
- You declare desired state in YAML manifests; the reconciliation loop continuously corrects any drift between desired and actual state.
