Helm requires a working Kubernetes cluster and a configured kubectl. If you followed the Kubernetes course, you're already set.
What is Helm
The package manager for Kubernetes — why it exists, what problem it solves, and how it compares to managing raw YAML.
🧒 Simple Explanation (ELI5)
Imagine you want to install a complex application on your computer — say, a video editor with plugins, codecs, and settings. You don't download 50 files individually and configure each one. You use an installer (like apt-get on Linux or a .msi on Windows) that bundles everything together.
Helm is that installer for Kubernetes. Instead of writing 10–20 YAML files for a single application (Deployment, Service, ConfigMap, Secret, Ingress, HPA...), Helm packages them into a single chart that you install with one command.
🔧 Why Do We Need Helm?
The Problem with Raw YAML
In the Kubernetes course, you learned to deploy apps with kubectl apply -f. That works, but at scale it breaks down:
| Problem | Raw YAML | Helm |
|---|---|---|
| Deploy 15 microservices | 150+ YAML files to manage | 15 helm installs (or 1 umbrella chart) |
| Same app, different envs | Duplicate YAML for dev/staging/prod | One chart, different values files |
| Upgrade an app | Manually edit YAMLs, kubectl apply | helm upgrade with new values |
| Rollback | git revert + kubectl apply (painful) | helm rollback myapp 1 |
| Share configs | Copy-paste YAML between teams | Publish chart to a registry |
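The "one chart, different values files" row is the core win. A minimal sketch of what such a values file looks like — the keys (`replicaCount`, `image.tag`, `resources`) are illustrative and must match whatever your chart's templates actually read:

```yaml
# values-dev.yaml — small footprint for development
replicaCount: 1
image:
  tag: latest
resources:
  limits:
    memory: 256Mi
```

Production gets its own file (e.g. `values-prod.yaml`) overriding the same keys with hardened settings, and you deploy the identical chart with `helm install myapp ./mychart -f values-prod.yaml`.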
📊 Visual: Helm vs Raw YAML
🔧 Technical Explanation
Helm is a package manager and templating engine for Kubernetes. It has three core concepts:
| Concept | What It Is | Analogy |
|---|---|---|
| Chart | A package of Kubernetes YAML templates + metadata | Like a .deb or .rpm package |
| Release | An installed instance of a chart in a cluster | An installed application |
| Repository | A place to store and share charts | Like apt repo or npm registry |
⌨️ Hands-on: Install Helm
```shell
# Install Helm (macOS)
brew install helm

# Install Helm (Windows — winget)
winget install Helm.Helm

# Install Helm (Linux)
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Verify
helm version
```
```shell
# Your first Helm install — deploy nginx from a public chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install nginx
helm install my-nginx bitnami/nginx

# See what was created
helm list
kubectl get all
```
🐛 Debugging Scenario
Scenario: "helm install" fails immediately
```shell
# Error: "Kubernetes cluster unreachable"
# This means Helm can't connect to your cluster

# Debug:
kubectl cluster-info              # Check if kubectl works
kubectl config current-context    # Verify correct context

# If using Docker Desktop or minikube:
minikube status                   # or check Docker Desktop K8s is enabled

# Error: "release name already exists"
helm list -A                      # Check if the name is already used
helm uninstall my-nginx           # Remove old release first
```
🎯 Interview Questions
Beginner
**Q: What is Helm?**
Helm is the package manager for Kubernetes. It simplifies deploying, upgrading, and managing applications on Kubernetes by bundling all required Kubernetes manifests (Deployment, Service, ConfigMap, etc.) into a single chart. Think of it as apt/yum for Kubernetes.
**Q: What problem does Helm solve?**
Helm solves the complexity of managing multiple YAML files for Kubernetes applications. Without Helm, deploying a single app might require 10+ YAML files. Helm bundles them into a chart with templating, so you can deploy, upgrade, rollback, and share applications easily. It also eliminates YAML duplication across environments via values files.
**Q: Explain the terms Chart, Release, and Repository.**
Chart: A package containing Kubernetes manifest templates and metadata. Release: A running instance of a chart installed in a cluster (you can install the same chart multiple times with different release names). Repository: A collection of charts available for download (like Artifact Hub, Bitnami, or your own private repo).
**Q: How is Helm different from kubectl apply?**
kubectl apply applies static YAML files — no templating, no versioning, no rollback tracking. Helm adds: templating (dynamic values per environment), release management (tracks install/upgrade/rollback history), packaging (bundle files into a chart), and dependency management (include sub-charts). kubectl is the knife; Helm is the kitchen.
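To make the contrast concrete, a sketch (release name, chart path, and tag are placeholders):

```shell
# kubectl: push static YAML — no revision history, no rollback tracking
kubectl apply -f deployment.yaml -f service.yaml

# Helm: templated install where every change is a numbered revision
helm install myapp ./mychart --set image.tag=v1.2.3
helm history myapp        # list revisions of this release
helm rollback myapp 1     # revert to revision 1
```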
**Q: What is a Helm chart and what does it contain?**
A Helm chart is a directory (or compressed archive) containing: Chart.yaml (metadata — name, version, description), values.yaml (default configuration values), templates/ (Kubernetes YAML templates with Go template syntax), and optionally charts/ (dependency subcharts). It's the unit of packaging in Helm.
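The layout described above, roughly as scaffolded by `helm create` (trimmed — the real scaffold includes more template files):

```text
mychart/
├── Chart.yaml           # metadata: name, version, description
├── values.yaml          # default configuration values
├── charts/              # dependency subcharts (optional)
└── templates/
    ├── deployment.yaml  # Kubernetes YAML with Go template syntax
    ├── service.yaml
    └── _helpers.tpl     # shared template helpers
```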
Intermediate
**Q: What changed between Helm 2 and Helm 3?**
The biggest change: Tiller was removed. Helm 2 used a server-side component (Tiller) in the cluster, which was a security risk (it had cluster-admin access). Helm 3 is client-only — it talks directly to the Kubernetes API using your kubeconfig. Other changes: release names are now namespace-scoped, 3-way strategic merge for upgrades, JSON Schema validation for values.
**Q: Can you install the same chart multiple times in one cluster?**
Yes. Each installation creates a unique release. For example: helm install app-dev bitnami/nginx and helm install app-staging bitnami/nginx create two independent releases of the same chart, potentially with different values. Release names must be unique within a namespace.
**Q: What is Artifact Hub?**
Artifact Hub (artifacthub.io) is the central search website for Helm charts, similar to Docker Hub for container images. It aggregates charts from multiple repositories (Bitnami, JetStack, Prometheus, etc.). You can discover charts, read documentation, and find install commands there.
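You can also query Artifact Hub straight from the CLI:

```shell
# Search Artifact Hub (no repo add required)
helm search hub nginx

# Search only the repos you've added locally
helm search repo bitnami/nginx
```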
**Q: How does Helm compare to Kustomize?**
- Helm: full templating engine, packaging, versioning, release management, rollback. Best for distributable applications, third-party software, and complex deployments with dependencies.
- Kustomize: overlay-based patching of plain YAML, no templating, built into kubectl (kubectl apply -k). Best for internal team configs where you want to keep base YAML readable.

Key differences:
- Helm has a release lifecycle (install/upgrade/rollback/history) — Kustomize doesn't.
- Helm has dependency management — Kustomize doesn't.
- Kustomize keeps YAML as plain YAML (no template syntax) — Helm requires learning Go templates.

In practice, many teams use both: Helm for third-party charts (nginx, PostgreSQL, Redis), Kustomize for internal app overlays. ArgoCD supports both natively.
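For comparison, a minimal Kustomize production overlay — file paths and the `myapp` image name are illustrative:

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # plain, readable YAML lives in the base
patches:
  - path: replica-patch.yaml
images:
  - name: myapp
    newTag: v1.2.3
```

No template syntax anywhere — the base stays valid, applyable YAML, which is exactly the trade-off against Helm's Go templates.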
**Q: Where does Helm 3 store release state?**
Helm 3 stores release information as Kubernetes Secrets (by default) in the namespace where the release is installed. The secret type is helm.sh/release.v1. This means release data is backed by etcd and respects RBAC. You can switch the storage backend (e.g. to ConfigMaps) by setting the HELM_DRIVER environment variable, for example HELM_DRIVER=configmap.
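You can see this for yourself against a live cluster — the release name `my-nginx` below assumes the nginx install from earlier:

```shell
# List Helm-owned release secrets in a namespace
kubectl get secrets -n default -l owner=helm

# Each revision is stored as sh.helm.release.v1.<release>.v<revision>
kubectl get secret sh.helm.release.v1.my-nginx.v1 -n default \
  -o jsonpath='{.type}'   # prints: helm.sh/release.v1
```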
Scenario-Based
**Q: Your company runs 20 microservices with similar deployment patterns. How would you manage them with Helm?**
Create a base Helm chart (or umbrella chart) with templates for the common patterns (Deployment, Service, ConfigMap, HPA). Each microservice gets its own values file with service-specific configuration (image, replicas, env vars). Deploy all 20 with a script or Helmfile. Upgrades become helm upgrade per service. Rollbacks are instant. Template once, configure many.
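A sketch of the "script" option — chart path, service names, and values layout are all illustrative:

```shell
# One shared chart, one values file per service
for svc in auth orders payments; do
  helm upgrade --install "$svc" ./base-chart \
    -f "values/$svc.yaml" --namespace prod
done
```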
**Q: helm install fails with "release name already exists". What do you do?**
The release name is already in use in that namespace. Check with helm list -n &lt;namespace&gt; (add -a to include failed releases). If the old release isn't needed — or a failed install left a stale release behind — remove it with helm uninstall &lt;name&gt;. Alternatively, use helm upgrade --install, which installs if the release doesn't exist and upgrades if it does — idempotent and CI/CD-friendly.
**Q: How do you deploy the same app to dev, staging, and prod?**
Create one chart. Create three values files: values-dev.yaml, values-staging.yaml, values-prod.yaml. Each overrides replicas, image tag, resource limits, ingress host, etc. Deploy: helm install myapp-dev ./mychart -f values-dev.yaml -n dev. Same chart, different configuration — zero duplication.
**Q: An upgrade broke production. How do you recover?**
Immediate recovery: helm rollback &lt;release-name&gt; &lt;revision-number&gt;. Check history first: helm history &lt;release-name&gt; to see all revisions and find the last working one. Helm reverts all Kubernetes resources to the previous state. Then investigate: compare values with helm get values &lt;release-name&gt; --revision N and check manifests with helm get manifest.
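The recovery workflow end to end — release name and revision numbers are illustrative:

```shell
helm history myapp                      # find the last healthy revision (say, 4)
helm rollback myapp 4                   # revert all resources to revision 4

# Post-mortem: what did the bad revision (say, 5) change?
helm get values myapp --revision 5
helm get manifest myapp --revision 5
```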
**Q: How do you enforce deployment standards across many teams?**
Create a library chart or base chart that enforces company standards (resource limits, labels, security contexts, health probes). Publish it to an internal chart repository (ChartMuseum, OCI registry, or GitHub Pages). Teams use it as a dependency or use helm create from the template. Add JSON Schema validation in values.schema.json to enforce required values. Use OPA/Kyverno to validate rendered manifests.
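A values.schema.json sketch that makes installs fail unless resource limits are set — it assumes your chart uses the conventional `resources.limits` values keys:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["resources"],
  "properties": {
    "resources": {
      "type": "object",
      "required": ["limits"],
      "properties": {
        "limits": {
          "type": "object",
          "required": ["cpu", "memory"]
        }
      }
    }
  }
}
```

With this file in the chart root, helm install/upgrade/lint validate the merged values against the schema automatically.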
🔄 Migrating from kubectl to Helm
If you already deploy with kubectl apply -f, here's how to migrate step by step:
```shell
# Step 1: Scaffold a chart from your existing YAMLs
helm create myapp

# Step 2: Copy your existing manifests into templates/
cp deployment.yaml myapp/templates/
cp service.yaml myapp/templates/
cp configmap.yaml myapp/templates/

# Step 3: Replace hardcoded values with template variables
# Before (in deployment.yaml):
#   replicas: 3
#   image: myapp:v1.2.3
# After:
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# Step 4: Define defaults in values.yaml
#   replicaCount: 3
#   image:
#     repository: myapp
#     tag: v1.2.3

# Step 5: Test rendering
helm template test ./myapp
helm lint ./myapp

# Step 6: Adopt existing resources into Helm
# Label & annotate your running resources so Helm "owns" them:
kubectl annotate deployment myapp \
  meta.helm.sh/release-name=myapp \
  meta.helm.sh/release-namespace=default
kubectl label deployment myapp \
  app.kubernetes.io/managed-by=Helm

# Step 7: Install (Helm adopts the existing resources)
helm install myapp ./myapp
```
After migration, all changes should go through helm upgrade, not kubectl edit or kubectl apply. Manual kubectl edits will be detected by Helm's 3-way merge on the next upgrade — they may be preserved or overwritten depending on conflicts. Helm becomes the single source of truth for your K8s resources.
🌍 Real-World Use Case
A fintech startup with 30 microservices migrated from raw YAML to Helm:
- Before: 300+ YAML files, 2-hour manual deployment, copy-paste errors between envs
- After: 1 base chart + 30 values files, `helm upgrade` per service in CI/CD, rollback in seconds
- Result: Deployment time dropped from 2 hours to 10 minutes. Rollback went from "revert git + redeploy" to one command. Config drift between environments eliminated.
🌍 Real-World Scenario: E-Commerce Platform Migration
A mid-size e-commerce company (15 engineers, 12 microservices) migrated from kubectl to Helm:
| Aspect | Before (kubectl) | After (Helm) |
|---|---|---|
| Deploy a service | kubectl apply -f deploy/ service/ config/ ingress/ (4 commands, 4 files) | helm upgrade --install myapp ./chart -f values-prod.yaml |
| Rollback from bad deploy | Find last good commit → git checkout → kubectl apply (10-30 min) | helm rollback myapp (30 seconds) |
| New environment | Copy all YAMLs, find-and-replace values (error-prone) | New values file: values-staging.yaml |
| Onboard new dev | "Read the wiki, here are 47 YAML files" | helm install dev-env ./chart -f values-dev.yaml |
| Audit what's deployed | kubectl get all + compare with Git (drift common) | helm get values myapp --all + helm get manifest myapp |
📝 Summary
- Helm is the package manager for Kubernetes
- It bundles multiple YAML files into a single chart
- Core concepts: Chart (package), Release (installed instance), Repository (chart store)
- Eliminates YAML duplication with templating and values files
- Provides release management — install, upgrade, rollback with full history