Installing & Managing Releases
Master the Helm release lifecycle — install, upgrade, rollback, uninstall, and inspect releases like a pro.
🧒 Simple Explanation (ELI5)
Think of Helm releases like app installs on your phone. helm install downloads and installs the app. helm upgrade updates it to a newer version. helm rollback goes back to the previous version if the update broke something. helm uninstall removes it. helm list shows all your installed apps.
🔧 Command Reference
Install
# From a repository
helm install my-release bitnami/nginx

# From a local directory
helm install my-release ./mychart

# With a custom values file
helm install my-release ./mychart -f values-prod.yaml

# With inline overrides
helm install my-release ./mychart --set replicaCount=3 --set image.tag=v2.0

# Install in a specific namespace (--create-namespace creates it if missing)
helm install my-release ./mychart -n production --create-namespace

# Generate a unique name automatically
helm install bitnami/nginx --generate-name

# Dry run — render and validate without installing
helm install my-release ./mychart --dry-run
Upgrade
# Upgrade with new values
helm upgrade my-release ./mychart --set image.tag=v3.0

# Upgrade with a values file
helm upgrade my-release ./mychart -f values-prod.yaml

# Install if absent, upgrade if present (idempotent — CI/CD best practice)
helm upgrade --install my-release ./mychart -f values-prod.yaml

# Reuse previous values (don't reset to defaults)
helm upgrade my-release ./mychart --reuse-values --set image.tag=v3.0

# Wait for pods to be ready before marking success
helm upgrade my-release ./mychart --wait --timeout 5m
A team used --reuse-values in CI/CD for months. When the chart added a new securityContext default, it was never applied to production — because --reuse-values keeps the OLD values (which didn't have securityContext). The new chart default was silently ignored. Fix: Always use explicit -f values.yaml files instead of --reuse-values. Or use --reset-then-reuse-values (Helm 3.14+) which applies new chart defaults first, then re-applies previous overrides.
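One way to enforce that fix is to route every deploy through a small wrapper that always rebuilds the full value set from versioned files. This is a sketch, not standard tooling; the function name and values-file layout are illustrative assumptions.

```shell
# Hypothetical CI helper: never rely on --reuse-values; always pass the
# layered values files explicitly so new chart defaults are picked up.
deploy_with_values() {
  release=$1; chart=$2; env=$3
  helm upgrade --install "$release" "$chart" \
    -f values.yaml \
    -f "values-${env}.yaml" \
    --wait --timeout 5m
}

# Example (commented out — needs a live cluster):
# deploy_with_values my-release ./mychart prod
```

Because the wrapper takes the environment as an argument, there is no code path that deploys without an explicit values file.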
Rollback
# View release history
helm history my-release

# Rollback to the previous revision
helm rollback my-release

# Rollback to a specific revision
helm rollback my-release 2

# Rollback and wait for readiness
helm rollback my-release 2 --wait --timeout 3m
Inspect & Debug
# List all releases
helm list
helm list -A            # All namespaces
helm list -n staging    # Specific namespace

# Get release info
helm status my-release

# Get values used
helm get values my-release              # User-supplied only
helm get values my-release --all        # All values (defaults + overrides)
helm get values my-release --revision 2 # Values at a specific revision

# Get rendered manifests
helm get manifest my-release

# Get everything (notes, values, hooks, manifest)
helm get all my-release
Uninstall
# Remove the release and all its Kubernetes resources
helm uninstall my-release

# Keep release history (allows rollback even after uninstall)
helm uninstall my-release --keep-history
📊 Visual: Release Lifecycle
install ──▶ rev 1 ──upgrade──▶ rev 2 ──upgrade──▶ rev 3 ──rollback──▶ rev 4

(Every operation, including rollback, appends a new revision.)
⌨️ Hands-on Lab
# 1. Add the repo and install
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install web bitnami/nginx -n demo --create-namespace

# 2. Check status
helm list -n demo
helm status web -n demo
kubectl get all -n demo

# 3. Upgrade — change the replica count
helm upgrade web bitnami/nginx -n demo --set replicaCount=3
helm history web -n demo

# 4. Oops, wrong — roll back
helm rollback web 1 -n demo
helm history web -n demo

# 5. Get values at each revision
helm get values web -n demo --revision 1
helm get values web -n demo --revision 2

# 6. Clean up
helm uninstall web -n demo
kubectl delete namespace demo
🐛 Debugging Scenarios
Scenario 1: Upgrade failed — release stuck in "pending-upgrade"
# Check status
helm status my-release
# STATUS: pending-upgrade

# This happens when an upgrade timed out or was interrupted.
# Fix: roll back to the last successful revision
helm history my-release   # Find the last "deployed" revision
helm rollback my-release <last-working-revision>

# If the rollback also fails, force it:
helm rollback my-release <revision> --force
Scenario 2: Values not applied after upgrade
# Verify which values are active
helm get values my-release --all

# Common cause: forgot to pass the values file
# Wrong:
helm upgrade my-release ./mychart --set image.tag=v2
# This RESETS all previous custom values to chart defaults!

# Right — either pass all values again:
helm upgrade my-release ./mychart -f values-prod.yaml --set image.tag=v2
# or reuse the previous values:
helm upgrade my-release ./mychart --reuse-values --set image.tag=v2
Scenario 3: "cannot re-use a name that is still in use"
# The release name already exists
helm list -A | grep my-release

# Option 1: Use a different name
helm install my-release-v2 ./mychart

# Option 2: Upgrade instead
helm upgrade --install my-release ./mychart

# Option 3: Uninstall first
helm uninstall my-release
🎯 Interview Questions
Beginner
Q: What's the difference between helm install, helm upgrade, and helm upgrade --install?
A: helm install creates a new release (revision 1) and fails if the release name already exists. helm upgrade moves an existing release to a new revision and fails if the release doesn't exist. helm upgrade --install combines both — it installs if the release is absent and upgrades if present. This is the CI/CD best practice.
Q: How do you roll back a Helm release?
A: helm rollback <release> <revision>. First check history with helm history <release> to find the target revision. Rollback creates a new forward revision with the old config. Add --wait to ensure pods are ready before the rollback is marked successful.
Q: How do you inspect the values and manifests of a deployed release?
A: helm get values <release> shows the user-supplied overrides. helm get values --all shows all merged values. helm get values --revision N shows the values at a specific revision. helm get manifest shows the rendered Kubernetes YAML.
Q: What does the --dry-run flag do?
A: --dry-run renders the templates and simulates the install or upgrade without creating any resources, showing exactly what would be deployed. Newer Helm versions also accept --dry-run=server to validate the rendered manifests against the cluster's API. Useful for testing before an actual install/upgrade; combine with --debug for even more detail.
Q: What's the difference between helm list and helm status?
A: helm list shows all releases in a namespace (name, revision, status, chart, app version). helm status <release> shows detailed info for one release: status, last deployed time, namespace, revision, and the NOTES.txt output.
Intermediate
Q: Why is helm upgrade --install the recommended command for CI/CD?
A: It's idempotent: it installs if the release doesn't exist and upgrades if it does. Your pipeline doesn't need to check "does this release exist?" first — the same command works for the first deployment and every subsequent update. Essential for GitOps and automated pipelines.
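As a sketch of that idempotency, the pipeline step below can run on every commit without any existence check. The release name, chart path, and values file are illustrative assumptions.

```shell
# Hypothetical pipeline step: safe to run repeatedly.
# First run installs (revision 1); each later run upgrades (revision N+1).
ci_deploy() {
  helm upgrade --install myapp ./chart \
    -n production \
    -f values-prod.yaml \
    --atomic --wait --timeout 10m
}

# Running it twice issues the exact same command both times:
# ci_deploy && ci_deploy
```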
Q: Explain --reset-values vs --reuse-values on upgrade.
A: Default behavior (--reset-values): Helm starts from the chart's default values.yaml and applies only what you explicitly pass; previous custom values are lost unless you pass them again. --reuse-values: Helm starts from the previous release's values and applies your new overrides on top. Use --reuse-values for quick incremental changes; use the default when you want a clean, reproducible state.
Q: What happens if an upgrade fails partway through?
A: Helm marks the revision as "failed". The cluster may be in a mixed state (some resources updated, some not). Check with helm status and helm history. Recovery: helm rollback to the last successful revision. Use the --atomic flag to auto-rollback on failure, and --wait to wait for readiness.
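A recovery helper along these lines can find the last successful revision automatically. This is a sketch that parses helm history --output json with plain text tools (jq would be cleaner); the function names are made up.

```shell
# Sketch: pick the newest revision that completed successfully
# ("deployed" = currently good, "superseded" = previously good).
last_good_revision() {
  helm history "$1" -n "$2" --output json \
    | tr '}' '\n' \
    | grep -E '"status":"(deployed|superseded)"' \
    | sed -n 's/.*"revision":\([0-9]*\).*/\1/p' \
    | tail -1
}

recover() {
  rev=$(last_good_revision "$1" "$2")
  helm rollback "$1" "$rev" -n "$2" --wait
}

# recover myapp production
```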
Q: What does the --atomic flag do?
A: --atomic makes the upgrade or install transactional: if anything fails (template error, timeout, pod crash), Helm automatically rolls back to the previous revision. It implies --wait. This prevents leaving the cluster in a broken state after a failed upgrade.
Q: How do you combine multiple values files and --set overrides, and what's the precedence?
A: Multiple -f files, where later files override earlier ones: helm install app ./chart -f base.yaml -f prod.yaml. Multiple --set flags: --set a=1 --set b=2. --set overrides values from -f. For complex values, use --set-file or --set-json. Priority order: chart values.yaml < -f file1 < -f file2 < --set.
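To see the merge result before deploying, you can render locally with the same layered flags. A sketch; the file names base.yaml and prod.yaml and the chart path are illustrative.

```shell
# Later -f files override earlier ones; --set beats all files.
# If base.yaml sets replicaCount: 2 and prod.yaml sets replicaCount: 5,
# the rendered manifests here use replicaCount 10 (from --set).
preview_merge() {
  helm template app ./chart \
    -f base.yaml \
    -f prod.yaml \
    --set replicaCount=10
}

# preview_merge | grep replicas
```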
Scenario-Based
Q: A deploy just broke production. What's your immediate response?
A: Immediate: helm rollback <release> --wait. Check history first if needed: helm history <release>. Verify: kubectl get pods — pods should be rolling back. Then investigate: compare values between revisions with helm get values --revision N. Prevention: use --atomic for auto-rollback, readiness probes, and staged rollouts.
Q: A release is stuck in pending-install. How do you recover?
A: This happens when an install was interrupted or the cluster couldn't create resources. 1) Check: helm status <release>. 2) Check Kubernetes: kubectl get events -n <ns>. 3) Uninstall the failed release: helm uninstall <release>. 4) Fix the underlying issue (resources, quotas, image). 5) Re-install. If the uninstall hangs, delete the release secrets: kubectl delete secrets -l name=<release>,owner=helm.
Q: Someone upgraded with only --set flags and wiped the custom production values. How do you recover?
A: 1) Rollback: helm rollback <release> to restore the previous values. 2) Then upgrade correctly: helm upgrade <release> ./chart -f values-prod.yaml. 3) Verify: helm get values <release> --all matches the expected config. 4) Prevention: always use helm upgrade --install -f values.yaml in scripts; never rely on --reuse-values alone for critical configs.
Q: How would you deploy the same application to dev, staging, and prod?
A: One chart, multiple values files and namespaces: helm upgrade --install myapp-dev ./chart -f values-dev.yaml -n dev, helm upgrade --install myapp-prod ./chart -f values-prod.yaml -n prod. For multi-cluster: use different kubeconfig contexts. For scale: use Helmfile or ArgoCD to manage many releases declaratively.
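The per-environment commands can be collapsed into a small loop. A sketch with illustrative release, chart, and file names:

```shell
# One release per environment, each with its own values file and namespace.
deploy_env() {
  env=$1
  helm upgrade --install "myapp-${env}" ./chart \
    -f "values-${env}.yaml" \
    -n "$env" --create-namespace \
    --atomic --wait
}

# Fan out (commented — needs a live cluster):
# for e in dev staging prod; do deploy_env "$e"; done
```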
Q: A release was accidentally uninstalled. Can you recover it?
A: If --keep-history was used: helm rollback <release> <last-revision> restores it. If not: the release secrets are deleted and recovery via Helm is not possible. Kubernetes resources may still be running if the uninstall failed partway (they're not auto-deleted in that case). Otherwise you'd need to re-install with the same values. Prevention: use --keep-history in production and maintain values in Git.
🌍 Real-World Use Case
A DevOps team manages 50+ microservices across 3 environments:
- CI/CD pipelines use helm upgrade --install --atomic --wait --timeout 10m
- Rollback is automatic on failure (thanks to --atomic)
- Each environment has its own values file stored in Git
- Release history enables instant rollback during incidents — recovery time dropped from 30 minutes to under 2 minutes
🌍 Real-World Scenario: Production Incident Response
3 AM incident: users report 500 errors after a deploy. The on-call engineer's response:
# Step 1: Confirm the problem (which K8s resources are unhealthy?)
kubectl get pods -n production
# NAME                    READY  STATUS            RESTARTS  AGE
# myapp-6f8d7c45b-abc12   0/1    CrashLoopBackOff  4         8m

# Step 2: Check what changed (Helm history = audit trail)
helm history myapp -n production
# REVISION  STATUS      DESCRIPTION
# 14        superseded  Upgrade complete   ← last known good
# 15        deployed    Upgrade complete   ← broken deploy

# Step 3: Compare values between revisions
helm get values myapp -n production --revision 14 > /tmp/rev14.yaml
helm get values myapp -n production --revision 15 > /tmp/rev15.yaml
diff /tmp/rev14.yaml /tmp/rev15.yaml
# Found: image.tag changed from v2.3.1 to v2.4.0

# Step 4: Roll back immediately (creates revision 16 with rev 14's config)
helm rollback myapp 14 -n production --wait --timeout 3m

# Step 5: Verify recovery
kubectl get pods -n production
# myapp-7a9e8b32d-xyz99  1/1  Running  0  45s
curl -s https://myapp.example.com/health | jq .status
# "ok"
⚠️ What Rollback Cannot Undo
- Database migrations — If a pre-upgrade hook ran a DB migration, rollback reverses the K8s resources but NOT the database. You need a separate migration rollback strategy.
- PersistentVolumeClaims — PVCs are not deleted on rollback/uninstall by default (data protection). Expanded PVCs cannot shrink.
- CRDs — Helm never deletes CRDs (to prevent data loss). CRD changes are not rolled back.
- External resources — DNS records, cloud load balancers, external secrets created by hooks are not automatically reversed.
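For the migration caveat in particular, the usual shape is a pre-upgrade hook Job, sketched below. The helm.sh/hook annotations are real Helm syntax; the Job name, image, and command are made up for this sketch. Helm runs the Job before the upgrade, but helm rollback only restores the chart's regular resources — nothing runs the migration in reverse.

```yaml
# Illustrative pre-upgrade migration hook (name, image, command are assumptions).
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp-migrations:v2.4.0
          command: ["./migrate", "up"]
```

Pair hooks like this with a separate, versioned down-migration path so an application rollback can be accompanied by a schema rollback when needed.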
🔒 Concurrent Deploy Protection
# Two engineers deploy at the same time:
# Engineer A:
helm upgrade myapp . -n production
# (starts processing...)
# Engineer B:
helm upgrade myapp . -n production
# Error: UPGRADE FAILED: another operation
# (install/upgrade/rollback) is in progress

# Helm records release state in K8s Secrets and refuses to start
# a second operation while one is pending — only one operation
# can run per release at a time.

# In CI/CD, prevent concurrent deploys.
# GitHub Actions: use concurrency groups
#   concurrency:
#     group: deploy-production
#     cancel-in-progress: false

# If a release is genuinely stuck (not just slow):
helm history myapp -n production   # Find the stuck revision
# If rollback works:
helm rollback myapp <last-good-rev> -n production
# If even rollback fails (rare):
kubectl delete secret sh.helm.release.v1.myapp.v<stuck> -n production
📝 Summary
- helm install creates a release; helm upgrade updates it; helm rollback reverts it
- helm upgrade --install is idempotent — the CI/CD best practice
- Use --atomic for auto-rollback on failure, --wait for readiness checking
- helm history / helm get values for auditing and debugging
- Always pass values files explicitly — don't rely on defaults in production