In a multi-team environment, give each team a Kubernetes Role scoped to their namespace. Since Helm uses kubeconfig, team A can only helm install/upgrade in their namespace — not team B's. This is a major Helm 3 security improvement over Helm 2's cluster-wide Tiller.
Helm Architecture
How Helm 3 works under the hood — the client, charts, releases, repositories, and the Kubernetes API interaction.
🧒 Simple Explanation (ELI5)
Think of Helm like a pizza delivery app. The Helm CLI (client) is the app on your phone. A chart is the pizza recipe. A repository is the menu of available pizzas. When you order (install), the app sends the order to the kitchen (Kubernetes API), and what arrives at your door is a release — a running instance of that recipe.
🔧 Technical Explanation
Helm 3 Architecture (No Tiller)
Helm 3 is a client-only architecture. The Helm CLI communicates directly with the Kubernetes API server using your kubeconfig credentials. There's no server-side component.
What Happens During helm install
- Load chart — Helm reads the chart (local directory, .tgz, or from a repo)
- Merge values — Default values.yaml + any overrides (--set or -f)
- Render templates — Go template engine processes templates/ with merged values → produces plain Kubernetes YAML
- Validate — Optionally validates against JSON Schema (values.schema.json)
- Send to K8s — Rendered manifests are sent to the Kubernetes API server
- Store release — Release metadata (values, manifest, version) is stored as a Secret in the release namespace
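The install flow above can be walked through step by step from the CLI. A sketch, assuming the public Bitnami repo has been added (chart and release names are illustrative):

```shell
# 0. Make the repo available locally
helm repo add bitnami https://charts.bitnami.com/bitnami

# 1-2. Inspect the chart and its default values before installing
helm show chart bitnami/nginx
helm show values bitnami/nginx

# 3. Render templates locally to see the plain YAML Helm would send
helm template my-nginx bitnami/nginx --set replicaCount=2 | head -40

# 4-5. Dry-run against the cluster: server validates, nothing is created
helm install my-nginx bitnami/nginx --set replicaCount=2 --dry-run

# 6. Real install; afterwards the release state lives in a Secret
helm install my-nginx bitnami/nginx --set replicaCount=2
kubectl get secret sh.helm.release.v1.my-nginx.v1 -n default
```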
Key Components
| Component | Description |
|---|---|
| Helm CLI | The command-line client. Runs on your machine or CI/CD runner. All logic lives here in Helm 3. |
| Chart | The package. Contains templates, values, metadata, and optionally dependencies. |
| Release | A named instance of a chart running in a cluster. Has revision history for rollbacks. |
| Repository | An HTTP server hosting chart archives (index.yaml + .tgz files). Or an OCI registry. |
| Release Secret | Kubernetes Secret storing the release state (values, rendered manifest, status, revision number). |
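You can open a release Secret yourself to see what Helm stores. The `.data.release` payload is a base64-encoded, gzip-compressed JSON blob (on top of Kubernetes' own base64 Secret encoding, hence the double decode); this is an internal format and may change between Helm versions. Release name `myapp` is illustrative, and `jq` is assumed to be installed:

```shell
# Inspect the raw state Helm stores for revision 1 of a release
kubectl get secret sh.helm.release.v1.myapp.v1 -n default \
  -o jsonpath='{.data.release}' \
  | base64 -d | base64 -d | gunzip \
  | jq '{name: .name, version: .version, status: .info.status}'
```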
Helm 2 vs Helm 3
| Feature | Helm 2 | Helm 3 |
|---|---|---|
| Architecture | Client + Tiller (server) | Client-only |
| Security | Tiller had cluster-admin — risk | Uses kubeconfig RBAC |
| Release scope | Cluster-wide | Namespace-scoped |
| Release storage | ConfigMaps via Tiller | Secrets (default) |
| Upgrade strategy | 2-way merge | 3-way strategic merge |
⌨️ Hands-on
```bash
# Check Helm version (confirm v3)
helm version
# version.BuildInfo{Version:"v3.x.x", ...}

# See where Helm stores data locally
helm env

# List repositories
helm repo list

# See release secrets in a namespace
kubectl get secrets -l owner=helm -n default
```
🔐 Helm & Kubernetes RBAC
Helm inherits your kubectl RBAC permissions. This has direct implications for what Helm can do:
```bash
# Check what YOUR user can do (Helm inherits these permissions)
kubectl auth can-i create deployments -n production
kubectl auth can-i create secrets -n production
kubectl auth can-i list secrets -n production   # Needed for helm list/status

# If a helm install fails with "forbidden" errors:
# your kubeconfig user lacks the required K8s RBAC permissions.
# Example error:
#   Error: INSTALLATION FAILED: deployments.apps is forbidden:
#   User "dev-user" cannot create resource "deployments"
#   in API group "apps" in the namespace "production"

# Minimum RBAC for Helm to work in a namespace:
# - get/list/create/update/delete on Secrets (release storage)
# - get/list/create/update/delete on all resource types the chart creates
#   (Deployments, Services, ConfigMaps, etc.)
```
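A minimal namespace-scoped Role that satisfies those requirements might look like the sketch below. The rule list is illustrative, not exhaustive: extend it to cover whatever resource types your charts actually create, and the user name `dev-user` is a placeholder.

```shell
kubectl apply -n production -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-deployer
rules:
  # Release storage: Helm reads/writes its state Secrets here
  - apiGroups: [""]
    resources: ["secrets", "configmaps", "services"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Resource types the team's charts create (extend as needed)
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
EOF

kubectl create rolebinding helm-deployer --role=helm-deployer \
  --user=dev-user -n production
```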
🌐 Multi-Cluster Deployment
```bash
# List available clusters in your kubeconfig
kubectl config get-contexts

# Deploy to a specific cluster without switching context
helm upgrade --install myapp ./chart \
  --kube-context prod-us-east \
  -f values-prod.yaml -n production

helm upgrade --install myapp ./chart \
  --kube-context prod-eu-west \
  -f values-prod-eu.yaml -n production

# Verify deployments across clusters
helm list --kube-context prod-us-east -n production
helm list --kube-context prod-eu-west -n production

# In CI/CD: merge multiple kubeconfigs
export KUBECONFIG=~/.kube/config-us:~/.kube/config-eu
kubectl config get-contexts   # Shows both clusters
```
🐛 Debugging Scenarios
Scenario 1: Helm uses wrong cluster
```bash
# Helm uses your current kubeconfig context
kubectl config current-context

# If it shows the wrong cluster, switch:
kubectl config use-context my-prod-cluster

# Or set per-command:
helm list --kube-context my-prod-cluster
```
Scenario 2: Can't find release that was installed
```bash
# Releases are namespace-scoped in Helm 3!
helm list              # Only shows current namespace
helm list -A           # Show ALL namespaces
helm list -n staging   # Check a specific namespace
```
🎯 Interview Questions
Beginner
**Q: What is the Helm client?**

The Helm client is the CLI tool that users interact with. In Helm 3, it's the only component — there's no server. It handles chart loading, template rendering, and value merging, and communicates with the Kubernetes API using kubeconfig credentials.
**Q: What was Tiller, and why was it removed in Helm 3?**

Tiller was a server-side component in Helm 2 that ran inside the cluster. It processed Helm commands and managed releases. It was removed in Helm 3 because: 1) It required cluster-admin privileges — a major security risk. 2) It was a single point of failure. 3) It complicated RBAC since all users shared Tiller's permissions. Helm 3 moved all logic to the client.
**Q: What is a Helm release?**

A release is a running instance of a Helm chart in a Kubernetes cluster. It has a unique name (within its namespace), a revision number that increments on each upgrade/rollback, and stored metadata (values used, rendered manifests). You can have multiple releases of the same chart with different names and values.
**Q: What is a Helm repository?**

A Helm repository is a location where packaged charts (.tgz files) are stored and served, along with an index.yaml that lists all available charts and versions. Repositories can be HTTP servers, GitHub Pages, cloud storage (S3, GCS), ChartMuseum, or OCI-compliant registries (Docker Hub, ACR, ECR, GHCR).
**Q: How does Helm authenticate to a Kubernetes cluster?**

Helm uses the same kubeconfig file as kubectl. It reads ~/.kube/config (or the path set in the KUBECONFIG env var) and uses the current context's credentials. This means Helm inherits the same RBAC permissions as the user's kubectl — no separate auth is needed.
Intermediate
**Q: Where does Helm 3 store release data?**

By default, Helm 3 stores release data as Kubernetes Secrets of type helm.sh/release.v1 in the release's namespace. Each revision is a separate Secret. The Secret contains the chart metadata, the values used, the rendered manifest, and the release status. You can switch to ConfigMap storage with HELM_DRIVER=configmap.
**Q: What is the 3-way strategic merge in Helm 3?**

When upgrading, Helm 3 compares three things: the old manifest (previous revision), the live state (what's actually in the cluster), and the new manifest (from the upgrade). This means manual changes made with kubectl between Helm operations are preserved if they don't conflict. Helm 2 only did a 2-way merge (old vs new), potentially overwriting manual changes.
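The 3-way merge can be seen in a quick experiment. A sketch, assuming an existing release named `myapp` whose chart templates don't set a `team` label (names and the values key are illustrative):

```shell
# Add a label manually, outside of Helm
kubectl label deployment myapp team=payments -n production

# Upgrade the release; the chart's new manifest says nothing about this
# label, so the 3-way merge should leave it in place
helm upgrade myapp ./chart -n production --set image.tag=1.2.3

# The manually added label survives the upgrade (no conflict with the
# new manifest); a conflicting field would have been overwritten
kubectl get deployment myapp -n production \
  -o jsonpath='{.metadata.labels.team}'
```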
**Q: How does Helm work with OCI registries?**

Helm supports storing charts in OCI-compliant registries (Docker Hub, ACR, ECR, GHCR) as OCI artifacts. Instead of traditional Helm repos with index.yaml, you use helm push and helm pull with oci://registry/chart references. This leverages existing container registry infrastructure and auth.
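The OCI workflow in practice (GA since Helm 3.8). The registry URL and chart name below are placeholders; authentication goes through your registry credentials:

```shell
# Package a chart, producing mychart-<version>.tgz
helm package ./mychart

# Log in to the registry and push the chart archive as an OCI artifact
helm registry login registry.example.com
helm push mychart-0.1.0.tgz oci://registry.example.com/charts

# Install straight from the registry: no helm repo add, no index.yaml
helm install myapp oci://registry.example.com/charts/mychart --version 0.1.0
```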
**Q: What happens during helm upgrade?**

1) Helm loads the chart and merges the new values. 2) Renders the new templates. 3) Compares the old manifest, live state, and new manifest (3-way merge). 4) Computes a diff. 5) Applies only the changes to the cluster. 6) Creates a new release revision. 7) Stores the new revision as a Secret. On failure, the revision is marked "failed" and can be rolled back.
**Q: What is the difference between helm template and helm install --dry-run?**

helm template: pure client-side rendering. No cluster connection needed. Outputs rendered YAML. Cannot validate against the cluster (e.g., CRD existence). helm install --dry-run: connects to the cluster and validates the rendered manifests against the API server, but doesn't create resources. Better for catching server-side validation errors.
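Side by side (release and file names are illustrative; the `--dry-run=server` mode, which performs full server-side validation, requires Helm 3.13+):

```shell
# Client-side only: renders YAML, never contacts the cluster.
# Useful in CI for linting/diffing, or for piping into other tools.
helm template myapp ./chart -f values-prod.yaml > rendered.yaml

# Server-side dry run: templates are rendered AND validated by the
# API server (catches unknown fields, missing CRDs), nothing is created
helm install myapp ./chart -f values-prod.yaml --dry-run=server
```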
Scenario-Based
**Q: You ran helm install successfully, but helm list shows nothing. What happened?**

Helm 3 releases are namespace-scoped, and helm list defaults to the current kubectl namespace. Run helm list -n staging for a specific namespace, or helm list -A to see all namespaces. This is a common surprise for teams coming from Helm 2, where releases were cluster-wide.
**Q: Someone edited a Helm-managed resource with kubectl. What happens on the next upgrade?**

Helm 3's 3-way merge compares the old manifest, live state, and new manifest. If the manual change doesn't conflict with the new manifest's changes, it's preserved. If it does conflict, the new manifest wins. Best practice: never manually edit Helm-managed resources — all changes should go through helm upgrade with updated values.
**Q: How do you deploy the same chart to multiple clusters in CI/CD?**

Helm uses kubeconfig for cluster access. In CI/CD: 1) Set the KUBECONFIG env var or use --kube-context per cluster. 2) Use separate values files per cluster/environment. 3) Run helm upgrade --install --kube-context prod-cluster -f values-prod.yaml. Tools like Helmfile or Argo CD can manage multi-cluster deployments declaratively.
**Q: How do you see what values and manifests a deployed release is using?**

helm get values &lt;release&gt; -n production shows user-supplied values. helm get values &lt;release&gt; --all shows all values (defaults + overrides). helm get manifest &lt;release&gt; shows the actual rendered YAML that was applied. helm history &lt;release&gt; shows all revisions. All of this is read from the release Secrets.
**Q: How would you migrate a cluster from Helm 2 to Helm 3?**

1) Migrate releases: use the helm-2to3 plugin to convert release data from ConfigMaps to Secrets. 2) Release names become namespace-scoped — ensure no cross-namespace name collisions. 3) Test with helm 2to3 convert in staging first. 4) Remove Tiller: kubectl delete deployment tiller-deploy -n kube-system (or helm 2to3 cleanup). 5) Update CI/CD scripts (no --tiller-namespace flags).
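The migration steps map to the official helm-2to3 plugin like this (release name `myrelease` is illustrative; run the commands with the Helm 3 binary):

```shell
# Install the official migration plugin into Helm 3
helm plugin install https://github.com/helm/helm-2to3

# Migrate local Helm 2 config and data directories to Helm 3 locations
helm 2to3 move config

# Dry-run first, then convert a release's stored state to Helm 3 format
helm 2to3 convert myrelease --dry-run
helm 2to3 convert myrelease

# Once every release is converted: remove Helm 2 data and Tiller
helm 2to3 cleanup
```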
🌍 Real-World Use Case
A platform team at a SaaS company migrated from Helm 2 to Helm 3 across 5 clusters:
- Removed Tiller from all clusters, eliminating a cluster-admin security risk
- Migrated 200+ releases using the helm-2to3 plugin with zero downtime
- Namespace-scoped releases enabled proper RBAC — dev teams could only manage their own releases
- 3-way merge resolved a long-standing issue where manual hotfixes were overwritten by Helm upgrades
🌍 Real-World Scenario: Release Secret Investigation
A DevOps engineer gets paged at 2 AM because helm upgrade fails with "another operation in progress." Here's the investigation that maps directly to Helm architecture:
```bash
# Step 1: Check release status (reads from K8s Secrets)
helm status myapp -n production
# STATUS: pending-upgrade   ← stuck!

# Step 2: See all revisions (each is a K8s Secret)
helm history myapp -n production
# REVISION  STATUS           DESCRIPTION
# 5         deployed         Upgrade complete
# 6         pending-upgrade  Preparing upgrade

# Step 3: Look at the actual K8s Secrets
kubectl get secrets -n production -l owner=helm,name=myapp
# NAME                          TYPE                 AGE
# sh.helm.release.v1.myapp.v5   helm.sh/release.v1   2d
# sh.helm.release.v1.myapp.v6   helm.sh/release.v1   3m   ← stuck

# Step 4: Fix by rolling back (creates revision 7 from data in revision 5)
helm rollback myapp 5 -n production

# Step 5: Verify K8s resources are healthy
kubectl get pods -n production
kubectl get events -n production --sort-by='.lastTimestamp' | tail -5
```
This scenario shows how Helm's architecture (Secrets-based storage, revision tracking) directly maps to Kubernetes primitives and enables recovery.
📝 Summary
- Helm 3 is client-only — no Tiller, uses kubeconfig for auth
- `helm install` flow: load chart → merge values → render templates → send to K8s API → store release
- Releases are namespace-scoped and stored as Kubernetes Secrets
- 3-way merge during upgrades preserves non-conflicting manual changes
- Charts can live in HTTP repos or OCI registries