Services
Expose your applications to the network. Understand ClusterIP, NodePort, LoadBalancer, and service discovery.
🧒 Simple Explanation (ELI5)
Pods have IP addresses, but those IPs change every time a pod restarts or gets rescheduled. Imagine if your phone number changed every time you restarted your phone — nobody could call you. A Service is like a permanent phone number that always routes to the right person, even if they switch phones. It provides a stable endpoint for a set of pods.
🤔 Why Do We Need Services?
- Stable endpoint: Pod IPs are ephemeral; Services provide a fixed IP and DNS name
- Load balancing: Distributes traffic across multiple pod replicas
- Service discovery: Other pods find services via DNS (`my-service.my-namespace.svc.cluster.local`)
- Decoupling: Consumers don't need to track individual pod IPs
🔧 Technical Explanation
Service Types
| Type | Accessible From | How It Works | Use Case |
|---|---|---|---|
| ClusterIP (default) | Inside cluster only | Virtual IP accessible only within the cluster. kube-proxy routes traffic. | Internal service-to-service communication |
| NodePort | External (via node IP:port) | Opens a port (30000-32767) on every node. Routes to ClusterIP. | Dev/testing, on-prem without LB |
| LoadBalancer | External (via cloud LB) | Provisions a cloud load balancer that routes to NodePort → ClusterIP. | Production external traffic (cloud) |
| ExternalName | Inside cluster | CNAME record pointing to an external DNS name. No proxying. | Mapping external services (RDS, external API) |
How Services Route Traffic
Services use label selectors to find target pods. The Endpoints controller watches for pods matching the selector and creates an Endpoints object with their IPs. kube-proxy programs iptables/IPVS rules to route traffic from the Service's ClusterIP to pod IPs.
📊 Visual: Service Types
⌨️ Hands-on: Working with Services
ClusterIP Service
```yaml
# clusterip-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80          # Service port
      targetPort: 8080  # Pod port
```
NodePort Service
```yaml
# nodeport-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30080   # Accessible on every node at this port
```
LoadBalancer Service
```yaml
# lb-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```
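ExternalName Service

The fourth type from the table, ExternalName, needs no selector or ports at all. A minimal sketch, reusing the `my-db-service` example from below (the external hostname is illustrative):

```yaml
# externalname-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db-service
spec:
  type: ExternalName
  # CNAME target; no selector, no Endpoints, no proxying
  externalName: my-database.example.com
```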
```bash
# Apply and verify
kubectl apply -f clusterip-svc.yaml

# Check service
kubectl get svc web-service
kubectl describe svc web-service

# Check endpoints (pod IPs the service routes to)
kubectl get endpoints web-service

# Test from inside the cluster
kubectl run test --image=busybox --rm -it -- wget -qO- http://web-service

# DNS lookup from inside cluster
kubectl run test --image=busybox --rm -it -- nslookup web-service

# For LoadBalancer, get external IP
kubectl get svc web-lb -w   # wait for EXTERNAL-IP
```
🐛 Debugging Scenarios
Scenario 1: Service Not Reachable
Symptom: curl http://my-service from another pod returns Connection refused or times out.
```bash
# Step 1: Does the service exist?
kubectl get svc my-service

# Step 2: Are there endpoints? (CRITICAL)
kubectl get endpoints my-service
# Empty endpoints = selector doesn't match any pods!

# Step 3: Check selector vs pod labels
kubectl describe svc my-service | grep Selector
kubectl get pods --show-labels

# Step 4: Are the target pods ready?
kubectl get pods -l app=my-app
# Pods must pass readiness probes to be in endpoints

# Step 5: Is targetPort correct?
kubectl describe svc my-service | grep TargetPort
kubectl exec pod-name -- netstat -tlnp   # is the app listening on that port?

# Step 6: Test DNS resolution
kubectl run test --image=busybox --rm -it -- nslookup my-service
```
Scenario 2: LoadBalancer External IP is "Pending"
Cause: Cloud controller hasn't provisioned the LB yet, or you're on bare metal/minikube.
- On cloud: wait a few minutes, check cloud provider LB dashboard
- On minikube: use `minikube tunnel` to expose LoadBalancer services
- On bare metal: use MetalLB or switch to NodePort/Ingress
Scenario 3: Service Returns Responses from the Wrong Pod
Cause: Label selector is too broad — matching pods from another deployment.
Fix: Make selectors specific. Use unique label combinations.
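As an illustrative sketch (the added `tier` label is an assumption), requiring two labels means pods from another deployment that only carry `app: web-app` no longer match:

```yaml
spec:
  selector:
    app: web-app
    tier: frontend   # both labels must match for a pod to be selected
```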
🎯 Interview Questions
Beginner
**What is a Service, and why do we need one?**
A Service is an abstraction that provides a stable network endpoint (IP + DNS name) for a set of pods. Since pod IPs are ephemeral, Services give consumers a permanent address and load-balance traffic across healthy pod replicas.

**What is the difference between ClusterIP and NodePort?**
ClusterIP: a virtual IP accessible only inside the cluster; the default type, used for internal communication. NodePort: opens a static port (30000-32767) on every node, accessible externally via nodeIP:nodePort. A NodePort service includes a ClusterIP automatically.

**How do pods discover services?**
Kubernetes runs a DNS server (CoreDNS) in the cluster. Every Service gets a DNS record: service-name.namespace.svc.cluster.local. Pods can reach services by name; within the same namespace, just the service name works. Environment variables are also injected, but DNS is the recommended approach.

**What does a LoadBalancer service do?**
A LoadBalancer service provisions an external load balancer from the cloud provider (AWS ELB, Azure LB, GCP LB). It gets a public IP that routes traffic through NodePort → ClusterIP → pods. It is the easiest way to expose a service externally on cloud platforms.

**What are Endpoints?**
Endpoints are the IP addresses of pods that match a Service's selector. The Endpoints controller automatically creates and updates Endpoints objects as pods are created, deleted, or change readiness status. If a Service has no Endpoints, traffic goes nowhere — this is the #1 thing to check when a Service isn't working.
Intermediate
**What is an ExternalName service?**
ExternalName maps a Service to a DNS name (CNAME record) — like my-database.example.com. No proxying or port forwarding. Pods in the cluster can access my-db-service and it resolves to the external DNS name. Useful for mapping external services (RDS, external APIs) into the cluster's service discovery.

**Explain port vs targetPort vs nodePort.**
port: the port the Service listens on (ClusterIP:port). targetPort: the port the container is actually listening on. nodePort: the port opened on every node (only for NodePort/LoadBalancer types). Traffic flow: nodePort → port → targetPort.

**How does kube-proxy route Service traffic?**
kube-proxy watches the API server for Service and Endpoints changes. In iptables mode (the default): it creates iptables rules that intercept traffic to the Service IP and redirect it to pod IPs (random selection). In IPVS mode: it programs IPVS rules with more load-balancing algorithms (round-robin, least connections). kube-proxy itself doesn't proxy traffic — it programs the Linux kernel to do it.

**What is a headless service?**
A Service with clusterIP: None. Instead of providing a single virtual IP, DNS returns individual pod IPs directly. Used with StatefulSets where each pod needs to be addressed individually (e.g., database replicas). DNS returns A records for each pod: pod-0.my-service.namespace.svc.cluster.local.

**What is sessionAffinity, and when would you use it?**
sessionAffinity: ClientIP ensures all requests from the same client IP go to the same pod (sticky sessions). The default is None (random distribution). The timeout is configurable. Use it when application state is stored in-memory and not shared across replicas. The better approach: make apps stateless and use external session stores (Redis).
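A minimal headless-service sketch (the service name, labels, and port are illustrative):

```yaml
# headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None      # headless: DNS returns the pod IPs directly
  selector:
    app: postgres
  ports:
    - port: 5432
```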
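A sketch of the relevant spec fragment (the timeout value is illustrative; the fields themselves are standard):

```yaml
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600   # default is 10800 (3 hours)
```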
Scenario-Based
**A service intermittently returns 502 errors. How do you debug it?**
1) Check endpoints: kubectl get endpoints my-svc — are unhealthy pods in the list? 2) Check readiness probes — if poorly configured, failing pods still receive traffic. 3) Check if pods are restarting: kubectl get pods. 4) During rolling updates, new pods might be added to endpoints before fully ready. 5) Check for resource limits causing throttling or OOM. 6) Review pod logs around the 502 timestamps.

**A Service has endpoints, but requests still fail. What are possible causes?**
1) targetPort doesn't match the container's listening port. 2) Network policies blocking traffic. 3) The app is listening on 127.0.0.1 instead of 0.0.0.0 — it rejects non-localhost connections. 4) kube-proxy not running on the node. 5) Firewall rules blocking traffic between nodes. Test: kubectl exec into a pod and curl localhost:targetPort to confirm the app works locally.

**How does a pod call a service in a different namespace?**
Use cross-namespace DNS: http://service-c.namespace-c.svc.cluster.local:port. Short form: http://service-c.namespace-c:port. This works out of the box with ClusterIP services. No special configuration is needed unless NetworkPolicies restrict cross-namespace traffic.

**You have many HTTP services. How do you expose them without one cloud LB each?**
Use a single Ingress controller (NGINX, Traefik) with one LoadBalancer service. Define Ingress rules to route traffic based on hostname or path to different ClusterIP services. This reduces to one cloud LB for all HTTP services. For non-HTTP (TCP/UDP), consider a single LoadBalancer with multiple ports, or a service mesh.

**Users see downtime on every deployment even though a Service fronts the pods. Why?**
This happens with the Recreate deployment strategy — all old pods are killed before new ones start. Fix: 1) Switch to the RollingUpdate strategy. 2) If you must use Recreate, accept the brief downtime. 3) Ensure readiness probes are configured so new pods only receive traffic once ready. 4) Check maxUnavailable settings if using RollingUpdate — it shouldn't allow all pods to be down simultaneously.
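A sketch of such an Ingress rule (hostname, path, and service name are illustrative; assumes an NGINX ingress class is installed):

```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service   # a ClusterIP service
                port:
                  number: 80
```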
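For the readiness-probe point, a sketch of the pod template's container spec (the `/healthz` path and timings are illustrative); the pod is only added to the Service's endpoints once this probe passes:

```yaml
# fragment of the container spec in the pod template
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```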
🌍 Real-World Use Case
A microservices-based e-commerce platform with 40 services uses this pattern:
- Internal APIs: ClusterIP services for all service-to-service communication
- Public API gateway: Single LoadBalancer service for the API gateway
- Ingress: NGINX Ingress controller routes to individual services by path
- Database access: Headless service for PostgreSQL StatefulSet (primary + read replicas)
- External dependencies: ExternalName service mapping to managed Redis and RDS endpoints
📝 Summary
- Services provide stable endpoints for ephemeral pods
- ClusterIP (internal) → NodePort (node-level) → LoadBalancer (cloud LB) — each builds on the previous
- Services use label selectors to find target pods via Endpoints
- DNS discovery: `service.namespace.svc.cluster.local`
- Always check `kubectl get endpoints` first when debugging service issues