Intermediate Lesson 6 of 14

Services

Expose your applications to the network. Understand ClusterIP, NodePort, LoadBalancer, and service discovery.

🧒 Simple Explanation (ELI5)

Pods have IP addresses, but those IPs change every time a pod restarts or gets rescheduled. Imagine if your phone number changed every time you restarted your phone — nobody could call you. A Service is like a permanent phone number that always routes to the right person, even if they switch phones. It provides a stable endpoint for a set of pods.

🤔 Why Do We Need Services?

Pod IPs are ephemeral: replicas scale up and down, and rescheduled pods get new addresses. Clients need a stable address plus load balancing across whichever replicas are currently healthy — exactly what a Service provides.

🔧 Technical Explanation

Service Types

| Type | Accessible From | How It Works | Use Case |
|------|-----------------|--------------|----------|
| ClusterIP (default) | Inside cluster only | Virtual IP accessible only within the cluster; kube-proxy routes traffic. | Internal service-to-service communication |
| NodePort | External (via node IP:port) | Opens a port (30000-32767) on every node; routes to a ClusterIP. | Dev/testing, on-prem without a load balancer |
| LoadBalancer | External (via cloud LB) | Provisions a cloud load balancer that routes to NodePort → ClusterIP. | Production external traffic (cloud) |
| ExternalName | Inside cluster | CNAME record pointing to an external DNS name; no proxying. | Mapping external services (RDS, external APIs) |

How Services Route Traffic

Services use label selectors to find target pods. The Endpoints controller watches for pods matching the selector and creates an Endpoints object with their IPs. kube-proxy programs iptables/IPVS rules to route traffic from the Service's ClusterIP to pod IPs.
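
To make this concrete, here is a sketch of the Endpoints object the controller would maintain for the `web-service` example in the hands-on section below. The pod IPs are illustrative — this object is created and updated automatically, and you normally never write it by hand:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: web-service        # always matches the Service name
subsets:
  - addresses:             # only pods passing readiness probes appear here
      - ip: 10.244.1.5
      - ip: 10.244.2.8
    ports:
      - port: 8080         # the Service's targetPort
        protocol: TCP
```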

📊 Visual: Service Types

Service Routing Architecture

User / Browser (external)
        |
        v
Cloud LB, public IP   (LoadBalancer)
        |
        v
Node:30080            (NodePort)
        |
        v
10.96.0.10:80         (ClusterIP)
        |
        +--> Pod A
        +--> Pod B
        +--> Pod C

⌨️ Hands-on: Working with Services

ClusterIP Service

yaml
# clusterip-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80          # Service port
      targetPort: 8080   # Pod port

NodePort Service

yaml
# nodeport-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 30080    # Accessible on every node at this port

LoadBalancer Service

yaml
# lb-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

bash
# Apply and verify
kubectl apply -f clusterip-svc.yaml

# Check service
kubectl get svc web-service
kubectl describe svc web-service

# Check endpoints (pod IPs the service routes to)
kubectl get endpoints web-service

# Test from inside the cluster
kubectl run test --image=busybox --rm -it -- wget -qO- http://web-service

# DNS lookup from inside cluster
kubectl run test --image=busybox --rm -it -- nslookup web-service

# For LoadBalancer, get external IP
kubectl get svc web-lb -w  # wait for EXTERNAL-IP

💡 DNS Pattern

Every Service gets a DNS entry: <service-name>.<namespace>.svc.cluster.local. Within the same namespace, just use the service name. Cross-namespace: service-name.other-namespace.
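
The pattern above can be sketched as a tiny shell helper. The names are illustrative, and `cluster.local` is the default cluster domain (it is configurable per cluster):

```shell
# Build the fully qualified DNS name for a Service.
# Usage: service_fqdn <service> [namespace] [cluster-domain]
service_fqdn() {
  local svc="$1" ns="${2:-default}" domain="${3:-cluster.local}"
  echo "${svc}.${ns}.svc.${domain}"
}

service_fqdn web-service        # web-service.default.svc.cluster.local
service_fqdn web-service shop   # web-service.shop.svc.cluster.local
```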

🐛 Debugging Scenarios

Scenario 1: Service Not Reachable

Symptom: curl http://my-service from another pod returns Connection refused or times out.

bash
# Step 1: Does the service exist?
kubectl get svc my-service

# Step 2: Are there endpoints? (CRITICAL)
kubectl get endpoints my-service
# Empty endpoints = selector doesn't match any pods!

# Step 3: Check selector vs pod labels
kubectl describe svc my-service | grep Selector
kubectl get pods --show-labels

# Step 4: Are the target pods ready?
kubectl get pods -l app=my-app
# Pods must pass readiness probes to be in endpoints

# Step 5: Is targetPort correct?
kubectl describe svc my-service | grep TargetPort
kubectl exec pod-name -- netstat -tlnp  # is the app listening on that port? (use ss -tlnp if netstat is missing)

# Step 6: Test DNS resolution
kubectl run test --image=busybox --rm -it -- nslookup my-service

Scenario 2: LoadBalancer External IP is "Pending"

Cause: The cloud controller hasn't provisioned the LB yet, or you're on bare metal/minikube where nothing exists to fulfill the request.

Fix: On a cloud platform, wait and check the cloud-controller-manager logs. On minikube, run minikube tunnel. On bare metal, install a load-balancer implementation such as MetalLB, or fall back to NodePort or an Ingress.

Scenario 3: Service Returns Responses from the Wrong Pod

Cause: Label selector is too broad — matching pods from another deployment.

Fix: Make selectors specific. Use unique label combinations.

🎯 Interview Questions

Beginner

Q: What is a Kubernetes Service?

A Service is an abstraction that provides a stable network endpoint (IP + DNS name) for a set of pods. Since pod IPs are ephemeral, Services give consumers a permanent address and load balance traffic across healthy pod replicas.

Q: What is the difference between ClusterIP and NodePort?

ClusterIP: Virtual IP accessible only inside the cluster. Default type. Used for internal communication. NodePort: Opens a static port (30000-32767) on every node. Accessible externally via nodeIP:nodePort. NodePort includes a ClusterIP automatically.

Q: How does service discovery work in Kubernetes?

Kubernetes runs a DNS server (CoreDNS) in the cluster. Every Service gets a DNS record: service-name.namespace.svc.cluster.local. Pods can reach services by name. Within the same namespace, just the service name works. Environment variables are also injected, but DNS is the recommended approach.

Q: What is a LoadBalancer service?

A LoadBalancer service provisions an external load balancer from the cloud provider (AWS ELB, Azure LB, GCP LB). It gets a public IP that routes traffic to the NodePort → ClusterIP → pods. The easiest way to expose a service externally on cloud platforms.

Q: What are Endpoints in Kubernetes?

Endpoints are the IP addresses of pods that match a Service's selector. The Endpoints controller automatically creates and updates Endpoints objects as pods are created, deleted, or change readiness status. If a Service has no Endpoints, traffic goes nowhere — this is the #1 thing to check when a Service isn't working.

Intermediate

Q: What is an ExternalName service?

ExternalName maps a Service to a DNS name (CNAME record) — like my-database.example.com. No proxying or port forwarding. Pods in the cluster can access my-db-service and it resolves to the external DNS. Useful for mapping external services (RDS, external APIs) into the cluster's service discovery.
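
A minimal sketch, using the hypothetical names from the answer above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db-service
spec:
  type: ExternalName
  externalName: my-database.example.com   # CNAME target; no selector, no proxying
```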

Q: What's the difference between port, targetPort, and nodePort?

port: The port the Service listens on (ClusterIP:port). targetPort: The port the container is actually listening on. nodePort: The port opened on every node (only for NodePort/LoadBalancer types). Traffic flow: nodePort → port → targetPort.

Q: How does kube-proxy implement Services?

kube-proxy watches the API server for Service and Endpoints changes. In iptables mode (default): creates iptables rules that intercept traffic to the Service IP and redirect to pod IPs (random selection). In IPVS mode: programs IPVS rules with more load-balancing algorithms (round-robin, least connections). kube-proxy itself doesn't proxy traffic — it programs the Linux kernel to do it.
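
The iptables-mode behavior can be caricatured in shell: each new connection picks one backend uniformly at random. The pod addresses are made up, and real kube-proxy does this in the kernel via iptables statistic rules, not in userspace:

```shell
# Toy model of iptables-mode load balancing: uniform random backend choice.
backends=("10.244.1.5:8080" "10.244.2.8:8080" "10.244.3.2:8080")

pick_backend() {
  echo "${backends[RANDOM % ${#backends[@]}]}"
}

pick_backend   # prints one of the three pod addresses
```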

Q: What is a headless Service?

A Service with clusterIP: None. Instead of providing a single virtual IP, DNS returns individual pod IPs directly. Used with StatefulSets where each pod needs to be addressed individually (e.g., database replicas). DNS returns A records for each pod: pod-0.my-service.namespace.svc.cluster.local.
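
A headless Service sketch (app label and port are illustrative of a database StatefulSet):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None          # headless: DNS returns individual pod IPs, no virtual IP
  selector:
    app: my-db
  ports:
    - port: 5432
```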

Q: What is sessionAffinity in Services?

sessionAffinity: ClientIP ensures all requests from the same client IP go to the same pod (sticky sessions). Default is None (random distribution). Timeout is configurable. Use it when application state is stored in-memory and not shared across replicas. Better approach: make apps stateless and use external session stores (Redis).
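
A sketch of a sticky Service (the name is made up; 10800 seconds is the default affinity timeout):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sticky-web
spec:
  selector:
    app: web-app
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # how long a client IP stays pinned to the same pod
  ports:
    - port: 80
      targetPort: 8080
```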

Scenario-Based

Q: Users report intermittent 502 errors from your Service. What do you investigate?

1) Check endpoints: kubectl get endpoints my-svc — are unhealthy pods in the list? 2) Check readiness probes — if poorly configured, failing pods still receive traffic. 3) Check if pods are restarting: kubectl get pods. 4) During rolling updates, new pods might be added to endpoints before fully ready. 5) Check for resource limits causing throttling or OOM. 6) Review pod logs around the 502 timestamps.
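
Since endpoint membership is driven by readiness, a probe fragment like this (path, image, and timings are illustrative) keeps not-yet-ready pods out of the Service:

```yaml
# Pod template fragment — only pods passing this probe appear in the endpoints
containers:
  - name: web
    image: my-app:1.0
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
```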

Q: Your Service has endpoints but no traffic reaches the pods. What could be wrong?

1) targetPort doesn't match the container's listening port. 2) Network policies blocking traffic. 3) The app is listening on 127.0.0.1 instead of 0.0.0.0 — it rejects non-localhost connections. 4) kube-proxy not running on the node. 5) Firewall rules blocking traffic between nodes. Test: kubectl exec into a pod and curl localhost:targetPort to confirm the app works locally.

Q: You have microservices A, B, C in different namespaces. How does A call C?

Use cross-namespace DNS: http://service-c.namespace-c.svc.cluster.local:port. Short form: http://service-c.namespace-c:port. This works out of the box with ClusterIP services. No special configuration needed unless NetworkPolicies restrict cross-namespace traffic.

Q: Each LoadBalancer Service gets its own cloud LB and public IP. With 20 services, costs are high. What's the alternative?

Use a single Ingress controller (NGINX, Traefik) with one LoadBalancer service. Define Ingress rules to route traffic based on hostname or path to different ClusterIP services. This reduces to one cloud LB for all HTTP services. For non-HTTP (TCP/UDP), consider a single LoadBalancer with multiple ports, or a service mesh.
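
A sketch of such an Ingress, assuming an NGINX Ingress controller is installed (hostnames and service names are made up):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
spec:
  ingressClassName: nginx        # must match the installed controller's class
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /cart
            pathType: Prefix
            backend:
              service:
                name: cart-service      # plain ClusterIP service
                port:
                  number: 80
          - path: /catalog
            pathType: Prefix
            backend:
              service:
                name: catalog-service
                port:
                  number: 80
```

One LoadBalancer in front of the controller then serves every rule, so adding a 21st HTTP service costs a new rule, not a new cloud LB.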

Q: During a deployment, you notice the Service briefly routes to zero pods. Why?

This happens with the Recreate deployment strategy — all old pods are killed before new ones start. Fix: 1) Switch to RollingUpdate strategy. 2) If you must use Recreate, accept the brief downtime. 3) Ensure readiness probes are configured so new pods only receive traffic once ready. 4) Check maxUnavailable settings if using RollingUpdate — it shouldn't allow all pods to be down simultaneously.

🌍 Real-World Use Case

A microservices-based e-commerce platform with 40 services uses this pattern: internal services (catalog, cart, payments) talk to each other over ClusterIP services via DNS names; a single Ingress controller, exposed through one LoadBalancer Service, routes all external HTTP traffic by hostname and path; and managed dependencies such as an RDS database are mapped into cluster DNS with ExternalName services. This keeps cloud LB costs to a single load balancer while every internal hop stays on a stable, discoverable address.

📝 Summary

- Services give a set of pods a stable virtual IP and DNS name; pod IPs are ephemeral, Service addresses are not.
- ClusterIP serves internal traffic, NodePort opens a static port (30000-32767) on every node, LoadBalancer provisions a cloud LB, and ExternalName maps an external DNS name via CNAME.
- Traffic flows nodePort → port → targetPort; label selectors and readiness probes determine which pod IPs land in the Endpoints object.
- When a Service misbehaves, check its endpoints first: empty endpoints mean the selector matches no ready pods.
