
Demystifying Kubernetes Service Discovery: How Services Keep Pods Connected

This article explains Kubernetes service discovery by first covering essential network concepts, then detailing how Services, DNS, Endpoints, and kube-proxy work together to provide stable, load‑balanced communication between dynamic Pods in a cluster.

Efficient Ops

K8S Service Discovery Journey

Kubernetes service discovery is often confusing; this article is divided into two parts: background network knowledge and an in‑depth look at Kubernetes service discovery.

Network background knowledge

Deep dive into Kubernetes service discovery

The networking fundamentals come first; readers already familiar with them can skip ahead to the service discovery section.

K8S Network Basics

Kubernetes applications run in containers, which are grouped into Pods.

Each Pod attaches to a flat IP network, often a VXLAN overlay, called the Pod network.

Every Pod has a unique, routable IP address within the Pod network.

These three factors let each application component communicate directly without NAT.

Dynamic Network

When scaling horizontally, new Pods with new IPs are added; when scaling down, Pods and their IPs are removed, making the network appear chaotic. Rolling updates add new Pods and remove old ones, requiring constant tracking of healthy Pods.

Kubernetes solves this with the Service object, which provides a stable network endpoint and load-balances traffic to Pods. (Admittedly, "Service" is an unfortunate name, since the term already describes application processes; the capital-S Service discussed here is a distinct Kubernetes API object.)

Service Brings Stability

A Service creates a stable network endpoint in front of a set of Pods and distributes traffic among them.

Typically, a Service sits in front of Pods that perform the same function, e.g., a front‑end Service for web Pods and another Service for authentication Pods.

Clients communicate with the Service, which then balances the load to the Pods.

The Service’s name, IP, and port remain constant even as underlying Pods change.

K8S Service Parsing

A Service can be viewed as having a front‑end and a back‑end:

Front‑end: immutable name, IP, and port.

Back‑end: a set of Pods that match specific label selectors.

The front-end's stability means the name, IP, and port never change for the life of the Service, so clients need not worry about a stale DNS cache pointing at an outdated address.

The back‑end is dynamic, consisting of Pods selected by labels and accessed via load balancing.

Load balancing is simple Layer-4 (TCP/UDP) round-robin: new connections are distributed across the Pods in turn, while all traffic on an established connection is consistently routed to the same Pod.
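The behavior described above can be sketched in a few lines of Python. This is a toy simulation of the idea, not Kubernetes code: new connections are assigned to back-end Pods in rotation, and an established connection keeps its Pod (the IPs are hypothetical).

```python
from itertools import cycle

class L4RoundRobin:
    """Toy model of a Service's Layer-4 round-robin load balancing."""

    def __init__(self, pod_ips):
        self._pods = cycle(pod_ips)   # rotate through healthy Pods
        self._affinity = {}           # connection id -> chosen Pod

    def route(self, conn_id):
        # An established connection always goes to the same Pod.
        if conn_id not in self._affinity:
            self._affinity[conn_id] = next(self._pods)
        return self._affinity[conn_id]

svc = L4RoundRobin(["10.1.0.4", "10.1.0.5"])
print(svc.route("conn-a"))  # first connection -> 10.1.0.4
print(svc.route("conn-b"))  # next connection -> 10.1.0.5
print(svc.route("conn-a"))  # same connection -> same Pod, 10.1.0.4
```

Real kube-proxy implements this in the kernel rather than in userspace, but the contract is the same: distribution per connection, not per packet.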

Summary

Applications run in containers as Pods, all sharing a flat Pod network with unique IPs, enabling direct communication. Pods are transient, so Kubernetes introduces the Service object to provide a stable name, IP, and port, forwarding traffic to healthy Pods.

Deep Dive into Service Discovery

Service discovery comprises two functions: registration and discovery.

Service Registration

Kubernetes uses DNS as the service registry. Each cluster runs a DNS service (CoreDNS) in the kube-system namespace, and every Service automatically registers its name and ClusterIP in this DNS.

Registration steps:

1. POST a new Service definition to the API Server.

2. The request passes authentication, authorization, and admission checks.

3. The Service is allocated a ClusterIP and persisted in the cluster data store.

4. The Service configuration is propagated cluster-wide.

5. CoreDNS notices the new Service and creates the corresponding DNS A record.
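The definition POSTed in step 1 is typically a YAML manifest along these lines (the name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc        # this name is what CoreDNS registers
spec:
  selector:
    app: my-app           # back-end: Pods carrying this label
  ports:
    - port: 80            # stable front-end port on the ClusterIP
      targetPort: 8080    # port the Pod containers actually listen on
```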

Once registered, other Pods can discover the Service via DNS.

Endpoint Object

Each Service has a Label Selector; Pods matching this selector are listed in an automatically created Endpoints object.

The Endpoints object holds the current list of healthy Pod IPs for the Service.
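An automatically maintained Endpoints object for the hypothetical my-app-svc Service might look roughly like this (Pod IPs are illustrative):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: my-app-svc        # same name as the Service it backs
subsets:
  - addresses:            # current healthy Pod IPs
      - ip: 10.1.0.4
      - ip: 10.1.0.5
    ports:
      - port: 8080
```

As Pods come and go, Kubernetes updates this address list, so the Service always forwards to live back-ends.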

Service Discovery in Action

Assume two applications, my-app and your-app, each with its own Service (my-app-svc and your-app-svc) and corresponding DNS records:

my‑app‑svc: 10.0.0.10

your‑app‑svc: 10.0.0.20

Pods query the cluster DNS to resolve Service names to ClusterIPs, then send traffic to the Service Network.

Prerequisite: the client must know the target Service’s name.
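The lookup flow can be modeled with a tiny stand-in for the cluster DNS — a plain dictionary here, purely to illustrate that discovery boils down to name-to-ClusterIP resolution (names and IPs follow the example above):

```python
# Toy stand-in for cluster DNS: Service name -> ClusterIP.
CLUSTER_DNS = {
    "my-app-svc": "10.0.0.10",
    "your-app-svc": "10.0.0.20",
}

def discover(service_name):
    """Resolve a Service name the way a Pod's resolver would."""
    try:
        return CLUSTER_DNS[service_name]
    except KeyError:
        # A real resolver would return NXDOMAIN for an unknown name.
        raise LookupError(f"unknown service: {service_name}")

print(discover("my-app-svc"))  # -> 10.0.0.10
```

In a real cluster, the Pod's /etc/resolv.conf points at the cluster DNS, so ordinary library calls perform this resolution transparently.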

Network Mechanics

The ClusterIP resides on the Service network, which has no direct route. Pods therefore send this traffic to their default gateway (the cbr0 bridge), which passes it up to the node's network interface. Before the packet leaves the node, the kernel rewrites its destination header to the IP of a healthy Pod from the Endpoints list.

Each node runs kube-proxy, which watches for Service changes and creates iptables or IPVS rules that capture traffic destined for Service ClusterIPs and forward it to the appropriate Pod IPs.

kube-proxy is not a traditional userspace proxy; it merely programs the iptables/IPVS rules.
In large clusters, IPVS mode (a kernel Layer-4 load balancer) is increasingly preferred over iptables mode for better performance.

Final Recap

Creating a Service allocates a virtual ClusterIP, registers the name and IP in cluster DNS, and creates an Endpoints object containing healthy Pods. All nodes configure iptables/IPVS rules to capture traffic to the ClusterIP and forward it to the real Pod IPs.

Translator: Pseudo Architect

Source: https://nigelpoulton.com/blog/f/demystifying-kubernetes-service-discovery

Tags: Kubernetes, service discovery, Service, CoreDNS, kube-proxy, Pod Network, ClusterIP
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
