
How Kubernetes Networking Works: Inside Pods, Nodes, and Cross‑Node Communication

This article demystifies Kubernetes networking by explaining the fundamental design of unique Pod IPs, intra‑node communication via veth pairs and bridges, and inter‑node packet routing across CIDR blocks, providing clear step‑by‑step illustrations of how containers talk within and between nodes.


Kubernetes Network Model

The core design principle of Kubernetes networking is that each Pod receives a unique IP address, which is shared by all containers in the Pod and is routable to every other Pod in the cluster.

Every Pod has a unique IP.

These Pod IPs are backed by a sandbox container (the "pause" container) that holds the network namespace for the Pod. Even if a container dies and a new one is created, the Pod IP remains unchanged, eliminating IP or port conflicts on the host.

The only requirement is that every Pod IP must be reachable from all other Pods, regardless of the node they run on.

Intra‑node Communication

On each Kubernetes node (a Linux machine), there is a root network namespace (root netns) that contains the primary network interface, eth0. Each Pod has its own network namespace and is connected to the root namespace via a virtual Ethernet pair (veth pair), forming a pipe with one end in the Pod and the other in the root namespace.

The Pod side of the veth pair is named eth0, while the host side appears as vethxxx. A Linux bridge (e.g., cbr0) connects all Pods on the node, similar to Docker's docker0 bridge.

When a packet travels from pod1 to pod2 on the same node, the flow is:

1. The packet leaves pod1 through its eth0 and arrives at the host side of the veth pair, vethxxx.
2. vethxxx passes it to the bridge cbr0, which broadcasts an ARP request asking which attached endpoint owns the destination IP.
3. The vethyyy endpoint (pod2's host side) replies that it owns the IP, so the bridge knows where to forward the packet.
4. The packet traverses pod2's veth pipe and arrives at pod2's eth0.

This simple mechanism enables container-to-container communication within a node.
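The bridge's role in the steps above can be sketched as a lookup table that maps Pod IPs to host-side veth endpoints. The following is a toy Python model, not real kernel behavior; the names (cbr0, vethxxx, vethyyy) come from the article, while the Pod IPs are illustrative:

```python
# Toy model of intra-node Pod-to-Pod delivery via a Linux bridge.
# The bridge resolves the destination IP to a host-side veth endpoint
# (the role ARP plays on a real node) and forwards the frame there.

class Bridge:
    def __init__(self, name):
        self.name = name
        self.ports = {}  # Pod IP -> host-side veth name

    def attach(self, pod_ip, veth):
        self.ports[pod_ip] = veth

    def forward(self, src_ip, dst_ip):
        # "ARP": ask which attached endpoint owns dst_ip.
        veth = self.ports.get(dst_ip)
        if veth is None:
            return None  # not on this node; would go out the node's eth0
        return f"{src_ip} -> {self.name} -> {veth} -> {dst_ip}"

cbr0 = Bridge("cbr0")
cbr0.attach("10.244.1.2", "vethxxx")  # pod1
cbr0.attach("10.244.1.3", "vethyyy")  # pod2

print(cbr0.forward("10.244.1.2", "10.244.1.3"))
# 10.244.1.2 -> cbr0 -> vethyyy -> 10.244.1.3
```

If no attached endpoint owns the destination IP, the lookup fails, which is exactly the case handled by inter-node routing below.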

Inter‑node Communication

Pods must also be reachable across nodes. Kubernetes does not dictate how this is achieved; it can rely on L2 networking (switching/ARP), L3 (IP routing), overlay networks, or cloud provider routing tables. Each node is assigned a unique CIDR block for its Pod IPs, ensuring no overlap between nodes.

In cloud environments, the provider's routing tables direct traffic to the correct node. Additional network plugins can also handle cross‑node traffic.
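The per-node CIDR split can be illustrated with Python's ipaddress module. The cluster range 10.244.0.0/16 with a /24 per node is a common default in some network plugins, but is an assumption here, not a Kubernetes requirement:

```python
import ipaddress

# Carve a cluster-wide Pod range into non-overlapping per-node blocks.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_cidrs = list(cluster_cidr.subnets(new_prefix=24))

node1, node2 = node_cidrs[0], node_cidrs[1]
print(node1, node2)  # 10.244.0.0/24 10.244.1.0/24

# Pod IPs drawn from different nodes can never collide:
assert not node1.overlaps(node2)
assert ipaddress.ip_address("10.244.1.7") in node2
```

Because each node draws Pod IPs only from its own block, a destination IP alone is enough to identify the owning node.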

When a packet moves from pod1 on node 1 to pod4 on node 2, the steps are:

1. The packet leaves pod1 through its eth0 and arrives at the host side of the veth pair, vethxxx.
2. vethxxx passes it to the bridge cbr0, which broadcasts an ARP request for the destination IP.
3. No Pod on node 1 owns pod4's IP, so the ARP request goes unanswered and the packet is forwarded to the node's primary interface, eth0.
4. The packet leaves node 1 with pod1's IP as the source and pod4's IP as the destination.
5. The routing table, configured with per-node CIDR routes, directs the packet to the node whose CIDR block contains pod4's IP (node 2).
6. On node 2, the packet arrives at eth0 and is forwarded to the bridge cbr0, which ARPs for the destination and discovers the owning endpoint, vethyyy.
7. The bridge forwards the packet through the veth pair into pod4's eth0.
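The routing decision in step 5 can be sketched as a containment check over per-node CIDR routes. The node names and addresses below are illustrative, standing in for entries a cloud provider's route table or a network plugin might program:

```python
import ipaddress

# Per-node routes: destination CIDR -> next-hop node.
routes = {
    ipaddress.ip_network("10.244.1.0/24"): "node1",
    ipaddress.ip_network("10.244.2.0/24"): "node2",
}

def next_hop(dst_ip):
    """Return the node whose Pod CIDR contains dst_ip, or None."""
    dst = ipaddress.ip_address(dst_ip)
    for cidr, node in routes.items():
        if dst in cidr:
            return node
    return None  # no route; the packet would be dropped

# pod4's IP falls inside node2's CIDR, so the packet is sent to node 2.
print(next_hop("10.244.2.4"))  # node2
```

Because the per-node blocks never overlap, at most one route can match, so a simple containment check is sufficient here.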

These steps illustrate the fundamental workings of Kubernetes networking, both within a single node and across multiple nodes.

Feel free to leave comments and discuss further.

Tags: cloud-native, kubernetes, Networking, Pod IP, Containers
Written by Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
