
How Does Kubernetes Networking Really Work? A Visual Deep Dive

This article explains the core principles of Kubernetes networking, covering pod IP allocation, intra‑node communication via veth pairs and bridges, and inter‑node packet routing using CIDR blocks and cloud provider routes, all illustrated with diagrams.

Efficient Ops

If you have already used Kubernetes for testing or production workloads, you have likely felt its impact; if you haven't, it is worth starting soon, because it is a clear technology trend.

Although many tools exist to set up and manage clusters, understanding what happens under the hood—especially the network—is essential for troubleshooting and solving real problems.

Kubernetes Network Model

The core design principle of Kubernetes networking is that each Pod has a unique IP address shared by all containers in the Pod and routable to every other Pod.

Every Pod has a unique IP address.

The Pod's IP is held by a sandbox ("pause") container that owns the network namespace, so the IP remains stable even if the application containers are recreated. The only requirement the model imposes is that every Pod IP be reachable from every other Pod, regardless of which node each runs on.
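To make the "unique, non-conflicting Pod IPs" idea concrete, here is a toy model (not Kubernetes source code; the cluster CIDR and per-node prefix size are illustrative assumptions) of how a cluster-wide range can be carved into one sub-range per node, with each Pod drawing a unique address from its node's range:

```python
import ipaddress

# Assumed cluster Pod CIDR, split into one /24 per node.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_cidrs = list(cluster_cidr.subnets(new_prefix=24))

# Pods on node 1 draw unique IPs from that node's range.
node1_pods = node_cidrs[0].hosts()   # generator of usable host IPs
pod_a = next(node1_pods)             # 10.244.0.1
pod_b = next(node1_pods)             # 10.244.0.2

# No two node ranges overlap, so no two Pods can share an IP.
assert not node_cidrs[0].overlaps(node_cidrs[1])
```

Because the node ranges are disjoint, routing a packet to a Pod reduces to routing it to the node that owns the containing CIDR, which is exactly what the inter-node section below relies on.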

Intra‑Node Communication

Each Kubernetes node (a Linux machine) has a root network namespace containing the primary interface eth0. Every Pod gets its own network namespace plus a virtual Ethernet (veth) pair that connects the Pod namespace to the root namespace. One end of the pair appears as eth0 inside the Pod, while the other end lives in the root namespace under a name like vethxxx.

The Linux bridge cbr0 (similar to Docker's docker0) connects all of these veth interfaces, allowing Pods on the same node to communicate.

When a packet travels from pod1 to pod2 on the same node, the steps are:

1. The packet leaves pod1 through its eth0 and arrives in the root namespace on the paired vethxxx interface.
2. The packet passes to cbr0, which broadcasts an ARP request to resolve the destination IP to a MAC address.
3. The bridge learns that vethyyy owns the destination IP.
4. The packet crosses that veth pair and arrives in pod2's network namespace.
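The bridge's role in the steps above can be sketched as a simple lookup table. This is a toy model for illustration only (real bridges switch on MAC addresses learned in the kernel; the interface names here match the examples in the text):

```python
# Toy model of cbr0: map each Pod IP to the veth port that answered ARP for it.
bridge_table = {}  # destination IP -> veth interface in the root namespace

def arp_learn(pod_ip, veth):
    """Record which bridge port owns this IP (the result of an ARP exchange)."""
    bridge_table[pod_ip] = veth

def forward(dest_ip):
    """Return the veth the bridge would switch the packet out of, or None."""
    return bridge_table.get(dest_ip)

arp_learn("10.244.0.2", "vethyyy")      # pod2's root-namespace veth end
assert forward("10.244.0.2") == "vethyyy"
assert forward("10.244.1.4") is None    # unknown IP: not a Pod on this node
```

The `None` case is the interesting one: when no local port owns the destination IP, the packet cannot stay on the bridge, which is precisely what triggers the inter-node path described next.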

Inter‑Node Communication

Pods must also be reachable across nodes. Kubernetes does not dictate the mechanism: it can be implemented with L2 (ARP across nodes), L3 (IP routing), an overlay network, or any CNI plugin. Each node is assigned a unique CIDR block for Pod IPs, ensuring no IP conflicts between nodes.

In cloud environments, the cloud provider’s routing tables usually handle cross‑node traffic. Proper routing on each node directs packets to the node that owns the destination Pod’s CIDR.

When a packet travels from pod1 on node 1 to pod4 on node 2, the process is:

1. The packet leaves pod1 through its eth0 and arrives in the root namespace on the paired vethxxx interface.
2. The packet passes to cbr0, which broadcasts an ARP request for the destination IP.
3. No device on the bridge owns the destination IP, so the packet is forwarded to node 1's primary interface eth0.
4. The packet leaves node 1 with pod1's IP as the source and pod4's IP as the destination.
5. The routing table, configured with per-node CIDR routes, forwards the packet toward the node whose CIDR block contains pod4's IP.
6. On node 2, the bridge receives the packet, resolves the destination via ARP, discovers that vethyyy owns it, and forwards the packet through the veth pair to pod4.
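Step 5 above, the CIDR-based route lookup, can be sketched as a containment check. The addresses and node names below are assumptions for illustration, mimicking the cloud provider routes described earlier:

```python
import ipaddress

# Hypothetical route table: each Pod CIDR maps to the node that owns it.
routes = {
    ipaddress.ip_network("10.244.0.0/24"): "node1",
    ipaddress.ip_network("10.244.1.0/24"): "node2",
}

def next_hop(dest_ip):
    """Pick the node whose CIDR block contains the destination Pod IP."""
    addr = ipaddress.ip_address(dest_ip)
    for cidr, node in routes.items():
        if addr in cidr:
            return node
    return None  # no matching route: the packet would be dropped

assert next_hop("10.244.1.4") == "node2"   # pod4's IP falls in node 2's CIDR
```

Because the per-node CIDRs never overlap, at most one entry can match, so the lookup is unambiguous.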

This overview covers the fundamentals of Kubernetes networking.

Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely read original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career, growing together.
