
How Kubernetes Enables Container Networking: From Docker Bridge to CNI Plugins

This article explains Kubernetes container networking fundamentals, the role of Linux network namespaces, veth pairs, bridges, and iptables, and compares same‑host communication via docker0 with cross‑host solutions like CNI plugins (flannel, Calico) and their routing modes.

Efficient Ops

Container Network Basics

In Kubernetes, network connectivity between containers is essential, but Kubernetes itself does not implement a container network; it relies on plug‑in mechanisms. The basic principles are that any pod can communicate directly with any other pod across nodes without NAT, nodes can talk to pods, and each pod has an independent network stack shared by its containers.

A Linux container’s network stack lives in its own network namespace, which includes a network interface, a loopback device, a routing table, and iptables rules. Implementing container networking requires several Linux features:

Network Namespace: isolates network stacks.

Veth Pair: a pair of virtual Ethernet devices that connects two namespaces; a frame sent into one end emerges from the other.

Iptables/Netfilter: kernel-level packet filtering and manipulation.

Bridge: a layer-2 virtual switch that forwards frames based on MAC addresses.

Routing: Linux routing tables determine where packets are forwarded.
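To make the bridge's role concrete, here is a toy Python model of the MAC-learning behavior a Linux bridge such as docker0 applies to each frame; the port and MAC names are invented for illustration, and a real bridge does this in the kernel per frame.

```python
# Toy model of Linux bridge forwarding: learn source MACs per port,
# forward to the known port, flood when the destination is unknown.
class Bridge:
    def __init__(self, ports):
        self.ports = set(ports)      # host-side veth interfaces (illustrative names)
        self.fdb = {}                # forwarding database: MAC -> port

    def handle(self, in_port, src_mac, dst_mac):
        self.fdb[src_mac] = in_port  # learn which port src_mac lives behind
        out = self.fdb.get(dst_mac)
        if out is None:              # unknown/broadcast destination: flood
            return self.ports - {in_port}
        return {out} if out != in_port else set()

br = Bridge({"veth-c1", "veth-c2", "veth-c3"})
# c1 broadcasts an ARP request: destination unknown, frame is flooded
print(br.handle("veth-c1", "02:42:ac:11:00:02", "ff:ff:ff:ff:ff:ff"))
# c2 replies: the bridge has learned c1's MAC, so only one port receives it
print(br.handle("veth-c2", "02:42:ac:11:00:03", "02:42:ac:11:00:02"))
```

This is why two containers on docker0 can reach each other with nothing more than ARP plus layer-2 forwarding.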

Same‑Host Communication

On a single host, Docker creates the <code>docker0</code> bridge. Containers connect to this bridge via a veth pair: one end resides in the container (e.g., <code>eth0</code>), while the other end appears on the host (e.g., <code>veth20b3dac</code>).

<code>docker run -d --name c1 hub.pri.ibanyu.com/devops/alpine:v3.8 /bin/sh</code>
<code>docker exec -it c1 /bin/sh</code>
<code>/ # ifconfig</code>
<code>eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02</code>
<code>          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0</code>
<code>/ # route -n</code>
<code>0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0</code>
<code>172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0</code>

The host's side of the bridge can be inspected with <code>brctl show</code>:

<code># brctl show</code>
<code>bridge name    bridge id            STP enabled    interfaces</code>
<code>docker0        8000.02426a4693d2    no             veth20b3dac</code>

Launching a second container and pinging it demonstrates inter‑container connectivity:

<code>docker run -d --name c2 -it hub.pri.ibanyu.com/devops/alpine:v3.8 /bin/sh</code>
<code>docker exec -it c1 /bin/sh</code>
<code>/ # ping 172.17.0.3</code>

The ping succeeds because the destination IP falls inside the directly connected <code>172.17.0.0/16</code> route, whose gateway is <code>0.0.0.0</code>, indicating a direct (layer-2) path via the bridge; ARP resolves the MAC address of the target container.
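The route selection itself is a longest-prefix match over the container's routing table. This can be sketched with Python's standard <code>ipaddress</code> module (a simplified model of the kernel's FIB lookup, using the two routes from the container above):

```python
import ipaddress

# The container's routing table: (destination network, gateway);
# gateway None models the 0.0.0.0 gateway of a directly connected route.
routes = [
    ("0.0.0.0/0",     "172.17.0.1"),  # default route via docker0's IP
    ("172.17.0.0/16", None),          # directly connected bridge subnet
]

def lookup(dst):
    """Return the gateway of the most specific matching route."""
    dst = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(net), gw) for net, gw in routes
               if dst in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("172.17.0.3"))  # None: deliver directly on the bridge
print(lookup("8.8.8.8"))     # "172.17.0.1": forward via the gateway
```

A direct (gateway-less) match is exactly the case where the container ARPs for the destination itself rather than for a gateway.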

Cross‑Host Networking and CNI

Docker's default setup cannot reach containers on different hosts. Kubernetes introduces the Container Network Interface (CNI) to standardize plug-in integration. Popular CNI plugins include flannel, Calico, Weave, and Contiv. A CNI plugin typically creates its own bridge (<code>cni0</code>) on each node.

CNI supports three networking modes:

Overlay: uses tunnels (e.g., VXLAN, IPIP) to encapsulate pod traffic, allowing cross-host communication without relying on the underlying network.

Layer-3 Routing: pod traffic is forwarded by routing tables on each node without encapsulation; this requires the nodes to share a Layer-2 network.

Underlay: pods use the physical network directly, with routes distributed by BGP or similar protocols and no bridge in between.
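Overlay encapsulation is not free: for VXLAN over IPv4, the outer headers wrap the whole inner Ethernet frame and add roughly 50 bytes, which is why flannel's vxlan backend typically lowers the pod MTU on a standard 1500-byte network. A quick sketch of the arithmetic:

```python
# Rough VXLAN overhead arithmetic (IPv4, no VLAN tags, no outer options):
# outer IP + outer UDP + VXLAN header wrap the inner Ethernet frame.
OUTER_IP, OUTER_UDP, VXLAN, INNER_ETH = 20, 8, 8, 14
overhead = OUTER_IP + OUTER_UDP + VXLAN + INNER_ETH

physical_mtu = 1500
pod_mtu = physical_mtu - overhead
print(overhead, pod_mtu)  # 50 1450
```

This is the origin of the 1450-byte MTU commonly seen on pod interfaces in VXLAN-based clusters; host-gw and pure-routing setups avoid the overhead entirely.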

Flannel Host‑gw Example

In host-gw mode, each node installs one route per remote pod subnet, with the destination node's IP as the next hop. An example route on node1:

<code>10.244.1.0/24 via 10.168.0.3 dev eth0</code>

This rule forwards packets destined for the <code>10.244.1.0/24</code> pod subnet to the next-hop node (<code>10.168.0.3</code>), where the <code>cni0</code> bridge delivers them to the target pod.
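host-gw's Layer-2 requirement can be checked mechanically: the route's next hop must fall inside the node's own interface subnet, otherwise there is no direct layer-2 path to it and the gateway route cannot be installed. A small sketch, assuming node1's <code>eth0</code> is 10.168.0.2/24 (the local address is illustrative; the next hop is the one from the route above):

```python
import ipaddress

# host-gw precondition: the next-hop node must be directly reachable,
# i.e., inside the local interface's subnet (same L2 segment).
node_eth0 = ipaddress.ip_interface("10.168.0.2/24")  # assumed node1 address
next_hop = ipaddress.ip_address("10.168.0.3")        # node2, from the route

print(next_hop in node_eth0.network)  # True -> the host-gw route is usable
```

When nodes sit in different L2 segments, this check fails, and an overlay backend (or BGP-distributed routes) is needed instead.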

Calico with BGP

Calico replaces the bridge with pure routing. Its components are:

Calico CNI plugin – integrates with kubelet.

Felix – maintains host routing rules and forwarding information.

BIRD – runs BGP to distribute routes.

Confd – configuration management.

Each pod gets a veth pair; the host side is attached directly to the host network namespace. A typical route installed by Felix looks like:

<code>10.92.77.163 dev cali93a8a799fe1 scope link</code>

Calico operates in a node-to-node mesh by default, where every node runs a BGP client that peers with every other node. For larger clusters (roughly more than 50 nodes), a Route Reflector (RR) topology is recommended to reduce the number of BGP sessions.
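The scaling pressure behind the RR recommendation is easy to quantify: a full mesh needs one BGP session per node pair, while a reflector topology grows linearly with the node count. A rough sketch (RR-to-RR peerings and redundancy details are ignored):

```python
# BGP session counts: full node-to-node mesh vs. route reflectors.
def mesh_sessions(n):
    # every node peers with every other node: n choose 2
    return n * (n - 1) // 2

def rr_sessions(n, reflectors=2):
    # each node peers only with the reflectors (simplified model)
    return n * reflectors

for n in (10, 50, 200):
    print(n, mesh_sessions(n), rr_sessions(n))
```

At 50 nodes the mesh already needs 1225 sessions versus 100 with two reflectors, and the gap widens quadratically from there.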

Choosing a Solution

In public‑cloud environments, using the cloud provider’s CNI or a simple flannel host‑gw setup is common. In private data‑center scenarios, Calico’s BGP‑based routing often provides better performance and flexibility. The choice depends on the underlying network topology and scalability requirements.

