How Docker and Kubernetes Networking Works: From Bridge to Flannel
This article explains Docker's built‑in network drivers and how the default bridge mode is constructed and exposed externally, then turns to Kubernetes' networking requirements and details how Flannel implements pod‑to‑pod communication with an overlay network and packet encapsulation.
Docker Network Modes
Docker uses a plug‑in architecture for networking and ships several built‑in drivers: bridge, host, none, overlay, macvlan, plus third‑party network plugins; the driver is selected with the --network flag.
bridge : the default driver; creates a network namespace, assigns an IP, and connects the container to a virtual bridge (docker0).
host : container shares the host's network stack.
none : disables networking; only the loopback interface is available.
overlay : enables multiple Docker daemons to communicate, supporting Swarm services and inter‑container traffic.
macvlan : assigns a MAC address to the container, allowing it to appear as a physical device on the network.
Network plugins : third‑party plugins can be installed from Docker Store or other vendors.
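As a quick illustration of driver selection, the sketch below creates a user‑defined bridge network and attaches a container to it (the network name mynet and the nginx image are arbitrary examples). Because the real commands need a running Docker daemon, they are wrapped in a small print‑or‑run helper:

```shell
# Print-or-run helper: executes the command when the docker CLI is present
# (ignoring failures), otherwise just echoes it so the sketch runs anywhere.
run() {
  if command -v docker >/dev/null 2>&1; then "$@"; else echo "+ $*"; fi
  return 0
}

# Create a user-defined network with the bridge driver...
run docker network create --driver bridge mynet
# ...and select it at container start with --network.
run docker run -d --name web --network mynet nginx
```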
The default bridge mode creates a virtual bridge named docker0 and assigns it a subnet taken from the private address ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
When a container starts, Docker creates a veth pair: one end is placed inside the container as eth0, while the other remains on the host, attached to docker0. Port mapping with -p (publish a specific port) or -P (publish all exposed ports) makes container services reachable from the external network.
<code>$ docker run -P myimage</code>
<code>$ docker run -p 8080:80 myimage</code>
Kubernetes Network Model
Kubernetes must solve four communication problems: container‑to‑container, pod‑to‑pod, pod‑to‑service within the cluster, and external application‑to‑service.
Each pod receives a unique IP address, so on the network a pod behaves much like a virtual machine; as a result, pods can communicate with one another directly, without NAT.
Communication Inside a Single Pod
Kubernetes starts a pause container that holds the network namespace for the pod; all other containers in the pod join this namespace, so they can reach one another via localhost.
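The pause pattern can be reproduced with plain Docker. The sketch below is illustrative (it assumes a Docker daemon, and myapp is a hypothetical application image), so the commands are wrapped in a function rather than executed directly:

```shell
# Hypothetical sketch of the pod/pause pattern (requires a Docker daemon).
create_pod() {
  # The pause container exists only to own the pod's network namespace.
  docker run -d --name pause registry.k8s.io/pause:3.9
  # The application container joins that namespace instead of getting its
  # own, so both share one IP and can talk over localhost, as in a pod.
  docker run -d --name app --network container:pause myapp
}
```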
Communication Between Different Pods
Inter‑pod traffic is handled by network plugins such as Flannel, which builds an L3 overlay network. Flannel runs a flanneld agent on each node; the agent leases a per‑node subnet recorded in etcd, from which pod IPs are assigned.
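Concretely, the cluster network definition that flanneld reads from etcd is a small JSON document; the sketch below just renders a typical VXLAN configuration (the CIDR and subnet length are illustrative), since writing it for real would require a running etcd:

```shell
# Shape of the network definition flanneld expects in etcd under
# /coreos.com/network/config (values illustrative). On a real cluster it
# would be stored with etcdctl rather than just printed.
CONFIG='{ "Network": "10.244.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
echo "$CONFIG"
```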
Flannel Operation Steps
Configure the cluster network in etcd.
<code>$ etcdctl get /coreos.com/network/config</code>
Allocate a subnet for each node.
<code>$ etcdctl ls /coreos.com/network/subnets</code>
Start flanneld on each node; it reads its subnet lease from etcd.
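Assuming a 10.244.0.0/16 cluster network with /24 leases (illustrative values, not flannel's real allocator), the per‑node subnets that end up in etcd look like this:

```shell
# Toy illustration of per-node /24 leases carved from a 10.244.0.0/16
# cluster network; this mimics the shape of flannel's subnet leases.
CLUSTER="10.244"
LEASES=""
for NODE in 1 2 3; do
  LEASES="${LEASES}node${NODE} ${CLUSTER}.${NODE}.0/24
"
done
printf '%s' "$LEASES"
```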
Create the virtual interface flannel.1 on the node.
<code>$ ip addr show flannel.1</code>
Configure Docker's bridge docker0 with a unique CIDR using the --bip flag.
<code>$ ip addr show docker0</code>
Update the routing table so that packets can be forwarded across hosts.
<code>$ route -n</code>
Data Flow Between Containers
When a source container sends a packet, it first reaches the docker0 bridge, which forwards it to flannel.1. Flannel encapsulates the packet (e.g., in VXLAN) and sends it out through the host's eth0. The destination node's eth0 receives the packet, flanneld decapsulates it and passes it to flannel.1, which forwards it to the destination docker0 bridge and finally to the target container.
The source container can inspect its routing table with:
<code>$ kubectl exec -it {PodID} -c {ContainerID} -- ip route</code>
The destination node can view its routes with:
<code>$ ip route</code>
VXLAN encapsulation wraps the original frame in outer IP and UDP headers plus a VXLAN header; the encapsulated packet then traverses the physical network and, after decapsulation on the destination node, is finally delivered to the target container.
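Those extra headers are why flannel's VXLAN interface runs with a reduced MTU: the inner Ethernet frame travels as payload, and together with the outer IP, UDP, and VXLAN headers the tunnel costs 50 bytes, leaving 1450 bytes for flannel.1 on a standard 1500‑byte link. A quick check of the arithmetic:

```shell
# VXLAN overhead per packet: the inner Ethernet header is carried as
# payload, and the tunnel adds outer IP, UDP, and VXLAN headers on top.
INNER_ETH=14   # inner Ethernet header, encapsulated inside the tunnel
OUTER_IP=20    # outer IPv4 header
OUTER_UDP=8    # outer UDP header
VXLAN_HDR=8    # VXLAN header
OVERHEAD=$((INNER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR))
MTU=$((1500 - OVERHEAD))   # MTU left for flannel.1 on a 1500-byte link
echo "overhead=${OVERHEAD} flannel.1 mtu=${MTU}"
```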
Efficient Ops
This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.