Master Docker & Kubernetes Networking: From Bridge to Flannel Explained
This article walks through Docker's built‑in network drivers—including bridge, host, none, overlay, macvlan and plugins—then dives into Kubernetes networking, detailing Pod communication, the Flannel CNI workflow, and how data traverses virtual bridges and physical interfaces.
Before discussing Kubernetes networking, let’s review Docker networking. Docker uses a plugin architecture and ships several network drivers by default: bridge, host, none, overlay, macvlan, and third‑party network plugins. The <code>--network</code> flag selects the driver when running a container.
bridge : default driver; creates a network namespace, assigns an IP, and connects the container to a virtual bridge.
host : uses the host’s network stack directly.
none : provides no network, only the loopback interface.
overlay : connects multiple Docker daemons for swarm services.
macvlan : assigns a MAC address to the container, allowing it to appear as a physical device on the network.
Network plugins : third‑party plugins available from Docker Store or other vendors.
By default Docker uses the bridge driver; the following diagram illustrates it.
1.1 Building a bridge network
When Docker is installed it creates a virtual bridge named <code>docker0</code> and assigns it an address from the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16); on a default install this is 172.17.0.0/16. The <code>ifconfig</code> command shows the <code>docker0</code> details, and <code>docker network inspect bridge</code> reveals its subnet and gateway.
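As a quick sanity check of those numbers, here is a minimal sketch using standard IP math. It assumes the common default subnet of 172.17.0.0/16 for <code>docker0</code>; your installation may report something different from <code>docker network inspect bridge</code>.

```python
import ipaddress

# Assumed default bridge subnet; this is configurable, so treat it as
# illustrative rather than guaranteed.
subnet = ipaddress.ip_network("172.17.0.0/16")

# Docker assigns the first usable address to docker0 itself, which then
# acts as the default gateway for containers on the bridge.
gateway = next(subnet.hosts())

print(gateway)                 # 172.17.0.1
print(subnet.num_addresses)    # 65536 addresses in the /16
```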
Running a container creates a veth pair: one end (named <code>eth0</code>) is placed inside the container’s network namespace, and the other is attached to the <code>docker0</code> bridge on the host. The <code>brctl show</code> command lists the veth devices attached to the bridge.
1.2 External access
The <code>docker0</code> bridge is not reachable from outside the host, so container ports must be published with <code>-p</code> or <code>-P</code>. <code>-P</code> maps each exposed container port to a random high host port, while <code>-p hostPort:containerPort</code> maps a container port to a specific host port.
<code>$ docker run -P {image}</code>
<code>$ docker run -p {hostPort}:{containerPort} {image}</code>
Kubernetes Network Model
Kubernetes networking differs from Docker and must solve four problems: communication between containers, between Pods, between Pods and Services inside the cluster, and between external applications and Services.
Container‑to‑container communication
Pod‑to‑Pod communication
Pod‑to‑Service communication
External‑application‑to‑Service communication
Kubernetes assumes each Pod has its own IP, making Pods behave like physical hosts for port mapping, service discovery, load balancing, etc. This article focuses on container‑to‑container and Pod‑to‑Pod communication; see other articles for Pod‑to‑Service and external access.
2.1 Communication between containers in the same Pod
Kubernetes creates a pause container for each Pod, assigns it a unique IP, and places the Pod’s other containers in the same network namespace (using <code>--net=container:xxx</code>). All containers in the Pod share that namespace and can reach each other via <code>localhost</code>.
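As a rough analogy, the snippet below uses plain threads and loopback sockets standing in for containers (no actual namespace setup is performed): two workloads sharing one network stack reach each other over <code>localhost</code>, exactly the way an app container reaches a sidecar in the same Pod.

```python
import socket
import threading

# Toy analogy: a "sidecar" listens on loopback; the "app" connects to it.
# Containers in one Pod share a network namespace, so they talk the same
# way: via 127.0.0.1, with no Pod IP involved.
server = socket.socket()
server.bind(("127.0.0.1", 0))          # kernel picks a free port
server.listen(1)
port = server.getsockname()[1]

def sidecar():
    conn, _ = server.accept()
    conn.sendall(b"pong")
    conn.close()

t = threading.Thread(target=sidecar)
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))    # reach the peer via localhost
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply)  # b'pong'
```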
2.2 Communication between containers in different Pods
Cross‑Pod communication relies on network plugins such as Flannel or Calico. Flannel, one of the most widely used CNI plugins, builds an L3 overlay in which Pods on the same host share a subnet while Pods on different hosts belong to different subnets.
Each node runs a <code>flanneld</code> agent that obtains a subnet for the node and assigns IPs to Pods. Flannel stores its configuration in etcd and forwards packets via VXLAN, UDP, or host‑gw backends.
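The allocation scheme can be sketched with standard IP math. The 10.244.0.0/16 cluster network and the /24 per‑node leases below are illustrative assumptions (common Flannel defaults); <code>flanneld</code> reads the real values from etcd.

```python
import ipaddress

# Illustrative cluster-wide Pod network; flanneld would read this
# from etcd rather than hard-coding it.
cluster_network = ipaddress.ip_network("10.244.0.0/16")

# Carve one /24 lease per node, Flannel-style: Pods on the same node
# share a subnet, Pods on different nodes get different subnets.
node_subnets = list(cluster_network.subnets(new_prefix=24))
node_a, node_b = node_subnets[1], node_subnets[2]

print(node_a)   # 10.244.1.0/24
print(node_b)   # 10.244.2.0/24

# A Pod IP on node A does not belong to node B's subnet, so cross-node
# traffic must be routed through the overlay rather than bridged.
pod_a = next(node_a.hosts())
print(pod_a)              # 10.244.1.1
print(pod_a in node_b)    # False
```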
2.3 Flannel operation in Kubernetes
1) Set the cluster network in etcd:
<code>$ etcdctl ls /coreos.com/network/config</code>
2) Allocate subnets for each node:
<code>$ etcdctl ls /coreos.com/network/subnets</code>
<code>$ etcdctl ls /coreos.com/network/subnets/{subnet}</code>
3) Start <code>flanneld</code> on each node; it reads the etcd config, obtains a subnet lease, and writes the details to <code>/run/flannel/subnet.env</code>.
<code>$ cat /var/run/flannel/subnet.env</code>
4) Create the virtual interface <code>flannel.1</code> on the node:
<code>$ ip addr show flannel.1</code>
5) Configure Docker’s bridge <code>docker0</code> with a unique CIDR using the <code>--bip</code> flag:
<code>$ ip addr show docker0</code>
6) Adjust the routing table so that packets can traverse nodes:
<code>$ route -n</code>
2.4 Data path
When a source container sends data, it first reaches the <code>docker0</code> bridge, which forwards it to the <code>flannel.1</code> virtual NIC. That NIC encapsulates the packet (e.g., in VXLAN) and sends it out via the host’s <code>eth0</code>. The outer Ethernet header carries the source and destination MAC addresses, and the outer IP header carries the source and destination node IPs. The packet traverses the physical network to the destination node, where <code>eth0</code> receives it, the kernel decapsulates it, and <code>flannel.1</code> hands it to <code>docker0</code>, which finally delivers it to the target container.
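To make the encapsulation step concrete, here is a toy sketch of the VXLAN framing (8‑byte header layout per RFC 7348). The packet contents are made up, and the 50‑byte overhead figure assumes IPv4 outer headers without VLAN tags.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): a flags byte with the
    I bit set (0x08), 3 reserved bytes, the 24-bit VNI, 1 reserved byte."""
    return struct.pack("!B3xI", 0x08, vni << 8)

# The inner Ethernet frame (the Pod's packet as it left docker0) gets
# wrapped in: outer Ethernet (14) + outer IPv4 (20) + outer UDP (8)
# + VXLAN (8) = 50 bytes of overhead on the physical network.
inner_frame = b"\x00" * 100            # stand-in for the Pod's frame
encapsulated = vxlan_header(vni=1) + inner_frame

overhead = 14 + 20 + 8 + 8
print(len(encapsulated))               # 108: VXLAN header + inner frame
print(1500 - overhead)                 # 1450: hence flannel.1's lower MTU
```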
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and hope to accompany you throughout your operations career.