
How VXLAN Enables Pod Networking with Flannel in Kubernetes

This article explains the fundamentals of VXLAN, why it is preferred over VLAN, and provides a step‑by‑step guide on configuring Flannel in VXLAN mode within a Kubernetes cluster to achieve cross‑node pod communication.

Ops Development Stories

Overview

This article covers three main topics: a brief introduction to VXLAN, reasons for using VXLAN, and how to use Flannel (VXLAN mode) in Kubernetes to enable pod‑to‑pod communication.

VXLAN Introduction

VXLAN (Virtual eXtensible Local Area Network, defined in RFC 7348) is an overlay tunneling technology that builds a virtual Layer‑2 network on top of a physical Layer‑3 network by encapsulating Ethernet frames in UDP. It decouples the logical network from the physical underlay and supports both virtual machines and containers.
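To make the encapsulation concrete, here is a minimal Python sketch (not Flannel's code; the function names are our own) that packs and unpacks the 8‑byte VXLAN header defined in RFC 7348: a flags byte with the "VNI present" bit set, reserved fields, and a 24‑bit VNI.

```python
import struct

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    Byte 0 carries the flags (0x08 = "VNI present"), bytes 1-3 are
    reserved, bytes 4-6 carry the 24-bit VNI, byte 7 is reserved.
    """
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    # Two 32-bit big-endian words: flags in the top byte of word 1,
    # the VNI in the top three bytes of word 2.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = pack_vxlan_header(1)          # Flannel's default VNI is 1
print(len(hdr), unpack_vni(hdr))    # 8 1
```

On the wire, this 8‑byte header sits between the outer UDP header and the encapsulated inner Ethernet frame.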

Why Use VXLAN

Supports up to 2^24 (about 16.7 million) virtual networks, compared with 2^12 = 4096 for VLAN, by using a 24‑bit Virtual Network Identifier (VNI).

Provides multi‑tenant isolation with independent IP and MAC allocation.

Meets cloud‑native requirements for flexible, large‑scale VM migration while keeping the broadcast domain bounded.
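The scale difference in the first point is simple arithmetic: VLAN's 12‑bit ID field versus VXLAN's 24‑bit VNI field.

```python
vlan_ids = 2 ** 12    # 802.1Q VLAN tag: 12-bit VLAN ID
vxlan_vnis = 2 ** 24  # VXLAN header: 24-bit VNI
print(vlan_ids, vxlan_vnis)  # 4096 16777216
```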

Using Flannel (VXLAN) in Kubernetes

Note: The following commands were tested on a Kubernetes 1.19 cluster installed with kubeadm, using Flannel’s VXLAN mode.
<code># kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", ...}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", ...}</code>
<code>wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</code>
<code>kubectl apply -f kube-flannel.yml</code>
<code># kubectl get po -A | grep flannel
kube-system   kube-flannel-ds-f4x7m   1/1   Running   0   15h
kube-system   kube-flannel-ds-ltr8h   1/1   Running   0   15h
kube-system   kube-flannel-ds-mp76x   1/1   Running   0   15h</code>

What Flannel Does After Installation

Creates a VXLAN interface named flannel.1 (MTU 1450, UDP destination port 8472, with the node's local IP as the VTEP address). Note that 8472 is the Linux kernel's legacy default VXLAN port; the IANA‑assigned port is 4789.

<code># ip -d link show flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether fe:be:87:93:06:e2 brd ff:ff:ff:ff:ff:ff
    vxlan id 1 local 192.168.0.39 dev eth0 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum</code>
<code># ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
    inet 10.244.0.0  netmask 255.255.255.255  broadcast 10.244.0.0
    ether fe:be:87:93:06:e2  txqueuelen 0  (Ethernet)</code>
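The MTU of 1450 shown above is not arbitrary: it is the underlay's 1500‑byte Ethernet MTU minus the VXLAN encapsulation overhead (outer IPv4 header, outer UDP header, VXLAN header, and the inner Ethernet header). A quick check, assuming IPv4 and an untagged underlay:

```python
ETH_MTU = 1500    # underlay (eth0) MTU
OUTER_IP = 20     # outer IPv4 header
OUTER_UDP = 8     # outer UDP header (dst port 8472)
VXLAN_HDR = 8     # VXLAN header
INNER_ETH = 14    # encapsulated inner Ethernet header

overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH
print(ETH_MTU - overhead)  # 1450 -- the MTU Flannel sets on flannel.1
```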

Creates routing entries for the other nodes' pod CIDRs so that cross‑node pod traffic is routed to flannel.1.

<code># route -n
10.244.1.0   10.244.1.0   255.255.255.0   UG 0 0 0 flannel.1
10.244.2.0   10.244.2.0   255.255.255.0   UG 0 0 0 flannel.1</code>
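Conceptually, the kernel picks the egress device by longest‑prefix match over these entries. A toy Python model (the table is copied from the output above; the `lookup` function is our own simplification, ignoring metrics):

```python
import ipaddress

# Simplified routing table from the `route -n` output above,
# plus an assumed default route via eth0.
routes = [
    (ipaddress.ip_network("10.244.1.0/24"), "flannel.1"),
    (ipaddress.ip_network("10.244.2.0/24"), "flannel.1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),
]

def lookup(dst: str) -> str:
    """Return the egress device via longest-prefix match."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [(net, dev) for net, dev in routes if dst_ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.244.2.4"))  # flannel.1 -- remote pod CIDR
print(lookup("8.8.8.8"))     # eth0 -- falls through to the default route
```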

Adds static ARP entries mapping each remote node's flannel.1 gateway address to its VTEP MAC, along with FDB entries mapping each VTEP MAC to the owning node's physical IP.

<code># arp -n
10.244.1.0   ether 0e:61:06:ff:7a:73   CM   flannel.1
10.244.2.0   ether 0a:72:bf:3f:cd:40   CM   flannel.1</code>
<code># bridge fdb
0a:72:bf:3f:cd:40 dev flannel.1 dst 192.168.0.8 self permanent
fe:be:87:93:06:e2 dev flannel.1 dst 192.168.0.39 self permanent</code>
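These two tables together let a node turn a pod‑subnet gateway address into the outer destination IP of a VXLAN packet. A sketch of the lookup chain, using the ARP and FDB entries shown above:

```python
# Static tables flanneld programs, copied from the output above.
arp = {  # remote gateway IP -> VTEP MAC (from `arp -n`)
    "10.244.1.0": "0e:61:06:ff:7a:73",
    "10.244.2.0": "0a:72:bf:3f:cd:40",
}
fdb = {  # VTEP MAC -> remote node's physical IP (from `bridge fdb`)
    "0a:72:bf:3f:cd:40": "192.168.0.8",
}

def outer_dst(gateway_ip: str) -> str:
    """Resolve the outer destination IP for a VXLAN packet:
    ARP yields the remote VTEP MAC, the FDB maps that MAC to a node IP."""
    return fdb[arp[gateway_ip]]

print(outer_dst("10.244.2.0"))  # 192.168.0.8
```

Because both tables are programmed statically by flanneld (note the `nolearning` flag on flannel.1), no broadcast ARP or flood‑and‑learn traffic is needed.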

Pod‑to‑Pod Communication

Same‑Node Pods

Pods on the same node share the same subnet (e.g., 10.244.1.0/24) and can reach each other directly without encapsulation.

<code># kubectl get po -o wide
nginx-deployment-66b6c48dd5-nzjgd   1/1 Running 0 35m 10.244.1.8 node1
nginx-deployment-66b6c48dd5-jcwc9   1/1 Running 0 35m 10.244.1.9 node1</code>
<code># route -n (inside pod)
Destination   Gateway   Genmask        Flags   Iface
0.0.0.0       10.244.1.1   0.0.0.0   UG   eth0
10.244.0.0    10.244.1.1   255.255.0.0 UG   eth0
10.244.1.0    0.0.0.0      255.255.255.0 U   eth0</code>
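A quick way to see why no encapsulation is needed: both pod IPs fall inside node1's /24, so the third route above (directly connected, Gateway 0.0.0.0) matches. Illustrated with Python's ipaddress module, using the addresses from the pod listing above:

```python
import ipaddress

# node1's pod CIDR and the two pod IPs from the listing above
subnet = ipaddress.ip_network("10.244.1.0/24")
src = ipaddress.ip_address("10.244.1.8")
dst = ipaddress.ip_address("10.244.1.9")

# Both addresses are inside the same /24, so traffic matches the
# directly-connected route and stays on the cni0 bridge.
same_subnet = src in subnet and dst in subnet
print(same_subnet)  # True
```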

Cross‑Node Pods

When pods reside on different nodes, traffic is encapsulated into VXLAN packets, sent to the remote node’s VTEP, decapsulated, and finally delivered via the CNI bridge (cni0).

<code># kubectl get po -o wide
nginx-deployment-66b6c48dd5-f7v9q   1/1 Running 0 60m 10.244.2.4 node2
nginx-deployment-66b6c48dd5-nzjgd   1/1 Running 0 60m 10.244.1.8 node1</code>
<code># route -n (inside pod on node1)
Destination   Gateway   Genmask        Flags   Iface
0.0.0.0       10.244.1.1   0.0.0.0   UG   eth0
10.244.2.0    10.244.1.1   255.255.255.0 UG   eth0</code>
<code># ip -d link show flannel.1 (on node1)
... vxlan id 1 local 192.168.0.39 dev eth0 dstport 8472 ...</code>

The host routes the packet to flannel.1, which encapsulates it and sends it to the remote node's IP; the remote node's VTEP (flannel.1) receives the VXLAN packet on UDP port 8472, decapsulates it, and the inner IP packet, matching the remote node's pod CIDR, is forwarded to the pod via cni0.

Summary

Cross‑node pod communication in a Flannel‑VXLAN setup relies on host routing, VXLAN encapsulation/decapsulation, and correct ARP/FDB entries. Traffic can be inspected with tcpdump on cni0, flannel.1, eth0, or the veth pairs.

Tags: cloud-native, kubernetes, Overlay Network, Flannel, VXLAN, Pod Networking
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
