
How Does Calico’s IPIP Mode Enable Cross‑Node Pod Communication in Kubernetes?

This article explains Calico’s IPIP networking mode in Kubernetes, detailing its architecture, core components, routing behavior, packet encapsulation, and practical verification through ping tests, route inspection, and packet captures, helping readers understand cross‑node pod communication.


Introduction

This article analyzes Calico's IPIP network mode in Kubernetes, explaining the network devices it creates (calixxxx, tunl0) and how cross-node pod communication works.

1. Introduction to Calico

Calico is a popular CNI option in the Kubernetes ecosystem. Compared with Flannel, Calico offers higher performance and flexibility, providing host‑to‑pod networking, security policies, and management features. It implements a pure L3 BGP‑based solution that integrates with OpenStack, AWS, GCE, and other platforms.

Each node runs a virtual router (vRouter) in the Linux kernel. The vRouter advertises container routes via BGP to the whole Calico network, enabling IP routing between containers without additional NAT, tunnels, or overlay networks, thus saving CPU cycles and improving efficiency. Calico also leverages iptables to enforce Kubernetes NetworkPolicy.

Official site: https://www.projectcalico.org/

2. Calico Architecture and Core Components

Core components:

Felix – an agent running on each workload node that configures routes and ACLs to ensure endpoint connectivity.

etcd – a strongly consistent, highly available key‑value store that persists Calico data and network metadata.

BGP Client (BIRD) – reads kernel routing state set by Felix and distributes it within the data center.

BGP Route Reflector (BIRD) – used in large deployments to avoid N² mesh connections between BGP clients.

3. How Calico Works

Calico treats each host’s protocol stack as a router and each container as an endpoint attached to that router. Standard BGP runs between routers, allowing them to learn the network topology and forward traffic at L3, ensuring cross‑node pod connectivity.

4. Calico’s Two Networking Modes

1) IPIP – encapsulates an IP packet inside another IP packet, effectively creating an IP-level tunnel that bridges otherwise separate networks. The implementation resides in the kernel source file net/ipv4/ipip.c.
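To make the encapsulation concrete, here is a minimal sketch of IP-in-IP framing using Python's struct module. It is an illustration, not Calico's implementation: the checksum is left zero, the ICMP bytes are placeholders, and 172.16.36.2 is an assumed address for the sending node (the article never shows it). The pod and gateway addresses are taken from the examples later in the article.

```python
import struct

def ipv4_header(src: str, dst: str, proto: int, payload_len: int) -> bytes:
    """Minimal 20-byte IPv4 header (checksum left as zero for brevity)."""
    ver_ihl = (4 << 4) | 5                      # version 4, IHL 5 (20 bytes)
    total_len = 20 + payload_len
    src_b = bytes(map(int, src.split(".")))
    dst_b = bytes(map(int, dst.split(".")))
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, total_len, 0, 0, 64, proto, 0, src_b, dst_b)

# Inner packet: pod-to-pod ICMP echo request (IP protocol 1 = ICMP)
icmp = b"\x08\x00\xf7\xff\x00\x01\x00\x01"      # type/code/checksum/id/seq (illustrative)
inner = ipv4_header("10.20.105.215", "10.20.42.31", 1, len(icmp)) + icmp

# Outer packet: host-to-host, IP protocol 4 = IPIP
# 172.16.36.2 is an assumed address for the sending node (not given in the article)
outer = ipv4_header("172.16.36.2", "172.16.35.4", 4, len(inner)) + inner

print(outer[9], inner[9], len(outer))   # 4 1 48
```

The key detail is the outer header's protocol field: the value 4 (IPIP) tells the receiving tunnel endpoint that another full IPv4 packet follows, which is exactly what tunl0 strips off on the far side.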

2) BGP – the Border Gateway Protocol, a decentralized path-vector routing protocol that exchanges reachability information (IP prefixes) between autonomous systems without relying on traditional IGP metrics.

5. Analysis of IPIP Network Mode

In the author’s environment, IPIP is used. The following commands illustrate pod discovery and ping testing.

<code># kubectl get po -o wide -n paas | grep hello
demo-hello-perf-d84bffcb8-7fxqj   1/1   Running   0   9d   10.20.105.215   node2.perf  <none>   <none>
demo-hello-sit-6d5c9f44bc-ncpql   1/1   Running   0   9d   10.20.42.31   node1.sit   <none>   <none>
</code>

Ping test from demo-hello-perf to demo-hello-sit:

<code># ping 10.20.42.31
PING 10.20.42.31 (10.20.42.31) 56(84) bytes of data.
64 bytes from 10.20.42.31: icmp_seq=1 ttl=62 time=5.60 ms
64 bytes from 10.20.42.31: icmp_seq=2 ttl=62 time=1.66 ms
64 bytes from 10.20.42.31: icmp_seq=3 ttl=62 time=1.79 ms
--- 10.20.42.31 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 1.662/3.015/5.595/1.825 ms
</code>

Route table inside the pod:

<code># route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref Use Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG    0      0    0   eth0
169.254.1.1     0.0.0.0         255.255.255.255 UH    0      0    0   eth0
</code>

On the host node node2.perf, the routing table contains a tunl0 entry for the destination network:

<code># route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref Use Iface
0.0.0.0         172.16.36.1     0.0.0.0         UG    100    0    0   eth0
10.20.42.0      172.16.35.4     255.255.255.192 UG    0      0    0   tunl0
...</code>
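The kernel picks the tunl0 route over the default route by longest-prefix match. A small sketch with Python's ipaddress module, using just the two rows shown above, reproduces the decision:

```python
import ipaddress

# The two relevant rows of node2's routing table, from `route -n` above:
routes = [
    ("0.0.0.0/0",     "172.16.36.1", "eth0"),   # default route
    ("10.20.42.0/26", "172.16.35.4", "tunl0"),  # remote pod CIDR via the tunnel
]

def lookup(dst: str):
    """Longest-prefix match, as the kernel's forwarding lookup does."""
    ip = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(pfx), gw, dev)
               for pfx, gw, dev in routes if ip in ipaddress.ip_network(pfx)]
    net, gw, dev = max(matches, key=lambda m: m[0].prefixlen)
    return gw, dev

print(lookup("10.20.42.31"))  # ('172.16.35.4', 'tunl0') -> encapsulated via IPIP
print(lookup("8.8.8.8"))      # ('172.16.36.1', 'eth0')  -> plain routing
```

Because 10.20.42.31 falls inside 10.20.42.0/26 (.0 to .63), the /26 route wins over /0, and the packet is handed to tunl0 for encapsulation.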

The tunl0 device forwards packets destined for 10.20.42.0/26 to the gateway 172.16.35.4, which reaches the pod on the other node.

The mysterious cali04736ec14ce interface is one end of a veth pair created for the pod; the other end appears inside the pod as eth0@if122964. Traffic sent to cali04736ec14ce on the host is therefore delivered directly into the pod.

Thus, in IPIP mode, all cross-node pod traffic is encapsulated and sent through the tunl0 tunnel, adding an extra IP header; same-node pod traffic goes straight through the veth pair without encapsulation.
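That extra header has a practical consequence: each IPIP encapsulation adds one 20-byte IPv4 header, so the tunnel interface is typically configured with an MTU 20 bytes below the underlay's. The arithmetic is trivial but worth pinning down (the 1480 figure matches what Calico commonly sets on tunl0 for a 1500-byte underlay):

```python
IPV4_HEADER_LEN = 20   # bytes added by each IPIP encapsulation

def tunl0_mtu(underlay_mtu: int) -> int:
    """MTU typically configured on tunl0: underlay MTU minus one IPv4 header."""
    return underlay_mtu - IPV4_HEADER_LEN

print(tunl0_mtu(1500))  # 1480
print(tunl0_mtu(9000))  # 8980
```

Leaving tunl0 at the underlay MTU would instead force fragmentation (or drops, with DF set) on every full-size pod packet.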

6. Packet Capture Analysis

Running tcpdump -i eth0 -nn -w icmp_ping.cap on the host of demo-hello-sit while pinging from the other pod captures frames with two stacked IP layers (a host-to-host outer header and a pod-to-pod inner header) wrapping the encapsulated ICMP payload.

The diagram shows the two hosts (red boxes) and the two pod IPs (blue boxes). The outer IP header belongs to the host network, while the inner header belongs to the pod network.

Because tunl0 is a tunnel endpoint, the packet is encapsulated before being sent to the remote tunnel device.

7. Pod‑to‑Service Access

Service objects:

<code># kubectl get svc -o wide -n paas | grep hello
demo-hello-perf   ClusterIP   10.10.255.18   <none>   8080/TCP   10d   appEnv=perf,appName=demo-hello
demo-hello-sit    ClusterIP   10.10.48.254   <none>   8080/TCP   10d   appEnv=sit,appName=demo-hello
</code>

Capturing traffic on the host of demo-hello-sit while curling the service shows that the source and destination IPs are still the host and pod IPs: kube-proxy has already DNATed the ClusterIP to a backend pod IP before routing, so service traffic traverses the same IPIP tunnel.
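Conceptually, kube-proxy maintains a mapping from (ClusterIP, port) to backend pod endpoints and rewrites the destination before Calico's routing ever sees the packet. A toy sketch of that translation, using the hypothetical single-backend endpoint table implied by the kubectl output above:

```python
import random

# Hypothetical endpoint table mirroring what kube-proxy programs via iptables/IPVS;
# ClusterIPs and pod IPs are taken from the kubectl output above.
endpoints = {
    ("10.10.48.254", 8080): [("10.20.42.31", 8080)],    # demo-hello-sit
    ("10.10.255.18", 8080): [("10.20.105.215", 8080)],  # demo-hello-perf
}

def dnat(dst_ip: str, dst_port: int):
    """Rewrite a ClusterIP destination to a backend pod address,
    as kube-proxy does before Calico's routing (and tunl0) sees the packet."""
    backends = endpoints.get((dst_ip, dst_port))
    return random.choice(backends) if backends else (dst_ip, dst_port)

print(dnat("10.10.48.254", 8080))  # ('10.20.42.31', 8080)
```

This is why the capture never shows the ClusterIP on the wire: by the time the packet reaches eth0 it already carries pod and host addresses, exactly as in the pod-to-pod case.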

Through these examples, the communication mechanism of Calico’s IPIP mode becomes clear.

Tags: Cloud Native, Kubernetes, Network, CNI, Calico, IPIP
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
