
How Kubernetes Ensures Seamless Pod Networking with CNI and Network Policies

This article explains Kubernetes' fundamental network requirements, the pod networking model, the role of CNI plugins, common implementation approaches, and how Network Policies provide fine-grained traffic control, offering a comprehensive overview of container networking within cloud-native clusters.


Kubernetes Basic Network Requirements

Kubernetes aggregates large numbers of container instances into a cluster that may run on heterogeneous underlying networks; ensuring inter‑container connectivity is a primary concern in production.

Pod Network Model

Kubernetes abstracts containers with the pod concept, the basic scheduling unit. From a networking perspective, each pod must satisfy:

Each pod has a unique IP address and all pods reside in a flat, directly reachable network space.

All containers within the same pod share the same network namespace (netns).

Consequently:

Containers in the same pod share the port space and can reach one another via localhost + port.

Because each pod has its own IP, there is no need for host‑port mapping or port‑conflict handling.

Kubernetes further defines three basic requirements for a qualified cluster network:

Pods on the same node can communicate directly without explicit NAT.

Any node can communicate directly with any pod without address translation, and vice versa.

The source and destination IP seen by a pod are identical, with no intermediate address translation.

Only networks satisfying these conditions can host Kubernetes. Based on this assumption, Kubernetes introduced the classic three-tier pod-deployment-service model. Since version 1.1, Kubernetes has adopted the CNI (Container Network Interface) standard.
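As an illustration of that three-tier model, the sketch below wires a Deployment to a Service by label; the names (web, nginx:1.25) and replica count are placeholders, not from the original article:

```yaml
# Deployment: manages pods, each of which gets its own cluster IP
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
# Service: stable virtual IP in front of the pods selected by app=web
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

Because every pod already has a routable IP, the Service only load-balances across pod IPs; no host-port mapping is involved.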

CNI

The CNI specification imposes fewer constraints on developers than the older CNM model and does not depend on Docker. Implementing a CNI plugin requires a configuration file and an executable that handles ADD and DEL operations (and optionally VERSION).
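As a concrete example of such a configuration file, here is a sketch for the reference bridge plugin shipped with the CNI project (the subnet and bridge name are illustrative):

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The "type" field names the plugin executable to invoke, and the nested "ipam" section delegates address allocation to a second executable, here host-local.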

Typical workflow when Kubernetes uses a CNI plugin:

Kubelet creates a pause container to generate the pod's netns.

The configured CNI plugin (or a chain of plugins) is invoked.

The plugin reads environment variables and command‑line arguments to obtain the netns and network device, then performs the ADD operation.

The CNI plugin configures the pause container's network; other containers in the pod inherit this network.
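The plugin side of this workflow can be sketched in a few lines: the runtime passes the operation through environment variables and the network configuration on stdin, and the plugin dispatches on CNI_COMMAND. This is a minimal, non-authoritative Python sketch; the returned address is a hard-coded placeholder, not real IPAM:

```python
import json


def handle_cni(env, stdin_data):
    """Dispatch one CNI operation.

    A real plugin is an executable: the runtime passes CNI_COMMAND,
    CNI_CONTAINERID, CNI_NETNS, and CNI_IFNAME as environment variables
    and the JSON network configuration on stdin. Here `env` and
    `stdin_data` stand in for os.environ and sys.stdin.read().
    """
    conf = json.loads(stdin_data)
    command = env["CNI_COMMAND"]
    if command == "ADD":
        # A real plugin would create a veth pair, move one end into the
        # netns at env["CNI_NETNS"], and obtain an address from its IPAM
        # plugin; here we return a fixed placeholder result.
        return {
            "cniVersion": conf.get("cniVersion", "0.4.0"),
            "interfaces": [{"name": env.get("CNI_IFNAME", "eth0")}],
            "ips": [{"address": "10.244.0.10/24"}],  # placeholder, not IPAM
        }
    if command == "DEL":
        # A real plugin would tear down the interface and release the IP.
        return {}
    if command == "VERSION":
        return {"cniVersion": "0.4.0", "supportedVersions": ["0.3.1", "0.4.0"]}
    raise ValueError("unsupported CNI_COMMAND: " + command)
```

The ADD result mirrors the shape of a CNI result object (interfaces plus assigned IPs), which the runtime records against the pod's sandbox.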

Pod Network Model Details

When a pod starts, the pause container creates a netns that the other containers share. Within a single pod the network resembles the Docker bridge model: containers share the same network device, routing table, and service ports, allowing intra-pod communication via localhost. External traffic reaches the pod through the host's docker0 bridge, which performs NAT via iptables.
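The shared netns can be observed with a pod like the following sketch, in which a second container reaches the first over localhost without any port mapping (pod name, images, and the probe command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx:1.25              # serves on port 80 inside the shared netns
  - name: probe
    image: curlimages/curl:8.8.0   # illustrative sidecar image
    command: ["sh", "-c", "sleep 5 && curl -s http://localhost:80 && sleep 3600"]
```

Both containers see the same eth0 and the same pod IP; the probe container's curl to localhost:80 lands on the web container's nginx.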

Common Kubernetes Network Solutions

Pod‑to‑pod communication follows two patterns: same‑node communication via the host bridge (layer‑2) and cross‑node communication, which can be achieved either by modifying the underlying network (SDN) or by reusing the existing underlay with overlay or routing techniques.

Overlay (e.g., VxLAN, IPIP) encapsulates container packets within host network packets; popular implementations include Flannel.

Underlay routing adds container networks to the host routing table; solutions include Flannel host‑gw and Calico.

Major CNI projects:

Flannel – widely used, supports multiple backends (UDP, VxLAN, host‑gw).

Weave – similar to Flannel; originally UDP-only, it later added a fast datapath (VXLAN-based) and built-in high-availability storage.

Calico – modifies host routes and synchronizes them via BGP; also offers an IPIP overlay for environments lacking BGP.
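The overlay-versus-routing choice above is often just a one-line configuration switch. For instance, Flannel selects its backend in its net-conf.json (commonly delivered via a ConfigMap); changing Type from vxlan to host-gw swaps the overlay for underlay routing, assuming all nodes share a layer-2 segment:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

The subnet shown is the conventional example range; host-gw avoids encapsulation overhead but requires direct layer-2 reachability between nodes.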

Network Policy (Policy Control)

Network Policy is Kubernetes' built‑in, label‑based mechanism for isolating applications and controlling traffic. It works only with CNI plugins that support policy enforcement (e.g., Flannel does not).

Typical Network Policy example:

<code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
</code>

The policy uses selectors (namespaceSelector, podSelector) to define which pods are affected, and specifies ingress and egress rules based on IP block, protocol, and port, effectively implementing a whitelist model.
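Because the model is a whitelist, a common companion is a default-deny policy: an empty podSelector matches every pod in the namespace, and listing Ingress with no ingress rules blocks all inbound traffic until other policies open it. This pattern is taken from the upstream Kubernetes documentation:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```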

Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles, with a focus on operations transformation and growth throughout your operations career.
