
Understanding the Kubernetes Networking Model: Services, IPs, and Ports

This article provides a comprehensive overview of Kubernetes networking, explaining key concepts such as network namespaces, veth pairs, iptables, services, ClusterIP, NodePort, Ingress, and the role of CNI plugins, while illustrating internal and external communication with practical YAML and kubectl examples.

IT Architects Alliance

In previous articles we introduced Kubernetes as a whole; this piece dives into its core networking technologies, starting with essential terminology such as network namespaces, veth device pairs, iptables/netfilter, bridges, and routing.

Glossary

1. Network namespace: isolates independent network stacks in Linux, enabling Docker containers to have isolated networking.

2. Veth pair: a virtual Ethernet pair that connects different network namespaces.

3. Iptables/Netfilter: Netfilter implements packet filtering in the kernel, while iptables is a user‑space tool to manage Netfilter rule tables.

4. Bridge: a Layer‑2 device that connects multiple Linux ports, similar to a switch.

5. Routing: Linux uses routing tables to decide where to forward IP packets.
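The first three glossary entries can be exercised directly on a Linux host. A minimal sketch (requires root; the names ns1, veth0, and veth1 are made up for illustration):

```shell
ip netns add ns1                                # create an isolated network namespace
ip link add veth0 type veth peer name veth1     # create a veth pair
ip link set veth1 netns ns1                     # move one end into the namespace
ip addr add 10.0.0.1/24 dev veth0
ip link set veth0 up
ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
ip netns exec ns1 ip link set veth1 up
ip netns exec ns1 ping -c 1 10.0.0.1            # traffic crosses the veth pair
```

This is exactly the plumbing a container runtime performs for each container: a namespace for isolation and a veth pair to reach the host.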

The Kubernetes Network Model

Kubernetes abstracts the cluster network to achieve a flat topology, allowing us to reason about networking without physical node constraints.

Key abstractions include:

Service

A Service hides the dynamic nature of backend Pods and provides load‑balancing. It is usually bound to a Deployment and accessed via a stable address. Service‑to‑Pod mapping is performed using label selectors.

Service types (ClusterIP, NodePort, LoadBalancer) determine visibility: ClusterIP is internal only, NodePort exposes a port on each node, and LoadBalancer provisions an external load balancer.
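As a sketch of the label-selector mapping, a Deployment and a Service can be paired like this (names and images are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx        # Pods carry this label
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx            # matches the Pods created above
  ports:
  - port: 80
    targetPort: 80
```

Every Pod whose labels match the selector becomes an endpoint of the Service, which is how the inspection output below ends up listing two backend addresses.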

Example of inspecting a Service:

$ kubectl get svc --selector app=nginx
NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   172.19.0.166   <none>        80/TCP    1m
$ kubectl describe svc nginx
Name:            nginx
Namespace:       default
Labels:          app=nginx
Selector:        app=nginx
Type:            ClusterIP
IP:              172.19.0.166
Port:            80/TCP
TargetPort:      80/TCP
Endpoints:       172.16.2.125:80,172.16.2.229:80

The Service proxies two Pod instances (172.16.2.125:80 and 172.16.2.229:80).

Two IP Concepts

Pod IP: each Pod receives a unique IP allocated from the node's Pod subnet (in this setup, the Docker bridge subnet); Pods can communicate with each other directly using these IPs.

Cluster IP: a virtual IP assigned to a Service; it cannot be pinged directly but is used by kube‑proxy (via iptables or IPVS) to forward traffic to backend Pods.
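Concretely, for the nginx Service above, kube-proxy in iptables mode installs DNAT rules along these lines (chain names are abbreviated for readability; real chains carry hashed suffixes such as KUBE-SVC-XXXXXXXX, and exact rules vary by version):

```
-A KUBE-SERVICES -d 172.19.0.166/32 -p tcp --dport 80 -j KUBE-SVC-NGINX
-A KUBE-SVC-NGINX -m statistic --mode random --probability 0.5 -j KUBE-SEP-1
-A KUBE-SVC-NGINX -j KUBE-SEP-2
-A KUBE-SEP-1 -p tcp -j DNAT --to-destination 172.16.2.125:80
-A KUBE-SEP-2 -p tcp -j DNAT --to-destination 172.16.2.229:80
```

This is why the ClusterIP cannot be pinged: no interface owns it; it exists only as a NAT target in these rules.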

Three Port Concepts

Port (Service port): the port exposed by the Service (e.g., MySQL default 3306). It is only reachable inside the cluster.

nodePort: a port opened on every node that forwards external traffic to the Service. For example, setting nodePort: 30001 allows access via http://<node-ip>:30001.

targetPort: the container’s port defined in the Dockerfile (e.g., 80 for nginx).

Example Service YAML:

kind: Service
apiVersion: v1
metadata:
  name: mallh5-service
  namespace: abcdocker
spec:
  selector:
    app: mallh5web
  type: NodePort
  ports:
  - protocol: TCP
    port: 3017
    targetPort: 5003
    nodePort: 31122

If type is omitted, the Service defaults to ClusterIP; if nodePort is omitted on a NodePort Service, Kubernetes assigns one automatically from the node-port range (30000–32767 by default).

Internal Cluster Communication

Single‑Node Communication

Within a node, communication occurs between containers in the same Pod (via 127.0.0.1) or between Pods on the same node (through the docker0 bridge). Example routing table:

root@node-1:/opt/bin# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.23.100.1    0.0.0.0         UG    0      0        0 eth0
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 flannel.1   # cross‑node traffic
10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 docker0    # intra‑node traffic

Containers in the same Pod share a network namespace, so they talk to each other via 127.0.0.1:port. A veth pair connects the Pod's eth0 to the host bridge.
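The shared-namespace behavior is easy to demonstrate with a two-container Pod (a sketch; names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
  - name: web
    image: nginx          # listens on port 80
  - name: client
    image: busybox        # reaches the sibling container over loopback
    command: ["sh", "-c", "sleep 5 && wget -qO- http://127.0.0.1:80 && sleep 3600"]
```

The client container fetches the nginx page over 127.0.0.1 even though nginx runs in a different container, because both containers share one network namespace.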

Pod‑to‑Pod Communication on the Same Node

Pods share the same docker0 bridge, so traffic is forwarded directly via veth pairs without leaving the node.

Cross‑Node Communication

Cross‑node traffic relies on CNI plugins (e.g., Flannel, Calico). Flannel assigns each node a unique Pod subnet and creates a flannel.1 VXLAN device on every node; packets leaving docker0 are routed to flannel.1, encapsulated (VXLAN, IPIP, etc.), and sent over the physical NIC to the destination node.
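Putting the pieces together, a packet from a Pod on node A to a Pod on node B takes roughly this path under the Flannel/VXLAN setup described above:

```
Pod A eth0
  → veth pair → docker0 (node A)
  → routing table → flannel.1 (VXLAN encapsulation)
  → node A eth0 → physical network → node B eth0
  → flannel.1 (decapsulation) → docker0 (node B)
  → veth pair → Pod B eth0
```

Each node's routing table (shown earlier) is what steers intra-node traffic to docker0 and cross-node traffic to flannel.1.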

External Access to the Cluster

NodePort

Setting a Service to type: NodePort exposes it on a static port on every node, allowing access via NodeIP:NodePort . Example InfluxDB Service:

kind: Service
apiVersion: v1
metadata:
  name: influxdb
spec:
  type: NodePort
  ports:
  - port: 8086
    nodePort: 31112
  selector:
    name: influxdb

Ingress

Ingress provides HTTP layer (L7) load balancing and path‑based routing to Services. A typical Ingress YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.name.com
    http:
      paths:
      - path: /test
        backend:
          serviceName: service-1
          servicePort: 8118
      - path: /name
        backend:
          serviceName: service-2
          servicePort: 8228

The Ingress controller watches these rules, renders an Nginx configuration, writes it to /etc/nginx/nginx.conf, and reloads Nginx to apply the changes.
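Note that the extensions/v1beta1 API shown above has since been removed; on current clusters the same rules are written against networking.k8s.io/v1. A sketch, keeping the host and service names from above (the annotation prefix assumes the ingress-nginx controller):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.name.com
    http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: service-1
            port:
              number: 8118
      - path: /name
        pathType: Prefix
        backend:
          service:
            name: service-2
            port:
              number: 8228
```

The main structural changes are the required pathType field and the nested service/port backend form.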

Conclusion and Outlook

This article dissected the Kubernetes networking model from Service, IP, and Port perspectives, covering internal pod communication, cross‑node networking via CNI, and external exposure through NodePort and Ingress. Future posts will explore deeper networking details.

Written by

IT Architects Alliance

Discussion and exchange on systems, internet, large‑scale distributed, high‑availability, and high‑performance architectures, as well as big data, machine learning, and AI, with real‑world large‑scale architecture case studies. Open to architects who have ideas and enjoy sharing.
