Demystifying Kubernetes Networking: Services, IPs, and Ports Explained
This article breaks down Kubernetes' networking model by defining key concepts such as network namespaces, veth pairs, iptables, bridges, services, ClusterIP, NodePort, and illustrates intra‑ and inter‑node communication, as well as external access methods like NodePort and Ingress.
In a previous article we gave a comprehensive overview of Kubernetes; now we explore its core networking technologies step by step.
Glossary
1. Network namespace : Linux can isolate independent network stacks (interfaces, routing tables, iptables rules) into separate namespaces, preventing communication between them by default; Docker leverages this to achieve container‑level network isolation.
2. Veth pair : A virtual Ethernet pair that enables communication between different network namespaces.
3. Iptables/Netfilter : Netfilter runs in kernel mode to execute packet‑filtering rules; iptables runs in user space to manage Netfilter rule tables, together providing flexible packet processing.
4. Bridge : A layer‑2 virtual device that connects multiple network interfaces on a Linux host, functioning like a software switch.
5. Routing : Linux uses routing tables at the IP layer to decide where to forward packets.
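To make the first two concepts concrete, here is a minimal sketch (requires root; the namespace names and addresses are arbitrary) that creates two network namespaces and connects them with a veth pair:
<code># Create two isolated network namespaces
$ ip netns add ns1
$ ip netns add ns2
# Create a veth pair and move one end into each namespace
$ ip link add veth1 type veth peer name veth2
$ ip link set veth1 netns ns1
$ ip link set veth2 netns ns2
# Assign addresses and bring the interfaces up
$ ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth1
$ ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth2
$ ip netns exec ns1 ip link set veth1 up
$ ip netns exec ns2 ip link set veth2 up
# The namespaces can now reach each other through the veth pair
$ ip netns exec ns1 ping -c 1 10.0.0.2</code>
This is essentially what Docker does for each container, with one end of the veth pair attached to a bridge instead of a peer namespace.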
Complex Network Model
Kubernetes abstracts the cluster’s internal network to achieve a flat network topology, allowing us to reason about networking without physical node constraints.
The following key abstractions are highlighted:
Service
A Service abstracts a set of backend Pods, providing a stable access point and load‑balancing. It is usually bound to a Deployment and selects Pods via label selectors.
Service types determine exposure scope: ClusterIP (internal only), NodePort (exposed on each node’s IP), and LoadBalancer (external cloud load balancer). Example commands:
<code>$ kubectl get svc --selector app=nginx
NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   172.19.0.166   <none>        80/TCP    1m
$ kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP:                172.19.0.166
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.16.2.125:80,172.16.2.229:80
Session Affinity:  None
Events:            <none></code>
The Service above proxies two Pod instances (172.16.2.125:80 and 172.16.2.229:80).
Two IPs
Kubernetes defines two kinds of IPs. The Pod IP is assigned from the node's container bridge network and enables direct Pod‑to‑Pod communication. The Cluster IP is a virtual IP used only by Services: it is not bound to any network interface and is not directly pingable. kube‑proxy implements Cluster IPs with iptables or IPVS rules.
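As a sketch of how this works in iptables mode, the rules kube‑proxy installs for the nginx Service shown earlier look roughly like the excerpt below (the chain suffixes are illustrative; real chain names contain hashes):
<code># Traffic to the ClusterIP is redirected to a per-Service chain
-A KUBE-SERVICES -d 172.19.0.166/32 -p tcp --dport 80 -j KUBE-SVC-NGINX
# One endpoint is picked at random, giving simple load balancing
-A KUBE-SVC-NGINX -m statistic --mode random --probability 0.5 -j KUBE-SEP-1
-A KUBE-SVC-NGINX -j KUBE-SEP-2
# DNAT rewrites the destination to the chosen Pod IP and port
-A KUBE-SEP-1 -p tcp -j DNAT --to-destination 172.16.2.125:80
-A KUBE-SEP-2 -p tcp -j DNAT --to-destination 172.16.2.229:80</code>
This is why the Cluster IP cannot be pinged: nothing answers on that address; packets addressed to it are simply rewritten in the kernel before they leave the node.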
Three Ports
In Kubernetes, “port” usually refers to a Service port exposed to other workloads, not a raw container TCP/UDP port. Three variants matter:
Port
The port the Service itself listens on (e.g., 3306 for a MySQL Service), reachable only inside the cluster via the Cluster IP.
NodePort
Exposes the Service on every node’s IP at a static port (chosen by default from the 30000–32767 range, e.g., 30001), allowing external clients to reach the Service via http://node:30001.
targetPort
The port the container actually listens on (e.g., 80 for Nginx), typically the port exposed by the image.
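Using the values from the example manifest that follows (port 3017, nodePort 31122, targetPort 5003), the three ports line up as sketched here (the node IP is a placeholder):
<code># Inside the cluster: other workloads use the Service name and port
$ curl http://mallh5-service.abcdocker.svc:3017
# Outside the cluster: clients use any node's IP and the nodePort
$ curl http://<nodeIP>:31122
# In both cases kube-proxy forwards the request to a Pod on targetPort 5003</code>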
Example Service YAML:
<code>kind: Service
apiVersion: v1
metadata:
  name: mallh5-service
  namespace: abcdocker
spec:
  selector:
    app: mallh5web
  type: NodePort
  ports:
  - protocol: TCP
    port: 3017
    targetPort: 5003
    nodePort: 31122</code>
In‑Cluster Communication
Single‑Node Communication
Within a single node, communication occurs without crossing physical NICs, covering intra‑Pod container communication and inter‑Pod communication on the same node.
Routing table example:
<code>root@node-1:/opt/bin# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.23.100.1    0.0.0.0         UG    0      0        0 eth0
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 flannel.1
10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 docker0</code>
Containers in the same Pod share one network namespace, so they communicate over <code>127.0.0.1:port</code>. A veth pair connects the Pod’s <code>eth0</code> to the host bridge.
Inter‑Pod Communication on the Same Node
Pods on the same node attach to the same <code>docker0</code> bridge, so the bridge forwards traffic between their veth pairs directly using the destination Pod’s IP.
Cross‑Node Communication
Cross‑node traffic requires a Container Network Interface (CNI) plugin. Kubernetes supports many CNI implementations such as bridge, Calico, Flannel, etc.
Overlay networks (e.g., VXLAN‑based Flannel) encapsulate packets to traverse different subnets, while SDN solutions like Calico use layer‑3 routing with optional IPIP encapsulation.
Flannel creates a <code>flannel.1</code> interface on each node and assigns each node a distinct subnet; the node’s routing table (shown above) sends cross‑node traffic from <code>docker0</code> to <code>flannel.1</code>. With the VXLAN backend the kernel encapsulates this traffic, while with the older UDP backend the flanneld daemon encapsulates it into UDP packets sent over the physical NIC.
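For reference, a Flannel network configuration (stored in etcd or a ConfigMap) that would produce the 10.1.0.0/16 overlay seen in the routing table above might look like this (values illustrative):
<code>{
  "Network": "10.1.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan"
  }
}</code>
Each node then receives a /24 slice of the /16 (e.g., 10.1.1.0/24 on node-1, matching the <code>docker0</code> route above).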
External Access to the Cluster
Common external exposure methods include LoadBalancer, Ingress, and NodePort. This section focuses on NodePort and Ingress.
NodePort
Setting a Service’s type to NodePort exposes it on a static port on every node’s IP, allowing access via <code>nodeIP:nodePort</code>.
Example InfluxDB Service:
<code>kind: Service
apiVersion: v1
metadata:
  name: influxdb
spec:
  type: NodePort
  ports:
  - port: 8086
    nodePort: 31112
  selector:
    name: influxdb</code>
Ingress
Ingress provides HTTP‑level load balancing and path‑based routing, exposing multiple Services behind a single external URL.
Ingress YAML example:
<code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.name.com
    http:
      paths:
      - path: /test
        backend:
          serviceName: service-1
          servicePort: 8118
      - path: /name
        backend:
          serviceName: service-2
          servicePort: 8228</code>
The Ingress controller watches these rules, generates an Nginx configuration, and reloads Nginx to apply changes.
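Note that the extensions/v1beta1 Ingress API shown above has since been removed; on Kubernetes 1.19 and later the same rules would be written against networking.k8s.io/v1, roughly as follows (the annotation key assumes the NGINX ingress controller):
<code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.name.com
    http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: service-1
            port:
              number: 8118
      - path: /name
        pathType: Prefix
        backend:
          service:
            name: service-2
            port:
              number: 8228</code>
The main changes are the required <code>pathType</code> field and the restructured <code>backend.service</code> block.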
Summary and Outlook
This article illustrated Kubernetes networking by dissecting a Service, two IP concepts, and three port types, and described both intra‑cluster and external access mechanisms. Future posts will dive deeper into each networking detail.
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and hope to accompany you throughout your operations career, growing together.