Master Kubernetes: Core Concepts, Architecture, and Advanced Networking Explained
This comprehensive guide demystifies Kubernetes by covering its fundamental principles, core components, multi‑center deployment models, service discovery methods, pod resource sharing, CNI plugins, load‑balancing layers, network isolation dimensions, and IP addressing schemes, equipping readers with the knowledge to excel in K8s interviews and real‑world deployments.
One Goal: Container Operations
Kubernetes (K8s) is an open‑source platform for automated container operations, including deployment, scheduling, and scaling across nodes.
Key functions:
Automated container deployment and replication.
Real‑time elastic scaling of container workloads.
Container orchestration with load balancing.
Scheduling: deciding on which machine a container runs.
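As a minimal sketch of automated deployment, replication, and scheduling, a Deployment like the following (the name and image are illustrative) asks Kubernetes to keep three replicas of a pod running while the scheduler picks the nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # illustrative name
spec:
  replicas: 3            # K8s keeps three pod copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # illustrative image tag
```

Elastic scaling is then a one-line change to `replicas`, or `kubectl scale deployment web --replicas=5`.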
Components:
kubectl – command‑line client, entry point for the system.
kube‑apiserver – REST API server, control plane entry.
kube‑controller‑manager – runs background tasks such as node status, pod counts, and service associations.
kube‑scheduler – assigns pods to nodes based on resource availability.
etcd – high‑availability key‑value store for configuration sharing and service discovery.
kube‑proxy – runs on each node, proxies pod network traffic and retrieves service info from etcd.
kubelet – node‑level agent that receives pod assignments, manages containers, and reports status to the apiserver.
DNS – optional DNS service that creates DNS records for each Service, enabling pod‑to‑service name resolution.
Architecture diagram (not reproduced in this version).
Two‑Site Three‑Center Deployment
This model consists of a local production center, a local disaster-recovery center, and a remote disaster-recovery center; its central challenge is keeping data consistent across the three sites.
Kubernetes uses etcd as a highly available, strongly consistent store for configuration sharing and service discovery. Inspired by ZooKeeper and Doozer, etcd offers four key characteristics:
Simple – HTTP + JSON API usable with curl.
Secure – optional SSL client authentication.
Fast – benchmarked at on the order of a thousand writes per second per instance.
Trustworthy – Raft algorithm ensures distributed consistency.
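Raft's consistency guarantee rests on majority quorums. The toy helper below (not etcd's implementation, just arithmetic) shows why a 3-node etcd cluster tolerates one failed member and a 5-node cluster tolerates two:

```python
def quorum(n: int) -> int:
    """Smallest majority of an n-node cluster."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Nodes that can fail while a majority can still commit writes."""
    return n - quorum(n)

print(tolerated_failures(3))  # -> 1
print(tolerated_failures(5))  # -> 2
```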
Four‑Layer Service Discovery
K8s provides two service‑discovery mechanisms:
Environment variables: kubelet injects Service-related environment variables into each pod at creation time; this requires the Service to exist before the pod is created. For example, a Service named redis-master with ClusterIP 10.0.0.11 and port 6379 yields variables such as REDIS_MASTER_SERVICE_HOST=10.0.0.11 and REDIS_MASTER_SERVICE_PORT=6379.
DNS: deploy KubeDNS as a cluster add-on to enable DNS-based Service discovery.
Both methods rely on underlying TCP/UDP transport.
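The env-var naming rule (upper-case the Service name, replace dashes with underscores, append `_SERVICE_HOST` / `_SERVICE_PORT`) follows Kubernetes' documented convention; the function itself is only an illustration of that rule, not kubelet's code:

```python
def service_env_vars(name: str, cluster_ip: str, port: int) -> dict:
    """Sketch of the env vars kubelet injects for a pre-existing Service."""
    prefix = name.upper().replace("-", "_")   # redis-master -> REDIS_MASTER
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
    }

env = service_env_vars("redis-master", "10.0.0.11", 6379)
print(env["REDIS_MASTER_SERVICE_HOST"])  # -> 10.0.0.11
```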
Five Shared Resources in a Pod
A Pod is the smallest deployable unit in K8s, containing one or more tightly coupled containers that share:
PID namespace – containers can see each other’s processes.
Network namespace – containers share the same IP address and port range.
IPC namespace – containers communicate via SystemV IPC or POSIX message queues.
UTS namespace – containers share a hostname.
Volumes – shared storage defined at the Pod level.
Pod lifecycle is managed by a Replication Controller, defined via a template and scheduled onto a node; when all containers finish, the Pod terminates.
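The shared resources above can be seen in a two-container Pod sketch (names and images are illustrative): both containers live in one network namespace, so they can reach each other on localhost, and both mount the same Pod-level volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo        # illustrative name
spec:
  volumes:
  - name: shared-data      # Pod-level volume, visible to both containers
    emptyDir: {}
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```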
Six Common CNI Plugins
CNI (Container Network Interface) defines a standard for container networking; common plugins include bridge, host‑local, macvlan, ptp, ipvlan, and flannel.
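A CNI plugin is selected through a network configuration file handed to the runtime. The sketch below wires the bridge plugin together with host-local IPAM; the network name, bridge name, and subnet are illustrative:

```json
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```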
Seven‑Layer Load Balancing
Load balancing depends on the data-center network that connects servers; the IDC (Internet Data Center) provides this fabric. Key network devices include top-of-rack (TOR) access switches, core switches, MGW/NAT devices for load balancing and address translation, and external core routers.
Layer 2 load balancing – MAC‑based.
Layer 3 load balancing – IP‑based.
Layer 4 load balancing – IP + port based.
Layer 7 load balancing – URL and application‑level information.
NodePort exposes a Service on a fixed port on every node, but the port range is limited and each exposed Service consumes a cluster-wide port; an external load balancer (e.g., Nginx) combined with Ingress provides a layer-7 solution.
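The difference between layer-4 and layer-7 balancing comes down to what information drives the decision. This toy sketch (all backend IPs and service names are hypothetical) picks a backend from connection-level ip:port alone, versus from the URL path the way an Ingress path rule would:

```python
import hashlib

BACKENDS = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]   # hypothetical pod IPs

def l4_pick(src_ip: str, src_port: int) -> str:
    """Layer 4: only connection-level info (IP + port) is available."""
    h = int(hashlib.md5(f"{src_ip}:{src_port}".encode()).hexdigest(), 16)
    return BACKENDS[h % len(BACKENDS)]

ROUTES = {"/api": "api-svc", "/static": "cdn-svc"}   # Ingress-style path rules

def l7_pick(path: str, default: str = "web-svc") -> str:
    """Layer 7: the HTTP request itself (URL path) drives routing."""
    for prefix, svc in ROUTES.items():
        if path.startswith(prefix):
            return svc
    return default

print(l7_pick("/api/users"))   # -> api-svc
```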
Eight Isolation Dimensions
The K8s scheduler must weigh these isolation dimensions when placing pods.
Nine Network Model Principles
K8s networking follows four basic principles, three network‑requirement principles, one architecture principle, and one IP principle.
Each pod receives a unique IP address, forming a flat, directly reachable network space across nodes (IP‑per‑Pod model).
Key properties:
Pod IPs are allocated by the node's container bridge (docker0 in the classic Docker setup).
Pod internal IP and port match external view.
Containers within a pod share the network stack and can communicate via localhost.
Containers communicate with each other without NAT.
All nodes can communicate with all containers (and vice versa) without NAT.
The address a container sees for itself is the same address others see for it.
Ten IP Address Classes
A class: 1.0.0.0‑126.255.255.255, default mask /8 (255.0.0.0)</code>
<code>B class: 128.0.0.0‑191.255.255.255, default mask /16 (255.255.0.0)</code>
<code>C class: 192.0.0.0‑223.255.255.255, default mask /24 (255.255.255.0)</code>
<code>D class: 224.0.0.0‑239.255.255.255, used for multicast</code>
<code>E class: 240.0.0.0‑255.255.255.255, research use</code>
<code>0.0.0.0 – default route, represents unknown hosts/networks</code>
<code>127.0.0.1 – loopback address</code>
<code>224.0.0.1 – multicast address</code>
<code>169.254.x.x – link‑local address when DHCP fails</code>
<code>10.x.x.x, 172.16‑31.x.x, 192.168.x.x – private address spaceOps Community
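The class of an address follows directly from its first octet, so the table above can be made executable with a small helper (illustrative, classful addressing only):

```python
import ipaddress

def ip_class(addr: str) -> str:
    """Return the classful category (A-E) of an IPv4 address."""
    first = int(ipaddress.IPv4Address(addr)) >> 24   # leading octet
    if first == 0:
        return "special"    # 0.0.0.0/8, unknown host/network
    if first == 127:
        return "loopback"   # 127.0.0.0/8
    if first <= 126:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

print(ip_class("192.168.1.1"))  # -> C
```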