
Kubernetes: What It Is and Why It’s Hard to Get Started

This article provides a concise question-and-answer overview of Kubernetes: its role as a distributed container-orchestration system; the architecture of Master and worker Nodes; core components such as etcd, the kube-apiserver, the scheduler, and the controllers; and how Services, Pods, labels, and scaling operate within a cluster.

Architecture Digest

Kubernetes is a distributed cluster management system built on container technology, distilling Google's years of experience deploying containers at scale.

The cluster consists of multiple Node machines (physical or virtual) overseen by a Master node that centrally manages them.

Question 1: How do Master and Worker nodes communicate? The Master runs the kube-apiserver process, providing the API hub for all components. Each Node runs a kubelet that reports status to the Master and receives commands, creating Pods as instructed. Pods are the basic execution unit and may contain one or more containers sharing a network namespace via a pause container.
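The Pod-and-pause-container relationship above can be sketched in a few lines. This is an illustrative model only (the class and field names are invented, not a Kubernetes API): the point is that every container added to a Pod shares the single Pod IP held by the pause container's network namespace.

```python
# Minimal sketch of the Pod model: containers in a Pod join the network
# namespace of a "pause" (infrastructure) container, so they share one IP.
from dataclasses import dataclass, field


@dataclass
class Container:
    name: str
    image: str


@dataclass
class Pod:
    name: str
    containers: list = field(default_factory=list)
    # The pause container holds the shared network namespace; its IP
    # becomes the Pod IP that every container in the Pod shares.
    pod_ip: str = ""

    def add_container(self, c: Container) -> None:
        self.containers.append(c)


pod = Pod(name="web", pod_ip="10.244.1.7")
pod.add_container(Container("app", "nginx:1.25"))
pod.add_container(Container("log-sidecar", "fluentd:v1"))
# Both containers see the same Pod IP because they share one namespace.
print([c.name for c in pod.containers], pod.pod_ip)
```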

Question 2: How does the Master schedule Pods onto specific Nodes? The kube-scheduler executes scheduling algorithms (e.g., round‑robin) to select an optimal Node. Users can also direct Pods to particular Nodes by matching Node labels with Pod selectors.
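Both scheduling paths mentioned above, label/selector matching to restrict the candidate Nodes and a simple rotation to pick among them, can be sketched as follows. The node names, labels, and the `itertools.cycle`-based round-robin are illustrative assumptions, not the real kube-scheduler algorithm.

```python
# Sketch of the two scheduling paths: filter Nodes by the Pod's
# nodeSelector, then pick among the survivors round-robin.
import itertools

nodes = [
    {"name": "node-1", "labels": {"disk": "ssd"}},
    {"name": "node-2", "labels": {"disk": "hdd"}},
    {"name": "node-3", "labels": {"disk": "ssd"}},
]


def candidates(node_selector):
    """Keep only Nodes whose labels satisfy the Pod's selector."""
    return [n for n in nodes
            if all(n["labels"].get(k) == v for k, v in node_selector.items())]


matching = candidates({"disk": "ssd"})            # node-1 and node-3
rr = itertools.cycle(matching)                    # round-robin over matches
placements = [next(rr)["name"] for _ in range(4)]
print(placements)  # alternates between the two ssd Nodes
```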

Question 3: Where is the cluster state stored, and who maintains it? All configuration and state are stored in etcd, a highly available key-value store. Access to this data is mediated by the kube-apiserver, which offers a RESTful interface for internal components and external users.
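A toy key-value store makes the division of labor concrete: state lives in one watchable store (standing in for etcd), and every read or write goes through a single front door (standing in for the kube-apiserver). The class, the `/registry/...` key, and the callback-style watch are all simplifications for illustration.

```python
# Toy key-value store standing in for etcd, with a watch hook like the
# one controllers use via the kube-apiserver. Purely illustrative.
class Store:
    def __init__(self):
        self._data = {}
        self._watchers = []

    def watch(self, callback):
        """Register a callback fired on every write (a crude 'watch')."""
        self._watchers.append(callback)

    def put(self, key, value):
        self._data[key] = value
        for cb in self._watchers:
            cb(key, value)

    def get(self, key):
        return self._data.get(key)


events = []
store = Store()
store.watch(lambda k, v: events.append((k, v)))
store.put("/registry/pods/default/web", {"phase": "Running"})
print(store.get("/registry/pods/default/web"), events)
```

The watch mechanism is the important part: rather than polling, components are notified when the state they care about changes.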

Question 4: How do external users access Pods running in the cluster? Kubernetes introduces the Service abstraction, which groups Pods with the same labels and provides a stable virtual IP. A kube-proxy on each Node routes traffic from the Service IP to the appropriate Pod IPs, handling load‑balancing across multiple Pods.
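The routing job kube-proxy performs can be sketched as a Service object with a stable virtual IP spreading connections across Pod IPs. The IPs are made up, and the round-robin policy is one simple strategy; real kube-proxy implementations use iptables or IPVS rules rather than in-process code like this.

```python
# Sketch of kube-proxy's job: traffic sent to a Service's stable
# cluster IP is spread across the IPs of its backend Pods.
import itertools


class Service:
    def __init__(self, cluster_ip, pod_ips):
        self.cluster_ip = cluster_ip
        self._backends = itertools.cycle(pod_ips)

    def route(self):
        """Pick the backend Pod IP for the next connection."""
        return next(self._backends)


svc = Service("10.96.0.10", ["10.244.1.7", "10.244.2.3"])
hits = [svc.route() for _ in range(4)]
print(hits)  # connections alternate between the two Pod IPs
```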

Question 5: How are Pods dynamically scaled? A Replication Controller (or, in newer versions, a Deployment) defines the desired replica count for a Pod. The controller continuously reconciles the actual number of Pods with the desired count, allowing manual updates or automatic scaling via the Horizontal Pod Autoscaler.
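The reconciliation idea reduces to a small function: compare the actual replicas with the desired count and create or delete Pods until they match. The Pod names and the single-pass loop are a simplification of the real controller, which works incrementally against the API server.

```python
# Sketch of the reconciliation a Replication Controller / Deployment
# performs: converge the actual Pod count to the desired count.
def reconcile(actual_pods, desired):
    """Return the Pod list after one reconciliation pass."""
    pods = list(actual_pods)
    while len(pods) < desired:          # scale up: create missing Pods
        pods.append(f"web-{len(pods)}")
    while len(pods) > desired:          # scale down: delete surplus Pods
        pods.pop()
    return pods


pods = reconcile(["web-0"], desired=3)  # scale 1 -> 3
print(pods)
pods = reconcile(pods, desired=2)       # scale 3 -> 2
print(pods)
```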

Question 6: How do the various components cooperate? The kube-controller-manager runs multiple controllers (Service, Replication, Node, ResourceQuota, Namespace, etc.). Each controller watches the API server for state changes and attempts to bring the actual cluster state in line with the desired specifications.
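All of these controllers share one pattern, which can be sketched as a single sync pass that drives observed state toward desired state. The `Controller` class and the dictionary-based "objects" are invented for illustration; the real controllers operate on typed API objects through watches.

```python
# Sketch of the shared control-loop pattern inside the
# kube-controller-manager: each pass converges observed -> desired.
class Controller:
    def __init__(self, name):
        self.name = name
        self.observed = {}

    def sync(self, desired):
        """One pass of the loop: make observed state match desired."""
        for key, value in desired.items():
            if self.observed.get(key) != value:
                self.observed[key] = value      # "create/update" an object
        for key in list(self.observed):
            if key not in desired:
                del self.observed[key]          # "delete" a stale object


rc = Controller("replication")
rc.sync({"web": 3, "db": 1})
rc.sync({"web": 5})          # db removed, web scaled up
print(rc.observed)
```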

The article concludes by listing essential concepts (Node, Pod, Label, Selector, various Controllers) and runtime components (kube‑apiserver, kube‑controller‑manager, kube‑scheduler, kubelet, kube‑proxy, pause container) that together enable Kubernetes’ operation.

Source: author “banq”, originally published at https://www.jdon.com/64368.html. The content is shared for learning purposes; all rights belong to the original author.

Tags: cloud-native, Kubernetes, cluster management, container orchestration, controllers, Pods
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
