Step‑by‑Step Deployment of an etcd Cluster and Kubernetes Control Plane with Certificates, Systemd Services, and CNI Networking
This tutorial walks through configuring server hosts, generating TLS certificates with cfssl for etcd and Kubernetes components, deploying an etcd cluster and Kubernetes master services (apiserver, controller‑manager, scheduler) via systemd, setting up kubelet and kube‑proxy on worker nodes, installing Docker, applying Flannel CNI, and adding additional worker nodes to the cluster.
The guide begins by defining the three server IPs (k8s‑master, k8s‑node1, k8s‑node2) and preparing the host file on each node.
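The host preparation can be sketched as an /etc/hosts fragment; the IP addresses below are placeholders, since the summary does not state the actual values:

```
# /etc/hosts — example entries on every node; IPs are illustrative placeholders
192.168.1.10  k8s-master
192.168.1.11  k8s-node1
192.168.1.12  k8s-node2
```

The same three entries go on all nodes so that hostnames resolve consistently before any certificates referencing them are issued.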
It then uses wget to download the cfssl tools, makes them executable, and moves them to /usr/local/bin. A Certificate Authority (CA) configuration (ca-config.json) and CSR (ca-csr.json) are created, and a self-signed CA certificate is generated with cfssl gencert -initca ca-csr.json | cfssljson -bare ca.
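A minimal sketch of the two CA input files follows; the expiry, key size, and subject fields are illustrative assumptions, though the "www" profile name matches the -profile=www flag used later when signing:

```json
// ca-config.json — signing policy with the "www" profile referenced by -profile=www
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
```

```json
// ca-csr.json — CSR for the self-signed CA; CN and names are placeholders
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
```

Running cfssl gencert -initca ca-csr.json | cfssljson -bare ca against these produces ca.pem and ca-key.pem, which sign every later certificate.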
Etcd server certificates are produced by creating server-csr.json for the etcd server (alongside the CA's ca-config.json and ca-csr.json), then running cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server. The resulting ca.pem, server.pem, and server-key.pem are copied to /opt/etcd/ssl.
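A sketch of server-csr.json is below; the hosts list must contain every etcd member's IP, and the addresses shown are the same placeholders as before:

```json
// server-csr.json — etcd server cert request; IPs in "hosts" are placeholders
{
  "CN": "etcd",
  "hosts": ["192.168.1.10", "192.168.1.11", "192.168.1.12"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
```

If an etcd node's IP is missing from "hosts", TLS verification between peers or clients will fail for that node.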
An etcd systemd unit file (/usr/lib/systemd/system/etcd.service) is created with the appropriate flags, and the service is started and enabled. The cluster health is verified using etcdctl endpoint health.
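The unit file can be sketched as follows; the EnvironmentFile path and the exact flag set are assumptions based on the /opt/etcd layout described above, not a verbatim copy of the tutorial's unit:

```ini
# /usr/lib/systemd/system/etcd.service — sketch; cfg path and flags are assumed
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

After systemctl daemon-reload, the service is started and enabled; etcdctl endpoint health (pointed at the TLS endpoints with the same CA and client certs) then confirms the cluster is healthy.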
Docker is installed on all nodes with yum -y install docker-ce. The tutorial then creates Kubernetes TLS assets for the master, generating a CA and a server certificate for the API server, and stores them under /opt/kubernetes/ssl.
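The API server's certificate request can be sketched like this; the hosts list is an assumption, but it must cover the master IP, the first service-network IP, and the in-cluster DNS names of the kubernetes service, or clients inside the cluster will reject the certificate:

```json
// server-csr.json for kube-apiserver — all addresses below are illustrative
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.1.10",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }]
}
```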
Kubernetes control-plane components are configured: kube-apiserver.conf, kube-controller-manager.conf, and kube-scheduler.conf are written, each defining a set of command-line options. Corresponding systemd unit files (kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service) are created, reloaded, started, and enabled.
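The units follow a common pattern: each .conf file defines an options variable that the unit expands. A sketch for the API server is below; the variable name KUBE_APISERVER_OPTS and the binary path are assumptions consistent with the /opt/kubernetes layout:

```ini
# /usr/lib/systemd/system/kube-apiserver.service — sketch; variable name assumed
[Unit]
Description=Kubernetes API Server

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

kube-controller-manager.service and kube-scheduler.service are structurally identical, substituting their own binary and .conf file.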
On the worker side, kubelet.conf and kube-proxy.conf are prepared, along with their YAML configuration files (kubelet-config.yml and kube-proxy-config.yml). Bootstrap kubeconfig files are generated with kubectl config set-* commands, signed by the master CA, and placed in /opt/kubernetes/cfg. Systemd units for kubelet and kube-proxy are created, reloaded, started, and enabled.
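A sketch of kubelet-config.yml follows; the field values (ports, DNS IP, cgroup driver) are illustrative assumptions, while the field names come from the upstream KubeletConfiguration API:

```yaml
# /opt/kubernetes/cfg/kubelet-config.yml — sketch; values are illustrative
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
  - 10.0.0.2
clusterDomain: cluster.local
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
```

The bootstrap kubeconfig built with the kubectl config set-* commands lets the kubelet submit a CSR to the API server on first start; the node only joins once that CSR is approved on the master.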
The Flannel CNI plugin is installed by extracting binaries to /opt/cni/bin and applying the Flannel manifest (kube-flannel.yaml). RBAC rules are applied to allow the API server to communicate with kubelet via a ClusterRole and ClusterRoleBinding defined in apiserver-to-kubelet-rbac.yaml.
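The RBAC manifest can be sketched as below; the subject user "kubernetes" is an assumption that must match the CN of the client certificate the API server presents to kubelets:

```yaml
# apiserver-to-kubelet-rbac.yaml — sketch; subject name assumed to match the
# CN in the API server's client certificate
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/metrics
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
subjects:
  - kind: User
    name: kubernetes
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
  apiGroup: rbac.authorization.k8s.io
```

Without this binding, commands that proxy through the kubelet, such as kubectl logs and kubectl exec, are denied.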
Finally, additional worker nodes are added by copying the prepared /opt/kubernetes directory and systemd unit files to the new hosts, adjusting hostnames in the configuration files, and starting the kubelet and kube‑proxy services. The new nodes are approved on the master using kubectl certificate approve and become part of the cluster, as shown by kubectl get nodes .
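The approval step on the master can be sketched as the following command sequence; the CSR name is a placeholder that must be read from the actual kubectl get csr output:

```
# On the master: list pending bootstrap CSRs, approve, then verify membership
kubectl get csr
kubectl certificate approve node-csr-<name-from-output>
kubectl get nodes
```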