
How to Add a Kubernetes Node: Step‑by‑Step Deployment, CNI, and Runtime Setup

This guide walks you through initializing a new Kubernetes node, installing a container runtime (containerd or Docker with cri‑dockerd), configuring kernel parameters, deploying the Calico CNI plugin, and verifying the node and network components in a production‑grade cluster.

Linux Ops Smart Journey

This article continues the previous tutorial on deploying a Kubernetes master, focusing on adding a new node, installing a container runtime, deploying a CNI plugin, and verifying the setup.

Node Initialization

Stop the firewall and disable SELinux:

<code>$ sudo systemctl stop firewalld && sudo systemctl disable firewalld
$ sudo setenforce 0
$ sudo sed -ri 's/^(SELINUX)=.*$/\1=disabled/' /etc/selinux/config</code>

Disable swap:

<code>$ sudo swapoff -a
$ sudo sed -ri '/ swap / s/^(.*)$/#\1/' /etc/fstab</code>
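The kubelet refuses to start while any swap device is active, so it is worth reading the result back from /proc/swaps instead of assuming the commands above worked. A minimal sketch:

```shell
# Verify swapoff actually took effect: /proc/swaps lists one active
# swap device per line after its header row.
active_swap=$(awk 'NR > 1 {print $1}' /proc/swaps)
if [ -z "$active_swap" ]; then
    echo "no active swap"
else
    echo "swap still active: $active_swap"
fi
```

If a device is still listed, check /etc/fstab for a swap entry the sed pattern missed (for example a `swap` mount written without surrounding spaces).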

Load required kernel modules:

<code>$ cat <<-EOF | sudo tee /etc/sysconfig/modules/ipvs.modules > /dev/null
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- br_netfilter
modprobe -- ipip
EOF

$ sudo chmod 755 /etc/sysconfig/modules/ipvs.modules
$ sudo bash /etc/sysconfig/modules/ipvs.modules</code>
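A quick way to confirm the script loaded everything is to diff the expected module list against what the kernel reports. The `lsmod_output` below is sample data so the snippet is self-contained; on a real node substitute `lsmod_output=$(lsmod | awk '{print $1}')`:

```shell
# Compare the required module list against the loaded-module names.
# lsmod_output is a stand-in here; populate it from `lsmod` on the node.
required_modules="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack br_netfilter ipip"
lsmod_output="ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
ipip"
missing=""
for m in $required_modules; do
    # -x matches the whole line, so ip_vs does not mask ip_vs_rr
    echo "$lsmod_output" | grep -qx "$m" || missing="$missing $m"
done
if [ -z "$missing" ]; then
    echo "all required modules loaded"
else
    echo "missing modules:$missing"
fi
```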

Configure kernel parameters for networking and performance:

<code>$ cat <<-EOF | sudo tee /etc/sysctl.d/kubernetes.conf > /dev/null
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.ip_local_port_range = 32768 65535
net.ipv4.tcp_max_tw_buckets = 65535
net.ipv4.conf.all.rp_filter = 0
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.tcp_fin_timeout = 15
EOF

$ sudo sysctl -p /etc/sysctl.d/kubernetes.conf</code>
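Rather than trusting the exit status of `sysctl -p`, you can read the applied values straight from /proc/sys. Note that the `net.bridge.*` keys only exist once the br_netfilter module is loaded, so an "absent" result there points back at the module step, not at this file:

```shell
# Read applied kernel parameters directly from /proc/sys.
for key in net/ipv4/ip_forward net/bridge/bridge-nf-call-iptables net/bridge/bridge-nf-call-ip6tables; do
    name=$(echo "$key" | tr '/' '.')
    if [ -r "/proc/sys/$key" ]; then
        printf '%s = %s\n' "$name" "$(cat "/proc/sys/$key")"
    else
        printf '%s = absent (is br_netfilter loaded?)\n' "$name"
    fi
done
```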

Deploy Runtime

Choose one of the following runtimes:

containerd: follow the earlier article on installing the runtime, then integrate it with Kubernetes.

Docker: install Docker, then install cri-dockerd to restore compatibility with Kubernetes 1.24+, which removed the in-tree dockershim.

Installation steps for cri-dockerd:

Download the package:

<code>$ curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.15/cri-dockerd-0.3.15.amd64.tgz</code>

Extract and move the binary:

<code>$ tar xvf cri-dockerd-0.3.15.amd64.tgz -C /tmp
$ sudo cp /tmp/cri-dockerd/cri-dockerd /usr/local/bin/</code>

Create a systemd service file:

<code>$ cat <<'EOF' | sudo tee /usr/lib/systemd/system/cri-dockerd.service
[Unit]
Description=CRI interface for Docker
Requires=docker.service
After=docker.service
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --log-level=info --pod-infra-container-image 172.139.20.170:5000/library/pause:3.9 --runtime-cgroups=systemd --docker-endpoint=unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutStopSec=5
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=default.target
EOF</code>

Enable and start the service:

<code>$ sudo systemctl daemon-reload
$ sudo systemctl enable cri-dockerd.service --now</code>
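Before moving on, it is worth a sanity check that cri-dockerd is actually listening on the socket the kubelet will be pointed at during `kubeadm join`. A minimal sketch:

```shell
# Confirm the CRI socket exists; this is the same path passed to
# kubeadm's --cri-socket flag when Docker is the runtime.
CRI_SOCK=/var/run/cri-dockerd.sock
if [ -S "$CRI_SOCK" ]; then
    echo "CRI socket ready: $CRI_SOCK"
else
    echo "CRI socket not found: $CRI_SOCK (is cri-dockerd running?)"
fi
```

If the socket is missing, `journalctl -u cri-dockerd` is the first place to look.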

Deploy Node

Add the Kubernetes repository:

<code>$ cat <<-EOF | sudo tee /etc/yum.repos.d/kubernetes.repo > /dev/null
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.27/rpm/repodata/repomd.xml.key
EOF

$ sudo yum clean all && sudo yum makecache</code>

Install required packages and enable kubelet:

<code>$ sudo yum install -y kubelet kubeadm ipvsadm conntrack-tools
$ sudo systemctl enable kubelet.service</code>

Join the cluster (run on the master to obtain the join command):

<code># Generate a token on a master node
$ sudo kubeadm token create --print-join-command
# Example output (replace with actual values)
kubeadm join 172.139.20.100:6443 --token yisepg.0zaq3448x5ihzt3q \
  --discovery-token-ca-cert-hash sha256:752b20050de2f36a3d3ef1ce420adf4eed60afc526d5e0ff6a672053033b4169 \
  --cri-socket unix:///run/containerd/containerd.sock   # for containerd
# or
  --cri-socket unix:///var/run/cri-dockerd.sock         # for Docker
# Add --control-plane --certificate-key <key> only when joining an
# additional control-plane node; worker nodes need neither flag.</code>
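Since the only part of the join command that differs between runtimes is the CRI socket, a small helper can assemble it and avoid mix-ups. The token and hash below are placeholders; paste in the real output of `kubeadm token create --print-join-command`:

```shell
# Build the worker join command for the chosen runtime.
# Placeholder token/hash: substitute the values printed on the master.
RUNTIME="containerd"   # or "docker"
JOIN_CMD="kubeadm join 172.139.20.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
case "$RUNTIME" in
    containerd) CRI_SOCK="unix:///run/containerd/containerd.sock" ;;
    docker)     CRI_SOCK="unix:///var/run/cri-dockerd.sock" ;;
    *) echo "unknown runtime: $RUNTIME" >&2; exit 1 ;;
esac
echo "$JOIN_CMD --cri-socket $CRI_SOCK"
```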

Deploy Calico CNI Plugin

Download the Calico manifest:

<code>$ sudo mkdir -p /etc/kubernetes/addons
$ sudo curl https://raw.githubusercontent.com/projectcalico/calico/v3.27.4/manifests/calico.yaml -o /etc/kubernetes/addons/calico.yaml</code>

Replace the default image registry if needed:

<code>$ sudo sed -ri 's@docker.io/calico@172.139.20.170:5000/library@g' /etc/kubernetes/addons/calico.yaml</code>

Apply the manifest:

<code>$ kubectl apply -f /etc/kubernetes/addons/calico.yaml</code>

Verification

Check node status:

<code>$ kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
k8s-master01  Ready    control-plane   43h   v1.27.16
k8s-master02  Ready    control-plane   43h   v1.27.16
k8s-master03  Ready    control-plane   43h   v1.27.16
k8s-node01    Ready    <none>          18h   v1.27.16
k8s-node02    Ready    <none>          18h   v1.27.16</code>

Verify that Calico and CoreDNS pods are running:

<code>$ kubectl -n kube-system get pod | egrep 'calico|coredns'
calico-kube-controllers-...   1/1   Running   ...
calico-node-...               1/1   Running   ...
... (one calico-node pod per cluster node)
coredns-5c9cc79fcb-...       1/1   Running   ...
</code>

Conclusion

Deploying production‑grade Kubernetes nodes with proper runtime, CNI, and system tuning ensures a reliable, secure, and scalable cloud‑native environment for enterprise workloads.
