Understanding Docker Shim Deprecation & Installing Kubernetes with Containerd
This article explains why the Kubernetes dockershim component is being phased out, compares Docker and containerd command usage, and provides a step‑by‑step guide to set up a Kubernetes 1.20.5 cluster with containerd as the container runtime, including network plugin installation.
Introduction
When the Kubernetes community announced that the dockershim component would be deprecated after version 1.20, many media outlets claimed that Kubernetes was abandoning Docker. In fact, dockershim is simply a bridge that allows Kubernetes to operate Docker, and its removal is part of the effort to support multiple container runtimes directly via the Container Runtime Interface (CRI).
dockershim was created because early Kubernetes versions assumed Docker as the runtime. As the project matured, the Docker-specific logic was extracted into dockershim so that the core could work with any CRI-compatible runtime, such as containerd.
Since Kubernetes and Docker evolve independently, maintaining dockershim preserves compatibility, but it also adds an unnecessary layer when a native CRI runtime is available. The community therefore aims to remove dockershim and have the kubelet talk to containerd directly.
What Is Containerd?
Containerd is a project spun out of Docker that provides a lightweight container runtime for Kubernetes. Its main features include:
Support for the OCI image specification
Support for the OCI runtime specification (runc)
Image pulling
Container network management
Multi‑tenant storage
Lifecycle management of containers and runtimes
Network namespace management
Typical command differences between Docker and containerd (via crictl) are:
<code>Function            Docker              Containerd (crictl)
------------------------------------------------------------
List images         docker images       crictl images
Pull image          docker pull         crictl pull
Push image          docker push         (none)
Remove image        docker rmi          crictl rmi
Inspect image       docker inspect ID   crictl inspecti ID
List containers     docker ps           crictl ps
Create container    docker create       crictl create
Start container     docker start        crictl start
Stop container      docker stop         crictl stop
Remove container    docker rm           crictl rm
Inspect container   docker inspect      crictl inspect
Attach              docker attach       crictl attach
Exec                docker exec         crictl exec
Logs                docker logs         crictl logs
Stats               docker stats        crictl stats</code>
The usage patterns are largely similar.
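As a memory aid, the mapping above can be sketched as a small shell helper. This function is purely illustrative (it is not part of either CLI):

```shell
#!/bin/sh
# Hypothetical helper: translate a docker subcommand to its crictl
# equivalent, following the comparison table above.
docker_to_crictl() {
  case "$1" in
    images|pull|rmi|ps|create|start|stop|rm|attach|exec|logs|stats)
      echo "crictl $1" ;;   # same verb on both CLIs
    inspect)
      # crictl splits inspection: 'inspecti' for images, 'inspect' for containers
      [ "${2:-}" = "image" ] && echo "crictl inspecti" || echo "crictl inspect" ;;
    push)
      echo "unsupported: crictl cannot push images" ;;
    *)
      echo "unknown: $1" ;;
  esac
}

docker_to_crictl images         # -> crictl images
docker_to_crictl inspect image  # -> crictl inspecti
docker_to_crictl push           # -> unsupported: crictl cannot push images
```

The one asymmetry worth remembering is image inspection (crictl inspecti) and the absence of a push subcommand, since crictl is a debugging tool rather than an image-build client.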
Environment Description
Host Nodes
Two CentOS 7.6 machines are used:
<code>IP Address      OS         Kernel
192.168.0.5     CentOS7.6  3.10
192.168.0.125   CentOS7.6  3.10</code>
Software Versions
<code>Software     Version
kubernetes   1.20.5
containerd   1.4.4</code>
Environment Preparation
Execute the following steps on every node.
<code># 1. Add hosts entries
cat >> /etc/hosts <<EOF
192.168.0.5 k8s-master
192.168.0.125 k8s-node01
EOF
# 2. Disable firewall
systemctl stop firewalld
systemctl disable firewalld
# 3. Disable SELinux (setenforce for the running system, the config file for reboots)
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# 4. Create sysctl config for Kubernetes networking
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# 5. Apply sysctl settings
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
# 6. Install and load ipvs modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# 7. Install ipset and ipvsadm
yum install -y ipset ipvsadm
# 8. Synchronize time
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc sources
# 9. Disable swap
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab
# 10. Reduce swappiness
echo 'vm.swappiness=0' >> /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf
# 11. Install Containerd
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list | grep containerd
yum install -y containerd.io-1.4.4
# 12. Configure Containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
sed -i '/containerd.runtimes.runc.options/a\ SystemdCgroup = true' /etc/containerd/config.toml
sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g" /etc/containerd/config.toml
# 13. Start Containerd
systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd</code>
Install kubeadm, kubelet and kubectl
<code># Add Alibaba Cloud repo for Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
# Install components (version 1.20.5)
yum install -y kubelet-1.20.5 kubeadm-1.20.5 kubectl-1.20.5
# Point crictl at the containerd socket
crictl config runtime-endpoint /run/containerd/containerd.sock
# Enable kubelet service
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet</code>
Initialize the Cluster
Master Initialization
Export the default kubeadm configuration and modify it to use Containerd and systemd cgroup driver.
<code>kubeadm config print init-defaults > kubeadm.yaml</code>
Key modifications (excerpt):
<code>apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.5
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 172.16.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd</code>
Run the initialization:
<code>kubeadm init --config=kubeadm.yaml</code>
After a successful init, copy the admin kubeconfig to the regular user's home directory:
<code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config</code>
Note that the environment preparation and package installation steps must be performed on every node; the kubeconfig steps above apply only to the master.
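Before running kubeadm init (or when debugging a failed one), it can help to verify that the edits above actually landed in kubeadm.yaml. A minimal sketch; the check_kubeadm_yaml helper is ours, not a kubeadm feature:

```shell
#!/bin/sh
# Hypothetical pre-flight check: grep kubeadm.yaml for the fields this
# guide edits, and fail loudly if any is missing.
check_kubeadm_yaml() {
  f="$1"
  for want in \
      'criSocket: /run/containerd/containerd.sock' \
      'cgroupDriver: systemd' \
      'mode: ipvs' \
      'podSubnet: 172.16.0.0/16'; do
    # -F: match the string literally, not as a regex
    grep -qF "$want" "$f" || { echo "missing: $want"; return 1; }
  done
  echo "kubeadm.yaml looks consistent"
}
```

Usage on the master: check_kubeadm_yaml kubeadm.yaml, then kubeadm init --config=kubeadm.yaml.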
Join Worker Nodes
On each worker node, install the same Kubernetes packages, copy the kubeadm join command displayed at the end of the master init, and execute it. Example:
<code>kubeadm join 192.168.0.5:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:446623b965cdb0289c687e74af53f9e9c2063e854a42ee36be9aa249d3f0ccec</code>
Verify the nodes:
<code>kubectl get nodes</code>
If the join command is lost, retrieve it with kubeadm token create --print-join-command.
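If only the CA certificate hash is needed, it can also be recomputed from the cluster CA certificate using the standard openssl pipeline from the kubeadm documentation; the ca_cert_hash wrapper name here is our own:

```shell
#!/bin/sh
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate:
# extract the public key, DER-encode it, and take its SHA-256 digest.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```

Prefix the result with sha256: when passing it to kubeadm join.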
Install a Network Plugin
Install Calico as the pod network:
<code>wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# Edit calico.yaml before applying:
# - if the node interface is not auto-detected correctly, set
#   IP_AUTODETECTION_METHOD (e.g. interface=eth0) in the calico-node
#   container environment
# - set the value of CALICO_IPV4POOL_CIDR to 172.16.0.0/16 so it matches
#   the podSubnet configured in kubeadm.yaml
kubectl apply -f calico.yaml</code>
After a short wait, the nodes should show Ready status.
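Rather than re-running kubectl get nodes by hand, a tiny parser can poll until every node is Ready. A sketch; the all_nodes_ready helper is hypothetical, and statuses such as Ready,SchedulingDisabled would need extra handling:

```shell
#!/bin/sh
# Hypothetical helper: succeed only when every line of
# 'kubectl get nodes --no-headers' output reports STATUS == Ready.
all_nodes_ready() {
  echo "$1" | awk '$2 != "Ready" { bad = 1 } END { exit bad }'
}

# Poll sketch (requires a working cluster and a configured kubectl):
# until all_nodes_ready "$(kubectl get nodes --no-headers)"; do sleep 5; done
```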
<code>kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   47m   v1.20.5
k8s-node01   Ready    <none>                 46m   v1.20.5</code>
Optional: Enable Bash Completion for kubectl
<code>yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc</code>
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.