Understanding Container Runtimes: From Low‑Level to High‑Level and Kubernetes CRI
This article explains the evolution and classification of container runtimes, detailing low‑level and high‑level implementations, demonstrating low‑level runtime usage with runc, and describing how the Kubernetes CRI integrates with runtimes such as containerd, Docker, and CRI‑O.
01 Requirement Overview
Note: Container runtime refers to the software that runs and manages containers.
In the Docker era, the term "container runtime" was clearly defined as the software that runs and manages containers. As Docker’s scope expanded and various orchestration tools emerged, the definition became blurred.
Typical steps to run a Docker container are:
Download the image.
Extract the image into a bundle (flatten layers into a single filesystem).
Run the container.
Initially, specifications only defined the container‑running part as the runtime, but users generally assume all three steps are required capabilities of a container runtime, making the definition confusing.
Common runtimes include runc, runv, lxc, lmctfy, Docker (containerd), rkt, cri‑o, each built for different scenarios and offering different functions. For example, containerd and cri‑o can use runc to run containers but also provide image management and container APIs, which are higher‑level features.
Container runtimes are complex, covering low‑level to high‑level functionalities as illustrated below.
Based on functionality, runtimes are divided into Low‑level Container Runtime and High‑level Container Runtime. Low‑level runtimes focus solely on running the container itself, while high‑level runtimes add features such as image management and APIs.
02 Low‑level Container Runtime
Low‑level runtimes have limited functionality, typically handling only the low‑level tasks of running a container. They follow the OCI specification, accept a rootfs and config.json, and run isolated processes without providing storage or network implementations. They are lightweight and have clear limitations:
Only understand rootfs and config.json; no image capabilities.
No network implementation.
No persistent storage implementation.
Not cross‑platform.
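Concretely, a low‑level runtime's entire input is an OCI bundle directory (layout per the OCI Runtime Specification):

```
bundle/
├── config.json   # OCI runtime configuration: process, mounts, namespaces, cgroups
└── rootfs/       # the container's root filesystem, already extracted
```

Everything above this layer — pulling images, unpacking layers into rootfs/, generating config.json — is high‑level runtime work.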
Low‑level Runtime Demo
The demo uses root privileges on Linux, combining cgcreate, cgset, cgexec, chroot, and unshare to create a simple container. First, create a root filesystem by exporting a busybox image:
<code>
$ CID=$(docker create busybox)
$ ROOTFS=$(mktemp -d)
$ docker export $CID | tar -xf - -C $ROOTFS
</code>
Next, create a UUID for the container and set memory and CPU limits (100 MB of memory, 2 CPU cores):
<code>
$ UUID=$(uuidgen)
$ cgcreate -g cpu,memory:$UUID
$ cgset -r memory.limit_in_bytes=100000000 $UUID  # 100 MB
$ cgset -r cpu.shares=512 $UUID
$ cgset -r cpu.cfs_period_us=1000000 $UUID
$ cgset -r cpu.cfs_quota_us=2000000 $UUID         # quota/period = 2 CPU cores
</code>
Run a command inside the container:
<code>
$ cgexec -g cpu,memory:$UUID \
    unshare -uinpUrf --mount-proc \
    sh -c "/bin/hostname $UUID && chroot $ROOTFS /bin/sh"
/ # echo "Hello from in a container"
Hello from in a container
/ # exit
</code>
Finally, clean up the cgroup and temporary directory:
<code>
$ cgdelete -r -g cpu,memory:$UUID
$ rm -r $ROOTFS
</code>
Representative Low‑level Runtimes
runC – The most widely used OCI runtime, originally part of Docker and later extracted as a standalone tool.
Creating a root filesystem with busybox:
<code>
$ mkdir rootfs
$ docker export $(docker create busybox) | tar -xf - -C rootfs
</code>
Generate a config.json template:
<code>
$ runc spec
</code>
Inspect the generated config.json (truncated for brevity):
<code>{
"ociVersion": "1.0.2",
"process": {
"terminal": true,
"user": {"uid": 0, "gid": 0},
"args": ["sh"],
"env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "TERM=xterm"],
"cwd": "/",
...
}
}
</code>
Run the container:
<code>
$ sudo runc run mycontainerid
/ # echo "Hello from in a container"
Hello from in a container
</code>
rkt (deprecated) – Provided both low‑level and high‑level functionality, similar to Docker.
runV – A hypervisor‑based runtime that has been superseded by Kata Containers.
youki – An OCI runtime implemented in Rust, similar to runC.
03 High‑level Container Runtime
High‑level runtimes handle image transfer and management, unpack images, and pass them to low‑level runtimes for execution. They typically expose a daemon and API for remote clients to run and monitor containers.
They also manage network namespaces, allowing containers to join other containers' networks.
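The hand‑off between the two layers can be sketched in plain shell: the high‑level side prepares an OCI bundle directory, which the low‑level side (runc) is then pointed at. This is a toy sketch, not a real unpacker — the rootfs is left empty and the config.json is a stub:

```shell
# Toy sketch of the high-level runtime's job: prepare an OCI bundle.
# A real runtime would pull an image and extract its layers into rootfs/.
BUNDLE=$(mktemp -d)
mkdir -p "$BUNDLE/rootfs"                               # container root filesystem
echo '{"ociVersion": "1.0.2"}' > "$BUNDLE/config.json"  # stub runtime spec
ls "$BUNDLE"
# A low-level runtime would now take over, e.g.:
#   runc run --bundle "$BUNDLE" mycontainerid
```

runc's --bundle flag points it at such a directory; by default it uses the current working directory.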
High‑level Runtime Examples
Docker – One of the earliest open‑source container runtimes, originally a monolithic daemon (dockerd) with a client. Modern Docker splits this functionality into containerd (high‑level) and runc (low‑level).
containerd – Extracted from Docker, it downloads, manages, and runs images. It unpacks an image into an OCI bundle and invokes runc to start the container. It provides an API and CLI tools (ctr, nerdctl).
<code>
$ sudo ctr images pull docker.io/library/redis:latest
$ sudo ctr images list
$ sudo ctr container create docker.io/library/redis:latest redis
$ sudo ctr container list
$ sudo ctr container delete redis
</code>
CRI‑O – A lightweight CRI runtime that supports OCI, providing image management, container process management, logging, and resource isolation.
04 Kubernetes CRI
CRI was introduced in Kubernetes 1.5 as a bridge between kubelet and container runtimes. A runtime that implements CRI must be a high‑level runtime because it handles image management, pod support, and container lifecycle.
CRI defines a gRPC API with services such as ImageService and RuntimeService. Example RPC calls:
<code>ImageService.PullImage({image: "image1"})
ImageService.PullImage({image: "image2"})
podID = RuntimeService.RunPodSandbox({name: "mypod"})
id1 = RuntimeService.CreateContainer({pod: podID, name: "container1", image: "image1"})
id2 = RuntimeService.CreateContainer({pod: podID, name: "container2", image: "image2"})
RuntimeService.StartContainer({id: id1})
RuntimeService.StartContainer({id: id2})
</code>
Use crictl to interact with a CRI runtime:
<code>cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
</code>
<code>
crictl --runtime-endpoint unix:///run/containerd/containerd.sock …
</code>
Runtimes Supporting CRI
containerd – The most popular CRI runtime, implements CRI as a plugin and listens on a Unix socket. Since version 1.2 it supports multiple low‑level runtimes via runtime handlers (e.g., runc, gVisor, Kata Containers) using Kubernetes RuntimeClass.
Docker – The original dockershim was the first CRI shim. Modern Docker already runs on top of containerd, and the kubelet can use containerd's CRI plugin directly; dockershim was deprecated and removed in Kubernetes 1.24.
CRI‑O – A lightweight CRI runtime that implements OCI and provides image and container management, logging, and isolation. Its default socket is /var/run/crio/crio.sock.
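Selecting a non‑default handler from Kubernetes goes through a RuntimeClass object. A minimal sketch, assuming a runsc (gVisor) handler has been configured in containerd's CRI plugin:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # must match a handler name configured in the CRI runtime
```

A pod then opts in by setting runtimeClassName: gvisor in its spec.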
Efficient Ops
This public account is maintained by Xiaotianguo and friends, regularly publishing original technical articles. We focus on operations transformation and will accompany you throughout your operations career.