
Why Kubernetes Does Not Use Docker’s Libnetwork/CNM

Although Docker’s libnetwork/CNM offers a network plugin model, Kubernetes has chosen not to adopt it due to fundamental design mismatches, reliance on low‑level key‑value stores, security and scalability concerns, and a strategic shift toward the simpler, more portable CNI plugin framework.

Architects' Tech Alliance

Since its 1.0 release, Kubernetes has provided a very basic network‑plugin model, introduced at roughly the same time as Docker’s Libnetwork and the Container Network Model (CNM). The Kubernetes plugin system, however, has remained marked “alpha”.

Even after Docker’s network‑plugin support became stable, Kubernetes did not adopt it. Because Kubernetes supports multiple container runtimes, of which Docker is only one, the real question is not whether Kubernetes can use the CNM drivers that belong to the Docker runtime, but whether CNM can serve as a universal network layer across all runtimes — and that was never a concrete goal of its design.

Kubernetes deliberately avoided using Docker’s CNM/Libnetwork and instead investigated the Container Network Interface (CNI) model proposed by CoreOS, which aligns better with Kubernetes’ architecture.

The Docker network driver makes several assumptions that conflict with Kubernetes: it distinguishes between “local” (single‑node) and “global” (multi‑node) drivers, and the global drivers rely on a low‑level key‑value store abstraction (Libkv). To run a Docker overlay driver in a Kubernetes cluster, administrators would need to deploy an additional Consul, etcd, or Zookeeper instance, adding unnecessary complexity.
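To make that dependency concrete: with the Docker 1.9‑era overlay driver, every daemon in the cluster had to be pointed at a shared external key‑value store before multi‑host networking would function. The flags looked roughly like the following sketch (the Consul address and interface name are placeholders):

```shell
# Every Docker daemon in the cluster must be told where the shared
# key-value store (Libkv backend) lives before the overlay driver works.
# consul.example.com and eth0 are placeholder values.
dockerd \
  --cluster-store=consul://consul.example.com:8500 \
  --cluster-advertise=eth0:2376

# Only then can a multi-host overlay network be created:
docker network create --driver overlay my-overlay
```

It is exactly this extra store — Consul, etcd, or Zookeeper — that a default Kubernetes installation would have had to provision and operate alongside its own control plane.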

While users who are willing to provision the required infrastructure could make Docker networking work, the default Kubernetes installation would burden users with extra components, so the Docker global drivers (including Overlay) were not adopted.

Docker’s network model also contains flawed assumptions, such as a broken “discovery” implementation that corrupts /etc/hosts in certain Docker versions and an embedded DNS server that cannot easily be disabled.

Kubernetes already provides its own service naming, discovery, and DNS (SkyDNS), making Docker’s container‑level naming unsuitable. Moreover, the local/global split in Docker creates both in‑process and out‑of‑process plugins, which are difficult to map to Kubernetes’ control plane.
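Kubernetes’ discovery model resolves Services rather than individual containers: a Service object gets a stable cluster DNS name that the in‑cluster DNS server (SkyDNS at the time) answers, independent of any container‑level naming. A minimal sketch (the names and ports are placeholders):

```yaml
# A Service named "web" in namespace "default" becomes resolvable
# inside the cluster as: web.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  selector:
    app: web            # pods labeled app=web back this Service
  ports:
    - port: 80          # port the Service exposes
      targetPort: 8080  # container port on the selected pods
```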

In contrast, CNI follows a simpler philosophy: it requires no daemon, works across runtimes (Docker, rkt, etc.), and can be extended with lightweight shell scripts. Early prototypes showed that almost all of the hard‑coded networking logic in the kubelet could be moved into CNI plugins.
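To make the “lightweight shell scripts” point concrete, here is a purely illustrative skeleton of a CNI plugin as a shell function. Under the CNI contract, the runtime passes the operation in the `CNI_COMMAND` environment variable (along with `CNI_CONTAINERID`, `CNI_NETNS`, and `CNI_IFNAME`) and pipes the network configuration JSON to stdin; the plugin reports its result as JSON on stdout. The addresses below are placeholders and no real interfaces are created:

```shell
# Illustrative sketch of a CNI plugin (assuming the CNI 0.3.x env/JSON contract).
cni_plugin() {
  case "$CNI_COMMAND" in
    ADD)
      # A real plugin would create a veth pair, move one end into $CNI_NETNS,
      # and assign an address obtained from IPAM. Here we only print a
      # placeholder result in the shape the runtime expects.
      echo '{"cniVersion":"0.3.1","ips":[{"version":"4","address":"10.1.0.2/24"}]}'
      ;;
    DEL)
      # Tear down whatever ADD created; success is signaled by exit code 0.
      return 0
      ;;
    VERSION)
      echo '{"cniVersion":"0.3.1","supportedVersions":["0.3.0","0.3.1"]}'
      ;;
    *)
      echo "unknown CNI_COMMAND: $CNI_COMMAND" >&2
      return 1
      ;;
  esac
}
```

Because the contract is just environment variables plus JSON over stdin/stdout, there is no daemon to run and nothing runtime‑specific in the interface — which is precisely the simplicity the Kubernetes developers were after.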

Attempts to write a CNM bridge driver for Docker proved extremely complex because CNM and CNI differ fundamentally, and Docker drivers expose only internal IDs rather than meaningful network names, making integration with Kubernetes problematic.

Docker developers have been reluctant to deviate from their existing process, which further hinders third‑party integration. Consequently, the Kubernetes community invested in CNI as the preferred plugin model, accepting a few minor side effects while gaining a simpler and more flexible networking stack.

Without a CNI driver, containers started by Docker may not communicate with those started by Kubernetes, so providing a CNI driver is essential for full integration. The community invites ideas and feedback via the Kubernetes Slack channel and mailing list.

Kubernetes · CNI · Container Runtime · Network Plugins
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
