
Docker Image Storage Showdown: overlayfs vs device-mapper & the Speedy System

This article examines Docker image storage technologies, compares overlayfs and device-mapper drivers, and introduces Speedy, an open‑source distributed backend storage system for Docker images, detailing its architecture, modules, and upload/download workflows.


Docker Image Storage Technology Exploration

Docker relies on several underlying technologies, including Linux kernel features (Namespace, Cgroup), storage mechanisms (overlayfs, aufs, dm), and networking solutions (libnetwork, flannel). This discussion focuses on storage, specifically the CoW file systems for container rootfs and the need for a scalable distributed storage system for Docker images.

The primary storage drivers considered are overlayfs and device‑mapper (dm). overlayfs provides a layered file system with CoW at the file level, while dm offers block‑level thin provisioning with finer‑grained CoW.

overlayfs Overview

overlayfs merges multiple file systems into a single view. A typical mount command is:

mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged

It uses a lower read‑only layer and an upper writable layer; modifications are copied up to the upper layer, enabling efficient CoW.
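To make the file-level granularity concrete, here is a toy Python sketch of overlayfs-style copy-up semantics (illustrative only, not overlayfs itself; the helper names `overlay_read` and `overlay_write` are invented for this example). The key point it demonstrates: the first write copies the entire file to the upper layer, no matter how small the change.

```python
import shutil
from pathlib import Path

def overlay_read(name: str, lower: Path, upper: Path) -> str:
    # Resolve a file the way the merged view does: the upper layer wins.
    for layer in (upper, lower):
        p = layer / name
        if p.exists():
            return p.read_text()
    raise FileNotFoundError(name)

def overlay_write(name: str, data: str, lower: Path, upper: Path) -> None:
    # File-level CoW: before the first write, the *entire* file is
    # copied up from the lower layer, however small the modification.
    target = upper / name
    if not target.exists() and (lower / name).exists():
        shutil.copy(lower / name, target)
    with open(target, "a") as f:
        f.write(data)
```

Appending one byte to a 1 GB file in this model copies the whole gigabyte up first, which is exactly the cost device-mapper's block-level CoW avoids.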

device‑mapper Overview

Device‑mapper operates in the kernel block layer, providing features such as thin provisioning, which allocates disk space on demand and offers block‑level CoW. This can reduce data duplication compared to overlayfs when large files are modified across many containers.
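The contrast with file-level copy-up can be sketched in a few lines. This is a toy model of block-level CoW (the `ThinSnapshot` class and 4 KiB block size are illustrative assumptions, not device-mapper's actual implementation): a snapshot shares all blocks with its origin until one is written, and only that block is copied.

```python
BLOCK = 4096  # illustrative block size

class ThinSnapshot:
    """Toy model of block-level CoW: blocks are shared with the origin
    until written; a write privately copies only the touched block."""

    def __init__(self, origin: bytes):
        self.origin = origin
        self.own = {}  # block index -> private bytearray copy

    def write(self, offset: int, data: bytes) -> None:
        # For simplicity, assume the write fits inside one block.
        idx = offset // BLOCK
        if idx not in self.own:
            start = idx * BLOCK
            self.own[idx] = bytearray(self.origin[start:start + BLOCK])
        rel = offset - idx * BLOCK
        self.own[idx][rel:rel + len(data)] = data

    def copied_bytes(self) -> int:
        return len(self.own) * BLOCK
```

Writing one byte into a 1 MiB origin copies a single 4 KiB block rather than the whole file, which is why dm can waste less space when many containers modify large files.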

Choosing Between dm and overlayfs

dm’s finer‑grained CoW can be more space‑efficient, but because it operates below the file‑system layer it has no awareness of files, so containers reading the same underlying file cannot share page cache, which costs memory and performance. overlayfs requires kernel 3.18+ (when it was merged into mainline) and may not be supported by all distributions without custom compilation.

Overall, dm is an acceptable choice today, with a possible future shift to overlayfs.

Speedy – An Open‑Source Docker Image Backend Storage System

After evaluating existing Docker Registry storage options (local filesystem, S3, Swift), the team developed Speedy, a distributed object storage system tailored for Docker images.

[Figure: Speedy architecture diagram]

Speedy Components

Docker Registry Driver (implements Registry 1.0 protocol)

ChunkMaster (central node tracking ChunkServers)

ChunkServer (stores image chunks; multiple servers form a group)

ImageServer (stateless proxy that selects ChunkServers for upload/download)

MetaServer (stores the chunk metadata queried during download)

Upload Process

When a user runs docker push, the Docker client interacts with the Registry, which forwards image layers to the custom driver. The driver splits the data into fixed‑size chunks, sends them concurrently to the ImageServer, which selects a group of ChunkServers via ChunkMaster. All ChunkServers in the group must store the chunk successfully before the driver reports success to Docker.
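The split-and-upload flow above can be sketched in Python (a hedged illustration of the workflow, not Speedy's actual code, which is written in Go; `CHUNK_SIZE`, `split_layer`, `upload_layer`, and the injected `send_chunk` callback are all hypothetical names):

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4 * 1024 * 1024  # fixed chunk size; the real value is a driver setting

def split_layer(data: bytes) -> list:
    # The driver splits an image layer into fixed-size chunks.
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

def upload_layer(data: bytes, send_chunk) -> bool:
    # Send chunks concurrently; send_chunk stands in for the ImageServer
    # call that writes to every ChunkServer in the selected group.
    chunks = split_layer(data)
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(send_chunk, range(len(chunks)), chunks))
    # Success is reported to Docker only if every chunk was stored.
    return all(results)
```

Requiring every chunk (and, inside the real system, every ChunkServer in the group) to acknowledge before reporting success is what gives the write path its durability guarantee.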

Download Process

During docker pull, the Registry queries the MetaServer via the ImageServer to obtain chunk metadata, then the ImageServer retrieves the chunks from the appropriate ChunkServers. The driver reassembles the chunks in order and streams the complete layer back to Docker.
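The read path is the mirror image: fetch chunks (concurrency is safe because order is restored afterwards) and concatenate them by index. A minimal sketch, again with hypothetical names (`download_layer`, the `fetch_chunk` callback) rather than Speedy's real API:

```python
from concurrent.futures import ThreadPoolExecutor

def download_layer(chunk_ids: list, fetch_chunk) -> bytes:
    # fetch_chunk stands in for the ImageServer call that reads one
    # chunk from a ChunkServer; pool.map preserves input order, so the
    # layer is reassembled exactly as it was split.
    with ThreadPoolExecutor(max_workers=8) as pool:
        chunks = list(pool.map(fetch_chunk, chunk_ids))
    return b"".join(chunks)
```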

The Speedy project is open‑source on GitHub: https://github.com/jcloudpub/speedy .

Speedy’s design mirrors mature distributed storage architectures (e.g., MongoDB clusters) and offers a production‑grade alternative to default Docker storage solutions.
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
