
How Docker Image Layering Boosts CI/CD Speed and Reduces Storage

This article explains Docker's layered image storage, shows how improper layering inflates image size and download time, and provides a practical design method using shared base layers to streamline CI/CD pipelines and improve resource efficiency.


1. Docker Image Layered Storage

To maximize image reuse, speed up container startup, and reduce memory and disk usage, a Docker container’s runtime environment is composed of multiple dependent layers. Each numeric ID in the diagram represents a Docker image layer, and pulling an image downloads all its dependent layers.

For example, an application image may be built on a base image, then layered with tools such as Anaconda, followed by a layer containing model files and dependencies, and finally a writable layer for runtime changes. Each new image adds a new read‑only layer on top of the previous ones.
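As a minimal sketch, each Dockerfile instruction that modifies the filesystem produces one new read‑only layer on top of the layers below it (the image name and packages here are illustrative, not from the article):

```dockerfile
# The base image contributes its own stack of read-only layers,
# which are pulled as-is rather than rebuilt.
FROM python:3.7
# One new read-only layer holding the installed tooling.
RUN apt-get update && apt-get install -y wget
# Another read-only layer containing the application code.
COPY app /workspace/app
# A container started from this image adds a thin writable layer on top;
# runtime changes land there and the image layers stay unmodified.
```

Because each layer is immutable and content-addressed, any number of images and containers on a host can share the same underlying layers on disk.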

2. Deriving Docker Images from a Single Base Image

As Docker usage grows, the number of images increases, and keeping them all up to date becomes a maintenance burden. If every image is built directly from a bare OS image, updating them requires rebuilding each one individually, leading to repeated downloads of identical content.

When many images share the same base, identical layers are downloaded repeatedly, inflating update time, especially for large images (>1 GB). The diagram shows two images built on the same base layer.

On a single Docker host, layers that already exist are not re‑downloaded, but different layers—even if they contain the same content—are still fetched again.
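The savings from content-addressed layer reuse can be sketched with a small calculation (the layer names and sizes below are made up purely for illustration):

```python
# Hypothetical layer sizes in MB, keyed by a stand-in for the layer digest.
LAYERS = {"base-os": 200, "python": 150, "tools": 400, "app1": 50, "app2": 60}

# Two images that share everything except their top application layer.
IMAGE_APP1 = ["base-os", "python", "tools", "app1"]
IMAGE_APP2 = ["base-os", "python", "tools", "app2"]

def pull_cost(image_layers, cached):
    """MB to download: only layers absent from the local layer cache."""
    return sum(LAYERS[layer] for layer in image_layers if layer not in cached)

cache = set()
first = pull_cost(IMAGE_APP1, cache)    # cold host: every layer is fetched
cache.update(IMAGE_APP1)
second = pull_cost(IMAGE_APP2, cache)   # warm host: only app2's unique layer
print(first, second)                    # 800 60
```

If the two images had been built from unrelated base layers with the same content, the second pull would cost the full 810 MB again, which is exactly the waste described above.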

3. Optimizing Docker Images with the Layering Mechanism

Poorly designed images hinder maintenance and CI/CD efficiency. By planning images around layer reuse, we can make the pipeline both faster and easier to sustain.

3.1 Designing Layer‑Based Docker Images

Consider two applications, App1 and App2. Both share the same OS base (Python 3.7), security tools, general tools, and library installations; they differ only in the final code and configuration files. By extracting the common parts into shared layers, we can reuse them across both images.

Combine infrequently changing commands into a single layer, as illustrated below.
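For instance, chaining related commands with `&&` inside a single RUN keeps them in one layer, which also lets cleanup steps actually shrink the image (the URL and paths below are illustrative):

```dockerfile
# Splitting this into three RUN lines would create three layers, and the
# archive deleted in the last step would still occupy space in an earlier
# layer. One RUN line produces a single layer with the cleanup applied:
RUN wget -c https://example.com/toolkit.tar.gz \
    && tar -xzf toolkit.tar.gz -C /opt \
    && rm -f toolkit.tar.gz
```

The trade-off is cache granularity: commands merged into one layer are invalidated together, so this technique fits commands that change infrequently and at the same time.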

Overlay the two tree structures to merge duplicate nodes, resulting in the final configuration tree.

Based on this design, we create three base images (f1–f3) and a final business image (f4) that adds the application code. The four Dockerfile snippets below, shown together for brevity, illustrate the layered build process.

<code># f1: add security components
FROM python3
RUN apt-get update && apt-get install -y some-security-framework

# f2: install base infrastructure
FROM abc.hub.com/libary/python3
RUN wget -c anaconda12.sh && ./anaconda12.sh && rm -f anaconda12.sh

# f3: build model image
FROM abc.hub.com/ai-tools/env-anaconda:12
RUN pip install some-dependences
RUN wget -c s3.xx.com/some-path/dust.model -O /some/path

# f4: build business image
FROM abc.hub.com/rk-ai-tools/env-anaconda-dust:runtime
ADD code /workspace/code
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]</code>
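These four Dockerfiles would be built and pushed in dependency order, so that each FROM line can resolve to the image produced by the previous stage. A sketch of that sequence (the file names and the final business tag are assumptions, not from the article):

```shell
# f1 -> security base, f2 -> infrastructure, f3 -> model, f4 -> business
docker build -f f1.Dockerfile -t abc.hub.com/libary/python3 .
docker push abc.hub.com/libary/python3

docker build -f f2.Dockerfile -t abc.hub.com/ai-tools/env-anaconda:12 .
docker push abc.hub.com/ai-tools/env-anaconda:12

docker build -f f3.Dockerfile -t abc.hub.com/rk-ai-tools/env-anaconda-dust:runtime .
docker push abc.hub.com/rk-ai-tools/env-anaconda-dust:runtime

docker build -f f4.Dockerfile -t abc.hub.com/rk-ai-tools/dust-app:latest .
docker push abc.hub.com/rk-ai-tools/dust-app:latest
```

In day-to-day CI/CD, only the f4 build runs per commit; f1–f3 are rebuilt only when their own contents change, which is what makes the pipeline incremental.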

3.2 Layer‑Based Docker Images in Practice

In experiments, the layered images (Security tools, General tools, Library) total about 1.8 GB, and the resulting application image is around 1.9 GB.

When downloading the app image on a host that already has the base image, only the new layer is fetched, taking about 1 minute 33 seconds.

Downloading the same app image on a host without the base image requires over 7 minutes, because all layers must be retrieved.

On the cold host, the shared base layers alone take more than 4 minutes to download, showing that repeated downloads of duplicate layers severely affect update efficiency. The larger the difference between images, the higher the download cost.
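A back-of-the-envelope check ties these timings together (the bandwidth figure is an assumption chosen only to relate the reported numbers; it is not a measurement from the article):

```python
# Figures reported in the experiment above.
FULL_IMAGE_GB = 1.9     # whole application image
SHARED_BASE_GB = 1.8    # layers already present on the warm host
delta_gb = FULL_IMAGE_GB - SHARED_BASE_GB   # ~0.1 GB actually fetched

def pull_minutes(size_gb, bandwidth_mb_s):
    """Transfer time in minutes for size_gb at bandwidth_mb_s MB/s."""
    return size_gb * 1024 / bandwidth_mb_s / 60

# At an assumed ~4.5 MB/s effective registry bandwidth, a full pull takes
# roughly 7 minutes, while the ~0.1 GB delta transfers in well under a
# minute (the measured 1m33s also includes layer extraction overhead).
full = pull_minutes(FULL_IMAGE_GB, 4.5)
delta = pull_minutes(delta_gb, 4.5)
print(round(full, 1), round(delta, 1))
```

The point of the sketch is the ratio, not the absolute numbers: the warm-host pull moves about 5% of the bytes of the cold-host pull.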

4. Summary

Properly planning Docker image layers shortens pull times, improves CI/CD efficiency, and enables clear role separation among teams. Each team can focus on its own layer, while other teams build on top of shared layers, resulting in a streamlined, incremental image production workflow.

Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
