
Understanding CSI Driver Workflow with an NFS Example

This article explains the architecture, components, and operational flow of a CSI driver using NFS as a concrete example, covering CSI fundamentals, sidecar containers, dynamic volume provisioning, pod creation, and the role of the Linux VFS in exposing remote storage to applications.

System Architect Go

This article uses the CSI driver for NFS as an example to illustrate the workflow and underlying principles of CSI drivers in Kubernetes.

CSI Overview

CSI drivers are divided into two main components:

Controller plugin: manages storage resources on the backend, handling volume creation, deletion, expansion, and snapshots.

Node plugin: handles node-level storage operations, performing mount and unmount tasks on the actual node.

The CSI driver interacts with Kubernetes components as follows:

The Controller plugin communicates with the kube-api-server to watch for changes in storage resources and act accordingly.

The Node plugin registers itself with the kubelet, which later invokes it for volume operations.
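The division of labor can be sketched as two interfaces. This is a simplified Python stand-in for illustration only, not the real CSI-generated gRPC stubs; the method names mirror the CSI spec RPCs, but the signatures are invented here:

```python
from abc import ABC, abstractmethod

class ControllerPlugin(ABC):
    """Cluster-side: volume lifecycle against the storage backend."""
    @abstractmethod
    def create_volume(self, name, capacity_bytes): ...
    @abstractmethod
    def delete_volume(self, volume_id): ...
    @abstractmethod
    def expand_volume(self, volume_id, capacity_bytes): ...

class NodePlugin(ABC):
    """Node-side: mount/unmount on the host where the pod runs."""
    @abstractmethod
    def node_publish_volume(self, volume_id, target_path, volume_context): ...
    @abstractmethod
    def node_unpublish_volume(self, volume_id, target_path): ...
```

A concrete driver (NFS, CephFS, a cloud block store) implements both halves; Kubernetes only ever talks to these interfaces.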

To simplify driver development, the community provides several CSI sidecar containers, including:

external-provisioner: watches PVC objects and calls CreateVolume or DeleteVolume on the CSI driver.

node-driver-registrar: registers the CSI driver with the kubelet.

external-attacher: implements attach/detach hooks.

external-resizer: handles volume expansion.

external-snapshotter: manages volume snapshots.

livenessprobe: monitors the health of the CSI driver.
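The heart of the external-provisioner is a control loop: see a PVC with no volume, ask the driver to create one. A toy version, greatly simplified (the real sidecar watches the API server, creates and binds PV objects, and handles retries; `pending_pvcs` and `driver` here are hypothetical stand-ins):

```python
# Toy reconcile loop in the spirit of external-provisioner.
def reconcile(pending_pvcs, driver, provisioned):
    """For every PVC that has no volume yet, ask the driver to create one."""
    for pvc in pending_pvcs:
        if pvc["name"] not in provisioned:
            # In the real sidecar this is a gRPC CreateVolume call,
            # followed by creating a PV object and binding it to the PVC.
            provisioned[pvc["name"]] = driver.create_volume(
                pvc["name"], pvc["capacity"]
            )
    return provisioned
```

Note the loop is idempotent: re-observing an already-provisioned PVC is a no-op, which is how controllers tolerate restarts and duplicate events.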

CSI Usage Process

1. CSI Driver Preparation Phase

Deploy the Controller plugin as a Deployment or StatefulSet, and the Node plugin as a DaemonSet so that each node runs a pod of the driver.

After installation, the Controller plugin registers with the kube-api-server, while the Node plugin registers with the kubelet.
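Skeleton of that layout, heavily abbreviated (names and container lists are placeholders for illustration, not the real csi-driver-nfs manifests):

```yaml
apiVersion: apps/v1
kind: Deployment              # Controller plugin + its sidecars
metadata:
  name: csi-nfs-controller
spec:
  template:
    spec:
      containers:
        - name: csi-provisioner    # external-provisioner sidecar
        - name: livenessprobe
        - name: nfs                # the CSI driver itself
---
apiVersion: apps/v1
kind: DaemonSet               # Node plugin, one pod per node
metadata:
  name: csi-nfs-node
spec:
  template:
    spec:
      containers:
        - name: node-driver-registrar
        - name: nfs
```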

2. Dynamic Volume Provisioning

Cluster administrators create a StorageClass to declare a storage type. Users then create a PVC that references this StorageClass. The external-provisioner sidecar in the Controller plugin detects the new PVC and sends a gRPC CreateVolume request to the NFS driver, which prepares a shared directory on the NFS server and creates a corresponding PV. The PVC and PV are then bound.
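For NFS, CreateVolume amounts to carving out a per-PV subdirectory under the exported root. A minimal sketch, assuming a locally reachable export root; the directory layout and the `volume_context` keys are illustrative, not the exact ones csi-driver-nfs uses:

```python
import os

def create_volume(export_root, server, pv_name):
    """Provision a shared subdirectory for one PV under the NFS export."""
    share = os.path.join(export_root, pv_name)
    os.makedirs(share, exist_ok=True)
    # volume_context travels inside the PV object; the Node plugin reads
    # it later to know which server and share to mount.
    return {
        "volume_id": pv_name,
        "volume_context": {"server": server, "share": share},
    }
```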

3. Pod Creation and Volume Use

When a user creates a Pod that references the PVC, the kube-api-server receives the request, the scheduler assigns the Pod to a node, and that node's kubelet invokes the Node plugin via the NodePublishVolume gRPC call. The NFS driver mounts the remote directory, making it appear as a local filesystem inside the Pod.

Thus, applications inside the Pod operate on an NFS‑backed remote share transparently.
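NodePublishVolume for NFS essentially shells out to mount(8). Modeled here as a pure command builder so it can be shown without root privileges; the option set is an illustrative assumption, and a real driver would also create the target path and execute the command:

```python
def build_mount_cmd(server, share, target_path, options=("nfsvers=4.1",)):
    """Assemble the mount command a node plugin would run for an NFS share."""
    source = f"{server}:{share}"
    return ["mount", "-t", "nfs", "-o", ",".join(options), source, target_path]
```

The target path is the per-pod volume directory managed by the kubelet, so from the container's point of view the share is just an ordinary directory.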

Linux VFS

Linux adds a Virtual File System (VFS) layer above actual file systems. VFS provides a uniform set of system calls for both local and remote file systems (e.g., NFS). When a process accesses a file under an NFS mount, VFS forwards the request to the NFS client, which communicates with the NFS server via RPC.
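This uniformity is exactly what makes the CSI mount transparent to applications: the same open/read/write syscalls work regardless of what backs the path. A small demonstration; whether `path` lives on ext4 or on an NFS mount, the code is identical:

```python
import os

def roundtrip(path, data):
    """Write then read bytes via raw syscalls; VFS routes them to the
    underlying filesystem (local or NFS) transparently."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    os.write(fd, data)
    os.close(fd)
    fd = os.open(path, os.O_RDONLY)
    out = os.read(fd, len(data))
    os.close(fd)
    return out
```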

Summary

Besides NFS, other distributed storage systems such as CephFS follow a similar architecture. CSI defines a standardized gRPC protocol and driver interaction model, enabling diverse storage solutions to integrate uniformly with Kubernetes.

Tags: cloud native, Kubernetes, Storage, CSI, VFS, NFS