Integrate CephFS with Kubernetes: Static & Dynamic Provisioning Guide
This tutorial walks through creating CephFS pools, configuring Ceph CSI drivers, setting up Kubernetes resources, and verifying static and dynamic provisioning of CephFS storage for containers, complete with command‑line examples and practical tips.
Introduction
The article follows a previous guide on integrating Kubernetes with Ceph RBD and now explains how to integrate Kubernetes with CephFS, covering both static and dynamic provisioning methods.
CephFS Storage Pool Creation
<code>$ ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created
$ ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created</code>
Create CephFS Filesystem
<code>$ ceph fs new cephfs cephfs-metadata cephfs-data
new fs with metadata pool 7 and data pool 8</code>
Kubernetes Integration with CephFS
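Before wiring up Kubernetes, it is worth confirming on the Ceph side that the new filesystem is healthy. A quick check, using the `cephfs` name created above:

```shell
# The filesystem should appear with the metadata/data pools created earlier
ceph fs ls
# An MDS must report "active" before clients can mount the filesystem
ceph fs status cephfs
```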
Download the CephCSI release that contains the dynamic provisioning manifests, extract it, and prepare a directory for the add‑ons.
<code>$ curl -LO https://github.com/ceph/ceph-csi/archive/refs/heads/release-v3.9.zip
$ unzip release-v3.9.zip
$ sudo mkdir -p /etc/kubernetes/addons/cephfs</code>
Create the CSI driver resource.
<code>$ sudo cp ~/ceph-csi-release-v3.9/deploy/cephfs/kubernetes/csidriver.yaml /etc/kubernetes/addons/cephfs/
$ kubectl apply -f /etc/kubernetes/addons/cephfs/csidriver.yaml</code>
Configure the Ceph cluster information.
<code># Create ConfigMap with clusterID and monitor addresses
cat <<EOF | sudo tee /etc/kubernetes/addons/cephfs/csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: kube-system
data:
  config.json: |
    [
      {
        "clusterID": "a43fa047-755e-4208-af2d-f6090154f902",
        "monitors": [
          "172.139.20.20:6789",
          "172.139.20.94:6789",
          "172.139.20.208:6789"
        ]
      }
    ]
EOF
$ kubectl apply -f /etc/kubernetes/addons/cephfs/csi-config-map.yaml
# Create ConfigMap with ceph.conf and admin keyring
$ kubectl -n kube-system create configmap ceph-config \
    --from-file=/etc/ceph/ceph.conf \
    --from-file=keyring=/etc/ceph/ceph.client.admin.keyring</code>
Create the secret that holds CephFS credentials.
<code>cat <<EOF | sudo tee /etc/kubernetes/addons/cephfs/csi-cephfs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: cephfs-csi-secret
  namespace: kube-system
stringData:
  # Required for statically provisioned volumes
  userID: admin
  userKey: AQBiarRmA+FiDRAAH9TqQmxuF+iiJR0jM17Pdw==
  # Required for dynamically provisioned volumes
  adminID: admin
  adminKey: AQBiarRmA+FiDRAAH9TqQmxuF+iiJR0jM17Pdw==
EOF
$ kubectl apply -f /etc/kubernetes/addons/cephfs/csi-cephfs-secret.yaml</code>
Deploy CSI Provisioner and Node Plugin
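Before deploying the provisioner and node plugin, a quick sanity check that the objects they depend on exist (names as created above):

```shell
# The CSI pods mount these objects; a missing one shows up as a pod start failure
kubectl -n kube-system get configmap ceph-csi-config ceph-config
kubectl -n kube-system get secret cephfs-csi-secret
```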
Deploy the RBAC rules and the provisioner deployment, adjusting the image registry to a private mirror.
<code># Provisioner RBAC
$ sudo cp ceph-csi-release-v3.9/deploy/cephfs/kubernetes/csi-provisioner-rbac.yaml /etc/kubernetes/addons/cephfs/
$ sudo sed -ri 's/(namespace):.*/\1: kube-system/g' /etc/kubernetes/addons/cephfs/csi-provisioner-rbac.yaml
$ kubectl apply -f /etc/kubernetes/addons/cephfs/csi-provisioner-rbac.yaml
# Provisioner Deployment
$ sudo cp ceph-csi-release-v3.9/deploy/cephfs/kubernetes/csi-cephfsplugin-provisioner.yaml /etc/kubernetes/addons/cephfs/
# Redirect images to the private mirror; replace <registry> with your mirror's address
$ sudo sed -ri 's@registry.k8s.io/sig-storage@<registry>:5000/library@g' /etc/kubernetes/addons/cephfs/csi-cephfsplugin-provisioner.yaml
$ sudo sed -ri 's@quay.io/cephcsi/cephcsi:.*@<registry>:5000/library/cephcsi:v3.9.0@g' /etc/kubernetes/addons/cephfs/csi-cephfsplugin-provisioner.yaml
$ kubectl -n kube-system apply -f /etc/kubernetes/addons/cephfs/csi-cephfsplugin-provisioner.yaml
# Node plugin RBAC
$ sudo cp ceph-csi-release-v3.9/deploy/cephfs/kubernetes/csi-nodeplugin-rbac.yaml /etc/kubernetes/addons/cephfs/
$ sudo sed -ri 's/(namespace):.*/\1: kube-system/g' /etc/kubernetes/addons/cephfs/csi-nodeplugin-rbac.yaml
$ kubectl apply -f /etc/kubernetes/addons/cephfs/csi-nodeplugin-rbac.yaml
# Node plugin DaemonSet
$ sudo cp ceph-csi-release-v3.9/deploy/cephfs/kubernetes/csi-cephfsplugin.yaml /etc/kubernetes/addons/cephfs/
$ sudo sed -ri 's@registry.k8s.io/sig-storage@<registry>:5000/library@g' /etc/kubernetes/addons/cephfs/csi-cephfsplugin.yaml
$ sudo sed -ri 's@quay.io/cephcsi/cephcsi:.*@<registry>:5000/library/cephcsi:v3.9.0@g' /etc/kubernetes/addons/cephfs/csi-cephfsplugin.yaml
$ kubectl -n kube-system apply -f /etc/kubernetes/addons/cephfs/csi-cephfsplugin.yaml</code>
Create StorageClass for Dynamic Provisioning
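With the manifests applied, the provisioner Deployment and node-plugin DaemonSet should come up in kube-system. It is worth confirming they are Running before creating the StorageClass:

```shell
# Expect provisioner replicas plus one csi-cephfsplugin pod per node
kubectl -n kube-system get pods -o wide | grep csi-cephfs
```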
<code>cat <<EOF | sudo tee /etc/kubernetes/addons/cephfs/storageclass.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-fs-storage
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: a43fa047-755e-4208-af2d-f6090154f902
  fsName: cephfs
  csi.storage.k8s.io/provisioner-secret-name: cephfs-csi-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-expand-secret-name: cephfs-csi-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: cephfs-csi-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Retain
allowVolumeExpansion: true
EOF
$ kubectl apply -f /etc/kubernetes/addons/cephfs/storageclass.yaml</code>
Verification
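First, a quick check that the StorageClass registered with the expected provisioner:

```shell
# Should show PROVISIONER cephfs.csi.ceph.com, RECLAIMPOLICY Retain, ALLOWVOLUMEEXPANSION true
kubectl get storageclass ceph-fs-storage
```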
Create a PersistentVolumeClaim that uses the new StorageClass, then deploy a pod that mounts the claim.
<code># Create PVC
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-ceph-fs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-fs-storage
EOF
$ kubectl get pvc test-ceph-fs-pvc
# Deploy a pod using the PVC
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tools
  template:
    metadata:
      labels:
        app: tools
    spec:
      containers:
      - name: tools
        image: registry.cn-guangzhou.aliyuncs.com/jiaxzeng6918/tools:v1.1
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-ceph-fs-pvc
EOF
$ kubectl exec -it $(kubectl get pods -l app=tools -o jsonpath='{.items[0].metadata.name}') -- df -h /data</code>
Tips
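df -h only proves the mount exists; a write/read round-trip through the tools pod created above shows data actually lands on CephFS:

```shell
POD=$(kubectl get pods -l app=tools -o jsonpath='{.items[0].metadata.name}')
# Write a marker file through the PVC mount, then read it back
kubectl exec "$POD" -- sh -c 'echo cephfs-ok > /data/probe.txt'
kubectl exec "$POD" -- cat /data/probe.txt
```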
clusterID should be the value returned by ceph mon dump 2>/dev/null | grep fsid.
monitors should be the list returned by ceph mon dump 2>/dev/null | awk '/ mon/ {print $0}'; only the v1 address format is supported.
CephFS PVCs can enforce storage size limits.
Supported access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany (see the Kubernetes documentation).
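The secret above also carries userID/userKey for static provisioning. A minimal sketch of a statically provisioned PV/PVC pair, following the ceph-csi static-PVC convention and assuming a subdirectory (here /volumes/static-pv, a hypothetical path) already exists on the filesystem:

```shell
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-cephfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: cephfs.csi.ceph.com
    # volumeHandle only needs to be unique; it is not interpreted for static volumes
    volumeHandle: static-cephfs-pv
    volumeAttributes:
      clusterID: a43fa047-755e-4208-af2d-f6090154f902
      fsName: cephfs
      staticVolume: "true"
      rootPath: /volumes/static-pv
    nodeStageSecretRef:
      name: cephfs-csi-secret
      namespace: kube-system
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # Empty storageClassName plus volumeName binds this claim to the PV above
  storageClassName: ""
  volumeName: static-cephfs-pv
EOF
```

Since no provisioner is involved, deleting the PVC never touches the data; the directory on CephFS must be managed by hand.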
Conclusion
Integrating external storage such as CephFS with Kubernetes expands the storage capabilities of containerized applications, offering flexible and efficient data management. As cloud‑native technologies evolve, more storage solutions will seamlessly integrate with Kubernetes, unlocking data’s potential to drive business innovation.
Linux Ops Smart Journey
The operations journey never stops; the pursuit of excellence has no end.