How to Deploy CephFS Provisioner on Kubernetes with Helm
This guide walks through creating Ceph storage pools, configuring CephFS, and using Helm to deploy the CephFS CSI provisioner on a Kubernetes cluster, including verification steps and tips for reliable, high‑performance persistent storage.
Introduction
In the era of cloud‑native and containerized applications, Kubernetes needs efficient storage solutions. Deploying the CephFS CSI provisioner with Helm provides a simple yet powerful way to improve data management and access performance.
1. Create Ceph storage pools
<code>$ ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created
$ ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created</code>
2. Create CephFS
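Before wiring the pools into a filesystem, it can be worth sanity-checking that both pools exist with the expected PG counts. This is an optional verification step, not part of the original walkthrough:
<code>$ ceph osd pool ls detail | grep cephfs</code>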
<code>$ ceph fs new cephfs cephfs-metadata cephfs-data
new fs with metadata pool 7 and data pool 8</code>
3. Retrieve Ceph information
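In addition to the monitor dump below, <code>ceph fs ls</code> confirms the filesystem and its pool assignments (an optional check):
<code>$ ceph fs ls</code>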
<code>$ ceph mon dump
dumped monmap epoch 2
epoch 2
fsid a43fa047-755e-4208-af2d-f6090154f902
last_changed 2024-08-12T20:34:52.706720+0800
created 2024-08-08T14:48:39.332770+0800
min_mon_release 15 (octopus)
0: [v2:172.139.20.20:3300/0,v1:172.139.20.20:6789/0] mon.storage-ceph01
1: [v2:172.139.20.94:3300/0,v1:172.139.20.94:6789/0] mon.storage-ceph03
2: [v2:172.139.20.208:3300/0,v1:172.139.20.208:6789/0] mon.storage-ceph02</code>
Helm deployment of CephFS provisioner
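The values file prepared below needs the cluster fsid (shown in the mon dump above) and the client.admin key. Assuming the default admin keyring is in use, the key can be read with:
<code>$ ceph auth get-key client.admin</code>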
1. Download chart and push to private Harbor
<code>$ curl -L -O https://github.com/ceph/ceph-csi/archive/refs/tags/v3.9.0.tar.gz
$ tar xvf v3.9.0.tar.gz -C /tmp/
$ cd /tmp/ceph-csi-3.9.0/charts
$ helm package ceph-csi-cephfs
$ helm push ceph-csi-cephfs-3.9.0.tgz oci://core.jiaxzeng.com/plugins</code>
2. Prepare values.yaml
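Before writing the values file, you can optionally confirm the chart reached the registry (requires Helm 3.8+ for OCI support):
<code>$ helm show chart oci://core.jiaxzeng.com/plugins/ceph-csi-cephfs --version 3.9.0</code>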
<code>$ cat <<'EOF' | sudo tee /etc/kubernetes/addons/ceph-csi-cephfs-values.yaml > /dev/null
nodeplugin:
fullnameOverride: ceph-csi-cephfs-nodeplugin
registrar:
image:
repository: 172.139.20.170:5000/library/csi-node-driver-registrar
plugin:
image:
repository: 172.139.20.170:5000/library/cephcsi
tag: v3.9.0
tolerations:
- operator: Exists
provisioner:
fullnameOverride: ceph-csi-cephfs-provisioner
provisioner:
image:
repository: 172.139.20.170:5000/library/csi-provisioner
resizer:
image:
repository: 172.139.20.170:5000/library/csi-resizer
snapshotter:
image:
repository: 172.139.20.170:5000/library/csi-snapshotter
kubeletDir: /var/lib/kubelet
driverName: cephfs.csi.ceph.com
configMapName: cephfs-csi-config
cephConfConfigMapName: cephfs-config
cephconf: |
[global]
fsid = a43fa047-755e-4208-af2d-f6090154f902
cluster_network = 172.139.20.0/24
mon_initial_members = storage-ceph01, storage-ceph02, storage-ceph03
mon_host = 172.139.20.20,172.139.20.208,172.139.20.94
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
csiConfig:
- clusterID: a43fa047-755e-4208-af2d-f6090154f902
monitors:
- "172.139.20.20:6789"
- "172.139.20.94:6789"
- "172.139.20.208:6789"
storageClass:
create: true
name: ceph-fs-storage
clusterID: a43fa047-755e-4208-af2d-f6090154f902
fsName: cephfs
fstype: xfs
reclaimPolicy: Retain
allowVolumeExpansion: true
secret:
create: true
name: csi-cephfs-secret
adminID: admin
adminKey: AQBiarRmA+FiDRAAH9TqQmxuF+iiJR0jM17Pdw==
EOF</code>3. Deploy the provisioner
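The release targets the storage-system namespace. On a fresh cluster (an assumption; skip this if the namespace already exists), create it first:
<code>$ kubectl create namespace storage-system</code>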
<code>$ helm -n storage-system upgrade --install csi-cephfs -f /etc/kubernetes/addons/ceph-csi-cephfs-values.yaml oci://core.jiaxzeng.com/plugins/ceph-csi-cephfs --version 3.9.0
Pulled: core.jiaxzeng.com/plugins/ceph-csi-cephfs:3.9.0
Digest: sha256:092b853cde5870b709845aff209a336c8f9d15b5c9b02f57ed03fcfd93caf4c6
Release "csi-cephfs" has been upgraded. Happy Helming!</code>
Tip: If you originally deployed with manifests and are recreating PVCs, do not delete the secret used by the original StorageClass; otherwise pods will be stuck in the ContainerCreating state.
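To double-check the release and confirm the CSI driver registered with the cluster, something like the following works (names taken from the values file above):
<code>$ helm -n storage-system status csi-cephfs
$ kubectl get csidriver cephfs.csi.ceph.com</code>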
Verification
1. Check pod status
<code>$ kubectl -n storage-system get pod | grep cephfs
ceph-csi-cephfs-nodeplugin-2p2ll 3/3 Running 3 (5m41s ago) 17h
... (other nodeplugin and provisioner pods) ...</code>
2. Create PVC and Deployment
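Before creating the PVC, it is worth confirming that the StorageClass created by the chart is present:
<code>$ kubectl get storageclass ceph-fs-storage</code>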
<code>$ cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: tools-cephfs
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 3Gi
storageClassName: ceph-fs-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: tools-cephfs
spec:
replicas: 1
selector:
matchLabels:
app: tools-cephfs
template:
metadata:
labels:
app: tools-cephfs
spec:
containers:
- image: core.jiaxzeng.com/library/tools:v1.3
name: tools
volumeMounts:
- name: data
mountPath: /app
volumes:
- name: data
persistentVolumeClaim:
claimName: tools-cephfs
EOF</code>3. Verify pod and storage
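Alongside the pod and mount checks in this step, the PVC itself should report a Bound status (an optional extra check):
<code>$ kubectl get pvc tools-cephfs</code>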
<code>$ kubectl get pod -l app=tools-cephfs
NAME READY STATUS RESTARTS AGE
tools-cephfs-d4b6748ff-vnppd 1/1 Running 0 78s
$ kubectl exec -it deploy/tools-cephfs -- df -h /app
Filesystem Size Used Avail Use% Mounted on
172.139.20.20:6789,172.139.20.94:6789,172.139.20.208:6789:/volumes/csi/csi-vol-e4c8c737-6b6a-4fb7-b4f2-243a9200b1da/d3606737-4783-445d-8f5d-2c62398164fe 3.0G 0 3.0G 0% /app</code>
Conclusion
Deploying the CephFS provisioner with Helm simplifies the complex storage deployment process and provides stable, high‑performance persistent storage for Kubernetes environments, allowing development teams to focus on application innovation without worrying about underlying infrastructure.