
Deploy Ceph RBD Provisioner on Kubernetes with Helm in Minutes

This step‑by‑step guide explains how to set up a Ceph RBD provisioner on a Kubernetes cluster using Helm, covering storage pool creation, user configuration, Helm chart values, installation commands, and verification procedures to ensure reliable persistent storage.


Modern cloud‑native applications on Kubernetes need reliable persistent storage, and Ceph's RADOS Block Device (RBD) provides a distributed block‑storage backend for it. This guide shows how to quickly deploy the Ceph RBD provisioner (ceph-csi) with a Helm chart, covering pool creation, user setup, chart configuration, installation, and verification.

[Figure: Ceph CSI architecture diagram]

Ceph RBD Operations

Create an RBD pool named kubernetes:

<code>$ ceph osd pool create kubernetes 128 128</code>
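The two 128 arguments are pg_num and pgp_num, the pool's placement-group counts. A quick sanity check of that figure uses the common rule of thumb of roughly 100 PGs per OSD, divided by the replica count and rounded to a power of two (the OSD and replica counts below are illustrative assumptions, not taken from this cluster; recent Ceph releases can also size pools automatically via the pg_autoscaler module):

```shell
# Rule-of-thumb PG sizing (a sketch; osds and replicas are assumed values).
osds=4
replicas=3
target=$(( osds * 100 / replicas ))   # ~133 for this example
# Round down to the nearest power of two.
pgs=1
while [ $(( pgs * 2 )) -le "$target" ]; do pgs=$(( pgs * 2 )); done
echo "$pgs"   # prints 128, matching the pool creation command above
```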

Initialize the pool:

<code>$ rbd pool init kubernetes</code>

Create a new client user:

<code>$ sudo ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes' -o /etc/ceph/ceph.client.kubernetes.keyring</code>
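The secret.userKey value used later in the Helm values file is the base64 key from this keyring. On a live cluster, ceph auth get-key client.kubernetes prints it directly; as an illustration, the snippet below parses a hypothetical keyring file in the same format (the key shown is the one from this article's values file):

```shell
# Hypothetical keyring matching the format Ceph writes; on a real cluster,
# read /etc/ceph/ceph.client.kubernetes.keyring instead, or simply run:
#   ceph auth get-key client.kubernetes
cat > /tmp/ceph.client.kubernetes.keyring <<'EOF'
[client.kubernetes]
    key = AQArDbpmYEqxJhAAUP26aPfoHHr+saBtkjdTIw==
EOF
# Extract just the base64 key, as needed for secret.userKey in the Helm values.
awk -F' = ' '/key/ {print $2}' /tmp/ceph.client.kubernetes.keyring
```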

Retrieve Ceph monitor information:

<code>$ ceph mon dump</code>
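The Helm values file needs two facts from this output: the cluster fsid (used as clusterID) and the monitor v1 endpoints (used in csiConfig). As a sketch, the snippet below parses a trimmed, hypothetical mon dump output matching this article's cluster; on a real cluster, pipe the live command output instead:

```shell
# Hypothetical, trimmed sample of `ceph mon dump` output.
cat > /tmp/mon-dump.txt <<'EOF'
epoch 2
fsid a43fa047-755e-4208-af2d-f6090154f902
0: [v2:172.139.20.20:3300/0,v1:172.139.20.20:6789/0] mon.storage-ceph01
1: [v2:172.139.20.94:3300/0,v1:172.139.20.94:6789/0] mon.storage-ceph02
2: [v2:172.139.20.208:3300/0,v1:172.139.20.208:6789/0] mon.storage-ceph03
EOF
# Pull the fsid (the clusterID in the values file)...
grep '^fsid' /tmp/mon-dump.txt
# ...and the v1 monitor endpoints for the csiConfig monitors list.
grep -o 'v1:[0-9.]*:6789' /tmp/mon-dump.txt | sed 's/^v1://'
```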

Deploy the Ceph RBD Provisioner with Helm

Download the chart files:

<code>$ curl -L -O https://github.com/ceph/ceph-csi/archive/refs/tags/v3.9.0.tar.gz
$ sudo tar xvf v3.9.0.tar.gz -C /etc/kubernetes/addons/</code>

Create the Helm values file at /etc/kubernetes/addons/ceph-csi-rbd-values.yaml (excerpt):

<code>nodeplugin:
  fullnameOverride: ceph-csi-rbd-nodeplugin
  registrar:
    image:
      repository: 172.139.20.170:5000/library/csi-node-driver-registrar
  plugin:
    image:
      repository: 172.139.20.170:5000/library/cephcsi
      tag: v3.9.0
  tolerations:
    - operator: Exists

provisioner:
  fullnameOverride: ceph-csi-rbd-provisioner
  provisioner:
    image:
      repository: 172.139.20.170:5000/library/csi-provisioner
  attacher:
    image:
      repository: 172.139.20.170:5000/library/csi-attacher
  resizer:
    image:
      repository: 172.139.20.170:5000/library/csi-resizer
  snapshotter:
    image:
      repository: 172.139.20.170:5000/library/csi-snapshotter
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - csi-rbdplugin-provisioner
          topologyKey: "kubernetes.io/hostname"

kubeletDir: /var/lib/kubelet
driverName: rbd.csi.ceph.com

cephconf: |
  [global]
  fsid = a43fa047-755e-4208-af2d-f6090154f902
  cluster_network = 172.139.20.0/24
  mon_initial_members = storage-ceph01, storage-ceph02, storage-ceph03
  mon_host = 172.139.20.20,172.139.20.208,172.139.20.94
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx

csiConfig:
- clusterID: a43fa047-755e-4208-af2d-f6090154f902
  monitors:
    - "172.139.20.20:6789"
    - "172.139.20.94:6789"
    - "172.139.20.208:6789"

storageClass:
  create: true
  name: ceph-rbd-storage
  clusterID: a43fa047-755e-4208-af2d-f6090154f902
  pool: kubernetes
  fstype: xfs
  reclaimPolicy: Retain
  allowVolumeExpansion: true

secret:
  create: true
  name: csi-rbd-secret
  userID: kubernetes
  userKey: AQArDbpmYEqxJhAAUP26aPfoHHr+saBtkjdTIw==</code>
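With these values, the chart wires a StorageClass to the pool and Secret defined above. As a rough sketch (parameter names follow ceph-csi StorageClass conventions; the exact output may vary by chart version, so verify with helm template against your values file), the rendered object looks approximately like:

```yaml
# Sketch of the StorageClass rendered from the values above, not verbatim
# chart output.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-storage
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: a43fa047-755e-4208-af2d-f6090154f902
  pool: kubernetes
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: storage-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: storage-system
reclaimPolicy: Retain
allowVolumeExpansion: true
```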

Install the provisioner with Helm:

<code>$ helm -n storage-system install csi-rbd -f /etc/kubernetes/addons/ceph-csi-rbd-values.yaml /etc/kubernetes/addons/ceph-csi-3.9.0/charts/ceph-csi-rbd</code>

Verification

Check pod status:

<code>$ kubectl -n storage-system get pod</code>

Create a PVC and a Deployment that mounts it (excerpt):

<code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tools
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 3Gi
  storageClassName: ceph-rbd-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tools
  template:
    metadata:
      labels:
        app: tools
    spec:
      containers:
      - image: core.jiaxzeng.com/library/tools:v1.3
        name: tools
        volumeMounts:
        - name: data
          mountPath: /app
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: tools</code>

Validate the pod and storage usage:

<code>$ kubectl get pod -l app=tools
$ kubectl exec -it deploy/tools -- df -h /app</code>

Tip: If you previously deployed ceph-csi via raw manifests and are recreating PVCs, do not delete the Secret referenced by the original StorageClass; otherwise new pods may hang in the ContainerCreating state because the volume cannot be staged without it.

By following these steps, you can efficiently manage persistent storage in a Kubernetes cluster, improving flexibility, reliability, and operational efficiency.

Tags: cloud native, Kubernetes, Storage, Ceph, Helm, RBD
Written by Linux Ops Smart Journey

The operations journey never stops—pursuing excellence endlessly.