
How to Integrate Ceph RBD with Kubernetes: A Step‑by‑Step Guide

This tutorial walks through configuring static and dynamic Ceph RBD storage for Kubernetes: version compatibility, Ceph pool setup, CSI configuration, deployment of the provisioner and node plugins, and validation with a PVC and pod, with detailed command examples throughout.

Linux Ops Smart Journey

This article follows a previous guide on Kubernetes NFS integration and now explains how to integrate Ceph RBD with Kubernetes, covering both static and dynamic provisioning.

Version Compatibility

Ceph‑CSI publishes a matrix of the Kubernetes releases each of its versions has been tested against; matching versions is recommended to get access to the latest features.

Tip

All listed Kubernetes and Ceph versions have been validated by the Ceph‑CSI provider; for newer features, use the corresponding versions. Ceph Pacific does not provide a CentOS 7 RPM, so the Octopus release is used with Kubernetes 1.27 and Ceph‑CSI v3.9.x.

Ceph RBD Related Information

1. Create a Kubernetes storage pool (the two 128 arguments set pg_num and pgp_num):

<code>$ ceph osd pool create kubernetes 128 128
pool 'kubernetes' created</code>

2. Initialize the pool:

<code>$ rbd pool init kubernetes</code>

3. Create a new Ceph user for Kubernetes:

<code>$ sudo ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes' -o /etc/ceph/ceph.client.kubernetes.keyring</code>

4. Retrieve Ceph cluster information (the fsid and monitor addresses, which the CSI configuration needs):

<code>$ ceph mon dump
... (output omitted for brevity)</code>
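The clusterID used in the CSI ConfigMap and StorageClass is the cluster fsid, and the monitors array comes from the same dump. A minimal sketch of pulling both out of saved output (the heredoc is a stand-in sample in the layout of Octopus-era output; on a live cluster, replace it with ceph mon dump > /tmp/mon-dump.txt):

```shell
# Stand-in for live `ceph mon dump` output, same field layout
cat > /tmp/mon-dump.txt <<'EOF'
epoch 1
fsid a43fa047-755e-4208-af2d-f6090154f902
min_mon_release 15 (octopus)
0: [v2:172.139.20.20:3300/0,v1:172.139.20.20:6789/0] mon.node1
1: [v2:172.139.20.94:3300/0,v1:172.139.20.94:6789/0] mon.node2
2: [v2:172.139.20.208:3300/0,v1:172.139.20.208:6789/0] mon.node3
EOF

# clusterID for the CSI ConfigMap and StorageClass
awk '/^fsid/ {print $2}' /tmp/mon-dump.txt

# v1 monitor addresses for the "monitors" array
grep -o 'v1:[0-9.]*:6789' /tmp/mon-dump.txt | sed 's/^v1://'
```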

Kubernetes Integration Steps

1. Download Ceph‑CSI files for dynamic provisioning:

<code>$ curl -LO https://github.com/ceph/ceph-csi/archive/refs/heads/release-v3.9.zip
$ unzip release-v3.9.zip
$ sudo mkdir -p /etc/kubernetes/addons/cephrbd</code>

2. Create the CSI ConfigMap with cluster ID and monitor addresses:

<code># ceph-csi config map
cat <<EOF | sudo tee /etc/kubernetes/addons/cephrbd/csi-config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: kube-system
data:
  config.json: |
    [{
      "clusterID": "a43fa047-755e-4208-af2d-f6090154f902",
      "monitors": ["172.139.20.20:6789","172.139.20.94:6789","172.139.20.208:6789"]
    }]
EOF
kubectl apply -f /etc/kubernetes/addons/cephrbd/csi-config-map.yaml</code>

3. Create the CSI KMS ConfigMap (empty for now):

<code>cat <<EOF | sudo tee /etc/kubernetes/addons/cephrbd/csi-kms-config-map.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-encryption-kms-config
  namespace: kube-system
data:
  config.json: |-
    {}
EOF
kubectl apply -f /etc/kubernetes/addons/cephrbd/csi-kms-config-map.yml</code>

4. Create the CSI RBD secret with the Ceph user credentials:

<code>cat <<EOF | sudo tee /etc/kubernetes/addons/cephrbd/csi-rbd-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: kube-system
stringData:
  userID: kubernetes
  userKey: AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg==
EOF
kubectl apply -f /etc/kubernetes/addons/cephrbd/csi-rbd-secret.yaml</code>
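The userKey value is the bare key from the keyring created in step 3, not the whole keyring file. On a live cluster, ceph auth get-key client.kubernetes prints it directly; as a local sketch, it can also be parsed out of the keyring (the heredoc stands in for /etc/ceph/ceph.client.kubernetes.keyring):

```shell
# Stand-in for /etc/ceph/ceph.client.kubernetes.keyring from step 3
cat > /tmp/ceph.client.kubernetes.keyring <<'EOF'
[client.kubernetes]
	key = AQD9o0Fd6hQRChAAt7fMaSZXduT3NWEqylNpmg==
EOF

# Extract just the key for the Secret's userKey field
awk '$1 == "key" {print $3}' /tmp/ceph.client.kubernetes.keyring
```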

5. Upload the Ceph configuration and keyring to a ConfigMap:

<code>$ kubectl -n kube-system create configmap ceph-config --from-file=/etc/ceph/ceph.conf --from-file=keyring=/etc/ceph/ceph.client.kubernetes.keyring</code>

6. Deploy the CSI RBD provisioner (RBAC and deployment files), adjusting the namespace and image registry as needed, then apply them:

<code># Example commands (paths adjusted for v3.9 release)
sudo cp ceph-csi-release-v3.9/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml /etc/kubernetes/addons/cephrbd/
sudo sed -ri 's/(namespace):.*/\1: kube-system/' /etc/kubernetes/addons/cephrbd/csi-provisioner-rbac.yaml
kubectl apply -f /etc/kubernetes/addons/cephrbd/csi-provisioner-rbac.yaml

sudo cp ceph-csi-release-v3.9/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml /etc/kubernetes/addons/cephrbd/
sudo sed -ri 's/(namespace):.*/\1: kube-system/' /etc/kubernetes/addons/cephrbd/csi-rbdplugin-provisioner.yaml
# replace image registry if needed
kubectl apply -f /etc/kubernetes/addons/cephrbd/csi-rbdplugin-provisioner.yaml</code>

7. Deploy the CSI RBD node plugin (a DaemonSet) the same way:

<code># RBAC
sudo cp ceph-csi-release-v3.9/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml /etc/kubernetes/addons/cephrbd/
sudo sed -ri 's/(namespace):.*/\1: kube-system/' /etc/kubernetes/addons/cephrbd/csi-nodeplugin-rbac.yaml
kubectl apply -f /etc/kubernetes/addons/cephrbd/csi-nodeplugin-rbac.yaml

# Deployment
sudo cp ceph-csi-release-v3.9/deploy/rbd/kubernetes/csi-rbdplugin.yaml /etc/kubernetes/addons/cephrbd/
sudo sed -ri 's/(namespace):.*/\1: kube-system/' /etc/kubernetes/addons/cephrbd/csi-rbdplugin.yaml
kubectl apply -f /etc/kubernetes/addons/cephrbd/csi-rbdplugin.yaml</code>
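Before creating the StorageClass, it is worth confirming that all CSI pods came up. The stock manifests deploy the provisioner as a Deployment with 3 replicas and the node plugin as a DaemonSet, one pod per node:

```shell
# All csi-rbdplugin* pods should show Running with all containers ready
kubectl -n kube-system get pods -o wide | grep csi-rbdplugin
```

If any pod is stuck in ImagePullBackOff, revisit the image-registry substitution mentioned in step 6.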

8. Create a StorageClass for Ceph RBD:

<code>cat <<EOF | sudo tee /etc/kubernetes/addons/cephrbd/csi-rbd-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-storage
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: a43fa047-755e-4208-af2d-f6090154f902
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
kubectl apply -f /etc/kubernetes/addons/cephrbd/csi-rbd-sc.yaml</code>

Validation

1. Create a PersistentVolumeClaim using the new StorageClass:

<code>cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-ceph-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd-storage
EOF
kubectl get pvc test-ceph-rbd-pvc</code>
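Once the PVC reports Bound, the dynamically created PV and its backing RBD image can be inspected. By default, ceph-csi names images with a csi-vol- prefix (exact image name will differ per cluster):

```shell
# PV provisioned by rbd.csi.ceph.com for this claim
kubectl get pv

# Backing image in the kubernetes pool (run on a Ceph node)
rbd ls kubernetes
# expect an image like csi-vol-<uuid>
```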

2. Deploy a pod that mounts the PVC and verify the storage is accessible:

<code>cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tools
  template:
    metadata:
      labels:
        app: tools
    spec:
      containers:
      - name: tools
        image: registry.cn-guangzhou.aliyuncs.com/jiaxzeng6918/tools:v1.1
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-ceph-rbd-pvc
EOF
kubectl exec -it $(kubectl get pods -l app=tools -o jsonpath='{.items[0].metadata.name}') -- df -h /data</code>
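Beyond df, a quick write/read round trip confirms the RBD volume is actually usable from inside the pod (file name here is arbitrary):

```shell
# Round-trip test: write a file on the RBD-backed mount and read it back
POD=$(kubectl get pods -l app=tools -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- sh -c 'echo rbd-ok > /data/rbd-test && cat /data/rbd-test'
```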

Conclusion

Integrating external storage like Ceph RBD with Kubernetes expands containerized applications' storage capabilities, offering flexible and efficient data management. As cloud‑native technologies evolve, more storage solutions will seamlessly integrate with Kubernetes, unlocking data’s potential to drive business innovation.

Tags: cloud native, Kubernetes, Storage, CSI, Ceph, RBD
Written by

Linux Ops Smart Journey

The operations journey never stops—pursuing excellence endlessly.
