Connecting Ceph Storage to Kubernetes for Dynamic PVC Provisioning
This guide walks through configuring a Ceph cluster as the backend storage for a Kubernetes environment, covering host preparation, creating storage pools, secrets, StorageClass, PVCs, deploying a test pod, troubleshooting common errors, and enabling dynamic provisioning via an external rbd provisioner.
Host List
master-1 (Ceph mon, osd) – IP 172.16.200.101 – Kernel 4.4.247-1.el7.elrepo.x86_64
master-2 (Ceph mon, osd) – IP 172.16.200.102 – Kernel 4.4.247-1.el7.elrepo.x86_64
master-3 (Ceph mon, osd) – IP 172.16.200.103 – Kernel 4.4.247-1.el7.elrepo.x86_64
node-1 (Ceph osd) – IP 172.16.200.104 – Kernel 4.4.247-1.el7.elrepo.x86_64
Prerequisites
Upgrade the kernel to version 4.1.4 or newer as recommended by Ceph.
<code>root ~ >>> ceph health
HEALTH_OK
</code>Ensure the Ceph cluster is healthy before proceeding.
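Because the kernel RBD client depends on kernel features, it is worth scripting the version check on every host before going further. A minimal sketch (the 4.1.4 threshold comes from the recommendation above; the `version_ge` helper is mine):

```shell
#!/bin/sh
# Sketch: verify this host's kernel meets the recommended minimum (4.1.4).
# version_ge A B succeeds when A >= B under natural version ordering.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

required="4.1.4"
current="$(uname -r | cut -d- -f1)"   # e.g. 4.4.247 on the hosts above
if version_ge "$current" "$required"; then
  echo "kernel $current OK (>= $required)"
else
  echo "kernel $current too old, upgrade before integrating Ceph"
fi
```

Run it on each of the four hosts; all of them should report OK before you touch the pools.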
Integration Process
Create a storage pool
<code>root ~ >>> ceph osd pool create mypool 128 128 # set PG and PGP to 128
pool 'mypool' created
root ~ >>> ceph osd pool ls
mypool
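# PG sizing sketch (a common heuristic, not from this article): aim for roughly
# (OSD count * 100) / replica size, rounded to the nearest power of two.
# With the 4 OSDs listed above and the default replica size of 3:
osds=4; size=3
target=$(( osds * 100 / size ))   # 133
p=1; while [ $((p * 2)) -le "$target" ]; do p=$((p * 2)); done
[ $((target - p)) -le $((p * 2 - target)) ] && pgs=$p || pgs=$((p * 2))
echo "$pgs"                       # 128, matching the pool created above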
</code>Create secret objects
Create a Ceph user for Kubernetes and grant appropriate permissions.
<code>root ~ >>> ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=mypool'
</code>Generate the secret YAML files.
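The `key:` field of a kubernetes.io/rbd Secret must hold the base64-encoded Ceph key itself, not the command string. A sketch of the encoding step (the dummy key is purely illustrative; on a real cluster the value comes from `ceph auth get-key`):

```shell
#!/bin/sh
# On the Ceph node the real value comes from:
#   ceph auth get-key client.kube | base64
# Demonstrated here with a dummy key so the encoding step is visible:
dummy_key="AQDDemoOnlyNotARealCephKey=="
encoded="$(printf '%s' "$dummy_key" | base64)"
echo "$encoded"
# Sanity check: decoding must return the original key
printf '%s' "$encoded" | base64 -d
```

Paste the encoded string (not the command) into the `key:` field of each Secret.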
<code>apiVersion: v1
kind: Secret
metadata:
  name: ceph-kube-secret
  namespace: default
data:
  # paste the output of: ceph auth get-key client.kube | base64
  key: ""
type: kubernetes.io/rbd
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: default
data:
  # paste the output of: ceph auth get-key client.admin | base64
  key: ""
type: kubernetes.io/rbd
</code>Apply the secrets to the cluster.
<code>root ~/k8s/ceph >>> kubectl apply -f ceph-secret.yaml
</code>Create a StorageClass
<code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storageclass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.16.200.101:6789,172.16.200.102:6789,172.16.200.103:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: default
  pool: mypool
  userId: kube
  userSecretName: ceph-kube-secret
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
</code> <code>root ~/k8s/ceph >>> kubectl apply -f ceph-storageclass.yaml
root ~/k8s/ceph >>> kubectl get sc
NAME                PROVISIONER         AGE
ceph-storageclass   kubernetes.io/rbd   87s
</code>Create a PVC
<code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-test-claim
spec:
  storageClassName: ceph-storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code>Access modes:
ReadWriteOnce: read-write, mountable by a single node.
ReadOnlyMany: read-only, mountable by multiple nodes.
ReadWriteMany: read-write, mountable by multiple nodes.
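These long names appear abbreviated in kubectl listings (as in the `kubectl get pv` output later). A small helper sketch mapping one to the other (the function name is mine, not part of any CLI):

```shell
#!/bin/sh
# Map a PersistentVolume access mode to the short form kubectl prints.
mode_short() {
  case "$1" in
    ReadWriteOnce) echo RWO ;;
    ReadOnlyMany)  echo ROX ;;
    ReadWriteMany) echo RWX ;;
    *)             echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

mode_short ReadWriteOnce   # prints RWO, the mode requested by the claim above
```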
Create a test pod
<code>apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod
spec:
  containers:
    - name: ceph-busybox
      image: busybox:1.32.0
      command: ["/bin/sh", "-c", "tail -f /etc/resolv.conf"]
      volumeMounts:
        - name: ceph-volume
          mountPath: /usr/share/busybox
          readOnly: false
  volumes:
    - name: ceph-volume
      persistentVolumeClaim:
        claimName: ceph-test-claim
</code>Check the pod status; if it never reaches Running, the default kubernetes.io/rbd provisioner is likely missing the rbd binary.
<code>rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH:
</code>Troubleshooting
The error occurs because the gcr.io kube-controller-manager image does not contain the rbd command. Deploy an external provisioner to handle RBD volumes.
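A quick way to confirm the diagnosis on any node (or inside any image) is to check whether the binary is reachable; `command -v` is POSIX shell:

```shell
#!/bin/sh
# The in-tree kubernetes.io/rbd provisioner shells out to `rbd`, so a host or
# image without the binary fails exactly like the event above.
if command -v rbd >/dev/null 2>&1; then
  echo "rbd present at $(command -v rbd)"
else
  echo "rbd missing: executable file not found in PATH"
fi
```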
Deploy external provisioner
<code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "quay.io/external_storage/rbd-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccountName: persistent-volume-binder
</code>Note: the ServiceAccount must be
persistent-volume-binder; the default account lacks permission to list resources.
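If that ServiceAccount does not already exist in your cluster, a minimal way to create and authorize it is sketched below. Note that cluster-admin is a blunt, lab-grade shortcut (scope it down in production), and the binding name is my own choice:

```shell
kubectl create serviceaccount persistent-volume-binder -n kube-system
kubectl create clusterrolebinding rbd-provisioner-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:persistent-volume-binder
```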
<code>root ~/k8s/ceph >>> kubectl apply -f storageclass-fix-deployment.yaml
</code>Update the StorageClass to use the new provisioner. The provisioner field is immutable, so delete the old StorageClass first and re-apply it.
<code># change the provisioner field in the StorageClass
metadata:
  name: ceph-storageclass
provisioner: ceph.com/rbd
</code> <code>root ~/k8s/ceph >>> kubectl get sc
NAME                PROVISIONER    AGE
ceph-storageclass   ceph.com/rbd   64m
</code>Recreate PVC and pod
<code>root ~/k8s/ceph >>> kubectl delete -f ceph-storageclass-pvc.yaml && kubectl apply -f ceph-storageclass-pvc.yaml
root ~/k8s/ceph >>> kubectl delete -f ceph-busybox-pod.yaml && kubectl apply -f ceph-busybox-pod.yaml
</code>The pod reaches the Running state.
<code>root ~/k8s/ceph >>> kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
ceph-pod   1/1     Running   0          47s
</code>Verify PV
<code>root ~/k8s/ceph >>> kubectl get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS        REASON   AGE
persistentvolume/pvc-...   5Gi        RWO            Delete           Bound    default/ceph-test-claim   ceph-storageclass            112s
</code>Inside the pod, the RBD image is mapped as block device /dev/rbd0 and mounted at /usr/share/busybox.
<code>root ~/k8s/ceph >>> kubectl exec -it ceph-pod -- /bin/sh
# mount | grep share
/dev/rbd0 on /usr/share/busybox type ext4 (rw,seclabel,relatime,stripe=1024,data=ordered)
</code>List the images in the mypool pool:
<code>root ~/k8s/ceph >>> rbd ls -p mypool
kubernetes-dynamic-pvc-...
</code>Common Errors
1. RBD image cannot be mapped as a block device.
<code>Warning FailedMount ... rbd: map failed: exit status 110, rbd output: rbd: sysfs write failed
...</code>Cause: the kernel RBD client (especially on kernels older than 4.5) does not support some of the feature flags the cluster advertises. Lower the CRUSH tunables profile so older clients can map images:
<code>ceph osd crush tunables hammer
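# Alternative fix (not from this article): instead of lowering tunables
# cluster-wide, strip the unsupported features from the affected image;
# <image> is a placeholder for the image name reported by `rbd ls -p mypool`:
#   rbd feature disable mypool/<image> exclusive-lock object-map fast-diff deep-flatten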
</code>After applying the above steps, the Ceph storage is successfully integrated with Kubernetes, and dynamic PVC provisioning works as expected.
Raymond Ops
Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.