Deploying a Rook‑Ceph Cluster on Kubernetes with Operator, Toolbox and Dashboard
This tutorial walks through installing Rook as a cloud‑native storage orchestrator on Kubernetes, deploying the Rook operator, creating a CephCluster CRD, verifying OSD and monitor pods, setting up the Rook toolbox, exposing the Ceph dashboard via a NodePort service, and configuring monitoring and access credentials.
Rook is an open‑source cloud‑native storage orchestration tool that turns storage software into self‑managing, self‑scaling, and self‑healing services by leveraging Kubernetes capabilities.
The example environment uses Kubernetes v1.16.2, Docker 18.09.9, and Rook release‑1.1.
Deploy Rook Operator
Apply the Rook manifests for the selected release:
$ kubectl apply -f common.yaml
$ kubectl apply -f operator.yaml

Verify the rook-ceph-operator pod is in the Running state:
$ kubectl get pod -n rook-ceph

Create CephCluster
Prepare a cluster.yaml file (excerpt):
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4-20190917
  dataDirHostPath: /data/rook
  mon:
    count: 3
  dashboard:
    enabled: true
  storage:
    useAllNodes: true
    useAllDevices: false
    directories:
    - path: /var/lib/rook

Create the cluster:
$ kubectl apply -f cluster.yaml

Check the pods in the rook-ceph namespace; all should show Running status, confirming a healthy Ceph deployment.
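Rather than polling `kubectl get pod` by hand, the wait can be scripted. A minimal sketch, assuming `kubectl` points at the cluster and using the default `app=` labels that the Rook release-1.1 operator puts on its pods (the 300s timeout is an arbitrary choice):

```shell
# Block until the monitor and OSD pods report Ready, or time out.
kubectl -n rook-ceph wait --for=condition=Ready pod \
  -l app=rook-ceph-mon --timeout=300s
kubectl -n rook-ceph wait --for=condition=Ready pod \
  -l app=rook-ceph-osd --timeout=300s
```

OSD pods only appear after the prepare jobs finish, so the second wait may need to be retried on a slow cluster.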
Rook Toolbox
Deploy the toolbox for debugging and testing:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph
  labels:
    app: rook-ceph-tools
spec:
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
      - name: rook-ceph-tools
        image: rook/ceph:v1.1.0
        command: ["/tini"]
        args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
        imagePullPolicy: IfNotPresent
        env:
        - name: ROOK_ADMIN_SECRET
          valueFrom:
            secretKeyRef:
              name: rook-ceph-mon
              key: admin-secret
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /dev
          name: dev
        - mountPath: /sys/bus
          name: sysbus
        - mountPath: /lib/modules
          name: libmodules
        - mountPath: /etc/rook
          name: mon-endpoint-volume
      volumes:
      - name: dev
        hostPath:
          path: /dev
      - name: sysbus
        hostPath:
          path: /sys/bus
      - name: libmodules
        hostPath:
          path: /lib/modules
      - name: mon-endpoint-volume
        configMap:
          name: rook-ceph-mon-endpoints
          items:
          - key: data
            path: mon-endpoints

Apply the toolbox deployment:
$ kubectl apply -f toolbox.yaml

Enter the toolbox pod to run Ceph commands such as ceph status, ceph osd status, ceph df, and rados df for health checks.
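As a sketch, assuming the toolbox Deployment above has been applied, the pod can be located by its `app=rook-ceph-tools` label and opened for the health checks:

```shell
# Find the toolbox pod name via its label (matches the Deployment above).
TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools \
  -o jsonpath='{.items[0].metadata.name}')

# Open an interactive shell in the toolbox pod.
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- bash

# Inside the pod, run the health checks:
#   ceph status
#   ceph osd status
#   ceph df
#   rados df
```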
Ceph Dashboard
Enable the dashboard by setting dashboard.enabled: true in the CephCluster spec, as in the cluster.yaml excerpt above. The operator then creates a rook-ceph-mgr-dashboard service on port 7000.
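For quick access from a workstation, without exposing the dashboard outside the cluster, a port-forward is one option (a sketch; the service name and port come from the operator-created service described above):

```shell
# Forward local port 7000 to the dashboard service, then browse
# http://localhost:7000 while this command runs.
kubectl -n rook-ceph port-forward service/rook-ceph-mgr-dashboard 7000:7000
```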
To expose the dashboard outside the cluster, create a NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 7000
    protocol: TCP
    targetPort: 7000
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  type: NodePort

Apply the service and note the allocated node port (e.g., 32381):
$ kubectl apply -f dashboard-external.yaml
$ kubectl get service -n rook-ceph

Access the dashboard at http://<NodeIP>:32381. The default admin user is admin; retrieve the password with:
$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo

Dashboard Configuration
The CephCluster CRD allows customizing the dashboard URL prefix, port, and SSL settings:
spec:
  dashboard:
    urlPrefix: /ceph-dashboard
    port: 8443
    ssl: true

Monitoring
Rook ships built‑in Prometheus exporters. Follow the official monitoring guide to scrape metrics from the Rook cluster.
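As a sketch of what the monitoring guide walks through, assuming a Prometheus Operator is already installed in the cluster and using the manifest names shipped in the Rook release-1.1 examples directory (paths may differ in other releases):

```shell
# From a checkout of the Rook repository at release-1.1:
cd rook/cluster/examples/kubernetes/ceph/monitoring

# Create the ServiceMonitor that scrapes the Ceph mgr metrics endpoint,
# then a Prometheus instance and a service to reach it.
kubectl create -f service-monitor.yaml
kubectl create -f prometheus.yaml
kubectl create -f prometheus-service.yaml
```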
Overall, this guide provides a complete, step‑by‑step procedure to deploy a production‑ready Rook‑Ceph storage solution on a Kubernetes cluster, including operator installation, cluster creation, toolbox usage, dashboard exposure, and monitoring configuration.
DevOps Cloud Academy