
How to Benchmark Ceph Cluster Performance: A Step‑by‑Step Guide

This guide explains how to benchmark a Ceph storage cluster before production: establish hardware baselines, measure disk read/write performance and network throughput, then use Ceph's built-in tools (rados bench, rados load-gen, rbd bench-write) together with fio for comprehensive performance testing.


Before deploying a Ceph cluster in production, you should benchmark it to obtain rough results for read, write, latency, and other workloads. First establish a hardware baseline for the disks and the network, then perform disk performance tests using <code>dd</code> with flags that bypass the cache.

Test Single Disk Write Performance

Clear the page cache first (<code>sync</code> flushes dirty pages so they can actually be dropped):

<code>sync; echo 3 > /proc/sys/vm/drop_caches</code>

Write a 10 GB file named <code>zero</code>, filled with zeros, to the Ceph OSD directory:

<code>dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/zero bs=1G count=10</code>

(<code>bs=1G count=10</code> is used rather than <code>bs=10G count=1</code>, because dd allocates a buffer of <code>bs</code> bytes and would try to allocate 10 GB of memory at once.)

Repeat the test several times and average the results. Note that these writes do not bypass the kernel page cache, so the numbers may be optimistic. To run the write test on all OSD disks in parallel:

<code>for i in `mount | grep osd | awk '{print $3}'`; do (dd if=/dev/zero of=$i/zero bs=1G count=10 &) ; done</code>
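The dd pattern itself can be sanity-checked against any local path first. This sketch (path and size are illustrative) uses <code>conv=fdatasync</code> so dd flushes to disk before reporting its rate, then extracts the throughput figure from dd's final status line:

```shell
# Write 100 MiB of zeros to a scratch file; conv=fdatasync forces a
# flush to disk before dd prints its final rate (on stderr), so the
# reported throughput is not just the page cache absorbing the writes
dd if=/dev/zero of=./ddtest.tmp bs=4M count=25 conv=fdatasync 2>&1 \
  | tail -n 1 | awk '{print $(NF-1), $NF}'

# Remove the scratch file
rm -f ./ddtest.tmp
```

The last line of dd's output ends with the transfer rate (e.g. "… copied, 0.12 s, 850 MB/s"), which the awk step isolates.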

OSD Single Disk Read Performance

Clear the page cache again:

<code>sync; echo 3 > /proc/sys/vm/drop_caches</code>

Read back the file written in the previous test, bypassing the page cache:

<code>dd if=/var/lib/ceph/osd/ceph-0/zero of=/dev/null bs=1G count=10 iflag=direct</code>

To run the read test on all OSD disks in parallel:

<code>for i in `mount | grep osd | awk '{print $3}'`; do (dd if=$i/zero of=/dev/null bs=1G count=10 iflag=direct &) ; done</code>

Network Baseline Performance

Test the network between Ceph OSD nodes using <code>iperf</code>. Install iperf, start a server on node 1 (port 6900), and run a client on node 2.

<code>apt install iperf</code>
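A minimal invocation might look like the following; the hostname is a placeholder, and the port must match on both sides:

```shell
# On node 1: start an iperf server listening on port 6900
iperf -s -p 6900

# On node 2: run the client against node 1 for 30 seconds,
# reporting interim throughput every 10 seconds
# (ceph-node1 is a placeholder hostname)
iperf -c ceph-node1 -p 6900 -t 30 -i 10
```

Run the test in both directions (swap server and client roles) to catch asymmetric network problems.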

rados bench Benchmark

Ceph includes the <code>rados bench</code> tool for pool performance testing. Example commands:

<code>rados bench -p libvirt-pool 10 write --no-cleanup</code>
<code>rados bench -p libvirt-pool 10 seq</code>
<code>rados bench -p libvirt-pool 10 rand</code>

Syntax:

<code>rados bench -p &lt;pool_name&gt; &lt;seconds&gt; &lt;write|seq|rand&gt; -b &lt;block size&gt; -t &lt;threads&gt; --no-cleanup</code>
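Because <code>--no-cleanup</code> leaves the benchmark objects in the pool (so that the <code>seq</code> and <code>rand</code> read tests have data to read), remember to remove them once testing is done:

```shell
# Remove the objects left behind by rados bench --no-cleanup
rados -p libvirt-pool cleanup
```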

RADOS load‑gen

The <code>rados load-gen</code> tool generates load on a Ceph cluster for stress testing.

<code>rados -p libvirt-pool load-gen --num-objects 200 --min-object-size 4M --max-object-size 8M --max-ops 10 --read-percent 0 --min-op-len 1M --max-op-len 4M --target-throughput 2G --run-length 20</code>

Block Device Benchmark

Use the RBD <code>bench-write</code> command to benchmark a Ceph RADOS Block Device.

<code>rbd create libvirt-pool/289 --size 10240 --image-feature layering
rbd info -p libvirt-pool --image 289
rbd map libvirt-pool/289
rbd showmapped
mkfs.xfs /dev/rbd2
mkdir -p /mnt/289
mount /dev/rbd2 /mnt/289
df -h /mnt/289
rbd bench-write libvirt-pool/289 --io-total 5368709120</code>
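After the block-device tests, it is worth tearing down the test image. A possible cleanup sequence, following the device name and mount point used above, is:

```shell
# Unmount the filesystem, unmap the RBD device, delete the test image
umount /mnt/289
rbd unmap /dev/rbd2
rbd rm libvirt-pool/289
```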

Syntax of <code>rbd bench-write</code>:

<code>rbd bench-write &lt;RBD image name&gt; --io-size 4M --io-threads 16 --io-total 1024M --io-pattern seq|rand</code>
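In recent Ceph releases, <code>bench-write</code> has been folded into the more general <code>rbd bench</code> command. If your version warns that <code>bench-write</code> is deprecated, an equivalent invocation should look like:

```shell
# Newer syntax: --io-type selects the direction (read or write)
rbd bench --io-type write libvirt-pool/289 \
  --io-size 4M --io-threads 16 --io-total 1G --io-pattern seq
```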

Using fio to Benchmark Ceph RBD

Install <code>fio</code> and create a configuration file named <code>write.fio</code>:

<code>apt install fio -y</code>

Contents of <code>write.fio</code> (with <code>ioengine=rbd</code>, fio addresses the image directly through librbd via the <code>pool</code>, <code>clientname</code>, and <code>rbdname</code> options, so no <code>filename</code> line is needed):

<code>[write-4M]
ioengine=rbd
direct=1
size=5g
lockmem=1G
runtime=30
group_reporting
numjobs=1
iodepth=32
pool=libvirt-pool
clientname=admin
rbdname=289
rw=write
bs=4M
</code>

Run the test:

<code>fio write.fio</code>
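The same engine options can drive a read test. A counterpart job file, mirroring the write job above with only the I/O direction changed, might look like:

```ini
[read-4M]
ioengine=rbd
direct=1
size=5g
runtime=30
group_reporting
numjobs=1
iodepth=32
pool=libvirt-pool
clientname=admin
rbdname=289
rw=randread
bs=4M
```

Save it as, say, <code>read.fio</code> and run it with <code>fio read.fio</code>.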
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
