Step-by-Step Guide to Deploy a Ceph Cluster with cephadm on CentOS
This tutorial walks through the prerequisites, host configuration, installation of Docker and cephadm, bootstrapping a Ceph cluster, and deploying monitor, OSD, MDS, and RGW services on three CentOS nodes, with detailed commands for each step.
Prerequisites
Cephadm uses containers and systemd to install and manage a Ceph cluster, tightly integrating with the CLI and dashboard GUI.
cephadm only supports Octopus v15.2.0 and later.
It fully integrates with the new orchestration API and supports the new CLI and dashboard features for cluster deployment.
cephadm requires container support (podman or docker) and Python 3.
Time synchronization.
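These prerequisites can be checked up front. A minimal pre-flight sketch (cephadm performs its own, more thorough checks; the function name is illustrative):

```shell
#!/usr/bin/env bash
# Pre-flight check for the cephadm prerequisites listed above:
# python3, systemd, chrony, and a container engine (podman or docker).
check_prereqs() {
  local cmd
  for cmd in python3 systemctl chronyd; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd: OK"
    else
      echo "$cmd: MISSING"
    fi
  done
  if command -v podman >/dev/null 2>&1 || command -v docker >/dev/null 2>&1; then
    echo "container engine: OK"
  else
    echo "container engine: MISSING"
  fi
}

check_prereqs
```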
Basic Configuration
This guide uses CentOS 8, which already includes Python 3; CentOS 7 would require a separate Python 3 installation.
Configure host name resolution
<code>cat >> /etc/hosts <<EOF
192.168.2.16 node1
192.168.2.19 node2
192.168.2.18 node3
EOF</code>Disable firewall and SELinux
<code>systemctl stop firewalld && systemctl disable firewalld
setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config</code>Set hostname on each node
<code>hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3</code>Configure host time synchronization
<code>systemctl restart chronyd.service && systemctl enable chronyd.service</code>Install Docker CE
<code>dnf config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
dnf install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
dnf -y install docker-ce --nobest
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
"registry-mirrors": ["https://s7owcmp8.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker</code>Install cephadm
The cephadm command can:
Bootstrap a new cluster.
Launch a containerized shell with the full Ceph CLI.
Help debug containerized Ceph daemons.
The following operations need to be performed on a single node.
Use curl to fetch the latest script
<code>curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm</code>Install cephadm
<code>./cephadm add-repo --release octopus
./cephadm install</code>Bootstrap a new cluster
Create the directory for configuration:
<code>mkdir -p /etc/ceph</code>Run the bootstrap command (replace the IP with your monitor address):
<code>cephadm bootstrap --mon-ip 192.168.2.16</code>This command will:
Create monitor and manager daemons on the local host.
Generate a new SSH key for the cluster and add it to /root/.ssh/authorized_keys.
Save a minimal /etc/ceph/ceph.conf configuration file.
Write the privileged client.admin keyring to /etc/ceph/ceph.client.admin.keyring.
Write the public key to /etc/ceph/ceph.pub.
After installation completes, a dashboard interface is available.
Once the command finishes, you can verify that /etc/ceph/ceph.conf has been written.
Enable the Ceph CLI
The cephadm shell command starts a bash shell inside a container that has all the Ceph packages installed. If /etc/ceph on the host contains configuration and keyring files, they are passed into the container.
<code>cephadm shell</code>You can also install the ceph-common package, which contains all the Ceph commands, including those for CephFS.
<code>cephadm add-repo --release octopus
cephadm install ceph-common</code>The installation can be slow; you can switch the repository to an Aliyun mirror.
Add hosts to the cluster
Add the public key to new hosts
<code>ssh-copy-id -f -i /etc/ceph/ceph.pub node2
ssh-copy-id -f -i /etc/ceph/ceph.pub node3</code>Tell Ceph that the new nodes are part of the cluster
<code>[root@localhost ~]# ceph orch host add node2
Added host 'node2'
[root@localhost ~]# ceph orch host add node3
Added host 'node3'</code>Once hosts are added, cephadm automatically expands the monitor and manager daemons onto them.
Deploy additional monitors (optional)
A typical Ceph cluster has three or five monitor daemons distributed across different hosts. If the cluster has five or more nodes, deploying five monitors is recommended.
Ceph can automatically deploy and scale monitors as the cluster grows, assuming other monitors use the same subnet as the first monitor.
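Since monitor placement follows mon public_network, it can help to check by hand whether a host's address falls inside the CIDR. A small helper sketch (hypothetical; it relies on python3, which cephadm already requires):

```shell
# Check whether an IP belongs to a CIDR subnet (hypothetical helper).
in_subnet() {  # usage: in_subnet <ip> <cidr>
  python3 - "$1" "$2" <<'PY'
import ipaddress, sys
sys.exit(0 if ipaddress.ip_address(sys.argv[1]) in ipaddress.ip_network(sys.argv[2]) else 1)
PY
}

# node2 (192.168.2.19) would be eligible for a 192.168.2.0/24 mon network:
in_subnet 192.168.2.19 192.168.2.0/24 && echo "eligible for mon placement"
```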
If you need a specific subnet for monitors, configure it with CIDR notation:
<code>ceph config set mon public_network 10.1.2.0/24</code>Cephadm deploys monitor daemons only on hosts that have an IP in the configured subnet. To change the default number of monitors (five):
<code>ceph orch apply mon <number-of-monitors></code>To deploy monitors on a specific set of hosts:
<code>ceph orch apply mon "host1,host2,host3"</code>List current hosts and labels:
<code>[root@node1 ~]# ceph orch host ls
HOST ADDR LABELS STATUS
node1 node1
node2 node2
node3 node3</code>Disable automatic monitor deployment:
<code>ceph orch apply mon --unmanaged</code>Add monitors in different networks:
<code>ceph orch apply mon --unmanaged
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24</code>Deploy monitors on multiple hosts with a single command:
<code>ceph orch apply mon "host1,host2,host3"</code>Deploy OSD
List storage devices in the cluster:
<code>ceph orch device ls</code>A device is considered usable when all of the following conditions are met:
The device has no partitions.
The device has no LVM state.
The device is not in use.
The device does not contain a filesystem.
The device does not contain a Ceph BlueStore OSD.
The device is larger than 5 GB.
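As a rough approximation of those checks (not cephadm's actual device scan), you can filter `lsblk` output for whole disks that carry no filesystem; the function name and sample data below are illustrative:

```shell
# Keep whole disks with no filesystem, from `lsblk -rno NAME,SIZE,TYPE,FSTYPE`
# style input (a sketch, not cephadm's real device-scan logic).
osd_candidates() {
  awk '$3 == "disk" && $4 == "" { print $1 }'
}

# Sample input: sda holds the OS (xfs); sdb and sdc are blank disks.
osd_candidates <<'EOF'
sda 50G disk xfs
sdb 100G disk
sdc 100G disk
EOF
# prints sdb and sdc, one per line
```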
Automatically create OSD on unused devices
<code>[root@node1 ~]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...</code>OSDs are then created automatically on the unused disks of the three nodes.
Create OSD from a specific host/device
<code>ceph orch daemon add osd host1:/dev/sdb</code>Deploy MDS
CephFS requires one or more MDS daemons. When a new CephFS volume is created via the new CephFS API, the necessary MDS daemons are automatically deployed.
<code>ceph orch apply mds <fs-name> --placement="<num-daemons> <host1> ..."</code>CephFS needs two pools, cephfs_data and cephfs_metadata, to store file data and metadata respectively.
<code>[root@node1 ~]# ceph osd pool create cephfs_data 64 64
[root@node1 ~]# ceph osd pool create cephfs_metadata 64 64
[root@node1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
[root@node1 ~]# ceph orch apply mds cephfs --placement="3 node1 node2 node3"
Scheduled mds.cephfs update...</code>Verify that at least one MDS is in the active state (by default Ceph runs a single active MDS; the others act as standbys).
<code>ceph fs status cephfs</code>Deploy RGW
RGW (RADOS Gateway) provides a RESTful object storage interface built on the LIBRADOS API.
Deploy three RGW daemons for the myorg realm and cn-east-1 zone on node1 through node3:
<code>ceph orch apply rgw myorg cn-east-1 --placement="3 node1 node2 node3"</code>Alternatively, create the realm, zonegroup, and zone manually with radosgw-admin:
<code>radosgw-admin realm create --rgw-realm=myorg --default
radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
radosgw-admin period update --rgw-realm=myorg --commit</code>RGW is now deployed.
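To exercise the new gateway you need an S3 user; radosgw-admin prints the generated access and secret keys, which any S3 client can then use. A sketch (the uid, display name, and endpoint below are illustrative, and the commands assume a running cluster):

```shell
# Create an S3-style user on the gateway (uid/display-name are examples):
radosgw-admin user create --uid=demo --display-name="Demo User"

# The output contains an access_key/secret_key pair. Point an S3 client
# such as s3cmd at the RGW endpoint with them, e.g. in ~/.s3cfg:
#   host_base = node1:80
#   host_bucket = node1:80
#   access_key = <from the output above>
#   secret_key = <from the output above>
#   use_https = False
```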
Cephadm also automatically installs Prometheus and Grafana; the default Grafana credentials are admin/admin, and a pre‑imported Ceph monitoring dashboard is available.
The next article will cover monitoring Ceph distributed storage with Zabbix.
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.