
Step‑by‑Step Guide to Install a Ceph Cluster on Kylin v10

This article walks through the complete process of setting up a Ceph storage cluster on the domestically produced Kylin v10 operating system: hardware preparation, host configuration, manual installation of the Ceph packages, and deployment of monitors, OSDs, and managers, with command-line examples for every step.


This article explains how to install a Ceph cluster on a domestically‑produced operating system and server.

Basic Configuration

The OS is Kylin v10 ("Galaxy Kylin"), running on Phytium S2500 CPUs.

<code># cat /etc/kylin-release
Kylin Linux Advanced Server release V10 (Sword)
# lscpu
Architecture:          aarch64
CPU op-mode(s):        64-bit
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0-127
Thread(s) per core:    1
Core(s) per socket:    64
Socket(s):             2
NUMA node(s):          16
Vendor ID:             Phytium
Model:                 3
Model name:            Phytium,S2500/64 C00
Stepping:              0x1
CPU max MHz:           2100.0000
CPU min MHz:           1100.0000
BogoMIPS:              100.00
L1d cache:             4 MiB
L1i cache:             4 MiB
L2 cache:              64 MiB
L3 cache:              128 MiB
...</code>

Cephadm does not support Kylin v10, so Ceph must be deployed manually (or built from source). Kylin v10 already ships Ceph Luminous (12.2.8) RPMs; to run anything newer you must compile it yourself. The packaged build identifies itself as:

<code>CEPH_GIT_VER="ae699615bac534ea496ee965ac6192cb7e0e07c0"
CEPH_GIT_NICE_VER="12.2.8"
CEPH_RELEASE="12"
CEPH_RELEASE_NAME="luminous"
CEPH_RELEASE_TYPE="stable"
</code>

Configure /etc/hosts

<code>cat >> /etc/hosts <<EOF
192.168.2.16 node1
192.168.2.17 node2
192.168.2.18 node3
EOF
</code>
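To sanity-check the entries, a small loop can confirm that every node name has a hosts entry. The sketch below runs against a temporary copy so it is safe to try anywhere; on a real node, point hosts_file at /etc/hosts instead.

```shell
# Verify every cluster node has a hosts entry.
# hosts_file points at a temp copy for illustration;
# on a real node use hosts_file=/etc/hosts.
hosts_file=$(mktemp)
cat > "$hosts_file" <<EOF
192.168.2.16 node1
192.168.2.17 node2
192.168.2.18 node3
EOF
for h in node1 node2 node3; do
    grep -qw "$h" "$hosts_file" || echo "missing entry for $h"
done
rm -f "$hosts_file"
```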

Disable firewall

<code>systemctl stop firewalld && systemctl disable firewalld
</code>

Set hostnames

<code># Run the matching command on each node:
hostnamectl set-hostname node1   # on node1
hostnamectl set-hostname node2   # on node2
hostnamectl set-hostname node3   # on node3
</code>

Configure time synchronization

<code>vi /etc/chrony.conf
# Add the following lines:
server ntp1.aliyun.com iburst
allow 192.168.2.0/24
# Then restart and enable the service:
systemctl restart chronyd.service && systemctl enable chronyd.service
</code>

Install via yum

Install Ceph

Kylin v10 ships with Ceph 12 RPMs.

<code>yum install -y ceph
</code>

Install the Python PrettyTable module required by Ceph commands.

<code>pip install PrettyTable
</code>

Deploy monitor nodes

At least one monitor (MON) is required, and the number of OSDs must be at least equal to the replica count.

Add monitor on node1

Generate a unique FSID for the cluster.

<code>uuidgen
</code>
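A convenient pattern is to capture the FSID into a shell variable and reuse it in every later command (ceph.conf, monmaptool) instead of copy-pasting it. The sketch below adds a format sanity check; the variable name is illustrative, and it falls back to the kernel's UUID source if uuidgen is not installed.

```shell
# Generate the cluster FSID once and reuse it everywhere it is needed.
fsid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
# Sanity check: standard 8-4-4-4-12 hex UUID.
echo "$fsid" | grep -Eq '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$' \
    || { echo "unexpected FSID format: $fsid" >&2; exit 1; }
echo "$fsid"
```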

Create the Ceph configuration file and add the FSID.

<code>vim /etc/ceph/ceph.conf
[global]
fsid=9c079a1f-6fc2-4c59-bd4d-e8bc232d33a4
mon initial members = node1
mon host = 192.168.2.16
public network = 192.168.2.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 1
osd pool default min size = 1
osd pool default pg num = 8
osd pool default pgp num = 8
osd crush chooseleaf type = 1
</code>
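The pg num value of 8 above suits this tiny test cluster. For larger clusters, a commonly cited rule of thumb is roughly 100 PGs per OSD divided by the replica size, rounded up to a power of two. The arithmetic can be sketched as follows; the OSD and replica counts are illustrative, not taken from the original article.

```shell
# Rule-of-thumb placement-group sizing:
#   target = OSDs * 100 / replicas, rounded up to a power of two.
# Values below are illustrative; the article's test cluster uses pg num 8.
osds=3
replicas=1
target=$(( osds * 100 / replicas ))
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
    pg_num=$(( pg_num * 2 ))
done
echo "suggested pg_num for $osds OSDs, size $replicas: $pg_num"   # -> 512
```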

Create monitor keyring.

<code>ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
</code>

Create admin keyring.

<code>ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
  --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
</code>

Create bootstrap OSD keyring.

<code>ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
  --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
</code>

Import the generated keys.

<code>ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
</code>

Set ownership.

<code>chown ceph:ceph /tmp/ceph.mon.keyring
</code>

Create the monitor map.

<code>monmaptool --create --add node1 192.168.2.16 --fsid 9c079a1f-6fc2-4c59-bd4d-e8bc232d33a4 /tmp/monmap
</code>

Create the monitor data directory.

<code>sudo -u ceph mkdir /var/lib/ceph/mon/ceph-`hostname`
</code>

Initialize the monitor on node1.

<code>sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
</code>

Start the monitor service.

<code>systemctl start ceph-mon@node1 && systemctl enable ceph-mon@node1
</code>

Deploy monitors on node2 and node3

Copy the keyring and configuration files to the other nodes.

<code>scp /tmp/ceph.mon.keyring node2:/tmp/ceph.mon.keyring
scp /etc/ceph/* root@node2:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node2:/var/lib/ceph/bootstrap-osd/
scp /tmp/ceph.mon.keyring node3:/tmp/ceph.mon.keyring
scp /etc/ceph/* root@node3:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node3:/var/lib/ceph/bootstrap-osd/
</code>
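The six scp invocations can be collapsed into a loop over the target nodes. The sketch below is a dry run that only prints each command; drop the echo to execute them for real.

```shell
# Distribute the monitor keyring, /etc/ceph contents, and the
# bootstrap-osd keyring to the remaining nodes.
# Dry run: "echo" prints the commands instead of running them.
for node in node2 node3; do
    echo scp /tmp/ceph.mon.keyring "root@${node}:/tmp/ceph.mon.keyring"
    echo scp /etc/ceph/* "root@${node}:/etc/ceph/"
    echo scp /var/lib/ceph/bootstrap-osd/ceph.keyring "root@${node}:/var/lib/ceph/bootstrap-osd/"
done
```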

Set correct ownership on the copied keyrings.

<code>chown ceph:ceph /tmp/ceph.mon.keyring
</code>

Retrieve the monitor map.

<code>ceph mon getmap -o /tmp/ceph.mon.map
</code>

Initialize monitors on node2 and node3.

<code># On node2:
sudo -u ceph ceph-mon --mkfs -i node2 --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring
# On node3:
sudo -u ceph ceph-mon --mkfs -i node3 --monmap /tmp/ceph.mon.map --keyring /tmp/ceph.mon.keyring
</code>

Add the new monitors to the cluster.

<code># On node1
ceph mon add node2 192.168.2.17:6789
ceph mon add node3 192.168.2.18:6789
</code>

Start the monitors on node2 and node3.

<code>systemctl start ceph-mon@`hostname` && systemctl enable ceph-mon@`hostname`
</code>

Update ceph.conf on all nodes and restart the monitor services.

<code>vim /etc/ceph/ceph.conf
# Ensure the following lines are present:
mon initial members = node1,node2,node3
mon host = 192.168.2.16,192.168.2.17,192.168.2.18
systemctl restart ceph-mon@`hostname`
</code>
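A name missing from mon initial members, or an address missing from mon host, is an easy mistake at this step, so a quick consistency check can help. The function name below is illustrative, and the demo runs against a temporary copy of the example config rather than the live file.

```shell
# Check that "mon initial members" and "mon host" in ceph.conf list
# the same number of entries. Function name is illustrative.
check_mon_lists() {
    conf=$1
    members=$(sed -n 's/^mon initial members *= *//p' "$conf" | tr ',' '\n' | grep -c .)
    hosts=$(sed -n 's/^mon host *= *//p' "$conf" | tr ',' '\n' | grep -c .)
    if [ "$members" -eq "$hosts" ]; then
        echo "ok: $members monitors"
    else
        echo "mismatch: $members names vs $hosts addresses"
    fi
}

# Demo against a temp copy of the example config:
demo=$(mktemp)
printf 'mon initial members = node1,node2,node3\nmon host = 192.168.2.16,192.168.2.17,192.168.2.18\n' > "$demo"
check_mon_lists "$demo"    # prints: ok: 3 monitors
rm -f "$demo"
```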

Add OSDs

Ceph provides the ceph-volume utility to prepare disks for OSDs.

Create OSDs

On node1, create an OSD on /dev/sdb.

<code>ceph-volume lvm create --data /dev/sdb
</code>

The process consists of a preparation stage and an activation stage:

<code>ceph-volume lvm prepare --data /dev/sdb
ceph-volume lvm list
ceph-volume lvm activate {ID} {FSID}
</code>

Start the OSD services on each node.

<code># node1
systemctl restart ceph-osd@0 && systemctl enable ceph-osd@0
# node2
systemctl restart ceph-osd@1 && systemctl enable ceph-osd@1
# node3
systemctl restart ceph-osd@2 && systemctl enable ceph-osd@2
</code>

Create MGR daemons

Each node running a monitor should also run a manager daemon.

Create key directory

<code>sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-`hostname -s`
cd /var/lib/ceph/mgr/ceph-`hostname -s`
</code>

Create authentication key

<code>ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' > keyring
chown ceph.ceph /var/lib/ceph/mgr/ceph-`hostname -s`/keyring
</code>

Start the manager daemon

<code>systemctl enable ceph-mgr@`hostname -s` && systemctl start ceph-mgr@`hostname -s`
# or
ceph-mgr -i `hostname`
</code>

Finally, check the cluster status with ceph -s (in this example only two OSDs had been added at that point).

[Figure: Ceph cluster status output]
Tags: operations, Linux, cluster, storage, installation, Ceph, Kylin
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
