Tag: OSD

1 view collected around this technical thread.

Ops Development Stories
Sep 28, 2022 · Operations

How to Separate SSD and SATA OSDs in Ceph Using Custom CRUSH Rules

This guide demonstrates how to customize Ceph's CRUSH map to separate SSD and SATA OSDs into distinct buckets, create dedicated crush rules, compile and apply the new map, and verify that data is correctly placed on the appropriate storage devices.
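As a toy illustration of what such device-class separation buys you, the sketch below models a pool whose rule targets only one class of device: placement groups for that pool can never land on the other class. The OSD inventory and the hash-based picker are illustrative assumptions, not Ceph's actual CRUSH implementation.

```python
import hashlib

# Hypothetical OSD inventory: osd id -> device class.
OSDS = {
    0: "ssd", 1: "ssd", 2: "ssd",
    3: "sata", 4: "sata", 5: "sata",
}

def place_pg(pg_id: int, device_class: str, replicas: int = 2) -> list:
    """Pick `replicas` distinct OSDs of the requested class for a PG.

    Mimics a class-restricted CRUSH rule: candidates are filtered by
    device class first, then ranked by a deterministic per-PG hash.
    """
    candidates = [osd for osd, cls in OSDS.items() if cls == device_class]
    def rank(osd):
        return int(hashlib.sha256(f"{pg_id}:{osd}".encode()).hexdigest(), 16)
    return sorted(candidates, key=rank)[:replicas]

# A PG placed under the "ssd" rule only ever touches SSD OSDs.
ssd_set = place_pg(7, "ssd")
sata_set = place_pg(7, "sata")
```

The key property, as in the real CRUSH map, is that the filter happens before placement: an SSD pool and a SATA pool can share one cluster without their data ever mixing.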

CRUSH · Ceph · OSD
0 likes · 6 min read
Ops Development Stories
Nov 1, 2021 · Operations

How to Perform Offline Ceph Octopus Deployment with cephadm on Ubuntu

This guide walks through creating an offline installation package, caching required Debian packages and Docker images, installing Docker and cephadm, bootstrapping a Ceph cluster, and deploying OSD, MDS, and RGW services on Ubuntu nodes without internet access.

Ceph · Docker · OSD
0 likes · 13 min read
Ops Development Stories
Dec 14, 2020 · Operations

Step-by-Step Guide to Deploy a Ceph Cluster with cephadm on CentOS

This tutorial walks through the prerequisites, host configuration, installation of Docker and cephadm, bootstrapping a Ceph cluster, and deploying monitors, OSDs, MDS, and RGW services on three CentOS nodes, including detailed commands and screenshots for each step.

Ceph · Docker · MDS
0 likes · 15 min read
Didi Tech
Aug 28, 2020 · Operations

Ceph Performance Optimization: Lock-Related Issues and Solutions

The article details how Didi's large-scale Ceph deployment suffered from high tail latency due to long-held and coarse-grained locks, and describes a series of fixes, including asynchronous read threads, fine-grained object caches, per-thread lock-free logging, and lock-free filestore apply, that cut latency by up to 90% and more than doubled read throughput.

Bluestore · Ceph · Distributed Storage
0 likes · 12 min read
Architects' Tech Alliance
May 14, 2019 · Fundamentals

Understanding Ceph Architecture: RADOS, OSD, Monitor, and Data Mapping

The article provides a comprehensive overview of Ceph’s distributed storage architecture, explaining the roles of RADOS, OSD, Monitor, and Metadata Cluster, and detailing the three-step data mapping process from file to object, to placement group, and finally to OSD storage.
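The three-step mapping the article describes can be sketched in a few lines: a file is striped into fixed-size objects, each object id is hashed modulo the pool's PG count, and each PG is deterministically mapped to a set of OSDs. The hash-ranking stand-in for CRUSH and all parameter values here are illustrative assumptions, not Ceph's real algorithms.

```python
import hashlib

def file_to_objects(file_name, file_size, object_size):
    """Step 1: stripe a file into fixed-size objects."""
    count = (file_size + object_size - 1) // object_size  # ceiling division
    return [f"{file_name}.{i:08x}" for i in range(count)]

def object_to_pg(object_id, pg_num):
    """Step 2: hash the object id, then take it modulo pg_num."""
    h = int(hashlib.sha256(object_id.encode()).hexdigest(), 16)
    return h % pg_num

def pg_to_osds(pg, osds, replicas=3):
    """Step 3: stand-in for CRUSH - deterministically rank OSDs per PG."""
    def rank(osd):
        return int(hashlib.sha256(f"{pg}:{osd}".encode()).hexdigest(), 16)
    return sorted(osds, key=rank)[:replicas]

# A 10 MiB file striped into 4 MiB objects yields 3 objects.
objs = file_to_objects("ino1234", file_size=10 * 2**20, object_size=4 * 2**20)
pg = object_to_pg(objs[0], pg_num=128)
acting = pg_to_osds(pg, osds=list(range(6)))
```

The point of the intermediate PG layer, as the article explains, is indirection: when OSDs are added or removed, only the PG-to-OSD mapping is recomputed, never the object-to-PG mapping.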

CRUSH · Ceph · Distributed Storage
0 likes · 9 min read
Architects' Tech Alliance
Oct 21, 2018 · Fundamentals

Understanding Ceph Architecture: RADOS, OSD, PG Mapping and Data Placement

This article explains Ceph's distributed storage architecture, covering its origins, RADOS client interactions, cluster map updates, the roles of OSDs, Monitors, metadata clusters, and the three-step mapping process from files to objects, placement groups, and finally to storage devices using the CRUSH algorithm.

CRUSH · Ceph · Data Mapping
0 likes · 8 min read
360 Zhihui Cloud Developer
Feb 14, 2017 · Operations

How Ceph Detects Node Failures: Heartbeat, Reporting, and Monitor Strategies

This article explains Ceph's fault detection mechanism, detailing how OSD peers exchange heartbeats, report failures to the Monitor, and how the Monitor aggregates reports and applies configurable thresholds to reliably identify and handle downed OSD nodes in a distributed storage cluster.
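The aggregation step the article describes can be modeled in a few lines: the monitor marks an OSD down only after enough *distinct* peers have reported it, which filters out a single flaky reporter. The threshold name mirrors Ceph's `mon_osd_min_down_reporters` option (default 2); the class and everything else here is an illustrative sketch, not Ceph's implementation.

```python
from collections import defaultdict

MIN_DOWN_REPORTERS = 2  # mirrors mon_osd_min_down_reporters's default

class Monitor:
    """Toy monitor that aggregates peer failure reports per target OSD."""

    def __init__(self):
        self.reports = defaultdict(set)  # target osd -> set of reporter ids
        self.down = set()

    def report_failure(self, reporter, target):
        # Count distinct reporters, so a repeat report adds no new evidence.
        self.reports[target].add(reporter)
        if len(self.reports[target]) >= MIN_DOWN_REPORTERS:
            self.down.add(target)

mon = Monitor()
mon.report_failure(reporter=1, target=9)  # one peer: not enough evidence
mon.report_failure(reporter=1, target=9)  # same peer again: still one report
mon.report_failure(reporter=2, target=9)  # second distinct peer: mark down
```

Requiring multiple independent reporters trades a little detection latency for robustness: one OSD with a broken heartbeat link cannot by itself get a healthy peer declared down.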

Ceph · Distributed Systems · Monitor
0 likes · 8 min read