
Mastering RAID: Definitions, Levels, and Linux Setup Commands

This article explains RAID concepts, outlines common RAID levels and their pros and cons, compares configurations, describes hot spare and hot swap mechanisms, and provides step‑by‑step Linux commands for creating, testing, and removing RAID arrays, especially RAID 5.

Raymond Ops

RAID Definition

RAID (Redundant Array of Independent Disks) combines multiple physical disks into a logical unit to improve performance and data safety.

RAID Levels:

RAID 0 – Striping without parity.

RAID 1 – Mirroring without parity.

RAID 3 – Striping with dedicated parity disk.

RAID 5 – Distributed parity across all disks.

RAID 6 – Distributed double parity.

Combined Levels:

RAID 0+1 – Stripe then mirror.

RAID 10 – Mirror then stripe.

RAID 50 – Stripe RAID 5 groups.

RAID 0

Definition:

Striped array without fault tolerance; data is evenly spread across disks.

Advantages:

Very high read/write efficiency.

Fast speed, no parity calculation, low CPU usage.

Simple deployment.

Disadvantages:

No redundancy; usually combined with other levels.

Not suitable for critical data.

Minimum disks: 2
RAID 0 illustration
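The striping idea can be sketched in a few lines of Python. This is an illustrative model only, not how a real controller manages chunks; the function name and chunk size are arbitrary:

```python
def stripe(data: bytes, n_disks: int, chunk: int = 4) -> list:
    """Distribute fixed-size chunks round-robin across n_disks (RAID 0 model)."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        # Chunk k goes to disk k mod n_disks, so load spreads evenly
        disks[(i // chunk) % n_disks] += data[i:i + chunk]
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", 2)
# Chunks 0 and 2 land on disk 0, chunks 1 and 3 on disk 1
```

Because every disk serves a share of each large request, sequential throughput scales with the disk count, but losing any one disk destroys the whole data set.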

RAID 1

Definition:

Mirroring; data is written identically to primary and mirror disks.

Advantages:

High data safety and availability.

100% data redundancy.

Simple design and use.

Low CPU usage (no parity calculation).

Disadvantages:

Effective capacity is only half of total.

Does not improve write performance compared to a single disk.

Minimum disks: 2

RAID 1 illustration

RAID 5

Definition:

Similar to RAID 3 but parity is distributed across all disks; widely used.

Advantages:

High read speed, moderate write speed.

Provides a level of data protection.

Disadvantages:

When a disk fails, read/write performance of remaining disks drops sharply.

Minimum disks: 3
RAID 5 illustration

Common RAID Comparison

RAID 0 (Striping): No fault tolerance, highest read/write performance, requires at least 2 disks, usable capacity = N × disk size.

RAID 1 (Mirroring): Fault tolerant, 50% usable capacity, requires at least 2 disks; writes perform like a single disk.

RAID 3 (Dedicated parity): Fault tolerant, high read performance, low random write, requires 3 disks.

RAID 5 (Distributed parity): Fault tolerant, good read performance, lower write performance, requires 3 disks, usable capacity = (N‑1) × disk size.

RAID 10 (Mirrored stripe): Fault tolerant, balanced performance, requires 4 disks, usable capacity = (N/2) × disk size.

RAID 0+1 (Stripe‑mirror): Fault tolerant, similar to RAID 10, requires 4 disks.
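The usable-capacity formulas quoted above can be collected into a small Python helper. This is a sketch that assumes n identical disks; the raid1 entry models a plain mirror set where every disk holds the same data:

```python
def usable_capacity(level: str, n: int, disk_size: float) -> float:
    """Usable capacity for n identical disks, per the formulas above."""
    formulas = {
        "raid0":  n * disk_size,         # no redundancy, all space usable
        "raid1":  disk_size,             # every disk mirrors the same data
        "raid5":  (n - 1) * disk_size,   # one disk's worth of parity
        "raid6":  (n - 2) * disk_size,   # two disks' worth of parity
        "raid10": (n // 2) * disk_size,  # half the disks hold mirrors
    }
    return formulas[level]

usable_capacity("raid5", 4, 1.0)  # four 1 TB disks in RAID 5 -> 3 TB usable
```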

Common RAID Selection

RAID 5 offers a compromise between RAID 0 speed and RAID 1 safety, with higher storage efficiency than RAID 1.

RAID 5 provides read speeds comparable to RAID 0, with a modest write penalty due to parity; it also reduces storage cost.
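RAID 5's parity is a byte-wise XOR of the data chunks in each stripe: if any single chunk is lost, XOR-ing the survivors reproduces it. A minimal Python illustration with two data chunks and one parity chunk:

```python
from functools import reduce

def xor_blocks(blocks: list) -> bytes:
    """Byte-wise XOR of equal-length blocks, as used for RAID 5 parity."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

d0, d1 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"
parity = xor_blocks([d0, d1])        # written to the stripe's parity position
rebuilt = xor_blocks([d0, parity])   # if d1's disk fails, XOR the survivors
# rebuilt == d1
```

This is also why a degraded RAID 5 array slows down: every read of the failed disk's data requires reading all surviving disks and recomputing the XOR.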

Hot Spare

Definition: When a disk in a redundant RAID set fails, a designated spare disk automatically replaces it without interrupting the array.

Global spare: shared by all RAID groups.

Dedicated spare: assigned to a specific RAID group.

With one hot spare, the usable capacity of an N‑disk RAID 5 set drops from (N‑1)×size to (N‑2)×size, since one disk is held in reserve.
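The takeover behavior described above can be modeled with a short sketch. The names (on_disk_failure, array, spares) are hypothetical and purely illustrative, not a real controller or mdadm API:

```python
def on_disk_failure(array: list, failed: str, spares: list):
    """Model of hot-spare takeover: drop the failed disk, pull in a spare."""
    array.remove(failed)
    if spares:                    # global or dedicated spare pool
        spare = spares.pop(0)
        array.append(spare)       # rebuild would now start onto the spare
        return spare
    return None                   # no spare left: array keeps running degraded

members, spares = ["sdb", "sdc", "sdd"], ["sde"]
on_disk_failure(members, "sdc", spares)   # spare sde joins the array
```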

Hot Swap

Definition: Replacing a failed disk with a healthy one without shutting down the system; the backplane and controller must provide hot‑swap protection for this to be safe.
Hot spare illustration

Experiment Goal

Create RAID arrays with mdadm on a server: first a four‑disk RAID 10 as a warm‑up, then a RAID 5 array with three active disks and one hot spare.

Experiment Commands

<code>lsblk</code>
<code>mdadm -Cv /dev/md0 -n 4 -l 10 /dev/sdc /dev/sdd /dev/sde /dev/sdf</code>
<code># Create a RAID 10 array (-l 10) from four disks</code>
<code>mdadm -Q /dev/md0</code>
<code># Verify md device</code>
<code>mdadm -D /dev/md0</code>
<code>mkfs.ext4 /dev/md0</code>
<code># Create filesystem</code>
<code>mkdir /Raid</code>
<code>mount /dev/md0 /Raid/</code>
<code># Mount</code>
<code>df -h</code>
Use /dev/sd{b,c,d,e} to create a RAID 5 array (stop and remove the previous /dev/md0 first if it still exists).
<code>mdadm -Cv /dev/md0 -n 3 -l 5 -a yes -x 1 /dev/sd{b,c,d,e}</code>
<code># -C: create /dev/md0</code>
<code># -v: verbose</code>
<code># -n: number of disks</code>
<code># -l: RAID level</code>
<code># -a: auto‑create device file</code>
<code># -x: number of hot spares</code>
<code># Note: n + x equals total physical disks used</code>
<code>mdadm -D /dev/md0</code>
<code># View RAID info</code>
Simulate disk failure in RAID.
<code>mdadm /dev/md0 -f /dev/sdb</code>
<code># Simulate failure of /dev/sdb in /dev/md0</code>
<code># Observe array status</code>
Format and mount the array.
<code>mkfs.[desired_fs] /dev/md0</code>
<code># Format RAID array</code>
<code>mount /dev/md0 /mnt/md0</code>
<code># Mount to /mnt/md0</code>
<code>df -Th</code>
<code># Check mount status</code>
<code># Add to /etc/fstab for auto‑mount; use mount -a to verify</code>
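For reference, a persistent mount entry in /etc/fstab might look like the line below. The UUID is a placeholder; obtain the real value with blkid /dev/md0:

```
# /etc/fstab entry (UUID is a placeholder; get yours from: blkid /dev/md0)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/md0  ext4  defaults  0  0
```

Running mount -a afterwards confirms the entry parses and mounts cleanly before the next reboot depends on it.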
Delete the RAID array (ensure data backup first).
<code>umount /mnt/md0</code>
<code># Unmount first</code>
<code>mdadm -S /dev/md0</code>
<code># -S (--stop): stop and release the array</code>
<code>mdadm --zero-superblock /dev/sd{b,c,d,e}</code>
<code># Erase the RAID superblocks so the disks can be reused;</code>
<code># until this step the array metadata remains on the disks</code>

Common Linux Disk Commands

df – display disk usage (e.g., df -h).

du – show file or directory disk usage (e.g., du -h).

fdisk – partitioning tool.

mkfs – create a filesystem (specify type and device).

mount – mount a filesystem.

umount – unmount a filesystem.

lsblk – list block devices in a tree view.

blkid – display UUID and filesystem type.

badblocks – check and mark bad blocks.

smartctl – read SMART info to assess disk health.

Written by Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.