
Understanding the Linux CFS Scheduler: Event‑Driven Scheduling, PELT vs. WALT, and big.LITTLE Challenges

This article explains the mechanics of Linux's CFS scheduler, including the roles of priority and virtual runtime, its event‑driven design, the differences between the PELT and WALT load‑tracking algorithms, and how big.LITTLE architectures complicate task placement, throughput, and latency.

OPPO Kernel Craftsman

Linux's Completely Fair Scheduler (CFS) aims to allocate CPU time proportionally based on task priority and virtual runtime (vruntime), but in practice it is event‑driven: it updates its bookkeeping at discrete events rather than maintaining a continuous "god's‑eye view" of every task.

Key events that drive the scheduler include putting back the previous task, picking the next task, task wake‑up, task migration, periodic task updates, and IRQ updates, as enumerated in the kernel's task_event_names array:

const char *task_event_names[] = {
    "PUT_PREV_TASK",
    "PICK_NEXT_TASK",
    "TASK_WAKE",
    "TASK_MIGRATE",
    "TASK_UPDATE",
    "IRQ_UPDATE"
};

When two threads have equal priority, they share CPU time 1:1, round‑robining at roughly 4 ms intervals on a kernel built with a 250 Hz timer tick (CONFIG_HZ=250). The tick sets the precision baseline for both throughput and latency measurements.

Throughput (the amount of useful work completed) and latency (response time) are often at odds: maximizing throughput favors minimal scheduler intervention, while minimizing latency demands frequent monitoring and fast context switches.

Modern ARM big.LITTLE (heterogeneous multi‑processing) designs introduce additional complexity: big and little cores sit on different performance‑power curves, making task placement decisions non‑trivial.

Load‑tracking algorithms such as PELT (per‑entity load tracking) and WALT (window‑assisted load tracking) attempt to quantify task load over time. PELT uses an exponential decay model, while WALT divides time into fixed windows to compute load percentages, each with trade‑offs in responsiveness and accuracy.

Both algorithms struggle with sudden load spikes or tasks that cross window boundaries, leading to potential scheduling jitter and perceived stalls.

Understanding these mechanisms is essential for tuning scheduler behavior on heterogeneous CPUs and for follow‑on work such as Energy‑Aware Scheduling (EAS).
