Backend Development · 9 min read

Understanding Go's CSP Concurrency Model and Scheduler (MPG)

This article explains Go's concurrency foundations, detailing the difference between concurrency and parallelism, the CSP model using goroutines and channels, and the internal M‑P‑G scheduler architecture that balances work across processors and system threads.

Beike Product & Technology

1. Background

Go was designed with concurrency as a core feature, and its built-in support for concurrent execution has attracted developers worldwide.

2. Concurrency and Parallelism

Concurrency means that multiple tasks make progress over a period of time, whether or not they ever run at the same instant; parallelism means tasks execute at exactly the same instant. Parallelism is thus a special case of concurrency.

3. Go's CSP Concurrency Model

Go supports two concurrency styles: the traditional shared-memory multithreading model and the CSP (Communicating Sequential Processes) model, which emphasizes communication over shared memory. The guiding principle is "Do not communicate by sharing memory; instead, share memory by communicating." In Go, CSP is realized with goroutines and channels. A goroutine is a lightweight thread-like execution unit, and a channel provides a pipe-like communication mechanism between goroutines.

Creating a goroutine is as simple as calling a function with the go keyword, e.g., go f(). Sending data uses channel <- data and receiving uses <-channel. On an unbuffered channel, both send and receive block until the counterpart is ready, which makes the communication itself a synchronization point.
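The two primitives compose naturally. A minimal sketch (the function and variable names here are illustrative, not from the original article):

```go
package main

import "fmt"

// sum sends its result over a channel instead of writing to shared state.
func sum(nums []int, ch chan int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	ch <- total // blocks until the receiver is ready
}

func main() {
	ch := make(chan int)
	go sum([]int{1, 2, 3, 4}, ch) // spawn a goroutine with the go keyword
	result := <-ch                // blocks until the goroutine sends
	fmt.Println(result)           // 10
}
```

Because the receive blocks, no extra lock or condition variable is needed to know the computation has finished.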

4. Go Concurrency Model Implementation

At the OS level, all concurrency ultimately maps to threads. Go's runtime distinguishes user-space execution units (goroutines) from kernel threads (M). Thread models are commonly classified as user-level (N:1), kernel-level (1:1), or two-level (M:N); Go adopts a two-level model realized as the MPG model.

5. Go Thread Implementation Model (MPG)

M stands for Machine and maps to a kernel thread. P stands for Processor and provides the execution context (scheduling state) for user-level code. G stands for Goroutine, the lightweight execution unit. The relationship is visualized as M ↔ P ↔ multiple Gs. The number of Ps is set by the GOMAXPROCS environment variable or the runtime.GOMAXPROCS() function. The currently running G executes on a P, while runnable Gs wait in runqueues (each P has a local runqueue, plus one shared global runqueue).

6. Dropping a Processor (P)

When a goroutine enters a blocking system call, its M releases (hands off) its P, allowing another M to pick up that P and keep scheduling other goroutines. When the syscall returns, the original M tries to acquire an idle P; if none is available, it places its goroutine on the global runqueue and the M goes to sleep.

7. Balanced Work Distribution

If a P's local runqueue is empty, it steals half of the goroutines from another P's runqueue (work stealing), keeping load balanced across processors.

References: The Go scheduler; Go Concurrency Programming (1st edition).

Tags: Concurrency, Go, scheduler, CSP, Thread model, goroutine
Written by

Beike Product & Technology

As Beike's official product and technology account, we are committed to building a platform for sharing Beike's product and technology insights, targeting internet/O2O developers and product professionals. We share high-quality original articles, tech salon events, and recruitment information weekly. Welcome to follow us.
