
Understanding Processes, Threads, Synchronization, and Scheduling in Operating Systems

This article provides a comprehensive overview of operating system concepts including processes, threads, interprocess communication, synchronization mechanisms such as mutexes and semaphores, and various scheduling algorithms for batch, interactive, and real‑time systems.

Sohu Tech Products

Process and Thread Basics

The operating system views a running program as a process, an abstract entity that owns resources such as memory, open files, and CPU time. Modern OSes also support threads, lightweight units of execution that share a process's address space but have independent registers and stacks.

Process Model

A process is created at system start (the init process) and can spawn child processes via system calls like fork (Unix) or CreateProcess (Windows). The OS maintains a process table that records each process's state, program counter, registers, and scheduling information.
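The fork call described above can be sketched in a few lines. This is a minimal illustration, not production code; the function name `spawn_and_wait` is invented for the example:

```c
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with status 42; the parent waits and returns it. */
int spawn_and_wait(void) {
    pid_t pid = fork();              /* duplicate the calling process */
    if (pid < 0)
        return -1;                   /* fork failed */
    if (pid == 0)                    /* child: execution continues here too */
        _exit(42);                   /* exit status is collected by the parent */
    int status;
    waitpid(pid, &status, 0);        /* parent blocks until the child terminates */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

After `fork`, both parent and child run the same code; the return value of `fork` (zero in the child, the child's PID in the parent) is how each side learns which one it is.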

Processes transition among three primary states: running, ready, and blocked. The scheduler decides which ready process receives the CPU, while I/O or explicit waits cause a process to become blocked.

Thread Model

Threads are created with library calls such as pthread_create (POSIX) or system calls like CreateThread (Windows). Each thread has its own program counter and stack but shares the process's memory, file descriptors, and other resources. Threads can be scheduled independently, allowing true parallelism on multi‑core CPUs.
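A minimal sketch of the POSIX API mentioned above: a thread is created with `pthread_create`, runs its own function on its own stack, and hands a result back through `pthread_join`. The helper name `spawn_square` is invented for illustration:

```c
#include <pthread.h>
#include <stdint.h>

static void *square(void *arg) {
    intptr_t x = (intptr_t)arg;      /* argument smuggled through a void* */
    return (void *)(x * x);          /* result passed back via pthread_join */
}

/* Run square() in a new thread and collect its result. */
long spawn_square(long x) {
    pthread_t tid;
    pthread_create(&tid, NULL, square, (void *)(intptr_t)x);
    void *result;
    pthread_join(tid, &result);      /* wait for the thread to finish */
    return (long)(intptr_t)result;
}
```

Note that the new thread shares all globals and open files with its creator; only the stack and registers are private.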

Synchronization Primitives

Concurrent access to shared resources can lead to race conditions. To avoid them, OSes provide several synchronization mechanisms:

Mutexes: binary locks that guarantee exclusive access to a critical section. Typical usage involves pthread_mutex_lock and pthread_mutex_unlock.

Semaphores: integer counters supporting down (wait) and up (signal) operations. They can represent resource pools (e.g., empty/full slots in a producer‑consumer buffer).

Condition Variables: used together with a mutex to block a thread until a specific condition becomes true, via pthread_cond_wait and pthread_cond_signal / pthread_cond_broadcast.

Monitors (e.g., Java synchronized methods): language‑level constructs that combine mutual exclusion and condition variables.

Futexes (Fast Userspace Mutexes): Linux’s hybrid approach that performs the fast path entirely in user space and falls back to a kernel wait queue only when contention occurs.
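The condition-variable pattern from the list above can be sketched as a one-shot "wait until ready" flag. This is an illustrative fragment (the names `set_ready`/`wait_ready` and the global flag are invented for the example):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static bool ready = false;

void set_ready(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&c);         /* wake one waiting thread */
    pthread_mutex_unlock(&m);
}

void wait_ready(void) {
    pthread_mutex_lock(&m);
    while (!ready)                   /* loop guards against spurious wakeups */
        pthread_cond_wait(&c, &m);   /* atomically releases m and sleeps */
    pthread_mutex_unlock(&m);
}
```

The `while` loop (rather than a plain `if`) matters: POSIX permits spurious wakeups, so the condition must be rechecked after every return from `pthread_cond_wait`.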

Example: Producer‑Consumer with Semaphores

#define N 100
#define TRUE 1
typedef int semaphore;  /* counter manipulated atomically by down/up */
semaphore mutex = 1;    /* binary semaphore for mutual exclusion */
semaphore empty = N;    /* counts empty slots */
semaphore full  = 0;    /* counts filled slots */

void producer(void) {
    while (TRUE) {
        int item = produce_item();
        down(&empty);
        down(&mutex);
        insert_item(item);
        up(&mutex);
        up(&full);
    }
}

void consumer(void) {
    while (TRUE) {
        down(&full);
        down(&mutex);
        int item = remove_item();
        up(&mutex);
        up(&empty);
        consume_item(item);
    }
}

This classic solution guarantees that producers block when the buffer is full and consumers block when it is empty, while the mutex prevents simultaneous buffer modifications.
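The pseudocode above maps directly onto POSIX semaphores, where down is `sem_wait` and up is `sem_post`. A runnable sketch (buffer size, indices, and the names `pc_init`/`put`/`get` are choices made for this example):

```c
#include <semaphore.h>

#define NSLOTS 8
static int buffer[NSLOTS];
static int in = 0, out = 0;          /* circular-buffer indices */
static sem_t empty_slots, full_slots, buf_mutex;

void pc_init(void) {
    sem_init(&empty_slots, 0, NSLOTS); /* all slots empty initially */
    sem_init(&full_slots,  0, 0);      /* no items yet */
    sem_init(&buf_mutex,   0, 1);      /* binary semaphore used as a mutex */
}

void put(int item) {                 /* producer side */
    sem_wait(&empty_slots);          /* down(empty): block if buffer is full */
    sem_wait(&buf_mutex);
    buffer[in] = item;
    in = (in + 1) % NSLOTS;
    sem_post(&buf_mutex);
    sem_post(&full_slots);           /* up(full): one more item available */
}

int get(void) {                      /* consumer side */
    sem_wait(&full_slots);           /* down(full): block if buffer is empty */
    sem_wait(&buf_mutex);
    int item = buffer[out];
    out = (out + 1) % NSLOTS;
    sem_post(&buf_mutex);
    sem_post(&empty_slots);          /* up(empty): one more free slot */
    return item;
}
```

The ordering matters: taking `buf_mutex` before `empty_slots` (or `full_slots`) would let a blocked producer hold the mutex and deadlock the consumer.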

Inter‑Process Communication (IPC)

When processes cannot share memory, they communicate via the message‑passing primitives send and receive. Reliable IPC often uses acknowledgments and sequence numbers to avoid lost or duplicated messages. Mailboxes or bounded buffers can be built on top of these primitives.
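On Unix, the simplest such channel is a pipe: one process writes, the other reads, and the kernel carries the bytes between their separate address spaces. A minimal sketch (the function name `pipe_roundtrip` and the "ping" payload are invented for this example):

```c
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Child sends a message over a pipe; parent receives it into buf. */
ssize_t pipe_roundtrip(char *buf, size_t len) {
    int fd[2];
    if (pipe(fd) < 0)                /* fd[0] = read end, fd[1] = write end */
        return -1;
    pid_t pid = fork();
    if (pid == 0) {                  /* child: plays the sender */
        close(fd[0]);
        const char *msg = "ping";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                    /* parent: plays the receiver */
    ssize_t n = read(fd[0], buf, len);  /* blocks until data arrives */
    close(fd[0]);
    return n;
}
```

The blocking `read` is what makes this a synchronization point as well as a data channel: the parent cannot proceed until the child has sent something.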

Scheduling Algorithms

The scheduler decides which runnable entity (process or thread) gets the CPU. Algorithms differ by system type:

Batch systems : often use non‑preemptive First‑Come‑First‑Served or Shortest Job First to maximize throughput and minimize turnaround time.

Interactive systems : employ pre‑emptive round‑robin with a time quantum (typically 20–50 ms) to ensure low response time and fairness.

Real‑time systems : require meeting hard deadlines; common approaches include Rate‑Monotonic Scheduling (RMS) for periodic tasks and Earliest‑Deadline‑First (EDF) for dynamic workloads.

Priority‑based scheduling can be combined with multiple queues (multilevel feedback queues) to balance responsiveness and throughput. Lottery scheduling offers a probabilistic fairness model where each process holds a number of tickets proportional to its share of CPU time.
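The core of lottery scheduling, described above, is just a weighted random draw over ticket ranges. A sketch of the selection step, with the randomness factored out so it is easy to test (`lottery_pick` is an illustrative name):

```c
/* Process i holds tickets[i] tickets. The winning ticket `draw` lands in
   whichever process's contiguous ticket range contains it. */
int lottery_pick(const int *tickets, int nproc, int draw) {
    int total = 0;
    for (int i = 0; i < nproc; i++)
        total += tickets[i];
    draw %= total;                   /* normalize draw into [0, total) */
    for (int i = 0; i < nproc; i++) {
        if (draw < tickets[i])
            return i;                /* draw falls in process i's range */
        draw -= tickets[i];
    }
    return -1;                       /* unreachable with valid input */
}
```

In a real scheduler, `draw` would come from a random-number generator on each scheduling decision; over many draws, each process wins in proportion to its ticket count.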

Thread vs. Process Scheduling

With user‑level threads, the kernel schedules the containing process; the thread library performs its own round‑robin or priority scheduling without kernel involvement, resulting in low context‑switch overhead. Kernel‑level threads are scheduled directly by the OS, allowing true pre‑emptive multitasking but incurring higher switch costs.

Conclusion

This article ties together core OS concepts, from process creation and thread models to synchronization primitives, IPC mechanisms, and scheduling strategies, providing a solid foundation for understanding how modern operating systems manage concurrency and resource allocation.

Tags: Scheduling, Synchronization, Operating Systems, Processes, Threads