Understanding Synchrony, Blocking, Process Switching, File Descriptors, and I/O Models (select, poll, epoll)
This article explains the differences between synchronous and asynchronous execution, blocking and non‑blocking operations, user and kernel space, process switching, file descriptors, cache I/O, and compares various I/O models—including blocking, non‑blocking, multiplexing, signal‑driven, and asynchronous—while highlighting the characteristics of select, poll, and epoll.
1. Synchronous vs Asynchronous – A synchronous task cannot report completion until the task it depends on has finished, so the two succeed or fail together; an asynchronous task returns without waiting, so the dependent task's eventual outcome is unknown at the moment the caller moves on.
2. Blocking vs Non‑Blocking – Blocking calls suspend the thread until the operation finishes; non‑blocking calls return immediately, requiring the program to poll for readiness, which can increase CPU usage.
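As a minimal illustration, the Python sketch below uses a local socket pair as a stand-in for a network connection: in non-blocking mode, the same recv call that would otherwise suspend the thread instead returns control immediately.

```python
import socket

# A connected Unix socket pair stands in for a real network connection.
a, b = socket.socketpair()

# In blocking mode (the default), b.recv(1024) here would suspend the
# thread until data arrived. In non-blocking mode it returns at once.
b.setblocking(False)
try:
    b.recv(1024)              # no data queued yet
    would_block = False
except BlockingIOError:       # the kernel's EWOULDBLOCK/EAGAIN
    would_block = True

a.sendall(b"hello")           # now data is queued on b's side
data = b.recv(1024)           # so the non-blocking read succeeds

a.close()
b.close()
```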
3. User Space vs Kernel Space – On a 32‑bit Linux system, the upper 1 GB of each process's 4 GB virtual address space (0xC0000000–0xFFFFFFFF) is reserved for the kernel, while the lower 3 GB (0x00000000–0xBFFFFFFF) is available to the user process.
4. Process Switching – The kernel saves the CPU context, updates the PCB, moves the process to appropriate queues, selects another process, updates memory structures, and restores the context; this operation is resource‑intensive.
5. Process Blocking – A running process may block voluntarily when awaiting resources or events, releasing the CPU while in the blocked state.
6. File Descriptors – A non‑negative integer that indexes the kernel’s per‑process file table, representing an open file; primarily relevant to Unix/Linux systems.
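A quick way to see this is to open a file through the raw OS interface. The sketch below uses Python's os and tempfile modules; the temporary file itself is incidental.

```python
import os
import tempfile

# mkstemp() asks the kernel to create and open a scratch file; the first
# return value is the file descriptor: a small non-negative integer that
# indexes this process's open-file table.
fd, path = tempfile.mkstemp()
descriptor_ok = isinstance(fd, int) and fd >= 0

os.write(fd, b"fd demo")   # raw I/O syscalls operate on the descriptor
os.close(fd)               # release the table slot
os.remove(path)
```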
7. Buffered (Cached) I/O – Most file‑system I/O goes through the kernel page cache: on a read, data is first copied from the device into the page cache and then from the page cache into the user‑space buffer, so each transfer incurs an extra copy.
II. I/O Models
Network I/O is fundamentally socket reads/writes. A read operation involves two stages: waiting for data to become ready and copying data from kernel to user space.
1. Blocking I/O Model – The calling process blocks until data is ready and then copies it from kernel to user space.
2. Non‑Blocking I/O Model – The socket is set to non‑blocking; if data is not yet ready, the call fails immediately with EWOULDBLOCK/EAGAIN, and the process must retry in a loop, burning CPU cycles on polls that find nothing.
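The polling cost is easy to see in a sketch. Below, a background thread (standing in for a slow remote peer) sends data after a delay, while the main thread must keep retrying the non-blocking read; every failed attempt is a system call that found nothing.

```python
import socket
import threading
import time

a, b = socket.socketpair()
b.setblocking(False)

def slow_peer():
    time.sleep(0.05)          # the "network" takes a while
    a.sendall(b"late data")

t = threading.Thread(target=slow_peer)
t.start()

attempts = 0
data = None
while data is None:
    attempts += 1             # each failed try is a wasted syscall
    try:
        data = b.recv(1024)
    except BlockingIOError:
        time.sleep(0.005)     # back off briefly between polls

t.join()
a.close()
b.close()
```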
3. I/O Multiplexing Model – Uses system calls such as select, poll, and epoll to monitor multiple sockets; the kernel notifies the process when any socket becomes readable, reducing unnecessary polling.
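A minimal sketch of multiplexing with select, using two local socket pairs as stand-ins for two client connections: the process sleeps in a single call and is woken only for the connection that actually has data.

```python
import select
import socket

# Two local socket pairs stand in for two client connections.
a1, b1 = socket.socketpair()
a2, b2 = socket.socketpair()

a2.sendall(b"ping")           # only the second "client" sends anything

# One call watches both sockets; it returns the subset that is readable.
readable, _, _ = select.select([b1, b2], [], [], 1.0)
msg = readable[0].recv(1024) if readable else None

for s in (a1, b1, a2, b2):
    s.close()
```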
4. Signal‑Driven I/O Model – The process registers a signal handler (e.g., for SIGIO) and continues execution; the kernel sends a signal when data is ready, allowing the handler to perform I/O.
5. Asynchronous I/O Model – The process issues an aio_read (or similar) and returns immediately; the kernel notifies the process when the operation completes, eliminating the need for the process to poll or block.
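Python's standard library has no direct wrapper for aio_read, but asyncio reproduces the same completion-style control flow: the caller submits the operation, remains free to do other work, and is resumed only when the result is ready. The slow_read coroutine below is a stand-in for a kernel-managed read, not real kernel AIO.

```python
import asyncio

async def slow_read():
    # Stand-in for a read the kernel completes in the background.
    await asyncio.sleep(0.05)
    return b"payload"

async def main():
    task = asyncio.create_task(slow_read())  # submit the operation
    did_other_work = True                    # caller is free meanwhile
    data = await task                        # resumed only on completion
    return did_other_work, data

result = asyncio.run(main())
```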
Comparison of I/O Models – The five models differ mainly in how the two phases are handled: who waits for the data to become ready, and who performs the kernel‑to‑user copy. In the first four models the process itself still blocks for the copy, so they are all forms of synchronous I/O; only the asynchronous model hands both phases to the kernel.
III. select, poll, epoll Differences
Key differences: select is capped at FD_SETSIZE descriptors per process (typically 1024) and, like poll, must copy the full descriptor set into the kernel and scan it linearly on every call, so its cost grows with the number of monitored descriptors. poll removes the fixed cap but keeps the linear scan. epoll registers descriptors with the kernel once and is then told only about the ready ones, so it scales to very large, mostly idle connection counts; for a small number of mostly active connections, select/poll can be just as fast and more portable.
Additional Notes – epoll supports both level‑triggered (LT) and edge‑triggered (ET) modes. LT keeps reporting a descriptor for as long as it remains ready; ET reports only when readiness changes, which means fewer wakeups but obliges the application to drain the descriptor (read until EWOULDBLOCK) before waiting again, or buffered data can sit unnoticed.
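The LT/ET difference can be observed directly in a small sketch (Linux only; a pipe stands in for a socket). Under LT, unread data keeps the descriptor reported as ready on every wait; after switching to ET, re-registering fires one event, but with no new data there are no further notifications.

```python
import os
import select

r, w = os.pipe()
ep = select.epoll()
ep.register(r, select.EPOLLIN)        # level-triggered by default

os.write(w, b"x")
first = ep.poll(timeout=0.1)          # readable: one event
second = ep.poll(timeout=0.1)         # LT: data still unread -> reported again
lt_repeats = bool(first) and bool(second)

# EPOLL_CTL_MOD rearms the descriptor, so one more event fires under ET,
# but with no NEW data afterwards there is no further notification.
ep.modify(r, select.EPOLLIN | select.EPOLLET)
third = ep.poll(timeout=0.1)
fourth = ep.poll(timeout=0.1)
et_fires_once = bool(third) and not fourth

ep.close()
os.close(r)
os.close(w)
```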
Architects' Tech Alliance