
Traditional System Call I/O, Read/Write Operations, and High‑Performance Optimizations in Linux

This article explains how Linux implements traditional system‑call I/O using read() and write(), details the data‑copy and context‑switch overhead of read and write operations, describes network and disk I/O, and introduces high‑performance techniques such as zero‑copy, multiplexing, and page‑cache optimizations.


Traditional System Call I/O

In Linux, the classic way to access files or network sockets is through the read() and write() system calls. A read copies data from the kernel into a user‑space buffer, while a write copies data from user space into a kernel‑space socket buffer before sending it to the NIC.

The traditional I/O path involves two CPU copies, two DMA copies, and four context switches (user→kernel and kernel→user for each call).

Read Operation

If the requested data is already in the kernel's page cache, it is read directly from memory. Otherwise the kernel loads the data from disk into a read buffer, then copies it to the user buffer.

read(file_fd, tmp_buf, len);

The read system call triggers two context switches, one DMA copy, and one CPU copy. The detailed steps are:

1. The user process calls read(), causing a switch from user space to kernel space.

2. The DMA controller moves data from disk (or, for a socket, from the NIC) into the kernel's read buffer, without involving the CPU.

3. The CPU copies data from the read buffer to the user buffer.

4. Context switches back to user space and the call returns.

Write Operation

When a process calls write(), data is first copied from the user buffer to the kernel’s socket buffer, then DMA transfers it to the NIC.

The write system call also incurs two context switches, one CPU copy, and one DMA copy. The steps are:

1. The user process calls write(), switching to kernel space.

2. The CPU copies data from the user buffer to the kernel socket buffer.

3. DMA moves the data from the socket buffer to the NIC for transmission.

4. Context switches back to user space and the call returns.

Network I/O

Disk I/O

High‑Performance Optimizations

Zero‑copy techniques

I/O multiplexing

Page cache (PageCache) optimizations

The page cache stores file data in memory, reducing disk I/O. When a read request hits the page cache, the kernel serves data directly from memory; otherwise it reads the required blocks from disk into the cache, often pre‑fetching adjacent pages.

Write operations mark pages as “dirty”. A background flusher thread writes dirty pages back to disk when memory is low, when a page has been dirty for too long, or when the application calls sync() or fsync().

Storage Device I/O Stack

The Linux I/O stack consists of three layers:

File‑system layer – copies user data into the file‑system cache.

Block layer – manages I/O queues, merges and schedules requests.

Device layer – uses DMA to transfer data between memory and the storage device.

These layers relate to buffered I/O, mmap, and Direct I/O. Buffered I/O follows the full stack, mmap maps the page cache directly into user space (eliminating one copy), and Direct I/O bypasses the page cache, copying data straight between user space and the block device.

I/O Buffering

At the user level, the C stdio library provides its own buffers to reduce the number of system calls. The kernel also maintains a buffer cache (PageCache) for file data and a separate buffer cache for raw device blocks.

Understanding these buffers, the page‑cache hierarchy, and the I/O stack is essential for designing high‑performance Linux applications.

Tags: Performance, I/O, Linux, Operating Systems, page cache, system calls
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
