
Server‑Side Network Concurrency Models and Linux I/O Multiplexing (select, epoll)

This article explains fundamental concepts of streams, I/O operations, blocking and non‑blocking behavior, compares blocking wait with busy polling, and then details five practical solutions—including multithreading, select, and epoll—while presenting Linux epoll API usage, code examples, and a comprehensive overview of seven common server concurrency models.

TAL Education Technology

The chapter begins with a brief introduction to the basic concepts of a stream, I/O operations, and the notion of blocking wait, using everyday analogies to illustrate how a full transmission medium causes write blocking and an empty medium causes read blocking.

It then contrasts blocking wait with non‑blocking busy‑polling, highlighting the CPU waste of the latter and the resource‑saving advantage of the former.

To overcome the drawbacks of blocking wait, five practical approaches are presented:

Multi‑threading / multi‑process (creating multiple "shadow selves" to handle concurrent I/O).

Non‑blocking busy‑polling (illustrated with pseudo‑code).

The select system call (described with pseudo‑code and its limitation of scanning the whole descriptor set).

The epoll system call (explained with diagrams and detailed API prototypes).

A brief mention of the epoll trigger modes: level‑triggered (LT) and edge‑triggered (ET).

Linux epoll API details are provided, including the three core functions:

/**
 * @param size a hint to the kernel about how many descriptors will be monitored
 * @returns an epoll handle (itself a file descriptor), or -1 on failure
 */
int epoll_create(int size);

/**
 * @param epfd the epoll handle created by epoll_create
 * @param op the control action to apply to the monitored descriptor
 *           (EPOLL_CTL_ADD, EPOLL_CTL_MOD, or EPOLL_CTL_DEL)
 * @param fd the file descriptor to monitor
 * @param event the events the kernel should watch for on fd
 * @returns 0 on success, -1 on failure
 */
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);

/**
 * @param epfd the epoll handle created by epoll_create
 * @param event the array the kernel fills with ready events
 * @param maxevents the capacity of the event array
 * @param timeout -1 blocks indefinitely, 0 returns immediately,
 *                >0 is a timeout in milliseconds
 * @returns the number of ready file descriptors on success, -1 on failure
 */
int epoll_wait(int epfd, struct epoll_event *event, int maxevents, int timeout);

A complete example of a simple epoll‑based echo server:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/epoll.h>
#define SERVER_PORT (7778)
#define EPOLL_MAX_NUM (2048)
#define BUFFER_MAX_LEN (4096)

char buffer[BUFFER_MAX_LEN];

void str_toupper(char *str) {
    for (size_t i = 0, n = strlen(str); i < n; i++) {
        str[i] = toupper((unsigned char)str[i]);
    }
}

int main(int argc, char **argv) {
    int listen_fd = 0, client_fd = 0;
    struct sockaddr_in server_addr, client_addr;
    socklen_t client_len = sizeof(client_addr);
    int epfd = epoll_create(EPOLL_MAX_NUM);
    struct epoll_event event, *my_events;
    // socket, bind, listen omitted for brevity
    event.events = EPOLLIN;
    event.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &event);
    my_events = malloc(sizeof(struct epoll_event) * EPOLL_MAX_NUM);
    while (1) {
        int active_fds_cnt = epoll_wait(epfd, my_events, EPOLL_MAX_NUM, -1);
        for (int i = 0; i < active_fds_cnt; i++) {
            if (my_events[i].data.fd == listen_fd) {
                client_fd = accept(listen_fd, (struct sockaddr*)&client_addr, &client_len);
                event.events = EPOLLIN | EPOLLET;
                event.data.fd = client_fd;
                epoll_ctl(epfd, EPOLL_CTL_ADD, client_fd, &event);
            } else if (my_events[i].events & EPOLLIN) {
                /* Use the fd that actually fired, not the most recently
                 * accepted client_fd. */
                int fd = my_events[i].data.fd;
                int n = read(fd, buffer, BUFFER_MAX_LEN - 1);
                if (n > 0) {
                    buffer[n] = '\0';
                    str_toupper(buffer);
                    write(fd, buffer, strlen(buffer));
                } else if (n == 0) {
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
            }
            }
        }
    }
    close(epfd);
    close(listen_fd);
    return 0;
}

A matching client implementation:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#define MAX_LINE (1024)
#define SERVER_PORT (7778)

int main(int argc, char **argv) {
    int sockfd;
    char recvline[MAX_LINE + 1] = {0};
    struct sockaddr_in server_addr;
    // argument check, socket, connect omitted for brevity
    // put the socket into non-blocking mode
    fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL, 0) | O_NONBLOCK);
    char input[100];
    while (fgets(input, 100, stdin) != NULL) {
        send(sockfd, input, strlen(input), 0);
        int count = 0;
        while (1) {
            /* cap each read at the remaining space so recvline cannot overflow */
            int n = read(sockfd, recvline + count, MAX_LINE - count);
            if (n > 0 && n == MAX_LINE - count) { count += n; continue; }
            if (n < 0) break;   /* EAGAIN on the non-blocking socket: stop waiting */
            count += n;
            recvline[count] = '\0';
            printf("[recv] %s\n", recvline);
            break;
        }
    }
    return 0;
}

Beyond the epoll example, the article enumerates seven server concurrency models, describing the workflow, advantages, and disadvantages of each:

Single‑thread accept.

Single‑thread accept + multithreaded business handling.

Single thread with I/O multiplexing.

I/O multiplexing + worker pool.

I/O multiplexing + thread pool.

A process‑based pool.

A hybrid model with per‑connection threads.

It concludes that model 5, I/O multiplexing + thread pool (or its process‑based variant), is the most widely adopted in high‑performance servers such as Nginx.

Tags: Linux, multithreading, epoll, select, server architecture, I/O multiplexing, network concurrency
Written by

TAL Education Technology

TAL Education is a technology-driven education company committed to the mission of 'making education better through love and technology'. The TAL technology team has always been dedicated to educational technology research and innovation. This is the external platform of the TAL technology team, sharing weekly curated technical articles and recruitment information.
