
Design and Implementation of iQiyi's libfiber Network Coroutine Library

Using iQiyi’s open‑source libfiber as a case study, the article explains how network coroutines combine sequential programming simplicity with non‑blocking I/O scalability, detailing libfiber’s single‑threaded scheduler, cross‑platform event engine, coroutine‑aware synchronization, API hooking, and its deployment in high‑performance CDN caching and DNS services.

iQIYI Technical Product Team

This article uses iQiyi's open‑source network coroutine library libfiber as a case study to explain the design principles, programming practices, and performance optimizations of network coroutines.

1. Overview

Early high-concurrency services relied on a process-per-connection model and later on thread-per-connection servers such as Apache. Both models use blocking I/O, which wastes CPU and memory while waiting on slow connections. Non-blocking network programming (e.g., epoll, kqueue), as popularized by event-driven servers such as Nginx, scales to far more connections but is hard to program because business logic becomes fragmented across many callbacks.

2. Why Coroutines?

Coroutines combine the simplicity of sequential code with the scalability of non-blocking I/O. The underlying mechanisms date back to Windows NT fibers and the Unix ucontext.h API, and Go's goroutines popularized coroutine-based high-concurrency networking.

2.1 Non-blocking Network Programming

The typical design registers socket read/write events with an OS event engine (select/poll/epoll/kqueue), receives readiness notifications, and processes data in multiple I/O stages, which requires explicit buffering and state management.

2.2 Network Coroutine Programming

A network coroutine transforms a blocking I/O call into a non-blocking one internally, letting developers write straightforward, sequential code while the runtime handles context switches.

2.3 Coroutine Switching

Switching can be "star-shaped" (every switch goes through a central scheduler) or "ring-shaped" (one coroutine hands off control directly to the next). Ring-shaped switching reduces the number of context switches, since a direct hand-off replaces the two switches through the scheduler, and so improves efficiency.

2.4 Example

A simple echo server built with libfiber creates a listening coroutine that blocks on accept(), spawns a client coroutine for each connection, and performs blocking reads and writes inside that coroutine.
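A sketch of such an echo server, assuming libfiber's public C API as shown in the project's README (acl_fiber_create and acl_fiber_schedule; the header path, stack size, and error handling here are our assumptions):

```c
#include "fiber/lib_fiber.h"   /* libfiber public header (path assumed) */
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static void client_fiber(ACL_FIBER *fiber, void *ctx) {
    (void)fiber;
    int fd = (int)(intptr_t)ctx;
    char buf[4096];
    ssize_t n;
    /* read()/write() look blocking but yield to the scheduler internally */
    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        if (write(fd, buf, n) != n)
            break;
    }
    close(fd);
}

static void listen_fiber(ACL_FIBER *fiber, void *ctx) {
    (void)fiber;
    int lfd = (int)(intptr_t)ctx;
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);  /* yields; does not block the thread */
        if (cfd >= 0)  /* one coroutine per connection */
            acl_fiber_create(client_fiber, (void *)(intptr_t)cfd, 128000);
    }
}

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(8088);
    bind(lfd, (struct sockaddr *)&sa, sizeof(sa));
    listen(lfd, 128);

    acl_fiber_create(listen_fiber, (void *)(intptr_t)lfd, 128000);
    acl_fiber_schedule();   /* run this thread's scheduler loop */
    return 0;
}
```

The business logic reads top to bottom like blocking code, while the runtime multiplexes all connections on one thread.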

3. Core Design of libfiber

Single‑threaded scheduler to avoid cache‑coherency penalties and lock contention.

Event‑engine abstraction supporting Linux (epoll), BSD/macOS (kqueue), and Windows (iocp, message loop).

Optimizations such as merging consecutive epoll_ctl calls to reduce CPU usage.

Synchronization primitives tailored for coroutine environments: coroutine mutexes, event‑based locks, condition variables, and semaphores.

3.1 Coroutine Scheduling

libfiber runs one scheduler per thread. To utilize multiple cores, a deployment starts multiple processes or threads, each with its own scheduler.

3.2 Event Engine Design

Cross-platform support for select/poll/epoll/kqueue/iocp, with optional Windows message-loop integration.

3.3 Synchronization Mechanisms

Includes coroutine mutexes (within a single thread), event-based locks for coordinating coroutines across threads, condition variables for producer-consumer patterns, and semaphores to limit concurrent access to backend resources.

3.4 DNS Integration

libfiber integrates a third-party DNS resolver so that hostname lookups run inside coroutines, avoiding the overhead of a thread per lookup.

3.5 System API Hooking

On Unix, libfiber hooks common I/O and network APIs (read, write, socket, epoll, etc.) so that existing blocking libraries (MySQL client, HTTP libraries, Redis client) run under the coroutine model without source changes.

4. Real‑World Applications at iQiyi

4.1 CDN Cache/Origin Fetch ("奇迅")

The cache-origin fetcher uses a multi-threaded, multi-coroutine architecture to achieve high concurrency, low latency, and efficient bandwidth usage. Features include request merging, resumable downloads, random-position fetching, and data-integrity checks. libfiber's event locks resolve cross-thread coroutine contention.

4.2 High-Performance DNS (HPDNS)

iQiyi's custom DNS service processes over 2 million queries per second on a single machine, supports hot view updates via RCU and automatic IP rebinding, and achieves high availability through Keepalived. The TCP path of HPDNS is built with libfiber to handle massive numbers of concurrent connections.

5. Summary

The article presents the design rationale and core components of libfiber, shows how coroutine-based networking simplifies high-concurrency server development, and shares practical lessons from iQiyi's CDN and DNS systems.

Tags: performance optimization, high concurrency, network programming, event-driven, coroutine, libfiber