Understanding Apache’s Three MPMs and Why Nginx Excels in High‑Concurrency Scenarios

The article explains Apache’s three processing models (prefork, worker, event), compares their resource usage with Nginx, and demonstrates how Nginx’s event‑driven, single‑threaded architecture achieves far higher concurrent connection capacity and better performance for web services.


1. Apache's Three Working Modes

Apache provides three Multi‑Processing Modules (MPMs): prefork, worker, and event.

prefork: a multi‑process model where each request is handled by a separate process, using the select mechanism for notifications.

worker: a hybrid multi‑threaded model; a process spawns multiple threads, each handling a request, still relying on select but supporting more simultaneous connections.

event: an asynchronous I/O model based on epoll; a single process or thread can handle many requests using an event‑driven callback mechanism.

1.1 prefork Details

If no MPM is explicitly specified, prefork is the default on Unix. It creates a pool of child processes, each handling one request at a time, without using threads. This makes it very stable, safe to use with non-thread-safe libraries, and compatible with the behavior of older Apache versions.
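A typical prefork tuning block in httpd.conf looks like the following sketch; the directives are standard Apache 2.4 prefork settings, but the values are purely illustrative, not recommendations:

```apacheconf
# httpd.conf — prefork MPM sketch (values are illustrative)
<IfModule mpm_prefork_module>
    StartServers             5      # child processes created at startup
    MinSpareServers          5      # idle children kept warm for new requests
    MaxSpareServers         10
    MaxRequestWorkers      250      # hard cap on simultaneous requests
    MaxConnectionsPerChild   0      # 0 = never recycle a child process
</IfModule>
```

Because every simultaneous request needs its own process, MaxRequestWorkers directly bounds both concurrency and memory use.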

1.2 worker Details

Worker combines multiple processes with multiple threads per process, allowing a larger number of simultaneous requests while keeping the stability of a process‑based server.
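The worker MPM's hybrid model is configured with both process-level and thread-level directives; again, a sketch with illustrative values:

```apacheconf
# httpd.conf — worker MPM sketch (values are illustrative)
<IfModule mpm_worker_module>
    ServerLimit             16      # upper bound on child processes
    StartServers             3
    ThreadsPerChild         25      # threads inside each child process
    MinSpareThreads         75
    MaxSpareThreads        250
    MaxRequestWorkers      400      # total = processes × threads per child
</IfModule>
```

With threads carrying the per-connection cost, the same memory budget supports far more simultaneous requests than prefork.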

1.3 event Details

The event MPM uses a single process to handle many connections via epoll callbacks, keeping the process idle when no data is ready and thus supporting massive concurrency with minimal resource consumption.
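The event MPM reuses the worker directives and adds a knob for how many extra in-flight (keep-alive or async) connections each idle thread may carry; values below are illustrative:

```apacheconf
# httpd.conf — event MPM sketch (values are illustrative)
<IfModule mpm_event_module>
    StartServers              3
    ThreadsPerChild          25
    MaxRequestWorkers       400
    AsyncRequestWorkerFactor  2     # extra async connections per idle thread
</IfModule>
```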

2. How to Improve Web Server Concurrency Handling

Key conditions for high concurrency include:

Thread‑based processing (multiple threads per process).

Event‑driven models (e.g., epoll) that notify when sockets are ready.

Asynchronous I/O (AIO) support.

Memory‑mapped file access (mmap) to avoid extra copying.
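The readiness-notification idea behind these conditions can be sketched in a few lines of Python on Linux, where `select.epoll` is a thin wrapper over the epoll system calls (a pipe stands in for a client socket):

```python
import os
import select

def wait_readable(fd, timeout=1.0):
    """Block until fd is readable, using epoll readiness notification."""
    ep = select.epoll()
    ep.register(fd, select.EPOLLIN)   # tell the kernel once what we care about
    events = ep.poll(timeout)         # returns only descriptors that are ready
    ep.close()
    return [f for f, _ in events]

# A pipe stands in for a client connection.
r, w = os.pipe()
os.write(w, b"request")               # data arrives -> r becomes readable
print(wait_readable(r))               # the read end is reported as ready
```

The server process sleeps in `ep.poll()` and is woken only when a socket actually has data, instead of burning a process or thread per idle connection.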

Nginx supports all these features, which is why its official documentation claims it can handle 50,000 concurrent connections.

3. Advantages of Nginx

Traditional process‑ or thread‑based servers suffer from blocking I/O, high context‑switch overhead, and low CPU/memory utilization. Creating a new process or thread requires allocating stack and heap memory and setting up execution contexts, which consumes CPU cycles.

Nginx adopts a modular, event‑driven, asynchronous, single‑threaded architecture with non‑blocking I/O and multiplexing, allowing each worker process (usually one thread) to handle thousands of simultaneous connections efficiently.
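The single-threaded, multiplexed worker model can be illustrated with Python's `selectors` module (epoll-backed on Linux). This is a sketch of the idea, not Nginx's actual implementation; `echo_once` and the socketpair "clients" are hypothetical stand-ins:

```python
import selectors
import socket

# One thread, one selector, many connections — the core of the worker model.
sel = selectors.DefaultSelector()

def echo_once(pairs=3):
    """Serve several in-memory connections from a single thread."""
    conns = []
    for _ in range(pairs):
        a, b = socket.socketpair()    # stands in for an accepted client socket
        a.setblocking(False)          # non-blocking I/O on the server side
        sel.register(a, selectors.EVENT_READ)
        conns.append((a, b))
        b.sendall(b"ping")            # every "client" sends a request
    replies = 0
    while replies < pairs:
        for key, _ in sel.select(timeout=1.0):   # only ready sockets returned
            data = key.fileobj.recv(1024)        # never blocks: data is ready
            key.fileobj.sendall(data.upper())    # respond immediately
            sel.unregister(key.fileobj)
            replies += 1
    return [b.recv(1024) for _, b in conns]

print(echo_once())   # one thread served every connection
```

No connection ever blocks the loop: the selector hands back only sockets with work to do, so one thread interleaves them all.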

4. Nginx Working Principle

Nginx runs a master process and several worker processes; optional cache loader and cache manager processes appear when caching is configured. All processes are single‑threaded and communicate via shared memory. The master typically runs as root (so it can bind privileged ports), while workers run as a non‑privileged user.
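In nginx.conf this process model is controlled by a handful of core directives; the values below are illustrative:

```nginx
# nginx.conf — process-model sketch (values are illustrative)
user  www-data;                # workers drop to this non-privileged user
worker_processes  auto;        # one single-threaded worker per CPU core

events {
    use epoll;                 # event-driven multiplexing on Linux
    worker_connections  10240; # connections handled by each worker
}
```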

In high‑concurrency scenarios, Nginx can replace Apache effectively.

5. Nginx Was Created to Solve the C10K Problem

Compare the I/O multiplexing models available to Apache and Nginx:

select: limited to a fixed maximum number of file descriptors (FD_SETSIZE, typically 1024); every call linearly scans all registered descriptors, so overhead grows with the number of connections.

poll: removes the hard descriptor limit but still passes in and scans the entire descriptor list on every call.

epoll (used by Nginx): descriptors are registered with the kernel once, and each wait returns only the descriptors that are actually ready, with both level‑triggered and edge‑triggered notification supported. Because no full rescan or repeated copying of the descriptor set is needed, its cost stays roughly constant as connections grow, dramatically improving performance on Linux.
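The contrast with select/poll can be seen directly: register many descriptors once with epoll, make just one of them ready, and the kernel reports only that one (Python on Linux; a sketch of the API contract, not a benchmark):

```python
import os
import select

ep = select.epoll()
pipes = [os.pipe() for _ in range(200)]     # 200 idle "connections"
for r, _ in pipes:
    ep.register(r, select.EPOLLIN)          # registered once, not on every call

busy_r, busy_w = pipes[42]
os.write(busy_w, b"data")                   # exactly one becomes readable

ready = ep.poll(1.0)                        # only the ready descriptor is returned
print(len(ready), ready[0][0] == busy_r)    # → 1 True
```

With select or poll, every call would have to hand the kernel all 200 descriptors and scan the whole set; with epoll the per-call work is proportional to the number of *ready* descriptors, not registered ones.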

Although epoll is Linux‑specific, its efficiency makes Nginx a superior choice for high‑traffic web services.

Performance Comparison Example

On a 4 GB memory server, Apache (prefork) can handle roughly 3,000 concurrent connections while consuming over 3 GB of RAM, and it crashes when MaxClients is set too high. In contrast, an Nginx + PHP FastCGI setup handled more than 30,000 concurrent connections using only about 150 MB for 10 Nginx processes (10 × 15 MB) and 1,280 MB for 64 php‑cgi processes (64 × 20 MB), staying under 2 GB of total memory.

Even under 30,000 concurrent connections, PHP scripts executed via Nginx + FastCGI remained fast.

Written by Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
