
Nginx High-Concurrency Optimization Techniques

This article explains how to achieve millions of concurrent connections with Nginx by tuning OS limits, worker processes, epoll event handling, gzip compression, and zero‑copy file transfer, with concrete configuration snippets and the performance rationale for each optimization.

Mike Chen's Internet Architecture

Hello, I am mikechen.

Achieving millions of concurrent connections relies not only on Nginx's own performance but also on comprehensive coordination of the operating system, network, and configuration.

Connection Configuration Optimization

The theoretical maximum number of concurrent connections is worker_processes × worker_connections.

worker_connections 65535;
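The worker count multiplies this ceiling. A minimal sketch (worker_cpu_affinity is optional and not part of the snippet above):

worker_processes auto;        # one worker per CPU core; an explicit number such as 16 also works
worker_cpu_affinity auto;     # optionally pin each worker to a core to reduce cache misses

With 16 cores and worker_connections 65535, the formula above gives a theoretical ceiling of roughly one million connections (16 × 65535 = 1,048,560).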

Each worker process can handle multiple connections independently, so increasing the number of workers improves parallel processing capability. Additionally, raising the maximum number of file descriptors a worker can open supports more concurrent connections.

worker_rlimit_nofile 65535;

Each connection requires a file descriptor; increasing this limit allows more connections.
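worker_rlimit_nofile raises the limit from inside Nginx, but it cannot exceed what the operating system allows. A sketch of the matching Linux-side settings (paths and values are illustrative, mirroring the 65535 used above):

# /etc/security/limits.conf — per-user open-file limits for the nginx user
nginx  soft  nofile  65535
nginx  hard  nofile  65535

# sysctl (e.g. /etc/sysctl.conf) — system-wide file handle and listen backlog ceilings
fs.file-max = 1000000
net.core.somaxconn = 65535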

Connection Handling Optimization

epoll, the most efficient event-handling mechanism available on Linux, is well suited to high concurrency:

events {
    use epoll;
    multi_accept on;
}

In the events block, use epoll; explicitly selects the epoll module (if the directive is omitted, Nginx picks the best available method on its own). Unlike select (limited by file descriptor count, typically 1024) and poll (no hard descriptor limit, but still scanning every descriptor on each call), epoll notifies the application only when events actually occur, avoiding unnecessary polling and greatly improving efficiency. multi_accept on; additionally lets a worker accept all pending connections at once instead of one per event-loop iteration.

Cache and Compression Optimization

Enable Gzip compression to reduce bandwidth usage:

gzip on;
gzip_min_length 1k;
gzip_types text/plain text/css application/json;
gzip_comp_level 5;

Gzip reduces response size and improves transmission efficiency, while gzip_min_length avoids compressing files that are too small, saving CPU cycles.
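Two directives are often paired with the block above (a sketch, not part of the original snippet):

gzip_vary on;        # add "Vary: Accept-Encoding" so caches store both variants
gzip_proxied any;    # also compress responses served to proxied requests

gzip_comp_level 5 is a reasonable middle ground; higher levels cost noticeably more CPU for diminishing size gains.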

File Transfer Optimization

Use the kernel's zero‑copy mechanism to send files directly from disk to the network socket, minimizing copies between user and kernel space:

sendfile on;

This reduces data copying overhead and speeds up static file delivery.
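sendfile is commonly combined with TCP socket options; a sketch:

sendfile on;
tcp_nopush on;       # with sendfile, send response headers and the start of the file in full packets
tcp_nodelay on;      # disable Nagle's algorithm on keep-alive connections

Note that tcp_nopush only takes effect when sendfile is enabled.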

By combining these optimization strategies, Nginx performance in high‑concurrency scenarios can be significantly improved.


Tags: Performance optimization, Linux, High Concurrency, sendfile, Nginx, Gzip, epoll
Written by

Mike Chen's Internet Architecture

Over ten years of BAT architecture experience, shared generously!
