
Five Key Nginx Configuration Tweaks to Boost High‑Concurrency Performance

This article explains five essential Nginx parameters—worker_processes, worker_connections, keepalive_timeout, gzip compression, and file‑caching settings—along with practical examples and code snippets to dramatically improve high‑concurrency handling on servers.

Mike Chen's Internet Architecture

Nginx is one of the most popular web servers worldwide, known for its event‑driven architecture that uses a master process to manage multiple worker processes, each handling many active connections via a non‑blocking event loop.

[Figure: Nginx high‑concurrency optimization]

The following five core directives are critical for tuning Nginx in high‑traffic scenarios:

worker_processes

The worker_processes directive sets the number of worker processes. It can be configured manually or automatically using the auto keyword, which detects the number of CPU cores.

worker_processes auto;

events {
    worker_connections 1024;
}

For a server with four CPU cores, you might set:

worker_processes 4;

events {
    worker_connections 1024;
}

Example for a medium‑size server (8 CPU cores, 16 GB RAM):

worker_processes 8;

events {
    worker_connections 16384;
    multi_accept on;
    use epoll;
}
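Since worker_processes auto simply matches the machine's core count, you can check in advance what value it will pick. A quick check on Linux (assuming coreutils is installed):

```shell
# Number of CPU cores available — what `worker_processes auto` will use
nproc

# Equivalent reading from /proc on Linux
grep -c ^processor /proc/cpuinfo
```

Note that nproc respects CPU affinity and cgroup limits, so inside a constrained container it may report fewer cores than /proc/cpuinfo lists.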

worker_connections

The worker_connections directive defines the maximum number of simultaneous connections each worker process can handle. The compiled‑in default is 512 (stock configuration files commonly raise it to 1024), but it should be increased further based on available system resources.

events {
    worker_connections 65535;
}

Maximum theoretical connections = worker_processes × worker_connections. Adjust worker_connections according to CPU, memory, and network bandwidth.

Recommended ranges:

Small servers: 4096 ~ 8192

Medium servers: 16384 ~ 32768

Large servers: 65536 or higher
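The capacity formula above can be sketched as a quick calculation. This is a hypothetical helper, not part of Nginx; the halving in reverse‑proxy mode reflects the fact that each proxied client request also opens an upstream connection, consuming two connection slots:

```python
def max_clients(worker_processes: int, worker_connections: int,
                reverse_proxy: bool = False) -> int:
    """Theoretical maximum simultaneous clients for an Nginx instance.

    As a reverse proxy, each client request also holds an upstream
    connection, so effective client capacity is roughly halved.
    """
    total = worker_processes * worker_connections
    return total // 2 if reverse_proxy else total

# The medium-server example: 8 workers x 16384 connections
print(max_clients(8, 16384))                      # 131072 serving static files
print(max_clients(8, 16384, reverse_proxy=True))  # 65536 when proxying upstreams
```

In practice the real ceiling is usually set by file descriptors, memory, and bandwidth before this theoretical number is reached.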

keepalive_timeout

This directive controls the HTTP persistent‑connection timeout. A typical value is 60 seconds; in very high‑concurrency environments a shorter timeout (e.g., 30 seconds) reduces idle connection overhead.

keepalive_timeout 60;   # typical setting for normal traffic
keepalive_timeout 30;   # shorter timeout for very high concurrency
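keepalive_timeout is often tuned together with keepalive_requests, which caps how many requests a single persistent connection may serve before being recycled. A sketch (values are illustrative, not from the original article):

```
http {
    keepalive_timeout  30s;   # drop idle connections quickly under heavy load
    keepalive_requests 1000;  # recycle a connection after 1000 requests
}
```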

gzip compression

Enabling gzip reduces the amount of data transferred, improving bandwidth utilization and request handling capacity.

gzip on;
gzip_min_length 1k;   # skip compressing very small responses
gzip_comp_level 2;    # low level balances CPU cost against compression ratio
gzip_types text/plain application/javascript text/css application/xml;
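To see why a low gzip_comp_level is usually sufficient, the size/CPU trade‑off can be sketched with Python's gzip module. This is illustrative only; Nginx uses zlib with the same level semantics (1 = fastest, 9 = smallest):

```python
import gzip

# Repetitive markup stands in for typical HTML/CSS/JS payloads
payload = b"<div class='row'><span>hello world</span></div>\n" * 200

for level in (1, 2, 6, 9):
    size = len(gzip.compress(payload, compresslevel=level))
    print(f"level {level}: {len(payload)} -> {size} bytes")
```

On text like this, level 2 already achieves most of the reduction that level 9 does, at a fraction of the CPU cost, which is why the article recommends it for high‑concurrency servers.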

File caching

Cache open file descriptors to avoid frequent file‑open operations, which speeds up static resource delivery.

open_file_cache max=10000 inactive=20s;   # cache up to 10,000 descriptors; evict after 20s idle
open_file_cache_valid 30s;                # revalidate cached entries every 30s
open_file_cache_min_uses 1;               # cache a file after a single access
open_file_cache_errors on;                # also cache lookup errors (e.g., missing files)

Static file caching example:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
    add_header Cache-Control public;
}
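A slightly extended variant of the block above (a sketch; the immutable hint and access_log off are common optional additions, not part of the original snippet):

```
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
    add_header Cache-Control "public, immutable";
    access_log off;   # skip logging static assets to cut disk I/O
}
```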

Additionally, increase the file‑descriptor limit for each worker using worker_rlimit_nofile and the system ulimit command.

worker_rlimit_nofile 65535;
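worker_rlimit_nofile only raises the limit for the worker processes themselves; the shell or service manager launching Nginx may also need adjusting (e.g., LimitNOFILE in a systemd unit). Checking the current limits on Linux:

```shell
# Soft and hard limits on open file descriptors for the current shell
ulimit -Sn
ulimit -Hn
```

Each connection consumes at least one file descriptor (two when proxying), so the effective limit should comfortably exceed worker_connections.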

By applying these configurations—optimizing process count, connection limits, keep‑alive settings, compression, and caching—Nginx can handle significantly higher concurrent loads while maintaining low CPU and memory usage.

