Linux System Parameter and Nginx Configuration Optimization Guide
This guide explains how to improve web service performance by tuning Linux system parameters and Nginx configuration, covering file descriptor limits, TCP connection queues, temporary port ranges, worker processes, keepalive settings, and access‑log buffering, with concrete sysctl and Nginx directives.
Web service performance tuning is a systematic engineering effort: a single weak link can drag down overall performance, and strengthening that weak link is often enough to meet requirements without chasing extreme optimization everywhere else.
The Linux system parameters below require kernel 2.6 or later (the author used CentOS 7.4 with kernel 3.10). Typical adjustments include file descriptor limits, connection queue lengths, and temporary port ranges.
File Descriptor Limits
Each TCP connection consumes a file descriptor; exhausting them yields “Too many open files” errors. Raise the system‑wide limits in /etc/sysctl.conf:
fs.file-max = 10000000
fs.nr_open = 10000000

And the user‑level limits in /etc/security/limits.conf:
* hard nofile 1000000
* soft nofile 1000000

After editing, apply the sysctl changes with:
$ sysctl -p

The limits.conf changes take effect on the next login session; verify with ulimit -a.
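A quick way to confirm the limits actually in effect is to read them back from the kernel. The following read‑only sketch assumes a Linux /proc filesystem:

```shell
# System-wide ceiling on open file handles
echo "system-wide max file handles: $(cat /proc/sys/fs/file-max)"

# file-nr reports: currently allocated handles, free handles
# (historical field, 0 on modern kernels), and the maximum
cat /proc/sys/fs/file-nr

# Soft nofile limit for the current shell session
ulimit -n
```

If `ulimit -n` still shows the old value after editing limits.conf, start a fresh login session before retesting.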
TCP Connection Queue Length
Edit /etc/sysctl.conf to increase the SYN backlog and accept queue:
# The length of the syn queue
net.ipv4.tcp_max_syn_backlog = 65535
# The length of the tcp accept queue
net.core.somaxconn = 65535

tcp_max_syn_backlog controls the half‑open SYN queue; somaxconn caps the full‑connection accept queue. When either queue overflows, new SYN packets are dropped, and the drops are counted in the kernel’s ListenOverflows and ListenDrops counters. A full accept queue can surface to clients as “connection reset by peer”, and Nginx may log “no live upstreams while connecting to upstreams”.
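To check whether either queue is actually overflowing, the kernel exposes cumulative counters in /proc/net/netstat. A read‑only sketch, assuming a Linux 2.6+ kernel (tools like `nstat -az` or `netstat -s` report the same counters):

```shell
# /proc/net/netstat stores TcpExt as a header line followed by a value
# line; pair them up and print the two queue-overflow counters.
# Both should stay at 0 on a healthy box.
awk '/^TcpExt:/ {
    if (have_hdr) { for (i = 2; i <= n; i++) val[name[i]] = $i }
    else { n = NF; for (i = 2; i <= NF; i++) name[i] = $i; have_hdr = 1 }
}
END {
    print "ListenOverflows:", val["ListenOverflows"]
    print "ListenDrops:", val["ListenDrops"]
}' /proc/net/netstat
```

Counters that keep climbing under load are the signal that the sysctl values above (and the `backlog` parameter of Nginx’s `listen` directive, which somaxconn caps) need revisiting.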
Temporary Port Range
For Nginx acting as a proxy, each upstream TCP connection consumes a temporary (ephemeral) port. Adjust ip_local_port_range in /etc/sysctl.conf:
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_local_reserved_ports = 8080,8081,9000-9010

ip_local_reserved_ports keeps the listed ports out of the ephemeral range, so the kernel never hands them out to outgoing connections and they stay free for local services to listen on.
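The arithmetic behind the range is worth making explicit. A small sketch of how many outgoing ports the settings above actually provide:

```shell
# Ports available with ip_local_port_range = "1024 65535"
low=1024
high=65535
total=$((high - low + 1))
echo "ports in range: $total"    # 64512

# The reserved list removes 8080, 8081 and 9000-9010: 2 + 11 = 13 ports
echo "usable for outgoing connections: $((total - 13))"    # 64499
```

Roughly 64k upstream connections per local IP is therefore the hard ceiling for a proxy talking to a single upstream address; beyond that, additional local IPs or upstream keep‑alive (below) are needed.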
Nginx Parameter Optimization
Worker Processes and Connections
Nginx’s strength lies in its multi‑process, non‑blocking I/O model. Set the number of workers to match CPU cores:
worker_processes auto;

Increase the per‑worker connection limit (a directive of the events block):

worker_connections 4096;

Select epoll, the most efficient I/O multiplexing method on Linux (also set inside the events block):

use epoll;

KeepAlive
Enable HTTP/1.1 keep‑alive to reduce connection churn. The keepalive directive defines the maximum idle upstream connections per worker:
upstream BACKEND {
keepalive 300;
server 127.0.0.1:8081;
}
server {
listen 8080;
location / {
proxy_pass http://BACKEND;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}

Per the official documentation, the keepalive parameter sets the maximum number of idle keep‑alive connections to upstream servers preserved in the cache of each worker process; when this number is exceeded, the least recently used connections are closed.
For a target of 6000 QPS at 200 ms response time, roughly 6000 × 0.2 = 1200 concurrent upstream connections are needed; a keepalive value of 10‑30 % of that (e.g., 300) works well.
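The sizing rule above is just Little’s law (concurrency ≈ arrival rate × service time). A minimal sketch of the calculation, using the numbers from the text:

```shell
# Little's law: concurrent connections = QPS x response time (seconds)
qps=6000
resp_ms=200
concurrent=$((qps * resp_ms / 1000))
echo "concurrent upstream connections: $concurrent"    # 1200

# Cache 10-30% of that as idle keep-alive connections; 25% gives
# the keepalive 300 used in the config above
echo "keepalive: $((concurrent * 25 / 100))"           # 300
```

Re-run the arithmetic with your own measured QPS and latency rather than copying 300 verbatim; an oversized keepalive cache just pins idle sockets on the upstream.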
Access‑Log Buffering
Logging I/O can be costly. Enable buffering to reduce write frequency:
access_log /var/logs/nginx-access.log buffer=64k gzip flush=1m;

buffer sets how much log data accumulates in memory before a write; flush sets the maximum time a line may sit in the buffer before being forced to disk. The gzip parameter compresses the buffer before writing and requires Nginx built with zlib.
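Putting the directive in context, a sketch of an http block with buffered logging (the log path is from the text; the `combined` format is Nginx’s predefined default):

```nginx
http {
    # Buffer up to 64 KB of log lines in memory, gzip the buffer on
    # write, and force a flush at least once per minute even if the
    # buffer is not yet full. The buffer is also flushed when a
    # worker reopens or closes the log.
    access_log /var/logs/nginx-access.log combined buffer=64k gzip flush=1m;
}
```

Note that with gzip enabled the file is written in compressed form, so read it with zcat (or zgrep) rather than cat.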
Worker File Descriptor Limit
Mirror the system file‑descriptor limit in Nginx with:
worker_rlimit_nofile 1000000;

Summary
The author’s tuning experience focuses on removing the dominant bottlenecks: file descriptor limits, connection queues, keep‑alive settings, and log buffering. Many more knobs exist, but the adjustments above are sufficient for typical usage scenarios.
360 Tech Engineering
Official tech channel of 360, building the most professional technology aggregation platform for the brand.