
Configuring Nginx Reverse Proxy for Persistent (Keep‑Alive) Connections and Performance Optimization

This article explains how to configure Nginx as a reverse proxy to maintain long‑lived client and server connections, optimize keep‑alive parameters, and avoid connection churn in high‑QPS scenarios, providing practical code examples and advanced tuning tips.

JD Tech

HTTP/1.1 supports persistent (keep-alive) connections, allowing multiple requests and responses over a single TCP connection. When Nginx acts as a reverse proxy or load balancer, however, client-side long connections are by default turned into short connections to the backend, so specific Nginx settings are required to preserve keep-alive end to end.

Requirements: (i) the client-to-Nginx connection must stay alive (under HTTP/1.1 keep-alive is the default, so the client merely must not send Connection: close); (ii) the Nginx-to-backend connection must also stay alive, which requires explicit configuration on the upstream side.

HTTP configuration: Nginx enables keep-alive for client connections by default. For special cases you can adjust the parameters, for example:

http {
    keepalive_timeout  120s;     # client-side timeout; 0 disables keep-alive
    keepalive_requests 10000;    # max requests per connection (default 100; 1000 since nginx 1.19.10)
}

In high‑QPS environments the default keepalive_requests may be insufficient, leading to many connections being closed and recreated, which increases TIME_WAIT sockets.
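One way to watch for this churn is to count TIME_WAIT sockets over time. A minimal Linux-only sketch (assuming /proc/net/tcp is readable; the helper name count_time_wait is ours, and 06 is the kernel's hex state code for TIME_WAIT):

```python
from pathlib import Path

def count_time_wait(proc_file: str = "/proc/net/tcp") -> int:
    """Count TCP sockets in TIME_WAIT by parsing /proc/net/tcp.

    The 'st' column (4th field) holds the socket state in hex;
    06 means TIME_WAIT. A persistently high count while connection
    rates climb suggests keepalive_requests (or the upstream
    keepalive pool) is sized too small.
    """
    count = 0
    for line in Path(proc_file).read_text().splitlines()[1:]:  # skip header row
        fields = line.split()
        if len(fields) > 3 and fields[3] == "06":
            count += 1
    return count

if Path("/proc/net/tcp").exists():
    print("TIME_WAIT sockets:", count_time_wait())
```

Sampling this counter before and after raising keepalive_requests makes the effect of the change directly visible.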

Upstream configuration: The keepalive directive inside an upstream block sets the maximum number of idle keep-alive connections to upstream servers that are preserved in the cache of each worker process. Example:

http {
    upstream backend {
        server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;
        keepalive 300;   # very important!
    }
    server {
        listen 8080 default_server;
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;              # upstream keep-alive requires HTTP/1.1
            proxy_set_header Connection "";      # clear the default "Connection: close"
        }
    }
}

Consider a service that responds in 100 ms at a target of 10,000 QPS: by Little's Law, roughly 10,000 × 0.1 s = 1,000 concurrent connections are required (each connection can carry about 10 requests per second). Setting keepalive to an appropriate value (e.g., 10-30% of expected peak concurrency, here 100-300) keeps the pool from repeatedly closing and reopening sockets.
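That arithmetic can be sketched as a small sizing helper (the function names and the 30% default fraction are our assumptions, not Nginx guidance):

```python
import math

# Assumption: HTTP/1.1 without pipelining, so each connection serves
# one request at a time and Little's Law applies:
#   concurrency = QPS * latency

def required_connections(qps: int, latency_ms: int) -> int:
    """Concurrent upstream connections needed to sustain the target QPS."""
    return math.ceil(qps * latency_ms / 1000)

def keepalive_pool_size(qps: int, latency_ms: int, fraction: float = 0.3) -> int:
    """Idle-pool size (the `keepalive` directive) as a fraction of peak concurrency."""
    return math.ceil(required_connections(qps, latency_ms) * fraction)

print(required_connections(10_000, 100))   # 1000 concurrent connections
print(keepalive_pool_size(10_000, 100))    # keepalive 300
```

The fraction exists because keepalive bounds idle connections, not total connections; the pool only needs to absorb the troughs between bursts, not the peak itself.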

Scenario analysis: When keepalive is set too low (e.g., 10) and request patterns are uneven, Nginx may close many idle connections and then have to recreate them, wasting resources. Two cases illustrate this: (1) stable responses but bursty requests, and (2) stable requests but bursty responses. Both show that an insufficient keepalive leads to connection churn.

Location configuration: Ensure the proxy uses HTTP/1.1 and clears the Connection header so the upstream link stays alive:

location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";   # clear the default "Connection: close"
}

Advanced method: Use a map to adjust the Connection header based on the client's Upgrade header, which is essential for WebSocket upgrades:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
upstream backend {
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
    keepalive 300;
}
server {
    listen 8080 default_server;
    location / {
        proxy_pass http://backend;
        proxy_connect_timeout 15s;
        proxy_read_timeout    60s;
        proxy_send_timeout    12s;
        proxy_http_version    1.1;
        proxy_set_header Upgrade        $http_upgrade;
        proxy_set_header Connection     $connection_upgrade;
    }
}

The map makes the Connection header follow the value of Upgrade: if the client requests a protocol upgrade (e.g., to WebSocket), the header becomes upgrade; otherwise it is set to close. Note that close turns off upstream keep-alive for ordinary requests; if you need both WebSocket support and keep-alive, map the empty string to "" instead of close.

Notes: proxy_set_header directives are inherited along the order http → server → location, but only when the current level defines none of its own; a single proxy_set_header at a lower level cancels all inherited values, so it is advisable to set all needed headers in the same block to avoid unexpected changes.
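A hypothetical sketch of that pitfall (the backend name and paths are ours): the /b/ location defines its own proxy_set_header, which cancels the header inherited from the server block, so X-Real-IP is no longer sent for /b/ requests.

```nginx
upstream backend {
    server 192.168.0.1:8080;
    keepalive 300;
}
server {
    listen 8080;
    proxy_set_header X-Real-IP $remote_addr;   # set once at server level

    location /a/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        # no proxy_set_header here, so X-Real-IP is inherited
        # (but Connection is not cleared, so upstream keep-alive is lost)
    }

    location /b/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # defining any header here...
        # ...cancels ALL inherited ones: X-Real-IP is no longer sent
    }
}
```

The safe pattern is to repeat every needed proxy_set_header inside each location that defines any of them.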

