
How to Build a High‑Performance gRPC Gateway with OpenResty and Optimize TIME_WAIT

This article explains how to design and implement a secure gRPC gateway using OpenResty, analyzes performance bottlenecks such as excessive TIME_WAIT connections, and provides practical kernel and Nginx/OpenResty tuning steps to achieve long‑lived connections and lower latency.


The author, Tom, describes a project to build a gateway that receives and forwards gRPC requests. The gateway exists for security reasons: external services are not allowed to access internal compute nodes directly, so all traffic must pass through it.

Why a gateway?

The gateway adds a layer that provides reverse proxying, request routing, data forwarding, and monitoring, at the cost of an extra network hop and a potential single point of failure.

Problem abstraction and technology selection

The initial plan was to use Netty, but the available gRPC proto templates differed from the project's definitions, which would have made request parsing costly. Plain Nginx can proxy gRPC, but the solution needed dynamic routing driven by scripting, which is hard to add to Nginx's C core. OpenResty (Nginx plus embedded LuaJIT) was therefore chosen.

OpenResty configuration (code snippet)

<code>http {
    include       mime.types;
    default_type  application/octet-stream;
    access_log    logs/access.log  main;
    sendfile      on;
    keepalive_timeout  120;
    client_max_body_size 3000M;

    server {
        listen 8091 http2;          # gRPC rides on HTTP/2

        location / {
            set $target_url '';

            access_by_lua_block {
                local headers = ngx.req.get_headers()
                local jobid = headers["jobid"]

                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeouts(1000, 1000, 1000)  -- connect/send/read, 1 s each

                local ok, err = red:connect("156.9.1.2", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
                    return ngx.exit(502)
                end

                -- look up the backend address for this job
                local res, err = red:get(jobid)
                if not res or res == ngx.null then
                    ngx.log(ngx.ERR, "no route for jobid: ", err)
                    return ngx.exit(404)
                end
                ngx.var.target_url = res
            }

            grpc_pass grpc://$target_url;
        }
    }
}
</code>
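One subtlety with the snippet above: when grpc_pass receives a variable, Nginx resolves the target at request time, so a resolver directive is needed in the http block if the value stored in Redis can be a hostname rather than an IP address. A minimal sketch (the DNS server address below is a placeholder):

<code>http {
    # required for runtime resolution of variable upstreams;
    # unnecessary if Redis always stores "ip:port" values
    resolver 10.0.0.2 valid=30s;
}
</code>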

Performance testing

Three‑step request flow:

1. The client sends a gRPC request (HTTP/2) to the gateway.
2. The gateway looks up the target server address in Redis.
3. The gateway forwards the request to the target server and returns the response.

Initial tests showed many connections in the TIME_WAIT state (≈27,500) on both the Redis side (port 6379) and the backend server side (port 40928), indicating short‑lived connections.
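The counts above can be reproduced by aggregating socket states; a minimal sketch, assuming a Linux box with iproute2's ss (substitute netstat -ant if ss is unavailable):

<code># Count connections to the Redis port, grouped by TCP state.
ss -tan 'dport = :6379' \
  | awk 'NR > 1 { count[$1]++ } END { for (s in count) print s, count[s] }'
</code>

The same awk aggregation works on any ss or netstat listing; run it against port 40928 to watch the backend side.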

What is TIME_WAIT?

TIME_WAIT is the final state of the side that actively closes a TCP connection during the four‑way termination handshake. It exists so that the final ACK can be retransmitted if it is lost, and so that delayed segments from the old connection cannot be mistaken for segments of a new connection on the same address/port pair; the pair cannot be reused until a timeout of 2×MSL (60 s on Linux) expires. This is why short‑lived connections hurt: with the default ephemeral‑port range of roughly 28,000 ports and a 60 s TIME_WAIT, a client churning connections to a single destination can sustain at most about 28,000 / 60 ≈ 466 new connections per second before running out of ports.

Optimizing excessive TIME_WAIT

Kernel parameters can be tuned, e.g. in /etc/sysctl.conf (applied with sysctl -p):

<code>net.ipv4.tcp_syncookies = 1         # SYN-flood protection
net.ipv4.tcp_tw_reuse = 1           # reuse TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_recycle = 1         # caution: breaks clients behind NAT; removed in Linux 4.12
net.ipv4.tcp_fin_timeout = 30       # shorten the FIN_WAIT_2 timeout
net.ipv4.tcp_max_tw_buckets = 5000  # cap the number of TIME_WAIT sockets
</code>

These knobs only mask the symptom, and tcp_tw_recycle in particular is unsafe for servers that receive traffic through NAT.

A more effective fix is to convert the short‑lived connections into long‑lived, pooled ones.

Redis long‑connection optimization

Use set_keepalive in Lua to pool Redis connections:

<code>local res, err = red:get(jobid)
-- return the connection to a pool instead of closing it:
-- max idle time 10 s (10000 ms), pool size 40
local ok, err = red:set_keepalive(10000, 40)
if not ok then
    ngx.log(ngx.ERR, "failed to set redis keepalive: ", err)
end
</code>

After applying this, the number of Redis connections stayed below 40 (the cosocket pool is maintained per nginx worker process).

Backend server connection optimization

Configure an Nginx upstream with keepalive and dynamic routing via balancer_by_lua_block:

<code>upstream grpcservers {
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- $target_ip / $target_port must be filled in during the
        -- access phase (e.g. from the Redis lookup result)
        local host = ngx.var.target_ip
        local port = ngx.var.target_port
        local ok, err = balancer.set_current_peer(host, port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(500)
        end
    }
    # reuse up to 40 idle connections per worker to the backends
    keepalive 40;
}
</code>
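For completeness, here is a sketch of how this upstream plugs into the earlier server block; the article does not show this part, so the variable names and the "ip:port" format of the Redis value are assumptions. The access phase splits the looked‑up address into host and port, and grpc_pass now targets the named upstream so its keepalive pool is used:

<code>server {
    listen 8091 http2;

    location / {
        set $target_ip '';
        set $target_port '';

        access_by_lua_block {
            -- ... Redis lookup as before; assume res is "ip:port" ...
            local ip, port = res:match("^([^:]+):(%d+)$")
            ngx.var.target_ip = ip
            ngx.var.target_port = port
        }

        # routing through the named upstream enables "keepalive 40"
        grpc_pass grpc://grpcservers;
    }
}
</code>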

Post‑tuning results showed most connections in the ESTABLISHED state and a drastic reduction of TIME_WAIT (e.g., 86 ESTABLISHED vs. 242 TIME_WAIT, down from ≈27,500).

Conclusion

The gRPC gateway was built with OpenResty, preserving Nginx performance while adding dynamic routing via Lua. Through iterative testing and tuning—kernel tweaks, Redis connection pooling, and Nginx keepalive settings—the TCP connection lifecycle was optimized, eliminating most TIME_WAIT sockets and achieving efficient request forwarding.

Tags: backend, performance testing, gRPC, TCP, gateway, OpenResty
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
