Backend Development

Building a Consistent‑Hashing Router with Nginx, Redis Protocol, and Go for High‑Performance Load Balancing

The article describes how the Nitro team designed and implemented a Go‑based consistent‑hashing router that leverages Nginx, the Redis protocol, Envoy sidecars, and Mesos to provide low‑latency, high‑throughput request routing and caching for compute‑intensive services.

High Availability Architecture

At Nitro we needed a professional load balancer, so Mihai Todor and I built a router in Go that Nginx queries over the Redis protocol; Nginx handles the heavy lifting, while the router itself never carries request traffic. The solution has run smoothly in production for the past year.

Why – Our new service sits behind a load‑balancing pool and performs expensive computations, requiring local caching and request affinity so that identical resources are served by the same host when possible. Existing approaches (sticky sessions via cookies, headers, source‑IP, or HTTP redirects) were either unsuitable or required shared state across load balancers.

Architecture – We run a set of front‑end load balancers on Mesos, each backed by Envoy sidecars for service discovery. The front‑ends forward inbound traffic to an Nginx node; Nginx queries the Go router over the Redis protocol to determine the target endpoint and then forwards the request to the appropriate service instance.

Design – We chose a consistent‑hashing ring, implemented in a Go library we call Ringman, backed by Sidecar or HashiCorp's Memberlist for membership. The router uses the Redis protocol (via the Redeo library) to expose two commands, GET and SELECT, allowing Nginx to look up the correct host:port for a given URL.
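To make the design concrete, here is a minimal consistent‑hash ring in Go. This is an illustrative sketch, not the Ringman API: the type names, the use of CRC32, and the virtual‑node count are all choices made for this example. Each node is placed on the ring at several virtual points, and a key is owned by the first point clockwise from its hash.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
	"strconv"
)

// Ring is a minimal consistent-hash ring. Each node ("host:port") is
// hashed onto the ring at several virtual points so keys spread evenly
// and adding or removing a node only remaps a small fraction of keys.
type Ring struct {
	points map[uint32]string // hash point -> node
	sorted []uint32          // sorted hash points
	vnodes int               // virtual points per real node
}

func NewRing(vnodes int) *Ring {
	return &Ring{points: make(map[uint32]string), vnodes: vnodes}
}

func (r *Ring) AddNode(node string) {
	for i := 0; i < r.vnodes; i++ {
		h := crc32.ChecksumIEEE([]byte(node + "#" + strconv.Itoa(i)))
		r.points[h] = node
		r.sorted = append(r.sorted, h)
	}
	sort.Slice(r.sorted, func(i, j int) bool { return r.sorted[i] < r.sorted[j] })
}

// GetNode returns the node that owns key: the first ring point at or
// after the key's hash, wrapping around to the start if necessary.
func (r *Ring) GetNode(key string) string {
	if len(r.sorted) == 0 {
		return ""
	}
	h := crc32.ChecksumIEEE([]byte(key))
	i := sort.Search(len(r.sorted), func(i int) bool { return r.sorted[i] >= h })
	if i == len(r.sorted) {
		i = 0 // wrap around the ring
	}
	return r.points[r.sorted[i]]
}

func main() {
	ring := NewRing(64)
	ring.AddNode("10.10.10.5:23453")
	ring.AddNode("10.10.10.6:31002")
	// The same URL always maps to the same endpoint, which is exactly
	// the request-affinity property the caching layer needs.
	fmt.Println(ring.GetNode("/documents/abc123") == ring.GetNode("/documents/abc123")) // prints true
}
```

The determinism shown in `main` is the whole point: as long as membership is stable, every front‑end resolves a given URL to the same backend, so each backend's local cache stays hot for its share of the keyspace.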

Implementation – The Ringman library provides the hash ring and membership management. We embed it in a Go service that speaks Redis commands; Nginx is configured (see the included images) to issue a Redis GET with the request URL as the key and receives back a string like 10.10.10.5:23453 identifying the target endpoint.
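One way to wire this lookup into Nginx is with OpenResty's lua-resty-redis, since the Go router speaks enough of the Redis protocol to answer a plain GET. The sketch below is an assumption for illustration, not the Nitro team's actual configuration: the router address (127.0.0.1:6379), the port, and the timeout values are all placeholders.

```nginx
# Illustrative OpenResty config: ask the Go router (speaking the Redis
# protocol) which endpoint owns this URL, then proxy to that endpoint.
server {
    listen 8080;

    location / {
        set $upstream_endpoint "";

        access_by_lua_block {
            local redis = require "resty.redis"
            local red = redis:new()
            red:set_timeout(50)  -- ms; the lookup itself is sub-millisecond

            -- Router address is a placeholder for this sketch.
            local ok, err = red:connect("127.0.0.1", 6379)
            if not ok then
                ngx.exit(ngx.HTTP_BAD_GATEWAY)
            end

            -- GET with the request URL as the key returns "host:port".
            local endpoint, err = red:get(ngx.var.request_uri)
            if not endpoint or endpoint == ngx.null then
                ngx.exit(ngx.HTTP_BAD_GATEWAY)
            end
            ngx.var.upstream_endpoint = endpoint

            -- Keep connections to the router pooled between requests.
            red:set_keepalive(10000, 32)
        }

        proxy_pass http://$upstream_endpoint;
    }
}
```

Because the router only answers tiny GET lookups and never proxies the request body, all bulk traffic flows through Nginx, which matches the division of labor described above.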

Performance – In our own environment the round‑trip from Nginx to the Go router over Redis averages 0.2‑0.3 ms, negligible compared with the ~70 ms median upstream service latency. The system has been stable for over a year with consistent performance.

Conclusion – The solution, though somewhat hacky, has become a core part of our infrastructure. The components (Ringman, Redeo, the Nginx configuration) are reusable for similar use cases, and we welcome contributions to add K8s or Mesos support.

Tags: load balancing, Go, Nginx, Envoy, Mesos, consistent hashing, Redis protocol