Design and Implementation of a Spring Cloud Gateway Sidecar for 58 Anjuke Real Estate Platform
This article details the background, selection, sidecar‑based deployment, custom features, performance testing, and production rollout of a Spring Cloud Gateway API gateway used to unify authentication, anti‑scraping, and routing logic across Node.js, Java, and PHP services in the 58 Anjuke second‑hand housing business line.
After merging the technology stacks of 58 Anjuke's second‑hand housing line, many legacy pages needed to be rebuilt into a unified system. The web front‑end uses Node.js, the mobile back‑end API uses Java, and some old pages still run on PHP, leading to duplicated implementations of common gateway functions such as signature verification, ticket validation, anti‑scraping, parameter handling, and city domain parsing.
To avoid three separate implementations, the team evaluated three Java‑based API‑gateway solutions: a custom Netty implementation, Spring Cloud Zuul, and Spring Cloud Gateway. Considering the need for high throughput, low latency, and asynchronous I/O, Spring Cloud Gateway—built on Project Reactor and Spring WebFlux—was chosen.
The sidecar pattern from service‑mesh architectures was adopted: an API‑gateway process runs inside each container alongside the existing web process, intercepting inbound HTTP traffic to perform authentication, logging, and service discovery before forwarding requests to the co‑located web process.
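In this sidecar layout, the gateway listens on the container's externally exposed port and proxies everything to the local web process. A minimal Spring Cloud Gateway route configuration might look like the sketch below; the ports and route id are illustrative assumptions, not values from the article:

```yaml
# Hypothetical application.yml for the sidecar gateway (ports and ids are assumptions).
server:
  port: 8080                         # port the container exposes externally
spring:
  cloud:
    gateway:
      routes:
        - id: local-web              # illustrative route id
          uri: http://127.0.0.1:3000 # co-located Node.js/Java/PHP web process
          predicates:
            - Path=/**               # intercept all inbound paths
```

Because the upstream is always localhost, no external service registry is needed for the basic forwarding path; cross‑cutting filters (auth, logging, anti‑scraping) attach to this single route.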
Deployment leveraged the cloud platform’s startup hooks to install the gateway package and launch the process. The gateway runs on JDK 11 with the G1 collector and a 512 MB heap to keep its resource footprint small, and it is baked into a base image that business clusters can inherit directly.
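A startup hook along these lines would launch the gateway with the JVM settings described above; the install path and jar name are hypothetical, but the heap and GC flags match the article:

```shell
#!/bin/sh
# Hypothetical container startup hook (paths and jar name are assumptions).
# JDK 11, G1 collector, fixed 512 MB heap to minimize the sidecar's footprint.
java -Xms512m -Xmx512m -XX:+UseG1GC \
     -jar /opt/gateway/gateway.jar \
     --spring.config.location=/opt/gateway/application.yml &
```

Pinning `-Xms` equal to `-Xmx` avoids heap resizing in the long‑running sidecar process.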
Four core features were implemented: HTTP→HTTPS redirection, user ticket validation, city domain/path parsing, and anti‑scraping integration. These features support both 58.com and Anjuke.com domains and are configurable via templated configuration files.
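Of the four features, city domain parsing is the easiest to sketch in isolation. The class and method names below are illustrative, not from the article, and the city table stands in for the templated configuration files mentioned above:

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of city-domain parsing, one of the four gateway features.
// In the real gateway the city table would come from templated config and cover
// both 58.com and anjuke.com domains.
public class CityResolver {
    private static final Map<String, String> CITY_IDS = Map.of(
            "beijing", "1",
            "shanghai", "2",
            "guangzhou", "3");

    /** Extracts a known city slug from hosts like "beijing.anjuke.com". */
    public static Optional<String> resolve(String host) {
        int dot = host.indexOf('.');
        if (dot <= 0) {
            return Optional.empty();        // no subdomain to inspect
        }
        String sub = host.substring(0, dot);
        return CITY_IDS.containsKey(sub) ? Optional.of(sub) : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(resolve("beijing.anjuke.com").orElse("unknown")); // beijing
        System.out.println(resolve("www.anjuke.com").orElse("unknown"));     // unknown
    }
}
```

In the gateway itself this logic would live in a filter that rewrites or annotates the request before forwarding; the standalone class here only shows the parsing rule.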
Performance testing with ApacheBench (ab) on a 4‑core, 4 GB container showed the gateway sustaining up to 2,200 QPS with P99 latency under 35 ms. GC overhead remained low: young‑generation pauses of 2‑5 ms and no major GC events across a million‑request run.
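The article does not give the exact benchmark invocation, but an ab run of roughly this shape would produce the requests‑per‑second and latency‑percentile figures cited; the URL, request count, and concurrency are assumptions:

```shell
# Hypothetical ApacheBench invocation (URL, -n, and -c values are assumptions).
# -n sets total requests, -c concurrent clients, -k enables HTTP keep-alive;
# the report includes requests/sec and the percentile table behind the P99 figure.
ab -n 1000000 -c 200 -k http://127.0.0.1:8080/house/detail/12345
```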
In production, the gateway was first rolled out to a low‑traffic cluster and then to the core mobile‑web detail page. After migration, page response times improved by over 20 % on average, with P95 latency gains of 5‑10 %, and CPU usage increased by only ~5 % while memory grew less than 1 GB.
Future work focuses on further reducing logging overhead, enhancing gateway process monitoring and alerting, and making the component more customizable and extensible for other business lines.
58 Tech
Official tech channel of 58, a platform for tech innovation, sharing, and communication.