
Smooth Web Service Deployment with Nginx dyups: Dynamic Upstream Management

This article explains how Zhuanzhuan uses Nginx load balancing, the dyups module, and a custom deployment workflow to achieve zero-downtime service upgrades, dynamic scaling, and graceful reloads without losing requests.

Zhuanzhuan Tech

Background

We use Nginx as a load balancer; all business traffic is routed through an Nginx proxy cluster, which forwards requests to backend service clusters according to predefined routing and round‑robin policies.

Nginx's health checks and retry mechanisms already keep responses normal during service updates, but to further improve availability and avoid losing any request, we optimized the Nginx proxy layer and the release system for smoother service changes.

This article shares how Zhuanzhuan achieves smooth web-service changes.

Nginx Configuration Management

Zhuanzhuan Nginx proxy-layer cluster architecture

Nginx configuration is divided into four modules based on update frequency:

Basic settings and the event and http blocks belong to the low-frequency main-conf.

Certificates change extremely rarely.

server and location blocks belong to the relatively low-frequency server-conf.

upstream configuration changes at high frequency.
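The four-way split above can be sketched as an include layout in nginx.conf. The directory names here are illustrative, not Zhuanzhuan's actual paths:

```nginx
# main-conf (low frequency): process model, events, http basics
worker_processes auto;

events {
    worker_connections 10240;
}

http {
    # certificates: extremely low update frequency
    include conf.d/certs/*.conf;

    # server-conf (relatively low frequency): server and location blocks
    include conf.d/servers/*.conf;

    # upstream configuration: high update frequency
    include conf.d/upstreams/*.conf;
}
```

Splitting by update frequency means a change to one module only touches its own files, which keeps diffs small and makes the publishing flow below easier to test and roll back.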

General process for publishing Nginx configuration changes

Modify Nginx configuration on the management platform.

Start a Docker container to pull the corresponding configuration.

Test the configuration to ensure it meets expectations.

Publish the configuration to the online cluster.

The online Nginx cluster performs an nginx reload to apply the new configuration.

Nginx consists of a master process and worker processes. During a reload, the old workers enter a “shutting down” state and stop accepting new connections while new workers take over incoming traffic; the shutting-down workers exit only after finishing their existing connections, so in-flight requests are unaffected at the moment of reload.

The first three configuration types can use this publishing flow.

For upstream configuration, however, frequent IP changes, service upgrades, and scaling operations each trigger a reload; the resulting pile-up of shutting-down workers can cause load spikes, reduced capacity, unstable response times, or even a cluster-wide avalanche.

Dynamic Up/Down Using Nginx dyups Module

To address the above, we need a tool that enables service up/down and dynamic scaling without reloading or restarting Nginx, while guaranteeing no request loss.
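The ngx_http_dyups_module fills this role by exposing an HTTP interface for rewriting upstreams in memory, enabled with its dyups_interface directive. A minimal sketch, with the port and bind address chosen for illustration:

```nginx
# Internal-only management port for the dyups REST API.
server {
    listen 127.0.0.1:8081;

    location / {
        # ngx_http_dyups_module directive: enables GET/POST/DELETE
        # requests on /upstream/{name} to query and replace upstreams
        # in Nginx memory without a reload.
        dyups_interface;
    }
}
```

With this in place, GET /detail lists all upstreams and POST /upstream/{name} replaces one upstream's server list at runtime; this interface underpins the online/offline APIs described below.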

Service Routing Strategy Deployment & Update

When a new service is launched or its routing logic changes, operations engineers add a new server block in Nginx and associate it with the appropriate upstream.

After the service is deployed and started, the release system calls Nginx’s “online” API to add or update the service IP in the upstream, making the service reachable.

Service Update

If a service needs to be updated, the following steps are performed:

The release system marks the service as offline, invoking Nginx’s “offline” API to set the target server IP to down.

After Nginx removes the node from rotation, it calls back to the release system, which then updates the service instance.

Once the update finishes, the release system brings the service back online via Nginx’s “online” API, restoring the IP.
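The offline/update/online cycle can be sketched as a small Python client against the dyups interface. The address, upstream name, and helper names are assumptions for illustration; note that per-node changes are expressed as full-upstream updates, since dyups replaces whole upstreams (a point revisited in the implementation section below):

```python
import urllib.request

DYUPS = "http://127.0.0.1:8081"  # assumed dyups_interface address


def render_upstream(servers):
    """Render an nginx-style upstream body from (addr, is_down) pairs."""
    return "\n".join(
        f"server {addr}{' down' if down else ''};" for addr, down in servers
    )


def publish_upstream(name, servers):
    """POST the full server list for one upstream to the dyups interface."""
    body = render_upstream(servers).encode()
    req = urllib.request.Request(
        f"{DYUPS}/upstream/{name}", data=body, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


def set_node(servers, addr, down):
    """Mark one node up or down, leaving the rest of the list unchanged."""
    return [(a, down if a == addr else d) for a, d in servers]


# Offline -> update the instance -> online, as full-upstream updates:
servers = [("10.0.0.1:8080", False), ("10.0.0.2:8080", False)]
offline = set_node(servers, "10.0.0.1:8080", True)   # take node out
online = set_node(offline, "10.0.0.1:8080", False)   # restore node
```

In the real flow, publish_upstream(name, offline) would be called before the instance is replaced and publish_upstream(name, online) once it passes health checks.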

Service Runtime

During normal operation a health‑checking service monitors all service nodes; if a problem is detected, the node status is set to down.

Encountered Issues

The ngx_http_dyups_module provides methods to add or delete entire upstreams, but our release system works on per‑node up/down, requiring additional handling.

Dynamic changes made by the dyups module are stored only in Nginx memory; a configuration reload would lose this data.

When publishing server-conf, the in-memory upstream state must be synchronized to the configuration file before the reload.

Practical Implementation

Convert per‑node up/down requests from the release system into full upstream updates via a queue.

Use locks and flush operations to ensure the in‑memory upstream matches the file; during an Nginx reload, the latest upstream data is loaded from the file.

We refactored the dyups module’s API to support JSON and a web UI for managing upstream node weights. We also wrapped Nginx reload to synchronize upstream memory and file before reloading.
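The queue-and-lock approach above can be sketched in Python. UpstreamSyncer and its callbacks are hypothetical names: publish stands in for the dyups POST that updates Nginx memory, and flush_file for the write that keeps the upstream file a wrapped reload would later load:

```python
import queue
import threading


class UpstreamSyncer:
    """Serialize per-node up/down events into full-upstream updates and
    keep an on-disk copy in sync with what dyups holds in memory."""

    def __init__(self, publish, flush_file):
        self.state = {}                 # upstream name -> {addr: is_down}
        self.lock = threading.Lock()
        self.events = queue.Queue()     # queued (upstream, addr, down) events
        self.publish = publish          # pushes full upstream to dyups memory
        self.flush_file = flush_file    # writes the file read at reload time

    def submit(self, upstream, addr, down):
        """Release system reports one node going up or down."""
        self.events.put((upstream, addr, down))

    def drain(self):
        """Apply queued per-node events as full-upstream updates."""
        while not self.events.empty():
            upstream, addr, down = self.events.get()
            with self.lock:
                nodes = self.state.setdefault(upstream, {})
                nodes[addr] = down
                self.publish(upstream, dict(nodes))     # memory
                self.flush_file(upstream, dict(nodes))  # file


# Demo: two node events for the same (hypothetical) upstream.
published = []
syncer = UpstreamSyncer(
    publish=lambda name, nodes: published.append((name, nodes)),
    flush_file=lambda name, nodes: None,
)
syncer.submit("app_pool", "10.0.0.1:8080", True)   # node offline for update
syncer.submit("app_pool", "10.0.0.1:8080", False)  # node back online
syncer.drain()
```

In production the drain loop would run in a dedicated worker thread, and the reload wrapper would take the same lock while snapshotting state to the upstream file, so a reload never races a half-applied update.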

Conclusion

Nginx is a web server that can also serve as a load balancer and reverse proxy; it is widely used for HTTP traffic forwarding, unified traffic scheduling, business load balancing, and high availability.

The ngx_http_dyups_module is a powerful plugin that enables dynamic configuration changes without restarting Nginx.

We hope this article provides useful references for readers handling HTTP service up/down processes.

Any questions can be discussed via the public account’s chat window.

Tags: operations, load balancing, Nginx, service scaling, zero-downtime deployment, dynamic upstream, dyups
Written by

Zhuanzhuan Tech

A platform for Zhuanzhuan R&D and industry peers to learn and exchange technology, regularly sharing frontline experience and cutting‑edge topics. We welcome practical discussions and sharing; contact waterystone with any questions.
