
Nginx Knowledge Map: Reverse Proxy, Load Balancing, Static/Dynamic Separation, High Availability and Practical Configuration

This article provides a comprehensive guide to Nginx, covering its architecture, forward and reverse proxy concepts, load‑balancing strategies, static‑dynamic separation, installation commands, configuration file structure, practical reverse‑proxy and load‑balancing examples, and high‑availability setup using Keepalived.


Nginx is a high-performance HTTP server and reverse proxy. It consumes little memory and handles massive numbers of concurrent connections, commonly cited at up to 50,000.

Proxy Types : A forward proxy sits in front of clients: internal users who cannot reach the Internet directly send their requests through the proxy server. A reverse proxy sits in front of servers: it hides the real server IP, so clients reach the service without configuring any proxy on their side.

Load Balancing : Instead of scaling up a single machine, add more servers and let Nginx distribute incoming requests across the back-end pool, so capacity grows with traffic.

Static/Dynamic Separation : Static files are served directly by Nginx while dynamic requests are forwarded to application servers (e.g., Tomcat), reducing load on a single server.
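A minimal sketch of static/dynamic separation, with illustrative paths and addresses: static assets are served straight from disk by Nginx, while everything else is proxied to Tomcat:

```nginx
server {
    listen      80;
    server_name example.com;          # illustrative

    # static files served directly by Nginx
    location /static/ {
        root    /data;                # /static/logo.png maps to /data/static/logo.png
        expires 3d;                   # let browsers cache static assets
    }

    # dynamic requests forwarded to the application server
    location / {
        proxy_pass http://127.0.0.1:8080;   # Tomcat
    }
}
```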

Installation and Basic Commands on Linux :

After installing Nginx (from source or via a package manager), the following commands are run from Nginx's sbin directory:

```shell
./nginx -v        # print the Nginx version
./nginx           # start Nginx
./nginx -s stop   # stop immediately
./nginx -s quit   # stop gracefully, finishing in-flight requests
./nginx -s reload # reload the configuration without downtime
```

Configuration File Structure :

① Global block – sets parameters affecting the whole server (e.g., worker_processes).
② events block – configures network connection handling (e.g., worker_connections).
③ http block – the most frequently edited part; contains reverse-proxy, load-balancing, and other directives.

Location Directive Syntax :

```nginx
location [ = | ~ | ~* | ^~ ] uri { }
```
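The optional modifier controls how the URI is matched: `=` requires an exact match, `~` is a case-sensitive regex, `~*` is a case-insensitive regex, and `^~` is a prefix match that skips regex checking. An illustrative sketch (paths are examples, not from the article):

```nginx
location = /ping     { return 200 "pong"; }   # matches only exactly /ping
location ^~ /assets/ { root /var/www; }       # prefix match; regex locations are not consulted
location ~* \.(jpg|png|gif)$ { expires 7d; }  # case-insensitive regex on image extensions
```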

Reverse Proxy Practical Example : Configure Nginx to listen on port 80 and forward requests for www.123.com to a Tomcat instance on port 8080; then extend the setup so that paths containing /edu/ are routed to port 8080 and paths containing /vod/ to port 8081, using regex-based location blocks.
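A sketch of that two-stage setup in one server block (the server name and loopback addresses are illustrative); regex locations take precedence over the `/` prefix fallback:

```nginx
server {
    listen      80;
    server_name www.123.com;

    # default: forward all requests to Tomcat on 8080
    location / {
        proxy_pass http://127.0.0.1:8080;
    }

    # regex-based routing: /edu/ -> 8080, /vod/ -> 8081
    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080;
    }
    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8081;
    }
}
```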

Load Balancing Practical Example : Modify nginx.conf to define an upstream pool and use round-robin (the default), weight, ip_hash, or fair (a third-party module) to distribute traffic among multiple Tomcat servers.
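A sketch of such an upstream pool (the pool name and addresses are illustrative; note that `fair` requires the third-party nginx-upstream-fair module):

```nginx
http {
    upstream myserver {
        # round-robin is the default; uncomment ip_hash to pin each client IP to one backend
        # ip_hash;
        server 192.168.25.133:8080 weight=1;
        server 192.168.25.133:8081 weight=2;  # receives twice as many requests
    }

    server {
        listen      80;
        server_name 192.168.25.133;
        location / {
            proxy_pass http://myserver;
        }
    }
}
```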

High Availability with Keepalived :

Install Keepalived, configure a virtual IP (e.g., 192.168.25.50), and define a VRRP instance that monitors Nginx health via a script. The master node holds the virtual IP; if it fails, the backup takes over, ensuring continuous service.

```shell
# yum install keepalived -y
# systemctl start keepalived.service
```

The configuration lives in /etc/keepalived/keepalived.conf. As written in the original, the health-check script was defined but never attached to the VRRP instance; a `track_script` block is needed for it to take effect. This node is a backup with priority 90; the master uses `state MASTER` and a higher priority:

```
global_defs {
    notification_email { [email protected] }
    smtp_server 192.168.25.147
    router_id LVS_DEVEL                     # unique identifier for this node
}

vrrp_script chk_nginx {
    script "/usr/local/src/nginx_check.sh"  # health-check script
    interval 2                              # run the check every 2 seconds
    weight 2                                # adjust node priority by this amount based on the check result
}

vrrp_instance VI_1 {
    state BACKUP                            # MASTER on the primary node
    interface ens33                         # NIC that carries the virtual IP
    virtual_router_id 51                    # must match on master and backup
    priority 90                             # master should use a higher value, e.g. 100
    advert_int 1                            # VRRP advertisement interval (seconds)
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx                           # attach the health check defined above
    }
    virtual_ipaddress { 192.168.25.50 }
}
```
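The article references /usr/local/src/nginx_check.sh without showing it. A common sketch of such a check script (illustrative, assuming the Nginx binary lives at /usr/local/nginx/sbin/nginx): if Nginx has died, try to restart it once; if that fails, stop Keepalived so the virtual IP fails over to the backup node:

```shell
#!/bin/bash
# Health check for Nginx, invoked by keepalived every `interval` seconds.
if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
    /usr/local/nginx/sbin/nginx      # attempt to restart Nginx once
    sleep 2
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        killall keepalived           # give up the VIP; the backup takes over
    fi
fi
```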

Conclusion : Nginx uses a master process that manages multiple worker processes; the worker count should match the number of CPU cores. This architecture enables hot deployment (reloading configuration without dropping connections), and the failure of one worker does not affect the others.

Tags: High Availability, Load Balancing, Nginx, Reverse Proxy, Keepalived, Static/Dynamic Separation
Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, plus architecture adjustments using internet technologies. We welcome idea-driven, sharing-oriented architects to exchange and learn together.