
Master Nginx Load Balancing: Step‑by‑Step Configuration Guide

This article explains how to configure Nginx as a load balancer for web applications, covering upstream and proxy_pass definitions, the three built‑in balancing methods, weight and connection settings, fail‑over options, and practical code examples for both HTTP and HTTPS deployments.

Raymond Ops

You can place Nginx in front of a web application as a load balancer.

For example, if your enterprise application runs on Apache (or Tomcat), you can deploy a second instance of the application on different servers and let Nginx balance traffic between the two Apache/Tomcat servers.

If you are new to Nginx, it helps to first understand how Nginx differs from Apache and how the Nginx architecture works.

Nginx supports three types of load balancing:

round-robin – the default algorithm that cycles through servers.

least-connected – sends requests to the server with the fewest active connections.

ip-hash – uses the client’s IP address to consistently route requests to the same server.

1. Define upstream and proxy_pass in the Nginx config file

For load balancing you need to add two directives: upstream and proxy_pass.

Upstream: give a unique name (e.g., the application name) and list all servers that Nginx will balance.

<code>upstream crmdev {
    server 192.168.101.1;
    server 192.168.101.2;
}</code>

If servers listen on non‑default ports, specify the port number:

<code>upstream crmdev {
    server 192.168.101.1:8080;
    server 192.168.101.2:8080;
}</code>

proxy_pass: reference the upstream name in a location block inside a server block.

<code>server {
    listen 80;
    location / {
        proxy_pass http://crmdev;
    }
}</code>

Note: In this example Nginx itself listens on port 80.

You can also use proxy_pass to reverse‑proxy Apache/PHP.
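
When Nginx proxies a request, the backend sees the connection coming from Nginx itself rather than from the client. It is common to forward the original host and client address with proxy_set_header; a sketch (the particular header choices here are conventional defaults, not part of the original example):

<code>server {
    listen 80;
    location / {
        proxy_pass http://crmdev;
        # Pass the original Host header and client IP to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}</code>

The backend application can then log or act on the real client address instead of the load balancer's.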

2. Define upstream and proxy_pass in the default Nginx configuration

Usually the definitions belong in the http context:

<code>http {
    upstream crmdev {
        server 192.168.101.1:8080;
        server 192.168.101.2:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://crmdev;
        }
    }
}</code>

If you edit the supplied default.conf, you do not need the outer http block because it is already present.

Note that there is no https context in Nginx: HTTPS is enabled by adding listen 443 ssl and certificate directives to the server block, and https:// belongs in proxy_pass only when the upstream servers themselves speak TLS. If you wrap the contents of default.conf in another http block, Nginx fails to start with an error like:

<code>Starting nginx: nginx: [emerg] "http" directive is not allowed here in /etc/nginx/conf.d/default.conf:1</code>
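
To terminate HTTPS on the load balancer itself, the usual approach is a server block that listens on port 443 with ssl enabled. A minimal sketch, assuming the server name and certificate paths shown (substitute your own):

<code>server {
    listen 443 ssl;
    server_name crmdev.example.com;                   # hypothetical name
    ssl_certificate     /etc/nginx/ssl/crmdev.crt;    # assumed path
    ssl_certificate_key /etc/nginx/ssl/crmdev.key;    # assumed path
    location / {
        # The backends here still speak plain HTTP; use https://
        # only if the upstream servers themselves serve TLS
        proxy_pass http://crmdev;
    }
}</code>
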

3. Set the least‑connected algorithm for minimal connections

Add the least_conn directive at the top of the upstream block:

<code>upstream crmdev {
    least_conn;
    server 192.168.101.1:8080;
    server 192.168.101.2:8080;
}</code>

If several servers have similar connection counts, Nginx falls back to weighted round‑robin.

4. Configure a sticky (session‑persistent) algorithm

The round‑robin and least‑connected methods do not guarantee that subsequent requests from the same client go to the same server. For session‑dependent applications, use

ip_hash

:

<code>upstream crmdev {
    ip_hash;
    server 192.168.101.1:8080;
    server 192.168.101.2:8080;
}</code>

For IPv4 the hash uses the first three octets; for IPv6 it uses the full address.
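
If hashing on the client IP is too coarse (for example, many clients sitting behind one NAT share three octets), the generic hash directive lets you hash an arbitrary key instead. A sketch using the request URI as the key; the consistent parameter enables ketama consistent hashing so that adding or removing a server remaps only a few keys:

<code>upstream crmdev {
    hash $request_uri consistent;
    server 192.168.101.1:8080;
    server 192.168.101.2:8080;
}</code>
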

5. Assign weights to individual servers

You can give specific servers a higher weight (default weight is 1). Example with five servers where the third has weight 2:

<code>upstream crmdev {
    server 192.168.101.1:8080;
    server 192.168.101.2:8080;
    server 192.168.101.3:8080 weight=2;
    server 192.168.101.4:8080;
    server 192.168.101.5:8080;
}</code>

This means that out of every six new requests, two will be sent to server 3, while the others receive one each, allowing more load on a more powerful server. Weights can also be combined with least_conn and ip_hash.

6. Configure per‑server timeout options – max_fails and fail_timeout

You can set max_fails and fail_timeout for each server:

<code>upstream crmdev {
    server 192.168.101.1:8080 max_fails=3 fail_timeout=30s;
    server 192.168.101.2:8080;
    server 192.168.101.3:8080 weight=2;
    server 192.168.101.4:8080;
    server 192.168.101.5:8080;
}</code>

The default fail_timeout is 10 seconds; the example raises it to 30. The default max_fails is 1; the example raises it to 3. Together they mean: if three attempts to reach the server fail within a 30‑second window, Nginx marks that server unavailable for the next 30 seconds.
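
What counts as a failed attempt is governed by proxy_next_upstream in the proxying server block. A sketch that also treats timeouts and common 5xx responses as failures (the exact condition list is a design choice, not a requirement):

<code>server {
    listen 80;
    location / {
        proxy_pass http://crmdev;
        # Retry the next upstream on connect errors, timeouts, and these
        # 5xx replies; such attempts also count toward max_fails
        proxy_next_upstream error timeout http_500 http_502 http_503;
    }
}</code>
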

7. Add a backup server to the Nginx load‑balancing pool

Mark a server as a backup by adding the backup keyword:

<code>upstream crmdev {
    server 192.168.101.1:8080 max_fails=3 fail_timeout=30s;
    server 192.168.101.2:8080;
    server 192.168.101.3:8080 weight=2;
    server 192.168.101.4:8080;
    server 192.168.101.5:8080 backup;
}</code>

The fifth server receives traffic only when all four primary servers are unavailable.
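
Related to backup is the down parameter, which takes a server out of rotation entirely, for example during maintenance; a minimal sketch:

<code>upstream crmdev {
    server 192.168.101.1:8080;
    server 192.168.101.2:8080 down;   # removed from rotation for maintenance
}</code>

Unlike a failed server, a server marked down is never retried until the marker is removed and the configuration is reloaded.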
