
High‑Concurrency Ticket Spike System Architecture and Implementation with Nginx, Redis, and Go

The article analyzes the extreme‑traffic challenges of China’s 12306 ticket‑spike service, presents a layered load‑balancing architecture using OSPF, LVS, and Nginx weighted round‑robin, and demonstrates a Go‑based prototype that combines local in‑memory stock deduction with Redis‑backed global stock control to achieve stable, high‑throughput ticket purchasing without overselling.


During peak periods such as Chinese New Year, the 12306 train‑ticket service experiences millions of simultaneous requests, making it one of the world’s most demanding flash‑sale systems.

The author studies the 12306 backend architecture and shares insights, then builds a simplified example that can handle one million users competing for ten thousand tickets while keeping the service stable.

Load‑Balancing Overview

The request flow passes through three layers of load balancing. Three common techniques are introduced:

OSPF (Open Shortest Path First): an interior gateway protocol that builds a link‑state database and computes shortest‑path trees, allowing up to six equal‑cost paths for load distribution.

LVS (Linux Virtual Server): an IP‑level load‑balancing cluster that forwards traffic to healthy backend servers and masks failures.

Nginx: a high‑performance HTTP reverse proxy that supports round‑robin, weighted round‑robin, and IP‑hash scheduling.

For the demo, Nginx weighted round‑robin is configured as follows:

# Load-balancing configuration
    upstream load_rule {
        server 127.0.0.1:3001 weight=1;
        server 127.0.0.1:3002 weight=2;
        server 127.0.0.1:3003 weight=3;
        server 127.0.0.1:3004 weight=4;
    }
    ...
    server {
        listen 80;
        server_name load_balance.com www.load_balance.com;
        location / {
            proxy_pass http://load_rule;
        }
    }

Four local HTTP services are started on ports 3001‑3004, each with a different weight.

Local Stock Deduction (Go)

A simple Go program listens on a port and records each request in ./stat.log:

package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    http.HandleFunc("/buy/ticket", handleReq)
    log.Fatal(http.ListenAndServe(":3001", nil))
}

// handleReq records which backend port served the request.
func handleReq(w http.ResponseWriter, r *http.Request) {
    writeLog("handle in port:3001", "./stat.log")
}

// writeLog appends msg to the file at logPath, creating it if needed.
func writeLog(msg string, logPath string) {
    fd, err := os.OpenFile(logPath, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        log.Println("open log file:", err)
        return
    }
    defer fd.Close()
    fmt.Fprintf(fd, "%s\r\n", msg)
}

Requests are stress‑tested with ApacheBench:

ab -n 1000 -c 100 http://www.load_balance.com/buy/ticket

The log shows that ports 3001‑3004 receive 100, 200, 300, and 400 requests respectively, matching the configured weights.
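One way to verify that distribution is to tally log lines per port. A minimal illustration using a synthetic log (the real stat.log is produced by writeLog in the service above):

```shell
# Write a few synthetic log lines, then count requests handled per port.
printf 'handle in port:3001\nhandle in port:3002\nhandle in port:3002\n' > stat.log
for port in 3001 3002; do
    printf 'port %s: %s\n' "$port" "$(grep -c "port:$port" stat.log)"
done
```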

Distributed Stock Management

To avoid database bottlenecks, the system uses a two‑stage stock deduction strategy:

Pre‑allocate a portion of tickets to each machine’s local memory (local stock).

Maintain a global stock counter in Redis (a hash with fields ticket_total_nums and ticket_sold_nums).
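The request handler later calls localSpike.LocalDeductionStock() without showing its definition. A minimal sketch of what the local-stock structure might look like — the field and method names here are assumptions chosen to match the handler's usage:

```go
package main

import "fmt"

// LocalSpike holds this machine's pre-allocated share of tickets.
// The exact fields are assumptions based on the handler's usage.
type LocalSpike struct {
	LocalInStock     int64 // tickets allocated to this machine
	LocalSalesVolume int64 // tickets already sold locally
}

// LocalDeductionStock sells one ticket from local memory and reports
// whether local stock still covered it. Callers serialize access (the
// handler uses a buffered channel as a lock), so no mutex is needed here.
func (spike *LocalSpike) LocalDeductionStock() bool {
	spike.LocalSalesVolume = spike.LocalSalesVolume + 1
	return spike.LocalSalesVolume <= spike.LocalInStock
}

func main() {
	spike := &LocalSpike{LocalInStock: 2}
	fmt.Println(spike.LocalDeductionStock()) // true
	fmt.Println(spike.LocalDeductionStock()) // true
	fmt.Println(spike.LocalDeductionStock()) // false: local stock exhausted
}
```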

If a local deduction succeeds, the service also updates the global counter in Redis atomically (incrementing ticket_sold_nums) using a Lua script:

const LuaScript = `
    local ticket_key = KEYS[1]
    local ticket_total_key = ARGV[1]
    local ticket_sold_key = ARGV[2]
    local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
    local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
    -- sell only while the sold count is strictly below the total;
    -- a >= comparison here would oversell by one ticket
    if (ticket_total_nums > ticket_sold_nums) then
        return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
    end
    return 0
`

// RemoteDeductionStock runs the Lua script atomically on Redis and reports
// whether a ticket was secured (a zero result means global stock is exhausted).
func (RemoteSpikeKeys *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
    lua := redis.NewScript(1, LuaScript)
    result, err := redis.Int(lua.Do(conn, RemoteSpikeKeys.SpikeOrderHashKey, RemoteSpikeKeys.TotalInventoryKey, RemoteSpikeKeys.QuantityOfOrderKey))
    if err != nil {
        return false
    }
    return result != 0
}

Redis is initialized with the total ticket count:

hmset ticket_hash_key "ticket_total_nums" 10000 "ticket_sold_nums" 0
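The handler below fetches connections with redisPool.Get(), but the pool itself is never shown. A plausible initialization with the redigo client (github.com/gomodule/redigo, the library implied by redis.NewScript above) might look like this; the sizes and address are illustrative assumptions:

```go
import (
	"time"

	"github.com/gomodule/redigo/redis"
)

var redisPool *redis.Pool

func init() {
	redisPool = &redis.Pool{
		MaxIdle:     10000, // illustrative sizing for a spike workload
		MaxActive:   12000, // illustrative sizing
		IdleTimeout: 180 * time.Second,
		Dial: func() (redis.Conn, error) {
			// assumes a local Redis on the default port
			return redis.Dial("tcp", ":6379")
		},
	}
}
```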

Full Request Handler

The final handler combines local and remote deduction, returns JSON to the client, and logs the outcome:

func handleReq(w http.ResponseWriter, r *http.Request) {
    redisConn := redisPool.Get()
    defer redisConn.Close()
    LogMsg := ""
    <-done // acquire the channel "lock" to serialize stock deduction
    if localSpike.LocalDeductionStock() && remoteSpike.RemoteDeductionStock(redisConn) {
        util.RespJson(w, 1, "purchase successful", nil)
        LogMsg = "result:1,localSales:" + strconv.FormatInt(localSpike.LocalSalesVolume, 10)
    } else {
        util.RespJson(w, -1, "sold out", nil)
        LogMsg = "result:0,localSales:" + strconv.FormatInt(localSpike.LocalSalesVolume, 10)
    }
    done <- 1 // release the lock
    writeLog(LogMsg, "./stat.log")
}
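The done channel used above as a lock is never shown being created. Presumably it is a buffered channel of capacity one, seeded with a single token, so that a receive acquires the lock and a send releases it. A self-contained sketch of that pattern:

```go
package main

import "fmt"

func main() {
	// A buffered channel of capacity 1 acts as a binary semaphore:
	// receiving takes the token (lock), sending returns it (unlock).
	done := make(chan int, 1)
	done <- 1 // seed the single token so the first handler can proceed

	counter := 0
	finished := make(chan struct{})
	for i := 0; i < 100; i++ {
		go func() {
			<-done    // acquire
			counter++ // critical section: serialized, so safe without a mutex
			done <- 1 // release
			finished <- struct{}{}
		}()
	}
	for i := 0; i < 100; i++ {
		<-finished
	}
	fmt.Println(counter) // 100
}
```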

Performance Test

Running ApacheBench with 10 000 requests and 100 concurrent connections yields about 4 300 requests per second and a mean latency of 23 ms, confirming that a single machine can handle thousands of requests per second; scaling to dozens of machines via Nginx weighted balancing further multiplies capacity.
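Following the invocation format shown earlier, the benchmark command would presumably be:

```shell
ab -n 10000 -c 100 http://www.load_balance.com/buy/ticket
```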

Conclusion

The article demonstrates that a high‑traffic flash‑sale system can be built without heavy database I/O by combining local in‑memory stock, Redis‑backed global stock, and proper load‑balancing. It also highlights two key lessons: distribute load across many servers (divide‑and‑conquer) and leverage asynchronous, lock‑free concurrency (e.g., Go channels) to maximize CPU utilization.

Tags: load balancing, Redis, Go, high concurrency, nginx, ticketing system
Written by Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
