Backend Development

Designing a High-Concurrency Ticketing System with Load Balancing, Redis, and Go

This article explores the architecture and implementation of a high‑concurrency train ticket flash‑sale system, detailing load‑balancing strategies, Nginx weighted round‑robin, Redis‑based inventory management, and Go code examples that demonstrate local and remote stock deduction, performance testing, and fault‑tolerant design.


During holidays, millions of users in China compete for train tickets, creating a massive spike in traffic that challenges the 12306 ticketing platform. The article examines the backend architecture required to handle 1 million concurrent users purchasing 10,000 tickets while maintaining stability and correctness.

1. Large‑Scale High‑Concurrency Architecture

The system uses a distributed cluster with multiple layers of load balancers, redundancy (dual data centers, node fault tolerance), and traffic distribution to ensure high availability. A simplified diagram illustrates the multi‑level load‑balancing setup.

1.1 Load Balancing Overview

Three types of load balancing are introduced:

OSPF – an interior gateway protocol that builds a link‑state database and can perform equal‑cost load balancing across up to six links.

LVS – Linux Virtual Server, an IP‑level load balancer that distributes requests among a pool of servers while masking failures.

Nginx – a high‑performance HTTP reverse proxy; the article focuses on weighted round‑robin configuration.

1.2 Nginx Weighted Round‑Robin Demo

<code># Load-balancing configuration
upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}
</code>

Four local Go services listen on ports 3001‑3004 with weights matching the Nginx configuration, ensuring traffic is proportionally distributed.
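Nginx implements the "smooth" variant of weighted round‑robin, which interleaves backends rather than sending bursts to the heaviest one. The sketch below is a simplified re‑implementation for illustration (not Nginx's actual source), showing how the weights 1:2:3:4 from the configuration above translate into request counts:

```go
package main

import "fmt"

// Peer models one upstream server in smooth weighted round-robin:
// currentWeight grows by weight each round, and the selected peer is
// penalized by the total weight, so traffic stays interleaved.
type Peer struct {
	addr          string
	weight        int
	currentWeight int
}

// next picks the peer with the highest currentWeight, then subtracts
// the total weight from it.
func next(peers []*Peer) *Peer {
	total := 0
	var best *Peer
	for _, p := range peers {
		p.currentWeight += p.weight
		total += p.weight
		if best == nil || p.currentWeight > best.currentWeight {
			best = p
		}
	}
	best.currentWeight -= total
	return best
}

func main() {
	peers := []*Peer{
		{addr: "127.0.0.1:3001", weight: 1},
		{addr: "127.0.0.1:3002", weight: 2},
		{addr: "127.0.0.1:3003", weight: 3},
		{addr: "127.0.0.1:3004", weight: 4},
	}
	counts := map[string]int{}
	for i := 0; i < 10; i++ {
		counts[next(peers).addr]++
	}
	// Over 10 requests the four backends receive 1, 2, 3, and 4 hits,
	// matching the configured weights.
	fmt.Println(counts)
}
```

Over any window of 10 requests the distribution matches the weights exactly, which is why the four demo services see proportional traffic.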

2. Flash‑Sale System Design Choices

The article compares three inventory handling strategies:

Order‑First, Decrease‑Stock – create the order, then deduct stock. Simple, but it generates heavy database I/O, and malicious orders that are never paid can lock up inventory, causing “few‑sell” (tickets left unsold despite demand).

Pay‑First, Decrease‑Stock – deduct stock only after payment. This prevents few‑sell, but under extreme concurrency more orders can be paid than there is stock, causing “oversell”.

Pre‑Deduction (Reserve Stock) – reserve inventory first, then create orders asynchronously via a message queue; unpaid orders expire and release their stock. This approach minimizes database I/O and balances the oversell and few‑sell risks.

Images illustrate each strategy’s flow.
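The pre‑deduction flow can be sketched in a few lines of Go. This is an illustrative model, not the article's code: an atomic counter stands in for the reserved inventory, and a buffered channel stands in for the message queue feeding an asynchronous order‑creation worker (order expiry and refund are omitted):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var stock int64 = 100 // tickets available for reservation

// reserve atomically claims one unit of stock, rolling back if the
// counter went negative, so exactly `stock` reservations can succeed.
func reserve() bool {
	if atomic.AddInt64(&stock, -1) < 0 {
		atomic.AddInt64(&stock, 1)
		return false
	}
	return true
}

func main() {
	orders := make(chan int, 1024) // stands in for the message queue
	var created int64

	var worker sync.WaitGroup
	worker.Add(1)
	go func() { // asynchronous order-creation worker
		defer worker.Done()
		for range orders {
			atomic.AddInt64(&created, 1)
		}
	}()

	var buyers sync.WaitGroup
	for i := 0; i < 1000; i++ { // 1000 buyers race for 100 tickets
		buyers.Add(1)
		go func(id int) {
			defer buyers.Done()
			if reserve() {
				orders <- id // enqueue; order is created off the hot path
			}
		}(i)
	}
	buyers.Wait()
	close(orders)
	worker.Wait()
	fmt.Println(created) // exactly 100: no oversell, no few-sell
}
```

Because the reservation is the only synchronous step, the database (here simulated by the worker) never sees the full burst of traffic.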

3. Stock Deduction Techniques

The article details local in‑memory stock deduction, remote Redis‑based deduction, and a hybrid approach that combines both for fault tolerance. Redis serves as a high‑performance central store (single‑threaded command execution, roughly 100k QPS). A small buffer of stock on each node limits the tickets lost when a node fails.

4. Code Demonstration (Go)

4.1 Initialization – sets local inventory, Redis keys, and a channel‑based lock.

<code>// localSpike/localSpike.go — per-node in-memory stock
package localSpike

type LocalSpike struct {
    LocalInStock     int64
    LocalSalesVolume int64
}

// remoteSpike/remoteSpike.go — Redis keys and connection pool
package remoteSpike

import "github.com/gomodule/redigo/redis"

type RemoteSpikeKeys struct {
    SpikeOrderHashKey  string // Redis hash key for orders
    TotalInventoryKey  string // field holding total tickets
    QuantityOfOrderKey string // field holding sold tickets
}

func NewPool() *redis.Pool {
    return &redis.Pool{
        MaxIdle:   10000,
        MaxActive: 12000,
        Dial: func() (redis.Conn, error) {
            c, err := redis.Dial("tcp", ":6379")
            if err != nil {
                panic(err.Error())
            }
            return c, err
        },
    }
}

// main.go — package-level state shared by the HTTP handlers
var (
    localSpike  localSpike2.LocalSpike
    remoteSpike remoteSpike2.RemoteSpikeKeys
    redisPool   *redis.Pool
    done        chan int
)

func init() {
    localSpike = localSpike2.LocalSpike{LocalInStock: 150, LocalSalesVolume: 0}
    remoteSpike = remoteSpike2.RemoteSpikeKeys{
        SpikeOrderHashKey:  "ticket_hash_key",
        TotalInventoryKey:  "ticket_total_nums",
        QuantityOfOrderKey: "ticket_sold_nums",
    }
    redisPool = remoteSpike2.NewPool()
    done = make(chan int, 1) // buffered channel of size 1, used as a lock
    done <- 1
}
</code>

4.2 Local and Remote Stock Deduction

<code>// LocalDeductionStock deducts one unit of the node-local buffer stock.
// Callers must hold the channel lock; the method is not itself thread-safe.
// Note the <= comparison: with <, the final unit of stock would never sell.
func (spike *LocalSpike) LocalDeductionStock() bool {
    spike.LocalSalesVolume++
    return spike.LocalSalesVolume <= spike.LocalInStock
}

// LuaScript checks and increments the sold counter in a single atomic
// step inside Redis, so concurrent nodes cannot oversell the shared
// inventory. The strict > comparison stops sales exactly at the total.
const LuaScript = `
    local ticket_key = KEYS[1]
    local ticket_total_key = ARGV[1]
    local ticket_sold_key = ARGV[2]
    local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
    local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
    if ticket_total_nums > ticket_sold_nums then
        return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
    end
    return 0
`

func (r *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
    lua := redis.NewScript(1, LuaScript)
    result, err := redis.Int(lua.Do(conn, r.SpikeOrderHashKey, r.TotalInventoryKey, r.QuantityOfOrderKey))
    if err != nil {
        return false
    }
    return result != 0
}
</code>

4.3 HTTP Handler and Logging

<code>func handleReq(w http.ResponseWriter, r *http.Request) {
    redisConn := redisPool.Get()
    defer redisConn.Close() // return the connection to the pool
    <-done                  // acquire the channel lock
    var LogMsg string
    // Note: if the local deduction succeeds but the remote one fails,
    // the local counter is not rolled back; the per-node buffer stock
    // is sized to absorb this discrepancy.
    if localSpike.LocalDeductionStock() && remoteSpike.RemoteDeductionStock(redisConn) {
        util.RespJson(w, 1, "抢票成功", nil) // "purchase successful"
        LogMsg = "result:1,localSales:" + strconv.FormatInt(localSpike.LocalSalesVolume, 10)
    } else {
        util.RespJson(w, -1, "已售罄", nil) // "sold out"
        LogMsg = "result:0,localSales:" + strconv.FormatInt(localSpike.LocalSalesVolume, 10)
    }
    done <- 1 // release the channel lock
    writeLog(LogMsg, "./stat.log")
}

func writeLog(msg string, logPath string) {
    fd, err := os.OpenFile(logPath, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        return // logging failures must not take down the request path
    }
    defer fd.Close()
    content := strings.Join([]string{msg, "\r\n"}, "")
    fd.Write([]byte(content))
}
</code>

4.4 Performance Testing

ApacheBench (ab) shows the single‑node service handling over 4,000 requests per second with uniform traffic distribution and stable Redis performance.

<code>ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket
</code>

Log excerpts confirm correct request handling and the expected split of successful versus sold‑out responses.
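Given the log format produced by writeLog in section 4.3 (`result:1,...` for a sale, `result:0,...` for sold‑out), a quick tally with grep verifies the split. The sample lines below are fabricated for illustration; against a real run, the success count should equal the node's buffer stock:

```shell
# Sample lines in the format writeLog produces (illustrative data)
printf 'result:1,localSales:1\nresult:1,localSales:2\nresult:0,localSales:150\n' > stat.log

grep -c 'result:1' stat.log   # successful purchases
grep -c 'result:0' stat.log   # sold-out responses
```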

5. Summary

The article demonstrates a practical, high‑concurrency ticket‑flash‑sale system that avoids heavy database I/O by using in‑memory operations and Redis for centralized inventory, employs weighted Nginx load balancing, and tolerates node failures through buffered stock. Key takeaways include effective load distribution and leveraging Go’s native concurrency model.

Tags: backend, Redis, Go, ticketing, high-concurrency, load-balancing
Written by php中文网, a platform for the latest courses and technical articles, helping PHP learners advance quickly.