
Designing a High‑Concurrency Ticket Spike System: Load Balancing, Stock Deduction, and Go Implementation

This article explores the architecture and implementation of a high‑concurrency ticket‑spike system, covering distributed load‑balancing, Nginx weighted round‑robin configuration, Go‑based local and remote stock deduction with Redis, performance testing, and strategies to avoid overselling and underselling.


1. Large‑Scale High‑Concurrency System Architecture

The system adopts a distributed cluster with multiple layers of load balancing, fault‑tolerance mechanisms (dual data centers, node failover, disaster recovery), and traffic distribution to keep the service available under millions of QPS.

1.1 Load‑Balancing Overview

Three common load‑balancing methods are introduced: OSPF (routing protocol), LVS (Linux Virtual Server), and Nginx (HTTP reverse proxy). The article focuses on Nginx weighted round‑robin.

1.2 Nginx Weighted Round‑Robin Demo

The Nginx upstream module is used to assign weights to backend servers listening on ports 3001‑3004.

upstream load_rule {
    server 127.0.0.1:3001 weight=1;
    server 127.0.0.1:3002 weight=2;
    server 127.0.0.1:3003 weight=3;
    server 127.0.0.1:3004 weight=4;
}
server {
    listen 80;
    server_name load_balance.com www.load_balance.com;
    location / {
        proxy_pass http://load_rule;
    }
}

2. Spike System Design Choices

The article analyzes three stock‑deduction strategies: order‑first‑then‑stock, payment‑first‑then‑stock, and pre‑deduction. It concludes that pre‑deduction (reserve stock locally, then asynchronously create orders) offers the best balance between performance and correctness.

2.1 Local Stock Deduction

Each server keeps a portion of the total inventory in memory and deducts stock locally, avoiding database I/O on the hot path.
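As a back‑of‑envelope sketch of how the allotment works (the figures below are assumed for illustration, not from the article), the global inventory is split evenly across nodes so that each server can answer most requests from memory:

```go
package main

import "fmt"

// perNodeStock splits the global inventory evenly across nodes; any
// remainder stays on the remote (Redis) side as a small buffer.
func perNodeStock(total, nodes int64) int64 {
	return total / nodes
}

func main() {
	// e.g. 100,000 tickets over 4 backends -> 25,000 per node
	fmt.Println(perNodeStock(100000, 4))
}
```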

2.2 Remote Unified Stock Deduction

Redis stores the global inventory. A Lua script guarantees atomic check‑and‑decrement operations.

-- KEYS[1]: hash key for this spike event
-- ARGV[1]: field holding the total inventory
-- ARGV[2]: field holding the quantity already sold
local ticket_key = KEYS[1]
local ticket_total_key = ARGV[1]
local ticket_sold_key = ARGV[2]
local ticket_total_nums = tonumber(redis.call('HGET', ticket_key, ticket_total_key))
local ticket_sold_nums = tonumber(redis.call('HGET', ticket_key, ticket_sold_key))
-- Sell only while sold < total; a '>=' comparison here would oversell by one.
if (ticket_total_nums > ticket_sold_nums) then
    return redis.call('HINCRBY', ticket_key, ticket_sold_key, 1)
end
return 0

3. Go Implementation

Key Go structures and functions are presented.

package localSpike

// LocalSpike tracks this node's allotment of the inventory.
type LocalSpike struct {
	LocalInStock     int64 // tickets allotted to this node
	LocalSalesVolume int64 // tickets already sold locally
}

// LocalDeductionStock records one sale and reports whether it was within
// the local allotment. Note '<=': with '<' the last local ticket could
// never be sold. Callers must serialize access (the article uses a channel).
func (spike *LocalSpike) LocalDeductionStock() bool {
	spike.LocalSalesVolume++
	return spike.LocalSalesVolume <= spike.LocalInStock
}
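A quick usage sketch of the local deduction step (the type is restated so the snippet is self‑contained; note the `<=`, so that the last allotted ticket can still be sold):

```go
package main

import "fmt"

// LocalSpike mirrors the struct above, restated for a self-contained demo.
type LocalSpike struct {
	LocalInStock     int64
	LocalSalesVolume int64
}

// LocalDeductionStock records one sale; '<=' lets the final ticket sell.
func (s *LocalSpike) LocalDeductionStock() bool {
	s.LocalSalesVolume++
	return s.LocalSalesVolume <= s.LocalInStock
}

func main() {
	spike := &LocalSpike{LocalInStock: 3}
	sold := 0
	for i := 0; i < 5; i++ { // five buyers compete for three local tickets
		if spike.LocalDeductionStock() {
			sold++
		}
	}
	fmt.Println("sold:", sold) // exactly the local allotment of 3
}
```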
package remoteSpike

import "github.com/gomodule/redigo/redis" // assuming the redigo client

// RemoteSpikeKeys names the Redis hash key and fields for the global inventory.
type RemoteSpikeKeys struct {
	SpikeOrderHashKey  string
	TotalInventoryKey  string
	QuantityOfOrderKey string
}

const LuaScript = `...` // (see Lua script above)

// RemoteDeductionStock runs the Lua script atomically; true means one unit was reserved.
func (r *RemoteSpikeKeys) RemoteDeductionStock(conn redis.Conn) bool {
	lua := redis.NewScript(1, LuaScript)
	result, err := redis.Int(lua.Do(conn, r.SpikeOrderHashKey, r.TotalInventoryKey, r.QuantityOfOrderKey))
	if err != nil { return false }
	return result != 0
}

The main function registers the /buy/ticket handler and initializes the Redis pool, the local stock, and a channel used as a lightweight lock.

func main() {
    // Initialization of the Redis pool, the local stock allotment,
    // and the channel-based lock is elided here.
    http.HandleFunc("/buy/ticket", handleReq)
    http.ListenAndServe(":3005", nil)
}

The request handler serializes the local and remote stock deductions behind the channel lock so the two steps act as a unit, responds with a success or sold‑out message, and logs the result.

4. Performance Testing

ApacheBench (ab) is used to fire 10,000 requests at a concurrency level of 100 (e.g. `ab -n 10000 -c 100 http://127.0.0.1:3005/buy/ticket`). The test shows the single‑machine service sustaining over 4,000 requests per second, confirming the effectiveness of the design.

5. Summary

The article demonstrates a practical high‑concurrency ticket‑spike system using load balancing, in‑memory stock reservation, Redis for global consistency, and Go’s native concurrency model. It highlights how to avoid overselling and underselling, achieve fault tolerance with buffer stock, and scale the system to handle massive QPS.

Tags: load balancing, Redis, Go, high concurrency, distributed system, ticket spike
Written by

IT Architects Alliance

A forum for discussing systems, internet‑scale, distributed, high‑availability, and high‑performance architectures, along with big data, machine learning, and AI, including real‑world large‑scale architecture case studies. Open to architects with ideas who enjoy sharing.
