
Preventing Coupon Over‑Issuance in High‑Concurrency Scenarios with Java, SQL, and Redis Distributed Locks

This article analyzes the coupon over‑issuance issue caused by concurrent requests, demonstrates why simple SQL updates can fail under load, and presents four solutions—including Java synchronized blocks, conditional SQL updates, Redis distributed locks, and Redisson’s lock implementation—to ensure atomic stock reduction and prevent negative inventory.

Java Captain

In a recent project, a coupon‑claiming feature suffered from over‑issuance when multiple users attempted to claim the same coupon simultaneously. Each coupon has a total stock (e.g., 120) and a per‑user limit (e.g., 140). When a claim succeeds, a record is written to a secondary table (Table B).

Under low concurrency the basic update coupon set stock = stock - 1 where id = #{coupon_id} statement works, but a JMeter load test with 500 concurrent requests left the stock for one coupon (id = 19) at -1, i.e., an over‑deduction.

The root cause is a check‑then‑act race: two threads can both pass the availability check before either reduces the stock, so both decrement the same row.
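The check‑then‑act race can be reproduced deterministically in plain Java. The sketch below (class and field names are illustrative, not from the project) uses a latch to force both threads past the availability check before either one decrements; even though each decrement itself is serialized, the stock still goes negative:

```java
import java.util.concurrent.CountDownLatch;

public class RaceDemo {
    public static int stock = 1;                // one coupon left
    static final Object decLock = new Object(); // serializes only the decrement itself

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch bothChecked = new CountDownLatch(2);
        Runnable claim = () -> {
            if (stock > 0) {                    // availability check passes for BOTH threads
                bothChecked.countDown();
                try {
                    bothChecked.await();        // wait until both are past the check
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                synchronized (decLock) {        // decrement is atomic, yet stock still goes negative
                    stock = stock - 1;
                }
            }
        };
        Thread t1 = new Thread(claim);
        Thread t2 = new Thread(claim);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("stock = " + stock); // prints "stock = -1"
    }
}
```

The bug is not in the decrement but in the gap between the check and the decrement; any fix must make the two steps atomic together.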

Solution 1 – Java synchronized block

By wrapping the entire claim logic in a synchronized(this) block, only one thread can execute the method at a time:

synchronized (this) {
    LoginUser loginUser = LoginInterceptor.threadLocal.get();
    CouponDO couponDO = couponMapper.selectOne(new QueryWrapper<CouponDO>()
            .eq("id", couponId)
            .eq("category", categoryEnum.name()));
    if(couponDO == null){
        throw new BizException(BizCodeEnum.COUPON_NO_EXITS);
    }
    this.checkCoupon(couponDO, loginUser.getId());
    // build record
    CouponRecordDO couponRecordDO = new CouponRecordDO();
    BeanUtils.copyProperties(couponDO, couponRecordDO);
    couponRecordDO.setCreateTime(new Date());
    couponRecordDO.setUseState(CouponStateEnum.NEW.name());
    couponRecordDO.setUserId(loginUser.getId());
    couponRecordDO.setUserName(loginUser.getName());
    couponRecordDO.setCouponId(couponDO.getId());
    couponRecordDO.setId(null);
    int row = couponMapper.reduceStock(couponId);
    if(row == 1){
        couponRecordMapper.insert(couponRecordDO);
    } else {
        log.info("Failed to issue coupon: {}, user: {}", couponDO, loginUser);
    }
}

While this prevents over‑issuance in a single‑JVM deployment, it does not work across a cluster, and serializing every claim through one monitor creates heavy thread contention under load.

Solution 2 – Conditional SQL update

Adding a stock‑check condition makes the update atomic at the database level (InnoDB row lock):

update coupon set stock = stock - 1 where id = #{coupon_id} and stock > 0

For optimistic locking you can also include a version column:

update product set stock = stock - 1, version = version + 1 
where id = 1 and stock > 0 and version = #{lastVersion}

These approaches avoid over‑issuance as long as the business can tolerate the occasional failed update: when the condition no longer holds, the statement simply matches zero rows and the claim fails.
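The "succeed only while stock is still positive" semantics of the conditional update can be sketched in‑process with AtomicInteger.compareAndSet, which plays the same role as the atomic UPDATE ... WHERE stock > 0 (class and counter names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ConditionalUpdateDemo {
    public static final AtomicInteger stock = new AtomicInteger(5);   // coupons left
    public static final AtomicInteger issued = new AtomicInteger(0);  // successful claims

    // In-memory analogue of "update coupon set stock = stock - 1 where id = ? and stock > 0":
    // compareAndSet succeeds only if no other thread changed the value since we read it.
    public static boolean tryClaim() {
        while (true) {
            int current = stock.get();
            if (current <= 0) {
                return false;                 // stock exhausted: the UPDATE would match 0 rows
            }
            if (stock.compareAndSet(current, current - 1)) {
                issued.incrementAndGet();     // row == 1: safe to insert the claim record
                return true;
            }
            // lost the race (like a stale version column): re-read and retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[50];    // 50 claimers competing for 5 coupons
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(ConditionalUpdateDemo::tryClaim);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("stock=" + stock.get() + " issued=" + issued.get()); // stock=0 issued=5
    }
}
```

However many threads compete, exactly five claims succeed and the stock never goes below zero, which is precisely the guarantee the conditional UPDATE gives at the database level.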

Solution 3 – Redis distributed lock (setnx)

Using Redis as a lock service, the flow is:

String key = "lock:coupon:" + couponId;
if (setnx(key, "1")) {                       // acquire lock
    expire(key, 30, TimeUnit.SECONDS);       // expiration so a crashed holder cannot block forever
    try {
        // business logic
    } finally {
        del(key);
    }
} else {
    // lock busy: retry, spin, or fail fast
}
// Caveat: setnx + expire are two separate commands; if the process dies between
// them the lock never expires. Production code should use SET key value NX EX
// so the value and TTL are set atomically.

To avoid accidental lock release, store the thread ID (or UUID) as the lock value and delete only if the stored value matches:

String threadId = String.valueOf(Thread.currentThread().getId());
if (setnx(key, threadId)) {
    expire(key, 30, TimeUnit.SECONDS);
    try {
        // business logic
    } finally {
        if (threadId.equals(get(key))) {     // release only a lock we still own
            del(key);
        }
    }
}

For atomic check‑and‑delete, a Lua script can be used:

String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
redisTemplate.execute(new DefaultRedisScript<>(script, Long.class), Arrays.asList(key), threadId);
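Putting the pieces together with Spring Data Redis, the acquire step can use the atomic SET ... NX EX form (StringRedisTemplate's setIfAbsent with a timeout) instead of separate setnx and expire calls, and the release uses the Lua script above. This is a sketch, not the article's production code; the class name, key format, and 30‑second TTL are illustrative, and running it requires a live Redis instance:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.UUID;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

public class CouponLock {
    private static final String UNLOCK_LUA =
            "if redis.call('get', KEYS[1]) == ARGV[1] then "
          + "return redis.call('del', KEYS[1]) else return 0 end";

    private final StringRedisTemplate redisTemplate;

    public CouponLock(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void claimWithLock(long couponId, Runnable businessLogic) {
        String key = "lock:coupon:" + couponId;
        String token = UUID.randomUUID().toString(); // unique per acquisition

        // SET key token NX EX 30 -- value and expiration set in one atomic command
        Boolean acquired = redisTemplate.opsForValue()
                .setIfAbsent(key, token, Duration.ofSeconds(30));
        if (!Boolean.TRUE.equals(acquired)) {
            return; // lock busy: caller may retry or fail fast
        }
        try {
            businessLogic.run();
        } finally {
            // atomic check-and-delete: release only if we still hold the lock
            redisTemplate.execute(
                    new DefaultRedisScript<>(UNLOCK_LUA, Long.class),
                    Collections.singletonList(key), token);
        }
    }
}
```

A random UUID is used instead of the thread ID because thread IDs are only unique within one JVM, while the lock is shared across the cluster.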

Solution 4 – Redisson client

Redisson provides a high‑level distributed lock with an automatic watchdog that renews the lock lease:

<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.17.4</version>
</dependency>

Configuration example:

@Configuration
public class AppConfig {
    @Value("${spring.redis.host}") private String redisHost;
    @Value("${spring.redis.port}") private String redisPort;
    @Bean
    public RedissonClient redisson(){
        Config config = new Config();
        config.useSingleServer().setAddress("redis://" + redisHost + ":" + redisPort);
        return Redisson.create(config);
    }
}

Using the lock in service code:

public JsonData addCoupon(long couponId, CouponCategoryEnum categoryEnum){
    String key = "lock:coupon:" + couponId;
    RLock rLock = redisson.getLock(key);
    rLock.lock();
    try {
        // business logic
    } finally {
        rLock.unlock();
    }
    return JsonData.buildSuccess();
}

Redisson’s watchdog automatically extends the lock’s TTL (default lease 30 seconds, renewed while the holding thread is alive), eliminating the need for manual expiration handling.
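One subtlety worth noting: the watchdog runs only when no explicit lease time is passed. A sketch of the variants (the key name is illustrative; each variant is an alternative, not a sequence, and running this requires a live Redis instance):

```java
RLock rLock = redisson.getLock("lock:coupon:" + couponId);

// Variant 1: no lease time -- the watchdog keeps renewing the default 30 s TTL
// for as long as the holding thread is alive.
rLock.lock();

// Variant 2: explicit lease time -- NO watchdog; the lock is force-released
// after 10 s even if the business logic is still running.
rLock.lock(10, TimeUnit.SECONDS);

// Variant 3: bounded wait -- try for up to 3 s, then give up instead of
// blocking forever; the watchdog still applies since no lease time is given.
if (rLock.tryLock(3, TimeUnit.SECONDS)) {
    try {
        // business logic
    } finally {
        if (rLock.isHeldByCurrentThread()) {
            rLock.unlock();
        }
    }
}
```

For a claim endpoint under load, the tryLock variant is usually preferable: a request that cannot get the lock quickly can fail fast rather than pile up blocked threads.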

All four approaches address the over‑issuance problem from different angles: in‑process synchronization, database‑level atomicity, a custom Redis lock, and a production‑ready Redisson lock. Choose the one that fits your deployment topology and performance requirements.

Tags: Java, SQL, Concurrency, Redis, distributed lock, coupon
Written by Java Captain

Focused on Java technologies: SSM, the Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading; occasionally covers DevOps tools like Jenkins, Nexus, Docker, ELK; shares practical tech insights and is dedicated to full‑stack Java development.
