
Request Merging and Batch Processing in Java Spring Boot to Reduce Database Connections

This article shows how to merge multiple user‑detail requests into a single database query in Spring Boot using a blocking queue, a scheduled thread pool, and CompletableFuture. It includes code examples, a high‑concurrency test, and a discussion of trade‑offs such as added latency and timeout handling.

Java Captain

The article introduces the concept of request merging, showing that combining several user‑detail queries into one SQL statement can dramatically reduce database connection usage, especially under high concurrency.

Technical means are presented: a LinkedBlockingQueue stores incoming requests, a ScheduledThreadPoolExecutor periodically batches them, and a CompletableFuture (which, prior to Java 9, has no built‑in way to complete on timeout) delivers each result back to its caller.

Core service code defines a UserService interface with a batch method and its implementation that builds an IN clause, executes a single query, and groups results by user ID.

public interface UserService {
    Map<String, Users> queryUserByIdBatch(List<UserWrapBatchService.Request> userReqs);
}

@Service
public class UserServiceImpl implements UserService {
    @Resource
    private UsersMapper usersMapper;

    @Override
    public Map<String, Users> queryUserByIdBatch(List<UserWrapBatchService.Request> userReqs) {
        // Collect the user ids from the queued requests.
        List<Long> userIds = userReqs.stream()
            .map(UserWrapBatchService.Request::getUserId)
            .collect(Collectors.toList());
        // One query with an IN clause serves the whole batch.
        QueryWrapper<Users> queryWrapper = new QueryWrapper<>();
        queryWrapper.in("id", userIds);
        List<Users> users = usersMapper.selectList(queryWrapper);
        // Group the result set by user id for constant-time lookups.
        Map<Long, List<Users>> userGroup = users.stream()
            .collect(Collectors.groupingBy(Users::getId));
        // Map each request id to its user; missing users map to null
        // so every waiting caller still receives an answer.
        Map<String, Users> result = new HashMap<>();
        userReqs.forEach(req -> {
            List<Users> usersList = userGroup.get(req.getUserId());
            result.put(req.getRequestId(), CollectionUtils.isEmpty(usersList) ? null : usersList.get(0));
        });
        return result;
    }
}
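The grouping step can be exercised in isolation. Below is a minimal in‑memory sketch that mirrors the service's request‑to‑user mapping; the `User` and `Request` records are stand‑ins for the MyBatis‑Plus entity and the batch service's request class, not the article's actual types:

```java
import java.util.*;
import java.util.stream.Collectors;

// In-memory sketch of the grouping logic in queryUserByIdBatch.
public class BatchGroupingSketch {
    record User(long id, String name) {}
    record Request(String requestId, long userId) {}

    static Map<String, User> groupByRequest(List<Request> reqs, List<User> fetched) {
        // Group the single query's result set by user id, as the service does.
        Map<Long, List<User>> byId = fetched.stream()
                .collect(Collectors.groupingBy(User::id));
        Map<String, User> result = new HashMap<>();
        for (Request r : reqs) {
            List<User> hits = byId.get(r.userId());
            // Missing users map to null so every request id gets an entry.
            result.put(r.requestId(), hits == null ? null : hits.get(0));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Request> reqs = List.of(new Request("r1", 1L), new Request("r2", 99L));
        List<User> fetched = List.of(new User(1L, "alice"), new User(2L, "bob"));
        Map<String, User> out = groupByRequest(reqs, fetched);
        System.out.println(out.get("r1").name()); // alice
        System.out.println(out.get("r2"));        // null
    }
}
```

Note that requests for unknown ids still get an entry (mapped to null); without that, their futures would never be completed.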

Batch request handling is encapsulated in UserWrapBatchService, which queues Request objects, each containing a unique request ID, the user ID, and a CompletableFuture<Users>. A scheduled task runs every 10 ms (after an initial 100 ms delay), pulls up to MAX_TASK_NUM requests, calls the batch service, and completes each future.

@Service
public class UserWrapBatchService {
    @Resource
    private UserService userService;
    // Cap the batch size so the generated IN clause stays within SQL length limits.
    public static final int MAX_TASK_NUM = 100;
    private final BlockingQueue<Request> queue = new LinkedBlockingQueue<>();

    public static class Request {
        String requestId;
        Long userId;
        CompletableFuture<Users> completableFuture;
        // getters and setters omitted for brevity
    }

    @PostConstruct
    public void init() {
        ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
        scheduledExecutorService.scheduleAtFixedRate(() -> {
            int size = queue.size();
            if (size == 0) {
                return;
            }
            // Drain up to MAX_TASK_NUM queued requests into one batch.
            List<Request> list = new ArrayList<>();
            for (int i = 0; i < size && i < MAX_TASK_NUM; i++) {
                list.add(queue.poll());
            }
            try {
                Map<String, Users> response = userService.queryUserByIdBatch(list);
                for (Request request : list) {
                    request.completableFuture.complete(response.get(request.requestId));
                }
            } catch (Exception e) {
                // Fail every waiting caller rather than leaving futures blocked forever.
                list.forEach(request -> request.completableFuture.completeExceptionally(e));
            }
        }, 100, 10, TimeUnit.MILLISECONDS);
    }

    public Users queryUser(Long userId) {
        Request request = new Request();
        request.requestId = UUID.randomUUID().toString().replace("-", "");
        request.userId = userId;
        request.completableFuture = new CompletableFuture<>();
        queue.offer(request);
        try {
            // Blocks until the scheduled task completes the future.
            return request.completableFuture.get();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
        return null;
    }
}
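The queue + scheduler + future pattern can also be exercised outside Spring. The following is a minimal standalone sketch with the database replaced by an in‑memory map; all names (MiniBatcher, Req, db) are illustrative, not from the article:

```java
import java.util.*;
import java.util.concurrent.*;

// Standalone sketch of the queue + scheduler + future pattern,
// with the database stubbed by an in-memory map.
public class MiniBatcher {
    record Req(long userId, CompletableFuture<String> future) {}

    private final BlockingQueue<Req> queue = new LinkedBlockingQueue<>();
    private final Map<Long, String> db = Map.of(1L, "alice", 2L, "bob");
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    MiniBatcher() {
        scheduler.scheduleAtFixedRate(() -> {
            List<Req> batch = new ArrayList<>();
            queue.drainTo(batch, 100);              // take up to MAX_TASK_NUM requests
            // One in-memory "query" serves the whole batch.
            for (Req r : batch) {
                r.future().complete(db.get(r.userId()));
            }
        }, 10, 10, TimeUnit.MILLISECONDS);
    }

    String query(long userId) throws Exception {
        Req r = new Req(userId, new CompletableFuture<>());
        queue.offer(r);
        return r.future().get(1, TimeUnit.SECONDS); // bounded wait
    }

    void shutdown() {
        scheduler.shutdown();
    }

    public static void main(String[] args) throws Exception {
        MiniBatcher b = new MiniBatcher();
        System.out.println(b.query(1L)); // alice
        System.out.println(b.query(3L)); // null (unknown user)
        b.shutdown();
    }
}
```

BlockingQueue.drainTo with a cap is a slightly tidier alternative to the size-then-poll loop used in the article's service, and it avoids any race between measuring the queue and draining it.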

The controller exposes an endpoint /merge that delegates to the batch service, returning a Callable<Users> so the servlet thread is released and the request is processed asynchronously on Spring MVC's task executor.

@RequestMapping("/merge")
public Callable<Users> merge(Long userId) {
    // Returning a Callable releases the servlet thread while the batch runs.
    return () -> userBatchService.queryUser(userId);
}
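What Spring MVC does with the returned Callable can be approximated with a plain executor: the container hands the task to a worker pool and releases the request thread. A hedged sketch of that idea (names illustrative, not Spring's internals):

```java
import java.util.concurrent.*;

// Sketch of Callable deferral: the "servlet" thread submits the task
// to a worker pool and is free to serve other requests meanwhile.
public class CallableDeferralSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService mvcTaskExecutor = Executors.newFixedThreadPool(2);
        Callable<String> deferred = () -> "user-42";    // what the controller returns
        Future<String> response = mvcTaskExecutor.submit(deferred);
        System.out.println(response.get());             // prints user-42
        mvcTaskExecutor.shutdown();
    }
}
```

This matters here because queryUser blocks on its future; with a Callable return type, that blocking happens on the async executor's thread rather than the container's request thread.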

A high‑concurrency test starts 30 threads simultaneously (synchronized by a CountDownLatch), each issuing three requests to the endpoint via RestTemplate, demonstrating how the batch mechanism reduces the number of actual database calls.

public class TestBatch {
    private static final int THREAD_COUNT = 30;
    private static final CountDownLatch LATCH = new CountDownLatch(THREAD_COUNT);
    private static final RestTemplate restTemplate = new RestTemplate();

    public static void main(String[] args) {
        for (int i = 0; i < THREAD_COUNT; i++) {
            new Thread(() -> {
                // Use the latch as a starting gun so all threads fire together.
                LATCH.countDown();
                try {
                    LATCH.await();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                for (int j = 1; j <= 3; j++) {
                    int param = ThreadLocalRandom.current().nextInt(1, 4); // userId in 1..3
                    String response = restTemplate.getForObject(
                        "http://localhost:8080/asyncAndMerge/merge?userId=" + param,
                        String.class);
                    System.out.println(Thread.currentThread().getName() + " param " + param + " response " + response);
                }
            }).start();
        }
    }
}

The article also discusses two practical issues: Java 8's CompletableFuture has no way to complete a future on timeout (orTimeout only arrived in Java 9), and the generated SQL may exceed length limits, which is why the batch size is capped at MAX_TASK_NUM.
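For context, there are two caller‑side guards worth knowing about alongside the queue‑based workaround the article presents: Future.get with a timeout (available since Java 8) and CompletableFuture.orTimeout (Java 9+), which completes the future exceptionally with a TimeoutException. A minimal sketch:

```java
import java.util.concurrent.*;

// Two guards against a future that is never completed.
public class FutureTimeoutSketch {
    public static void main(String[] args) throws Exception {
        // Guard 1: blocking get with a deadline (Java 8).
        CompletableFuture<String> never = new CompletableFuture<>();
        try {
            never.get(50, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            System.out.println("get timed out");
        }

        // Guard 2: orTimeout completes the future itself (Java 9+).
        CompletableFuture<String> guarded = new CompletableFuture<>();
        guarded.orTimeout(50, TimeUnit.MILLISECONDS);
        try {
            guarded.join();
        } catch (CompletionException e) {
            System.out.println("orTimeout fired: " + (e.getCause() instanceof TimeoutException));
        }
    }
}
```

The difference matters: get(timeout) only bounds one caller's wait, while orTimeout fails the future itself so every dependent stage sees the timeout.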

To address the timeout problem, a variant using a LinkedBlockingQueue<Users> with a poll timeout is presented. The request object now carries a queue instead of a future, and the consumer thread offers the result to this queue, while the caller polls with a 3‑second timeout.

public class UserWrapBatchQueueService {
    // ... same fields as before ...
    public Users queryUser(Long userId) {
        Request request = new Request();
        request.requestId = UUID.randomUUID().toString().replace("-", "");
        request.userId = userId;
        // Each request carries its own single-result queue.
        LinkedBlockingQueue<Users> usersQueue = new LinkedBlockingQueue<>();
        request.usersQueue = usersQueue;
        queue.offer(request);
        try {
            // Give up after 3 seconds instead of blocking indefinitely.
            return usersQueue.poll(3000, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return null;
    }
    // The scheduled task offers each result into request.usersQueue.
}
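The poll‑with‑timeout semantics this variant relies on can be seen with a bare queue; a small sketch (values illustrative):

```java
import java.util.concurrent.*;

// poll(timeout, unit) returns null when no result arrives in time, which is
// how the queue-based variant bounds the caller's wait at 3 seconds.
public class PollTimeoutSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> resultQueue = new LinkedBlockingQueue<>();
        // No producer offers anything, so this waits and then returns null.
        System.out.println(resultQueue.poll(100, TimeUnit.MILLISECONDS)); // null

        // With a producer, poll returns as soon as the result is available.
        resultQueue.offer("user-42");
        System.out.println(resultQueue.poll(100, TimeUnit.MILLISECONDS)); // user-42
    }
}
```

One caveat of this design: the caller cannot distinguish "user not found" from "timed out", since both paths return null.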

Conclusion: Merging requests and processing them in batches can greatly reduce connection usage for databases or remote services, but it introduces extra waiting time (up to the scheduling interval plus the batch query itself) before the business logic runs, making it a poor fit for low‑concurrency scenarios.

Java · concurrency · batch processing · CompletableFuture · Spring Boot · Queue · request merging
Written by

Java Captain

Focused on Java technologies: SSM, the Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading; occasionally covers DevOps tools like Jenkins, Nexus, Docker, ELK; shares practical tech insights and is dedicated to full‑stack Java development.
