
Implementing Request Merging with ScheduledExecutorService in Java

This article explains the concept, advantages, and drawbacks of request merging, and provides a complete Java implementation using ScheduledExecutorService, a memory queue, and generic batch handling interfaces, along with usage examples and sample code.

Java Architect Essentials

1. What Is Request Merging

In web projects we usually use the HTTP protocol to handle requests, which follows a one‑request‑one‑response model.

Batch interfaces have a clear performance advantage because they reduce the number of I/O round trips. Under high concurrency, frequent small requests can be merged: wait for a short period, or until a certain request count is reached, then send a single batch request.
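To make the I/O saving concrete, here is a minimal, self-contained sketch (the `fetchOne`/`fetchBatch` backend calls are hypothetical stand-ins, not part of the article's code): five individual lookups cost five round trips, while one batch lookup costs one.

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BatchVsSingle {
    static final AtomicInteger IO_CALLS = new AtomicInteger();

    // Hypothetical backend call: one network round trip per id
    static String fetchOne(int id) {
        IO_CALLS.incrementAndGet();
        return "product-" + id;
    }

    // Hypothetical batch call: one round trip for the whole list
    static List<String> fetchBatch(List<Integer> ids) {
        IO_CALLS.incrementAndGet();
        List<String> out = new ArrayList<>();
        for (int id : ids) out.add("product-" + id);
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ids = Arrays.asList(1, 2, 3, 4, 5);

        IO_CALLS.set(0);
        for (int id : ids) fetchOne(id);
        System.out.println("single-call I/O count: " + IO_CALLS.get()); // 5

        IO_CALLS.set(0);
        fetchBatch(ids);
        System.out.println("batched I/O count: " + IO_CALLS.get());    // 1
    }
}
```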

2. Pros and Cons of Request Merging

Pros: By waiting for a configurable time or request count, multiple requests are combined into one, reducing I/O overhead.

Cons: Waiting to merge introduces extra latency, so request merging is unsuitable for operations that require an immediate response.

3. Technical Implementation of Request Merging

The implementation uses a scheduled thread pool (ScheduledExecutorService) together with an in-memory queue (LinkedBlockingDeque) to collect requests and trigger batch processing either by count or by time.

The principle is to cache user requests; when the cache reaches a configured size or the scheduled task fires, the cached requests are merged and a batch interface is invoked.

Dependencies: only JDK is required, no third‑party libraries.

Core batch‑collapser utility class definition:

package com.leilei.support;

import lombok.extern.log4j.Log4j2;
import java.util.*;
import java.util.concurrent.*;

/**
 * @author lei
 * @desc Request merging utility class
 */
@Log4j2
public class BatchCollapser<T, R> {
    private static final Map<Class<?>, BatchCollapser<?, ?>> BATCH_INSTANCE = new ConcurrentHashMap<>();
    private static final ScheduledExecutorService SCHEDULE_EXECUTOR = Executors.newScheduledThreadPool(1);
    private final LinkedBlockingDeque<T> batchContainer = new LinkedBlockingDeque<>();
    private final BatchHandler<List<T>, R> handler;
    private final int countThreshold;

    private BatchCollapser(BatchHandler<List<T>, R> handler, int countThreshold, long timeThreshold) {
        this.handler = handler;
        this.countThreshold = countThreshold;
        // Time trigger: flush whatever has accumulated every timeThreshold seconds
        SCHEDULE_EXECUTOR.scheduleAtFixedRate(() -> {
            try {
                popUpAndHandler(BatchHandlerType.BATCH_HANDLER_TYPE_TIME);
            } catch (Exception e) {
                log.error("pop-up container exception", e);
            }
        }, timeThreshold, timeThreshold, TimeUnit.SECONDS);
    }

    public void addRequestParam(T event) {
        batchContainer.add(event);
        // Count trigger: flush as soon as countThreshold requests are buffered
        if (batchContainer.size() >= countThreshold) {
            popUpAndHandler(BatchHandlerType.BATCH_HANDLER_TYPE_DATA);
        }
    }

    private void popUpAndHandler(BatchHandlerType handlerType) {
        List<T> tryHandlerList = Collections.synchronizedList(new ArrayList<>(countThreshold));
        // drainTo atomically moves up to countThreshold buffered requests
        batchContainer.drainTo(tryHandlerList, countThreshold);
        if (tryHandlerList.isEmpty()) {
            return;
        }
        try {
            R handle = handler.handle(tryHandlerList, handlerType);
            log.info("Batch execution result:{}", handle);
        } catch (Exception e) {
            log.error("batch execute error, transferList:{}", tryHandlerList, e);
        }
    }

    @SuppressWarnings("unchecked")
    public static <T, R> BatchCollapser<T, R> getInstance(BatchHandler<List<T>, R> batchHandler, int countThreshold, long timeThreshold) {
        // One collapser instance per handler class
        Class<?> jobClass = batchHandler.getClass();
        if (BATCH_INSTANCE.get(jobClass) == null) {
            synchronized (BatchCollapser.class) {
                BATCH_INSTANCE.putIfAbsent(jobClass, new BatchCollapser<>(batchHandler, countThreshold, timeThreshold));
            }
        }
        return (BatchCollapser<T, R>) BATCH_INSTANCE.get(jobClass);
    }

    public interface BatchHandler<T, R> {
        R handle(T input, BatchHandlerType handlerType);
    }

    public enum BatchHandlerType {
        BATCH_HANDLER_TYPE_DATA,
        BATCH_HANDLER_TYPE_TIME,
    }
}
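The heart of the count-threshold path is the check in addRequestParam plus the atomic drainTo in popUpAndHandler. The following JDK-only sketch (no Lombok, no logging) isolates just that logic so the batching behavior can be seen in a few lines:

```java
import java.util.*;
import java.util.concurrent.*;

public class MiniCollapser {
    public static void main(String[] args) {
        // JDK-only sketch of the count-threshold path: every time the buffer
        // reaches countThreshold, drainTo atomically moves that many elements
        // out as one batch, mirroring popUpAndHandler above.
        int countThreshold = 3;
        LinkedBlockingDeque<Integer> container = new LinkedBlockingDeque<>();
        List<List<Integer>> batches = new ArrayList<>();

        for (int i = 1; i <= 7; i++) {
            container.add(i);
            if (container.size() >= countThreshold) {
                List<Integer> batch = new ArrayList<>(countThreshold);
                container.drainTo(batch, countThreshold);
                batches.add(batch);
            }
        }
        System.out.println("batches: " + batches);      // [[1, 2, 3], [4, 5, 6]]
        System.out.println("left over: " + container);  // [7]
    }
}
```

In the real class, the leftover element would be picked up by the scheduled time trigger instead of waiting indefinitely.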

Usage example:

package com.leilei.support;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
import java.util.List;

@Service
public class ProductService implements BatchCollapser.BatchHandler<List<Integer>, Integer> {
    private BatchCollapser<Integer, Integer> batchCollapser;

    @PostConstruct
    private void postConstructorInit() {
        // Merge when 20 requests are accumulated or every 5 seconds
        batchCollapser = BatchCollapser.getInstance(this, 20, 5);
    }

    @Override
    public Integer handle(List<Integer> input, BatchCollapser.BatchHandlerType handlerType) {
        System.out.println("Handler type:" + handlerType + ", received batch params:" + input);
        return input.stream().mapToInt(x -> x).sum();
    }

    // Simulate a request every 300 ms
    @Scheduled(fixedDelay = 300)
    public void simulateRequest() {
        int requestParam = (int) (Math.random() * 100) + 1;
        batchCollapser.addRequestParam(requestParam);
        System.out.println("Current request param:" + requestParam);
    }
}

Simple data class used in the demo:

import lombok.Data;

@Data
public class Product {
    private Integer id;
    private String notes;
}
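The demo above batches plain Integers, but the same pattern applies to domain objects like Product, e.g. collapsing many single-row saves into one batch insert. Here is a JDK-only sketch (the `BatchHandler` shape mirrors the article's interface; the batch "insert" is a hypothetical stand-in for a real DAO call):

```java
import java.util.*;

public class ProductBatchDemo {
    // Plain-JDK stand-in for the demo Product class (no Lombok @Data)
    static class Product {
        final Integer id;
        final String notes;
        Product(Integer id, String notes) { this.id = id; this.notes = notes; }
    }

    // Same shape as BatchCollapser.BatchHandler: a whole batch in, one result out
    interface BatchHandler<T, R> { R handle(T input); }

    public static void main(String[] args) {
        // Hypothetical handler: one batch "insert" for all buffered products,
        // returning the number of rows written
        BatchHandler<List<Product>, Integer> insertHandler = products -> products.size();

        List<Product> buffered = Arrays.asList(
                new Product(1, "first"),
                new Product(2, "second"),
                new Product(3, "third"));
        System.out.println("rows inserted: " + insertHandler.handle(buffered)); // rows inserted: 3
    }
}
```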

The utilities above are only a demonstration; adapt and extend them to your own performance requirements and latency constraints.

Source: blog.csdn.net/leilei1366615/article/details/123858619

Tags: backend, Java, concurrency, batch processing, request merging
Written by Java Architect Essentials

Committed to sharing quality articles and tutorials to help Java programmers progress from junior to mid-level to senior architect. We curate high-quality learning resources, interview questions, videos, and projects from across the internet to help you systematically improve your Java architecture skills. Follow and reply '1024' to get Java programming resources. Learn together, grow together.
