
Request Collapsing Techniques: Hystrix Collapser, Custom BatchCollapser, and ConcurrentHashMultiset

This article compares three request‑collapsing techniques: Hystrix Collapser, a custom BatchCollapser, and Guava’s ConcurrentHashMultiset. It details their designs, implementations, configurations, and suitable scenarios for reducing downstream load and improving system throughput, with code examples, timer‑based batching, and thread‑safe container usage.


Combining similar or duplicate requests upstream before sending them downstream can greatly reduce downstream load and improve overall system throughput. This article introduces three request‑collapsing techniques—Hystrix Collapser, a custom BatchCollapser, and Guava’s ConcurrentHashMultiset—and compares their implementations and applicable scenarios.

Hystrix Collapser

Hystrix is a Netflix open‑source latency and fault‑tolerance library that provides circuit‑breaker functionality and also includes a request collapser. With the hystrix‑javanica module, developers annotate methods with @HystrixCollapser and @HystrixCommand to enable request merging. The collapser requires a batch method that returns a java.util.List and a single method that returns a java.util.concurrent.Future. The single method can accept only one argument; multiple arguments must be wrapped in a custom parameter object.

Configuration example (bean definition for the Hystrix aspect):
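With hystrix‑javanica on the classpath, the aspect is typically registered as a Spring bean. The following is a sketch using Spring Java config (the class and annotation names come from Spring and hystrix‑javanica; the article's original snippet may have used XML instead):

```java
import com.netflix.hystrix.contrib.javanica.aop.aspectj.HystrixCommandAspect;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

@Configuration
@EnableAspectJAutoProxy
public class HystrixConfig {

    // The aspect intercepts @HystrixCollapser / @HystrixCommand annotated methods.
    @Bean
    public HystrixCommandAspect hystrixCommandAspect() {
        return new HystrixCommandAspect();
    }
}
```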

Key points of the collapser implementation:

Add @HystrixCollapser on the method to be merged and @HystrixCommand on the batch method.

The single method must accept exactly one argument; the batch method receives a java.util.List of those arguments.

The single method returns a Future and the batch method a List; the batch result must contain one element per collapsed request, in order.

Simple example:

public class HystrixCollapserSample {
    @HystrixCollapser(batchMethod = "batch")
    public Future<Boolean> single(String input) {
        return null; // never executed; Hystrix routes collapsed calls to batch()
    }

    @HystrixCommand
    public List<Boolean> batch(List<String> inputs) {
        return inputs.stream().map(it -> Boolean.TRUE).collect(Collectors.toList());
    }
}

Configuration details (annotation parameters): collapserKey, batchMethod, scope (REQUEST or GLOBAL), and collapserProperties such as maxRequestsInBatch, timerDelayInMilliseconds, and requestCache.enabled. Example full annotation:

@HystrixCollapser(
    batchMethod = "batch",
    collapserKey = "single",
    scope = com.netflix.hystrix.HystrixCollapser.Scope.GLOBAL,
    collapserProperties = {
        @HystrixProperty(name = "maxRequestsInBatch", value = "100"),
        @HystrixProperty(name = "timerDelayInMilliseconds", value = "1000"),
        @HystrixProperty(name = "requestCache.enabled", value = "true")
    })

BatchCollapser (Custom Implementation)

For scenarios where the result of each request is not needed, a lightweight custom collapser can be built. It stores incoming requests in a thread‑safe container (e.g., LinkedBlockingDeque) and triggers batch execution either when a size threshold is reached or after a fixed time interval via a scheduled timer thread.

Key design points:

Container must allow duplicate elements and preserve order.

Container must support atomic bulk removal without explicit locking.

The implementation uses a LinkedBlockingDeque and a ScheduledExecutorService to periodically drain the queue and invoke a user‑provided Handler.

public class BatchCollapser<E> implements InitializingBean {
    private static final Logger logger = LoggerFactory.getLogger(BatchCollapser.class);
    // One collapser instance per Handler class, shared across callers.
    private static final Map<Class<?>, BatchCollapser<?>> instance = Maps.newConcurrentMap();
    private static final ScheduledExecutorService SCHEDULE_EXECUTOR = Executors.newScheduledThreadPool(1);
    private final LinkedBlockingDeque<E> batchContainer = new LinkedBlockingDeque<>();
    // Handler is the user-provided callback, e.g. interface Handler<T, R> { R handle(T param); }
    private final Handler<List<E>, Boolean> cleaner;
    private final long interval;
    private final int threshold;

    private BatchCollapser(Handler<List<E>, Boolean> cleaner, int threshold, long interval) {
        this.cleaner = cleaner;
        this.threshold = threshold;
        this.interval = interval;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        // Time-based trigger: drain the container every `interval` milliseconds.
        SCHEDULE_EXECUTOR.scheduleAtFixedRate(() -> {
            try {
                clean();
            } catch (Exception e) {
                logger.error("clean container exception", e);
            }
        }, 0, interval, TimeUnit.MILLISECONDS);
    }

    public void submit(E event) {
        batchContainer.add(event);
        // Size-based trigger: drain as soon as the threshold is reached.
        if (batchContainer.size() >= threshold) {
            clean();
        }
    }

    private void clean() {
        List<E> transferList = Lists.newArrayListWithExpectedSize(threshold);
        // drainTo removes elements atomically, without explicit locking.
        batchContainer.drainTo(transferList, threshold);
        if (CollectionUtils.isEmpty(transferList)) {
            return;
        }
        try {
            cleaner.handle(transferList);
        } catch (Exception e) {
            logger.error("batch execute error, transferList:{}", transferList, e);
        }
    }

    @SuppressWarnings("unchecked")
    public static <E> BatchCollapser<E> getInstance(Handler<List<E>, Boolean> cleaner, int threshold, long interval) {
        Class<?> jobClass = cleaner.getClass();
        if (instance.get(jobClass) == null) {
            synchronized (BatchCollapser.class) {
                if (instance.get(jobClass) == null) {
                    instance.put(jobClass, new BatchCollapser<>(cleaner, threshold, interval));
                }
            }
        }
        return (BatchCollapser<E>) instance.get(jobClass);
    }
}
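The core of the design, atomic bulk removal from a thread‑safe deque, can be demonstrated with a minimal, JDK‑only sketch (the `drainBatch` name and `DrainDemo` class are illustrative, not from the article):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.function.Consumer;

public class DrainDemo {
    // Drain up to `max` elements atomically and hand them to the batch handler.
    static <E> List<E> drainBatch(LinkedBlockingDeque<E> queue, int max, Consumer<List<E>> handler) {
        List<E> batch = new ArrayList<>(max);
        queue.drainTo(batch, max); // atomic bulk removal, no explicit lock needed
        if (!batch.isEmpty()) {
            handler.accept(batch);
        }
        return batch;
    }

    public static void main(String[] args) {
        LinkedBlockingDeque<String> queue = new LinkedBlockingDeque<>();
        for (int i = 0; i < 5; i++) {
            queue.add("req-" + i);
        }
        // Drains req-0..req-2 in FIFO order; req-3 and req-4 stay queued.
        drainBatch(queue, 3, batch -> System.out.println("flushing " + batch));
        System.out.println(queue.size()); // prints 2
    }
}
```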

ConcurrentHashMultiset

Guava’s ConcurrentHashMultiset provides a thread‑safe multiset where each element maintains a count. It is ideal for high‑frequency duplicate‑counting scenarios, such as aggregating statistics before persisting to a database.

Typical usage (assuming a ConcurrentHashMultiset<Request> instance named batchContainer) merges requests by counting occurrences and then constructing a batch list:

if (batchContainer.isEmpty()) {
    return;
}
List<Request> transferList = Lists.newArrayList();
batchContainer.elementSet().forEach(request -> {
    int count = batchContainer.count(request);
    if (count <= 0) {
        return;
    }
    // Collapse duplicates: one merged request carries the accumulated increment.
    transferList.add(count == 1 ? request : new Request(request.getIncrement() * count));
    batchContainer.remove(request, count);
});
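For environments without Guava, the same count‑then‑merge pattern can be sketched with only the JDK, using a ConcurrentHashMap of LongAdder counters (the `CountingMerge`, `submit`, and `drain` names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class CountingMerge {
    // Submit side: each duplicate request just bumps a per-key counter.
    static final ConcurrentHashMap<String, LongAdder> counts = new ConcurrentHashMap<>();

    static void submit(String key) {
        counts.computeIfAbsent(key, k -> new LongAdder()).increment();
    }

    // Drain side: collapse the counters into one merged entry per key.
    static List<String> drain() {
        List<String> merged = new ArrayList<>();
        counts.forEach((key, adder) -> {
            long n = adder.sumThenReset(); // atomically read and zero the counter
            if (n > 0) {
                merged.add(key + " x" + n);
            }
        });
        return merged;
    }
}
```

Like ConcurrentHashMultiset, this keeps the hot path to a single counter increment and defers merging to the periodic drain.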

Summary of Scenarios

Hystrix Collapser: when each request’s result is required and the extra latency from the timer is acceptable.

Custom BatchCollapser: when results are not needed and merging can be triggered by time or count thresholds.

ConcurrentHashMultiset: for high‑duplicate‑rate statistical aggregation before downstream processing.

These techniques can also be combined—for example, using ConcurrentHashMultiset as the container inside a custom BatchCollapser—to leverage both time‑based batching and duplicate counting advantages.

Tags: batch processing, Guava, Java concurrency, Hystrix, request collapsing
Written by IT Architects Alliance

Discussion and exchange on system, internet, large‑scale distributed, high‑availability, and high‑performance architectures, as well as big data, machine learning, AI, and architecture adjustments with internet technologies. Includes real‑world large‑scale architecture case studies. Open to architects who have ideas and enjoy sharing.
