
Optimizing Large-Scale API Parameter Combination Testing with Concurrency and QPS Control

This article describes how to efficiently test billions of API parameter combinations by replacing naive nested loops with a queue‑based concurrent approach, dynamically controlling QPS, and addressing memory‑pressure issues using thread‑safe data structures.


The earliest article on interface testing highlighted its execution-speed advantage, often many times faster than UI-level testing; this article presents an upgraded application of that advantage.

Although the target API has only a few parameters, each parameter has a large range, up to about 500 enumerated values for the largest and around 20 for the smallest, so a full combinatorial sweep runs into billions of cases.
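To see where that scale comes from, multiply the ranges. Assuming roughly 500 values for the largest parameter, 20 for the smallest, and three numeric parameters that each iterate from 2 to 99 as in the loops below (the exact figures are illustrative), the product is already in the billions:

```java
public class CombinationCount {
    // Assumed ranges from the article: ~500 type values, ~20 id values,
    // and three counters a, b, c each iterating 2..99 (98 values apiece).
    public static long totalCombinations() {
        long types = 500;
        long ids = 20;
        long range = 99 - 2 + 1; // 98
        return types * ids * range * range * range;
    }

    public static void main(String[] args) {
        System.out.println(totalCombinations()); // 9411920000, roughly 9.4 billion
    }
}
```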

The requirement was to iterate over every possible parameter combination for testing, which quickly led to many pitfalls.

Initial Version

The first idea was to nest multiple loops and fire requests concurrently, which is straightforward to implement:

@Log4j2
class TT extends MonitorRT {
    static void main(String[] args) {
        ["types parameter set"].each { // placeholder for the real list of type values
            def type = it
            ["id type set"].each { // placeholder for the real list of id values
                def id = it
                2.upto(99) {
                    def a = it
                    2.upto(99) {
                        def b = it
                        2.upto(99) {
                            def c = it
                            def params = new JSONObject()
                            params.id = id
                            params.endTime = 0
                            params.type = type
                            params.paramMap = parse("{\"a\":\"$a\",\"b\":\"$b\",\"c\":\"$c\"}") // double quotes so $a/$b/$c interpolate
                            fun {
                                getHttpResponse(getHttpGet(url, params))
                            }
                        }
                    }
                }
            }
        }
    }
}

This approach quickly shows two major drawbacks:

1. The sheer number of tasks overwhelms the thread pool, causing rejections.

2. QPS and concurrency cannot be controlled.

To address the first issue, I increased the length of the asynchronous thread‑pool waiting queue, but that introduced a new problem: excessive memory pressure and frequent GC spikes.
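The rejection behavior is easy to reproduce with a plain `ThreadPoolExecutor` standing in for the framework's async pool (the pool size, queue capacity, and task duration below are illustrative): once the bounded work queue fills, further submissions throw `RejectedExecutionException`.

```java
import java.util.concurrent.*;

public class RejectionDemo {
    public static int countRejections(int tasks) throws InterruptedException {
        // Small pool with a tiny bounded queue: submissions beyond
        // poolSize + queueCapacity are rejected while workers are busy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(10));
        int rejected = 0;
        for (int i = 0; i < tasks; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(50); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                rejected++; // queue full, task dropped
            }
        }
        pool.shutdownNow();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        // Submitting far more tasks than the pool can hold rejects most of them.
        System.out.println(countRejections(1000) > 0);
    }
}
```

Enlarging the queue hides the rejections but, as noted below, only trades them for memory pressure.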

Upgraded Version

For the second issue, I went back to the performance-testing framework and its dynamic QPS adjustment. The idea is to first enumerate all parameter combinations into a List, then replay the List while controlling QPS through a dynamic load model.

static void main(String[] args) {
    def list = []
    ["types parameter set"].each { // placeholder for the real list of type values
        def type = it
        ["id type set"].each { // placeholder for the real list of id values
            def id = it
            2.upto(99) {
                def a = it
                2.upto(99) {
                    def b = it
                    2.upto(99) {
                        def c = it
                        def params = new JSONObject()
                        params.id = id
                        params.endTime = 0
                        params.type = type
                        params.paramMap = parse("{\"a\":\"$a\",\"b\":\"$b\",\"c\":\"$c\"}")
                        list << params // collect every combination for later replay
                    }
                }
            }
        }
    }
    AtomicInteger index = new AtomicInteger()
    def test = {
        def increment = index.getAndIncrement()
        if (increment >= list.size()) FunQpsConcurrent.stop()
        else getHttpResponse(getHttpGet(url, list.get(increment)))
    }
    new FunQpsConcurrent(test, "Sweep 1 billion parameter combinations").start()
}
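`FunQpsConcurrent`'s dynamic load model is framework-specific, but the core idea of pacing calls to a target rate can be sketched with a minimal fixed-QPS pacer (a simplified stand-in, not the framework's implementation; all names here are hypothetical):

```java
public class QpsPacer {
    private final long intervalNanos; // nanoseconds between request slots
    private long nextSlot;            // earliest time the next call may proceed

    public QpsPacer(int qps) {
        intervalNanos = 1_000_000_000L / qps;
        nextSlot = System.nanoTime();
    }

    /** Blocks the caller until its slot arrives, pacing calls to roughly qps. */
    public synchronized void acquire() throws InterruptedException {
        long now = System.nanoTime();
        if (nextSlot > now) {
            long wait = nextSlot - now;
            Thread.sleep(wait / 1_000_000, (int) (wait % 1_000_000));
        }
        nextSlot = Math.max(nextSlot, System.nanoTime()) + intervalNanos;
    }

    public static void main(String[] args) throws InterruptedException {
        QpsPacer pacer = new QpsPacer(100); // target ~100 QPS
        long start = System.nanoTime();
        for (int i = 0; i < 50; i++) {
            pacer.acquire();
            // a real caller would fire the HTTP request here
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("50 paced calls took " + elapsedMs + " ms"); // roughly 500 ms at 100 QPS
    }
}
```

A dynamic model would additionally adjust `intervalNanos` at runtime based on observed latency or error rates, which is what the framework's load model handles.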

Running this version drove the CPU up sharply: holding the entire parameter list in memory produced heavy GC pressure. To solve the memory issue, I borrowed the approach from the article “10 Billion Log Replay Chronicle Performance Test”.

Final Version

The final solution combines a thread-safe queue, java.util.concurrent.LinkedBlockingQueue, with asynchronous request generation and a bounded wait on the consumer side. Whenever the queue grows past a threshold (100,000 entries here), the producer sleeps for one second, so the queue stays near the threshold while consumers keep draining it.

static void main(String[] args) {
    def ps = new LinkedBlockingQueue()
    fun {
        ["types parameter set"].each { // placeholder for the real list of type values
            def type = it
            ["id type set"].each { // placeholder for the real list of id values
                def id = it
                2.upto(99) {
                    def a = it
                    2.upto(99) {
                        def b = it
                        2.upto(99) {
                            def c = it
                            def params = new JSONObject()
                            params.id = id
                            params.endTime = 0
                            params.type = type
                            params.paramMap = parse("{\"a\":\"$a\",\"b\":\"$b\",\"c\":\"$c\"}")
                            if (ps.size() > 100_000) sleep(1000) // pause 1 s whenever the queue backs up
                            ps.put(params)
                        }
                    }
                }
            }
        }
    }
    AtomicInteger index = new AtomicInteger()
    def test = {
        def params = ps.poll(100, TimeUnit.MILLISECONDS)
        if (params == null) FunQpsConcurrent.stop()
        else getHttpResponse(getHttpGet(url, params))
    }
    new FunQpsConcurrent(test, "Sweep 1 billion parameter combinations").start()
}
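One design alternative worth noting: LinkedBlockingQueue also accepts a capacity, and a bounded queue makes put() block automatically when full, which replaces the manual size check and one-second sleep with built-in backpressure. A small sketch of that variant, with the parameter objects simplified to strings:

```java
import java.util.concurrent.*;

public class BoundedQueueDemo {
    public static int runDemo() throws InterruptedException {
        // Capacity-bounded queue: put() blocks the producer when full,
        // so no explicit size check or sleep is needed.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(100_000);

        Thread producer = new Thread(() -> {
            try {
                for (int a = 2; a <= 99; a++)
                    for (int b = 2; b <= 99; b++)
                        queue.put("a=" + a + "&b=" + b); // blocks instead of sleeping
            } catch (InterruptedException ignored) {
            }
        });
        producer.start();

        int consumed = 0;
        while (true) {
            // Same bounded-wait consumption as the final version above.
            String params = queue.poll(100, TimeUnit.MILLISECONDS);
            if (params == null) break; // queue drained and producer finished
            consumed++;                // a real consumer would fire the request here
        }
        producer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo()); // 9604 combinations (98 * 98)
    }
}
```

The manual-sleep version in the article achieves the same effect while keeping the queue itself unbounded; the capacity-based variant simply pushes the throttling into the queue.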

Having applied the queue here, I also plan to implement a 10-billion-level log replay feature and compare its performance against Chronicle in the future.

-- By FunTester