
Understanding Object Pooling in Java: Commons Pool 2, HikariCP, and Performance Benchmarks

This article explains object pooling in Java, introduces the Commons Pool 2 library and its use inside Redis's Jedis client, compares it with HikariCP, the high‑performance database connection pool, and presents JMH benchmark results demonstrating significant throughput gains. It also covers configuration parameters and common interview questions.


In typical Java code we often need to keep expensive objects such as thread resources, database connections, or TCP sockets for reuse; creating and destroying these objects repeatedly consumes considerable system resources and leads to performance loss.
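The mechanics all of these pools share can be sketched with plain JDK types. The SimplePool class below is a hypothetical illustration of the pattern, not any particular library's API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal pool sketch: hand out idle instances, create on demand.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;
    private final Supplier<T> factory;

    public SimplePool(int capacity, Supplier<T> factory) {
        this.idle = new ArrayBlockingQueue<>(capacity);
        this.factory = factory;
    }

    // Reuse an idle object if one exists; otherwise pay the creation cost once.
    public T borrow() {
        T obj = idle.poll();
        return (obj != null) ? obj : factory.get();
    }

    // Make the object available for the next caller; drop it if the pool is full.
    public void release(T obj) {
        idle.offer(obj);
    }
}
```

Borrowing prefers an existing idle instance and only falls back to creation on a miss, which is exactly the saving the libraries below industrialise.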

Object pooling solves this by maintaining a pool of ready‑to‑use instances, allowing fast acquisition when needed. The article first introduces the widely used Commons Pool 2 library, which can be added via Maven:

<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-pool2 -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.11.1</version>
</dependency>

The core class GenericObjectPool creates a pool by receiving a PooledObjectFactory and a configuration object:

public GenericObjectPool(final PooledObjectFactory<T> factory, final GenericObjectPoolConfig<T> config)

As a concrete example, the Redis client Jedis uses Commons Pool 2. The factory’s makeObject method wraps a newly created Jedis instance in a DefaultPooledObject:

@Override
public PooledObject<Jedis> makeObject() throws Exception {
    Jedis jedis = null;
    try {
        jedis = new Jedis(jedisSocketFactory, clientConfig);
        jedis.connect();
        return new DefaultPooledObject<>(jedis);
    } catch (JedisException je) {
        if (jedis != null) {
            try { jedis.quit(); } catch (RuntimeException e) { logger.warn("Error while QUIT", e); }
            try { jedis.close(); } catch (RuntimeException e) { logger.warn("Error while close", e); }
        }
        throw je;
    }
}

When a client calls borrowObject, the pool first tries to poll an idle object from a LinkedBlockingDeque. If none is available, the factory creates a new instance; the method’s simplified logic is shown below:

public T borrowObject(final Duration borrowMaxWaitDuration) throws Exception {
    PooledObject<T> p = null;
    boolean create;
    // omitted lines
    while (p == null) {
        create = false;
        // first try to take an idle object from the deque
        p = idleObjects.pollFirst();
        if (p == null) {
            // none idle: ask the factory to create one
            p = create();
            if (p != null) { create = true; }
        }
        // omitted lines
    }
    // omitted lines
    return p.getObject();
}

The pool’s behaviour is governed by many configuration fields in GenericObjectPoolConfig, such as maxTotal, maxIdle, minIdle, maxWaitMillis, testOnBorrow, and testWhileIdle. Understanding these parameters is essential for tuning performance.
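As a sketch of how these fields are typically set (assuming commons‑pool2 2.9+, where the Duration‑based setMaxWait supersedes setMaxWaitMillis; the values are illustrative, not recommendations from the article):

```java
import java.time.Duration;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import redis.clients.jedis.Jedis;

GenericObjectPoolConfig<Jedis> config = new GenericObjectPoolConfig<>();
config.setMaxTotal(20);                    // upper bound on objects the pool manages
config.setMaxIdle(10);                     // idle objects beyond this are destroyed
config.setMinIdle(2);                      // the evictor keeps at least this many warm
config.setMaxWait(Duration.ofMillis(800)); // how long borrowObject may block
config.setTestOnBorrow(true);              // validate an object before handing it out
config.setTestWhileIdle(true);             // validate objects during idle eviction runs
```

The config object is then passed to the GenericObjectPool constructor alongside the factory, as in the signature shown above.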

Interview‑style questions often focus on the timeout setting (maxWaitMillis). A typical recommendation is to set it to the maximum latency the service can tolerate: for a call that normally completes in about 10 ms, something in the range of 500–1000 ms is reasonable.

To quantify the benefit of pooling, a JMH benchmark compares using a pooled Jedis instance versus creating a new one each time. The benchmark code is:

import java.util.UUID;

import org.openjdk.jmh.annotations.*;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

@Fork(2)
@State(Scope.Benchmark)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 5, time = 1)
@BenchmarkMode(Mode.Throughput)
public class JedisPoolVSJedisBenchmark {
    JedisPool pool = new JedisPool("localhost", 6379);

    @Benchmark
    public void testPool() {
        Jedis jedis = pool.getResource();
        jedis.set("a", UUID.randomUUID().toString());
        jedis.close(); // on a pooled resource, close() returns the connection to the pool
    }

    @Benchmark
    public void testJedis() {
        Jedis jedis = new Jedis("localhost", 6379);
        jedis.set("a", UUID.randomUUID().toString());
        jedis.close(); // here close() actually tears down the TCP connection
    }
}

The results, plotted with meta-chart, show that the pooled version achieves roughly five times the throughput of the non‑pooled version.

The article then shifts to the popular database connection pool HikariCP, the default in Spring Boot. Its performance advantages stem from three main techniques:

Replacing ArrayList with FastList to reduce bounds checks.

Byte‑code optimisation via Javassist , using invokestatic instead of invokevirtual .

Implementing a lock‑free ConcurrentBag to minimise contention.
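The FastList idea can be illustrated with a hypothetical, JDK‑only sketch (not HikariCP's actual class): because a pool only ever removes elements it previously added, get can trust its index, and removal can scan from the tail, where the most recently borrowed connection usually sits.

```java
import java.util.Arrays;

// Hypothetical sketch of the FastList idea: trust the caller,
// skip explicit range checks, and remove by scanning from the tail.
public class TailScanList<T> {
    private Object[] elements = new Object[16];
    private int size;

    public void add(T item) {
        if (size == elements.length) {
            elements = Arrays.copyOf(elements, size * 2);
        }
        elements[size++] = item;
    }

    @SuppressWarnings("unchecked")
    public T get(int index) {
        return (T) elements[index]; // no explicit bounds check, unlike ArrayList
    }

    // The most recently borrowed object is usually released first,
    // so a backwards scan finds it almost immediately.
    public boolean remove(Object item) {
        for (int i = size - 1; i >= 0; i--) {
            if (elements[i] == item) { // identity comparison, not equals()
                System.arraycopy(elements, i + 1, elements, i, size - i - 1);
                elements[--size] = null;
                return true;
            }
        }
        return false;
    }

    public int size() { return size; }
}
```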

Configuration advice includes setting maximumPoolSize to a realistic value (typically 20–50 for most databases) and leaving minimumIdle at its default, which equals maximumPoolSize and makes the pool fixed‑size. Note that HikariCP deliberately omits testOnBorrow/testWhileIdle switches: it validates connections internally, so idle‑connection housekeeping is tuned through idleTimeout and keepaliveTime instead.
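A minimal setup reflecting that advice might look as follows; this is a sketch assuming HikariCP 4.x (for setKeepaliveTime) and a hypothetical PostgreSQL JDBC URL:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost:5432/app"); // hypothetical URL
config.setMaximumPoolSize(30);    // realistic cap; also becomes the minimumIdle default
config.setConnectionTimeout(800); // ms to wait for a connection before failing
config.setIdleTimeout(600_000);   // ms before an idle connection is retired
config.setKeepaliveTime(300_000); // ms between keep-alive probes on idle connections
HikariDataSource dataSource = new HikariDataSource(config);
```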

The concept of a “result cache pool” is introduced, highlighting the similarity between caching and pooling: both store processed objects in a fast‑access area to avoid repeated expensive creation.
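The parallel can be made concrete with a small JDK‑only sketch of a result cache: each expensive computation runs at most once and is then served from a fast‑access map, just as a pool hands back expensively created objects.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical result cache: compute each value once, then serve it from the map.
public class ResultCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> compute;

    public ResultCache(Function<K, V> compute) {
        this.compute = compute;
    }

    public V get(K key) {
        // computeIfAbsent runs the expensive function only on a cache miss
        return cache.computeIfAbsent(key, compute);
    }
}
```

The difference from a pool is that cached results are shared and never "returned", while pooled objects are exclusively borrowed and must be handed back.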

Finally, the article summarises the key points: use pooling when object creation is costly, when objects can be reset and reused, and tune pool size, timeout, and eviction settings to achieve optimal performance. It also encourages readers to apply similar thinking to HTTP connection pools, RPC pools, and thread pools.

Tags: Java, Performance, HikariCP, Object Pooling, JMH, Commons Pool, database-connection
Written by Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
