Optimizing Apache HttpClient for High-Concurrency Scenarios
This article details practical optimization techniques for Apache HttpClient—including connection pooling, keep-alive, singleton client usage, proper timeout settings, and asynchronous handling—to reduce average request latency from 250 ms to about 80 ms in a ten-million-calls-per-day service.
1. Background
One of our services calls an HTTP API provided by another department, with daily call volume in the tens of millions. With the original HttpClient usage, the average execution time was 250 ms; after optimization it dropped to 80 ms, and the thread-exhaustion alarms stopped.
2. Analysis
The original implementation created a new HttpClient and a new HttpPost for each request, then explicitly closed the response and the client. The following problems were identified:
2.1 Repeated creation of HttpClient
HttpClient is thread‑safe; creating a new instance per request adds unnecessary overhead. A single shared instance should be used.
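A minimal sketch of the shared-instance approach, assuming Apache HttpClient 4.3+ on the classpath; the class name and pool sizes are illustrative:

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Share one thread-safe client across the whole application instead of
// creating a new instance per request. The holder-class idiom gives lazy,
// thread-safe initialization without explicit locking.
public final class HttpClientHolder {

    private HttpClientHolder() {}

    private static class Holder {
        // Pool sizes here are examples; tune them for your traffic.
        static final CloseableHttpClient INSTANCE = HttpClients.custom()
                .setMaxConnTotal(500)
                .setMaxConnPerRoute(50)
                .build();
    }

    public static CloseableHttpClient getInstance() {
        return Holder.INSTANCE;
    }
}
```

Every caller obtains the same client via HttpClientHolder.getInstance(), so the connection pool is shared as well.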
2.2 Repeated TCP connection establishment
Each request performed a full TCP handshake and teardown, consuming several milliseconds. Using keep‑alive to reuse connections dramatically reduces this cost.
2.3 Redundant entity buffering
The code copied the response entity into a String while the original HttpResponse still held the content, doubling memory usage and requiring explicit connection closure.

HttpEntity entity = httpResponse.getEntity();
String response = EntityUtils.toString(entity);

3. Implementation
Three main actions were taken: a singleton HttpClient, a connection‑pool manager with keep‑alive, and a better response handling strategy.
3.1 Define a keep‑alive strategy
A custom ConnectionKeepAliveStrategy reads the "timeout" parameter from the Keep-Alive response header; if it is absent, a default of 60 seconds is used.
ConnectionKeepAliveStrategy myStrategy = new ConnectionKeepAliveStrategy() {
    @Override
    public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
        HeaderElementIterator it = new BasicHeaderElementIterator(
                response.headerIterator(HTTP.CONN_KEEP_ALIVE));
        while (it.hasNext()) {
            HeaderElement he = it.nextElement();
            String param = he.getName();
            String value = he.getValue();
            if (value != null && param.equalsIgnoreCase("timeout")) {
                return Long.parseLong(value) * 1000;
            }
        }
        return 60 * 1000; // default 60s
    }
};

3.2 Configure a PoolingHttpClientConnectionManager
Set maximum total connections and per‑route limits according to business needs.
PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setMaxTotal(500);
connectionManager.setDefaultMaxPerRoute(50); // example values

3.3 Build the HttpClient
httpClient = HttpClients.custom()
        .setConnectionManager(connectionManager)
        .setKeepAliveStrategy(myStrategy)
        .setDefaultRequestConfig(RequestConfig.custom()
                .setStaleConnectionCheckEnabled(true).build())
        .build();

Note: setStaleConnectionCheckEnabled is deprecated; instead, a dedicated thread should periodically invoke closeExpiredConnections() and closeIdleConnections().
public static class IdleConnectionMonitorThread extends Thread {

    private final HttpClientConnectionManager connMgr;
    private volatile boolean shutdown;

    public IdleConnectionMonitorThread(HttpClientConnectionManager connMgr) {
        super();
        this.connMgr = connMgr;
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5000);
                    connMgr.closeExpiredConnections();
                    connMgr.closeIdleConnections(30, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException ex) {
            // terminate
        }
    }

    public void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }
}

3.4 Reduce overhead when executing methods
Do not close the client or the connection manually after each call; let the client return connections to the pool. Use a ResponseHandler so the entity is consumed automatically; the library's execute overload does this in a finally block.
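A caller-side sketch, assuming Apache HttpClient 4.5 on the classpath; the class name, the httpClient variable, and the commented-out URL are illustrative:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.http.HttpResponse;
import org.apache.http.client.ResponseHandler;
import org.apache.http.util.EntityUtils;

public class ResponseHandlerExample {

    // Treat 2xx as success and convert the entity to a String; any other
    // status becomes an IOException. When the handler returns, the client
    // consumes any remaining entity and releases the connection to the pool.
    static final ResponseHandler<String> BODY_HANDLER = (HttpResponse response) -> {
        int status = response.getStatusLine().getStatusCode();
        if (status >= 200 && status < 300) {
            return EntityUtils.toString(response.getEntity(), StandardCharsets.UTF_8);
        }
        throw new IOException("Unexpected status: " + status);
    };

    // Caller side, using the shared singleton client:
    // String body = httpClient.execute(new HttpGet("http://example.com/api"), BODY_HANDLER);
}
```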
public <T> T execute(final HttpHost target, final HttpRequest request,
        final ResponseHandler<? extends T> responseHandler, final HttpContext context)
        throws IOException, ClientProtocolException {
    Args.notNull(responseHandler, "Response handler");
    final HttpResponse response = execute(target, request, context);
    try {
        return responseHandler.handleResponse(response);
    } finally {
        final HttpEntity entity = response.getEntity();
        EntityUtils.consume(entity);
    }
}

4. Additional Settings
4.1 Timeout configuration
Configure connection timeout, socket timeout, and connection‑manager timeout, and optionally disable retries.
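The snippet below uses the legacy HttpParams API from HttpClient 4.2 and earlier. For reference, a sketch of the 4.3+ equivalent using RequestConfig, with the same values; the class name is illustrative:

```java
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.DefaultHttpRequestRetryHandler;
import org.apache.http.impl.client.HttpClients;

public class TimeoutConfigExample {

    // 2 s to establish a connection, 2 s socket read timeout,
    // 500 ms to lease a connection from the pool.
    static final RequestConfig REQUEST_CONFIG = RequestConfig.custom()
            .setConnectTimeout(2000)
            .setSocketTimeout(2000)
            .setConnectionRequestTimeout(500)
            .build();

    static CloseableHttpClient newClient() {
        return HttpClients.custom()
                .setDefaultRequestConfig(REQUEST_CONFIG)
                // Disable automatic retries, as in the legacy example.
                .setRetryHandler(new DefaultHttpRequestRetryHandler(0, false))
                .build();
    }
}
```

The connection-request timeout is the most important one under high concurrency: it bounds how long a thread waits for a pooled connection instead of blocking indefinitely.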
HttpParams params = new BasicHttpParams();
int CONNECTION_TIMEOUT = 2 * 1000;   // connect timeout: 2 s
int SO_TIMEOUT = 2 * 1000;           // socket read timeout: 2 s
long CONN_MANAGER_TIMEOUT = 500L;    // time to lease a pooled connection: 500 ms
params.setIntParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, CONNECTION_TIMEOUT);
params.setIntParameter(CoreConnectionPNames.SO_TIMEOUT, SO_TIMEOUT);
params.setLongParameter(ClientPNames.CONN_MANAGER_TIMEOUT, CONN_MANAGER_TIMEOUT);
params.setBooleanParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK, true);
httpClient.setHttpRequestRetryHandler(new DefaultHttpRequestRetryHandler(0, false));

4.2 Nginx keep-alive
If an Nginx reverse proxy is used, configure keepalive_timeout, keepalive_requests, and the upstream keepalive directive to match the client settings.
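A sketch of such an Nginx configuration; the upstream name, address, and values are illustrative and should be matched to your client-side keep-alive settings:

```nginx
upstream backend {
    server 10.0.0.1:8080;
    keepalive 100;               # idle keep-alive connections held to the upstream
}

server {
    keepalive_timeout  65s;      # keep client connections open
    keepalive_requests 1000;     # max requests per client connection

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;  # keep-alive to the upstream requires HTTP/1.1
        proxy_set_header Connection "";
    }
}
```

keepalive_timeout should be no shorter than the client's keep-alive duration (60 s in the strategy above), otherwise Nginx closes connections the client still considers reusable.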
After applying these changes, the average request latency decreased from 250 ms to roughly 80 ms, demonstrating a substantial performance gain.