Resolving HTTP ConnectionPoolTimeoutException and CLOSE_WAIT Issues in Java Applications
This article analyzes a production Java service that repeatedly hit org.apache.http.conn.ConnectionPoolTimeoutException due to exhausted HttpClient connection pools, explains how CLOSE_WAIT sockets are created when responses are not closed, and provides step‑by‑step code fixes and Tomcat configuration tweaks to eliminate the problem.
Author background: Zhang Zhaoyuan joined Qunar in 2018, works on the international flight ticket team, and has experience designing high‑concurrency distributed systems.
Phenomenon: The application started generating alerts, and logs showed a large number of org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool errors. Initial guess pointed to a full HttpClient connection pool.
Investigation: Checking other machines showed no similar pool exhaustion, suggesting the issue was isolated to a single host. Netstat revealed many connections stuck in CLOSE_WAIT on port 8080, exactly matching the HttpClient pool size (200). Each CLOSE_WAIT held one byte of unread data, indicating the client never closed the response stream.
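The per-state counts can be pulled out of netstat with a short pipeline; this is a sketch that assumes Linux `netstat -ant` output (field 4 local address, field 5 remote address, field 6 state), with the 8080 filter matching this service's port:

```shell
# Count TCP connections involving port 8080 by state; on the affected host this
# showed CLOSE_WAIT at exactly the HttpClient pool size (200).
netstat -ant 2>/dev/null | awk '$4 ~ /:8080$/ || $5 ~ /:8080$/ {counts[$6]++} END {for (s in counts) print s, counts[s]}'
```

A pool-sized pile of CLOSE_WAIT entries is the signature of responses that were received but never closed.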
Root cause: The code that performed the HTTP request logged the response but never consumed or closed the CloseableHttpResponse . Consequently, the underlying socket remained in CLOSE_WAIT , filling the pool.
Original problematic code:
public void keep(String url) throws IOException {
    log.info("url {}", url);
    HttpPost httpPost = new HttpPost(url);
    CloseableHttpResponse response = httpClient.execute(httpPost);
    log.info("result:{}", response); // logs the response object; the body is never read
} // BUG: the entity is never consumed and the response is never closed
Fix: read the response entity and explicitly close the response (or use EntityUtils.toString(), which consumes the entity and closes its stream internally).
public void keep(String url) throws IOException {
    log.info("url {}", url);
    HttpPost httpPost = new HttpPost(url);
    CloseableHttpResponse response = httpClient.execute(httpPost);
    try {
        HttpEntity entity = response.getEntity();
        String resBody = EntityUtils.toString(entity); // reads the body and closes the entity stream
        log.info("result:{}", resBody);
    } finally {
        response.close(); // releases the connection back to the pool
    }
}
The helper method used by EntityUtils.toString() also ensures the input stream is closed:
public static String toString(HttpEntity entity, Charset defaultCharset) throws IOException, ParseException {
    Args.notNull(entity, "Entity");
    InputStream instream = entity.getContent();
    if (instream == null) {
        return null;
    }
    try {
        // read the stream and build the result string
        return result;
    } finally {
        instream.close(); // the stream is closed even if reading throws
    }
}
Tomcat configuration insight: Without an explicit keepAliveTimeout, Tomcat falls back to connectionTimeout (20 s by default). Reducing this timeout to 5 s on a test server reproduced the CLOSE_WAIT behavior, confirming that idle connections were being closed by the server while the client kept its end of the socket open.
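In Tomcat, these settings live on the Connector element in server.xml; the values below are illustrative, not the incident's actual configuration:

```xml
<!-- server.xml: if keepAliveTimeout is unset, Tomcat uses connectionTimeout
     for idle keep-alive connections as well. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           keepAliveTimeout="60000"
           maxKeepAliveRequests="100"
           redirectPort="8443" />
```

As a rule of thumb, the client pool's idle-connection eviction interval should be shorter than the server's keepAliveTimeout, so the client closes idle connections before the server does and never holds a half-closed socket.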
Additional observations: Server-side FIN_WAIT2 sockets are reclaimed by the OS after net.ipv4.tcp_fin_timeout (60 s by default on Linux). Tools such as netstat, mpstat, and top help locate the offending process and the CPU spikes caused by busy loops.
Extended reading: The article also discusses why Tomcat may close connections proactively, how to simulate the issue with a simple Spring Boot demo, and how to handle non‑200 responses or exceptions to avoid leaving streams open.
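The consume-then-close discipline can also be exercised end to end with only the JDK, no Apache HttpClient on the classpath. Everything below (the ResponseCloseDemo class, the local /ping endpoint, the "pong" body) is a hypothetical stand-in for the real service, sketching the step the buggy keep() method skipped:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ResponseCloseDemo {

    // Fetches /ping from a throwaway local server, fully reading and then
    // closing the response stream.
    public static String fetch() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/ping", exchange -> {
            byte[] body = "pong".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close(); // server closes its end; an unread client would sit in CLOSE_WAIT
        });
        server.start();
        try {
            URL url = new URL("http://127.0.0.1:" + server.getAddress().getPort() + "/ping");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (InputStream in = conn.getInputStream()) {
                // Drain the body to the end so the connection can be reused or closed cleanly.
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                return new String(buf.toByteArray(), StandardCharsets.UTF_8);
            } finally {
                conn.disconnect();
            }
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetch()); // prints "pong"
    }
}
```

The try-with-resources block guarantees the input stream is closed even when reading throws, mirroring the finally-based close inside EntityUtils.toString().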
Conclusion: By ensuring every HttpClient response is fully consumed and closed, and by tuning Tomcat’s timeout settings, the CLOSE_WAIT buildup disappears, restoring normal connection pool behavior.
Qunar Tech Salon
Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.