
Understanding TCP Full and Half Connection Queues and Their Overflow Handling

This article explains the TCP three‑way handshake, the roles of the full‑connection (accept) and half‑connection (SYN) queues, how to inspect their lengths on Linux, and practical solutions for queue overflow including kernel parameters and socket options.

Yang Money Pot Technology Team

Preface

When using third‑party services, occasional network timeouts may occur; this article investigates the root causes, focusing on TCP full‑connection and half‑connection queues, how to check for overflow, and ways to resolve it.

Three‑Way Handshake Overview

Anyone familiar with basic computer networking knows the three‑way handshake and the existence of a full‑connection queue and a half‑connection queue. The overall process is as follows.

The server first calls bind() to bind an IP and port, then listen() to wait for client connections.

The client calls connect(), sending a SYN to the server and entering the SYN_SENT state.

After receiving the SYN, the server replies with SYN+ACK, placing the new socket (state SYN_RCVD) into the half‑connection queue (SYN QUEUE). Note that the original listening socket remains in LISTEN state.

The client receives the server’s SYN+ACK and replies with an ACK; the server then moves the socket from the half‑connection queue to the full‑connection queue (ACCEPT QUEUE), and both sides reach ESTABLISHED state.

The server calls accept(), removing the socket from the full‑connection queue and handing it to the application layer.
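The sequence of calls above can be sketched as a minimal loopback example in Java (port 0 picks an ephemeral port, and the backlog of 16 is arbitrary): the ServerSocket constructor performs bind() and listen(), the client’s Socket constructor completes the three‑way handshake, and only accept() removes the connection from the full‑connection queue.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeDemo {
    // bind()+listen(), connect(), then accept(): returns true if the
    // queued connection is handed to the application successfully.
    static boolean demo() throws IOException {
        // bind() + listen() with a backlog of 16 on the loopback address
        ServerSocket server = new ServerSocket(0, 16, InetAddress.getLoopbackAddress());

        // connect(): the three-way handshake completes here, before accept() runs
        Socket client = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort());

        // accept(): takes the established connection off the full-connection queue
        Socket accepted = server.accept();
        boolean ok = client.isConnected() && accepted.isConnected();

        accepted.close();
        client.close();
        server.close();
        return ok;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("handshake + accept succeeded: " + demo());
    }
}
```

Note that the client is “connected” as soon as the handshake completes, even though the server has not yet called accept() — the connection is simply sitting in the full‑connection queue.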

We will first discuss the full‑connection queue, then the half‑connection queue, because the former is simpler and its saturation affects the latter.

Full‑Connection Queue

Principle

The full‑connection queue contains sockets that have completed the three‑way handshake but have not yet been accepted by the application (i.e., accept() has not been called). If incoming connections arrive faster than the application can process them, the queue grows.

Factors Limiting Queue Length

The queue length is limited by two parameters:

/proc/sys/net/core/somaxconn (default 128), a system‑wide upper bound.

The backlog argument passed to listen() (e.g., the second argument of Java’s ServerSocket constructor):

public ServerSocket(int port, int backlog) throws IOException {
    this(port, backlog, null);
}

Thus the effective queue length is qlen = min(net.core.somaxconn, backlog).
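A small sketch of this calculation, reading somaxconn from /proc on Linux (the hard‑coded 128 is the documented default, used as a fallback on other systems):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class QueueLength {
    // Effective full-connection queue length: min(somaxconn, backlog).
    static int effectiveQlen(int somaxconn, int backlog) {
        return Math.min(somaxconn, backlog);
    }

    public static void main(String[] args) throws IOException {
        int somaxconn = 128; // documented default
        Path p = Path.of("/proc/sys/net/core/somaxconn");
        if (Files.exists(p)) {
            somaxconn = Integer.parseInt(Files.readString(p).trim());
        }
        // with the article's backlog of 50 and the default somaxconn of 128,
        // the effective queue length is 50
        System.out.println("qlen = " + effectiveQlen(somaxconn, 50));
    }
}
```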

Viewing the Full‑Connection Queue

Use ss -ltn to list listening TCP sockets. For sockets in LISTEN state, Send‑Q shows the maximum full‑connection queue size and Recv‑Q the number of connections currently in the queue. For SSH the default maximum is 128, meaning up to 129 handshake‑completed connections can be waiting, because the kernel’s overflow check allows one connection beyond the configured limit.

Experimental Verification

Server code (Java):

import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.TimeUnit;

public class Test {
    public static void main(String[] args) throws IOException, InterruptedException {
        ServerSocket socket = new ServerSocket(8080, 50);
        while (true) {
            TimeUnit.HOURS.sleep(1);
        }
    }
}

With the default /proc/sys/net/core/somaxconn value of 128, the effective queue length is 50 (the backlog argument).

Client code (Java) creates 100 concurrent connections:

import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.TimeUnit;

public class Client {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100; i++) {
            new Thread(() -> {
                try {
                    new Socket("39.105.125.58", 8080);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();
        }
        TimeUnit.HOURS.sleep(1);
    }
}

Running ss -ltn shows a maximum queue size (Send‑Q) of 50 and a current length (Recv‑Q) of 51 — the kernel admits one connection beyond the configured backlog. Recv‑Q then stops growing, indicating the queue is full.

Using netstat -s we can see that 49 connection attempts were dropped (100 requests − 51 queued), confirming overflow.

What to Do When the Full‑Connection Queue Is Full

If connections arrive too quickly and the application processes them too slowly, the queue fills. The kernel parameter /proc/sys/net/ipv4/tcp_abort_on_overflow controls the behavior:

Value 1: the kernel aborts the connection, sending an RST to the client (client sees “Connection reset by peer”).

Value 0 (the default): the server silently drops the final ACK of the handshake and stays in SYN_RCVD, retransmitting SYN+ACK (up to tcp_synack_retries times). If the queue drains in time, the connection can still complete; otherwise the client eventually times out.
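With tcp_abort_on_overflow at 0, connections that complete the handshake simply wait in the full‑connection queue until the application calls accept(). A minimal loopback sketch (class name and backlog are illustrative) showing that connections queued before any accept() call are still served once the application catches up:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class QueueDrainDemo {
    // Connect n clients before calling accept() once, then drain the queue.
    static int queueThenDrain(int n) throws IOException {
        // backlog of n is enough to hold every pending connection
        ServerSocket server = new ServerSocket(0, n, InetAddress.getLoopbackAddress());
        Socket[] clients = new Socket[n];
        for (int i = 0; i < n; i++) {
            // handshake completes; the socket sits in the full-connection queue
            clients[i] = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort());
        }
        int accepted = 0;
        for (int i = 0; i < n; i++) {
            server.accept().close(); // the queue is drained only now
            accepted++;
        }
        for (Socket c : clients) c.close();
        server.close();
        return accepted;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("accepted " + queueThenDrain(3) + " queued connections");
    }
}
```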

Half‑Connection Queue

The half‑connection queue (SYN queue) is not a real queue but a hash table in the kernel. Its size is not directly visible; moreover, incoming SYNs can be dropped when the full‑connection queue is full, especially if tcp_syncookies is disabled.

Detecting Half‑Connection Queue Overflow

Run netstat -s | grep -i dropped | grep -i listen . A steadily increasing “SYNs to LISTEN sockets dropped” counter indicates half‑connection queue overflow.

Resolving Half‑Connection Queue Overflow

Adjust /proc/sys/net/ipv4/tcp_synack_retries to change the number of SYN‑ACK retransmissions (default 5); fewer retries make half‑open connections fail and free their queue slots sooner.

Enable /proc/sys/net/ipv4/tcp_syncookies (values 0‑disable, 1‑enable on overflow, 2‑always enable) to mitigate SYN‑flood attacks.
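The two knobs above can be set at runtime with sysctl (requires root; the values shown are illustrative starting points, not recommendations for every workload):

```shell
# Fail half-open connections faster: fewer SYN+ACK retransmissions (default 5)
sysctl -w net.ipv4.tcp_synack_retries=2

# Enable SYN cookies when the SYN queue overflows (0=disable, 1=on overflow, 2=always)
sysctl -w net.ipv4.tcp_syncookies=1

# Persist across reboots
echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf
```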

Conclusion

This article introduced the concepts of full‑connection and half‑connection queues, how to monitor them on Linux, and practical solutions for overflow. While Java developers may not often consider these details, understanding them helps diagnose high response times caused by server‑side queuing.

