
How Many TCP Connections Can a Server Really Handle? Limits and Tuning

This article explains the practical limits on how many TCP connections a Linux server or client can support, covering file‑descriptor parameters, memory consumption per socket, kernel tuning examples, and scaling calculations for large‑scale long‑connection services.

Architect

Overview

A common interview question asks how many TCP connections a server can support; the answer involves understanding Linux file‑descriptor limits, memory usage, and kernel parameters.

Linux file‑descriptor limits

Three key parameters affect the maximum number of open files (including sockets):

fs.file-max – system‑wide limit on open files; processes running as root are exempt.

soft nofile – per‑process soft limit, set in /etc/security/limits.conf or via ulimit -n.

hard nofile – per‑process hard limit; the ceiling a process may raise its soft limit to.

fs.nr_open – the kernel's per‑process maximum; hard nofile must not exceed it.

These values are coupled, so increasing one often requires adjusting the others. Using echo to write the /proc files directly is discouraged because those changes are lost after a reboot; persist them in /etc/sysctl.conf instead.

Example: increase max open files to 1,100,000

<code>vim /etc/sysctl.conf
fs.file-max=1100000   # system‑wide
fs.nr_open=1100000      # must exceed hard nofile
sysctl -p</code>
<code>vim /etc/security/limits.conf
*  soft  nofile  1000000
*  hard  nofile  1000000</code>

Maximum TCP connections on a server

A TCP connection is identified by a 4‑tuple (source IP, source port, destination IP, destination port). With the server's IP and port fixed, the theoretical maximum is 2^32 client IPs × 2^16 client ports = 2^48 ≈ 2.8×10^14, but the real ceiling is memory: an idle ESTABLISHED connection consumes roughly 3.3 KB of kernel memory, so about 1 million concurrent connections fit in roughly 3.2 GB, within reach of a 4 GB server.
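As a quick sanity check, the arithmetic behind these figures can be sketched as follows (the 3.3 KB per‑connection figure is this article's estimate, not a kernel constant):

```java
// Back-of-envelope check of the 4-tuple and memory figures above.
public class TupleMath {
    public static void main(String[] args) {
        long clientIps = 1L << 32;        // IPv4 address space
        long clientPorts = 1L << 16;      // 16-bit port field
        long theoreticalMax = clientIps * clientPorts;  // 2^48 combinations
        System.out.println(theoreticalMax);             // 281474976710656

        double perConnKb = 3.3;           // idle ESTABLISHED connection (estimate)
        double totalGb = 1_000_000 * perConnKb / (1024 * 1024);
        System.out.printf("%.2f GB%n", totalGb);        // 3.15 GB, fits in 4 GB
    }
}
```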

Client‑side limits

Clients consume a source port for each connection. With a single client IP connecting to a single server IP and port, the maximum is 65,535 connections. With n client IPs, the limit becomes n × 65,535; if the server listens on m ports, a single client IP can open up to 65,535 × m connections. The kernel parameter net.ipv4.ip_local_port_range may further restrict the usable port range.
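A small sketch of this client‑side arithmetic, using hypothetical values for n and m and the port range commonly seen as the Linux default:

```java
// Illustrative client-side connection limits; n and m are example values.
public class ClientPortMath {
    public static void main(String[] args) {
        int portsPerIp = 65_535;
        int n = 4;                               // client IPs
        int m = 3;                               // server listening ports
        System.out.println(n * portsPerIp);      // 262140 -> n × 65535
        System.out.println(portsPerIp * m);      // 196605 -> 65535 × m

        // net.ipv4.ip_local_port_range narrows this further; with the
        // common default 32768-60999 only 28232 source ports are usable.
        System.out.println(60_999 - 32_768 + 1); // 28232
    }
}
```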

Scaling a long‑connection push service

Pure memory arithmetic (~3 KB per idle connection) would allow tens of millions of connections on a 128 GB server, but once application‑level buffers and processing overhead are accounted for, a practical estimate is roughly 5 million connections per server. To handle 100 million users, about 20 such servers would be sufficient.
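The server count works out as a simple ceiling division over the figures above:

```java
// Scaling estimate: 5M practical connections per 128 GB server.
public class PushScaling {
    public static void main(String[] args) {
        long users = 100_000_000L;
        long connsPerServer = 5_000_000L;
        // Ceiling division: round up so the last partial server is counted.
        long servers = (users + connsPerServer - 1) / connsPerServer;
        System.out.println(servers); // 20
    }
}
```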

Additional kernel tuning

The listen queue length is capped by net.core.somaxconn (default 128 on older kernels; raised to 4096 in Linux 5.4). Raising this value reduces the chance of dropped connection attempts under high concurrency. After terminating a process, its ports may remain occupied briefly due to TIME_WAIT; waiting a short while lets the OS reclaim them.
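A minimal sketch of where the backlog enters the picture, using Java NIO's ServerSocketChannel (port 0 here simply asks the OS for any free port):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

// The backlog passed to bind() is silently capped by net.core.somaxconn:
// requesting more than somaxconn yields somaxconn.
public class BacklogDemo {
    public static void main(String[] args) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0), 4096);
        System.out.println("listening on " + server.getLocalAddress());
        server.close();
    }
}
```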

Socket bind warning

Binding a client socket to a specific port forces the use of that port and can lead to exhaustion; it is usually better to let the OS choose the source port.

<code>import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class ClientBindDemo {
    public static void main(String[] args) throws IOException {
        SocketChannel sc = SocketChannel.open();
        // sc.bind(new InetSocketAddress("localhost", 9999)); // forces one fixed source port
        sc.connect(new InetSocketAddress("localhost", 8080));
        System.out.println("waiting..........");
    }
}</code>