
How BBR Congestion Control Supercharges Cloud Disk Speed

This article explains how the BBR congestion control algorithm, designed for long‑fat networks, overcomes the limitations of traditional TCP to dramatically improve cloud storage download speeds, detailing its principles, implementation steps, and real‑world performance gains.

360 Zhihui Cloud Developer

Introduction

Cloud storage speed is a critical metric for product reputation and user satisfaction. Traditional acceleration relies on proxy servers, which do not address the specific conditions of Chinese wide‑area networks. The BBR congestion control algorithm, optimized for long‑fat networks, can dramatically boost transfer speeds.

Traditional TCP Congestion Control

Wide‑area networks in China often exhibit high bandwidth, high latency, and a certain packet‑loss rate. Packet loss may stem from congestion or transmission errors. Shared bandwidth among secondary ISPs leads to buffer overflow, causing loss and a sharp reduction in sending rate. This environment is referred to as a “long‑fat network” (large RTT with high bandwidth).

Traditional TCP aims to fully utilize the pipe, using slow start, additive increase, and multiplicative decrease. The pipe capacity is estimated as bandwidth × RTT.
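The bandwidth × RTT estimate above can be computed directly. A minimal sketch, where the 100 Mbit/s bottleneck and 40 ms RTT are hypothetical figures for a long‑fat path, not measurements from this article:

```shell
# Bandwidth-delay product (BDP): how many bytes must be in flight
# to keep the pipe full. All values are illustrative assumptions.
bandwidth_bps=$((100 * 1000 * 1000))              # 100 Mbit/s bottleneck
rtt_ms=40                                         # 40 ms round-trip time
bdp_bytes=$((bandwidth_bps / 8 * rtt_ms / 1000))  # bits -> bytes, ms -> s
echo "BDP = ${bdp_bytes} bytes"                   # prints: BDP = 500000 bytes
```

Note that roughly 488 KB in flight already exceeds the 64 KB limit of an unscaled TCP window, which is why window scaling matters on such paths.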

The problems that follow: traditional TCP cannot distinguish congestion loss from transmission-error loss, it suffers from bufferbloat as it fills router buffers, and on loss it shrinks the sending window sharply, leaving throughput far below the pipe's capacity.

BBR Congestion Control

BBR [1] tackles both issues: it does not treat packet loss as a congestion signal (since the cause of a loss cannot be reliably identified), and it estimates bandwidth and delay separately, which avoids bufferbloat. It alternates between bandwidth-probing and delay-probing phases.

During bandwidth probing, BBR first raises the sending rate to 5/4 of the current bandwidth estimate to test whether the pipe can be filled, then lowers it to 3/4 to drain the excess packets queued in the buffer, and finally paces at the newly estimated bandwidth for the remaining round trips. The cycle repeats until the true bottleneck bandwidth is found.
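The cycle above can be sketched as a sequence of pacing gains: one phase probing up at 5/4, one draining at 3/4, then cruising at the estimate. This eight‑phase structure follows BBR's published design; the bandwidth figure is a hypothetical placeholder:

```shell
# BBR's pacing-gain cycle: probe up (5/4), drain (3/4), then six
# round trips cruising at the current bandwidth estimate.
bw_estimate=100   # bottleneck-bandwidth estimate, Mbit/s (assumed)
for gain in 1.25 0.75 1 1 1 1 1 1; do
  rate=$(awk -v bw="$bw_estimate" -v g="$gain" 'BEGIN { printf "%.0f", bw * g }')
  echo "pacing gain ${gain}: send at ${rate} Mbit/s"
done
```

If the 5/4 phase delivers more data without raising RTT, the bandwidth estimate grows, and the next cycle probes from that higher baseline.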

During delay probing, if no new minimum RTT has been observed for some time, the sending window is reduced to four packets so the buffer can drain and the measured RTT is accurate; the window then returns to its previous size.

BBR Summary

BBR’s initial phase avoids aggressively filling the pipe, preventing bufferbloat-induced loss and latency. Subsequent alternating bandwidth and delay probing yields an accurate estimate of pipe capacity, reduces loss, and continuously expands the sending window to achieve maximum throughput.

Suitable Scenarios

1. High‑bandwidth, high‑latency networks with a non‑negligible loss rate.
2. Networks with relatively small buffers (slow‑access links).

BBR in Cloud Disk Practice

Upgrade the kernel to version 4.9 or newer, then enable BBR:

<code># persist the qdisc and congestion-control settings
echo "net.core.default_qdisc=fq" >> /etc/sysctl.conf
echo "net.ipv4.tcp_congestion_control=bbr" >> /etc/sysctl.conf
sysctl -p                                          # apply the new settings
sysctl net.ipv4.tcp_available_congestion_control   # should list bbr
sysctl -n net.ipv4.tcp_congestion_control          # should print bbr</code>

Enable TCP window scaling so the sliding window can grow beyond 64 KB:

<code>sysctl -w net.ipv4.tcp_window_scaling=1</code>
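Window scaling alone is not enough if the kernel's socket‑buffer ceilings stay small. A hedged sketch of raising them toward the path's bandwidth‑delay product; the 64 MB maximums are illustrative values, not ones from this deployment:

```shell
# Raise per-socket receive/send buffer limits ("min default max", in bytes)
# so the TCP window can actually grow toward the BDP. Requires root.
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
EOF
sysctl -p   # apply the persisted settings
```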

Speed Improvement Results

Average user speed increased by roughly 50%. The proportion of users achieving speeds above 1 Mbps doubled.

References

[1] Cardwell, Neal, et al. "BBR: Congestion‑Based Congestion Control." Queue 14.5 (2016): 50.

Performance Tuning · Linux · network optimization · cloud storage · congestion control · BBR
Written by

360 Zhihui Cloud Developer

360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.
