Understanding InfiniBand: Technology, Market Landscape, and Its Role in Supercomputing
This article provides a comprehensive overview of InfiniBand technology, its high‑bandwidth low‑latency architecture, dominant market players such as Mellanox, and its critical impact on the performance and evolution of global supercomputing systems as reflected in TOP500 rankings.
InfiniBand is a high‑bandwidth, low‑latency interconnect technology that supports concurrent links for storage I/O, network I/O, and inter‑process communication, enabling the connection of disks, SANs, LANs, servers, and clusters, as well as external networks like WAN and VPN.
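For readers who want to see how host software actually talks to this interconnect, the minimal sketch below (not from the original article) uses the Linux verbs API (libibverbs) to enumerate InfiniBand devices and print the state, LID, and subnet-manager LID of each device's first port. The choice of port 1 and the compile command are assumptions.

```c
/* Minimal sketch: list InfiniBand devices with libibverbs and print
 * basic attributes of port 1.  Assumes libibverbs is installed;
 * compile with: gcc ib_list.c -o ib_list -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {   /* port numbers start at 1 */
            printf("%-16s state=%d lid=0x%04x sm_lid=0x%04x\n",
                   ibv_get_device_name(devs[i]),
                   port.state, (unsigned)port.lid, (unsigned)port.sm_lid);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```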
1. InfiniBand technology background and status
Designed primarily for enterprise and large‑scale data centers, InfiniBand delivers high reliability, availability, scalability, and performance, offering up to 100 Gbps of bandwidth and latency below 0.6 µs. It is widely used in high‑performance computing (HPC) to build clusters whose aggregate performance scales nearly linearly with the number of servers.
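As a quick sanity check on the 100 Gbps figure (assuming the standard 4x EDR links behind it, which the article does not spell out): each lane signals at 25.78125 Gbaud with 64b/66b encoding, so

4 lanes × 25.78125 Gb/s × 64/66 ≈ 100 Gb/s of payload bandwidth per direction.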
InfiniBand switches come in managed and unmanaged variants; managed switches add a dedicated hardware management port for fabric control, and a typical project needs only a small number of them.
2. InfiniBand’s ultra‑strong transmission performance for supercomputing centers
Top‑tier supercomputing facilities increasingly adopt InfiniBand for its superior bandwidth and low latency. In an Intersect360 survey, roughly two‑thirds of the systems studied ran Ethernet and InfiniBand side by side, and InfiniBand holds the leading share of high‑performance interconnects in HPC networking.
Mellanox’s InfiniBand solutions hold roughly 46 % of this market: 65 of the new systems on the 2016 TOP500 list used Mellanox products, InfiniBand connected 140 systems (28 %) on the 2019 list, and InfiniBand‑based systems account for about 40 % of total TOP500 performance.
3. Major InfiniBand players and industry giants
The InfiniBand Trade Association has nine board members; of these, only Mellanox and Emulex were dedicated InfiniBand vendors, while the rest are primarily users of the technology. Emulex was acquired by Avago in 2015, and QLogic sold its InfiniBand business to Intel, which now offers the competing Omni‑Path Architecture (OPA).
Mellanox dominates the market, with deployment numbers far exceeding those of competitors such as Intel’s Omni‑Path and Cray. Mellanox promotes co‑design and offloading architectures (e.g., RDMA) that enable capabilities unavailable to traditional onload designs.
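To make the offloading point concrete, here is a hypothetical sketch (not from the article) of how an application hands a one‑sided RDMA write to the adapter through the verbs API: the CPU only posts a work request describing the transfer, and the NIC moves the data without further host involvement. It assumes a protection domain, an already‑connected RC queue pair, and a remote address/rkey exchanged out of band; none of that setup code is shown, and the memory region is never deregistered.

```c
/* Hedged sketch: post a one-sided RDMA WRITE on an already-connected
 * queue pair.  qp, pd, buf, remote_addr and rkey are assumed to have
 * been created/exchanged elsewhere (e.g. over a TCP bootstrap channel). */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_pd *pd,
                    void *buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    /* Register the local buffer so the HCA can DMA from it directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;   /* one-sided: no remote CPU involved */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;   /* generate a completion when done */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    /* The CPU's job ends here; the adapter performs the transfer. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```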
In November 2019, Mellanox released the world’s first 200 Gb/s HDR InfiniBand solution, comprising ConnectX‑6 adapters, Quantum switches, and LinkX cables, targeting HPC, AI, big data, cloud, and storage platforms.
The ConnectX‑6 Multi‑Host technology allows up to eight hosts to share a single PCIe adapter, reducing capital and operational expenses. The Quantum switches in the 200 Gb/s HDR solution provide up to 16 Tb/s of switching capacity at 90 ns port‑to‑port latency and support in‑network computing.
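"In‑network computing" here means offloading collective operations such as reductions into the switches themselves, for example via Mellanox's SHARP protocol. From the application side nothing changes: an ordinary MPI collective like the one sketched below can be executed inside the fabric when a SHARP‑enabled MPI library and Quantum switches are present. The code is a generic MPI example, not a Mellanox‑specific API.

```c
/* Ordinary MPI allreduce: with a SHARP-capable fabric and MPI library,
 * the reduction can be performed inside the InfiniBand switches instead
 * of on the host CPUs.  Compile with: mpicc allreduce.c -o allreduce */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank;   /* each rank contributes one value */
    double sum   = 0.0;

    /* The collective that in-network computing can offload. */
    MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", sum);

    MPI_Finalize();
    return 0;
}
```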
4. Global supercomputer landscape
Analysis of the 2019 TOP500 data shows China leading in system count and the United States leading in aggregate performance. IBM and Intel dominate processor supply, with IBM fielding fewer but higher‑performance cores. Lenovo, IBM, and Cray (now part of HPE) are the top vendors by total compute capacity.
Growth in supercomputer performance has slowed since around 2008, and more markedly since 2013, reflecting the deceleration of Moore’s Law. China’s installed TOP500 systems increased to 227 (45.6 % of global installations) by November 2019, while the United States, with fewer systems, still holds a larger share of aggregate performance (≈37 %).
Overall, InfiniBand remains the cornerstone of high‑performance interconnects, driving the evolution of supercomputing architectures and enabling future advances in data‑intensive workloads.