Survey of Network Types and Vendors in High‑Performance Computing (HPC) Environments
The Intersect360 2016 survey of 474 HPC sites, covering 723 compute systems, 633 storage systems, and 638 LANs, shows that Ethernet and InfiniBand dominate system interconnects, storage networks, and LANs, with Mellanox and Cisco together accounting for over half of installations. Newer options such as 10 GE, 40 G and 56 G InfiniBand, and Omni‑Path show evolving market shares driven by bandwidth and latency demands.
Intersect360 conducted a web‑based survey of high‑performance computing (HPC) users from industry, government and academia, collecting hardware, software and network configuration data from 474 sites (723 compute systems, 633 storage systems, 638 LANs) during the second and third quarters of 2016.
The survey found that Ethernet and InfiniBand are the two primary network protocols across system interconnect, storage networks and LANs. Ethernet is the most widely deployed overall, while high‑performance interconnects favor InfiniBand, especially 40 G and the newer 56 G variants.
Approximately two‑thirds of respondents reported deploying both Ethernet and InfiniBand. 10 GE leads in storage networks, accounting for 35% of installations, while InfiniBand's share of storage is rising (from 31% to 34%).
Mellanox is the most common vendor for system interconnect (42% of sites) and storage networks (35%), while Cisco remains the dominant LAN supplier (46% of sites). Together, Mellanox and Cisco represent over 50% of all network component installations.
Market‑share analysis shows that over 30% of system interconnects and LANs still use 1 GE Ethernet, primarily as secondary management links or in small throughput‑oriented clusters; more than 70% of 1 GE installations serve as secondary interconnects.
InfiniBand continues to be the preferred high‑performance interconnect: excluding 1 GE Ethernet, InfiniBand installations are roughly twice those of all faster Ethernet variants (10 G, 40 G, 100 G) combined. 40 G InfiniBand remains the most widely installed, though 56 G InfiniBand has been gaining on it rapidly since 2014.
Omni‑Path installations were below 1% in 2016, having just entered volume production in Q1 2016, with expectations of growth in future surveys.
Vendor analysis across the three application domains shows that 20 of the top 51 network vendors have products in all three areas. Mellanox leads in system interconnect and storage, while Cisco leads in LANs.
Key drivers of network evolution are the demand for higher bandwidth and lower latency and the need to support increasingly parallel applications. Users tend to adopt the latest or best price‑performance interconnect when refreshing servers, typically on a two‑ to three‑year cycle.
The report concludes that bandwidth and latency requirements will continue to push the HPC market toward faster Ethernet variants, higher‑speed InfiniBand (56 G and beyond), and emerging technologies such as Omni‑Path, with Mellanox and Cisco maintaining dominant positions.
Architects' Tech Alliance
Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.