
UALink 1.0: An Open High‑Speed Interconnect Challenging Nvidia’s AI Dominance

The UALink 1.0 specification, driven by AMD, Intel, Broadcom and other industry leaders, introduces an open, low‑latency, high‑bandwidth interconnect that can link up to 1,024 AI accelerators, offering a cost‑effective alternative to Nvidia’s NVLink and reshaping the AI‑HPC market.


UALink is an open‑standard, high‑speed interconnect initiative launched by AMD, Broadcom, Google, Intel and other leading technology companies to challenge Nvidia’s NVLink dominance in AI accelerator networking.

Version 1.0 delivers up to 200 GT/s per lane, supports seamless connection of up to 1,024 accelerators, and provides low‑cost deployment, flexible scaling, and strong security features, injecting fresh competition into the AI accelerator ecosystem.

The specification defines a four‑layer protocol stack—physical, data‑link, transaction, and protocol layers—optimised for AI and HPC workloads. The physical layer uses standard Ethernet components (e.g., 200GBASE‑KR1/CR1) with enhanced forward error correction and interleaved coding to reduce latency while staying Ethernet‑compatible.

The data‑link layer uses flit‑based packetisation, grouping 64‑byte flits into 640‑byte units protected by CRC, with optional retry logic for reliable, high‑throughput transfers.
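As a rough sketch of the general pattern described above (flits grouped into a CRC‑protected unit, with a retry decision on checksum failure), the snippet below packs ten 64‑byte flits into one 640‑byte unit. It is illustrative only: the actual UALink wire format, CRC polynomial, and retry protocol are defined by the spec, and `zlib.crc32` is used here purely as a stand‑in checksum.

```python
import zlib  # CRC-32 as a stand-in; the real UALink CRC is spec-defined

FLIT_SIZE = 64       # bytes per flit, per the article
FLITS_PER_UNIT = 10  # 10 x 64 B = one 640 B transfer unit

def pack_unit(flits):
    """Group 64-byte flits into one 640-byte unit and append a checksum."""
    assert len(flits) == FLITS_PER_UNIT
    assert all(len(f) == FLIT_SIZE for f in flits)
    payload = b"".join(flits)
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "little")

def verify_unit(unit):
    """Recompute the trailing CRC; a mismatch is what would trigger
    the optional retry logic mentioned above."""
    payload, crc = unit[:-4], int.from_bytes(unit[-4:], "little")
    return zlib.crc32(payload) == crc
```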

The transaction layer provides compressed addressing and direct memory operations (read, write, atomic), achieving up to 95% protocol efficiency, ideal for low‑latency AI training and inference.
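Protocol efficiency here simply means the fraction of bytes on the wire that carry application payload rather than headers or checksums. The back‑of‑envelope calculation below is illustrative (the 608/32 byte split is an assumed example, not the actual UALink header breakdown) and shows how a modest per‑unit overhead yields the ~95% figure.

```python
def protocol_efficiency(payload_bytes: int, overhead_bytes: int) -> float:
    """Fraction of transmitted bytes that are application payload."""
    return payload_bytes / (payload_bytes + overhead_bytes)

# Illustrative only: a 640-byte unit dedicating ~32 bytes to
# headers/CRC would reach 95% efficiency (608 / 640).
eff = protocol_efficiency(608, 32)
print(f"{eff:.1%}")  # 95.0%
```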

The protocol layer adds hardware‑level encryption (UALinkSec) and trusted execution environments such as AMD SEV and Intel TDX, delivering multi‑tenant isolation and confidential computing for data‑center pods.

Compared with PCI‑Express or CXL, UALink offers superior bandwidth and latency, making it well‑suited for large‑scale AI clusters. Its scalability enables up to 800 GT/s aggregate bandwidth in a four‑lane configuration and supports up to 1,024 accelerators within a single pod.
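The aggregate figure above is straightforward arithmetic: four lanes at 200 GT/s each. The sketch below also converts that to a rough raw byte rate, under the simplifying assumption of one bit per transfer and ignoring encoding and FEC overhead, so the GB/s number is an upper‑bound estimate rather than a spec value.

```python
GT_PER_LANE = 200  # GT/s per lane in UALink 1.0
LANES = 4

aggregate_gts = GT_PER_LANE * LANES  # 800 GT/s, as stated above

# Rough raw-throughput estimate, assuming ~1 bit per transfer and
# ignoring line-encoding/FEC overhead (an illustrative simplification):
raw_gbytes_per_s = aggregate_gts / 8
print(aggregate_gts, raw_gbytes_per_s)  # 800 GT/s, ~100 GB/s raw
```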

Power efficiency is a key advantage: UALink switches consume only one‑third to one‑half the power of comparable Ethernet ASICs, and each accelerator saves roughly 150–200 W, reducing total‑cost‑of‑ownership for hyperscale cloud providers.
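To see why a 150-200 W per‑accelerator saving matters at hyperscale, the estimate below multiplies it across a full 1,024‑accelerator pod running around the clock. The electricity price and 24/7 duty cycle are illustrative assumptions, not figures from the article.

```python
def annual_savings_usd(num_accelerators: int,
                       watts_saved_each: float,
                       price_per_kwh: float = 0.10) -> float:
    """Rough annual energy-cost saving from a per-accelerator power
    reduction. Assumes 24/7 operation; the $/kWh rate is an
    illustrative placeholder, not a quoted figure."""
    kwh_per_year = num_accelerators * watts_saved_each / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

# A full 1,024-accelerator pod saving ~175 W each:
print(round(annual_savings_usd(1024, 175)))
```

At the assumed rate, the pod‑level saving lands in the low hundreds of thousands of dollars per year on energy alone, before counting cooling and provisioning effects.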

While Nvidia’s ecosystem (NVLink, NVSwitch, CUDA) provides a strong hardware‑software lock‑in, UALink’s open approach requires parallel development of software stacks such as ROCm and oneAPI to attract developers.

The alliance includes chip designers (AMD, Intel, Broadcom), cloud providers (Google, Microsoft, Meta), networking vendors (Cisco), and system integrators (HPE). Early ecosystem milestones include Synopsys IP controllers and planned production of UALink switches by Astera Labs and Broadcom.

Challenges remain: differing member priorities (e.g., Google/Meta’s focus on custom accelerators vs. AMD/Intel’s GPU strategy) and the need to demonstrate real‑world performance and cost benefits within a 12‑18‑month commercialization window, with first products expected in 2026.

Strategic collaboration with the UltraEthernet Consortium (UEC) aims to combine intra‑pod accelerator interconnect (UALink) with inter‑pod Ethernet scaling, delivering a comprehensive solution for both “scale‑in” and “scale‑out” AI workloads.

Overall, UALink 1.0 represents a significant step toward an open, high‑performance, low‑cost AI interconnect that could reshape the competitive landscape of AI hardware.

Tags: High Performance Computing, Data Center, accelerator networking, AI interconnect, Nvidia competition, Open Standards, UALink
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
