Major technology companies have joined forces to develop a new standard for interconnecting AI accelerators, an effort that notably excludes NVIDIA, the largest supplier of AI GPUs. These companies face a common challenge: moving enormous volumes of data with minimal latency inside AI data centers.
NVIDIA offers its own proprietary high-speed interconnect, NVLink, but its exclusivity is a drawback: it works only with NVIDIA GPUs. To address this limitation, companies including AMD, Broadcom, Cisco, Google, HPE, Intel, Meta, and Microsoft have formed the Ultra Accelerator Link Promoter Group to develop and promote an open standard, UALink.
The consortium aims for industry-wide adoption of UALink, an open-standard solution for low-latency, high-bandwidth data transfer between AI accelerators within data centers. The group believes an open standard could challenge NVIDIA's dominance: AMD and Intel compete directly with NVIDIA in the GPU market, while Microsoft and Google are developing their own AI hardware.
Similar standardization efforts, such as the PCI bus, Ethernet, and TCP/IP, have proved crucial to the tech industry by allowing hardware and software from different manufacturers to interoperate seamlessly. According to the consortium, a standardized interface is equally key to AI, machine learning, high-performance computing, and cloud applications in the coming generation of AI data centers and deployments.
The group expects the first UALink release in the third quarter of 2024, available to companies that join the UALink Consortium. NVIDIA has not been invited to the initiative, but this does not necessarily mean permanent exclusion: the consortium could welcome NVIDIA at a later stage, and NVIDIA could in turn adopt UALink if the industry embraces it widely.