Signalling a shift towards a more open and competitive AI chip market, a group of tech majors, including Google, Intel, Microsoft, Meta, AMD, HPE, Cisco, and Broadcom, have come together to create a new open standard for AI accelerator chips.
Imagine a highway system where only one company controls the roads, dictating which cars can drive on them and how fast they can go. This is essentially the situation with NVIDIA’s proprietary NVLink technology, which dominates the AI chip market. However, a group of major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett Packard Enterprise, Cisco, and Broadcom, have come together to create a new open standard for AI accelerator chips, called Ultra Accelerator Link (UALink).
UALink is an open standard that aims to challenge NVIDIA’s dominant position in the AI chip market. By providing an alternative interconnect technology, UALink allows multiple companies to contribute to and develop AI hardware advancements, which can potentially reduce NVIDIA’s influence and create a more competitive market.
A New Standard to Rival NVIDIA
This new standard aims to provide an alternative to NVIDIA’s NVLink, allowing multiple companies to contribute to and develop AI hardware advancements. This move is similar to how other open standards, like Compute Express Link (CXL), have improved data centre connectivity by providing high-speed connections between CPUs, devices, and memory.
Google and Meta are keen to reduce their reliance on NVIDIA, whose networking business is a crucial part of its AI dominance. By adopting UALink, they can break free from NVIDIA’s proprietary ecosystem and have more flexibility in choosing AI hardware.
NVIDIA currently enjoys a very strong position and influence in the AI chip market, with an estimated market share of 80-95%. In its most recent fiscal quarter, NVIDIA’s data centre sales (including AI chips) grew by over 400% year-on-year.
Faster Processing & Training of AI Models
Just like a superhighway facilitates rapid and efficient movement of vehicles, UALink promises to enable high-speed data transfer between AI accelerator chips in data centres. This allows for faster processing and training of AI models, which is critical for demanding machine learning applications.
- Interconnectivity: A superhighway connects various parts of a city or region, enabling seamless travel. Similarly, UALink connects multiple AI accelerator chips within a data centre, allowing them to communicate and work together efficiently.
- Standardisation: A superhighway follows standardised rules and regulations for traffic flow, ensuring safety and efficiency. UALink is an open standard that aims to standardise the interface for AI and machine learning applications, making it easier for different companies to develop and integrate their AI accelerator chips (the sketch after this list shows where such a standard sits in the software stack).
- Scalability: A superhighway can handle a large volume of traffic, and UALink is designed to connect up to 1,024 GPU-based AI accelerators, making it scalable for large-scale AI applications.
- Innovation: A superhighway enables innovation by providing a platform for new transportation technologies and services. UALink aims to promote innovation in AI accelerator chip development by providing an open standard that allows multiple companies to contribute and add value to the ecosystem.
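To make the standardisation point above concrete, here is a minimal sketch of where an interconnect standard sits in the software stack. Distributed-training frameworks talk to a collective-communication backend, and the physical fabric underneath that backend is what NVLink provides today and what UALink-based hardware would aim to provide. The snippet uses PyTorch’s standard torch.distributed API with the CPU-only “gloo” backend purely for illustration; it is not UALink-specific code, and any UALink-aware backend is hypothetical at this stage.

```python
# Minimal sketch: how training frameworks abstract the interconnect.
# The collective backend ("gloo" here, "nccl" on NVIDIA GPUs) hides the
# physical link; an open interconnect standard would plug in below this layer.
import os
import torch
import torch.distributed as dist

def main() -> None:
    # Single-process setup so the sketch runs anywhere; real pods launch
    # one process per accelerator via torchrun or a cluster scheduler.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

    # Gradients from each accelerator are summed across the fabric.
    # In a pod of many GPUs, this all-reduce is the traffic the
    # interconnect must carry on every training step.
    grads = torch.ones(4)
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)
    print(grads)  # tensor([1., 1., 1., 1.]) with world_size=1

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The point of the sketch is that frameworks already abstract the interconnect behind a backend, so an open standard competes at the hardware layer without requiring changes to training code.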
Reducing Latency
UALink 1.0 aims to enable the connection of up to 1,024 GPUs within a single computing “pod,” which is a group of server racks. This standard is based on technologies like AMD’s Infinity Architecture and is expected to improve speed and reduce data transfer latency compared to existing interconnect specifications.
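To give a rough sense of why bandwidth and latency matter at pod scale, the back-of-envelope sketch below estimates the gradient traffic each GPU’s link must carry per training step under plain data parallelism. The model size, precision, and per-link bandwidth figures are illustrative assumptions, not UALink specifications.

```python
# Back-of-envelope: per-GPU gradient traffic in data-parallel training.
# All figures are illustrative assumptions, not UALink specifications.

params = 70e9              # assumed model size: 70 billion parameters
bytes_per_param = 2        # gradients exchanged in 16-bit precision
gpus_per_pod = 1024        # UALink 1.0's stated pod-scale target
link_bw_gb_per_s = 100     # assumed usable per-GPU link bandwidth (GB/s)

# A ring all-reduce pushes roughly 2 * (N - 1) / N times the gradient
# payload through each GPU's link on every training step.
payload_gb = params * bytes_per_param / 1e9
traffic_per_gpu_gb = 2 * (gpus_per_pod - 1) / gpus_per_pod * payload_gb
step_seconds = traffic_per_gpu_gb / link_bw_gb_per_s

print(f"~{traffic_per_gpu_gb:.0f} GB per GPU link per step, "
      f"~{step_seconds:.1f} s of pure communication at {link_bw_gb_per_s} GB/s")
```

Even under these rough assumptions, communication alone can take seconds per step, which is why a faster, lower-latency interconnect translates directly into faster model training.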
The UALink Promoter Group plans to form a consortium later in 2024 to manage the ongoing development of the UALink specification. Member companies will have access to UALink 1.0 upon joining, with a higher-bandwidth version, UALink 1.1, planned for release in Q4 2024.
A Challenge to NVIDIA
This development is significant because it challenges NVIDIA’s dominance in the AI chip market and gives other companies a way to shape the direction of AI hardware. As more companies invest in their own AI chip development, the need for a standardised interconnect technology becomes increasingly pressing, particularly as a means to counter or balance NVIDIA’s influence.
As things currently stand, NVIDIA may have a window of opportunity over the next two years, because the first UALink products are unlikely to reach the market before then. This gap gives the company time to cement its proprietary advantage while the AI data centre sector continues to expand. However, this move by major tech players signals a shift towards a more open and competitive AI chip market, which could ultimately benefit consumers and the technology industry as a whole.