AI back-end networks to drive data center switch spending

According to the new AI Networks for AI Workloads report by Dell’Oro Group, spending on switches deployed in AI back-end networks, used to connect accelerated servers, is forecast to approach $80 billion over the next five years, nearly doubling the total data center switch market opportunity.

Current data center switch market spending is concentrated in front-end networks used primarily to connect general-purpose servers, but AI workloads will require a new back-end infrastructure buildout. While InfiniBand currently dominates the AI back-end network market, Ethernet is poised to overtake it soon.

“As predicted, growth in AI workloads and the associated data center infrastructure continues to track well ahead of market expectations. We therefore raised our forecast for AI back-end networks compared to our prior December 2023 forecast, even though that earlier forecast was initially considered very aggressive by a number of industry players,” said Sameh Boujelbene, Vice President at Dell’Oro Group. “The upward adjustment was broad-based across Ethernet and InfiniBand. We are now, however, much more optimistic about the ability of Ethernet to eclipse InfiniBand within the next few years. This optimism stems from significant improvements on the technology side as well as market demand.

“On the technology side, we are observing significant enhancements to Ethernet at multiple layers of the stack: network chips, network operating systems, network interface cards, and optics. In terms of demand, we are tracking significant new Ethernet wins, such as the latest 100K GPU cluster win announced by NVIDIA, among many other potential wins by other switch vendors including Celestica, Cisco, Arista, Juniper, and Nokia,” continued Boujelbene.

Additional highlights from the AI Networks for AI Workloads Report:

AI networks will accelerate the transition to higher speeds. The majority of the switch ports deployed in AI back-end networks are expected to be 800 Gbps by 2025 and 1600 Gbps by 2027.

While most of the market demand will come from Tier 1 Cloud Service Providers, we have significantly raised our forecast for Tier 2/3 and large enterprises in light of their accelerated pace of AI workload adoption and infrastructure deployment.
