Supermicro unveils advanced liquid-cooled NVIDIA HGX B300 systems

Supermicro expands its NVIDIA Blackwell range with liquid-cooled HGX B300 systems, offering leading GPU density and power efficiency for AI and cloud deployments.

Super Micro Computer, Inc. (SMCI), a provider of complete IT solutions, has announced a significant expansion of its NVIDIA Blackwell architecture portfolio with the introduction and shipping of new 4U and 2-OU (OCP) liquid-cooled NVIDIA HGX B300 systems. These additions form a core part of Supermicro's Data Center Building Block Solutions (DCBBS) and set new standards for GPU density and power efficiency, tailored for hyperscale data centres and AI factories.

President and CEO Charles Liang highlights that the company's latest systems provide the density and energy efficiency required in today's fast-moving AI infrastructure landscape. With the market's most compact NVIDIA HGX B300 solutions, Supermicro fits 144 GPUs in a single rack, using direct liquid cooling to markedly reduce power consumption and cooling costs.

The 2-OU (OCP) system conforms to the 21-inch OCP Open Rack V3 (ORV3) specification, enabling up to 144 GPUs per rack. This represents maximum GPU density, particularly valuable for hyperscale and cloud providers that prioritise space efficiency without compromising serviceability. The design includes efficient liquid cooling, blind-mate manifold connections, and a modular GPU/CPU tray layout. Each node accelerates AI workloads with eight NVIDIA Blackwell Ultra GPUs while saving space and energy. A single ORV3 rack accommodates up to 18 nodes for a total of 144 GPUs, scaling out with NVIDIA Quantum-X800 InfiniBand switches served by Supermicro's coolant distribution units.
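
As a minimal sketch of the rack-level arithmetic implied above (the node and GPU counts are those quoted in this article, not taken from Supermicro documentation):

```python
# Illustrative arithmetic only: figures are those quoted in the article
# (18 nodes per ORV3 rack, 8 NVIDIA Blackwell Ultra GPUs per node).
NODES_PER_ORV3_RACK = 18
GPUS_PER_NODE = 8

gpus_per_rack = NODES_PER_ORV3_RACK * GPUS_PER_NODE
print(f"GPUs per ORV3 rack: {gpus_per_rack}")  # 144
```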

The 4U system variant complements this offering, delivering the same compute performance in a traditional 19-inch EIA rack suited to large-scale AI deployments. Thanks to Supermicro's DLC-2 technology, it captures up to 98% of the heat generated by the system, improving power efficiency while reducing noise and easing serviceability for dense AI clusters.

Key performance enhancements deliver significant gains: each system carries 2.1TB of HBM3e GPU memory, allowing larger models to be handled. Both platforms also raise compute-fabric throughput to up to 800Gb/s through integrated NVIDIA ConnectX-8 SuperNICs when paired with NVIDIA networking solutions. These enhancements accelerate AI workloads such as agent-driven applications, foundation-model training, and large-scale inference.
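
To put those per-node figures in rack-level terms, a rough back-of-the-envelope calculation follows, assuming the 2.1TB of HBM3e per node and the 18-node, 8-GPU-per-node rack described earlier, and assuming the 800Gb/s figure applies per GPU via its integrated ConnectX-8 SuperNIC; the rack-level totals are illustrative, not vendor-published numbers:

```python
# Rough aggregation of the per-node figures quoted in the article.
# Assumes 18 nodes per rack and 8 GPUs per node as described above, and
# that the quoted 800Gb/s applies per GPU (one ConnectX-8 SuperNIC each).
# Rack-level totals are illustrative, not vendor-published numbers.
HBM3E_PER_NODE_TB = 2.1        # HBM3e per 8-GPU node, as quoted
FABRIC_GBPS_PER_GPU = 800      # assumed per-GPU fabric throughput
NODES_PER_RACK = 18
GPUS_PER_NODE = 8

rack_hbm_tb = HBM3E_PER_NODE_TB * NODES_PER_RACK
rack_fabric_tbps = FABRIC_GBPS_PER_GPU * GPUS_PER_NODE * NODES_PER_RACK / 1000

print(f"HBM3e per rack: {rack_hbm_tb:.1f} TB")                      # 37.8 TB
print(f"Aggregate fabric bandwidth: {rack_fabric_tbps:.1f} Tb/s")   # 115.2 Tb/s
```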

Supermicro's focus on total cost of ownership, efficiency, and serviceability shines through. Its DLC-2 technology allows data centres to cut power consumption by up to 40%, reduce water usage through 45°C warm-water operation, and eliminate the need for chilled water and compressors. Pre-validated, the systems streamline deployment for hyperscale, enterprise, and government customers.
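
Those efficiency claims can be sanity-checked with a simple model. The sketch below applies the 98% heat-capture and up-to-40% power-saving figures quoted in this article to a hypothetical rack; the 100 kW IT load and the air-cooled baseline are invented for illustration only and are not Supermicro data:

```python
# Hypothetical illustration of the DLC-2 efficiency figures quoted in the article.
# The 100 kW IT load and the air-cooled baseline are assumptions, not vendor data.
IT_LOAD_KW = 100.0          # hypothetical rack IT load
HEAT_CAPTURE = 0.98         # share of heat captured by the liquid loop (quoted)
POWER_SAVING = 0.40         # upper bound on power saving (quoted "up to 40%")

liquid_heat_kw = IT_LOAD_KW * HEAT_CAPTURE   # heat removed by 45°C warm water
air_heat_kw = IT_LOAD_KW - liquid_heat_kw    # residual heat handled by room air

baseline_facility_kw = 150.0                 # assumed air-cooled baseline draw
dlc2_facility_kw = baseline_facility_kw * (1 - POWER_SAVING)

print(f"Heat to liquid loop: {liquid_heat_kw:.0f} kW, to air: {air_heat_kw:.0f} kW")
print(f"Facility power: {baseline_facility_kw:.0f} kW -> {dlc2_facility_kw:.0f} kW (best case)")
```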

The introduction extends Supermicro's broader NVIDIA Blackwell portfolio, which also includes the NVIDIA GB300 NVL72 and NVIDIA HGX B200 platforms, among others. Each is certified for optimal AI application performance, offering secure scalability from single nodes to complete AI infrastructures.
