GPU SuperServer B300

A high-density, liquid-cooled 4U AI server featuring eight NVIDIA HGX B300 GPUs, dual Intel Xeon 6700-series processors, DDR5 memory, and 800 GbE networking. Designed for large-scale AI training, LLMs, and HPC workloads that demand maximum performance and bandwidth.

GPU 8× NVIDIA® HGX™ B300
CPU Dual Intel® Xeon® 6700-series with P-cores
RAM 32× DDR5 DIMM slots
Storage 8× E1.S NVMe bays + 2× M.2 NVMe
Cooling Direct Liquid Cooling
Power 4× Redundant 6600W Titanium
Network 8× 800GbE OSFP
Applications AI Training, LLM, Generative AI
Chassis 4U Rackmount

Description

Push the limits of AI and high-performance computing with the Supermicro SYS-422GS-NB3RT-ALC, a next-generation HGX B300 platform engineered for the most demanding AI workloads. Featuring 8 NVIDIA HGX B300 GPUs, dual Intel Xeon 6700-series processors, and a high-density DDR5 memory architecture, this system is purpose-built for large-scale AI training, LLMs, generative AI, and advanced HPC applications.

Designed around direct-to-chip liquid cooling, the SYS-422GS-NB3RT-ALC delivers sustained peak performance while maintaining thermal efficiency at extreme power densities. High-bandwidth NVLink and NVSwitch interconnects enable ultra-fast GPU-to-GPU communication, while 800 GbE networking via OSFP ensures the system integrates seamlessly into modern AI fabrics and multi-node clusters.
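As a rough illustration of what the 8× 800 GbE figure means for cluster design, the per-port line rates can be converted into an aggregate node bandwidth. A minimal back-of-the-envelope sketch (not from the datasheet; the model size and the idealized, zero-overhead transfer are assumptions for illustration):

```python
# Back-of-the-envelope fabric math for a node with 8x 800 GbE OSFP ports.
# Real throughput depends on protocol overhead, topology, and collective tuning.

NICS = 8
GBPS_PER_NIC = 800                      # line rate per OSFP port, gigabits/s

aggregate_gbps = NICS * GBPS_PER_NIC    # total line rate in Gb/s
aggregate_gBps = aggregate_gbps / 8     # same figure in gigabytes/s

# Idealized lower bound: time to move the BF16 weights (2 bytes/param) of a
# hypothetical 70B-parameter model off the node at full line rate, ignoring
# all protocol and software overhead.
model_bytes = 70e9 * 2
seconds = model_bytes / (aggregate_gBps * 1e9)

print(f"Aggregate: {aggregate_gbps} Gb/s ({aggregate_gBps:.0f} GB/s)")
print(f"Ideal transfer of {model_bytes / 1e9:.0f} GB of weights: {seconds:.3f} s")
```

Numbers like these are lower bounds only; they show why full-bandwidth fabrics matter for multi-node training, where gradient and weight traffic of this scale moves every step.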

With flexible NVMe storage options, redundant Titanium-grade power supplies, and Supermicro’s data-center-ready engineering, this platform provides a powerful, scalable foundation for organizations building next-generation AI infrastructure. It is built not only to run today’s models, but to handle what comes next.
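For facility planning, the listed 4× 6600 W Titanium supplies can be turned into a rough power envelope. A sketch under stated assumptions: the 2+2 redundancy scheme, the 96% efficiency figure, and the example DC load are illustrative, not datasheet values.

```python
# Rough power-envelope math for a server with 4x 6600 W Titanium PSUs.
# Redundancy scheme and load figures below are assumptions for illustration.

PSUS = 4
WATTS_PER_PSU = 6600
TITANIUM_EFFICIENCY = 0.96       # typical 80 PLUS Titanium efficiency near 50% load

total_capacity_w = PSUS * WATTS_PER_PSU     # installed PSU capacity
usable_w_2plus2 = total_capacity_w // 2     # usable if configured 2+2 redundant

# Wall (AC) draw for a hypothetical 12 kW DC load at the assumed efficiency.
dc_load_w = 12000
wall_draw_w = dc_load_w / TITANIUM_EFFICIENCY

print(f"Installed: {total_capacity_w} W, usable (2+2): {usable_w_2plus2} W")
print(f"Wall draw at {dc_load_w} W DC load: {wall_draw_w:.0f} W")
```

The usable figure, not the installed total, is what should drive rack power and cooling budgets when redundancy must survive a feed failure.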

Specifications

Applications AI Training, LLM, Generative AI
Chassis 4U Rackmount
CPU Dual Intel® Xeon® 6700-series with P-cores
Cooling Direct Liquid Cooling
GPU 8× NVIDIA® HGX™ B300
Network Ports 8× 800GbE OSFP
Power Supply 4× Redundant 6600W Titanium
RAM 32× DDR5 DIMM slots
Storage 8× Hot-swap E1.S NVMe bays + 2× M.2 NVMe