
NVIDIA HGX Solutions: B300, B200, & H200

Value Proposition

Extreme HPC Performance

NVIDIA HGX B300/B200/H200 8-GPU platforms deliver exceptional HPC performance and industry-leading FP16 and FP8 throughput for mixed-precision AI training and inference.

Value Proposition

NVIDIA NVLink

NVIDIA NVLink, exclusive to NVIDIA HGX and DGX systems, delivers unparalleled multi-GPU interconnect bandwidth for HPC and AI workloads. This extreme GPU-to-GPU interconnect is unmatched by traditional PCIe GPU servers.

Value Proposition

NVIDIA NVSwitch

NVIDIA NVSwitch™ creates a unified networking fabric that allows the nodes of multiple NVIDIA HGX systems to function as a single cohesive GPU. NVSwitch enables enterprises to build complete AI factories.

NVIDIA HGX B300 Platforms


NVIDIA HGX B300 Dual Intel Xeon 6700 8U Server

TS4-198437339

Highlights
CPU: 2x Intel Xeon 6700E/6700P
GPU: 8x NVIDIA B300 SXM5 288GB HBM3e
MEM: 32x DDR5 ECC (Up to 4TB)
STO: 12x 2.5" NVMe Hot-Swap
NET: 2x 1000BASE-T + 8x 800Gbps OSFP InfiniBand + 4x PCIe 5.0 x16 FHHL Slots

NVIDIA HGX B300 Dual Intel Xeon 6700 8U Server

TS4-104366747

Highlights
CPU: 2x Intel Xeon 6700E/6700P
GPU: 8x NVIDIA B300 SXM5 288GB HBM3e
MEM: 32x DDR5 ECC (Up to 4TB)
STO: 8x E1.S NVMe Hot-Swap
NET: 1x 10GBASE-T + 8x 800Gbps OSFP InfiniBand + 2x PCIe 5.0 x16 FHHL Slots

NVIDIA HGX B300 Dual Intel Xeon 6700 8U Server

TS4-131188947

Highlights
CPU: 2x Intel Xeon 6700E/6700P
GPU: 8x NVIDIA B300 SXM5 288GB HBM3e
MEM: 32x DDR5 ECC (Up to 4TB)
STO: 8x 2.5" NVMe Hot-Swap
NET: 1x 10GBASE-T + 8x 800Gbps OSFP InfiniBand + 4x PCIe 4.0 x16 FHHL Slots

NVIDIA HGX B200 Platforms


NVIDIA HGX B200 Dual AMD EPYC 9005/9004 8U Server

TS4-142487900

Highlights
CPU: 2x AMD EPYC 9005/9004
GPU: 8x NVIDIA B200 SXM5 192GB HBM3e
MEM: 24x DDR5 ECC (Up to 3TB)
STO: 10x 2.5" NVMe Hot-Swap
NET: 2x 1000BASE-T + 8x HHHL & 1x FHHL PCIe 5.0 x16 Slots

NVIDIA HGX B200 Dual Intel Xeon Scalable 8U Server

TS4-128275176

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable
GPU: 8x NVIDIA B200 SXM5 192GB HBM3e
MEM: 24x DDR5 ECC (Up to 3TB)
STO: 12x 2.5" Hot-Swap (8x NVMe + 2x NVMe/SATA + 2x SATA)
NET: 2x 1000BASE-T + 8x HHHL & 2x FHHL PCIe 5.0 x16 Slots

NVIDIA HGX B200 Dual Intel Xeon 6900 10U Server

TS4-169219634

Highlights
CPU: 2x Intel Xeon 6900P
GPU: 8x NVIDIA B200 SXM5 192GB HBM3e
MEM: 32x DDR5 ECC (Up to 4TB)
STO: 10x 2.5" NVMe Hot-Swap + 8x opt. E1.S NVMe Hot-Swap
NET: 2x 10GBASE-T + 10x HHHL PCIe 5.0 x16 Slots

NVIDIA HGX H200 Platforms


NVIDIA HGX H200 Dual Intel Xeon Scalable 3U Server

TS4-147416516

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable
GPU: 4x NVIDIA H200 SXM5 141GB HBM3e
MEM: 16x DDR5 ECC (Up to 2TB)
STO: 8x 2.5" NVMe & SATA Hot-Swap
NET: 2x 10GBASE-T + 6x LP PCIe 5.0 x16 Slots

NVIDIA HGX H200 Dual Intel Xeon Scalable 6U Server

TS4-127744315

Highlights
CPU: 2x 4th/5th Gen Intel Xeon Scalable
GPU: 8x NVIDIA H200 SXM5 141GB HBM3e
MEM: 32x DDR5 ECC (Up to 4TB)
STO: 12x 2.5" Hot-Swap
NET: 2x 1000BASE-T + 10x PCIe 5.0 x16 Slots

NVIDIA HGX H200 Dual AMD EPYC 9005/9004 6U Server

TS4-110455529

Highlights
CPU: 2x AMD EPYC 9005/9004
GPU: 8x NVIDIA H200 SXM5 141GB HBM3e
MEM: 24x DDR5 ECC (Up to 3TB)
STO: 12x 2.5" U.2 NVMe Hot-Swap
NET: 2x 1000BASE-T + 9x LP PCIe 5.0 x16 Slots

Build your ideal system

Need a bit of help? Contact our sales engineers directly.

NVIDIA HGX B300: Enterprise-Grade AI Performance for LLM Workloads

NVIDIA HGX B300 systems deliver up to 11x higher inference performance for large language models like Llama 3.1 405B compared to the previous generation. Blackwell Tensor Core technology and the second-generation Transformer Engine with FP8 precision enable 4x faster training while ensuring efficient scalability through fifth-generation NVLink (1.8 TB/s GPU-to-GPU interconnect) and InfiniBand networking—ideal for enterprises deploying advanced AI workloads at scale.

  • Input Sequence Length = 32,768 with Output Sequence Length 1,028
  • Per-GPU Performance from an HGX H100 system vs HGX B200 system

NVIDIA HGX B200: Exceptional Performance for LLM Workloads

NVIDIA HGX B200 systems deliver up to 15x higher inference performance for large language models like GPT MoE 1.8T compared to previous-generation solutions. With 3x faster training capabilities, these systems leverage Blackwell Tensor Core technology, the second-generation Transformer Engine, and advanced networking infrastructure to ensure optimal scalability for enterprise AI deployments.

  • Input Sequence Length = 32,768 with Output Sequence Length 1,028
  • Per-GPU Performance from an HGX H100 system vs an HGX H200 system
  • 32,768 GPU scale: 4,096x HGX H100 systems 400G InfiniBand network vs 4,096x HGX B200 systems 400G InfiniBand network.

HGX B300, B200, H200 Specifications

AI can Accelerate the Workflow of any Industry

Start Training Your Own AI Model Today

NVIDIA AI Enterprise enables users to harness the power of AI through an optimized and streamlined development and deployment framework. Coupled with NVIDIA Enterprise Support and Training Services, developers can work with experienced professionals who assist with and teach AI best practices. Train and deploy the best AI model, tailored to your deep learning goals, today.


Partnerships

NVIDIA
AMD
Ampere
PNY
WEKA
DDN