NVIDIA H100 NVL NVH100NVLTCGPU-KIT Tensor Core GPU

Was $30,537.12 SAVE 4%
$29,450.00

The NVIDIA H100 NVL Tensor Core GPU is built for AI, deep learning, and high-performance computing at scale. Designed for data centers and enterprise environments, it delivers up to 5X faster performance on Llama 2 70B compared to NVIDIA A100 systems, with improved power efficiency and reduced latency. With 1.5X higher throughput than the H100 PCIe, the H100 NVL offers top-tier performance across large AI models and intensive training tasks. It combines NVLink connectivity, enhanced memory bandwidth, and high compute density, making it the most capable GPU in the H100 series for large-scale inference and training workflows.

Specifications:

FP64: 30 teraFLOPS
FP64 Tensor Core: 60 teraFLOPS
FP32: 60 teraFLOPS
TF32 Tensor Core: 835 teraFLOPS
BFLOAT16 Tensor Core: 1,671 teraFLOPS
FP16 Tensor Core: 1,671 teraFLOPS
FP8 Tensor Core: 3,341 teraFLOPS
INT8 Tensor Core: 3,341 TOPS
GPU Memory: 94GB
GPU Memory Bandwidth: 3.9TB/s
Decoders: 7 NVDEC, 7 JPEG
Max Thermal Design Power (TDP): 350-400W (configurable)
Form Factor: PCIe dual-slot air-cooled
Interconnect: NVIDIA NVLink 600GB/s; PCIe Gen5 128GB/s
Server Options: Partner and NVIDIA-Certified Systems
NVIDIA AI Enterprise: Included

Designed to operate in power-constrained data centers, the H100 NVL balances performance with energy efficiency, ensuring that high-density AI workloads remain cost-effective.
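
For anyone who wants to confirm what the card reports once installed, below is a minimal verification sketch (not part of this listing, and assuming the CUDA toolkit is installed and the H100 NVL is device index 0). It uses the CUDA runtime call cudaGetDeviceProperties to print the device name, total memory, and compute capability, which can be compared against the 94GB figure quoted above.

// Minimal verification sketch (assumptions: CUDA toolkit installed, card at device index 0).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        // Report any runtime error (e.g. no CUDA device present) and exit.
        std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("Device:             %s\n", prop.name);
    std::printf("Global memory:      %.1f GB\n", prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    std::printf("Memory bus width:   %d-bit\n", prop.memoryBusWidth);
    std::printf("Multiprocessors:    %d\n", prop.multiProcessorCount);
    std::printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    return 0;
}

Compile with nvcc (for example: nvcc query_device.cu -o query_device) and run on the target system; nvidia-smi offers a quicker, no-code alternative for the same check.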
