NVIDIA® DGX™ H200

The World’s Proven Choice for Enterprise AI

NVIDIA DGX H200 powers business innovation and optimization. As a part of NVIDIA's legendary DGX platform and the foundation of NVIDIA DGX SuperPOD and DGX BasePOD, DGX H200 is an AI powerhouse that features the groundbreaking NVIDIA H200 Tensor Core GPU. The system is designed to maximize AI throughput, providing enterprises with a highly refined, systemized, and scalable platform to help them achieve breakthroughs in natural language processing, recommender systems, data analytics, and much more. Available on-premises and through a wide variety of access and deployment options, DGX H200 delivers the performance needed for enterprises to solve the biggest challenges with AI.

Powered by NVIDIA Base Command

NVIDIA Base Command powers the DGX platform, enabling organizations to leverage the best of NVIDIA software innovation. Enterprises can unleash the full potential of their DGX infrastructure with a proven platform that includes enterprise-grade orchestration and cluster management, libraries that accelerate compute, storage, and network infrastructure, and an operating system optimized for AI workloads. Additionally, DGX infrastructure includes NVIDIA AI Enterprise, a suite of software optimized to streamline AI development and deployment.

Break Through the Barriers to AI at Scale

NVIDIA DGX H200 breaks the limits of AI scale and performance. It delivers 32 petaFLOPS of AI performance, 2X faster networking than DGX A100 with NVIDIA ConnectX®-7 smart network interface cards (SmartNICs), and high-speed scalability for NVIDIA DGX SuperPOD and DGX BasePOD. DGX H200 is supercharged with 1,128GB of GPU memory for the largest, most complex AI training and inference jobs, such as generative AI, natural language processing, and deep learning recommendation models.

ACCESS OUR AI LAB

Experience our HPC and AI solutions: modular configurations for multiple users and diverse workloads, a reference architecture for quick deployment and updates, and easy-to-manage setups that support workload orchestration.