Microsoft today announced Azure’s most powerful AI virtual machine series to date. The new ND H100 v5 VM is available on demand in sizes ranging from eight to thousands of NVIDIA H100 GPUs interconnected by NVIDIA Quantum-2 InfiniBand networking. Microsoft says Azure customers can expect significantly faster AI model performance than with the previous-generation ND A100 v4 VMs.
ND H100 v5 VM features:
- 8x NVIDIA H100 Tensor Core GPUs interconnected via next-gen NVSwitch and NVLink 4.0
- 400 Gb/s NVIDIA Quantum-2 CX7 InfiniBand per GPU, for 3.2 Tb/s per VM in a non-blocking fat-tree network
- NVSwitch and NVLink 4.0 with 3.6 TB/s bisection bandwidth among the 8 local GPUs within each VM
- 4th Gen Intel Xeon Scalable processors
- PCIe Gen5 host-to-GPU interconnect with 64 GB/s of bandwidth per GPU
- 16 channels of 4800 MHz DDR5 DIMMs
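
To give a sense of how these specifications surface to developers, below is a minimal sketch (not from the announcement) of a data-parallel PyTorch training loop on one of these 8-GPU VMs. The model, batch size, and hyperparameters are placeholders; the point is that NCCL handles the GPU-to-GPU communication, riding NVLink/NVSwitch inside a VM and Quantum-2 InfiniBand when a job spans multiple VMs.

```python
# Hypothetical sketch of data-parallel training on an 8x H100 VM.
# Launch on one VM with: torchrun --nproc_per_node=8 train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # NCCL uses NVLink/NVSwitch between the 8 local GPUs and
    # InfiniBand when the job spans multiple ND H100 v5 VMs.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    # Toy model standing in for a real generative model.
    model = torch.nn.Linear(4096, 4096).to(device)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 4096, device=device)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()   # gradients are all-reduced across GPUs here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with `torchrun` on a single VM, or with a multi-node launcher across many VMs, the same script scales from eight GPUs toward the thousands the series supports.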
With the supercomputing capabilities of these new ND H100 v5 VMs, startups and companies of all sizes can develop generative AI applications without the capital outlay of massive physical hardware or software investments.
“NVIDIA and Microsoft Azure have collaborated through multiple generations of products to bring leading AI innovations to enterprises around the world. The NDv5 H100 virtual machines will help power a new era of generative AI applications and services.”—Ian Buck, Vice President of hyperscale and high-performance computing at NVIDIA.
The ND H100 v5 is available in preview, and you can request access here.