Enterprise AI infrastructure that improves on traditional approaches. Enterprises, developers, data scientists, and researchers need a new platform that unifies all AI workloads, simplifies infrastructure, and accelerates ROI.
NVIDIA DGX™ A100 is the universal system for all AI workloads—from analytics to training to inference. DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor, replacing legacy compute infrastructure with a single, unified system.
Available with up to 640 gigabytes (GB) of total GPU memory, DGX A100 increases performance in large-scale training jobs by up to 3X and doubles the size of MIG instances, letting it tackle the largest and most complex jobs as well as the smallest and simplest.
NVIDIA DGX A100 is more than a server. It’s a complete hardware and software platform built upon the knowledge gained from the world’s largest DGX proving ground—NVIDIA DGX SATURNV—and backed by thousands of DGXperts at NVIDIA.
DGX A100 key technologies:
- The world’s largest 7nm chip. Built on a 7nm process, the NVIDIA Ampere architecture delivers powerful GPUs that go beyond what Moore’s law promises.
- 3rd-generation NVLink and NVSwitch. NVIDIA® NVLink® provides the high-speed GPU-to-GPU interconnect that lets multiple GPUs act as a single accelerator. NVIDIA NVSwitch® incorporates multiple NVLink connections, ensuring every GPU can communicate with every other GPU at full NVLink speed.
- 3rd-generation Tensor Cores. The NVIDIA Ampere architecture provides a huge performance boost and delivers new precisions to cover the full spectrum required by researchers—TF32, FP64, FP16, INT8, and INT4—accelerating and simplifying AI adoption and extending the power of NVIDIA Tensor Cores to HPC.
- Sparsity acceleration. The third-generation Tensor Cores in NVIDIA A100 GPUs take advantage of fine-grained structured sparsity in network weights, delivering up to 2X the maximum throughput of dense math without sacrificing the accuracy of the matrix multiply-accumulate operations at the heart of deep learning.
- New Multi-Instance GPU (MIG). MIG allows the NVIDIA A100 GPU to be securely partitioned into as many as seven separate GPU instances for CUDA applications, giving multiple users their own GPU resources for optimal utilization.
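To make the TF32 precision mentioned above concrete: TF32 keeps FP32's 8-bit exponent and dynamic range but uses only 10 mantissa bits. A minimal NumPy sketch of that rounding behavior (illustrative only; the function name `round_to_tf32` is ours, the real conversion happens inside the Tensor Cores in hardware, and edge cases such as infinities and NaNs are ignored here):

```python
import numpy as np

def round_to_tf32(x: np.ndarray) -> np.ndarray:
    """Simulate TF32 by rounding float32 mantissas down to 10 bits.

    TF32 keeps FP32's 8-bit exponent but only 10 mantissa bits, so the
    low 13 mantissa bits of each float32 value are rounded away.
    """
    bits = x.astype(np.float32).view(np.uint32)
    # Round-to-nearest on the 13 discarded bits, then clear them.
    bits = (bits + np.uint32(0x1000)) & np.uint32(0xFFFFE000)
    return bits.view(np.float32)

# 1 + 2**-12 is exactly representable in FP32 but not in TF32,
# so it rounds to 1.0; 1.5 fits in 10 mantissa bits and is unchanged.
print(round_to_tf32(np.array([1.0 + 2**-12, 1.5], dtype=np.float32)))
```

This is why TF32 can stand in for FP32 in training: the range of representable magnitudes is identical, and only the least significant mantissa bits are dropped.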
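The fine-grained structured sparsity the A100 accelerates is the 2:4 pattern: in every group of four consecutive weights, at most two are nonzero. A small NumPy sketch of magnitude-based 2:4 pruning (an illustration of the weight pattern only; real deployments typically fine-tune the network after pruning, and the hardware itself does the 2X-throughput sparse math):

```python
import numpy as np

def prune_2_to_4(w: np.ndarray) -> np.ndarray:
    """Enforce 2:4 structured sparsity: in every group of 4 consecutive
    weights, keep the 2 largest-magnitude entries and zero the other 2.

    Assumes w.size is a multiple of 4.
    """
    groups = w.reshape(-1, 4).copy()
    # Indices of the 2 smallest-magnitude entries in each group of 4.
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    np.put_along_axis(groups, drop, 0.0, axis=1)
    return groups.reshape(w.shape)

w = np.array([0.9, -0.1, 0.05, -0.8, 0.2, 0.3, -0.4, 0.01],
             dtype=np.float32)
print(prune_2_to_4(w))  # each group of 4 keeps exactly 2 nonzeros
```

Because the nonzeros follow a fixed 2-of-4 pattern rather than being scattered arbitrarily, the hardware can skip the zeroed multiplies with simple, regular indexing, which is what makes the 2X dense-math throughput achievable.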
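As a configuration sketch of how MIG partitioning is typically administered with the standard `nvidia-smi` tool (profile names depend on GPU memory size; `1g.10gb` is assumed here for an 80 GB A100, while a 40 GB A100 would use `1g.5gb`):

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports
sudo nvidia-smi mig -lgip

# Create seven 1g.10gb GPU instances, each with a compute instance (-C)
sudo nvidia-smi mig -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# Each MIG instance now appears to CUDA applications as a separate device
nvidia-smi -L
```

Each instance has its own memory, cache, and compute slices, which is what lets multiple users share one A100 with hardware-level isolation.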