NVIDIA Announces New Generation of Computing Platform "HGX-2"

NVIDIA's next-generation GeForce gaming cards have been slow to arrive. On one hand, there is little competitive pressure; on the other, NVIDIA's business focus has long since shifted. The company is no longer fixated on gaming cards but is increasingly concerned with computing platforms, where there is more room for growth and the margins are far higher.

NVIDIA today announced its next-generation computing platform, the HGX-2, which houses up to 16 Tesla V100 flagship compute cards and, for the first time, unifies AI (artificial intelligence) and HPC (high-performance computing) in a single architecture.

The Tesla V100 is based on the "Volta" GPU architecture and is manufactured on TSMC's 12nm FFN process (an enhanced 16nm node). It packs 5120 CUDA cores and 640 Tensor cores for deep learning into an 815-square-millimeter die integrating 21 billion transistors.

Its peak floating-point performance reaches 30 TFlops at half precision, 15 TFlops at single precision, and 7.5 TFlops at double precision, with Tensor-core performance of 120 TFlops. It is paired with 16GB of HBM2 high-bandwidth memory on a 4096-bit bus at an effective 1.75GHz, for 900GB/s of bandwidth.
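These headline numbers can be roughly reproduced from the core counts above with back-of-the-envelope arithmetic. The sketch below assumes a boost clock of about 1.53GHz (not stated in the article) and the usual Volta throughput ratios; the results land close to, but not exactly on, the rounded figures NVIDIA quotes.

```python
# Rough check of the Tesla V100 peak-throughput figures quoted above.
# ASSUMPTION: ~1.53 GHz boost clock; NVIDIA quotes peaks at boost.
CUDA_CORES = 5120
TENSOR_CORES = 640
BOOST_CLOCK_GHZ = 1.53  # assumed, not from the article

# Each CUDA core performs one fused multiply-add (2 FLOPs) per cycle in FP32.
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_GHZ / 1000
fp64_tflops = fp32_tflops / 2   # FP64 rate is half of FP32 on Volta
fp16_tflops = fp32_tflops * 2   # FP16 rate is twice FP32

# Each Tensor core performs a 4x4x4 matrix FMA: 64 MACs = 128 FLOPs per cycle.
tensor_tflops = TENSOR_CORES * 128 * BOOST_CLOCK_GHZ / 1000

# HBM2 bandwidth: 4096-bit bus at 1.75 GT/s effective, in GB/s.
hbm2_gbps = 4096 / 8 * 1.75

print(round(fp32_tflops, 1))   # ~15.7, vs the quoted 15 TFlops
print(round(tensor_tflops))    # ~125, vs the quoted 120 TFlops
print(hbm2_gbps)               # 896, rounded up to "900GB/s"
```

The small gaps come from rounding in the marketing numbers and from the assumed clock.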

NVIDIA's previous computing platform, the HGX-1, integrates eight Tesla V100s for a total of 40960 CUDA cores, 5120 Tensor cores, and 256GB of video memory, connected via an NVLink bus with 300GB/s of bidirectional bandwidth. It delivers 125 TFlops of single-precision floating-point performance, 62 TFlops at double precision, and 1 PFlops of Tensor performance.

The new-generation HGX-2 uses 16 Tesla V100s, doubling both scale and performance: 81920 CUDA cores in total, 10240 Tensor cores, 512GB of memory, 250 TFlops of single-precision floating-point performance, 125 TFlops at double precision, and 2 PFlops of Tensor performance.
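The HGX-2 totals are simply the per-GPU figures multiplied by 16, as a quick sketch confirms. Note that the 512GB total implies the 32GB-per-card HBM2 variant of the V100, not the 16GB variant mentioned earlier; the dictionary below is illustrative, not an API.

```python
# Per-GPU Tesla V100 figures; aggregating over 16 cards reproduces the
# HGX-2 totals quoted in the article.
V100 = {
    "cuda_cores": 5120,
    "tensor_cores": 640,
    "memory_gb": 32,            # HGX-2 uses the 32GB HBM2 variant (16 x 32 = 512GB)
    "fp32_tflops": 15.625,      # 250 / 16
    "fp64_tflops": 7.8125,      # 125 / 16
    "tensor_tflops": 125,       # 2000 / 16
}
GPUS = 16

totals = {key: value * GPUS for key, value in V100.items()}
print(totals["cuda_cores"])         # 81920
print(totals["tensor_cores"])       # 10240
print(totals["memory_gb"])          # 512
print(totals["fp32_tflops"])        # 250.0
print(totals["tensor_tflops"] / 1000)  # 2.0 PFlops
```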

NVIDIA also deploys 12 NVSwitches in the system for direct GPU-to-GPU interconnection, giving the NVLink fabric bidirectional bandwidth of up to 2.4TB/s.
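The 2.4TB/s figure can be derived from per-GPU NVLink bandwidth, assuming (as the HGX-1 figure above suggests) that each V100 exposes six NVLink 2.0 links at 50GB/s bidirectional each:

```python
# Sketch of where the 2.4 TB/s fabric figure comes from.
# ASSUMPTION: 6 NVLink 2.0 links per V100 at 50 GB/s bidirectional each.
LINKS_PER_GPU = 6
GBPS_PER_LINK_BIDIR = 50
GPUS = 16

per_gpu_bidir = LINKS_PER_GPU * GBPS_PER_LINK_BIDIR  # 300 GB/s, matches HGX-1
# Bisection view: 16 GPUs, each pushing 150 GB/s one way across the switches.
bisection_tbps = GPUS * per_gpu_bidir / 2 / 1000
print(per_gpu_bidir)   # 300
print(bisection_tbps)  # 2.4
```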

Beyond raw performance, the greatest advantage of the HGX-2 platform is its support for calculations at multiple precisions, which can be matched to different workloads: scientific computing and simulation can use high-precision FP64 and FP32 arithmetic, while AI training and inference can use FP16 floating-point and INT8 integer arithmetic.

Lenovo, QCT, Supermicro, and Wiwynn will all release their own HGX-2 systems later this year.

Four major ODMs, Foxconn, Inventec, Quanta, and Wistron, are also designing HGX-2-based systems, which will be available later this year for use in cloud-computing data centers.
