NVIDIA Shares Blackwell GPU Compute Stats: 30% More FP64 Than Hopper, 30x Faster In Simulation & Science, 18X Faster Than CPUs

NVIDIA has shared more performance statistics for its next-generation Blackwell GPU architecture, which has taken the industry by storm. The figures cover science, simulation, and AI workloads, with comparisons against the outgoing Hopper chips and against competing x86 CPUs when paired in Grace-powered Superchip modules.

NVIDIA's monumental performance advantage with Blackwell GPUs isn't limited to AI; science and simulation workloads see a huge boost as well.

In a new blog post, NVIDIA details how Blackwell GPUs will bring more performance to research segments including quantum computing, drug discovery, fusion energy, physics-based simulations, scientific computing, and more. When the architecture was originally announced at GTC 2024, the company showcased some big numbers, but we have yet to get a proper look at the architecture itself. While we wait for that, the company has shared more data to chew on.

Starting with the details, NVIDIA's main goal with its Blackwell GPU architecture is to reduce cost and energy requirements. NVIDIA says the Blackwell platform can simulate weather patterns at 200x less cost and 300x less energy, while digital twin simulations encompassing the entire planet can be run with 65x less cost and 58x less energy.

Image source: NVIDIA

NVIDIA also highlighted the double-precision (FP64) floating-point capabilities of its Blackwell GPUs, which are rated at roughly 30% more TFLOPs than Hopper. A single Hopper H100 GPU offers about 34 TFLOPs of FP64 compute, while a Blackwell B100 GPU offers about 45 TFLOPs. Blackwell will mostly ship as the GB200 Superchip, which pairs two GPUs with a Grace CPU to deliver nearly 90 TFLOPs of FP64 compute. On a single-chip basis, it sits behind AMD's Instinct MI300X and MI300A accelerators, which offer 81.7 and 61.3 TFLOPs of FP64 compute per chip, respectively.

Image source: NVIDIA
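As a quick back-of-the-envelope sanity check on those figures (purely illustrative, using the approximate TFLOP ratings quoted above), the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the FP64 figures quoted above.
# All TFLOP ratings are the approximate values cited in this article.

H100_FP64_TFLOPS = 34.0   # single Hopper H100
B100_FP64_TFLOPS = 45.0   # single Blackwell B100

# Generational uplift of a single Blackwell GPU over Hopper (~30%)
uplift_pct = (B100_FP64_TFLOPS / H100_FP64_TFLOPS - 1) * 100
print(f"B100 vs H100 FP64 uplift: ~{uplift_pct:.0f}%")           # ~32%

# A GB200 Superchip pairs two Blackwell GPUs with one Grace CPU,
# so its GPU FP64 throughput is roughly that of two B100s.
gb200_fp64 = 2 * B100_FP64_TFLOPS
print(f"GB200 Superchip FP64: ~{gb200_fp64:.0f} TFLOPS")         # ~90 TFLOPS

# Per-chip comparison against AMD's Instinct accelerators (rated FP64)
MI300X_FP64_TFLOPS = 81.7
MI300A_FP64_TFLOPS = 61.3
print(f"B100 vs MI300X per chip: {B100_FP64_TFLOPS / MI300X_FP64_TFLOPS:.2f}x")
print(f"B100 vs MI300A per chip: {B100_FP64_TFLOPS / MI300A_FP64_TFLOPS:.2f}x")
```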

While NVIDIA's Blackwell GPUs trail in traditional dense floating-point performance on a per-chip basis, that shouldn't undermine their computing capabilities. The company demonstrated the Cadence SpectreX simulator running 13x faster on the Blackwell GB200, along with 22x gains in CFD (Computational Fluid Dynamics) workloads compared to ASICs and traditional CPUs. The chip is even faster than A100 and Grace Hopper (GH200) based systems.

Image source: NVIDIA

Shifting gears back to AI performance, NVIDIA's Blackwell GB200 platform once again reigns supreme with a 30x increase over the H100 in GPT inference (1.8 trillion parameters). The GB200 NVL72 platform delivers 30x higher throughput while achieving 25x higher energy efficiency and a 25x lower TCO (Total Cost of Ownership). Even pitting the GB200 NVL72 system against 72 x86 CPUs yields an 18x advantage for the Blackwell system, along with a 3.27x advantage over the GH200 NVL72 system in database join queries.
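As a rough illustration of how those two headline numbers relate, assuming "energy efficiency" here means throughput per watt (an interpretation on our part, not something NVIDIA spells out):

```python
# Relating the quoted GB200 NVL72 vs H100 figures, assuming "energy
# efficiency" means throughput per watt (our assumption, not NVIDIA's wording).
throughput_gain = 30   # 30x higher LLM inference throughput
efficiency_gain = 25   # 25x higher throughput per watt

# If throughput rises 30x while throughput-per-watt rises 25x, the implied
# system power draw grows by the ratio of the two.
implied_power_ratio = throughput_gain / efficiency_gain
print(f"Implied power increase: {implied_power_ratio:.1f}x")   # 1.2x
```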

The NVIDIA Grace Hopper GH200 platform continues to win supercomputer deals

With all the chatter around Blackwell GPUs, one might expect Hopper to be forgotten, but that's not the case at all. The NVIDIA Grace Hopper GH200 Superchip platform remains the undisputed king of the AI segment and currently powers nine supercomputers around the planet with a combined AI computing capacity of 200 exaflops, or 200 quintillion calculations per second.
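For context on what 200 exaflops means in raw terms, here is a minimal sketch assuming the standard SI prefix, i.e. 1 exaflop = 10^18 operations per second:

```python
# Convert the quoted aggregate AI performance into raw operations per second.
EXA = 10**18                 # standard SI prefix: 1 exaflop = 10^18 FLOP/s
total_ai_exaflops = 200      # combined figure cited for the nine GH200 systems

ops_per_second = total_ai_exaflops * EXA
print(f"{ops_per_second:.0e} ops/s")   # 2e+20, i.e. 200 quintillion per second
```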

New Grace Hopper-based supercomputers coming online include EXA1-HE in France, from CEA and Eviden; Helios at Academic Computer Centre Cyfronet in Poland, from Hewlett Packard Enterprise (HPE); Alps at the Swiss National Supercomputing Centre, from HPE; JUPITER at the Jülich Supercomputing Centre in Germany; DeltaAI at the University of Illinois at Urbana-Champaign's National Center for Supercomputing Applications; and Miyabi at Japan's Joint Center for Advanced High Performance Computing, established between the Center for Computational Sciences at the University of Tsukuba and the Information Technology Center at the University of Tokyo.

Image source: NVIDIA

NVIDIA's GPUs are currently the product of choice for the growing demand for AI, and there seems to be no stopping that. Analysts have identified NVIDIA as a dominant force through 2024, and as Blackwell becomes available to customers, we can expect it to usher in record levels of performance in the AI segment and in NVIDIA's own revenue stream. But the company isn't slowing down anytime soon, as it is expected to start production of its next-gen Rubin R100 GPUs by late 2025, and the early specs sound insane.
