NVIDIA Expected To Mass Produce Next-Gen R100 “Rubin” GPUs In Q4 2025: TSMC N3, 8 HBM4 Stacks, 3nm Grace CPU & Focus on Power Efficiency


NVIDIA is expected to mass produce its next-generation Rubin R100 GPUs with HBM4 memory on TSMC's 3nm node by Q4 2025.

NVIDIA's next-gen Rubin R100 GPUs are said to pair HBM4 memory with a TSMC 3nm node, focusing on power efficiency while increasing AI performance.

The new information comes from TF International Securities analyst Ming-Chi Kuo, who says NVIDIA has laid the groundwork for its next-generation Rubin R100 GPUs. The chips are named after Vera Rubin, the American astronomer who contributed to our understanding of dark matter in the universe and also studied galaxy rotation rates.

Kuo says the NVIDIA Rubin R100 GPUs will be part of the R-series lineup and are expected to enter mass production in the fourth quarter of 2025, while systems such as the DGX and HGX solutions are expected to enter mass production in the first half of 2026. NVIDIA recently unveiled its next-generation Blackwell B100 GPUs, which significantly increase AI performance and mark the company's first proper chiplet design.

NVIDIA's Rubin R100 GPUs are expected to use a 4x reticle design (versus Blackwell's 3.3x) and will be built using TSMC's CoWoS-L packaging technology on the N3 process node. TSMC recently laid out plans for 5.5x reticle-size chips by 2026, which will feature a 100x100mm substrate and allow up to 12 HBM sites, compared to 8 HBM sites on current 80x80mm packages.

The semiconductor company also plans to move to a new SoIC design that will feature more than 8x the reticle size in a 120x120mm package configuration. Since those packages are still in the planning stage, the roughly 4x reticle size is the more realistic expectation for the Rubin GPUs.
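To put the package sizes above in perspective, here is a quick back-of-the-envelope sketch. The dimensions and HBM site counts (80x80mm with 8 sites, 100x100mm with 12 sites, 120x120mm for the planned SoIC) are the figures reported above; the area math itself is purely illustrative.

```python
# Compare substrate areas of the TSMC packages mentioned in the article.
# (width_mm, height_mm, hbm_sites) -- hbm_sites is None where not reported.
packages = {
    "current CoWoS (80x80mm)": (80, 80, 8),
    "2026 CoWoS (100x100mm)": (100, 100, 12),
    "planned SoIC (120x120mm)": (120, 120, None),
}

base_area = 80 * 80  # mm^2, today's package as the baseline

for name, (w, h, hbm) in packages.items():
    area = w * h
    growth = area / base_area
    hbm_note = f", {hbm} HBM sites" if hbm else ""
    print(f"{name}: {area} mm^2 ({growth:.2f}x current{hbm_note})")
```

So the 100x100mm substrate offers roughly 1.56x the area of today's packages, and the planned 120x120mm SoIC configuration roughly 2.25x, which is where the extra HBM sites come from.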

Image source: TSMC

Other information mentions that NVIDIA will use next-generation HBM4 DRAM to power its R100 GPUs. The company currently leverages the fastest HBM3E memory for its B100 GPUs and is expected to refresh these chips with HBM4 variants once that memory enters mass production in late 2025. On the HBM4 production front, both Samsung and SK hynix have revealed plans to begin development of next-generation memory solutions with 16-Hi stacks in 2025.

NVIDIA is also set to upgrade its Grace CPU for the GR200 Superchip module, which will pack two R100 GPUs based on TSMC's 3nm process alongside an upgraded Grace CPU. Currently, the Grace CPU is built on TSMC's 5nm process node, and the Grace Superchip packs two 72-core CPUs for a total of 144 cores.

A major focus for NVIDIA with its next-generation Rubin R100 GPUs will be power efficiency. The company is aware of the increasing power requirements of its data center chips and aims to deliver a significant improvement in this area while enhancing the AI capabilities of its chips. The R100 GPUs are still a long way off and we shouldn't expect them to be unveiled until next year's GTC, but if this information is correct, NVIDIA certainly has exciting developments in store for the AI and data center segment.

NVIDIA Data Center / AI GPU Roadmap

| GPU Codename | X | Rubin | Blackwell | Hopper | Ampere | Volta | Pascal |
|---|---|---|---|---|---|---|---|
| GPU Family | GX200 | GR100 | GB200 | GH200/GH100 | GA100 | GV100 | GP100 |
| GPU SKU | X100 | R100 | B100/B200 | H100/H200 | A100 | V100 | P100 |
| Memory | HBM4e? | HBM4? | HBM3e | HBM2e/HBM3/HBM3e | HBM2e | HBM2 | HBM2 |
| Launch | 202X | 2025 | 2024 | 2022-2024 | 2020-2022 | 2018 | 2016 |
