AI Craze May Have Nerfed AMD's & Intel's Upcoming Chips: Strix APUs Originally Had Big Cache Which Boosted CPU & iGPU Performance

The recent AI frenzy may have hurt some upcoming SoCs, as the likes of AMD and Intel prioritize NPUs over other core IPs.

Microsoft's demand for faster AI capabilities is leading to SoC nerfs in both the AMD and Intel camps as NPUs take precedence over other parts of the designs.

We've seen an AI explosion in the PC segment recently, with every chipmaker touting the relative AI capabilities of its chips and platforms. The segment is driven by a range of software innovations and by Microsoft: Windows Copilot has some hefty hardware requirements to support its AI functionality. Chipmakers are now betting heavily on the AI craze, and some appear to have deviated from their traditional chip development plans, prioritizing AI over other parts of the new SoCs that are due to hit the market later this year.

Image source: AMD

Over on the Anandtech Forums, member Uzzi38 reports that AMD's Strix Point APUs, launching later this year, were originally planned to be very different from the chips we'll actually be getting. It is alleged that before AMD dedicated a large AI engine block to delivering 3x "XDNA 2" NPU performance, the chip featured a large SLC (System Level Cache) that would have boosted both CPU (Zen 5) and iGPU (RDNA 3+) performance and efficiency by a wide margin. However, that is no longer the case.

Image Source: Anandtech Forums

In a follow-up comment, forum member adroc_thurston responded to Uzzi38, stating that Strix 1 (the monolithic Strix Point die) had 16 MB of MALL cache before it was dropped. Intel is said to be in a similar position with its upcoming Arrow Lake, Lunar Lake, and Panther Lake chips, which will target the AI PC segment.

Image source: Intel

These AI blocks take up a large chunk of valuable die space that could have been devoted elsewhere, such as higher core counts, larger iGPUs, wider caches, and more, but it seems the AI PC craze has forced chipmakers to shift their focus away from CPU/iGPU performance and toward the NPU side of things. For Strix Point, AMD touts a 3x NPU uplift at around 50 TOPS, while Lunar Lake is set to offer 3x the AI NPU performance of Meteor Lake (~35 TOPS) and Panther Lake is expected to roughly double that (~70 TOPS).
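As a rough sanity check, the generational multipliers quoted above can be applied to the baseline NPU figures from the platform table further down. This is simple illustrative arithmetic on the claimed uplifts, not vendor-confirmed numbers:

```python
# Illustrative sketch: projecting next-gen NPU throughput from the baseline
# TOPS figures in the article's table and the claimed generational multipliers.

def projected_tops(baseline_tops: float, multiplier: float) -> float:
    """Project NPU throughput from a baseline and a claimed uplift factor."""
    return baseline_tops * multiplier

# AMD: XDNA 1 NPU (~16 TOPS) with a claimed 3x uplift for XDNA 2
xdna2 = projected_tops(16, 3)                  # ~48 TOPS, near the ~50 TOPS claim

# Intel: Meteor Lake NPU (~11 TOPS) -> Lunar Lake at 3x, Panther Lake doubling that
lunar_lake = projected_tops(11, 3)             # ~33 TOPS, near the ~35 TOPS claim
panther_lake = projected_tops(lunar_lake, 2)   # ~66 TOPS, near the ~70 TOPS claim

print(xdna2, lunar_lake, panther_lake)
```

The projections land close to the ~50/~35/~70 TOPS figures being reported, which suggests the rumored multipliers and the table's baselines are at least internally consistent.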

Image Source: Anandtech Forums

For now, it looks like unless the AI bubble bursts (which doesn't seem to be happening anytime soon), chipmakers like AMD and Intel will increasingly devote resources to NPUs. We'll still see improvements on the CPU and GPU side of next-generation SoCs, but there will always be untapped potential for what could have been had these companies focused elsewhere than the NPU.

2024 AI PC Platforms

| Brand Name | Apple | Qualcomm | AMD | Intel |
| --- | --- | --- | --- | --- |
| CPU Name | M3 | Snapdragon X Elite | Ryzen 8040 "Hawk Point" | Meteor Lake "Core Ultra" |
| CPU Architecture | Arm | Arm | x86 | x86 |
| CPU Process | 3nm | 4nm | 4nm | 7nm (Intel 4) |
| Max CPU Cores | 16 (M3 Max) | 12 | 8 | 16 |
| NPU Architecture | In-House | Hexagon NPU | XDNA 1 NPU | Movidius NPU |
| Total AI TOPS | 18 TOPS | 75 TOPS | 38 TOPS (16 TOPS NPU) | 34 TOPS (11 TOPS NPU) |
| GPU Architecture | In-House | Adreno GPU | RDNA 3 | Arc Xe-LPG "Alchemist" |
| Max GPU Cores | 40 Cores | TBD | 12 Compute Units | 8 Xe-Cores |
| GPU TFLOPs | TBD | 4.6 TFLOPs | 8.9 TFLOPs | ~4.5 TFLOPs |
| Memory Support (Max) | LPDDR5-6400 | LPDDR5X-8533 | LPDDR5X-7500 | LPDDR5X-7467 |
| Availability | Q4 2024 | Mid 2024 | Q1 2024 | Q4 2023 |

What do you think about chipmakers prioritizing NPUs over traditional CPU and GPU performance in future SOCs?
