Celestial AI Combines HBM & DDR5 Memory To Lower Power Consumption By 90%, Could Be Used By AMD In Next-Gen Chiplets

Startup Celestial AI has developed a new interconnect solution that combines DDR5 and HBM memory to boost chiplet performance, and AMD is likely to be among the first to adopt such a design.

Celestial AI plans to break traditional interconnect barriers courtesy of silicon photonics, with a solution that combines HBM and DDR5 memory.

As with semiconductors, generational evolution has become essential for the AI industry, whether in the form of advances in hardware or in interconnect methods.

Traditional methods of connecting thousands of accelerators include NVIDIA's NVLink, conventional Ethernet, and even AMD's own Infinity Fabric. Still, these are limited in a number of ways, not only by the interconnect performance they provide but also by their limited room for expansion, which has led the industry to look for alternatives. One of those alternatives is Celestial AI's Photonic Fabric.

In an earlier post, we covered the importance of silicon photonics and how the technology, which combines lasers with silicon, has become the next big thing in the world of interconnects. Celestial AI has taken advantage of this, harnessing the strengths of the technology to develop its Photonic Fabric solution.

Image source: AMD

According to the firm's co-founder Dave Lazovsky, the Photonic Fabric has garnered widespread interest from potential clients, not only securing $175 million in its latest funding round but also backing from the likes of AMD, which shows how significant the interconnect method could prove to be.

The growth in demand for our Photonic Fabric is a product of the right technology, the right team, and the right customer engagement model.

– Dave Lazovsky, co-founder of Celestial AI

Moving on to the Photonic Fabric's capabilities, the firm revealed that the first generation of the technology can deliver 1.8 Tb/s per square millimeter, while the second iteration is expected to deliver a fourfold increase over its predecessor. However, the memory capacity limitations that come with stacking multiple HBM modules leave the interconnect somewhat constrained, though Celestial AI has proposed an attractive solution for this as well.
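As a rough sanity check on those figures, the generational scaling works out as follows. This is a minimal sketch: the 1.8 Tb/s per mm² density and the 4x uplift are the figures quoted above, while the example die area is purely a hypothetical illustration, not something the firm has disclosed.

```python
# Back-of-the-envelope bandwidth scaling for Photonic Fabric, using the
# quoted figures: 1.8 Tb/s per mm^2 for gen 1, a 4x uplift for gen 2.

GEN1_TBPS_PER_MM2 = 1.8
GEN2_MULTIPLIER = 4

# Second-generation areal bandwidth density: 4 x 1.8 = 7.2 Tb/s per mm^2
gen2_tbps_per_mm2 = GEN1_TBPS_PER_MM2 * GEN2_MULTIPLIER

def fabric_bandwidth_tbps(area_mm2: float, density_tbps_per_mm2: float) -> float:
    """Aggregate bandwidth for a given silicon area at a given areal density."""
    return area_mm2 * density_tbps_per_mm2

# Hypothetical 25 mm^2 of die area dedicated to the optical interface:
print(fabric_bandwidth_tbps(25, GEN1_TBPS_PER_MM2))   # gen 1: 45 Tb/s
print(fabric_bandwidth_tbps(25, gen2_tbps_per_mm2))   # gen 2: 180 Tb/s
```

The point of the areal-density metric is that aggregate bandwidth scales linearly with the die "beachfront" a designer is willing to spend on the interface, which is why a 4x density jump matters so much for chiplet packaging.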

The firm plans to pair DDR5 memory with HBM stacks as an on-board memory expansion, providing far greater capacity: by combining two HBM stacks with sets of four DDR5 DIMMs, memory capacities of up to 72 GB of HBM and 2 TB of DDR5 can be obtained. That's really interesting, considering that DDR5 offers a better price-to-capacity ratio, resulting in a more cost-efficient model. Celestial AI plans to use the Photonic Fabric as the interface connecting everything, and the firm describes the method as "a supercharged Grace Hopper without all the cost overhead."
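The capacity mix implied by those figures can be sketched as below. Note the per-stack and per-DIMM sizes are inferred by dividing the quoted 72 GB and 2 TB totals across the stated unit counts; they are an assumption for illustration, not numbers the firm has confirmed.

```python
# Two-tier memory mix implied by the article: two HBM stacks plus four
# DDR5 DIMMs per module. Per-unit sizes are inferred from the quoted
# totals (72 GB HBM, 2 TB DDR5), not confirmed by Celestial AI.

HBM_STACKS = 2
GB_PER_HBM_STACK = 36      # inferred: 72 GB total / 2 stacks
DDR5_DIMMS = 4
GB_PER_DDR5_DIMM = 512     # inferred: 2048 GB total / 4 DIMMs

hbm_total_gb = HBM_STACKS * GB_PER_HBM_STACK     # fast, bandwidth-optimized tier
ddr5_total_gb = DDR5_DIMMS * GB_PER_DDR5_DIMM    # large, cost-optimized tier

print(f"HBM tier:  {hbm_total_gb} GB")
print(f"DDR5 tier: {ddr5_total_gb} GB ({ddr5_total_gb / 1024:.0f} TB)")
```

The design choice here is a classic capacity/bandwidth trade: HBM supplies the bandwidth-hungry working set while commodity DDR5 supplies bulk capacity at a far lower cost per gigabyte, with the Photonic Fabric bridging the two tiers.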

However, Celestial AI believes its integrated solution will not hit the market until at least 2027, and by then many competitors will have emerged in the silicon photonics segment. That means Celestial AI won't have an easy time breaking into the market, especially once TSMC and Intel bring their own solutions into the mainstream.

News Sources: TechRadar, The Next Platform
