Astera Labs wants to make it simpler and cheaper for cloud and hyperscale customers to keep up with ever-larger AI models, by allowing DRAM to be attached directly to a PCIe slot using its Compute Express Link (CXL) memory controllers.
Founded in 2017, the Silicon Valley-based startup has focused on developing chip architectures for hyperscale and cloud customers, and its Leo E and P-series CXL memory controllers and Aurora A-series memory expansion cards are no exception.
“We have developed technology that you know addresses the requirements of large cloud vendors like Amazon, or Meta, or Google,” Astera Labs chief business officer Sanjay Gajendra told The Register.
Hype around CXL has grown in recent months as the first compatible CPUs from AMD and Intel inch closer to launch.
In a nutshell, CXL enables CPUs, memory, accelerators, and other peripherals to communicate with each other over a common cache-coherent interface. Eventually, the interconnect tech will enable fully disaggregated systems where resources can be shared throughout the rack over high-speed CXL fabrics.
However, in its first iterations, most of the conversation has centered on memory expansion, and this is what Astera Labs’ Leo E-series chips are designed to support. The chips use the CXL.mem protocol to enable system memory to be attached to a compatible PCIe slot.
The idea here is that instead of being limited by the number of DIMM slots on a motherboard, customers can attach additional DIMMs using an expansion card and one of Astera’s Leo memory controllers.
Meanwhile, the company’s Leo P-series chips expand on this capability to enable CXL 2.0 functionality, such as memory pooling. In this implementation, memory attached to the controller can be accessed by multiple processors on the host.
“You’re able to provision the memory on demand, depending on whichever processor requires that based on the workload it’s running,” Gajendra said. “From a cost standpoint, it’s much more efficient to have a pooled memory versus a localized memory.”
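The cost argument for pooling comes down to sizing for a combined peak rather than every host's individual peak. A toy sketch, with made-up numbers of our own (not Astera's), makes the point:

```python
# Toy illustration: per-host memory must be sized for each host's own
# peak demand, while a shared pool only needs to cover the worst
# *combined* moment -- peaks rarely all coincide.
peak_demand_gb = {"hostA": 512, "hostB": 768, "hostC": 256}
combined_peak_gb = 1024  # assumed worst simultaneous demand across hosts

local = sum(peak_demand_gb.values())   # DRAM bought if memory is per-host
pooled = combined_peak_gb              # DRAM bought if hosts draw on demand

print(f"local: {local} GB, pooled: {pooled} GB, saved: {local - pooled} GB")
```

The gap widens as workloads get burstier, which is why pooled memory looks attractive to fleet operators.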
The chips support up to 2TB of 5600 MT/s DDR5 registered error-correcting memory in a dual-channel configuration, which Astera says is enough to fully utilize CXL 1.1/2.0’s available bandwidth.
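That bandwidth claim is easy to sanity-check with back-of-the-envelope arithmetic. A rough sketch, assuming a standard 64-bit DDR5 channel and a PCIe 5.0 x16 link with 128b/130b encoding (the link width is our assumption, not a figure from the article):

```python
# DDR5-5600 on a 64-bit (8-byte) channel, two channels
ddr5_per_channel = 5600e6 * 8          # bytes/s, about 44.8 GB/s
ddr5_dual = 2 * ddr5_per_channel       # about 89.6 GB/s

# CXL 1.1/2.0 rides PCIe 5.0: 32 GT/s per lane, 128b/130b encoding, x16
pcie5_x16 = 32e9 * (128 / 130) * 16 / 8   # bytes/s per direction, ~63 GB/s

print(f"DDR5-5600 x2 channels: {ddr5_dual / 1e9:.1f} GB/s")
print(f"PCIe 5.0 x16 link:     {pcie5_x16 / 1e9:.1f} GB/s per direction")
print("DRAM side saturates the link:", ddr5_dual > pcie5_x16)
```

Under those assumptions, the two DDR5-5600 channels deliver roughly 89.6 GB/s against a ~63 GB/s per-direction link, which is consistent with Astera's claim that the configuration can keep the CXL link fully fed.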
The company also offers its Aurora expansion cards for cloud customers that don’t want to use custom hardware. The card is available in a standard PCIe form factor and features four DDR5 memory slots.
While it would certainly be possible to package the memory controllers directly alongside the DRAM chips, as suppliers like Samsung have done, Gajendra says the primary use case for Leo is to support a DIMM-type form factor.
The approach is favored by cloud and hyperscale customers, he said, because it offers them greater flexibility in terms of memory vendor selection and pricing, compared to a fully integrated product. All three Astera products are sampling to customers now.
“We’re entering the preproduction stage, which means that customers are actually buying this part; they’re starting to deploy this in their fleets, which is an important step in the CXL transition from being a promise to being a reality,” Gajendra said.
Astera’s ambitions for the CXL interconnect don’t stop at memory expansion or pooling, he said. Looking ahead, the company plans to introduce products with support for the 3.0 spec ratified earlier this month.
“In the future, you can imagine disaggregation at the rack level where you have these memory appliances which essentially disaggregate or provision from memory at a rack level rather than a server,” Gajendra said.
However, the company isn’t without competition. Earlier this year Marvell detailed its CXL strategy, which also involves memory expansion and pooling devices. Samsung has, as mentioned, also shown off a 512GB memory expansion module in an E3.S SSD form factor, while SK Hynix demoed a DRAM expansion module of its own at the Flash Memory Summit earlier this month. ®