
[News] Micron’s 12-Hi HBM3e Ready for Production, Targeting NVIDIA’s H200 and B100/B200 GPUs


2024-09-10 Semiconductors editor

After its 8-Hi HBM3e entered mass production in February, Micron officially introduced its 12-Hi HBM3e memory stacks on Monday, which feature a 36 GB capacity, according to a report by Tom’s Hardware. The new products are designed for cutting-edge processors used in AI and high-performance computing (HPC) workloads, including NVIDIA’s H200 and B100/B200 GPUs.

It is worth noting that the achievement has put the US memory chip giant almost on par with the current HBM leader, SK hynix. Citing remarks made by Justin Kim, president and head of the company’s AI Infra division, at SEMICON Taiwan last week, another report by Reuters notes that SK hynix is set to begin mass production of its 12-Hi HBM3e chips by the end of this month.

Samsung, on the other hand, is said to have passed NVIDIA’s quality test for shipments of its 8-Hi HBM3e memory, while the company is still working on verification of its 12-Hi HBM3e.

Micron’s 12-Hi HBM3e memory stacks, according to Tom’s Hardware, feature a 36 GB capacity, a 50% increase over the previous 8-Hi models’ 24 GB. This expanded capacity enables data centers to run larger AI models, such as Meta AI’s Llama 2 with up to 70 billion parameters, on a single processor. This capability also reduces the need for frequent CPU offloading and minimizes communication delays between GPUs, resulting in faster data processing.
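For a sense of scale, here is a minimal back-of-envelope sketch (not from the report) of how a 70-billion-parameter model’s weights map onto the two stack capacities, assuming 2 bytes per parameter (FP16/BF16) and ignoring activations, KV cache, and other overhead:

```python
import math

# Assumptions for illustration only; not figures from the article.
PARAMS = 70e9          # Llama 2's largest variant: ~70 billion parameters
BYTES_PER_PARAM = 2    # FP16/BF16 weights

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9  # ~140 GB of weights

for stack_gb, label in ((24, "8-Hi HBM3e"), (36, "12-Hi HBM3e")):
    stacks = math.ceil(weights_gb / stack_gb)
    print(f"{label}: {stacks} x {stack_gb} GB stacks "
          f"to hold ~{weights_gb:.0f} GB of FP16 weights")
```

Under these assumptions, the roughly 140 GB of FP16 weights fit in four 36 GB stacks versus six 24 GB stacks, which illustrates the kind of headroom that reduces the need for CPU offloading.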

According to Tom’s Hardware, in terms of performance, Micron’s 12-Hi HBM3e stacks deliver over 1.2 TB/s of memory bandwidth per stack. Despite offering 50% more memory capacity than competing 8-Hi products, Micron’s 12-Hi HBM3e reportedly consumes less power than those 24 GB stacks.
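To put the per-stack figure in context, a rough calculation of aggregate bandwidth is shown below; the stack counts are hypothetical examples for illustration, not configurations from the report:

```python
# Per-stack bandwidth from the article; stack counts below are assumed
# examples, as actual configurations vary by GPU.
PER_STACK_TBS = 1.2  # >1.2 TB/s per 12-Hi HBM3e stack

for stacks in (6, 8):  # hypothetical 6- and 8-stack configurations
    total = stacks * PER_STACK_TBS
    print(f"{stacks} stacks x {PER_STACK_TBS} TB/s = ~{total:.1f} TB/s aggregate")
```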

Regarding the future roadmap of HBM, Micron is said to be working on its next-generation memory solutions, including HBM4 and HBM4e. These upcoming memory technologies are set to further enhance performance, solidifying Micron’s position as a leader in addressing the increasing demand for advanced memory in AI processors, such as NVIDIA’s GPUs built on the Blackwell and Rubin architectures, the report states.


(Photo credit: Micron)

Please note that this article cites information from Tom’s Hardware and Reuters.
