In mid-August, Samsung was reported to be accelerating its progress on next-gen HBM, targeting an HBM4 tape-out by the end of this year. Now it seems SK hynix has maintained its competitive edge, as the company aims to tape out HBM4 in October, which will be used to power NVIDIA’s Rubin AI chips, according to reports by Wccftech and ZDNet.
In addition, the reports note that SK hynix plans to tape out HBM4 for AMD’s AI chips as well, which is expected to take place a few months later.
To further prepare for the strong demand from AI chip giants’ upcoming product launches, SK hynix is assembling development teams to supply HBM4 to NVIDIA and AMD, according to Wccftech and ZDNet.
Per SK hynix’s product roadmap, the company plans to launch 12-layer stacked HBM4 in the second half of 2025 and a 16-layer version in 2026. NVIDIA’s Rubin series, planned for 2026, is expected to adopt HBM4 12Hi with 8 clusters per GPU.
SK hynix is the major HBM3e supplier for NVIDIA’s AI chips, as the memory giant took the lead by starting shipments of the product a few months ago, followed by Micron. Samsung’s HBM3, on the other hand, was cleared by NVIDIA in July, while its HBM3e is still striving to pass NVIDIA’s qualification.
According to the reports, the introduction of HBM4 represents another major milestone for SK hynix, as it offers the fastest DRAM with exceptional power efficiency and higher bandwidth.
HBM4 will feature double the channel width of HBM3e, offering 2048 bits versus 1024 bits. Moreover, it supports stacking 16 DRAM dies, up from 12 in HBM3e, with options for 24Gb and 32Gb layers. This advancement will enable a capacity of 64GB per stack, compared to 32GB with HBM3e, the reports suggest.
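As a quick sanity check of the capacity figure cited above, the per-stack total follows directly from the die count and die density (assuming the 16-high configuration with 32Gb layers and 8 bits per byte):

```python
def stack_capacity_gb(num_dies: int, die_density_gbit: int) -> int:
    """Capacity of one HBM stack in gigabytes (8 bits per byte)."""
    return num_dies * die_density_gbit // 8

# HBM4 top configuration: 16 stacked dies at 32Gb each -> 64GB per stack
print(stack_capacity_gb(16, 32))  # 64
```

The same formula shows why the 24Gb layer option yields a smaller stack (16 × 24Gb = 48GB); the reported 64GB figure corresponds specifically to the 32Gb-die, 16-high variant.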
On August 19, SK hynix showcased its ambition to secure its leadership in HBM, claiming that the company is developing a product with performance up to 30 times higher than current HBM.
(Photo credit: SK hynix)