[News] Three-way Contest for HBM Dominance, Uncertainties Surrounding China’s Supply Chain Involvement


2024-03-15 Semiconductors editor

With numerous cloud computing companies and developers of large-scale AI models investing heavily in AI computing infrastructure, demand for AI processors is rising rapidly. According to a report from IJIWEI, demand for HBM (High Bandwidth Memory), a key component of these processors, has been rising as well.

The surge in demand for computing power has in turn created a wave of opportunities in memory. Yet looking across the entire HBM industry chain, only a limited number of Chinese companies are able to enter the field.

The technological challenges are significant, but so are the prospects. Whether viewed from the standpoint of supply-chain independence or market competition, it is imperative for Chinese players to accelerate the pace of catching up.

HBM Demand Grows Against the Trend, Dominated by Three Giants

The first TSV-based HBM product debuted in 2014, but it was not until after the release of ChatGPT in late 2022 that robust demand for AI servers drove rapid iteration of HBM technology through HBM1, HBM2, HBM2e, HBM3, and HBM3e.

The fourth-generation HBM3 has already been mass-produced and deployed, with significant improvements in bandwidth, stack height, capacity, I/O speed, and more compared with the first generation. Currently, only three memory giants—SK Hynix, Samsung Electronics, and Micron—are capable of mass-producing HBM.
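For rough context on those bandwidth gains (the specific figures below come from published HBM specifications rather than the cited report): per-stack bandwidth is essentially the interface width multiplied by the per-pin data rate, so a 1,024-bit HBM3 interface running at 6.4 Gb/s per pin delivers about 1,024 × 6.4 / 8 ≈ 819 GB/s per stack, compared with roughly 128 GB/s for first-generation HBM at 1 Gb/s per pin.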

According to a previous TrendForce press release, the three major HBM manufacturers held the following market shares in 2023: SK Hynix and Samsung each at roughly 46-49%, while Micron stood at around 4-6%.

In 2023, the primary applications in the market were HBM2, HBM2e, and HBM3, with the penetration rate of HBM3 increasing in the latter half of the year due to the push from NVIDIA’s H100 and AMD’s MI300.

According to TrendForce’s report, SK Hynix led the way with its HBM3e validation in the first quarter, closely followed by Micron, which plans to start distributing its HBM3e products toward the end of the first quarter, in alignment with NVIDIA’s planned H200 deployment by the end of the second quarter.

Samsung, slightly behind in sample submissions, is expected to complete its HBM3e validation by the end of the first quarter, with shipments rolling out in the second quarter.

Driven by market demand, major players such as SK Hynix, Samsung, and Micron Technology are stepping up efforts to expand production capacity. SK Hynix revealed in February that its HBM output for this year had already been fully booked, and the company is preparing for 2025 in order to maintain its market leadership.

Reportedly, Samsung, aiming to compete in the 2024 HBM market, plans to increase the maximum production capacity to 150,000 to 170,000 units per month before the end of the fourth quarter of this year. Previously, Samsung also invested KRW 10.5 billion to acquire Samsung Display’s factory and equipment in Cheonan, South Korea, with the aim of expanding HBM production capacity.

Micron Technology CEO Sanjay Mehrotra recently revealed that Micron’s HBM production capacity for 2024 is expected to be fully allocated.

Although the three major HBM suppliers are still focused on iterating HBM3e, where there remains room for improvement in single-die DRAM density and the number of stacked layers, the development of HBM4 has already been put on the agenda.

TrendForce previously predicted that HBM4 will mark the first use of a 12nm process wafer for its bottommost logic die (base die), to be supplied by foundries. This advancement signifies a collaborative effort between foundries and memory suppliers for each HBM product, reflecting the evolving landscape of high-speed memory technology.

Continuous Surge in HBM Demand and Prices, Local Supply Chains in China Catching Up

In the face of this vast market opportunity, beyond the three giants' continued push to ramp up research and production, some second- and third-tier Chinese DRAM manufacturers have also entered the HBM race. As locally produced AI processors improve, the need for an independent HBM supply chain in China has become increasingly urgent.

Top global manufacturers run DRAM processes at the 1α and 1β nodes, while China's DRAM processes sit at roughly the 25-17nm level. Chinese DRAM processes are thus approaching those of overseas leaders, and with advanced packaging capabilities and GPU customers already available locally, demand for HBM localization is strong. Local Chinese DRAM manufacturers are reportedly expected to achieve breakthroughs in HBM in the future.

It is worth noting that HBM research and manufacturing involve complex processes and technical challenges, including wafer-level packaging, testing technology, design compatibility, and more. CoWoS is currently the mainstream packaging solution for AI processors, and AI chips built on CoWoS typically integrate HBM alongside the logic die.

Both CoWoS and HBM involve processes such as TSV (Through-Silicon Via), bumps, microbumps, and RDL (Redistribution Layer). Among these, TSV accounts for the largest share of HBM's 3D packaging cost, at close to 30%.

Currently, only a few leading Chinese packaging companies, such as JCET Group, Tongfu Microelectronics, and SJSemi, possess the technology (such as TSV) and equipment required to support HBM production.

However, despite these efforts, the number of Chinese companies truly involved in the HBM industry chain remains limited, with most focusing on upstream materials.

With GPU procurement restricted, breakthroughs in China's domestic AI processors are urgently needed, both for self-sufficiency and for market competition. Synchronized breakthroughs in HBM by Chinese manufacturers are therefore equally crucial.

(Photo credit: SK Hynix)

Please note that this article cites information from IJIWEI.