News
As SK hynix and Samsung release their financial results on July 25th and July 31st, respectively, their progress on HBM3 and HBM3e has also come into the spotlight. Earlier this week, Samsung was said to have finally passed NVIDIA’s qualification tests for its HBM3 chips. With the Big Three in the memory sector now almost on the same page, the HBM3/HBM3e battle is expected to intensify in the second half of 2024.
Samsung Takes a Big Leap
According to reports from Reuters and the Korea Economic Daily, Samsung’s HBM3 chips have been cleared by NVIDIA, which will initially be used exclusively in the AI giant’s H20, a less advanced GPU tailored for the Chinese market. Citing sources familiar with the matter, the reports note that Samsung may begin supplying HBM3 to NVIDIA as early as August.
However, as the U.S. is reportedly considering new trade sanctions on China in October to further limit China’s access to advanced AI chip technology, NVIDIA’s HGX-H20 AI GPUs might face a sales ban. Whether, and to what extent, Samsung’s momentum would be impacted remains to be seen.
SK hynix Expects HBM3e to Account for Over 50% of Total HBM Shipments
SK hynix, the current HBM market leader, has expressed optimism about defending its throne. According to a report by Business Korea citing Kim Woo-hyun, vice president and chief financial officer of SK hynix, the company significantly expanded its HBM3e shipments in the second quarter as demand surged.
Moreover, SK hynix reportedly expects its HBM3e shipments to surpass those of HBM3 in the third quarter, with HBM3e accounting for more than half of the total HBM shipments in 2024.
SK hynix started mass production of 8-layer HBM3e for NVIDIA in March, and it is now also confident about its progress on 12-layer HBM3e. According to Business Korea, the company expects to begin supplying 12-layer HBM3e products to its customers in the fourth quarter. In addition, it projects the supply of 12-layer products to surpass that of 8-layer products in the first half of 2025.
Micron Expands at Full Throttle
Micron, on the other hand, reportedly started mass production of 8-layer HBM3e in February, according to a previous report from Korea Joongang Daily. The company also reportedly plans to complete preparations for mass production of 12-layer HBM3e in the second half of the year and supply it to major customers such as NVIDIA in 2025.
Targeting a 20% to 25% HBM market share by 2025, Micron is said to be building a pilot HBM production line in the U.S. and is considering producing HBM in Malaysia for the first time to capture more demand from the AI boom, a report by Nikkei notes. Micron’s largest HBM production facility is located in Taichung, Taiwan, where expansion efforts are also underway.
Earlier in May, a report from a Japanese media outlet The Daily Industrial News also indicated that Micron planned to build a new DRAM plant in Hiroshima, with construction scheduled to begin in early 2026 and aiming for completion of plant buildings and first tool-in by the end of 2027.
TrendForce’s latest report on the memory industry reveals that DRAM revenue is expected to see a significant increase of 75% in 2024, driven by the rise of high-value products like HBM. As the market keeps booming, will Samsung come from behind and take the lead on the HBM3e battleground? Or will SK hynix defend its throne? The progress of 12-layer HBM3e may be a key factor to watch.
(Photo credit: Samsung)
To address the growing demand for high-performance memory solutions fueled by the expansion of the artificial intelligence (AI) market, Samsung Electronics has formed a new “HBM Development Team” within its Device Solutions (DS) Division to sharpen its competitive edge in high-bandwidth memory (HBM), according to the latest report from Business Korea. The new team will concentrate on advancing HBM3, HBM3e, and the next-generation HBM4 technologies, the report noted.
This initiative comes shortly after the Korean memory giant changed its semiconductor business leader in May. Citing industry sources, the report stated that Samsung’s DS Division carried out an organizational restructuring centered on the establishment of the HBM Development Team.
The move also attracts attention because on July 4th, a report from the Korean media outlet Newdaily indicated that Samsung had finally obtained NVIDIA’s approval for qualification of its 5th-generation HBM, HBM3e, though the company later denied the market rumor.
Samsung has long been dedicated to HBM development. Since 2015, it has maintained an HBM development organization within its Memory Business Division. Earlier this year, the tech heavyweight also created a task force (TF) to boost its HBM competitiveness, and the new team will unify and enhance these ongoing efforts, the report noted.
According to the report, Samsung reached a significant milestone in February by developing the industry’s first HBM3e 12-layer stack, which offers the industry’s largest capacity of 36 gigabytes (GB). Samples of the HBM3e 8-layer and 12-layer stacks have already been sent to NVIDIA for quality testing.
Regarding the latest development, TrendForce reports that Samsung is still collaborating with NVIDIA and other major customers on the qualifications for both 8-hi and 12-hi HBM3e products. Samsung anticipates that its HBM3e qualification will be partially completed by the end of 3Q24.
According to TrendForce’s latest analysis on the HBM market, HBM production will be prioritized due to its profitability and increasing demand. However, limited yields of around 50–60% and a wafer area 60% larger than DRAM products mean a higher proportion of wafer input is required. Based on the TSV capacity of each company, HBM is expected to account for 35% of advanced process wafer input by the end of this year, with the remaining wafer capacity used for LPDDR5(X) and DDR5 products.
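To make the wafer math above concrete, here is a minimal back-of-the-envelope sketch. The ~60% larger die area and 50–60% HBM yield come from the text; the ~90% commodity-DRAM yield is an illustrative assumption, not a TrendForce figure:

```python
# Rough estimate of how much more wafer input HBM needs than commodity
# DRAM for the same number of good dies: a larger die means fewer
# candidate dies per wafer, and a lower yield means fewer good dies.

def wafer_input_multiple(area_ratio, hbm_yield, dram_yield=0.9):
    """Wafer input needed per good HBM die, relative to commodity DRAM."""
    return area_ratio * (dram_yield / hbm_yield)

for y in (0.5, 0.6):
    print(f"HBM yield {y:.0%}: ~{wafer_input_multiple(1.6, y):.1f}x wafer input vs DRAM")
```

Under these assumptions, each good HBM die consumes roughly 2.4x to 2.9x the wafer input of a commodity DRAM die, which is why HBM crowds out so much advanced-process capacity.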
(Photo credit: Samsung)
According to a report from the South Korean newspaper Korea Joongang Daily, after initiating mass production of the latest high-bandwidth memory, HBM3e, in February 2024, Micron has recently secured an order from NVIDIA for the H200 AI GPU. NVIDIA’s upcoming H200 processor will reportedly utilize the latest HBM3e, which is more powerful than the HBM3 used in the H100 processor.
The same report further indicates that Micron secured the H200 order thanks to its adoption of 1b-nanometer technology in its HBM3e, which is equivalent to the 12-nanometer technology SK Hynix uses to produce HBM. In contrast, Samsung Electronics currently employs 1a-nanometer technology, equivalent to 14-nanometer technology, reportedly lagging behind Micron and SK Hynix.
The report from Commercial Times indicates that Micron’s ability to secure the NVIDIA order for H200 is attributed to the chip’s outstanding performance, energy efficiency, and seamless scalability.
As per a previous report from TrendForce, starting in 2024, the market’s attention will shift from HBM3 to HBM3e, with expectations for a gradual ramp-up in production through the second half of the year, positioning HBM3e as the new mainstream in the HBM market.
TrendForce reports that SK Hynix led the way with its HBM3e validation in the first quarter, closely followed by Micron, which plans to start distributing its HBM3e products toward the end of the first quarter, in alignment with NVIDIA’s planned H200 deployment by the end of the second quarter.
Samsung, slightly behind in sample submissions, is expected to complete its HBM3e validation by the end of the first quarter, with shipments rolling out in the second quarter. With Samsung having already made significant strides in HBM3 and its HBM3e validation expected to be completed soon, the company is poised to significantly narrow the market share gap with SK Hynix by the end of the year, reshaping the competitive dynamics in the HBM market.
(Photo credit: Micron)
The shortage of advanced packaging capacity is anticipated to end earlier than expected. Industry sources suggest that Samsung’s entry into supplying HBM3 has increased the supply of memory essential for advanced packaging. Coupled with TSMC’s strategy of expanding advanced packaging capacity through equipment modifications and partial outsourcing, as well as adjustments some CSPs have made to their designs and orders, the bottleneck in advanced packaging capacity is poised to ease as early as the first quarter of the upcoming year, one quarter to half a year sooner than industry predictions, according to UDN News.
TSMC refrains from commenting on market speculation, while Samsung has already issued a press release signaling the expansion of HBM3 product sales to meet growing demand for the new interface, concurrently boosting its share of advanced processes.
Industry sources indicate that the previous global shortage of AI chips primarily resulted from inadequate advanced packaging capacity. Now that the shortage in advanced packaging capacity is expected to end sooner, this implies a positive shift in the supply of AI chips.
Samsung, alongside Micron and SK Hynix, is a key partner for TSMC in advanced packaging. In a recent press release, Samsung underscored its close collaboration with TSMC on previous generations and the current generation of high-bandwidth memory (HBM) technology, supporting compatibility with the CoWoS process and the interconnectivity of HBM. Having joined the TSMC OIP 3DFabric Alliance in 2022, Samsung is set to broaden its scope of work and provide solutions for future generations of HBM.
Industry sources previously pointed out that the earlier shortage of AI chips stemmed from three main factors: insufficient advanced packaging capacity, tight HBM3 memory supply, and repeated order placements by some CSPs. The obstacles related to these factors are now gradually being overcome: in addition to TSMC’s and Samsung’s commitment to increasing advanced packaging capacity, CSPs are adjusting designs, reducing their usage of advanced packaging, and canceling previously duplicated orders.
TSMC’s ongoing collaboration with OSATs (Outsourced Semiconductor Assembly and Test providers) to expedite WoS capacity expansion is gaining momentum. NVIDIA confirmed during a recent earnings call that it has certified other CoWoS advanced packaging suppliers’ capacity as a backup. Industry speculation suggests that certifying other CoWoS suppliers’ capacity for parts of both front-end and back-end production will help TSMC and its partners reach a monthly CoWoS capacity of approximately 40,000 pieces in the first quarter of next year.
Furthermore, previous challenges in expanding advanced packaging production capacity, especially in obtaining overseas equipment, are gradually being overcome. With equipment optimization, more capacity is being extracted, alleviating the shortage of AI chip capacity.
(Image: Samsung)
Market reports suggest NVIDIA’s new product release cycle has shortened from two years to one, sparking intense competition among major memory companies in next-generation High Bandwidth Memory (HBM) technology. Samsung, SK Hynix, and Micron are fervently competing, with SK Hynix currently holding the dominant position in the HBM market. However, Micron and Samsung are strategically positioned for a potential overtake, as reported by TechNews.
Current Status of the HBM Industry
SK Hynix made a breakthrough in 2013 by successfully developing and mass-producing HBM using the Through-Silicon Via (TSV) architecture. In 2019, it achieved success with HBM2E, maintaining an overwhelming advantage in the HBM market. According to the latest research from TrendForce, NVIDIA plans to partner with more HBM suppliers. Samsung, as one of these suppliers, is anticipated to complete verification of its HBM3 (24GB) with NVIDIA by December this year.
Regarding HBM3e progress, Micron, SK Hynix, and Samsung provided 8-layer (24GB) samples to NVIDIA in July, August, and October, respectively, with the fastest verification expected by year-end. All three major players anticipate completing verification in the first quarter of 2024.
As for HBM4, the earliest launch is expected in 2026, with stacks increasing from the existing 12 layers to 16 layers. The stack will likely adopt a 2048-bit connection interface, driving demand for the new “Hybrid Bonding” stacking method. The 12-layer HBM4 product is set to launch in 2026, followed by the 16-layer product expected in 2027.
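The significance of the 2048-bit interface can be seen from the standard per-stack bandwidth formula (interface width × per-pin data rate). The pin speeds below are illustrative assumptions for comparison, not vendor specifications:

```python
# Per-stack peak bandwidth = interface width (bits) x per-pin data rate
# (Gbit/s) / 8 -> GB/s. Pin speeds here are illustrative assumptions.

def stack_bandwidth_gbs(width_bits, pin_gbps):
    """Peak per-stack bandwidth in GB/s."""
    return width_bits * pin_gbps / 8

# HBM3e-class stack: 1024-bit interface at an assumed 9.6 Gbps/pin
print(stack_bandwidth_gbs(1024, 9.6))  # -> 1228.8 GB/s, i.e. ~1.2 TB/s
# HBM4-class stack: 2048-bit interface, even at a slower 6.4 Gbps/pin
print(stack_bandwidth_gbs(2048, 6.4))  # -> 1638.4 GB/s, i.e. ~1.6 TB/s
```

Doubling the interface width lets a stack deliver more bandwidth even at lower per-pin speeds, which is why the wider interface pushes packaging toward denser interconnects such as hybrid bonding.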
Navigating HBM4, the New Technologies and Roadmaps of Memory Industry Leaders
SK Hynix
According to reports from Business Korea, SK Hynix is preparing to adopt “2.5D Fan-Out” packaging for the next-generation HBM technology. This move aims to enhance performance and reduce packaging costs. This technology, not previously used in the memory industry but common in advanced semiconductor manufacturing, is seen as having the potential to “completely change the semiconductor and foundry industry.” SK Hynix plans to unveil research results using this packaging method as early as next year.
The 2.5D Fan-Out packaging technique involves arranging two DRAM dies horizontally and assembling them similarly to regular chips. The absence of a substrate beneath the chips allows for thinner chips, significantly reducing the thickness when installed in IT equipment. The technique also bypasses the Through-Silicon Via (TSV) process, providing more Input/Output (I/O) options and lowering costs.
According to their previous plan, SK Hynix aims to mass-produce the sixth-generation HBM (HBM4) as early as 2026. The company is also actively researching “Hybrid Bonding” technology, likely to be applied to HBM4 products.
Currently, HBM stacks are placed on an interposer next to CPUs or GPUs and connected to them through it. SK Hynix’s new goal is to eliminate the interposer completely, placing HBM4 directly on GPUs from companies like NVIDIA and AMD, with TSMC as the preferred foundry.
Samsung
Samsung is researching the application of photonics in HBM technology’s interposer layer, aiming to address challenges related to heat and transistor density. Yan Li, Principal Engineer in Samsung’s advanced packaging team, shared insights at the OCP Global Summit in October 2023.
(Image: Samsung)
According to Samsung, the industry has made significant strides in integrating photonics with HBM through two main approaches. One involves placing a photonics interposer between the bottom packaging layer and the top layer containing the GPU and HBM, acting as a communication layer. However, this method is costly, requiring an interposer and photonic I/O for both logic chips and HBM.
(Image: Samsung)
The alternative approach separates the HBM memory module from the package and connects it directly to the processor using photonics. This not only simplifies manufacturing and packaging costs for HBM and logic ICs but also eliminates the need for internal digital-to-optical conversions in the circuitry. However, careful attention is required to address heat dissipation.
Micron
As reported by Tom’s Hardware, Micron’s 8-layer HBM3e (24GB) is expected to launch in early 2024, contributing to improved AI training and inference performance. The 12-layer HBM3e (36GB) chip is expected to debut in 2025.
Micron is working on HBM4 and HBM4E along with other companies. The required bandwidth is expected to exceed 1.5 TB/s. Micron anticipates launching 12-layer and 16-layer HBM4 with capacities of 36GB to 48GB between 2026 and 2027. After 2028, HBM4E will be introduced, pushing maximum bandwidth beyond 2 TB/s and increasing stack capacity to 48GB to 64GB.
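The capacity figures above follow directly from layer count times per-die density (1 GB = 8 Gb). The 24 Gb die density below is an illustrative assumption consistent with the stated capacities, not a confirmed Micron roadmap figure:

```python
# HBM stack capacity = number of stacked DRAM dies x per-die density.
# A 24 Gb (3 GB) die is assumed here for illustration.

def stack_capacity_gb(layers, die_gbit):
    """Stack capacity in gigabytes (1 GB = 8 Gb)."""
    return layers * die_gbit // 8

print(stack_capacity_gb(12, 24))  # -> 36, matching the 12-layer 36GB figure
print(stack_capacity_gb(16, 24))  # -> 48, matching the 16-layer 48GB figure
```

The same arithmetic explains the 48GB-to-64GB range cited for HBM4E: reaching 64GB on a 16-layer stack implies denser 32 Gb dies rather than more layers.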
Micron is taking a different approach from Samsung and SK Hynix by not integrating HBM and logic chips into a single die, as suggested by the Chinese media outlet Semiconductor Industry Observation. This difference in strategy may lead to distinct technical paths, and Micron might advise NVIDIA, Intel, and AMD that relying solely on a single company’s chips carries greater risks.
(Image: Micron)
TSMC Aids Memory Stacking
Currently, the TSMC 3DFabric Alliance collaborates closely with major memory partners, including Micron, Samsung, and SK Hynix. This collaboration ensures the rapid growth of HBM3 and HBM3e, as well as the packaging of 12-layer HBM3/HBM3e, providing more memory capacity to promote the development of generative AI.
(Image: TSMC)
(Image: SK Hynix)