News
Samsung’s latest high bandwidth memory (HBM) chips have reportedly failed Nvidia’s tests, and the reasons have now been revealed for the first time. According to a recent report by Reuters, the failure was attributed to issues with heat and power consumption.
Citing sources familiar with the matter, Reuters noted that the issues may affect Samsung’s HBM3 chips as well as its next-generation HBM3e chips, which the company and its competitors, SK hynix and Micron, plan to launch later this year.
In response to the concerns raised over the heat and power consumption of its HBM chips, Samsung stated that its HBM testing is proceeding as planned.
In an official statement, Samsung noted that it is in the process of optimizing its products through close collaboration with customers, with testing proceeding smoothly and as planned. The company said that HBM is a customized memory product that requires optimization in tandem with customers’ needs.
According to Samsung, the tech giant is currently partnering closely with various companies to continuously test technology and performance, and to thoroughly verify the quality and performance of HBM.
Nvidia, on the other hand, declined to comment.
As Nvidia currently dominates the global GPU market for AI applications with a share of around 80%, meeting Nvidia’s standards is undoubtedly critical for HBM manufacturers.
Reuters reported that Samsung has been attempting to pass Nvidia’s tests for HBM3 and HBM3e since last year, and a test of Samsung’s 8-layer and 12-layer HBM3e chips reportedly failed in April.
According to TrendForce’s earlier analysis, NVIDIA’s upcoming B100 and H200 models will incorporate advanced HBM3e, while the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix, which has been providing HBM3 chips to Nvidia since 2022, Reuters noted.
According to a report from the Financial Times in May, SK hynix has successfully reduced the time needed for mass production of HBM3e chips by 50% and is close to achieving its target yield of 80%.
Meanwhile, US memory giant Micron stated in February that its HBM3e consumes 30% less power than competing products, meeting the demands of generative AI applications. Moreover, the company’s 24GB 8H HBM3e will be part of NVIDIA’s H200 Tensor Core GPUs, breaking the previous exclusivity of SK hynix as the sole supplier for the H100.
Considering its major competitors’ progress on HBM3e, if Samsung fails to meet Nvidia’s requirements, the industry and investors may grow more concerned about whether the Korean tech heavyweight will fall further behind its rivals in the HBM market.
(Photo credit: Samsung)
News
SK hynix has disclosed yield details regarding the company’s 5th generation High Bandwidth Memory (HBM), HBM3e, for the first time. According to a report from the Financial Times, citing Kwon Jae-soon, the head of yield at SK hynix, the memory giant has successfully reduced the time needed for mass production of HBM3e chips by 50% and is close to achieving the target yield of 80%.
This is better than the industry’s previous speculation, which estimated the yield of SK Hynix’s HBM3e to be between 60% and 70%, according to a report by Business Korea.
According to TrendForce’s analysis earlier, NVIDIA’s upcoming B100 or H200 models will incorporate advanced HBM3e, while the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix, leading to a supply shortfall in meeting burgeoning AI market demands.
The challenge, however, is the supply bottleneck caused by both CoWoS packaging constraints and the inherently long production cycle of HBM—extending the timeline from wafer initiation to the final product beyond two quarters.
The report by Business Korea noted that HBM manufacturing involves stacking multiple DRAM dies vertically, which presents greater process complexity compared to standard DRAM. Specifically, the yield of the through-silicon via (TSV) process, a critical step for HBM3e, has been low, ranging from 40% to 60%, posing a significant challenge for improvement.
In terms of SK hynix’s future roadmap for HBM, CEO Kwak Noh-Jung announced on May 2nd that the company’s HBM capacity for 2024 and 2025 is almost fully sold out. According to Business Korea, SK hynix commenced delivery of 8-layer HBM3e products in March and plans to supply 12-layer HBM3e products in the third quarter of this year. The 12-layer HBM4 (sixth-generation) is scheduled for next year, with the 16-layer version expected to enter production by 2026.
(Photo credit: SK hynix)
News
On May 21st, Samsung Electronics announced that Young Hyun Jun will take over as head of the Device Solutions (DS) division, which oversees the company’s global operations across the Memory, System LSI, and Foundry business units. He replaces the current leader, Kyung Kye-Hyun, who will now lead the Samsung Advanced Institute of Technology (SAIT) and the Future Business Division.
Young Hyun Jun initially joined Samsung Electronics in 2000, focusing on the development and strategic marketing of DRAM and flash memory. He took charge of the memory business in 2014, served as CEO of Samsung SDI, the group’s battery affiliate, from 2017, and has led the Future Business Division since 2024.
In the press release, Samsung expressed confidence that Young Hyun Jun will strengthen its competitiveness amid an uncertain global business environment.
Upon approval by the board of directors and shareholders, Young Hyun Jun will also be appointed as CEO of Samsung. Samsung employs a dual CEO system, with one CEO responsible for the semiconductor division and the other for the device experience division, which includes the mobile and visual display business groups.
The personnel announcement comes as Samsung strives to catch up with its competitor SK hynix in the AI memory sector. SK hynix currently leads the market for high bandwidth memory (HBM), a crucial component for AI computing, and previously stated that its HBM production capacity for this year and next year is already sold out.
TrendForce has analyzed that the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix, leading to a supply shortfall in meeting burgeoning AI market demands. Samsung’s entry into NVIDIA’s supply chain with its 1Znm HBM3 products in late 2023, though initially minor, signifies its breakthrough in this segment.
Whether Samsung, now led by Young Hyun Jun and his extensive memory experience, can regain ground in its competition with SK hynix will be closely watched.
Per a report from Reuters, a source has noted that since Samsung’s personnel changes typically occur at the beginning of the year, it is unusual to replace a high-ranking executive like this in the middle of the year.
(Photo credit: Samsung)
News
Memory giants Samsung, SK Hynix, and Micron are all actively investing in high-bandwidth memory (HBM) production. Industry sources cited in a report from Commercial Times indicate that due to capacity crowding effects, DRAM products may face shortages in the second half of the year.
According to TrendForce, the three largest DRAM suppliers are increasing wafer input for advanced processes. Following a rise in memory contract prices, companies have boosted their capital investments, with capacity expansion focusing on the second half of this year. It is expected that wafer input for 1alpha nm and above processes will account for approximately 40% of total DRAM wafer input by the end of the year.
HBM production will be prioritized due to its profitability and increasing demand. Regarding the latest developments in HBM, TrendForce indicates that HBM3e will become the market mainstream this year, with shipments concentrated in the second half of the year.
Currently, SK Hynix remains the primary supplier, along with Micron, both utilizing 1beta nm processes and already shipping to NVIDIA. Samsung, using a 1alpha nm process, is expected to complete qualification in the second quarter and begin deliveries mid-year.
The growing content per unit in PCs, servers, and smartphones is driving up the consumption of advanced process capacity each quarter. Servers, in particular, are seeing the highest capacity increase—primarily driven by AI servers with content of 1.75 TB per unit. With the mass production of new platforms like Intel’s Sapphire Rapids and AMD’s Genoa, which require DDR5 memory, DDR5 penetration is expected to exceed 50% by the end of the year.
As HBM3e shipments are expected to be concentrated in the second half of the year—coinciding with the peak season for memory demand—market demand for DDR5 and LPDDR5(X) is also expected to increase. With a higher proportion of wafer input allocated to HBM production, the output of advanced processes will be limited. Consequently, capacity allocation in the second half of the year will be crucial in determining whether supply can meet demand.
Samsung expects existing facilities to be fully utilized by the end of 2024. The new P4L plant is slated for completion in 2025, and the Line 15 facility will undergo a process transition from 1Y nm to 1beta nm and above.
The capacity of SK Hynix’s M16 plant is expected to expand next year, while the M15X plant is also planned for completion in 2025, with mass production starting at the end of next year.
Micron’s facility in Taiwan will return to full capacity next year, with future expansions focused on the US. The Boise facility is expected to be completed in 2025, with equipment installations following and mass production planned for 2026.
With the expected volume production of NVIDIA’s GB200 in 2025, featuring HBM3e with 192/384GB specifications, HBM output is anticipated to nearly double. Each major manufacturer will invest in HBM4 development, prioritizing HBM in their capacity planning. Consequently, due to capacity crowding effects, there may be shortages in DRAM supply.
(Photo credit: Samsung)
News
According to reports from Korean news outlet FN News and Wccftech, Samsung has made it a priority to win back NVIDIA as a major customer by securing chip orders from the GPU heavyweight this year. To achieve this, Samsung is reportedly doing everything possible to ensure that its 3nm process node, which uses the Gate-All-Around (GAA) architecture, meets NVIDIA’s requirements.
Sources quoted by the reports indicated that Samsung has implemented an internal strategy called “Nemo,” specifically targeting NVIDIA. Its foundry now plans to commence mass production of the 3nm GAA process in the first half of 2024. The GAA technology is expected to overcome significant bottlenecks associated with the previous FinFET processes, but it is still uncertain if this will be sufficient to persuade NVIDIA.
NVIDIA has long relied on TSMC’s advanced process nodes for its GPUs in both the consumer and data center markets. The tech giant’s latest GPU families, including Ada Lovelace, Hopper, and Blackwell, are all manufactured using TSMC’s 5nm (4N) processes, according to the aforementioned reports.
It’s important to note that NVIDIA last used Samsung’s 8nm process for its GeForce RTX 30 “Ampere” GPUs, designed for the gaming segment. However, the successor to Ampere, the Ada Lovelace “GeForce RTX 40,” switched to TSMC’s 5nm process.
Considering the high demand for NVIDIA’s GPUs, the chipmaker is expected to procure chips from multiple semiconductor fabs, similar to its previous strategy of dual-sourcing HBM and packaging materials, according to Wccftech.
(Photo credit: Samsung)