News
Samsung’s latest high bandwidth memory (HBM) chips have reportedly failed Nvidia’s tests, with the reasons now revealed for the first time. According to the latest report by Reuters, the failure was said to be due to issues with heat and power consumption.
Citing sources familiar with the matter, Reuters noted that the issues may affect Samsung’s HBM3 chips as well as its next-generation HBM3e chips, which the company and its competitors, SK hynix and Micron, plan to launch later this year.
In response to the concerns raised over heat and power consumption of its HBM chips, Samsung stated that its HBM testing is proceeding as planned.
In an official statement, Samsung noted that it is in the process of optimizing products through close collaboration with customers, with testing proceeding smoothly and as planned. The company said that HBM is a customized memory product, which requires optimization in tandem with customers’ needs.
According to Samsung, the tech giant is currently partnering closely with various companies to continuously test technology and performance, and to thoroughly verify the quality and performance of HBM.
Nvidia, on the other hand, declined to comment.
As Nvidia currently dominates the global GPU market for AI applications with a lion’s share of roughly 80%, meeting Nvidia’s standards is undoubtedly critical for HBM manufacturers.
Reuters reported that Samsung has been attempting to pass Nvidia’s tests for HBM3 and HBM3e since last year, while a test of Samsung’s 8-layer and 12-layer HBM3e chips was said to have failed in April.
According to TrendForce’s analysis earlier, NVIDIA’s upcoming B100 or H200 models will incorporate advanced HBM3e, while the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix. SK hynix has been providing HBM3 chips to Nvidia since 2022, Reuters noted.
According to a report from the Financial Times in May, SK hynix has successfully reduced the time needed for mass production of HBM3e chips by 50%, while coming close to achieving its target yield of 80%.
Another US memory giant, Micron, stated in February that its HBM3e consumes 30% less power than its competitors, meeting the demands of generative AI applications. Moreover, the company’s 24GB 8H HBM3e will be part of NVIDIA’s H200 Tensor Core GPUs, breaking the previous exclusivity of SK hynix as the sole supplier for the H100.
Considering major competitors’ progress on HBM3e, if Samsung fails to meet Nvidia’s requirements, the industry and investors may grow more concerned about whether the Korean tech heavyweight will fall further behind its rivals in the HBM market.
(Photo credit: Samsung)
SK hynix has disclosed yield details regarding the company’s 5th generation High Bandwidth Memory (HBM), HBM3e, for the first time. According to a report from the Financial Times, citing Kwon Jae-soon, the head of yield at SK hynix, the memory giant has successfully reduced the time needed for mass production of HBM3e chips by 50%, while coming close to achieving its target yield of 80%.
This is better than the industry’s previous speculation, which estimated the yield of SK Hynix’s HBM3e to be between 60% and 70%, according to a report by Business Korea.
According to TrendForce’s analysis earlier, NVIDIA’s upcoming B100 or H200 models will incorporate advanced HBM3e, while the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix, leading to a supply shortfall in meeting burgeoning AI market demands.
The challenge, however, is the supply bottleneck caused by both CoWoS packaging constraints and the inherently long production cycle of HBM—extending the timeline from wafer initiation to the final product beyond two quarters.
The report by Business Korea noted that HBM manufacturing involves stacking multiple DRAM dies vertically, which presents greater process complexity compared to standard DRAM. Specifically, the yield of the through-silicon via (TSV) process, a critical step for HBM3e, has been low, ranging from 40% to 60%, posing a significant challenge for improvement.
In terms of SK hynix’s future roadmap for HBM, CEO Kwak Noh-Jung announced on May 2nd that the company’s HBM capacity for 2024 and 2025 has almost been fully sold out. According to Business Korea, SK hynix commenced delivery of 8-layer HBM3e products in March and plans to supply 12-layer HBM3e products in the third quarter of this year. The 12-layer HBM4 (sixth-generation) is scheduled for next year, with the 16-layer version expected to enter production by 2026.
(Photo credit: SK hynix)
Memory giants Samsung, SK Hynix, and Micron are all actively investing in high-bandwidth memory (HBM) production. Industry sources cited in a report from Commercial Times indicate that due to capacity crowding effects, DRAM products may face shortages in the second half of the year.
According to TrendForce, the three largest DRAM suppliers are increasing wafer input for advanced processes. Following a rise in memory contract prices, companies have boosted their capital investments, with capacity expansion focusing on the second half of this year. It is expected that wafer input for 1alpha nm and above processes will account for approximately 40% of total DRAM wafer input by the end of the year.
HBM production will be prioritized due to its profitability and increasing demand. Regarding the latest developments in HBM, TrendForce indicates that HBM3e will become the market mainstream this year, with shipments concentrated in the second half of the year.
Currently, SK Hynix remains the primary supplier, along with Micron, both utilizing 1beta nm processes and already shipping to NVIDIA. Samsung, using a 1alpha nm process, is expected to complete qualification in the second quarter and begin deliveries mid-year.
The growing content per unit in PCs, servers, and smartphones is driving up the consumption of advanced process capacity each quarter. Servers, in particular, are seeing the highest capacity increase—primarily driven by AI servers with content of 1.75 TB per unit. With the mass production of new platforms like Intel’s Sapphire Rapids and AMD’s Genoa, which require DDR5 memory, DDR5 penetration is expected to exceed 50% by the end of the year.
As HBM3e shipments are expected to be concentrated in the second half of the year—coinciding with the peak season for memory demand—market demand for DDR5 and LPDDR5(X) is also expected to increase. With a higher proportion of wafer input allocated to HBM production, the output of advanced processes will be limited. Consequently, capacity allocation in the second half of the year will be crucial in determining whether supply can meet demand.
Samsung expects existing facilities to be fully utilized by the end of 2024. The new P4L plant is slated for completion in 2025, and the Line 15 facility will undergo a process transition from 1Y nm to 1beta nm and above.
The capacity of SK Hynix’s M16 plant is expected to expand next year, while the M15X plant is also planned for completion in 2025, with mass production starting at the end of next year.
Micron’s facility in Taiwan will return to full capacity next year, with future expansions focused on the US. The Boise facility is expected to be completed in 2025, with equipment installations following and mass production planned for 2026.
With the expected volume production of NVIDIA’s GB200 in 2025, featuring HBM3e with 192/384GB specifications, HBM output is anticipated to nearly double. Each major manufacturer will invest in HBM4 development, prioritizing HBM in their capacity planning. Consequently, due to capacity crowding effects, there may be shortages in DRAM supply.
(Photo credit: Samsung)
As AI-related semiconductors have been driving the demand for High Bandwidth Memory (HBM), the momentum is now spilling over into the NAND Flash market. According to industry sources cited by Business Korea, competition in the NAND Flash market is intensifying, while memory giants Samsung and SK Hynix are ramping up their efforts to improve the performance and capacity of NAND products.
In April, Samsung confirmed that it has begun mass production of its one-terabit (Tb) triple-level cell (TLC) 9th-generation vertical NAND (V-NAND), which is said to improve bit density by about 50% compared to the 8th-generation V-NAND, with the number of layers reaching 290, according to an earlier report by The Korea Economic Daily.
Based on the report on May 20 by Business Korea, Samsung intends to dominate the AI SSD market with its 9th Generation V-NAND, targeting the development and sampling of ultra-high capacity 64 terabyte (TB) SSDs in the second quarter.
In mid-May, Samsung even revealed its target of releasing advanced NAND Flash with over 1,000 layers by 2030. According to an earlier report by Wccftech, the South Korean memory giant plans to apply new ferroelectric materials to the manufacturing of NAND.
On the other hand, the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK Hynix, leading to a supply shortfall in meeting burgeoning AI market demands. After establishing its leadership in HBM, SK Hynix now reportedly aims to dominate the AI memory market in NAND as well, according to Business Korea.
It is worth noting that SK Hynix recently achieved a breakthrough with the development of “Zoned UFS 4.0” (ZUFS 4.0), an on-device AI mobile NAND solution tailored for AI-capable smartphones, which is scheduled to start mass production in the third quarter, according to TheElec.
(Photo credit: Samsung)
The recovery in demand for PCs and smartphones will take time, halting the upward trend of DRAM prices, which have remained stable for two consecutive months. However, the rapid growth in demand for High Bandwidth Memory (HBM), essential for data center servers and generative AI, is expected to boost future DRAM prices as HBM production ramps up.
The Nikkei News reported on May 18th that the recovery in demand for PCs and smartphones will take time, leading to a halt in the upward trend of DRAM prices used in smartphones, PCs, and data center servers for temporary data storage.
In April 2024, the wholesale price (bulk transaction price) of the benchmark product DDR4 8Gb was around USD 1.95 per unit, and the price of the smaller capacity 4Gb product was around USD 1.50 per unit, both remaining unchanged from the previous month (March 2024) and marking the second consecutive month of stability.
As of February 2024, DRAM prices had risen for four consecutive months. DRAM wholesale prices are negotiated between memory manufacturers and customers monthly or quarterly. Reportedly, approximately 50% of DRAM demand comes from PCs and servers, while around 35% comes from smartphones.
The report indicated that the demand for HBM, essential for generative AI, is rapidly increasing, and the market expects the ramp-up of HBM production to drive future DRAM price increases.
A source cited in the report, an electronics product trader, noted that some major manufacturers have accepted the memory makers’ price hike requests. A PC manufacturer source cited by the report also stated that DRAM wholesale prices from April to June are expected to rise by 5-10% compared to January to March.
Another source cited by the report stated that the facilities required to produce HBM are approximately three times larger than those needed for producing general DRAM. If HBM production increases, the production volume of other DRAMs will decrease, thereby driving up prices. Another source cited in the report stated that supply cannot keep up with demand, and pricing power is currently in the hands of memory manufacturers.
TrendForce, in its latest press release on the HBM sector, pointed out that while new factories are scheduled for completion in 2025, the exact timelines for mass production are still uncertain and depend on the profitability of 2024. This reliance on future profits to fund further equipment purchases reinforces the manufacturers’ commitment to maintaining memory price increases this year.
Additionally, NVIDIA’s GB200, set to ramp up production in 2025, will feature HBM3e 192/384 GB, potentially doubling HBM output. With HBM4 development on the horizon, if there isn’t significant investment in expanding capacity, the prioritization of HBM could lead to insufficient DRAM supply due to capacity constraints.
(Photo credit: SK Hynix)