News
According to a report from Reuters on July 4, the consensus of 27 analysts compiled by LSEG SmartEstimate projects that Samsung Electronics’ operating profit for Q2 2024 (ended June 30) will skyrocket by 1,213% year over year, from KRW 670 billion to KRW 8.8 trillion (roughly USD 6.34 billion), its highest since Q3 2022. The surge is driven by booming demand for AI technology and the resulting rebound in memory prices.
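As a quick sanity check (an illustrative calculation, not from the report), the 1,213% growth figure follows directly from the two profit numbers:

```python
# Sanity-check the reported YoY growth rate (illustrative only).
q2_2023_profit_krw = 0.67e12   # KRW 670 billion
q2_2024_profit_krw = 8.8e12    # KRW 8.8 trillion

growth_pct = (q2_2024_profit_krw / q2_2023_profit_krw - 1) * 100
print(f"YoY operating profit growth: {growth_pct:.0f}%")  # ≈ 1213%
```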
Other memory giants are also upbeat about their outlook. Take Micron as an example: regarding the AI frenzy, Micron CEO Sanjay Mehrotra noted that in the data center sector, rapidly growing AI demand enabled the company to grow its revenue by over 50% on a sequential basis.
Mehrotra is also confident that Micron can deliver a substantial revenue record in fiscal 2025, with significantly improved profitability underpinned by its ongoing portfolio shift to higher-margin products.
On the other hand, SK Group also stated that by 2026, the group will invest KRW 80 trillion in AI and semiconductors, while continuing to streamline its businesses to increase profitability and return value to shareholders.
Read more
(Photo credit: Samsung)
News
In early June, NVIDIA CEO Jensen Huang revealed that Samsung’s High Bandwidth Memory (HBM) was still working through the certification process but was one step away from beginning supply. On July 4, a report from Korean media outlet Newdaily indicated that Samsung has finally obtained approval from the GPU giant for the qualification of its 5th-generation HBM, HBM3e. Samsung is expected to soon proceed with the subsequent procedures to officially start mass production for HBM supply, the report noted.
Citing sources from the semiconductor industry, the report stated that Samsung recently received the HBM3e quality test PRA (Product Readiness Approval) notification from NVIDIA. It is expected that negotiations for supply will commence afterward.
However, just one hour after the report that Samsung’s HBM3e had passed NVIDIA’s tests, another media outlet, Hankyung, noted that Samsung had denied the rumor, clarifying that it is “not true” and that the company is still undergoing quality assessments.
TrendForce reports that Samsung is still collaborating with NVIDIA and other major customers on the qualifications for both 8-hi and 12-hi HBM3e products. The successful qualification mentioned in the article was only an internal achievement for Samsung. Samsung anticipates that its HBM3e qualification will be partially completed by the end of 3Q24.
Per a previous report from Reuters, Samsung has been attempting to pass NVIDIA’s tests for HBM3 and HBM3e since last year, while a test of Samsung’s 8-layer and 12-layer HBM3e chips was said to have failed in April.
According to the latest report from Newdaily, Samsung’s breakthrough came about a month after the memory heavyweight sent executives in charge of HBM and memory development to the U.S. at NVIDIA’s request. Previously, it was reported that Samsung had failed to pass the quality test as scheduled due to issues such as overheating.
The report further stated that while supplying HBM to NVIDIA is crucial from Samsung’s perspective, NVIDIA is equally eager to secure more HBM, given the overwhelming demand for AI semiconductors and the impending mass production of its next-generation GPU, Rubin, which significantly increases HBM usage.
According to the report, Samsung is expected to eliminate uncertainties in HBM and start full-scale mass production, giving a significant boost to its memory business. There are also suggestions that its HBM performance could see a quantum leap starting in the second half of the year.
On the other hand, Samsung’s major rival SK hynix is the primary supplier of NVIDIA’s HBM3 and HBM3e. According to an earlier TrendForce analysis, NVIDIA’s upcoming B100 and H200 models will incorporate advanced HBM3e, while the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix.
According to a report from the Financial Times in May, SK hynix has successfully reduced the time needed for mass production of HBM3e chips by 50% and is close to achieving the target yield of 80%.
Read more
(Photo credit: Samsung)
Insights
According to TrendForce’s latest memory spot price trend report, the spot price of DRAM has finally seen a slight rise as supply of DDR4 and DDR5 tightens. Samsung has allocated more of its 1alpha nm production capacity to HBM manufacturing, which is driving up DDR5 prices. As for NAND Flash, transactions within the spot market remain sluggish. Details are as follows:
DRAM Spot Price:
In the spot market, the supply of used DDR4 chips stripped from decommissioned modules has decreased slightly. Moreover, spot prices of DDR4 chips had already dropped to a low of US$0.9 at the end of 2023, so they have rebounded modestly in recent weeks. However, demand for consumer electronics has not rebounded significantly, so the price increase for DDR4 chips is expected to be limited. As for DDR5 products, supply has tightened primarily because Samsung has allocated more of its 1alpha nm production capacity to HBM manufacturing. Additionally, there have been isolated cases of buyers requesting quotes for DDR5 products recently. Consequently, prices of DDR5 products have registered a slight rise. The average spot price of mainstream chips (i.e., DDR4 1Gx8 2666MT/s) has risen by 1.07% from US$1.875 last week to US$1.895 this week.
NAND Flash Spot Price:
Transactions within the spot market remain sluggish, and the occasional rush order is unable to support price increases. It is worth noting that spot traders and several module houses are competing for orders with sporadic price cuts under inventory and cash-flow pressure. On the whole, module houses are still actively seeking buyer orders in hopes of inventory-replenishment demand during the traditional peak season in 3Q24. Spot prices of 512Gb wafers have dropped by 0.33% this week to US$3.291.
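The weekly percentage changes quoted in this section can be reproduced from the listed prices. A minimal sketch, using only the figures cited above:

```python
# Reproduce the weekly spot-price changes cited above.
ddr4_last, ddr4_this = 1.875, 1.895   # DDR4 1Gx8 2666MT/s, US$
ddr4_change = (ddr4_this - ddr4_last) / ddr4_last * 100
print(f"DRAM weekly change: +{ddr4_change:.2f}%")   # ≈ +1.07%

# For NAND, back out last week's price from the quoted 0.33% drop.
nand_this, nand_drop_pct = 3.291, 0.33   # 512Gb wafer, US$
nand_last = nand_this / (1 - nand_drop_pct / 100)
print(f"Implied 512Gb wafer price last week: US${nand_last:.3f}")  # ≈ US$3.302
```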
Press Releases
As artificial intelligence (AI) technology enjoys rapid advances, the demand for AI chips is skyrocketing, driving continuous improvements in advanced packaging and HBM (High Bandwidth Memory) technology, which is expected to benefit the silicon wafer industry.
Recently, Doris Hsu, Chairperson of GlobalWafers, noted that the HBM chips AI requires, such as HBM3 and the upcoming HBM4, are built by stacking dies, with the number of layers increasing from 12 to 16. Additionally, a base wafer is required underneath the stack, which adds to silicon wafer consumption.
Previously, it was reported that there is a severe global shortage of HBM amid the AI boom, with memory makers’ HBM production capacity for this year and next already sold out; they continue to ramp up capital investment and expand HBM production. According to industry insiders, compared with memory technologies of the same capacity and process, such as DDR5, an HBM chip consumes 35-45% more wafer area. Meanwhile, the complexity of HBM manufacturing processes leads to a yield rate 20-30% lower than that of DDR5, and a lower yield rate means fewer qualified chips can be produced from the same wafer area. Together, these two factors imply that more silicon wafers are needed to meet HBM production demands.
Apart from memory, innovations in advanced packaging technology also benefit silicon wafer demand. Hsu mentioned that advanced packaging now requires more polished wafers than before: packaging has become three-dimensional, and the structures and processes have changed, meaning some packages may require twice as many wafers as before. As advanced packaging capacity comes online next year, the number of wafers needed will grow even more significantly.
CoWoS (Chip on Wafer on Substrate), a leading advanced packaging technology, is currently much sought after, with demand outstripping supply.
As per TrendForce’s survey, NVIDIA’s B series, including GB200, B100, and B200, will consume more CoWoS capacity. TSMC is also increasing its annual CoWoS capacity for 2024, with monthly capacity expected to approach 40k by the end of this year, an over 150% increase compared to 2023. The planned total capacity for 2025 could nearly double, and the demand from NVIDIA is expected to account for more than half.
Industry insiders pointed out that as advanced semiconductor processes developed in the past, shrinking die sizes brought down wafer consumption. Now, driven by AI, the three-dimensionality of packaging is increasing wafer usage, thereby boosting the silicon wafer industry. However, it is important to note that while silicon wafers are experiencing a boom, the development of HBM and advanced packaging technologies imposes higher requirements on wafer quality, flatness, and purity. This will also prompt silicon wafer manufacturers to make corresponding adjustments to cope with the AI trend.
Read more
News
After Micron announced the construction of two new fabs in the U.S. in 2022, the memory giant has now provided more details on their production timeline. According to information the company disclosed in its Q3 FY24 financial report and conference call, the fabs in Idaho and New York are targeted to start operations between 2026 and 2029, a report from AnandTech noted.
“Fab construction in Idaho is progressing well, and we are diligently working to complete the regulatory and permitting processes in New York,” said Sanjay Mehrotra, CEO of Micron, during the company’s conference call with investors and financial analysts. However, the company admits that the Idaho fab will not contribute meaningful supply until FY27, while the New York fab is not expected to contribute to bit supply growth until FY28 or later.
AnandTech further noted that as Micron’s fiscal year 2027 begins in September 2026, the new fab near Boise, Idaho, will likely commence operations between September 2026 and September 2027, with the New York fab to follow. In other words, Micron’s U.S. memory fabs are projected to start operations between late 2026 and 2029.
According to an earlier report by Bloomberg, Micron is expected to receive over USD 6 billion in funding through the “Chips Act” from the Department of Commerce to assist with the costs of local factory projects, as part of efforts to bring semiconductor production back to U.S. soil.
Though its U.S. fabs may not start operations soon, Micron has confirmed strong momentum from HBM, saying that its HBM production capacity is fully booked through 2025, according to another report by TheElec. The company is the second memory giant to make such a statement, after SK hynix.
Micron says it expects to generate “several hundred million dollars” of HBM revenue in FY24 and “multiple $Bs” in HBM revenue in FY25. The company has already sampled its 12-high HBM3E product and expects to ramp it into high-volume production in 2025, and it is confident it can maintain its technology leadership with HBM4 and HBM4E.
To support strong market demand and to prepare for mass production at its U.S. fabs, Micron expects to increase its capital spending materially next year, with FY25 capex in the mid-30s percent range of revenue. The spending will support HBM assembly and test equipment, fab and back-end facility construction, and technology transition investments to meet demand growth, the company said.
Micron expects its average quarterly capex in FY25 to be meaningfully above the fiscal Q4 2024 level of USD 3 billion, implying annual capex of around USD 12 billion, roughly 50% YoY growth compared with the USD 8 billion in FY24.
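The implied FY25 capex figure follows from annualizing the quoted quarterly run rate. A simple sketch (illustrative; it treats USD 3 billion per quarter as a floor rather than Micron's actual guidance):

```python
# Annualize the quoted quarterly capex run rate and compare YoY (illustrative).
q4_fy24_quarterly_capex_usd = 3e9    # USD 3 billion per quarter
fy24_capex_usd = 8e9                 # USD 8 billion total in FY24

fy25_capex_usd = q4_fy24_quarterly_capex_usd * 4      # 4 quarters at the run rate
yoy_growth_pct = (fy25_capex_usd / fy24_capex_usd - 1) * 100
print(f"Implied FY25 capex: USD {fy25_capex_usd / 1e9:.0f}B "
      f"({yoy_growth_pct:.0f}% YoY)")  # ≈ USD 12B, 50% YoY
```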
Read more
(Photo credit: Micron)