News
Though it has yet to disclose actual progress on its 12-Hi HBM3e verification with AI chip giant NVIDIA, Samsung is rumored to be lowering its target for maximum HBM production capacity (CAPA) by the end of 2025, which echoes speculation about delays in HBM3e mass production for key customers, according to Korean media outlet ZDNet.
It is worth noting that the struggling memory giant reportedly plans to lower the capacity target by over 10%, from the initial monthly goal of 200,000 units to 170,000 units by the end of next year, ZDNet suggests, as the company now takes a more cautious approach to facility investment.
According to the report, as of the second quarter, in order to narrow the gap with competitors such as SK hynix, Samsung had planned to increase HBM production capacity to 140,000–150,000 units per month by the end of this year, and up to 200,000 units per month by the end of next year.
At the Q2 earnings call in late July, Samsung disclosed an ambitious roadmap for its HBM products. According to a previous report from Business Korea, Samsung expects the share of HBM3e chips within its HBM sales to surpass the mid-10 percent range in the third quarter and to grow quickly to 60% by the fourth quarter. The company also projects its HBM sales to increase three to five times in the second half of 2024.
However, the picture changed a few months later. Citing a source familiar with the situation, the ZDNet report notes that Samsung has decided to slow the pace of facility investment due to the underperformance of its HBM business. Further discussions on investment will proceed only once its HBM3e supply to NVIDIA is confirmed, the source indicates.
According to the analysis by TrendForce, achieving stable yields for HBM3 and HBM3e 8-Hi products required at least two quarters of learning in previous generations. Based on this precedent, the learning curve for HBM3e 12-Hi is unlikely to shorten significantly, especially with the rapid market shift toward the 12-Hi version.
Furthermore, key products such as NVIDIA’s B200 and GB200, as well as AMD’s MI325 and MI350, will adopt HBM3e 12-Hi. The high cost of these systems will also demand strict stability, complicating mass production and adding another layer of uncertainty.
Ahead of its Q3 earnings call, Samsung already warned that its profit would fall short of market expectations, while issuing an apology for the disappointing performance. Samsung’s operating profit for the third quarter is expected to reach 9.1 trillion won, below the 10 trillion won anticipated by the market.
Another report by The Korea Times notes that the market expected SK hynix to see a substantial increase in operating profit driven by strong HBM demand, potentially outpacing Samsung’s semiconductor division.
To boost its competitiveness in the semiconductor industry, Samsung intends to assign research and development staff directly to its manufacturing facilities. This initiative seeks to enhance communication and collaboration with on-site production teams, according to a report by SmBom.
(Photo credit: Samsung)
News
Samsung reported its third-quarter earnings today, and according to The Korea Economic Daily, the company’s operating profit was initially expected to exceed 10 trillion won, but the actual performance fell short of that target.
Reuters also reported that Samsung Electronics warned its third-quarter profit would fall short of market expectations, issuing an apology for the disappointing performance. The tech giant has been lagging behind its rivals in supplying high-end chips to Nvidia amid the booming AI market.
The Korea Economic Daily noted that Samsung’s operating profit for the third quarter reached 9.1 trillion won, a 274.5% increase from the same period last year. However, this figure still fell significantly short of the expected 10 trillion won. Sales for the quarter amounted to 79 trillion won, up 17.2% year-on-year. The Device Solutions (DS) division, responsible for semiconductor operations, saw its performance decline compared to the previous quarter due to one-off costs, including incentive provisions.
Although demand for server memory chips, including high-bandwidth memory (HBM), remained stable, factors such as inventory adjustments by mobile clients and increased supply of legacy products from Chinese memory manufacturers negatively impacted performance, exacerbated by one-time costs and exchange rate effects. Additionally, a weaker-than-expected recovery in demand for Samsung’s flagship conventional DRAM products, particularly due to sluggish smartphone and PC markets, further hindered its results.
In the same report by The Korea Economic Daily, it was noted that in the HBM sector, Samsung has yet to make significant progress. The commercialization of its fifth-generation HBM, HBM3E, has been delayed, with the product still undergoing quality tests by NVIDIA. However, the Device Experience (DX) division saw improved performance due to strong flagship smartphone sales, and Samsung Display benefited from new product launches by key customers.
The Korea Economic Daily highlighted that analysts had forecast Samsung’s third-quarter operating profit to surpass 10 trillion won and sales to reach around 81 trillion won; both actual figures missed these projections.
Before Samsung’s earnings announcement, The Korea Times had reported that the market expected SK Hynix to see a substantial increase in operating profit driven by strong HBM demand, potentially outpacing Samsung’s semiconductor division.
Regarding Samsung’s HBM3E validation, TrendForce noted in a September press release that while Samsung entered the HBM3E market later, the company recently completed validation and has begun shipping its HBM3E 8Hi units. According to TrendForce’s latest research, Samsung, SK Hynix, and Micron submitted their first HBM3E 12Hi samples in the first half and third quarter of 2024, with ongoing validation processes. SK Hynix and Micron are progressing faster and are expected to complete validation by the end of this year.
(Photo credit: Samsung)
News
Is winter really coming for the memory sector? Despite an earlier report by Morgan Stanley warning of an AI bubble, U.S. memory giant Micron has revealed financial guidance that beats market expectations, projecting its fiscal first-quarter revenue to reach USD 8.7 billion, higher than the average analyst estimate of USD 8.32 billion, Bloomberg notes.
Meanwhile, Micron expects a significant increase in gross margin to around 39.5% and adjusted earnings of USD 1.74 per share, exceeding analysts’ estimate of USD 1.65, according to Reuters.
The growth momentum will mainly rely on soaring AI-driven demand for HBM. Earlier in June, Micron noted that its HBM chips had been fully booked for 2024 and 2025.
In terms of the outlook for the overall HBM market, Micron’s view evidently contradicts that of Morgan Stanley, as it expects the HBM total available market (TAM) to grow from approximately USD 4 billion in 2023 to over USD 25 billion in 2025.
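For scale, that forecast implies the HBM market multiplying more than sixfold in two years. A quick illustrative check of the implied compound annual growth rate, using only the figures quoted above:

```python
# Implied compound annual growth rate (CAGR) of Micron's HBM TAM forecast:
# roughly USD 4 billion in 2023 to over USD 25 billion in 2025 (two years).
tam_2023 = 4e9
tam_2025 = 25e9
years = 2

cagr = (tam_2025 / tam_2023) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # -> about 150% per year
```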
The company is also making strides with HBM in the coming year. According to its press release, Micron expects its HBM, high-capacity D5 and LP5 solutions, and data center SSD products to deliver multiple billions of dollars in revenue in fiscal 2025.
The U.S. memory giant also expects its HBM market share to become commensurate with its overall DRAM market share sometime in 2025.
According to TrendForce, Micron ranked third in DRAM revenue in Q2 2024 with a market share of 19.6%, behind Samsung’s 42.9% and SK hynix’s 34.5%.
Regarding its latest development in HBM, after its 8-hi HBM3E entered mass production in February, Micron confirms that it has started shipping production-capable HBM3E 12-hi 36GB units to key industry partners to enable qualifications across the AI ecosystem. The company states that its HBM3E 12-hi 36GB delivers 20% lower power consumption than competitors’ HBM3E 8-hi 24GB solutions while providing 50% higher DRAM capacity.
The company expects to ramp its 12-hi HBM3E in early 2025 and to increase the 12-hi share of its overall shipments throughout the year.
According to a previous report by Tom’s Hardware, the new products are reportedly designed for cutting-edge processors used in AI and high-performance computing (HPC) workloads, including NVIDIA’s H200 and B100/B200 GPUs.
Micron delivered a strong finish to fiscal year 2024, with fiscal Q4 revenue at the high end of its guidance range and gross margins and earnings per share (EPS) above the high end of its guidance ranges.
In fiscal Q4, Micron’s revenue jumped 93% YoY to USD 7.75 billion. Its EPS came in at USD 1.18, a notable turnaround from the loss of USD 1.07 per share in the same period of 2023. In addition, it achieved record-high revenues in NAND and in its storage business unit.
Micron’s fiscal 2024 revenue grew over 60%, with company gross margins expanding by over 30 percentage points, and the company achieved revenue records in data center and in automotive, according to its press release.
(Photo credit: Micron)
News
SK Hynix announced today that it has commenced mass production of the world’s first 12-layer HBM3E product with 36GB of capacity, the largest for any HBM currently available, according to the company.
SK Hynix stated that it plans to deliver these mass-produced units to customers by year-end, marking another technological milestone just six months after shipping its 8-layer HBM3E product in March.
The company also emphasized that it remains the only firm globally to have developed and supplied the entire HBM lineup, from HBM1 to HBM3E, since debuting the world’s first HBM in 2013.
The 12-layer HBM3E meets the highest global standards in speed, capacity, and stability, all critical for AI memory, SK hynix said. The memory’s operational speed has been increased to 9.6 Gbps. According to the company, a single GPU paired with four of these HBM3E units can read all 70 billion parameters of an AI model like ‘Llama 3 70B’ 35 times per second.
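The 35-reads-per-second figure checks out as a straightforward bandwidth calculation. Below is a minimal sketch, assuming the standard 1024-bit HBM3E interface per stack and 2-byte (FP16) weights, neither of which is stated in the article:

```python
# Back-of-the-envelope check of the "35 times per second" claim.
# Assumptions (not from the article): 1024-bit interface per HBM3E stack,
# FP16 (2-byte) weights for the 70B-parameter model.

pin_speed_gbps = 9.6   # per-pin data rate quoted by SK hynix
pins_per_stack = 1024  # standard HBM interface width (assumption)
stacks = 4             # four HBM3E units per GPU, per the article

bandwidth_gbs = pin_speed_gbps * pins_per_stack / 8 * stacks  # ~4915 GB/s

model_size_gb = 70e9 * 2 / 1e9  # 70B params x 2 bytes = 140 GB (assumption)

print(f"Full-model reads per second: {bandwidth_gbs / model_size_gb:.1f}")
# -> 35.1, matching the figure SK hynix cites
```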
SK hynix has boosted capacity by 50% by stacking 12 layers of 3GB DRAM chips at the same overall thickness as the previous 8-layer product. To achieve this, each chip was made 40% thinner and stacked using through-silicon via (TSV) technology.
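The capacity and thickness figures are internally consistent, as a quick illustrative check shows (assuming the previous 8-layer product also used 3GB dies, as its 24GB capacity implies):

```python
# Capacity: 12 layers of 3GB dies vs. the previous 8-layer stack.
new_capacity = 12 * 3  # 36 GB
old_capacity = 8 * 3   # 24 GB
print(new_capacity / old_capacity - 1)  # 0.5 -> the quoted 50% boost

# Thickness: with each die 40% thinner (0.6x the old die height), the
# total silicon height of 12 layers stays within that of the old 8 layers.
print(f"{12 * 0.6:.1f} vs {8 * 1.0:.1f}")  # 7.2 vs 8.0 (relative units)
```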
By employing its advanced MR-MUF process, SK Hynix claims to have resolved structural challenges posed by stacking thinner chips. This allows for 10% better heat dissipation and enhanced stability and reliability through improved warpage control.
“SK hynix has once again broken through technological limits, demonstrating our industry leadership in AI memory,” said Justin Kim, President (Head of AI Infra) at SK hynix. “We will continue our position as the No.1 global AI memory provider as we steadily prepare next-generation memory products to overcome the challenges of the AI era.”
(Photo credit: SK Hynix)
News
Global Unichip Corp. (GUC), a leading provider of advanced ASIC solutions, announced that its 3nm HBM3E Controller and PHY IP have been adopted by a major cloud service provider and several high-performance computing (HPC) companies. The cutting-edge ASIC is expected to tape out this year, featuring the latest 9.2Gbps HBM3E memory technology.
In the same announcement, GUC highlighted its active collaboration with HBM suppliers like Micron, stating it is developing HBM4 IP for next-generation AI ASICs.
GUC noted that its joint efforts with Micron have demonstrated the ability of GUC’s HBM3E IP to achieve 9.2Gbps with Micron’s HBM3E on both CoWoS-S and CoWoS-R technologies. Test chip results from GUC show successful PI and SI outcomes, with excellent eye margins across temperature and voltage variations at these speeds.
Moreover, when GUC’s HBM3E IP is integrated with Micron’s HBM3E timing parameters, it improves effective bus utilization, further boosting overall system performance.
“We are thrilled to see our HBM3E Controller and PHY IP being integrated into CSP and HPC ASICs,” said Aditya Raina, CMO of GUC. “This adoption underscores the robustness and advantages of our HBM3E solution, which is silicon-proven and validated across multiple advanced technologies and major vendors. We look forward to continuing our support for various applications, including AI, high-performance computing, networking, and automotive.”
“Memory is an integral part of AI servers and foundational to the performance and advancement of data center systems,” said Girish Cherussery, senior director of Micron’s AI Solutions Group. “Micron’s best-in-class memory speeds and energy efficiency greatly benefit the increasing demands of Generative AI workloads, such as large language models like ChatGPT, sustaining the pace of AI growth.”
(Photo credit: GUC)