HBM3e


2024-11-04

[News] SK hynix Introduces World’s First 16-High HBM3E, Providing Samples in Early 2025

South Korean memory giant SK hynix introduced the industry’s first 48GB 16-high HBM3E at the SK AI Summit in Seoul today. The 16-high stack is the world’s highest layer count to date, surpassing the company’s 12-high product, according to its press release.

According to SK hynix CEO Kwak Noh-Jung, though the market for 16-high HBM is expected to open up from the HBM4 generation, SK hynix has been developing the 48GB 16-high HBM3E to secure technological stability, and it plans to provide samples to customers early next year, the press release noted.

In late September, SK hynix announced that it had begun mass production of the world’s first 12-high HBM3E product, with 36GB of capacity.
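A quick arithmetic note on the two announced capacities: the 48GB 16-high stack and the 36GB 12-high stack imply the same per-die density. The sketch below derives this from the figures above; the per-die value is arithmetic, not a figure stated in SK hynix’s press release.

```python
# Per-die capacity implied by the two announced HBM3E stacks
# (derived arithmetic; not stated in SK hynix's press release).
stacks = {"16-high HBM3E": (48, 16), "12-high HBM3E": (36, 12)}

for name, (capacity_gb, layers) in stacks.items():
    per_die_gb = capacity_gb / layers
    print(f"{name}: {per_die_gb:.0f} GB ({per_die_gb * 8:.0f} Gb) per DRAM die")
# Both stacks work out to 3 GB (24 Gb) per die: the 16-high product
# adds layers rather than using denser dies.
```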

Meanwhile, SK hynix is expected to apply the Advanced MR-MUF process, which enabled the mass production of 12-high products, to the production of 16-high HBM3E, while also developing hybrid bonding technology as a backup, Kwak explained.

According to Kwak, SK hynix’s 16-high products deliver an 18% improvement in AI training performance and a 32% improvement in inference performance compared with 12-high products.

Kwak Noh-Jung introduced SK hynix’s 16-high HBM3E during his keynote speech at the SK AI Summit today, titled “A New Journey in Next-Generation AI Memory: Beyond Hardware to Daily Life.” He also shared the company’s vision to become a “Full Stack AI Memory Provider”, that is, a provider with a full lineup of AI memory products in both the DRAM and NAND spaces, through close collaboration with interested parties, the press release notes.

It is worth noting that SK hynix highlighted its plans to adopt a logic process for the base die from the HBM4 generation onward, through collaboration with a top global logic foundry, to provide customers with the best products.

A previous press release in April noted that SK hynix had signed a memorandum of understanding with TSMC to collaborate on producing next-generation HBM and to enhance logic–HBM integration through advanced packaging technology. Through this initiative, the company plans to proceed with the development of HBM4, the sixth generation of the HBM family, slated for mass production from 2026.

To further expand its product roadmap, the memory giant is developing the LPCAMM2 module for PCs and data centers, as well as 1cnm-based LPDDR5 and LPDDR6, taking full advantage of its competitiveness in low-power, high-performance products, according to the press release.

The company is also readying a 6th-generation PCIe SSD, high-capacity QLC-based eSSDs, and UFS 5.0.

As powering AI systems requires a sharp increase in the capacity of memory installed in servers, SK hynix revealed in the press release that it is preparing CXL Fabrics, which enable high capacity by connecting various memories, while also developing ultra-high-capacity eSSDs to store more data in a smaller space at lower power.

SK hynix is also developing technology that adds computational functions to memory to overcome the so-called memory wall. Technologies such as Processing Near Memory (PNM), Processing in Memory (PIM), and Computational Storage, which are essential for processing the enormous amounts of data expected in the future, will transform the structure of next-generation AI systems and the future of the AI industry, according to the press release.


(Photo credit: SK hynix)

Please note that this article cites information from SK hynix.

2024-11-01

[News] Samsung Advances Key HBM Supply, Hints at TSMC Partnership

Samsung Electronics released its third-quarter earnings on October 31, reporting a stronger-than-expected profit despite a substantial decline in profits from its flagship semiconductor business. Notably, Samsung’s senior management emphasized its continued focus on high-end chip production and disclosed progress in major supply deals, including a potential NVIDIA certification for its HBM3E, which could boost performance in the fourth quarter.

According to reports from Commercial Times, Samsung Executive Vice President Jaejune Kim addressed analysts about high-end memory chips used in AI chipsets, stating that while they previously mentioned a delay in HBM3E’s commercialization, they have made meaningful progress in product certification with key clients. As a result, they expect HBM3E sales to improve in the fourth quarter and plan to expand sales to multiple customers.

Though Samsung did not disclose client names, analysts believe this certification likely refers to NVIDIA, which commands 80% of the global AI chip market.

According to Economic Daily News, Samsung reported significant revenue growth in high-bandwidth memory (HBM), DDR5, and server storage products, with expectations for improved performance in its semiconductor business this quarter.

Although demand for mobile and PC memory chips may decline, the growth in AI is expected to sustain robust demand. Demand for AI and data center products, including memory for both AI and traditional servers, is projected to remain strong and stable through next year.

Additionally, Kim stated that the company would flexibly reduce production of traditional DRAM and NAND chips to align with market demand and expedite the shift to advanced process nodes.

The same report from Economic Daily News indicated that Samsung plans to develop and mass-produce HBM4 in the second half of next year. Its memory division will focus on HBM and server SSDs next year, and there are hints of potential collaboration with TSMC to meet the diverse needs of HBM clients.

(Photo credit: Samsung)

Please note that this article cites information from Commercial Times and Economic Daily News.

2024-10-31

[News] Samsung’s Chip Division Sees 40% Profit Drop in Q3 as Sales Hit Record High

Amid concerns over its HBM progress and yield issues on advanced nodes, Samsung has released its full Q3 2024 financial results, with quarterly revenue reaching KRW 79.1 trillion (approximately USD 57.35 billion), an all-time high. However, its semiconductor business remains lackluster, as the DS Division recorded a quarterly operating profit of KRW 3.86 trillion, marking a 40% decline from the previous quarter.

According to a report by CNBC, while demand for memory chips driven by AI and traditional server products provided some support, Samsung noted that “inventory adjustments negatively impacted mobile demand.” The company also highlighted challenges with “the increasing supply of legacy products in China.”

Samsung continues to face challenges in its most advanced wafer foundry processes. According to TrendForce, the company has yet to solidify its reputation as a reliable partner for cutting-edge nodes, which may hinder its ability to secure orders from top IC design houses and potentially delay its efforts to expand capacity.

Losses in Foundry and System Chip Lead to Profit Drop in DS Division, while Memory Remains Strong

On October 31, Samsung Electronics reported Q3 consolidated revenue of KRW 79.1 trillion, an increase of 7% from the previous quarter, on the back of the launch effects of new smartphone models and increased sales of high-end memory products. According to Business Korea, the Q3 revenue exceeded its previous revenue record of KRW 77.78 trillion, set in Q1 2022.

However, operating profit declined to KRW 9.18 trillion, largely due to one-off costs, including the provision of incentives in the Device Solutions (DS) Division, according to its press release.

The DS Division, which encompasses the memory and foundry businesses, posted KRW 29.27 trillion in consolidated revenue and KRW 3.86 trillion in operating profit in the third quarter, marking a roughly 40% drop from the prior quarter’s KRW 6.45 trillion.

According to the Korean Economic Daily, Samsung attributed the weaker profit to higher-than-anticipated one-time expenses totaling around KRW 1.5 trillion, which included employee performance bonuses, as well as escalating losses in its foundry and system chip divisions, each estimated at over KRW 1.5 trillion.

On the other hand, the company noted that its memory chip business performed better than anticipated, with an estimated profit of around KRW 7 trillion for the quarter, the Korean Economic Daily notes.

Memory business sales reached KRW 22.27 trillion, more than doubling from the previous year, driven by increased demand for high-end chips used in AI devices and servers, such as HBM, DDR5, and server SSDs, according to Samsung.

Key Takeaways for 2025 Outlook

In the fourth quarter, Samsung notes that while memory demand for mobile and PC may encounter softness, growth in AI will keep demand at robust levels.

As for the Foundry Business, Samsung claims that the unit successfully met its order targets — particularly in sub-5nm technologies — and released the 2nm GAA process design kit (PDK), enabling customers to proceed with their product designs. It notes that the Foundry Business will strive to acquire customers by improving the process maturity of its 2nm GAA technology.

Looking ahead to 2025, for DRAM, Samsung plans to expand HBM3E sales and increase the share of high-end products such as DDR5 modules with 128GB density or higher for servers, and LPDDR5X for mobile, PC, servers, and beyond. For NAND, it will proactively respond to the high-density trend with QLC-based products, including 64TB and 128TB SSDs, and solidify its leadership in the PCIe Gen5 market by accelerating the technology migration from V6 to V8.

The Foundry Business, on the other hand, aims to expand revenue through ongoing yield improvements in advanced technology while securing major customers through successful 2nm mass production. In addition, integrating advanced nodes and packaging solutions to further develop the HBM buffer die is expected to help acquire new customers in the AI and HPC sectors, according to Samsung.

(Photo credit: Samsung)

Please note that this article cites information from CNBC, Business Korea, Korean Economic Daily and Samsung.

2024-10-18

[News] Samsung Reportedly Mulls 1a DRAM Redesign amid HBM3e Verification Delays

At its previous earnings call in July, Samsung announced an ambitious goal of increasing its HBM sales three to five times in 2H24. However, as it is still struggling to pass verification of its 12-Hi HBM3e products, the company’s prospects for returning to glory in the near term seem rather dim.

According to a report by Korean media outlet ZDNet, the main issue may lie in the core die of the HBM, as the adoption of 1a DRAM is hindering Samsung’s recent efforts to supply HBM3e to NVIDIA.

Notably, an insider cited by the report notes that Samsung’s Vice Chairman Jun Young-hyun, the new head of the Device Solutions (DS) Division, is aware of these issues, so a decision on whether to redesign the 1a DRAM may be made soon.

According to the report, Samsung began mass production of 1a (4th-generation) DRAM, which has a linewidth of approximately 14 nm, as early as the second half of 2021. Notably, the company has sought to enhance the product’s competitiveness by actively adopting advanced technologies such as EUV (extreme ultraviolet) lithography.

ZDNet notes that Samsung applied five EUV layers to its 1a DRAM, which is significantly more than the one layer used by its major competitor, SK hynix.

However, although EUV is advantageous for reducing linewidths compared with the existing ArF (argon fluoride) lithography process, and is therefore supposed to enhance efficiency and lower manufacturing costs, its high technical difficulty has negatively affected the stability of the process, according to the report.

As a result, the cost of Samsung’s 1a DRAM has not decreased as initially anticipated, according to the report, and the resulting yield issues reportedly hinder Samsung’s HBM3e verification progress with NVIDIA.

Previous reports indicate that Samsung had conducted an on-site inspection with NVIDIA regarding the 8-layer HBM3e products at its Pyeongtaek campus. While the inspection itself concluded without any issues, concerns have reportedly been raised as the data processing speed (Gbps) of Samsung’s 8-layer HBM3e is about 10% lower compared to its rivals, according to sources cited by ZDNet.

Both SK hynix and Micron utilize 1b DRAM for their HBM3e core dies, the report notes.

Therefore, industry insiders cited by ZDNet reveal that Samsung has been internally discussing the possibility of redesigning some of the circuits in its 1a DRAM.

However, if Samsung does proceed with the redesign, the product is expected to take at least six months to complete, the report suggests, which means mass production could only begin in the second quarter of next year, making timely supply challenging.


(Photo credit: Samsung)

Please note that this article cites information from ZDNet.

2024-10-17

[News] GPU Frenzy: Memory Giants Battle for HBM3e

The AI wave continues to fuel surging demand for AI chips. Following reports of HBM sellouts and manufacturers ramping up production to meet demand, recent news reveals that Nvidia’s Blackwell architecture GPUs are also in short supply.

Nvidia’s Blackwell GPUs Sold Out for the Next 12 Months

Although Nvidia’s Blackwell architecture GPUs have been delayed until Q4 of this year, the delay has not dampened orders.

According to Tom’s Hardware, Morgan Stanley recently held a three-day meeting in New York with Nvidia CEO Jensen Huang, CFO Colette Kress, and other members of the chipmaker’s management team.

Morgan Stanley reported that Nvidia said orders for Blackwell architecture GPUs are sold out for the next 12 months, and that new customers placing orders now will not receive products until the end of 2025.

Existing customers, including AWS, CoreWeave, Google, Meta, Microsoft, and Oracle, have already purchased all of the Blackwell architecture GPUs that Nvidia and its partner TSMC can produce in the coming quarters.

Industry observers point out that demand for high-performance GPUs, and the AI chip market behind them, remains frenetic, and that competition among major AI chip manufacturers such as Nvidia, AMD, and Intel will become increasingly fierce.

Three Memory Giants Seize HBM3e Opportunities, Highlighting the Importance of 12-hi Products

Driven by the continuous iteration of high-performance AI chips and the expansion of HBM capacity per system, demand for HBM bits continues to grow.

At the same time, as mainstream GPU products from Nvidia and AMD iterate and HBM specifications change, the market will gradually upgrade from HBM3 to HBM3e, and the three major memory manufacturers are actively seizing HBM3e opportunities.

According to TrendForce, annual growth in HBM demand bits will be close to 200% in 2024 and will double again in 2025.

TrendForce estimates that, driven by the active adoption of new-generation HBM products by AI platforms, more than 80% of HBM demand bits will come from HBM3e-generation products in 2025, with 12-hi accounting for more than half, making it the mainstream product that major AI manufacturers will compete for in the second half of next year, followed by 8-hi.

Samsung, SK hynix, and Micron submitted their first batches of HBM3e 12-hi samples in the first half of 2024 and the third quarter, respectively, and the products are currently in the verification stage. Among them, SK hynix and Micron are progressing faster and are expected to complete verification by the end of this year.
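As a rough sanity check on the TrendForce growth figures cited above (close to 200% demand-bit growth in 2024, doubling again in 2025), the implied multiple over a 2023 baseline can be computed. The 2023 base value of 1.0 below is a normalization assumption for illustration, not a figure from the report.

```python
# Normalize 2023 HBM demand bits to 1.0 (illustrative assumption only).
base_2023 = 1.0

# "Close to 200%" annual growth in 2024 means roughly tripling.
demand_2024 = base_2023 * (1 + 2.00)

# "Double again" in 2025.
demand_2025 = demand_2024 * 2

print(demand_2024)  # 3.0
print(demand_2025)  # 6.0 -> roughly 6x the 2023 level by 2025
```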

(Photo credit: Nvidia)

Please note that this article cites information from Tom’s Hardware.
