HBM


2023-12-08

[News] Memory Titans Vie for Control in HBM Tech, Who Will Shape the Next-Gen?

Market reports suggest Nvidia’s new product release cycle has shortened from two years to one, sparking intense competition among major memory companies in the realm of next-gen High Bandwidth Memory (HBM) technology. Samsung, SK Hynix, and Micron are fervently competing, with SK Hynix currently holding the dominant position in the HBM market. However, Micron and Samsung are strategically positioned and poised for a potential overtake, as reported by TechNews.

Current Status of the HBM Industry

SK Hynix made a breakthrough in 2013 by successfully developing and mass-producing HBM using the Through Silicon Via (TSV) architecture. In 2019, it achieved success with HBM2E, maintaining an overwhelming advantage in the HBM market. According to the latest research from TrendForce, Nvidia plans to partner with more HBM suppliers. Samsung, one of those suppliers, is anticipated to complete verification of its HBM3 (24GB) with NVIDIA by December this year.

Regarding HBM3e progress, Micron, SK Hynix, and Samsung provided 8-layer (24GB) samples to Nvidia in July, August, and October, respectively, with the fastest verification expected by year-end. All three major players anticipate completing verification in the first quarter of 2024.

As for HBM4, the earliest launch is expected in 2026, with stacks increasing from the existing 12 layers to 16 layers. HBM4 will likely adopt a 2048-bit connection interface, driving demand for the new “Hybrid Bonding” stacking method. The 12-layer HBM4 product is set to launch in 2026, followed by the 16-layer product in 2027.

Navigating HBM4, the New Technologies and Roadmaps of Memory Industry Leaders

SK Hynix

According to reports from Business Korea, SK Hynix is preparing to adopt “2.5D Fan-Out” packaging for the next-generation HBM technology. This move aims to enhance performance and reduce packaging costs. This technology, not previously used in the memory industry but common in advanced semiconductor manufacturing, is seen as having the potential to “completely change the semiconductor and foundry industry.” SK Hynix plans to unveil research results using this packaging method as early as next year.

The 2.5D Fan-Out packaging technique involves arranging two DRAM dies horizontally and assembling them like regular chips. The absence of a substrate beneath the chips allows for thinner packages, significantly reducing the thickness when installed in IT equipment. At the same time, this technique bypasses the Through Silicon Via (TSV) process, providing more Input/Output (I/O) options and lowering costs.

According to their previous plan, SK Hynix aims to mass-produce the sixth-generation HBM (HBM4) as early as 2026. The company is also actively researching “Hybrid Bonding” technology, likely to be applied to HBM4 products.

Currently, HBM stacks are placed on an interposer next to GPUs and connected through it. SK Hynix’s new goal, however, is to eliminate the interposer completely, placing HBM4 directly on GPUs from companies like Nvidia and AMD, with TSMC as the preferred foundry.

Samsung

Samsung is researching the application of photonics in HBM technology’s interposer layer, aiming to address challenges related to heat and transistor density. Yan Li, Principal Engineer in Samsung’s advanced packaging team, shared insights at the OCP Global Summit in October 2023.

(Image: Samsung)

According to Samsung, the industry has made significant strides in integrating photonics with HBM through two main approaches. One involves placing a photonics interposer between the bottom packaging layer and the top layer containing the GPU and HBM, acting as a communication layer. However, this method is costly, requiring an interposer and photonic I/O for both the logic chips and the HBM.

(Image: Samsung)

The alternative approach separates the HBM memory module from the package, connecting it directly to the processor using photonics. Rather than dealing with the complexity of packaging, the HBM module is decoupled from the chip itself and linked to the logic IC via photonics technology. This not only reduces manufacturing and packaging costs for both HBM and logic ICs, but also eliminates the need for internal digital-to-optical conversions in the circuitry. However, careful attention is still required to address heat dissipation.

Micron

As reported by Tom’s Hardware, Micron’s 8-layer HBM3e (24GB) is expected to launch in early 2024, contributing to improved AI training and inference performance. The 12-layer HBM3e (36GB) chip is expected to debut in 2025.

Micron is working on HBM4 and HBM4E along with other companies. The required bandwidth is expected to exceed 1.5 TB/s. Micron anticipates launching 12-layer and 16-layer HBM4 with capacities of 36GB to 48GB between 2026 and 2027. After 2028, HBM4E will be introduced, pushing the maximum bandwidth beyond 2 TB/s and increasing stack capacity to 48GB to 64GB.
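As a back-of-the-envelope check, peak per-stack bandwidth is simply interface width times per-pin data rate. The sketch below shows how the quoted figures follow from a 1024-bit HBM3E interface and a 2048-bit HBM4 interface; the per-pin rates are illustrative assumptions (not from this article), and `stack_bandwidth_tbps` is a hypothetical helper.

```python
def stack_bandwidth_tbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in TB/s: width (bits) x per-pin rate (Gb/s),
    divided by 8 bits per byte and 1000 GB per TB."""
    return bus_width_bits * pin_rate_gbps / 8 / 1000

# HBM3E: 1024-bit interface; an assumed ~9.2 Gb/s per pin lands near 1.2 TB/s
print(round(stack_bandwidth_tbps(1024, 9.2), 2))  # 1.18

# HBM4: doubling the interface to 2048 bits clears 1.5 TB/s even at a
# slower assumed 6.0 Gb/s per pin
print(round(stack_bandwidth_tbps(2048, 6.0), 2))  # 1.54
```

This is why the generational jump in interface width matters: bandwidth can grow substantially even without pushing per-pin signaling rates higher.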

Micron is taking a different approach from Samsung and SK Hynix by not integrating HBM and logic chips into a single die, as suggested by Chinese media outlet Semiconductor Industry Observation. This difference in strategy may lead to distinct technical paths, and Micron might advise Nvidia, Intel, and AMD that relying solely on chips from a single company carries greater risk.

(Image: Micron)

TSMC Aids Memory Stacking

Currently, TSMC’s 3DFabric Alliance collaborates closely with major memory partners, including Micron, Samsung, and SK Hynix. This collaboration supports the rapid growth of HBM3 and HBM3e, as well as 12-layer HBM3/HBM3e packaging, providing greater memory capacity to drive the development of generative AI.

(Image: TSMC)

Please note that this article cites information from TechNews, Business Korea, the OCP Global Summit, Tom’s Hardware, and Semiconductor Industry Observation.

(Image: SK Hynix)


2023-11-24

[News] Beyond Price Hikes, What Lies Ahead for the Memory Market?

Stepping into the fourth quarter of 2023, the memory market is witnessing a comprehensive uptick in DRAM and NAND Flash prices. This surge, attributed to the gradual impact of companies’ production cuts and sustained robust demand in specific application markets, is poised to continue into the first quarter of the following year.

TrendForce’s analysis reveals an estimated 13-18% increase in Mobile DRAM contract prices for the fourth quarter, while eMMC and UFS NAND Flash contract prices are expected to rise by about 10-15%. Looking ahead to the first quarter of 2024, the upward trajectory in overall memory prices is anticipated to persist. Contract prices for Mobile DRAM and NAND Flash (eMMC, UFS) are expected to continue ascending, contingent on whether companies uphold a conservative production strategy and whether tangible end-consumer demand materializes.

The memory market, coming out of its challenging phase, is not just experiencing increases in prices but is also anticipated to gain momentum from various factors contributing to its revival.

AI-Driven Surge in Smartphone Memory Capacities

According to reports from Wccftech, a notable trend in 2024 is the rise of terminal AI, now integrated into various chipsets like Snapdragon 8 Gen 3, Dimensity 9300, and Exynos 2400. Smartphones with AI demand more memory, with the expectation that Android phones featuring built-in AI will require a minimum of 20GB RAM.

While 8GB RAM remains the standard for Android phones, there are now phones on the market boasting higher RAM capacities than most laptops or PCs, though this has yet to become ubiquitous. Industry experts suggest that to smoothly execute AI image features in the future, Android phones will need at least 12GB of RAM; considering AI applications and other features, phones will require over 20GB of RAM for seamless operation.

Given that numerous Android phone brands are actively investing in AI, 2024 is poised to make AI a focal point for devices. Consequently, the industry underscores that as RAM requirements rise, hardware specifications become more crucial than ever for modern AI devices.

Noteworthy Growth in DDR5 Market Demand

Industry experts anticipate significant growth in demand for the DDR5 market, fueled by decreasing prices and the continuous improvement in companies’ yields.

As a high-value-added DRAM, DDR5 continues to garner favor from major players. Micron’s recent announcement of DDR5 memory based on its 1β technology, boasting speeds of up to 7200 MT/s, signifies a push into the data center and PC markets.

Recently, Micron also introduced a 128GB DDR5 RDIMM memory module utilizing 32Gb chips. This series boasts speeds of up to 8000 MT/s and is suitable for servers and workstations. Employing Micron’s 1β technology, the series delivers a 24% improvement in energy efficiency and a 16% reduction in latency. Furthermore, Micron plans to launch models with speeds of 4800 MT/s, 5600 MT/s, and 6400 MT/s in 2024, with a model featuring a speed of 8000 MT/s to follow.

Samsung, for its part, is reported to be expanding its DDR5 production line. Given the high value of DDR5 and its adoption in the PC and server markets, this year is considered a “year of large-scale adoption of DDR5.”

Improvement in HBM Supply Situation

Similar to DDR5, HBM, a high-value-added DRAM, has attracted significant attention this year. Fueled by the AI trend, the demand for the HBM market has surged, leading to an expansion in HBM production capacity.

TrendForce’s research indicates that looking ahead to 2024, the HBM sufficiency ratio is expected to improve, shifting from -2.4% in 2023 to 0.6%. With the AI boom driving demand for AI chips in 2023 and 2024, companies are increasing HBM capacity, anticipating a significant improvement in the HBM supply in 2024.

In terms of specifications, as the performance needs of AI chips increase, it’s anticipated that HBM3 and HBM3e will become the dominant choices in 2024. In general, with a rise in demand and the higher average selling prices of HBM3 and HBM3e compared to older versions, the revenue from HBM is expected to experience significant growth in 2024.

(Image: Qualcomm)

 


2023-11-08

[News] Seizing the AI Trend! Revealing Samsung and Micron’s HBM Expansion Timetable

In a subdued environment for consumer electronic applications in the storage market, High Bandwidth Memory (HBM) technology is emerging as a new driving force, gaining significant attention from major players. Recent reports reveal that both Samsung and Micron are gearing up for substantial HBM production expansion.

Major Manufacturers Actively Investing in HBM

Recent reports indicate that Samsung has acquired certain buildings and equipment within the Cheonan facility of Samsung Display in South Korea to expand its HBM production capacity.

It is reported that Samsung plans to establish a new packaging line at the Cheonan facility for large-scale HBM production. The company has already spent 10.5 billion Korean won on the acquisition of the mentioned buildings and equipment, with an additional investment expected to range between 700 billion and 1 trillion Korean won.

Earlier, it was disclosed by Mr. Hwang Sang-jun, the Vice President of Samsung Electronics and Head of the DRAM Product and Technology Team, that Samsung has developed HBM3E with a speed of 9.8Gbps and plans to commence providing samples to customers.

Concurrently, Samsung is in the process of developing HBM4 with the objective of making it available by 2025. It is reported that Samsung Electronics is actively working on various technologies for HBM4, including non-conductive adhesive film (NCF) assembly techniques optimized for high-temperature thermal characteristics and hybrid bonding (HCB).

On November 6th, Micron Technology opened a new facility in Taichung. Micron has stated that this new facility will integrate advanced testing and packaging functions and will be dedicated to the mass production of HBM3E, along with other products. This expansion aims to meet the increasing demand across various applications such as artificial intelligence, data centers, edge computing, and cloud services.

Previously, Micron’s CEO, Sanjay Mehrotra, revealed that the company plans to commence substantial shipments of HBM3E in early 2024. Micron’s HBM3E technology is currently undergoing certification by NVIDIA. The initial HBM3E offerings will feature an 8-Hi stack design with a capacity of 24GB and a bandwidth exceeding 1.2TB/s.

Furthermore, Micron intends to introduce larger-capacity 36GB 12-Hi stack HBM3E in 2024. In an earlier statement, Micron anticipated that the new HBM technology would contribute “hundreds of millions” of dollars in revenue in 2024.

Shift Toward HBM3 Expected in 2024

According to TrendForce, the current mainstream technology in the HBM market is HBM2e. This specification is utilized by prominent players like NVIDIA with their A100 and A800, AMD with the MI200 series, and various custom system-on-chip designs by CSPs.

Simultaneously, in response to the evolving demand for AI accelerator chips, many manufacturers are planning to introduce new products based on HBM3e technology in 2024. It is anticipated that both HBM3 and HBM3e will become the dominant technologies in the market next year, catering to the requirements of AI accelerator chips.

Regarding the demand for different generations of HBM, TrendForce believes that the primary demand is shifting from HBM2e to HBM3 in 2023, with an anticipated demand ratio of approximately 50% and 39%, respectively. As the usage of HBM3-based accelerator chips continues to increase, the market demand is expected to see a substantial shift towards HBM3 in 2024.

It is anticipated that in 2024, HBM3 will surpass HBM2e, with an estimated share of 60%. This transition to HBM3 is expected to be accompanied by higher average selling prices (ASP), significantly boosting next year’s HBM revenue.


(Photo credit: Samsung)

2023-11-03

TrendForce Foresees China’s Mature Wafer Processes to Expand to 33% by 2027, Japan Secures Advanced Processes

The research institution TrendForce held its Annual Forecast 2024 Seminar on November 3, delving into discussions of global wafer foundry trends, AI applications, AI server dynamics, and the demand for High Bandwidth Memory (HBM).

Joanne Chiao, an analyst at TrendForce, observed that while AI servers have experienced robust growth over the past two years, AI chips account for just 4% of wafer consumption, limiting their impact on the overall wafer industry. Nevertheless, both advanced and mature processes offer business opportunities: the former benefits from the desire of companies like CSPs to develop customized chips, leading them to seek the assistance of design service providers, while the latter can venture into sectors such as power management ICs and I/O solutions.

Persisting US export restrictions continue to affect China’s foundries, causing delays in their expansion plans. Furthermore, the regionalisation of wafer foundry services is exacerbating issues related to uneven resource distribution.

Due to lackluster end-market demand and fierce market competition, the capacity utilization rate of 8-inch wafer foundries is expected to continue declining into the first quarter of the coming year. Inventory adjustments are underway in the fields of industrial control and automotive electronics. Chinese foundries are more willing to offer competitive prices and are outperforming their counterparts in Taiwan and Korea in terms of order performance.

In the realm of 12-inch wafer foundry services, success relies on technological leadership and exclusivity, and competition isn’t as intense as with 8-inch wafers. A moderate recovery is expected in the latter part of this year, driven by inventory replenishment, demand for the iPhone 15 and select Android smartphone brands, and the need for AI chips.

TrendForce indicates that, with the expansion of processes beyond 28nm, mature process capacity is expected to occupy less than 70% of the capacity of the top ten foundries by 2027. Under the pressure to transition towards mature processes, China is anticipated to account for 33% of mature process capacity by 2027, with the possibility of further increases.

It’s noteworthy that Japan is actively promoting the revival of its semiconductor industry and, through incentives for foreign companies establishing fabs, may secure 3% of advanced process capacity.

TrendForce’s analyst, Frank Kung, predicts that the shipment of Nvidia’s high-end GPU processors will exceed 1.5 million units this year, with a YoY growth rate of over 70%, expected to reach 90% by 2024. Starting from the latter half of this year, Nvidia’s high-end GPU market will transition primarily to H100. As for AMD, its high-end AI solutions are mainly targeted at CSPs and supercomputers. The AI server market, equipped with MI300, is expected to experience significant expansion in the latter half of this year.

In the 2023-2024 period, major CSPs are poised to become the primary drivers of AI server demand, with Microsoft, Google, and AWS ranking among the top three. Additionally, the robust demand for cloud-based AI training is expected to propel the growth of advanced AI chips, which may, in turn, stimulate growth in power management or high-speed transmission-related ICs in the future.

Lastly, concerning HBM, TrendForce’s senior research vice president, Avril Wu, mentioned that as Nvidia’s H100 gradually gains momentum, HBM3 is set to become the industry standard in the latter half of this year. With the launch of B100 next year, HBM3e is poised to replace HBM3 as the mainstream memory in the latter half of the following year. Overall, HBM plays a pivotal role in DRAM revenue, with expectations of an increase from 9% in 2023 to 18% in 2024, potentially leading to higher DRAM prices in the coming year.
(Image: TechNews)

2023-10-26

[News] Thanks to AI demand, SK hynix’s Q3 DRAM business turned profitable

SK hynix today reported financial results for the third quarter ended September 30, 2023. The company recorded revenues of 9.066 trillion won, an operating loss of 1.792 trillion won, and a net loss of 2.185 trillion won for the three-month period. The operating and net margins were negative 20% and negative 24%, respectively.
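The reported margins can be reproduced directly from the stated figures; a quick sanity check (all amounts in trillion won, rounded to whole percentages as in the article):

```python
# SK hynix Q3 2023 figures as reported, in trillion won
revenue = 9.066
operating_loss = -1.792
net_loss = -2.185

# Margin = loss / revenue, expressed as a whole percentage
operating_margin = round(operating_loss / revenue * 100)
net_margin = round(net_loss / revenue * 100)
print(operating_margin, net_margin)  # -20 -24
```

Both results match the negative 20% and 24% margins quoted in the report.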

After bottoming out in the first quarter, the business has been on a steady recovery track, helped by growing demand for products such as high-performance memory chips, the company said.

“Revenues grew 24%, while operating losses narrowed 38%, compared with the previous quarter, thanks to strong demand for high-performance mobile flagship products and HBM3, a key product for AI applications, and high-capacity DDR5,” the company said, adding that a turnaround of the DRAM business following two quarters of losses is particularly hopeful.

SK hynix attributed the growth in sales to increased shipments of both DRAM and NAND and a rise in the average selling price.

By product, shipments of DRAM increased 20% from three months earlier, thanks to strong sales of high-performance products for server applications such as AI, with the average selling price also recording a 10% rise. Shipments of NAND also rose, with high-capacity mobile products and solid state drive products taking the lead.

Following a turnaround, an improvement in the DRAM business is forecast to gain speed, backed by popularity of the generative AI technology, while there are looming signs of a steady recovery in the NAND space as well.

With the effects of production cuts by global memory providers starting to show, and customers placing new orders following efforts to reduce inventories, semiconductor prices are starting to stabilize, the company said.

To meet new demand, SK hynix plans to increase investments in high-value flagship products such as HBM, DDR5, and LPDDR5. The company will increase the share of products manufactured on the 1anm and 1bnm nodes, the fourth and fifth generations of the 10nm process, respectively, while increasing investments in HBM and TSV.

(Image: SK hynix)
