HBM3e


2024-03-27

[News] SK Hynix Reportedly Plans to Invest USD 4 Billion in Advanced Packaging Fab in Indiana

SK Hynix is reportedly planning to build an advanced packaging fab worth USD 4 billion in West Lafayette, Indiana. According to a report from The Wall Street Journal, the fab is expected to commence operations by 2028, creating up to 1,000 jobs. The initiative may receive support in the form of state and federal tax incentives.

As reported by The Wall Street Journal and Tom’s Hardware, SK Hynix’s investment aims to enhance its capabilities in advanced semiconductor packaging, with a particular emphasis on manufacturing High-Bandwidth Memory (HBM).

With a potential capital expenditure of USD 4 billion, per Tom’s Hardware, the project would rank among the largest advanced packaging facilities in the world if it proceeds. Government support is therefore seen as crucial, with tax incentives expected at both the state and federal levels in the US.

SK Hynix, an HBM supplier for NVIDIA, is eyeing enhanced capabilities in advanced chip packaging, which is particularly crucial for manufacturing HBM. NVIDIA’s recently announced Blackwell B200, with each GPU utilizing eight HBM3e stacks, has further underscored SK Hynix’s role in the AI industry’s critical component supply chain.

Under the recent CHIPS and Science Act, the US government allocated USD 8.5 billion to Intel to enhance US semiconductor competitiveness. SK Hynix’s plan to build a fab in Indiana would be another significant stride in fostering US semiconductor growth.

However, US subsidies for chip manufacturing and packaging have been slow to arrive, with only three American companies receiving awards so far: BAE Systems, GlobalFoundries, and Microchip Technology.

Reportedly, SK Hynix’s plan is still more a statement of intent than a finalized deal, and whether it proceeds to the construction phase remains to be seen.

Read more

(Photo credit: SK Hynix)

Please note that this article cites information from The Wall Street Journal and Tom’s Hardware.

2024-03-22

[News] Micron’s Financial Report Reveals High Demand for HBM in 2025, Capacity Nears Full Allocation

Micron, the major US memory manufacturer, has benefited from AI demand, swinging from loss to profit last quarter (ended in February) and issuing an optimistic financial forecast.

During its earnings call on March 20th, Micron CEO Sanjay Mehrotra stated that the company’s HBM (High Bandwidth Memory) capacity for this year has been fully allocated, with most of next year’s capacity already booked. HBM products are expected to generate hundreds of millions of dollars in revenue for Micron in the current fiscal year.

Per a report from the Washington Post, Micron expects revenue for the current quarter (ending in May) to be between USD 6.4 billion and USD 6.8 billion, with a midpoint of USD 6.6 billion, surpassing Wall Street’s expectation of USD 6 billion.

Last quarter, Micron’s revenue surged 58% year-on-year to USD 5.82 billion, exceeding Wall Street’s expectation of USD 5.35 billion. The company posted a net profit of USD 790 million, a turnaround from a loss of USD 2.3 billion in the same period last year. Excluding one-time charges, Micron’s EPS reached USD 0.42. Mehrotra reportedly attributed the return to profitability to the company’s efforts on pricing, product mix, and operational costs.

Over the past year, memory manufacturers have cut production, while the explosive growth of the AI industry has driven a surge in demand for NVIDIA’s AI processors, benefiting upstream memory manufacturers.

Mehrotra stated, “We believe Micron is one of the biggest beneficiaries in the semiconductor industry of the multiyear opportunity enabled by AI.”

Micron projects 2024 bit-demand growth of close to 15% for DRAM and in the mid-teens for NAND Flash, while 2024 bit-supply growth for both is expected to fall short of demand growth.

Micron uses 176- and 232-layer processes for over 90% of its NAND Flash production. HBM3e, meanwhile, is expected to begin contributing to revenue in the second quarter.

Per a previous TrendForce press release, the three major original HBM manufacturers held market shares as follows in 2023: SK Hynix and Samsung were both around 46-49%, while Micron stood at roughly 4-6%.

In terms of capital expenditure, the company maintains a budget of USD 7.5 to 8 billion (after accounting for US government subsidies), primarily allocated to expanding HBM-related capacity.

Micron stated that owing to HBM’s more complex packaging, producing a given number of HBM bits consumes roughly three times the wafer capacity of DDR5. This indirectly constrains capacity for non-HBM products, thereby tightening overall DRAM market supply.
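
As a rough illustration of that trade-off, the minimal Python sketch below estimates how shifting wafer starts to HBM reduces total shipped bits. Only the 3:1 trade ratio comes from Micron’s remarks; the 15% allocation share and the baseline figure are hypothetical placeholders.

# Back-of-envelope model of the HBM/DDR5 wafer trade-off described above.
# Only the 3:1 trade ratio comes from Micron's remarks; the allocation
# share and baseline capacity are hypothetical placeholders.

TRADE_RATIO = 3.0  # wafer capacity needed per HBM bit vs. per DDR5 bit

def shipped_bits(wafer_capacity: float, hbm_share: float) -> float:
    """Total bits shipped when a fraction of wafer starts goes to HBM.

    wafer_capacity: bits shipped if every wafer were used for DDR5
    hbm_share: fraction of wafer starts allocated to HBM (0.0-1.0)
    """
    hbm = wafer_capacity * hbm_share / TRADE_RATIO  # HBM yields 1/3 the bits
    ddr5 = wafer_capacity * (1.0 - hbm_share)       # remainder ships as DDR5
    return hbm + ddr5

baseline = shipped_bits(100.0, 0.0)   # all-DDR5 reference
shifted = shipped_bits(100.0, 0.15)   # hypothetical 15% of starts to HBM
print(f"Shipped bits vs. all-DDR5 baseline: {shifted / baseline:.0%}")  # 90%

In this toy scenario, moving 15% of wafer starts to HBM removes about 10% of total DRAM bit supply, which is the mechanism behind the tighter non-HBM supply Micron describes.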

Regarding Micron’s growth outlook for end markets in 2024: the annual growth rate for the data center segment has been revised upward from mid-single digits to mid-to-high single digits, while the PC segment’s annual growth rate remains at low-to-mid single digits. AI PCs are expected to capture meaningful market share in 2025. The annual growth rate for the mobile phone segment has been revised upward from modest growth to low-to-mid single digits.

Read more

(Photo credit: Micron)

Please note that this article cites information from Micron and Washington Post.

2024-03-19

[News] TSMC’s 4nm Process Powers NVIDIA’s Blackwell Architecture GPU, AI Performance Surpasses Previous Generations by Multiples

Chip giant NVIDIA kicked off its annual GPU Technology Conference (GTC) today, with CEO Jensen Huang announcing the launch of its new artificial intelligence chip, the Blackwell B200.

According to a report from TechNews, the new Blackwell architecture GPU is massive: crafted with TSMC’s 4nm-class (4NP) process technology, it integrates two independently manufactured dies totaling 208 billion transistors, stitched together like a zipper through a high-bandwidth die-to-die interconnect.

NVIDIA connects the two dies with a 10 TB/s chip-to-chip link, officially termed the NV-HBI interface. The Blackwell complex’s NVLink 5.0 interface, in turn, provides 1.8 TB/s of bandwidth, double the speed of the NVLink 4.0 interface on the previous-generation Hopper architecture GPU.

As per a report from Tom’s Hardware, a single B200 GPU can reach 20 petaflops of AI computing performance, whereas the previous-generation H100 topped out at only 4 petaflops. The B200 is also paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.
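
Those headline memory figures are consistent with the eight HBM3e stacks per GPU mentioned earlier; a quick sanity check in Python (the per-stack figures are derived from the reported totals, not official specifications):

# Sanity check: divide the reported B200 memory totals across the eight
# HBM3e stacks mentioned earlier. Per-stack values are derived, not official.

STACKS = 8                    # HBM3e stacks per B200, as reported
TOTAL_CAPACITY_GB = 192       # reported total HBM3e capacity
TOTAL_BANDWIDTH_TBS = 8.0     # reported peak memory bandwidth

per_stack_gb = TOTAL_CAPACITY_GB / STACKS      # -> 24 GB per stack
per_stack_tbs = TOTAL_BANDWIDTH_TBS / STACKS   # -> 1.0 TB/s per stack

print(f"Each stack: {per_stack_gb:.0f} GB at {per_stack_tbs:.1f} TB/s")

That works out to 24 GB and roughly 1 TB/s per stack, in line with the 24GB-class HBM3e stacks entering mass production at the time.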

NVIDIA’s HBM supplier, South Korean chipmaker SK Hynix, also issued a press release today announcing the start of mass production of its new high-performance DRAM product, HBM3e, with shipments set to begin at the end of March.


Recently, global tech companies have been investing heavily in AI, driving ever-higher demands on AI chip performance. SK Hynix points out that HBM3e is the optimal product to meet these demands. Because AI memory operates at extremely high speeds, efficient heat dissipation is crucial: HBM3e incorporates the latest Advanced MR-MUF technology for heat-dissipation control, delivering a 10% improvement in cooling performance over the previous generation.

Per SK Hynix’s press release, Sungsoo Ryu, the head of HBM Business at SK Hynix, said that mass production of HBM3e has completed the company’s lineup of industry-leading AI memory products.

“With the success story of the HBM business and the strong partnership with customers that it has built for years, SK hynix will cement its position as the total AI memory provider,” he stated.

Read more

(Photo credit: NVIDIA)

Please note that this article cites information from TechNews, Tom’s Hardware and SK Hynix.

2024-03-15

[News] Following February’s Advance Production of HBM3e, Micron Reportedly Secures Order from NVIDIA for H200

According to a report from the South Korean newspaper “Korea Joongang Daily,” following Micron’s initiation of mass production of its latest high-bandwidth memory, HBM3e, in February 2024, the company has recently secured an order from NVIDIA for the H200 AI GPU. NVIDIA’s upcoming H200 processor will reportedly utilize the latest HBM3e, which is more powerful than the HBM3 used in the H100 processor.

The same report further indicates that Micron secured the H200 order thanks to its adoption of 1b-nanometer process technology for its HBM3e, equivalent to the 12-nanometer technology SK Hynix uses to produce HBM. By contrast, Samsung Electronics currently employs 1a-nanometer technology, equivalent to 14-nanometer technology, reportedly leaving it behind Micron and SK Hynix.

The report from Commercial Times indicates that Micron’s ability to secure the NVIDIA order for H200 is attributed to the chip’s outstanding performance, energy efficiency, and seamless scalability.

As per a previous report from TrendForce, starting in 2024, the market’s attention will shift from HBM3 to HBM3e, with expectations for a gradual ramp-up in production through the second half of the year, positioning HBM3e as the new mainstream in the HBM market.

TrendForce reports that SK Hynix led the way with its HBM3e validation in the first quarter, closely followed by Micron, which plans to start distributing its HBM3e products toward the end of the first quarter, in alignment with NVIDIA’s planned H200 deployment by the end of the second quarter.

Samsung, slightly behind in sample submissions, is expected to complete its HBM3e validation by the end of the first quarter, with shipments rolling out in the second quarter. With Samsung having already made significant strides in HBM3 and its HBM3e validation expected to be completed soon, the company is poised to significantly narrow the market share gap with SK Hynix by the end of the year, reshaping the competitive dynamics in the HBM market.

Read more

(Photo credit: Micron)

Please note that this article cites information from Korea Joongang Daily and Commercial Times.

2024-03-15

[News] Three-way Contest for HBM Dominance, Uncertainties Surrounding China’s Supply Chain Involvement

With numerous cloud computing companies and large-scale AI model developers investing heavily in AI computing infrastructure, demand for AI processors is rising rapidly. As per a report from IJIWEI, demand for HBM (High Bandwidth Memory), a key component of these processors, has been rising as well.

The surge in demand for computing power has in turn created a wave of opportunities for memory. Yet across the entire HBM industry chain, only a limited number of Chinese companies are able to enter the field.

Facing significant technological challenges but vast prospects, Chinese suppliers have a strong imperative, whether for supply-chain self-sufficiency or for market competition, to accelerate the pace of catching up.

HBM Demand Grows Against the Trend, Dominated by Three Giants

The first TSV-based HBM product debuted in 2014, but it was not until the release of ChatGPT in late 2022 that robust demand for AI servers drove rapid iteration of HBM technology, progressing through HBM1, HBM2, HBM2e, HBM3, and HBM3e.

The fourth-generation HBM3 has been mass-produced and deployed, with significant improvements in bandwidth, stack height, capacity, I/O speed, and more compared to the first generation. Currently, only three memory giants are capable of mass-producing HBM: SK Hynix, Samsung Electronics, and Micron.

According to a previous TrendForce press release, the three major original HBM manufacturers held market shares as follows in 2023: SK Hynix and Samsung were both around 46-49%, while Micron stood at roughly 4-6%.

In 2023, the primary applications in the market were HBM2, HBM2e, and HBM3, with the penetration rate of HBM3 increasing in the latter half of the year due to the push from NVIDIA’s H100 and AMD’s MI300.

According to TrendForce’s report, SK Hynix led the way with its HBM3e validation in the first quarter, closely followed by Micron, which plans to start distributing its HBM3e products toward the end of the first quarter, in alignment with NVIDIA’s planned H200 deployment by the end of the second quarter.

Samsung, slightly behind in sample submissions, is expected to complete its HBM3e validation by the end of the first quarter, with shipments rolling out in the second quarter.

Driven by market demand, major players SK Hynix, Samsung, and Micron are stepping up efforts to expand production capacity. SK Hynix revealed in February that its HBM output had already been fully booked for the year, and that it is making preparations for 2025 in order to maintain its market leadership.

Samsung, aiming to compete in the 2024 HBM market, reportedly plans to raise its maximum production capacity to 150,000-170,000 units per month before the end of the fourth quarter of this year. Samsung also previously invested KRW 10.5 billion to acquire Samsung Display’s factory and equipment in Cheonan, South Korea, with the aim of expanding HBM production capacity.

Micron Technology CEO Sanjay Mehrotra recently revealed that Micron’s HBM production capacity for 2024 is expected to be fully allocated.

Although the three major HBM suppliers remain focused on iterating HBM3e, where there is still room to improve single-die DRAM density and stacking layers, the development of HBM4 has already been put on the agenda.

TrendForce previously predicted that HBM4 will mark the first use of a 12nm process wafer for the bottommost logic die (base die), to be supplied by foundries. This advancement signifies a collaborative effort between foundries and memory suppliers on each HBM product, reflecting the evolving landscape of high-speed memory technology.

Continuous Surge in HBM Demand and Prices, Local Supply Chains in China Catching Up

In the face of a vast market opportunity, beyond the three giants’ continued push in research and production, some second- and third-tier Chinese DRAM manufacturers have also entered the HBM race. As locally produced AI processors improve, the demand for an independent HBM supply chain in China has grown increasingly urgent.

Top global manufacturers run DRAM processes at the 1-alpha and 1-beta nodes, while China’s DRAM processes sit at the 25-17nm level. Chinese DRAM processes are thus approaching overseas ones, and with local advanced packaging technology resources and GPU customer resources in place, there is strong demand for HBM localization. Local DRAM manufacturers in China are therefore reportedly expected to break into HBM in the future.

It is worth noting that the research and manufacturing of HBM involve complex processes and technical challenges, including wafer-level packaging, testing technology, design compatibility, and more. CoWoS is currently the mainstream packaging solution for AI processors, and AI chips built on CoWoS typically integrate HBM as well.

CoWoS and HBM involve processes such as TSV (Through-Silicon Via), bumps, microbumps, and RDL (Redistribution Layer). Among these, TSV accounts for the largest share of HBM’s 3D packaging cost, at close to 30%.

Currently, only a few leading Chinese packaging companies, such as JCET Group, Tongfu Microelectronics, and SJSemi, possess the technology (such as TSV) and equipment required to support HBM production.

However, despite these efforts, the number of Chinese companies truly involved in the HBM industry chain remains limited, with most focusing on upstream materials.

With GPU procurement restricted, breakthroughs in China’s AI processors are urgently needed, both for self-sufficiency and for market competition. Synchronized breakthroughs in HBM from Chinese manufacturers are therefore crucial as well.

Read more

(Photo credit: SK Hynix)

Please note that this article cites information from IJIWEI.

