News
Amidst the AI frenzy, HBM has become a focal point for major semiconductor manufacturers, and another memory giant is making new moves.
According to Korean media “THE ELEC”, Samsung Electronics plans to establish an HBM development office to enhance its competitiveness in the HBM market. The size of the team has not yet been determined, but Samsung’s HBM task force is expected to be upgraded.
The report indicates that if the task force is upgraded to a development office, Samsung will then establish specialized design and solution teams for HBM development. The head of the development office will be appointed from among vice president-level personnel.
In terms of research and development progress, all three major manufacturers have advanced to HBM3e.
Samsung released its first 36GB HBM3e 12H DRAM in February, currently the company’s largest-capacity HBM product. Samsung has begun providing HBM3e 12H samples to customers, with mass production expected to commence in the second half of this year.
Meanwhile, Micron Technology has announced the commencement of mass production of high-bandwidth memory “HBM3e,” which will be utilized in NVIDIA’s latest AI chip, the “H200” Tensor Core graphics processing unit (GPU). The H200 is scheduled for shipment in the second quarter of 2024.
Another major player, SK Hynix, as per Business Korea, plans to begin mass production of HBM3e in the first half of 2024.
In terms of production capacity, both SK Hynix and Micron Technology have previously disclosed that their HBM production capacity is fully allocated, indicating strong market demand for HBM and reaffirming manufacturers’ determination to expand production.
As per a previous report by Bloomberg, SK Hynix plans to invest an additional USD 1 billion in advanced packaging for HBM. The company stated in its recent financial report that it intends to increase capital expenditures in 2024 and shift its production focus to high-end memory products such as HBM.
SK Hynix’s HBM capacity is expected to more than double compared to last year. On the demand side, over 60% of future HBM demand is anticipated to come from AI servers, its primary application.
(Photo credit: Samsung)
News
The surge in demand for NVIDIA’s AI processors has made High Bandwidth Memory (HBM) a key product that memory giants are eager to develop. However, according to South Korean media outlet DealSite, as cited by Wccftech on March 4th, the complex architecture of HBM has resulted in low yields, making it difficult to meet NVIDIA’s testing standards and raising concerns about limited production capacity.
The report further points out that HBM manufacturers like Micron and SK Hynix are grappling with low yields as they compete fiercely to pass NVIDIA’s quality tests for its next-generation AI GPUs.
The yield of HBM is closely tied to the complexity of its stacking architecture, which involves multiple memory layers and Through-Silicon Via (TSV) technology for inter-layer connections. These intricate techniques increase the probability of process defects, potentially leading to lower yields compared to simpler memory designs.
Furthermore, if any die in the stack is defective, the entire stack is discarded, resulting in inherently low production yields. The source cited by Wccftech indicates that the overall yield of HBM currently stands at around 65%, and that attempts to improve yield may reduce production volume.
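To see why tall stacks depress yield, here is a minimal back-of-the-envelope sketch in Python. The 99% per-die and 98% per-bond figures are purely hypothetical illustrations, not numbers from the report; the model simply assumes each die and each TSV bonding step must independently succeed for the stack to survive.

```python
# Illustrative HBM stack-yield model: a stack is usable only if every
# DRAM die and every TSV bonding step is defect-free, so per-step
# yields multiply across the stack. (Per-step yields are hypothetical.)

def stack_yield(die_yield: float, bond_yield: float, layers: int) -> float:
    """Probability that an entire HBM stack survives assembly."""
    return (die_yield ** layers) * (bond_yield ** (layers - 1))

# Hypothetical inputs: 99% per-die yield, 98% per-bond yield.
for layers in (8, 12, 16):
    print(f"{layers}-high stack: {stack_yield(0.99, 0.98, layers):.1%}")
```

With these illustrative inputs, an 8-high stack comes in around 80% while a 16-high stack drops to roughly 63%, in the neighborhood of the ~65% overall figure cited above, even though every individual step succeeds 98-99% of the time.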
Micron announced on February 26th the commencement of mass production of High Bandwidth Memory “HBM3e,” to be used in NVIDIA’s latest AI chip, the “H200” Tensor Core GPU. The H200 is scheduled for shipment in the second quarter of 2024, replacing the H100 as NVIDIA’s most powerful chip.
Meanwhile, Kim Ki-tae, Vice President of SK Hynix, stated in an official blog post on February 21st that while external uncertainties persist, the memory market is expected to gradually heat up this year, citing the recovery in product demand from global tech giants. Additionally, the adoption of AI in devices such as PCs and smartphones is expected to increase demand not only for HBM3e but also for products like DDR5 and LPDDR5T.
Kim Ki-tae also pointed out that the company’s HBM inventory for this year has already sold out. Although 2024 has only just begun, the company has already started preparations for 2025 to maintain its market-leading position.
Per a previous TrendForce press release, the three major HBM manufacturers held the following market shares in 2023: SK Hynix and Samsung each held around 46-49%, while Micron held roughly 4-6%.
(Photo credit: SK Hynix)
News
U.S. memory giant Micron Technology has started mass production of high-bandwidth memory “HBM3e,” which will be utilized in NVIDIA’s latest AI chips.
Micron stated on February 26th that its HBM3e consumes 30% less power than competing products, meeting the demands of generative AI applications. Micron’s 24GB 8H HBM3e will be part of NVIDIA’s “H200” Tensor Core GPUs, ending the exclusivity SK Hynix previously enjoyed as the sole HBM supplier for the H100.
Per TrendForce’s earlier research into the HBM market, NVIDIA plans to diversify its HBM suppliers for more robust and efficient supply chain management. On the HBM3e timeline, Micron provided its 8hi (24GB) samples to NVIDIA by the end of July, SK Hynix in mid-August, and Samsung in early October.
As per a previous report from NVIDIA last year, the H200 is scheduled to ship in the second quarter of 2024, replacing the H100 as its most powerful chip in terms of computing power. Micron’s February 26th press release further confirmed that it will begin shipping its 24GB 8H HBM3e in the second calendar quarter of 2024.
In the same press release, Micron’s Chief Business Officer, Sumit Sadana, also stated that “AI workloads are heavily reliant on memory bandwidth and capacity, and Micron is very well-positioned to support the significant AI growth ahead through our industry-leading HBM3e and HBM4 roadmap, as well as our full portfolio of DRAM and NAND solutions for AI applications.”
HBM is one of Micron’s most profitable products, due in part to its complex manufacturing process. Micron previously predicted that HBM revenue could reach hundreds of millions of dollars in 2024, with further growth expected in 2025.
Micron has further announced that it will share more about its industry-leading AI memory portfolio and roadmaps at NVIDIA’s GPU Technology Conference (GTC) on March 18th.
Previously, Micron indicated in a December 2023 conference call with investors that generative AI could usher in a multi-year growth period for the company, with projected memory industry revenue reaching historic highs in 2025. They also mentioned at the time that HBM3e, developed for NVIDIA’s H200, had entered its final quality control phase.
(Photo credit: Micron)
News
The top three leaders in the HBM sector (Samsung, SK Hynix, and Micron) are currently undergoing unprecedented expansion. Below is an overview of the progress each of these giants has made in HBM:
Samsung Electronics has been expanding its HBM3 supply since the fourth quarter of 2023. Earlier, internal communications at Samsung during the fourth quarter of 2023 indicated that samples of the next-generation HBM3e with an 8-layer stack had been provided to customers, with mass production planned to commence in the first half of this year.
Han Jin-man, Executive Vice President in charge of Samsung’s semiconductor business in the United States, stated at CES 2024 this year that Samsung’s HBM chip production volume will increase 2.5 times compared to last year and is projected to double again next year.
Samsung officials also revealed that the company plans to raise maximum HBM production to 150,000-170,000 units per month before the fourth quarter of this year in a bid to compete for the HBM market in 2024.
Previously, Samsung Electronics spent KRW 10.5 billion to acquire Samsung Display’s plant and equipment in Cheonan, South Korea, to expand HBM capacity. It also plans to invest KRW 700 billion to 1 trillion in building new packaging lines.
According to the latest report from Korean media Moneytoday on February 20th, SK Hynix will commence mass production of the world’s first fifth-generation high-bandwidth memory, HBM3e, in March this year. The company plans to supply the first batch of products to NVIDIA within the next month.
However, SK Hynix noted that it “cannot confirm any details related to its partner.”
In its financial report, SK Hynix indicated plans to increase capital expenditure in 2024, with a focus on high-end storage products such as HBM. The HBM production capacity is expected to more than double compared to last year.
Previously, SK Hynix forecast that its annual HBM shipments would reach 100 million units by 2030. As a result, the company has decided to allocate approximately KRW 10 trillion (approximately USD 7.6 billion) in CAPEX for 2024, an increase of 43% to 67% over the KRW 6 to 7 trillion projected for 2023.
The focus of the expansion is on constructing and expanding factories. In June of last year, Korean media reported that SK Hynix was preparing to invest in backend process equipment to expand HBM3 packaging capacity at its Icheon plant. By the end of this year, the scale of backend process equipment at the plant is expected to nearly double.
Furthermore, SK Hynix is also set to construct a state-of-the-art manufacturing facility in Indiana, USA. According to the Financial Times, this South Korean chip manufacturer will produce HBM stacks at this facility, which will be used for NVIDIA GPUs produced by TSMC.
Micron holds a relatively low share in the global HBM market. In order to narrow this gap, Micron has placed a significant bet on its next-generation product, HBM3e.
Sanjay Mehrotra, CEO of Micron, stated, “Micron is in the final stages of qualifying our industry-leading HBM3e to be used in NVIDIA’s next-generation Grace Hopper GH200 and H200 platforms.”
Micron plans to begin mass shipments of HBM3e memory in early 2024. Mehrotra emphasized that their new product has garnered significant interest across the industry, implying that NVIDIA may not be the sole customer ultimately utilizing Micron’s HBM3e.
In this competition, where no first-mover advantage has yet been established, Micron seems to be betting on the still-undetermined standard for next-generation HBM4. Official announcements show that Micron has disclosed a next-generation HBM product tentatively named HBM Next, which is expected to offer capacities of 36GB and 64GB in various configurations.
The Korean and American memory manufacturers have distinct strategies for next-generation HBM: unlike Samsung and SK Hynix, Micron does not intend to integrate HBM and logic chips into a single chip.
Micron may make the case to AMD, Intel, and NVIDIA that while combined chips such as an HBM-GPU package can achieve faster memory access, relying on a single integrated chip carries greater risk.
As per TrendForce, HBM4 is planned to launch in 2026, with specifications and performance expected to be further optimized for NVIDIA and other CSPs (Cloud Service Providers) in future product applications.
As specifications evolve toward higher speeds, HBM4 will mark the first time the base die of HBM, also known as the logic die, adopts a 12nm process wafer. This die will be provided by foundries, necessitating collaboration between foundries and memory manufacturers to integrate each HBM product.
Furthermore, as customer demands for computational efficiency increase, HBM4 is expected to evolve beyond the existing 12hi (12-layer) stacks to 16hi (16-layer) configurations. Higher layer counts are also expected to drive demand for new stacking methods such as hybrid bonding. HBM4 12hi products are slated for release in 2026, with 16hi products expected to debut in 2027.
(Photo credit: Samsung)
News
South Korean memory giant SK Hynix has confirmed record-breaking sales of High Bandwidth Memory (HBM) over the past few months, driving profitability in the fourth quarter and predicting an industry-wide recovery.
According to Wccftech, SK Hynix Vice President Kim Ki-tae stated on February 21st that the demand for HBM, as an AI memory solution, is experiencing explosive growth as generative AI services become increasingly diverse and continue to evolve.
The report cited Kim Ki-tae, who stated, “HBM, with its high-performance and high-capacity characteristics, is a monumental product that shakes the conventional wisdom that memory semiconductors are only a part of the overall system.”
Kim Ki-tae also mentioned that despite ongoing external uncertainties, the memory market is expected to gradually warm up in 2024. This is attributed to the recovery in product demand from global tech giants.
Moreover, the adoption of AI in devices such as PCs and smartphones is expected to boost sales of HBM3e and potentially drive up demand for products like DDR5 and LPDDR5T.
Kim Ki-tae emphasized that their HBM products have already sold out for this year. Although it’s just the beginning of 2024, the company has already begun gearing up for 2025.
SK Hynix Plans to Establish Advanced Packaging Plant in the US
SK Hynix is reportedly set to establish an advanced packaging plant in Indiana, as the US government aims to reduce dependence on advanced chips from Taiwan.
As per the Financial Times on February 1st, citing unnamed sources, SK Hynix’s rumored new packaging facility in Indiana may specialize in 3D stacking processes to produce HBM, which will also be integrated into NVIDIA’s GPUs.
Currently, SK Hynix produces HBM in South Korea and then ships it to Taiwan for integration into NVIDIA GPUs by TSMC.
(Photo credit: SK Hynix)