News
After its 8-Hi HBM3e entered mass production in February, Micron officially introduced its 12-Hi HBM3e memory stacks on Monday, which feature a 36 GB capacity, according to a report by Tom’s Hardware. The new products are designed for cutting-edge processors used in AI and high-performance computing (HPC) workloads, including NVIDIA’s H200 and B100/B200 GPUs.
The achievement puts the US memory chip giant nearly on par with the current HBM leader, SK hynix. Citing remarks made by Justin Kim, president and head of the company’s AI Infra division, at SEMICON Taiwan last week, another report by Reuters notes that SK hynix is set to begin mass production of its 12-Hi HBM3e chips by the end of this month.
Samsung, on the other hand, is said to have passed NVIDIA’s quality tests for shipments of 8-Hi HBM3e memory, while the company is still working on the verification of its 12-Hi HBM3e.
Micron’s 12-Hi HBM3e memory stacks, according to Tom’s Hardware, feature a 36GB capacity, a 50% increase over the previous 8-Hi models, which had 24GB. This expanded capacity enables data centers to handle larger AI models, such as Meta AI’s Llama 2, with up to 70 billion parameters on a single processor. In addition, this capability reduces the need for frequent CPU offloading and minimizes communication delays between GPUs, resulting in faster data processing.
In terms of performance, according to Tom’s Hardware, Micron’s 12-Hi HBM3e stacks deliver over 1.2 TB/s of memory bandwidth. Despite offering 50% more memory capacity, Micron’s 12-Hi HBM3e reportedly consumes less power than competing 8-Hi HBM3e stacks.
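These headline figures are easy to sanity-check with simple arithmetic. The sketch below is a rough back-of-the-envelope illustration; the FP16 weight size (2 bytes per parameter), the 1024-bit HBM interface width, and the four-stacks-per-processor configuration are illustrative assumptions, not figures from the report.

```python
# Back-of-the-envelope check on the capacity and bandwidth figures above.
# Assumptions (illustrative, not from the report): FP16 weights at
# 2 bytes per parameter, a 1024-bit HBM interface, and an accelerator
# carrying four HBM stacks.

GB = 10**9  # decimal gigabyte, as used in capacity marketing figures

# Capacity: a 70B-parameter model in FP16 needs ~140 GB for weights alone.
params = 70e9
weight_bytes = params * 2  # 2 bytes per FP16 parameter
print(f"Model weights: {weight_bytes / GB:.0f} GB")  # -> 140 GB

# Four 12-Hi stacks (4 x 36 GB = 144 GB) can hold those weights on one
# processor, whereas four 8-Hi stacks (4 x 24 GB = 96 GB) cannot.
for layers, stack_gb in ((12, 36), (8, 24)):
    total_gb = 4 * stack_gb
    verdict = "fits" if total_gb * GB >= weight_bytes else "does not fit"
    print(f"4 x {layers}-Hi stacks = {total_gb} GB: model {verdict}")

# Bandwidth: 1.2 TB/s per stack across a 1024-bit interface implies a
# per-pin data rate of roughly 9.4 Gb/s.
bandwidth_bytes_per_s = 1.2e12
pins = 1024
pin_rate_gbps = bandwidth_bytes_per_s * 8 / pins / 1e9
print(f"Per-pin data rate: {pin_rate_gbps:.1f} Gb/s")
```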
Regarding the future roadmap of HBM, Micron is said to be working on its next-generation memory solutions, including HBM4 and HBM4e. These upcoming memory technologies are set to further enhance performance, solidifying Micron’s position as a leader in addressing the increasing demand for advanced memory in AI processors, such as NVIDIA’s GPUs built on the Blackwell and Rubin architectures, the report states.
(Photo credit: Micron)
News
SK hynix President Justin Kim shared insights on the company’s current memory products and HBM-related offerings in a speech titled “Unleashing the Possibilities of AI Memory Technology.” Per a report from TechNews, he announced at SEMICON Taiwan that the company would begin mass production of 12-stack HBM3e by the end of this month, marking a pivotal moment in the HBM race.
He also stated that AI development is only at its first stage and is expected to advance to a fifth stage, where AI will interact with humans through intellect and emotion. Kim outlined AI’s key challenges, including power, heat dissipation, and memory bandwidth requirements.
The biggest challenge at present, according to Kim, is power shortage, with data centers expected to need twice the power they consume today. Relying solely on renewable energy will not meet this demand, and increased power use will also generate more heat, requiring more efficient heat dissipation solutions.
Thus, SK hynix is working on AI memory that is more energy-efficient, lower in power consumption, and has greater capacity, while also offering solutions tailored to different applications.
Kim then shared the latest progress on HBM3e, noting that SK hynix was the first supplier to produce 8-layer HBM3e and will begin mass production of 12-layer HBM3e by the end of the month. SK hynix also introduced its latest DIMM, enterprise SSD (QLC eSSD), LPDDR5T, LPDDR6, and GDDR7 products.
Regarding technology development, Kim highlighted that HBM4 will be the company’s first product whose base die is built with logic process technology, combining SK hynix’s advanced HBM technology with TSMC’s cutting-edge manufacturing to achieve unparalleled performance. Mass production schedules will be aligned with customer demands.
On a global scale, Kim announced the establishment of a new facility in Yongin, South Korea, with plans to begin mass production in 2027, positioning Yongin as one of the largest and most advanced semiconductor hubs.
Moreover, SK hynix will invest in a new plant in Indiana, USA, which is expected to start operations in 2028 and will focus on advanced HBM packaging.
Finally, Kim stated that SK hynix will concentrate on its AI business and looks to build AI infrastructure with SK Group. This includes integrating power, software, glass substrates, and immersion cooling technology, as the company works to become a core player in the ecosystem, overcoming challenges with partners to achieve its goals in the AI era.
News
Among the memory giants accelerating their development of next-gen HBM amid the AI boom, SK hynix, NVIDIA’s major HBM supplier, is at the forefront as it dominates the market. According to Kangwook Lee, Senior Vice President of Packaging at SK hynix, while certain startups may choose to forgo HBM in their AI chip designs due to cost considerations, high-performance computing products still require HBM, a report by TechNews notes.
Lee’s appearance marks the first time SK hynix has delivered a keynote speech at SEMICON Taiwan; he gave a presentation on September 3rd at the Heterogeneous Integration Global Summit, sharing the company’s observations on future HBM trends. Here are the key takeaways compiled by TechNews.
Customized HBM Will Be the Future
Citing Lee’s remarks, the report states that customization will be a crucial trend in the HBM sector. Lee further noted that the major difference between standard and customized HBM lies in the base logic die, into which customers’ IP is integrated. The two categories of HBM, though, share similar core dies.
TrendForce has also predicted that the HBM industry will become more customization-oriented. Unlike other DRAM products, HBM will increasingly break away from the standard DRAM framework in terms of pricing and design, turning to more specialized production.
SK hynix has been collaborating with TSMC to develop the sixth generation of HBM products, known as HBM4, which is expected to enter production in 2026. Unlike previous generations, which were based on SK hynix’s own process technology, HBM4’s base die will leverage TSMC’s advanced logic process, which is anticipated to significantly enhance the performance of HBM products while enabling the addition of more features.
SK hynix: Chiplet to Be Applied Not Only to HBM But Also to SSDs
Regarding the challenges of HBM in the future, Lee mentioned that there are many obstacles in packaging and design. In terms of packaging, the main challenge is the limitation on the number of stacked layers.
According to Lee, SK hynix is particularly interested in directly integrating logic chips with HBM stacks. Customers, meanwhile, are also showing interest in 3D System-in-Package (3D SiP) technology. In sum, 3D SiP, memory bandwidth, alignment with customer needs, and collaboration will be among the challenges going forward.
Per a report by Korean media outlet TheElec, SK hynix intends to integrate chiplet technology into its memory controllers over the next three years to improve cost management, meaning that parts of the controller would be manufactured on advanced nodes while other sections use legacy nodes.
In response, Lee stated that this technology will be used not only for HBM but also for SSD SoC controllers.
When asked whether some startups might choose to forgo HBM in AI chip design due to cost considerations, Lee responded that it largely depends on the product application. Some companies find HBM too expensive and may seek alternative solutions without it; high-performance computing products, on the other hand, still require HBM.
(Photo credit: SK hynix)
News
According to a report from Korean media outlet BusinessKorea, Rebellions’ CTO Oh Jin-wook announced that the company will adjust its production plans, bypassing the initially planned Rebel ‘Single’ product to focus on the mass production of the Rebel Quad AI chip by the end of the year.
This chip, manufactured using Samsung’s 4nm process, will be equipped with four of Samsung’s 12-stack 5th generation High Bandwidth Memory (HBM3e) stacks, offering a total memory capacity of 144GB.
Per the report, Oh explained that the company decided to accelerate the release of Rebel Quad due to internal assessments.
He also emphasized the superior power efficiency of Rebel, stating that it has demonstrated more than four times the power efficiency of products from Groq, a competing NPU company in the U.S.
Per a report from Reuters, Rebellions has recently signed a formal merger agreement with Sapeon Korea. The merged entity will retain the name Rebellions, and the combined company is expected to be valued at over 1 trillion Korean won, aiming to strengthen its competitiveness in the global AI chip market.
(Photo credit: Rebellions)
News
Samsung Electronics, which has been struggling at the final stage of its HBM3e qualification with NVIDIA, may unexpectedly emerge as a pacemaker for the AI ecosystem: by balancing the market, the company could ease the cost pressure of building AI servers and alleviate the tight HBM supply, according to a recent report by Korean media outlet Invest Chosun.
Samsung, in its second-quarter earnings call, confirmed that the company’s fifth-generation 8-layer HBM3e is undergoing customer evaluation. The product is reportedly set to enter mass production as early as the third quarter.
Invest Chosun’s analysis suggests that while anticipation is growing that NVIDIA will soon reach a conclusion on Samsung’s HBM3e verification, the market’s attitude toward AI has also been gradually shifting, with the main concern now being that semiconductors are becoming too expensive.
The report, citing remarks from a consultant, notes that NVIDIA chips may cost tens of thousands of dollars each, leading to concerns that the industry’s overall capex investment cycle might not last more than three years.
In addition, the report highlights that the cost of building an AI server for training is about 40 times that of a standard server, with over 80% of the cost attributed to NVIDIA’s AI accelerators. Due to this cost pressure, big tech companies have been closely examining the cost structure of building AI servers.
Therefore, NVIDIA has to take its customers’ budgets into consideration when planning its roadmap. This has also sparked speculation that NVIDIA, under pressure to lower product prices, might compromise and bring Samsung onboard as an HBM3e supplier, the report states.
Citing an industry insider, the report highlights the dilemma facing NVIDIA and its HBM suppliers. As the AI giant shortens its product cycle, releasing the Blackwell (B100) series just two years after the Hopper (H100), HBM suppliers other than SK hynix have been struggling to keep pace, as SK hynix is the supplier with the most experience.
If Samsung doesn’t join the HBM lineup, the overall supply of NVIDIA’s AI accelerators could be limited, driving prices even higher, the report suggests.
Against this backdrop, Samsung may take on the role of pacemaker in the AI semiconductor market, helping to balance the market at a time when there are concerns about overheating in the AI industry. Moreover, if Samsung can form a strong collaboration with NVIDIA by supplying 8-layer HBM3e, its technological gap with competitors will noticeably narrow.
TrendForce notes that Samsung’s recent progress on HBM3e qualification appears solid, and both its 8-Hi and 12-Hi products can be expected to qualify in the near future. The company is eager to win HBM market share from SK hynix and has reserved its 1alpha capacity for HBM3e. TrendForce believes Samsung will become a very important supplier in the HBM category.
(Photo credit: Samsung)