HBM4


2024-06-17

[News] Samsung Plans to Introduce 3D HBM Chip Packaging Service in 2024

In 2023, Samsung disclosed plans to launch its advanced three-dimensional (3D) chip packaging technology, which would integrate the memory and processors needed for high-performance chips into much smaller packages. Now, at the Samsung Foundry Forum held in San Jose in June, the tech giant announced that it would introduce 3D packaging services for HBM within this year, according to the latest report by The Korea Economic Daily.

For now, HBM chips are predominantly packaged with 2.5D technology. According to industry sources and Samsung personnel cited in the report, the company's 3D chip packaging technology is expected to hit the market with HBM4, the sixth generation of the HBM family.

Samsung’s announcement of its 3D HBM packaging roadmap came after NVIDIA CEO Jensen Huang revealed Rubin, the company’s upcoming AI platform architecture after Blackwell, at COMPUTEX 2024. The Rubin GPU will reportedly feature 8 HBM4 stacks, while the Rubin Ultra GPU will come with 12 HBM4 stacks, with the platform targeted for release in 2026.

Currently, Samsung’s SAINT (Samsung Advanced Interconnect Technology) platform includes three types of 3D stacking technologies: SAINT S, SAINT L, and SAINT D.

SAINT S involves vertically stacking SRAM on logic chips such as CPUs, while SAINT L involves stacking logic chips on top of other logic chips or application processors (APs). SAINT D, on the other hand, entails vertical stacking of DRAM with logic chips like CPUs and GPUs.

The Korea Economic Daily noted that unlike 2.5D technology, in which HBM chips are connected horizontally to a GPU across a silicon interposer, 3D packaging stacks HBM chips vertically on top of the GPU. This eliminates the need for a silicon interposer, the thin substrate that sits between chips to let them communicate and work together, and could further accelerate AI training and inference processing.

It is also understood that Samsung plans to offer 3D HBM packaging on a turnkey basis, according to the Korea Economic Daily. To achieve this, its advanced packaging team will vertically interconnect HBM chips produced by its memory business division, with GPUs assembled for fabless companies by its foundry unit, the report noted.

As for Samsung’s long-time rival TSMC, the company’s Chip on Wafer on Substrate (CoWoS) technology has been a key enabler of the AI revolution, allowing customers to pack more processor cores and HBM stacks side by side on one interposer. TSMC also made a similar announcement in May, reportedly planning to use its 12nm and 5nm process nodes to manufacture HBM4 base dies, according to a report by AnandTech.


(Photo credit: Samsung)

Please note that this article cites information from The Korea Economic Daily and AnandTech.


2024-06-07

[News] The HBM4 Battle Begins! Memory Stacking Challenges Remain, Hybrid Bonding as the Key Breakthrough

According to a report from TechNews, South Korean memory giant SK Hynix is participating in COMPUTEX 2024 for the first time, showcasing the latest HBM3e memory and MR-MUF technology (Mass Re-flow Molded Underfill), and revealing that hybrid bonding will play a crucial role in chip stacking.

MR-MUF technology attaches semiconductor chips to circuits and uses liquid epoxy molding compound (EMC) to fill the gaps between chips, or between chips and bumps, during stacking. Currently, MR-MUF enables tighter chip stacking, improving heat dissipation by 10% and energy efficiency by 10%, while achieving a product capacity of 36GB and allowing stacking of up to 12 layers.

In contrast, competitors such as Samsung and Micron use TC-NCF technology (thermal compression with non-conductive film), which requires high temperature and pressure to melt and then cure the film material, followed by a cleaning step. That process takes two to three steps, whereas MR-MUF completes stacking in a single step with no cleaning required. Per SK Hynix, MR-MUF offers roughly twice the thermal conductivity of NCF, which significantly impacts process speed and yield.

Even as the number of stacked layers increases, the HBM package thickness remains limited to 775 micrometers (μm). Memory manufacturers must therefore work out how to stack more layers within that fixed height, which poses a significant challenge to current packaging technology. Hybrid bonding is likely to become one of the solutions.

The current technology uses micro bumps to connect DRAM dies, but hybrid bonding can eliminate the need for micro bumps, significantly reducing stack thickness.

SK Hynix has revealed that in future chip stacking, bumps will be eliminated and special materials will be used to fill and connect the chips. This material, similar to a liquid or glue, will provide both heat dissipation and chip protection, resulting in a thinner overall chip stack.

SK Hynix plans to begin mass production of 16-layer HBM4 memory in 2026, using hybrid bonding to stack more DRAM layers. Kim Gwi-wook, head of SK Hynix’s advanced HBM technology team, noted that they are currently researching hybrid bonding and MR-MUF for HBM4, but yield rates are not yet high. If customers require products with more than 20 layers, due to thickness limitations, new processes might be necessary. However, at COMPUTEX, SK Hynix expressed optimism that hybrid bonding technology could potentially allow stacking of more than 20 layers without exceeding 775 micrometers.
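As a rough illustration of the height budget described above, the sketch below shows why removing micro bumps lets more DRAM layers fit under the 775 μm limit. All per-die and bonding-gap thicknesses here are illustrative assumptions, not figures from SK Hynix:

```python
# Back-of-envelope check of the 775 um HBM package height budget.
# Per-layer thicknesses below are assumed values for illustration only.

PACKAGE_LIMIT_UM = 775.0  # HBM package thickness limit cited in the article

def max_dram_layers(die_um: float, bond_um: float, base_die_um: float = 60.0) -> int:
    """Estimate how many DRAM dies fit: base die + N * (die + bonding gap)."""
    budget = PACKAGE_LIMIT_UM - base_die_um
    per_layer = die_um + bond_um
    return int(budget // per_layer)

# Micro-bump stacking: each layer carries a bump/underfill gap (~25 um assumed).
bumped = max_dram_layers(die_um=30.0, bond_um=25.0)
# Hybrid bonding: direct copper-to-copper contact, essentially no gap.
bonded = max_dram_layers(die_um=30.0, bond_um=0.0)

print(bumped, bonded)  # hybrid bonding fits noticeably more layers
```

Under these assumed numbers the bumped stack tops out in the low teens while the hybrid-bonded stack clears 20 layers, consistent with SK Hynix's expectation that hybrid bonding could enable 20+ layers within 775 μm.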

Per a report from Korean media Maeil Business Newspaper, HBM4E is expected to be a 16-20 layer product, potentially debuting in 2028. SK Hynix plans to apply 10nm-class 1c DRAM in HBM4E for the first time, significantly increasing memory capacity.


(Photo credit: SK Hynix)

Please note that this article cites information from TechNews, Maeil Business Newspaper, and the Financial Times.

2024-06-03

[News] NVIDIA CEO Jensen Huang Announces the Latest Rubin Architecture – Rubin Ultra GPU to Feature 12 HBM4

NVIDIA CEO Jensen Huang delivered a keynote speech at the NTU (National Taiwan University) Sports Center on June 2. As per a report from TechNews, during the speech, he unveiled the new generation Rubin architecture, showcasing NVIDIA’s accelerated rollout of new architectures, which became the highlight of the evening.

While discussing NVIDIA’s next-generation architecture, Huang mentioned the Blackwell Ultra GPU and indicated that it may continue to be upgraded. He then revealed that the next-generation architecture following Blackwell will be the Rubin architecture.

The Rubin GPU will feature 8 HBM4, while the Rubin Ultra GPU will come with 12 HBM4 chips, he noted.

The new architecture has been named after American astronomer Vera Rubin, who made significant contributions to our understanding of dark matter in the universe and conducted pioneering work on galaxy rotation rates.

Notably, although NVIDIA has only just launched the new Blackwell platform, it keeps accelerating its roadmap. According to the latest information announced by Jensen Huang, the Rubin GPU will become part of the R-series products and is expected to enter mass production in the fourth quarter of 2025. The Rubin GPU and its corresponding platform are anticipated to be released in 2026, followed by the Ultra version in 2027. NVIDIA also confirmed that the Rubin GPU will use HBM4.

Per a report from global media outlet Wccftech, NVIDIA’s Rubin GPU is expected to adopt a 4x reticle design and utilize TSMC’s CoWoS-L packaging technology, along with the N3 process. Moreover, NVIDIA will use next-generation HBM4 DRAM to power its Rubin GPU.

According to Wccftech, currently, NVIDIA employs the fastest HBM3e in its B100 GPU, and it plans to update these chips with the HBM4 version when HBM4 solutions are produced in large quantities by the end of 2025.


(Photo credit: NVIDIA)

Please note that this article cites information from TechNews and Wccftech.

2024-04-19

[News] SK Hynix Partners with TSMC to Strengthen HBM Technological Leadership

South Korean memory giant SK Hynix announced today that it has recently signed a memorandum of understanding with TSMC for collaboration to produce next-generation HBM and enhance logic and HBM integration through advanced packaging technology. The company plans to proceed with the development of HBM4, or the sixth generation of the HBM family, slated to be mass produced from 2026, through this initiative.

SK Hynix said the collaboration between the global leader in the AI memory space and TSMC, a top global logic foundry, will lead to more innovations in HBM technology. The collaboration is also expected to enable breakthroughs in memory performance through trilateral collaboration between product design, foundry, and memory provider.

The two companies will first focus on improving the performance of the base die mounted at the very bottom of the HBM package. HBM is made by stacking core DRAM dies on top of a base die equipped with TSV technology, vertically connecting a fixed number of DRAM layers to the base die with TSVs to form an HBM package. The base die at the bottom is connected to the GPU, which controls the HBM.

SK Hynix has used a proprietary technology to make base dies up to HBM3E, but plans to adopt TSMC’s advanced logic process for HBM4’s base die so additional functionality can be packed into limited space. That also helps SK Hynix produce customized HBM that meets a wide range of customer demands for performance and power efficiency.

SK Hynix and TSMC also agreed to collaborate to optimize the integration of SK Hynix’s HBM and TSMC’s CoWoS technology, while cooperating in responding to common customers’ requests related to HBM.

“We expect a strong partnership with TSMC to help accelerate our efforts for open collaboration with our customers and develop the industry’s best-performing HBM4,” said Justin Kim, President and the Head of AI Infra, at SK Hynix. “With this cooperation in place, we will strengthen our market leadership as the total AI memory provider further by beefing up competitiveness in the space of the custom memory platform.” 

“TSMC and SK Hynix have already established a strong partnership over the years. We’ve worked together in integrating the most advanced logic and state-of-the art HBM in providing the world’s leading AI solutions,” said Dr. Kevin Zhang, Senior Vice President of TSMC’s Business Development and Overseas Operations Office, and Deputy Co-Chief Operating Officer. “Looking ahead to the next-generation HBM4, we’re confident that we will continue to work closely in delivering the best-integrated solutions to unlock new AI innovations for our common customers.”


(Photo credit: SK Hynix)

Please note that this article cites information from SK Hynix.

2024-04-02

[News] Samsung Reportedly Establishes New HBM Team, Looking to Improve AI Chip Yield

Samsung Electronics Co. has recently established an HBM team within its memory division, with the goal of enhancing yields in the development of the sixth-generation AI memory HBM4 and the AI accelerator Mach-1.

According to a report of the Korea Economic Daily (KED) citing industry sources on March 29th, Samsung’s HBM team is primarily responsible for the research, development, and sales of DRAM and NAND flash memory. Samsung’s Executive Vice President and Chief of DRAM Product and Technology, Hwang Sang-joon, will lead the new team. This marks the second team focused on HBM since Samsung initiated its HBM task force in January.

Per KED’s report, Samsung is stepping up its efforts in hopes of surpassing SK Hynix, the leader in the advanced HBM field. In 2019, Samsung dissolved its HBM team due to a mistaken belief that the market would not see significant growth.

Per a previous TrendForce press release, the three major original HBM manufacturers held market shares as follows in 2023: SK Hynix and Samsung were both around 46-49%, while Micron stood at roughly 4-6%.

To vie for dominance in the AI chip market, Samsung is pursuing a “two-track” strategy by concurrently developing two cutting-edge memory chips: HBM and Mach-1.

According to TrendForce’s report, SK Hynix led the way with its HBM3e validation in the first quarter, closely followed by Micron, which plans to start distributing its HBM3e products toward the end of the first quarter, in alignment with NVIDIA’s planned H200 deployment by the end of the second quarter.

Samsung, slightly behind in sample submissions, is expected to complete its HBM3e validation by the end of the first quarter, with shipments rolling out in the second quarter.

According to the same report from KED, Samsung is also gearing up to develop the next-generation accelerator “Mach-2,” tailored for AI inference. According to Kyung Kye-hyun, head of Samsung’s Device Solutions division, on March 29th, Samsung must expedite the development of Mach-2 given strong customer interest in this regard.


(Photo credit: Samsung)

Please note that this article cites information from Korea Economic Daily.
