News
Starting in October 2022, the U.S. has launched a series of export controls aimed at limiting China’s access to advanced semiconductor technologies, barring tech giants including Intel, Qualcomm and NVIDIA from shipping some of their most cutting-edge chips to China. A new development now appears to be emerging, as the White House is reportedly considering additional restrictions on China’s access to gate-all-around (GAA) transistor technology as well as high-bandwidth memory (HBM), according to reports from Bloomberg and Tom’s Hardware.
For now, the Big Three in the semiconductor industry have all announced their roadmaps regarding GAA. TSMC plans to adopt GAAFET (gate-all-around field-effect transistor) technology in its A16 (2nm) process, targeting mass production in 2026. Intel aims to implement GAA in its upcoming 20A node, which may enter mass production by 2024. Samsung, meanwhile, is the only company to have adopted GAA already, introducing it as early as its 3nm node.
GAA transistors are crucial for pushing Moore’s Law further. By replacing the vertical fin used in FinFET transistors with a stack of horizontal sheets, the structure can further reduce leakage while increasing drive current, enabling better chip performance.
Citing sources familiar with the matter, Bloomberg noted that the UK imposed controls in March on GAAFET structures, which are typically used in chips manufactured on advanced nodes, and that the U.S. and other allies are now expected to follow. The restrictions are reportedly expected to be implemented as soon as this summer, though further details have yet to be confirmed.
Also, it remains unclear whether the ban would restrict China’s ability to develop its own GAA chips or prevent U.S. and other international chipmakers from selling their products to Chinese firms, the report noted.
In addition to GAA, the Bloomberg report also mentioned that there have been preliminary discussions about restricting exports of high-bandwidth memory (HBM) chips. HBM chips, produced by memory giants like SK Hynix, Samsung and Micron, could enhance the performance of AI applications and are utilized by companies such as NVIDIA.
Recently, Huawei successfully mass-produced 7nm chips without using EUV lithography. This development has surprised the global semiconductor market and led to speculation that Huawei may soon mass-produce 5nm chips as well. However, Zhang Ping’an, the Chief Executive Officer of Huawei Cloud Services, earlier expressed concern that China, due to US sanctions, is unable to purchase 3.5nm chip equipment.
Read more
(Photo credit: Intel)
News
As the standard DRAM market experiences an unprecedented cycle of supply-demand imbalance, the shortage of DDR3 production capacity has become even more severe.
According to a report from the Economic Daily News, with leading manufacturers like Samsung exiting DDR3 production and demand for DDR3 from AI and edge computing devices continuing to increase, the storage capacity per device is rising sharply. This is expected to drive a rebound in DDR3 prices, potentially benefiting related Taiwanese manufacturers such as Winbond, Elite Semiconductor Microelectronics Technology (ESMT), and Etron.
As they shift their operational focus to high-bandwidth memory (HBM) and DDR5, the world’s top three memory manufacturers are gradually withdrawing from the DDR3 market.
Reportedly, Samsung has informed customers that it will cease DDR3 production by the end of the second quarter. SK Hynix had already converted its DDR3 production at its Wuxi plant in China to DDR4 by the end of last year. Meanwhile, Micron has significantly reduced its DDR3 supply to expand its DDR5 and HBM production capacity.
Industry sources cited in the same report said that as production cuts by major DRAM manufacturers continue to take effect, standard DRAM prices have been driven up from the second half of 2023 to the present, with further increases expected.
Prices for niche memory like DDR3, meanwhile, tend to lag behind standard DRAM by one to two quarters. For Taiwanese manufacturers such as Winbond, ESMT, and Etron, which focus on DDR3, the benefits of DDR3 price increases will therefore gradually become apparent this quarter and next.
The industry sources cited by the same report also point out that DDR3 applications remain quite widespread. For example, WiFi 6 devices predominantly use DDR3, and next-generation WiFi 7 devices will still primarily use DDR3/DDR4. Additionally, edge computing devices are expected to continue adopting DDR3. With supply decreasing significantly while demand remains strong, DDR3 prices are expected to continue their upward trend.
Read more
(Photo credit: Samsung)
Press Releases
According to the latest report by TheElec, though Samsung has been using thermal compression (TC) bonding up to its 12-stack HBM, the company has now confirmed its belief that hybrid bonding is necessary for manufacturing 16-stack HBM.
Regarding its future HBM roadmap, Samsung reportedly plans to produce HBM4 samples in 2025, mostly in 16-stack configurations, with mass production slated for 2026, the report noted. According to TheElec, Samsung used hybrid bonding equipment from its subsidiary Semes in April to produce a 16-stack HBM sample, which it said operated normally.
Citing information Samsung revealed during the 2024 IEEE 74th Electronic Components and Technology Conference last month, TheElec learned that Samsung considered hybrid bonding essential for HBM with 16 stacks and above.
According to the report, Samsung has been using thermal compression (TC) bonding up to its 12-stack HBM. Now, however, it emphasized hybrid bonding’s ability to reduce stack height, which would be indispensable for 16-stack HBM. By further narrowing the gap between chips, 17 chips (one base die and 16 core dies) can be fitted within a 775-micrometer form factor.
According to an earlier report from TechNews, Samsung and Micron use TC-NCF technology (thermal compression with non-conductive film) for HBM production, which requires high temperatures and high pressure to solidify materials before melting them, followed by cleaning. The industry has relied on traditional copper micro bumps as the interconnect scheme for packages, but their size poses challenges when trying to stack more chips at a lower height.
Samsung stated that though making the core die as thin as possible or reducing the bump pitch could help, these methods have reached their limits. Sources cited by TheElec mentioned that it is very challenging to make the core die thinner than 30 micrometers. Using bumps to connect the chips also has limitations due to the volume of the bumps themselves. Hybrid bonding technology may thus emerge as a promising solution.
While the current technology uses micro bump materials to connect DRAM modules, hybrid bonding, which stacks chips vertically using through-silicon vias (TSVs), can eliminate the need for micro bumps, significantly reducing chip thickness.
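To illustrate the height budget TheElec describes, a quick back-of-the-envelope calculation can show why bump-based stacking runs out of room at 16 stacks. The 775 μm package limit, the 17-die count, and the roughly 30 μm core-die thinning floor come from the report; the per-interface bump gap below is a hypothetical illustrative value, not a figure from the report.

```python
# Height budget for a 16-stack HBM package (illustrative sketch).
package_limit_um = 775   # maximum package form factor (from the report)
num_dies = 17            # 1 base die + 16 core dies (from the report)
bump_gap_um = 20         # HYPOTHETICAL micro-bump gap per die-to-die interface

# With micro bumps, each of the 16 die-to-die interfaces adds a gap,
# eating into the silicon thickness budget.
gap_total = bump_gap_um * (num_dies - 1)
die_budget = (package_limit_um - gap_total) / num_dies
print(f"per-die budget with bumps:  {die_budget:.1f} um")   # ~26.8 um

# Hybrid bonding joins copper pads directly, with no bump gap,
# so the full package height is available for silicon.
die_budget_hybrid = package_limit_um / num_dies
print(f"per-die budget, hybrid:     {die_budget_hybrid:.1f} um")  # ~45.6 um
```

Under this assumed 20 μm gap, the bumped budget falls below the ~30 μm thinning limit the report cites, while removing the bumps brings it back to roughly 45 μm per die.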
On the other hand, according to another report by Business Korea, SK hynix has shown confidence in HBM produced with its Mass Reflow-Molded Underfill (MR-MUF) technology. MR-MUF attaches semiconductor chips to circuits and uses liquid epoxy molding compound (EMC) to fill the gaps between chips, and between chips and bumps, during stacking.
SK hynix reportedly plans to begin mass production of 16-layer HBM4 memory in 2026, and the memory heavyweight is currently researching hybrid bonding and MR-MUF for HBM4, but yield rates are not yet high, the report said.
Read more
(Photo credit: Samsung)
News
According to a report from TechNews, South Korean memory giant SK Hynix is participating in COMPUTEX 2024 for the first time, showcasing the latest HBM3e memory and Mass Reflow-Molded Underfill (MR-MUF) technology, and revealing that hybrid bonding will play a crucial role in chip stacking.
MR-MUF technology attaches semiconductor chips to circuits and uses liquid epoxy molding compound (EMC) to fill the gaps between chips, and between chips and bumps, during stacking. Currently, MR-MUF enables tighter chip stacking, improving heat dissipation performance by 10% and energy efficiency by 10%, achieving a product capacity of 36GB, and allowing stacking of up to 12 layers.
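The capacity figure above implies a per-die density, which a quick check makes explicit. Note that the 16-layer extrapolation below is an assumption for illustration, not a figure from the article.

```python
# Per-die capacity implied by the article's figures:
# a 12-layer stack with 36 GB total.
layers = 12
stack_gb = 36
per_die_gb = stack_gb / layers      # 3.0 GB per core die
per_die_gbit = per_die_gb * 8       # 24 Gb per core die
print(f"per-die capacity: {per_die_gb:.0f} GB ({per_die_gbit:.0f} Gb)")

# HYPOTHETICAL extrapolation (not stated in the article): a 16-layer
# stack built from the same 24 Gb dies would reach 48 GB.
print(f"16 layers at the same density: {16 * per_die_gb:.0f} GB")
```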
In contrast, competitors like Samsung and Micron use TC-NCF technology (thermal compression with non-conductive film), which requires high temperatures and high pressure to solidify materials before melting them, followed by cleaning. That process involves multiple steps, whereas MR-MUF completes stacking in a single step with no cleaning required. As per SK Hynix, MR-MUF offers approximately twice the thermal conductivity of NCF, significantly benefiting process speed and yield.
As the number of stacking layers increases, the HBM package thickness is limited to 775 micrometers (μm). Therefore, memory manufacturers must consider how to stack more layers within a certain height, which poses a significant challenge to current packaging technology. Hybrid bonding is likely to become one of the solutions.
The current technology uses micro bump materials to connect DRAM modules, but hybrid bonding can eliminate the need for micro bumps, significantly reducing chip thickness.
SK Hynix has revealed that in future chip stacking, bumps will be eliminated and special materials will be used to fill and connect the chips. This material, similar to a liquid or glue, will provide both heat dissipation and chip protection, resulting in a thinner overall chip stack.
SK Hynix plans to begin mass production of 16-layer HBM4 memory in 2026, using hybrid bonding to stack more DRAM layers. Kim Gwi-wook, head of SK Hynix’s advanced HBM technology team, noted that they are currently researching hybrid bonding and MR-MUF for HBM4, but yield rates are not yet high. If customers require products with more than 20 layers, due to thickness limitations, new processes might be necessary. However, at COMPUTEX, SK Hynix expressed optimism that hybrid bonding technology could potentially allow stacking of more than 20 layers without exceeding 775 micrometers.
Per a report from Korean media Maeil Business Newspaper, HBM4E is expected to be a 16-20 layer product, potentially debuting in 2028. SK Hynix plans to apply 10nm-class 1c DRAM in HBM4E for the first time, significantly increasing memory capacity.
Read more
(Photo credit: SK Hynix)
News
Driven by the rapid growth in demand for high-bandwidth memory (HBM) fueled by artificial intelligence (AI), memory manufacturers are vying for market opportunities. According to a report from CNA, Micron has announced its target to achieve a 20% to 25% market share in HBM by 2025.
Targeting the swiftly growing demand for HBM, along with its better product pricing and profitability, the three major memory manufacturers—SK Hynix, Micron, and Samsung—are all aggressively advancing in this area. Currently, SK Hynix holds the leading position, but Micron is also making significant progress.
Micron stated that its progress on HBM3e can be attributed to the company’s advanced packaging and design capabilities, along with the integration of its own processes. The company is also developing next-generation HBM4 products.
Regarding its global capacity expansion plan, the memory heavyweight has been considering Hiroshima, Japan, as one of the potential sites. The company’s HBM capacity for fiscal year 2024 has already sold out and is expected to contribute hundreds of millions of dollars in revenue.
As for its ambition regarding HBM, Micron stated that it aims to capture 20-25% market share by 2025.
Notably, per a previous report from the South Korean newspaper Korea Joongang Daily, following Micron’s initiation of mass production of the latest high-bandwidth memory, HBM3e, in February 2024, it has recently secured an order from NVIDIA for the H200 AI GPU. It is understood that NVIDIA’s upcoming H200 processor will utilize the latest HBM3e, which is more powerful than the HBM3 used in the H100.
During a press conference on June 5, Micron announced the launch of its GDDR7 graphics memory, which is currently sampling. Built on Micron’s 1-beta technology, GDDR7 offers more than a 50% improvement in energy efficiency over the previous-generation GDDR6, helping address thermal issues and extend battery life.
Micron highlighted that the GDDR7 system bandwidth is increased to 1.5TB per second, 60% higher than GDDR6. Its applications range from AI and gaming to high-performance computing. The product is expected to start shipping in the second half of this year.
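As a quick sanity check on those figures (purely illustrative arithmetic, not from the announcement), the stated 60% uplift implies the GDDR6 baseline Micron is comparing against:

```python
# Implied GDDR6 baseline from the announced GDDR7 figures:
# 1.5 TB/s system bandwidth, described as 60% higher than GDDR6.
gddr7_tbps = 1.5
uplift = 0.60
gddr6_tbps = gddr7_tbps / (1 + uplift)   # ~0.94 TB/s
print(f"implied GDDR6 system bandwidth: {gddr6_tbps:.2f} TB/s")
```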
Read more
(Photo credit: Micron)