News
To address the growing demand for high-performance memory solutions fueled by the expansion of the artificial intelligence (AI) market, Samsung Electronics has formed a new “HBM Development Team” within its Device Solutions (DS) Division to sharpen its competitive edge in high-bandwidth memory (HBM), according to the latest report from Business Korea. The new team will concentrate on advancing the development of HBM3, HBM3e, and next-generation HBM4 technologies, the report noted.
This initiative comes shortly after the Korean memory giant changed its semiconductor business leader in May. Citing industry sources, the report stated that Samsung’s DS Division carried out an organizational restructuring centered on the establishment of the HBM Development Team.
The move has also drawn attention because, on July 4th, a report from Korean media outlet Newdaily indicated that Samsung had finally obtained NVIDIA’s approval for the qualification of its 5th-generation HBM, HBM3e, though the company later denied the market rumor.
Samsung has a long history of dedication to HBM development. Since 2015, it has maintained an HBM development organization within its Memory Business Division. Earlier this year, the tech heavyweight also created a task force (TF) to boost its HBM competitiveness, and the new team will unify and build on these ongoing efforts, the report noted.
According to the report, Samsung reached a significant milestone in February by developing the industry’s first HBM3e 12-layer stack, which offers the industry’s largest capacity of 36 gigabytes (GB). Samples of the HBM3e 8-layer and 12-layer stacks have already been sent to NVIDIA for quality testing.
Regarding the latest development, TrendForce reports that Samsung is still collaborating with NVIDIA and other major customers on the qualifications for both 8-hi and 12-hi HBM3e products. Samsung anticipates that its HBM3e qualification will be partially completed by the end of 3Q24.
According to TrendForce’s latest analysis of the HBM market, HBM production will be prioritized due to its profitability and increasing demand. However, limited yields of around 50–60% and a wafer area roughly 60% larger than that of standard DRAM products mean a higher proportion of wafer input is required. Based on each company’s TSV capacity, HBM is expected to account for 35% of advanced process wafer input by the end of this year, with the remaining wafer capacity used for LPDDR5(X) and DDR5 products.
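As a rough illustration of how those figures combine, the sketch below (in Python) multiplies the reported ~60% area penalty by the yield gap to estimate how much wafer input HBM requires relative to standard DRAM; the 85% baseline DRAM yield is an illustrative assumption, not a figure from the analysis.

```python
# Back-of-the-envelope estimate of relative wafer input for HBM vs. standard DRAM.
# Reported figures: HBM consumes ~60% more wafer area and yields roughly 50-60%.
# The baseline DRAM yield below is an illustrative assumption, not a reported number.

def relative_wafer_input(area_penalty: float, hbm_yield: float, dram_yield: float) -> float:
    """Wafers of HBM input needed per wafer of standard DRAM, for the same good output."""
    return (1 + area_penalty) * (dram_yield / hbm_yield)

AREA_PENALTY = 0.60   # HBM uses ~60% more wafer area (per the analysis)
DRAM_YIELD = 0.85     # assumed baseline yield for standard DRAM (illustrative)

for hbm_yield in (0.50, 0.60):
    factor = relative_wafer_input(AREA_PENALTY, hbm_yield, DRAM_YIELD)
    print(f"HBM yield {hbm_yield:.0%}: ~{factor:.1f}x the wafer input of standard DRAM")
```

Under these assumptions, each unit of good HBM output consumes roughly 2.3 to 2.7 times the wafer input of equivalent standard DRAM, which is consistent with HBM claiming a disproportionate share of advanced-process wafer capacity.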
Read more
(Photo credit: Samsung)
News
In early June, NVIDIA CEO Jensen Huang revealed that Samsung’s High Bandwidth Memory (HBM) was still working through the certification process but was one step away from beginning supply. On July 4th, a report from Korean media outlet Newdaily indicated that Samsung has finally obtained approval from the GPU giant for the qualification of its 5th-generation HBM, HBM3e. Samsung is expected to soon proceed with the subsequent procedures to officially begin mass production for HBM supply, the report noted.
Citing sources from the semiconductor industry, the report stated that Samsung recently received the HBM3e quality test PRA (Product Readiness Approval) notification from NVIDIA. It is expected that negotiations for supply will commence afterward.
However, just one hour after the news broke that Samsung’s HBM3e had passed NVIDIA’s tests, another media outlet, Hankyung, noted that Samsung had denied the rumor, clarifying that it is “not true” and that the company is continuously undergoing quality assessments.
TrendForce reports that Samsung is still collaborating with NVIDIA and other major customers on the qualifications for both 8-hi and 12-hi HBM3e products. The successful qualification mentioned in the article was only an internal achievement for Samsung. Samsung anticipates that its HBM3e qualification will be partially completed by the end of 3Q24.
Per a previous report from Reuters, Samsung has been attempting to pass NVIDIA’s tests for HBM3 and HBM3e since last year, while a test of Samsung’s 8-layer and 12-layer HBM3e chips was said to have failed in April.
According to the latest report from Newdaily, Samsung’s breakthrough came about a month after the memory heavyweight sent executives in charge of HBM and memory development to the U.S. at NVIDIA’s request. Previously, it was reported that Samsung had failed to pass the quality test as scheduled due to issues such as overheating.
The report further stated that while supplying HBM to NVIDIA is crucial from Samsung’s perspective, NVIDIA is equally eager to secure more HBM, given the overwhelming demand for AI semiconductors and the impending mass production of its next-generation Rubin GPU, which significantly increases HBM usage.
According to the report, Samsung is expected to eliminate uncertainties in HBM and start full-scale mass production, giving a significant boost to its memory business. There are also suggestions that its HBM performance could see a quantum leap starting in the second half of the year.
On the other hand, Samsung’s major rival, SK hynix, is the primary supplier of NVIDIA’s HBM3 and HBM3e. According to an earlier TrendForce analysis, NVIDIA’s upcoming B100 or H200 models will incorporate advanced HBM3e, while the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix.
According to a report from the Financial Times in May, SK hynix has successfully reduced the time needed for mass production of HBM3e chips by 50% and is close to achieving its target yield of 80%.
Read more
(Photo credit: Samsung)
News
As memory giants ramp up high-bandwidth memory (HBM) production, back-end equipment maker ASMPT has supplied a demo thermal compression (TC) bonder for Micron’s HBM production, according to a report from Korean media outlet TheElec.
TC bonders play a pivotal role in HBM production by employing thermal compression to bond and stack chips on processed wafers, thereby significantly influencing HBM yield.
ASMPT is reportedly collaborating with the US memory giant to co-develop a TC bonder for use in HBM4 production. Notably, ASMPT has supplied TC bonders to SK Hynix as well and plans to deliver more units later in the year.
Micron is also procuring TC bonders from Shinkawa and Hanmi Semiconductor for the production of HBM3e. However, as per the same report citing sources, Shinkawa has its hands full supplying bonders to its largest customer, so Micron added Hanmi Semiconductor as a secondary supplier.
In addition to Micron, Samsung Electronics and SK Hynix have developed distinct supply chains for TC bonders. Samsung sources its equipment from Japan’s Toray and Shinkawa, as well as its subsidiary SEMES. In contrast, SK Hynix relies on Singapore’s ASMPT, Hanmi Semiconductor, and Hanwha Precision Machinery.
According to industry sources cited by The Chosun Daily, TC bonder orders driven by memory giants have been strong, as Samsung Electronics’ subsidiary SEMES has delivered nearly 100 TC bonders over the past year. Meanwhile, SK Hynix has inked an approximately $107.98 million contract with Hanmi Semiconductor, which commands a 65% share of the TC bonder market.
Regarding the latest developments in HBM, TrendForce indicates that HBM3e will become the market mainstream this year, with shipments concentrated in the second half of the year. Currently, SK hynix remains the primary supplier, along with Micron, both utilizing 1β (1-beta) nm processes and already shipping to NVIDIA.
According to TrendForce predictions, the annual growth rate of HBM demand will approach 200% in 2024 and is expected to double in 2025.
Read more
(Photo credit: Micron)
News
According to a report by the Economic Daily News, TSMC has secured another AI business opportunity. Following its exclusive contract manufacturing of AI chips for tech giants such as NVIDIA and AMD, TSMC, in collaboration with its subsidiary, the ASIC design service provider Global Unichip Corporation (GUC), has reportedly made significant progress in producing essential peripheral components for AI servers, specifically high-bandwidth memory (HBM). Together, they have secured a major order for the foundational base die chips of next-generation HBM4.
TSMC and GUC typically do not comment on order details. SK Hynix, on the other hand, has clarified in a statement to Bloomberg that it has not signed a contract with GUC for its next-generation AI memory chips, according to the Economic Daily News.
Industry sources cited by the report point out that the strong demand for AI is not only making high-performance computing (HPC) related chips highly sought after, but also driving robust demand for HBM, creating new market opportunities. This surge in demand has attracted major memory manufacturers such as SK Hynix, Samsung, and Micron to actively invest. Under the influence of AI engines, the current production capacity for HBM3 and HBM3e is in a state of supply shortage.
As AI chip manufacturing advances to the 3nm generation next year, the existing HBM3 and HBM3e, limited by capacity and speed constraints, may prevent the new generation of AI chips from reaching their maximum computational power. Consequently, the three major memory manufacturers are unanimously increasing their capital expenditures and starting to invest in the development of next-generation HBM4 products, aiming for mass production by the end of 2025 and large-scale shipments by 2026.
While memory manufacturers are delving into the research and development of next-generation HBM4, the semiconductor standardization organization JEDEC Solid State Technology Association is also busy establishing new HBM4-related standards. It is also rumored that JEDEC will relax the stacking height limit for HBM4 to 775 micrometers, hinting that hybrid bonding, the advanced packaging technology previously thought to be required, can be postponed until the next generation of HBM specifications.
Industry sources cited by the report also suggest that the most significant change in HBM4, besides increasing the stacking height to 16 layers of DRAM, will be the addition of a logic IC at the base to enhance bandwidth transmission speed. This logic IC, known as the base die, is expected to be the major innovation in the new generation of HBM4 and possibly a reason for JEDEC’s relaxation of the stacking height limitation.
On the other hand, SK Hynix has announced its collaboration with TSMC to advance HBM4 and capture opportunities in advanced packaging. Industry sources also indicate that GUC has successfully secured the critical design order for SK Hynix’s HBM4 base die.
The design is expected to be finalized as early as next year, with production to be carried out using TSMC’s 12nm and 5nm processes, depending on whether high performance or low power consumption is prioritized.
The report also suggests that SK Hynix’s decision to entrust the base die orders to GUC and TSMC stems primarily from the fact that TSMC currently dominates over 90% of the CoWoS advanced packaging market used for HPC chips.
Read more
(Photo credit: TSMC)
News
According to a report from TechNews, South Korean memory giant SK Hynix is participating in COMPUTEX 2024 for the first time, showcasing its latest HBM3e memory and MR-MUF (Mass Reflow Molded Underfill) technology, and revealing that hybrid bonding will play a crucial role in chip stacking.
MR-MUF technology attaches semiconductor chips to circuits and uses a liquid epoxy molding compound (EMC) to fill the gaps between chips, or between chips and bumps, during stacking. Currently, MR-MUF enables tighter chip stacking, improving heat dissipation performance by 10% and energy efficiency by 10%, while achieving a product capacity of 36GB and allowing for stacking of up to 12 layers.
In contrast, competitors like Samsung and Micron use TC-NCF technology (thermal compression with non-conductive film), which requires high temperature and pressure to melt and then solidify the film, followed by cleaning. This process involves two to three more steps, whereas MR-MUF completes the process in a single step with no cleaning required. According to SK Hynix, MR-MUF offers approximately twice the thermal conductivity of NCF, which significantly benefits process speed and yield.
Even as the number of stacked layers increases, the overall HBM package thickness remains limited to 775 micrometers (μm). Memory manufacturers must therefore work out how to stack more layers within that fixed height, which poses a significant challenge to current packaging technology. Hybrid bonding is likely to become one of the solutions.
The current technology uses micro bump materials to connect DRAM modules, but hybrid bonding can eliminate the need for micro bumps, significantly reducing chip thickness.
SK Hynix has revealed that in future chip stacking, bumps will be eliminated and special materials will be used to fill and connect the chips. This material, similar to a liquid or glue, will provide both heat dissipation and chip protection, resulting in a thinner overall chip stack.
SK Hynix plans to begin mass production of 16-layer HBM4 memory in 2026, using hybrid bonding to stack more DRAM layers. Kim Gwi-wook, head of SK Hynix’s advanced HBM technology team, noted that the company is currently researching hybrid bonding and MR-MUF for HBM4, but yield rates are not yet high. If customers require products with more than 20 layers, new processes might be necessary due to thickness limitations. However, at COMPUTEX, SK Hynix expressed optimism that hybrid bonding could potentially allow stacking of more than 20 layers without exceeding 775 micrometers.
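As a rough illustration of why eliminating micro bumps matters at these layer counts, here is a minimal sketch in Python that divides the 775-micrometer package limit by the layer count to get the average height available per DRAM layer. Only the 775 μm limit and the 12/16/20-layer counts come from the articles; the allowance reserved for the base (logic) die is an illustrative assumption, not an SK hynix figure.

```python
# Rough per-layer height budget within the 775 um HBM package limit.
# Only the 775 um limit and the 12/16/20-layer counts come from the articles;
# the allowance reserved for the base (logic) die is an illustrative assumption.

PACKAGE_LIMIT_UM = 775   # package height limit cited in the articles
BASE_DIE_UM = 60         # assumed allowance for the base die (illustrative)

def per_layer_budget_um(layers: int) -> float:
    """Average height available to each DRAM layer (die plus bond/bump/underfill)."""
    return (PACKAGE_LIMIT_UM - BASE_DIE_UM) / layers

for layers in (12, 16, 20):
    print(f"{layers}-high stack: ~{per_layer_budget_um(layers):.0f} um per DRAM layer")
```

Under these assumptions, the per-layer budget shrinks from roughly 60 μm at 12 layers to about 36 μm at 20 layers, which is why removing micro bumps via hybrid bonding becomes attractive at higher stack counts.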
Per a report from Korean media Maeil Business Newspaper, HBM4E is expected to be a 16-20 layer product, potentially debuting in 2028. SK Hynix plans to apply 10nm-class 1c DRAM in HBM4E for the first time, significantly increasing memory capacity.
Read more
(Photo credit: SK Hynix)