News
According to a report from Taiwan’s Commercial Times, NVIDIA is aggressively establishing a non-TSMC CoWoS supply chain. Sources in the supply chain reveal that UMC is proactively expanding silicon interposer capacity: having already decided to double it, the company now plans a further increase of more than two-fold. Monthly silicon interposer capacity is set to surge from the current 3 kwpm (thousand wafers per month) to 10 kwpm, potentially matching TSMC’s capacity next year and significantly alleviating the supply strain in the CoWoS process.
A prior report from Nomura Securities highlighted NVIDIA’s efforts since the end of Q2 this year to construct a non-TSMC supply chain. Key players include UMC for wafer fabrication, and Amkor and SPIL for packaging and testing. NVIDIA aims to add suppliers to meet the surging demand for CoWoS solutions.
The pivotal challenge in expanding CoWoS production lies in insufficient silicon interposer supply. Going forward, UMC will provide the silicon interposers for the front-end CoW (chip-on-wafer) process, while Amkor and SPIL will take charge of the back-end WoS (wafer-on-substrate) packaging. These collaborations will establish a non-TSMC CoWoS supply chain.
UMC states its current silicon interposer capacity stands at 3 kwpm. However, the company has decided to double capacity at its Singapore plant, targeting around 6 kwpm. The additional capacity is anticipated to come online progressively within 6 to 9 months, with the earliest projections pointing to the first quarter of next year.
Yet, due to persistently robust market demand, even UMC’s expansion to 6 kwpm may not fully meet market needs. Consequently, industry sources suggest UMC has opted to further raise silicon interposer capacity to 10 kwpm, more than tripling its current output. Addressing these expansion rumors, UMC affirms that growth in advanced packaging demand is a clear trend and a future focus, stating that it is evaluating capacity options and does not rule out continued expansion of its silicon interposer capabilities.
(Photo credit: Amkor)
News
According to a report by Taiwan’s Economic Daily, the latest GH200 module released by NVIDIA has seen its assembly orders exclusively undertaken by Foxconn, while the assembly orders for L40S are also entirely managed by Foxconn.
Foxconn has traditionally refrained from commenting on individual business and order dynamics. It is believed that AI chip modules constitute the highest-margin product within the entire server supply chain.
Foxconn has been a longstanding partner of NVIDIA, providing an end-to-end solution across chip modules, baseboards, motherboards, servers, and chassis. Foxconn’s capabilities have facilitated the creation of a comprehensive solution for NVIDIA’s AI server supply chain.
Previously, Foxconn had an exclusive assembly partnership with NVIDIA for the H100 and H800 modules, not only retaining the existing orders but also securing a substantial portion of the HGX module orders. Now, reports indicate that Foxconn will also exclusively supply NVIDIA’s newly unveiled GH200 and the L40S.
Industry sources indicate that due to severe constraints on TSMC’s advanced CoWoS packaging capacity, the scaling up of NVIDIA’s AI chip production has been hindered. However, with new CoWoS production capacity set to gradually open up in the late third quarter to the fourth quarter, shipments of Foxconn’s AI chip modules are anticipated to rapidly increase.
Industry sources reveal that in business negotiations, NVIDIA is known for demanding from its suppliers, but it is also generous in its offerings. As long as suppliers provide products that meet or even exceed expectations, NVIDIA is willing to offer reasonable prices, fostering mutually beneficial relationships with its partners.
(Photo credit: NVIDIA)
News
According to the Korea Economic Daily, Samsung Electronics’ HBM3 and packaging services have passed AMD’s quality tests. The upcoming Instinct MI300 series AI chips from AMD are planned to incorporate Samsung’s HBM3 and packaging services. These chips, which combine central processing units (CPUs), graphics processing units (GPUs), and HBM3, are expected to be released in the fourth quarter of this year.
Samsung is noted as the sole provider capable of offering advanced packaging solutions and HBM products simultaneously. Originally considering TSMC’s advanced packaging services, AMD had to alter its plans due to capacity constraints.
The surge in demand for high-performance GPUs within the AI landscape benefits not only GPU manufacturers like NVIDIA and AMD, but also propels the development of HBM and advanced packaging.
In the backdrop of the AI trend, AIGC model training and inference require the deployment of AI servers. These servers typically require mid-to-high-end GPUs, with HBM penetration nearing 100% among these GPUs.
Presently, Samsung, SK Hynix, and Micron are the primary HBM manufacturers. According to the latest research by TrendForce, driven by the expansion efforts of these original manufacturers, the estimated annual growth rate of HBM supply in 2024 is projected to reach 105%.
In terms of competitive dynamics, SK Hynix leads with its HBM3 products, serving as the primary supplier for NVIDIA’s Server GPUs. Samsung, on the other hand, focuses on fulfilling orders from other cloud service providers. With added orders from customers, the gap in market share between Samsung and SK Hynix is expected to narrow significantly this year. The estimated HBM market share for both companies is about 95% for 2023 to 2024. However, variations in customer composition might lead to sequential variations in bit shipments.
In the realm of advanced packaging capacity, TSMC’s CoWoS packaging technology dominates as the main choice for AI server chip suppliers. Amid strong demand for high-end AI chips and HBM, TrendForce estimates that TSMC’s CoWoS monthly capacity could reach 12,000 wafers by the end of 2023.
With strong demand driven by NVIDIA’s A100 and H100 AI Server requirements, demand for CoWoS capacity is expected to rise by nearly 50% compared to the beginning of the year. Coupled with the growth in high-end AI chip demand from companies like AMD and Google, the latter half of the year could experience tighter CoWoS capacity. This robust demand is expected to continue into 2024, potentially leading to a 30-40% increase in advanced packaging capacity, contingent on equipment readiness.
(Photo credit: Samsung)
News
Dell, a major server brand, placed a substantial order for AI servers just before NVIDIA’s Q2 financial report. This move is reshaping Taiwan’s supply chain dynamics, favoring companies like Wistron and Lite-On.
Dell is aggressively entering the AI server market, ordering NVIDIA’s top-tier H100 chips and components. The order’s value this year is estimated at hundreds of billions of New Taiwan dollars, projected to double next year. Wistron and Lite-On are poised to benefit, securing vital assembly and power supply orders. EMC and Chenbro are also joining the supply chain.
Dell’s AI server order, which includes assembly (covering complete machines, motherboards, GPU boards, etc.) and power supply components, stands out for its staggering value. Competition was most intense in the assembly sector, ultimately won by Wistron. In the power supply domain, industry leaders such as Delta and Lite-On secured notable shares, with Lite-On’s win sparking significant industry discussion.
According to Dell’s supply chain data, AI server volume will reach 20,000 units this year and increase next year. These servers primarily feature NVIDIA’s highest-end H100 chips, with a few units integrating the A100. With each H100 unit priced at $300,000 and A100 units exceeding $100,000, even a seemingly modest 20,000 units adds up to hundreds of billions of New Taiwan dollars.
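The order-value arithmetic above can be sanity-checked with a short back-of-the-envelope sketch. The unit prices come from the report; the H100/A100 split and the exchange rate are illustrative assumptions, not figures from the source.

```python
# Back-of-the-envelope check of the Dell AI server order value.
# Unit prices are as cited in the report; the unit mix and the
# USD-to-TWD exchange rate below are assumptions for illustration.

H100_UNIT_PRICE_USD = 300_000   # per H100-based unit, per the report
A100_UNIT_PRICE_USD = 100_000   # "exceeding $100,000" per the report
USD_TO_TWD = 31.0               # assumed exchange rate (not from the report)

def order_value_twd(h100_units: int, a100_units: int) -> float:
    """Total order value converted to New Taiwan dollars."""
    usd = h100_units * H100_UNIT_PRICE_USD + a100_units * A100_UNIT_PRICE_USD
    return usd * USD_TO_TWD

# 20,000 units, mostly H100 with a few A100 (assume a 19,000 / 1,000 split):
total = order_value_twd(19_000, 1_000)
print(f"≈ NT${total / 1e9:.0f} billion")  # prints: ≈ NT$180 billion
```

Even under conservative assumptions, the total lands in the hundreds of billions of New Taiwan dollars, consistent with the estimate in the report.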
Wistron is a standout winner in Dell’s AI server assembly order, including complete machines, motherboards, and GPU boards. Wistron has existing H100 server orders and will supply new B100 baseboard orders. Their AI server baseboard plant in Hsinchu, Taiwan will expand by Q3 this year. Wistron anticipates year-round growth in the AI server business.
News
According to a report by Taiwan’s Commercial Times, JPMorgan’s latest analysis reveals that AI demand will remain robust in the second half of the year. Encouragingly, TSMC’s CoWoS capacity expansion progress is set to exceed expectations, with production capacity projected to reach 28,000 to 30,000 wafers per month by the end of next year.
The trajectory of CoWoS capacity expansion is anticipated to accelerate notably in the latter half of 2024. This trend isn’t limited to TSMC alone; players outside TSMC are also actively expanding their CoWoS-like production capabilities to meet the soaring demands of AI applications.
Gokul Hariharan, Head of Research for JPMorgan Taiwan, highlighted that industry surveys indicate strong and unabated AI demand in the latter half of the year. Supply shortfalls of 20% to 30% are observed, with CoWoS capacity a key bottleneck and high-bandwidth memory (HBM) also in short supply.
JPMorgan’s estimates indicate that Nvidia will account for 60% of the overall CoWoS demand in 2023. TSMC is expected to produce around 1.8 to 1.9 million sets of H100 chips, followed by significant demand from Broadcom, AWS’ Inferentia chips, and Xilinx. Looking ahead to 2024, TSMC’s continuous capacity expansion is projected to supply Nvidia with approximately 4.1 to 4.2 million sets of H100 chips.
Apart from TSMC’s proactive expansion of CoWoS capacity, Hariharan predicts that other assembly and test facilities are also accelerating their expansion of CoWoS-like capacities.
For instance, UMC is preparing to have a monthly capacity of 5,000 to 6,000 wafers for silicon interposers by the latter half of 2024. Amkor is expected to provide a certain capacity for chip-on-wafer stacking technology, and ASE Group will offer chip-on-substrate bonding capacity. However, these additional capacities might face challenges in ramping up production for the latest products like the H100, potentially focusing more on older-generation products like the A100 and A800.
(Photo credit: TSMC)