Nvidia


2023-08-24

[News] Foxconn Rumored to Secure Significant Orders for NVIDIA’s New GH200, L40S Module

According to a report by Taiwan’s Economic Daily, Foxconn has exclusively taken on assembly orders for NVIDIA’s newly released GH200 module, and it also handles all assembly orders for the L40S.

Foxconn has traditionally refrained from commenting on individual customers and orders. AI chip modules are believed to be the highest-margin product in the entire server supply chain.

Foxconn has been a longstanding partner of NVIDIA, providing an end-to-end solution across chip modules, baseboards, motherboards, servers, and chassis. Foxconn’s capabilities have facilitated the creation of a comprehensive solution for NVIDIA’s AI server supply chain.

Previously, Foxconn held an exclusive assembly partnership with NVIDIA for the H100 and H800 modules; it has not only retained those orders but also secured a substantial portion of the HGX module orders. Now, reports indicate that Foxconn will also be the exclusive supplier for NVIDIA’s newly unveiled GH200 and L40S.

Industry sources indicate that severe constraints on TSMC’s advanced CoWoS packaging capacity have hindered the scaling up of NVIDIA’s AI chip production. However, with new CoWoS capacity set to come online gradually from the late third quarter through the fourth quarter, shipments of Foxconn’s AI chip modules are expected to ramp up rapidly.

Industry sources reveal that in business negotiations, NVIDIA is known for being demanding of its suppliers, but it is also generous in return. As long as suppliers deliver products that meet or even exceed expectations, NVIDIA is willing to pay reasonable prices, fostering mutually beneficial relationships with its partners.

(Photo credit: NVIDIA)

2023-08-23

[News] TSMC Faces Capacity Shortage, Samsung May Provide Advanced Packaging and HBM Services to AMD

According to the Korea Economic Daily, Samsung Electronics’ HBM3 and packaging services have passed AMD’s quality tests. AMD’s upcoming Instinct MI300 series AI chips are planned to incorporate Samsung’s HBM3 and packaging services. These chips, which combine central processing units (CPUs), graphics processing units (GPUs), and HBM3, are expected to be released in the fourth quarter of this year.

Samsung is noted as the sole provider capable of offering both advanced packaging solutions and HBM products. AMD originally considered TSMC’s advanced packaging services but had to alter its plans due to capacity constraints.

The surge in demand for high-performance GPUs within the AI landscape benefits not only GPU manufacturers like NVIDIA and AMD, but also propels the development of HBM and advanced packaging.

In the backdrop of the AI trend, AIGC model training and inference require the deployment of AI servers. These servers typically require mid-to-high-end GPUs, with HBM penetration nearing 100% among these GPUs.

Presently, Samsung, SK Hynix, and Micron are the primary HBM manufacturers. According to the latest research by TrendForce, driven by the expansion efforts of these original manufacturers, the estimated annual growth rate of HBM supply in 2024 is projected to reach 105%.

In terms of competitive dynamics, SK hynix leads with its HBM3 products, serving as the primary supplier for NVIDIA’s server GPUs. Samsung, on the other hand, focuses on fulfilling orders from other cloud service providers. With added orders from customers, the gap in market share between Samsung and SK hynix is expected to narrow significantly this year. Together, the two companies’ combined HBM market share is estimated at about 95% for 2023 to 2024. However, differences in customer composition may cause quarter-to-quarter variations in bit shipments.

In the realm of advanced packaging capacity, TSMC’s CoWoS packaging technology dominates as the main choice for AI server chip suppliers. Amid strong demand for high-end AI chips and HBM, TrendForce estimates that TSMC’s CoWoS monthly capacity could reach 12,000 wafers by the end of 2023.

With strong demand driven by NVIDIA’s A100 and H100 AI Server requirements, demand for CoWoS capacity is expected to rise by nearly 50% compared to the beginning of the year. Coupled with the growth in high-end AI chip demand from companies like AMD and Google, the latter half of the year could experience tighter CoWoS capacity. This robust demand is expected to continue into 2024, potentially leading to a 30-40% increase in advanced packaging capacity, contingent on equipment readiness.

(Photo credit: Samsung)

2023-08-22

[News] Dell’s Large Orders Boost Wistron and Lite-On, AI Server Business to Grow Quarterly

Dell, a major server brand, placed a substantial order for AI servers just before NVIDIA’s Q2 financial report. This move is reshaping Taiwan’s supply chain dynamics, favoring companies like Wistron and Lite-On.

Dell is aggressively entering the AI server market, ordering NVIDIA’s top-tier H100 chips and related components. The order’s value this year is estimated at hundreds of billions of New Taiwan dollars and is projected to double next year. Wistron and Lite-On are poised to benefit, securing vital assembly and power supply orders. EMC and Chenbro are also joining the supply chain.

Dell’s AI server order, covering assembly (complete machines, motherboards, GPU boards, etc.) and power supply components, stands out for its staggering value. Competition was most intense in the assembly segment, ultimately won by Wistron. In the power supply segment, industry leaders such as Delta and Lite-On secured notable shares, with Lite-On’s win in particular sparking significant industry discussion.

According to Dell’s supply chain data, AI server shipments will reach 20,000 units this year and increase further next year. These units primarily feature NVIDIA’s highest-end H100 chips, with a few integrating A100 chips. With each H100-based unit priced at US$300,000 and A100-based units exceeding US$100,000, even a seemingly modest 20,000 units puts the total value in the hundreds of billions of New Taiwan dollars.
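As a rough sanity check on the figures above, the order of magnitude can be bounded from the reported numbers. The unit prices and the exchange rate of roughly NT$31 per US dollar are assumptions taken from the report and 2023 market rates, not confirmed figures from Dell or NVIDIA:

```python
# Order-of-magnitude check on the reported Dell AI server order.
# All inputs are assumptions from the report, not confirmed figures.
UNITS = 20_000            # reported AI server units this year
PRICE_H100_USD = 300_000  # reported price per H100-based unit
PRICE_A100_USD = 100_000  # reported price per A100-based unit
TWD_PER_USD = 31          # assumed 2023 exchange rate

# Bounds: all units at the lower price vs. all at the higher price.
low_usd = UNITS * PRICE_A100_USD    # 2.0 billion USD
high_usd = UNITS * PRICE_H100_USD   # 6.0 billion USD

low_twd_billions = low_usd * TWD_PER_USD / 1e9
high_twd_billions = high_usd * TWD_PER_USD / 1e9
print(f"USD total: {low_usd / 1e9:.0f} to {high_usd / 1e9:.0f} billion")
print(f"NTD total: {low_twd_billions:.0f} to {high_twd_billions:.0f} billion")
```

Even the lower bound lands well above NT$60 billion, and the H100-weighted figure approaches NT$190 billion, consistent with the "hundreds of billions" scale cited for the order.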

Wistron is a standout winner in Dell’s AI server assembly order, including complete machines, motherboards, and GPU boards. Wistron has existing H100 server orders and will supply new B100 baseboard orders. Their AI server baseboard plant in Hsinchu, Taiwan will expand by Q3 this year. Wistron anticipates year-round growth in the AI server business.

2023-08-16

[News] CoWoS Production Surges at TSMC, UMC, Amkor, and ASE Hasten to Catch Up

According to a report by Taiwan’s Commercial Times, JPMorgan’s latest analysis reveals that AI demand will remain robust in the second half of the year. Encouragingly, TSMC’s CoWoS capacity expansion progress is set to exceed expectations, with production capacity projected to reach 28,000 to 30,000 wafers per month by the end of next year.

The trajectory of CoWoS capacity expansion is anticipated to accelerate notably in the latter half of 2024. This trend isn’t limited to TSMC alone; players outside TSMC are also actively expanding their CoWoS-like production capabilities to meet the soaring demands of AI applications.

Gokul Hariharan, Head of Research for JPMorgan Taiwan, highlighted that industry surveys indicate strong and unabated AI demand in the latter half of the year. Supply shortages of 20% to 30% are observed, with CoWoS capacity a key bottleneck and high-bandwidth memory (HBM) also in short supply.

JPMorgan’s estimates indicate that Nvidia will account for 60% of the overall CoWoS demand in 2023. TSMC is expected to produce around 1.8 to 1.9 million sets of H100 chips, followed by significant demand from Broadcom, AWS’ Inferentia chips, and Xilinx. Looking ahead to 2024, TSMC’s continuous capacity expansion is projected to supply Nvidia with approximately 4.1 to 4.2 million sets of H100 chips.

Apart from TSMC’s proactive expansion of CoWoS capacity, Hariharan predicts that other assembly and test facilities are also accelerating their expansion of CoWoS-like capacities.

For instance, UMC is preparing to have a monthly capacity of 5,000 to 6,000 wafers for the interposer layer by the latter half of 2024. Amkor is expected to provide a certain capacity for chip-on-wafer stacking technology, and ASE Group will offer chip-on-substrate bonding capacity. However, these additional capacities might face challenges in ramping up production for the latest products like H100, potentially focusing more on older-generation products like A100 and A800.

(Photo credit: TSMC)

2023-08-15

[News] Foxconn Secures Major NVIDIA Order, Leads in AI Chip Base Boards

According to a report by Taiwan’s Economic Daily, Foxconn Group has achieved another triumph in its AI endeavors. The company has secured orders for over 50% of NVIDIA’s HGX GPU base boards, marking the first instance of such an achievement. Adding to this success, Foxconn had previously acquired an order for another NVIDIA DGX GPU base board, solidifying its pivotal role in NVIDIA’s two most crucial AI chip base board orders.

The report highlights that in terms of supply sources, Foxconn Group stands as the exclusive provider of NVIDIA’s AI chip modules (GPU modules). As for NVIDIA’s AI motherboards, the suppliers encompass Foxconn, Quanta, Inventec, and Super Micro.

Industry experts note that DGX and HGX are currently NVIDIA’s two most essential AI servers, and Foxconn Group fulfills these large base board orders through its subsidiary, Foxconn Industrial Internet (FII). Having previously secured the DGX base board orders, FII has now added HGX base board orders amounting to more than half of the total supply, solidifying Foxconn Group’s role as a primary supplier for both of NVIDIA’s critical AI chip base board orders.

Furthermore, Foxconn’s involvement doesn’t end with AI chip modules, base boards, and motherboards. The company’s engagement extends downstream to servers and server cabinets, creating a vertically integrated approach that covers the entire AI ecosystem.

(Photo credit: Nvidia)
