News
According to a report by Taiwan’s Commercial Times, Wistron’s AI server orders are surging. Following its successful acquisition of orders for GPU baseboards used in NVIDIA’s next-generation DGX/HGX H100 series AI servers, industry sources suggest that Wistron has also secured orders for AMD’s next-generation MI300 series AI server baseboards. The earliest shipments are expected before the end of the year, which would make Wistron the first company to win baseboard orders from both major AI GPU suppliers. Wistron has declined to comment on specific products or individual customers.
The global AI server market is experiencing rapid growth. Industry estimates put global production at 500,000 units next year, with a market value exceeding one trillion NT dollars. NVIDIA still dominates the AI chip market with a share of over 90%, but with the launch of its new products, AMD is poised to capture nearly 10% of the market.
There have been recent reports of production yield issues with AMD’s MI300 series, potentially delaying the originally planned fourth-quarter shipments. Nevertheless, supply chain sources reveal that Wistron has secured exclusive large orders for MI300 series GPU baseboards and will begin supplying AMD in the fourth quarter. Meanwhile, on the NVIDIA L10 (complete system assembly) side, Wistron has recently received an urgent order from a non-U.S. CSP (Cloud Service Provider) for at least 3,000 AI servers, to be delivered in February of next year.
Supply chain analysts note that although time is tight, Wistron is not billing these customers on an NRE (non-recurring engineering) basis, indicating its confidence in order visibility and customer demand growth. The company instead aims to boost revenue and profit contributions through a “quantity-based” approach.
On another front, Wistron is accelerating shipments not only of H100 GPU baseboards for NVIDIA’s DGX/HGX architectures, but also of exclusive supply orders for the NVIDIA DGX architecture, as well as front-end L6 motherboard (SMT PCBA) orders for AI servers based on both NVIDIA and AMD architectures under the Dell brand. These orders have steadily boosted Wistron’s shipment momentum since the third quarter.
(Photo credit: NVIDIA)
Press Releases
To mitigate geopolitical risks and supply chain disruptions, US-based CSPs have been pushing for SMT production lines in Southeast Asia since late 2022. TrendForce reports that Taiwan-based server ODMs, including Quanta, Foxconn, Wistron (including Wiwynn), and Inventec, have set up production bases in countries like Thailand, Vietnam, and Malaysia. Production capacity from these regions is projected to account for 23% of the total in 2023 and to approach 50% by 2026.
TrendForce reveals that Quanta, due to its geographical ties, has established several production lines in its Thai facilities centered around Google and Celestica, aiming for optimal positioning to foster customer loyalty. Meanwhile, Foxconn has renovated its existing facilities in Hanoi, Vietnam, and uses its Wisconsin plant to accommodate customer needs. Both Wistron and Wiwynn are progressively establishing assembly plants and SMT production lines in Malaysia. Inventec’s current strategy mirrors that of Quanta, with plans to build SMT production lines in Thailand by 2024 and commence server production in late 2024.
CSPs aim to control the core supply chain as the AI server supply chain trends toward decentralization
TrendForce suggests that changes in the supply chain aren’t just about circumventing geopolitical risks; equally vital is greater control over key high-cost components, including CPUs, GPUs, and other critical materials. With rising demand for next-generation AI and large language models, supply chain stockpiling has grown each quarter. In the wake of the demand surge in 1H23, CSPs are expected to become especially cautious in their supply chain management.
Google, with its in-house developed TPU machines, possesses both the core R&D and supply chain leadership. Moreover, its production stronghold primarily revolves around its own manufacturing sites in Thailand. However, Google still relies on cooperative ODMs for human resource allocation and production scheduling, while managing other materials internally. To avoid disruptions in the supply chain, companies like Microsoft, Meta, and AWS are not only aiming for flexibility in supply chain management but are also integrating system integrators into ODM production. This approach allows for more dispersed and meticulous coordination and execution of projects.
Initially, Meta relied heavily on direct purchases of complete server systems, with Intel’s Habana system among the first to be integrated into Meta’s infrastructure. This made sense, since the CPUs for its web-type servers were often semi-custom versions from Intel, and in terms of system optimization, Meta found Habana to be the most direct and seamless solution. Notably, it was only last year that Meta began delegating parts of its Metaverse project to ODMs. This year, as part of its push into generative AI, Meta has also started adopting NVIDIA’s solutions extensively.
News
Dell, a major server brand, placed a substantial order for AI servers just before NVIDIA’s Q2 financial report. This move is reshaping Taiwan’s supply chain dynamics, favoring companies like Wistron and Lite-On.
Dell is aggressively entering the AI server market, ordering NVIDIA’s top-tier H100 chips and components. The order’s value is estimated at hundreds of billions of New Taiwan dollars this year and is projected to double next year. Wistron and Lite-On are poised to benefit, securing vital assembly and power supply orders; EMC and Chenbro are also joining the supply chain.
Dell’s AI server order, which includes assembly (covering complete machines, motherboards, GPU boards, etc.) and power supply components, stands out for its staggering value. Competition was most intense in the assembly segment, which Wistron ultimately won. In the power supply domain, industry leaders such as Delta and Lite-On secured notable shares, with Lite-On’s win in particular sparking significant industry discussion.
According to Dell’s supply chain data, AI server shipments will reach 20,000 units this year and increase further next year. These units primarily feature NVIDIA’s highest-end H100 chips, with a few integrating A100 chips. With each H100-based unit priced at around $300,000 and A100-based units exceeding $100,000, even a seemingly modest 20,000 units adds up to hundreds of billions of New Taiwan dollars.
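As a rough sanity check on that figure, the back-of-envelope sketch below multiplies the reported unit count by the cited per-server prices. The 90/10 split between H100- and A100-based systems and the exchange rate of about 31 NTD per US dollar are illustrative assumptions, not figures from the report.

```python
# Back-of-envelope estimate of the Dell AI server order value.
# Unit count and per-server prices are taken from the article;
# the H100/A100 split and the NTD/USD rate are assumptions.

total_units = 20_000           # units reportedly shipping this year
h100_price_usd = 300_000       # cited price per H100-based server
a100_price_usd = 100_000       # cited floor price per A100-based server
h100_share = 0.9               # assumption: most units are H100-based
ntd_per_usd = 31               # assumption: rough exchange rate

value_usd = total_units * (
    h100_share * h100_price_usd + (1 - h100_share) * a100_price_usd
)
value_ntd = value_usd * ntd_per_usd

print(f"~US${value_usd / 1e9:.1f} billion, or roughly NT${value_ntd / 1e9:.0f} billion")
# -> ~US$5.6 billion, or roughly NT$174 billion
```

Under these assumptions the order lands in the hundreds of billions of NT dollars, consistent with the estimate cited above.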
Wistron is a standout winner in Dell’s AI server assembly orders, covering complete machines, motherboards, and GPU boards. Wistron already holds H100 server orders and will also supply new B100 baseboard orders. Its AI server baseboard plant in Hsinchu, Taiwan, is set to expand by Q3 this year, and Wistron anticipates year-round growth in its AI server business.
News
Following Saudi Arabia’s $13 billion investment, the UK government is dedicating £100 million (about $130 million) to acquire thousands of NVIDIA AI chips, aiming to establish a strong global AI foothold. Potential beneficiaries include Wistron, GIGABYTE, Asia Vital Components, and Supermicro.
Projections foresee a $150 billion AI application opportunity within 3-5 years, propelling the semiconductor market to $1 trillion by 2030. Taiwan covers the full industry value chain. Players like TSMC, Alchip, GUC, Auras, Asia Vital Components, SUNON, EMC, Unimicron, Delta, and Lite-On are poised to gain.
Reports suggest the UK is in advanced talks with NVIDIA for up to 5,000 GPU chips, though the models remain undisclosed. The UK government recently engaged with NVIDIA, Supermicro, Intel, and other industry giants through UK Research and Innovation (UKRI) to swiftly acquire the resources needed for Prime Minister Sunak’s AI development initiative. Critics question the adequacy of the £100 million investment in NVIDIA chips, urging Chancellor Jeremy Hunt to allocate more funds to support the AI project.
NVIDIA’s high-performance GPU chips have gained widespread use in AI fields. Notably, the AI chatbot ChatGPT relies heavily on NVIDIA chips to meet its substantial computational demands, and the latest iteration of the AI language model, GPT-4, reportedly required a whopping 25,000 NVIDIA chips for training. Consequently, experts contend that the quantity of chips the UK government plans to procure is notably insufficient.
Of the UK’s £1 billion investment in supercomputing and AI, £900 million is for traditional supercomputers, leaving £50 million for AI chip procurement. The budget recently increased from £70 million to £100 million due to global chip demand.
Saudi Arabia and the UAE also ordered thousands of NVIDIA AI chips, and Saudi Arabia’s order includes at least 3,000 of the latest H100 chips. Prime Minister Sunak’s AI initiative begins next summer, aiming for a UK AI chatbot like ChatGPT and AI tools for healthcare and public services.
As emerging AI applications proliferate, countries are actively competing to bolster their AI data centers, turning the acquisition of AI-related chips into something of an arms race. Compal said, “We anticipate significant growth in the AI server sector in 2024, primarily within hyperscale data centers, with a focus on European expansion in the first half of the year and a shift toward the US market in the latter half.”