Press Releases
As 2023 draws to a close, TrendForce has revealed the tech trends across every sector. AI remains the main focus shaping the direction of the tech supply chain over the next few years. Here are the key findings:
CSPs increase AI investment, driving a 38% growth in AI server shipments by 2024
HBM3e set to drive an annual increase of 172% in HBM revenue
Rising demand for advanced packaging in 2024, the emergence of 3D IC technology
NTN set to begin small-scale commercial tests, with broader applications of the technology on the way in 2024
6G communication to begin in 2024, with satellite communication taking center stage
Innovative entrants drive cost optimization for Micro LED technology in 2024
Intensifying competition in AR/VR micro-display technologies
Advancements in material and component technologies are propelling the commercialization of gallium oxide
Solid-state batteries poised to reshape the EV battery landscape over the next decade
BEVs in 2024 rely on power conversion efficiency, driving range, and charging efficiency
Green solutions with AI simulations emerging as a linchpin for renewable energy and decarbonized manufacturing
OLED to expand across various applications, driven by innovation in foldable phones
News
Reports indicate that the United States is poised to unveil an updated set of restrictions on chip exports to China this week. Beyond the previously reported tightening measures on AI chips and equipment exports, these new regulations are expected to restrict the supply to chip design companies. The aim is to enhance control over the sale of graphic chips and advanced chip manufacturing equipment for AI applications to Chinese enterprises, with the possibility of adding Chinese chip design companies to the list of restricted entities.
As reported by Reuters and Bloomberg, U.S. authorities will require overseas manufacturers to obtain licenses to fulfill orders from these companies, and will subject Chinese firms attempting to circumvent restrictions by shipping through third-party countries to additional inspections. While the new regulations are expected to be announced this week, the possibility of delays cannot be ruled out.
In October 2022, the United States declared export restrictions on advanced semiconductor processes and chip manufacturing equipment bound for China, as a measure to prevent the development of cutting-edge technology that could potentially bolster military capabilities for geopolitical adversaries.
Following the implementation of these export bans, U.S. tech companies, such as Nvidia and Applied Materials, incurred significant losses in orders. For example, Nvidia was unable to sell its two most advanced AI chips to Chinese companies, leading to the introduction of a “downgraded” chip, the H800, designed specifically for the Chinese market to bypass existing regulations.
U.S. officials have revealed plans to introduce new guidelines for AI chips, including the restriction of certain advanced data center AI chips that currently do not fall under any limitations. They are considering the removal of “bandwidth parameters” to prevent the entry of AI chips perceived as too powerful into China.
However, they plan to introduce expanded guidelines for chip control, which may reduce communication speeds among AI chips. Slower communication could increase the complexity and cost of AI development, particularly when many chips need to be connected for training large AI models. Additionally, the U.S. will introduce “performance density parameters” to guard against potential future workarounds by companies.
Reports suggest that the United States is looking to prohibit the export of Nvidia’s H800 chip, a “downgraded” chip designed for the Chinese market to legally bypass existing regulations.
The Biden administration is also preparing for additional scrutiny of Chinese companies attempting to modify shipping and manufacturing locations in a bid to evade specific country restrictions. This rule will continue to limit sales of specific chips to Chinese companies through overseas subsidiaries and related entities, requiring authorization before exporting restricted technology to countries that could serve as intermediaries.
Furthermore, the progress in Huawei’s new smartphones has prompted the U.S. authorities to tighten control further, initiating investigations for actions against Huawei or SMIC that will be carried out independently of the new export control regulations.
In response to the anticipated expansion of U.S. export controls on Chinese companies, Chinese Foreign Ministry Spokesperson Mao Ning stated, “We have made our position clear on US restrictions of chip exports to China. The US needs to stop politicizing and weaponizing trade and tech issues and stop destabilizing global industrial and supply chains. We will closely follow the developments and firmly safeguard our rights and interests.”
(Image: Nvidia)
News
According to China Times, in response to increased visibility in AI server orders and optimistic future demand, Wiwynn and Quanta, two Taiwan-based ODM-Direct manufacturers, are accelerating the expansion of their server production lines in non-Chinese regions. Recently, there have been updates on their progress: Wiwynn has completed the first phase of its self-owned new factory in Malaysia, dedicated to L10 assembly, while Quanta has further expanded its L10 production line in California. Both are gearing up for future AI server orders.
Wiwynn’s new server assembly factory, located in the Senai Airport City in Johor, Malaysia, was officially inaugurated on the 12th, and it will provide full cabinet assembly services for large-scale data centers. Additionally, the second phase of the front-end server motherboard production line is expected to be completed and operational next year, allowing Wiwynn to offer high-end AI servers and advanced cooling technology to cloud service providers and customers in the SEA region.
While Wiwynn has experienced some slowdown in shipments and revenue in recent quarters as its customers adjusted inventory and capital expenditure, it has chosen to continue its overseas factory expansion efforts. Notably, with the addition of the new factory in Malaysia, Wiwynn’s vision of establishing a one-stop manufacturing, service, and engineering center in the APAC region is becoming a reality.
Especially as we enter Q4, the shipment of AI servers based on NVIDIA’s AI-GPU architecture is expected to boost Wiwynn’s revenue. The market predicts that after a strong fourth quarter, this momentum will carry forward into the next year.
How significant is the demand for AI servers?
TrendForce projects a dramatic surge in AI server shipments for 2023, with an estimated 1.2 million units (outfitted with GPUs, FPGAs, and ASICs) destined for markets around the world, marking robust YoY growth of 38.4%. This increase reflects the mounting demand for AI servers and chips, with AI servers poised to constitute nearly 9% of total server shipments, a figure projected to rise to 15% by 2026. TrendForce has revised its CAGR forecast for AI server shipments between 2022 and 2026 upward to an ambitious 29%.
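As a rough sanity check, the cited figures can be combined in a short Python sketch. Note the assumptions: the 2022 base is back-calculated from the 2023 estimate and the 38.4% YoY rate, and the 29% CAGR is applied uniformly; this is an illustration of how the numbers relate, not TrendForce's methodology.

```python
def cagr_projection(base_units: float, base_year: int, cagr: float, target_year: int) -> float:
    """Project shipments forward from a base year at a constant annual growth rate."""
    return base_units * (1 + cagr) ** (target_year - base_year)

shipments_2023 = 1_200_000              # estimated 2023 AI server shipments (cited)
yoy_2023 = 0.384                        # 38.4% YoY growth cited for 2023
shipments_2022 = shipments_2023 / (1 + yoy_2023)  # implied 2022 base, ~867k units

# Applying the revised 29% CAGR from the implied 2022 base out to 2026:
shipments_2026 = cagr_projection(shipments_2022, 2022, 0.29, 2026)
print(f"Implied 2022 base: {shipments_2022:,.0f} units")
print(f"Implied 2026 shipments at 29% CAGR: {shipments_2026:,.0f} units")
```

Under these assumptions, shipments would more than double from the 2023 level by 2026, consistent with AI servers growing from roughly 9% to 15% of total server shipments.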
Quanta has also been rapidly expanding its production capacity in North America and Southeast Asia in recent years. This year, in addition to establishing new facilities in Vietnam, they have recently expanded their production capacity at their California-based Fremont plant.
The Fremont plant in California has been Quanta’s primary location for its L10 production line in the United States, and it has been expanded several times in recent years. With increasing demand for data center construction from Tier 1 CSPs, Quanta’s Tennessee plant has also received multiple investments to prepare for operational needs and capacity expansion.
In August of this year, Quanta initially injected $135 million USD into its California subsidiary, which then leased a nearly 4,500 square-meter site in the Bay Area. Recently, Quanta announced a $79.6 million USD contract awarded to McLarney Construction, Inc. for three construction projects within their new factory locations.
It is expected that Quanta’s new production capacity will gradually come online, with the earliest capacity expected in 2H24, and full-scale production scheduled for 1H25. With the release of new high-end AI servers featuring the H100 architecture, Quanta has been shipping these products since August and September, contributing to its revenue growth. They aim to achieve a 20% YoY increase in server sales for 2023, with the potential for further significant growth in 2024.
News
Industry sources report that TSMC expects a significant upswing in the proportion of AI orders within its 2024 revenue, driven by increased demand for wafer starts from its six key AI customer groups in the coming year.
These six major AI customer groups encompass NVIDIA, AMD, Tesla, Apple, Intel, and international giants that develop AI chips in-house and entrust TSMC with production. Orders in this domain continue to heat up, not only benefiting TSMC but also signaling a robust year ahead for AI server manufacturers such as Quanta and Wistron.
TSMC traditionally refrains from commenting on specific customer details and remained silent on market speculation on October 10. Meanwhile, AI server manufacturers, including Quanta and Wistron, hold a positive outlook for the upcoming year, with expectations of a continued upward trend in AI-related business operations.
As the demand for AI wafer starts from key customers intensifies, market experts are keenly watching TSMC’s investor conference on October 19. There is anticipation over whether TSMC will revise its July forecast by further raising the Compound Annual Growth Rate (CAGR) of AI-related product revenue for the next five years.
TSMC categorizes server AI processors as those handling training and inference functions, including CPUs, GPUs, and AI accelerators. This category accounts for approximately 6% of TSMC’s total revenue. During TSMC’s July investor conference, it was projected that demand for AI-related products would grow at a nearly 50% CAGR over the next five years, pushing this category’s revenue share into the low-teens range.
(Photo credit: TSMC)
News
According to a report by Taiwanese media TechNews, industry sources have indicated that Microsoft has recently reduced its orders for Nvidia’s H100 graphics cards. This move suggests that the demand for H100 graphics cards in the large-scale artificial intelligence computing market has tapered off, and the frenzy of orders from previous customers is no longer as prominent.
In this wave of artificial intelligence trends, the major purchasers of related AI servers come from large-scale cloud computing service providers. Regarding Microsoft’s reported reduction in orders for Nvidia’s H100 graphics cards, market experts point to a key factor being the usage of Microsoft’s AI collaboration tool, Microsoft 365 Copilot, which did not perform as expected.
Another critical factor affecting Microsoft’s decision to reduce orders for Nvidia’s H100 graphics cards is the usage statistics of ChatGPT. Since its launch in November 2022, this generative AI application has experienced explosive growth in usage and has been a pioneer in the current artificial intelligence trend. However, ChatGPT experienced a usage decline for the first time in June 2023.
Industry insiders have noted that the reduction in Microsoft’s H100 graphics card orders was predictable. In May, both server manufacturers and direct customers stated that they would have to wait for over six months to receive Nvidia’s H100 graphics cards. However, in August, Tesla announced the deployment of a cluster of ten thousand H100 graphics cards, meaning that even those who placed orders later were able to receive sufficient chips within a few months. This indicates that the demand for H100 graphics cards, including from customers like Microsoft, has already been met, signifying that the fervent demand observed several months ago has waned.
(Photo credit: Nvidia)