News
According to a report by Taiwanese media outlet Money DJ, after establishing a stable position as a major supplier of NVIDIA GPU baseboards, Wistron has secured orders for AMD MI300 baseboards. Reliable sources indicate that Wistron has expanded its involvement beyond AMD baseboards and entered the module assembly segment.
In addition to NVIDIA and AMD, Wistron has also entered the Intel AI chip module and baseboard supply chain, giving it orders from all three major AI chip manufacturers.
The NVIDIA AI server supply chain includes GPU modules, GPU baseboards, motherboards, server systems, complete server cabinets, and more. Wistron holds a significant share in GPU baseboard supply and is also involved in server system assembly.
Currently, NVIDIA commands a 70% market share in AI chips, but other chip manufacturers are eager to compete, and both AMD and Intel have introduced corresponding solutions. While Wistron was previously rumored to have entered AMD baseboard supply, it has also ventured into AMD GPU module assembly as the sole supplier, according to reliable sources.
Regarding reports of its involvement with AMD and Intel AI chips, Wistron has declined to comment on market rumors.
(Photo credit: Google)
News
According to a report by Taiwan’s Economic Daily, AI is driving a massive demand for data transmission, and silicon photonics and Co-Packaged Optics (CPO) have become new focal points in the industry. TSMC is actively entering this field and is rumored to be collaborating with major customers such as Broadcom and NVIDIA to jointly develop these technologies. The earliest large orders are expected to come in the second half of next year.
TSMC has already assembled a research and development team of over 200 people, aiming to seize the business opportunities in the emerging market of ultra-high-speed computing chips based on silicon photonics, which are expected to arrive gradually starting next year.
Regarding these rumors, TSMC has stated that they do not comment on customer and product situations. However, TSMC has a high regard for silicon photonics technology. TSMC Vice President Douglas Yu recently stated publicly, “If we can provide a good silicon photonics integration system, it can address two key issues: energy efficiency and AI computing capability. This could be a paradigm shift. We may be at the beginning of a new era.”
Silicon photonics was a hot topic at the recent SEMICON Taiwan 2023, where major semiconductor players such as TSMC and ASE gave related keynote speeches. The surge in interest stems mainly from the proliferation of AI applications, which has raised the question of how to transmit data faster and reduce signal latency. Traditional electrical signal transmission no longer meets these demands, and silicon photonics, which converts electrical signals into faster optical transmission, has become the industry's highly anticipated next-generation technology for boosting high-volume data transmission speeds.
Industry reports suggest that TSMC is currently collaborating with major customers like Broadcom and NVIDIA to develop new products in the field of silicon photonics and Co-Packaged Optics. The manufacturing process technology ranges from 45 nanometers to 7 nanometers, with mass production slated for 2025, at which point it is expected to bring new business opportunities to TSMC.
Industry sources reveal that TSMC has already organized a research and development team of approximately 200 people. In the future, silicon photonics is expected to be incorporated into CPUs, GPUs, and other computing processors. By replacing internal electrical transmission lines with faster optical links, computing capability is expected to increase by several tens of times compared with existing processors. The technology is still at the research and academic-paper stage, but the industry has high hopes that it will become a new driver of explosive growth for TSMC's operations in the coming years.
(Photo credit: Google)
Insights
Beyond the competition over specifications, desktop computers continue to possess numerous irreplaceable advantages, including ease of upgrading, superior heat dissipation, and robust, durable construction that extends their usable lifespan. As a result, desktop computers maintain steadfast market demand. Because components are easy to replace, expandability remains a significant advantage for PC gamers. For creators and business professionals, desktop computers satisfy extensive external connectivity needs while offering superior heat dissipation. Furthermore, given the size limitations of laptops, desktop computers continue to provide a more comfortable user experience during prolonged use.
Windows 10 Exit and Hardware Updates Set to Drive 2024 Upgrade Trend
In the latter half of 2022, brands and retailers aggressively cleared their inventories, a trend that continued into 2023 and prolonged a challenging period for the PC market. In recent years the PC market has approached saturation, making it difficult to drive growth through sheer volume, so brand manufacturers have focused on business, gaming, and creator products. The PC, however, is inherently a cyclical end market. With the Windows 10 operating system set to retire in October 2025 and Windows 11 imposing heightened hardware requirements, products released before 2017 will need to be replaced. In addition, Intel, AMD, and NVIDIA are expected to gradually unveil new products in the latter half of 2023. This, coupled with the demands of the new operating system, is expected to trigger a noticeable upgrade wave among consumers, ultimately providing a glimmer of hope for the PC market. (Image credit: Unsplash_Alienwaregaming)
In-Depth Analyses
In its FY2Q24 earnings report, NVIDIA disclosed that the U.S. government had imposed controls on its AI chips destined for the Middle East. However, on August 31, 2023, the U.S. Department of Commerce stated that it had “not prohibited the sale of chips to the Middle East” and declined to comment on whether new requirements had been imposed on specific U.S. companies. Neither NVIDIA nor AMD has responded to the issue.
TrendForce’s analysis:
In NVIDIA’s FY2Q24 earnings report, it mentioned, “During the second quarter of fiscal year 2024, the USG informed us of an additional licensing requirement for a subset of A100 and H100 products destined to certain customers and other regions, including some countries in the Middle East.” It is speculated that the U.S. is trying to prevent high-speed AI chips from flowing into the Chinese market via the Middle East. This has led to controls on the export of AI chips to the Middle East.
Since August 2022, the U.S. has imposed controls on NVIDIA's A100 and H100, AMD's MI100 and MI200, and other AI-related GPUs, restricting exports to China of AI chips with bidirectional transfer rates exceeding 600 GB/s. Saudi Arabia had already signed a strategic partnership with China in 2022 for cooperation in the digital economy, including AI, advanced computing, and quantum computing technologies. Additionally, the United Arab Emirates has expressed interest in AI cooperation with China. Recent reports of Saudi Arabia heavily acquiring NVIDIA's AI chips have therefore raised concerns in the U.S.
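The bandwidth criterion above can be pictured with a minimal sketch. The 600 GB/s threshold follows the figure cited here; the per-chip numbers are publicly reported bidirectional NVLink bandwidths used purely as illustrative assumptions (including the A800, NVIDIA's reduced-bandwidth China variant), and the simple greater-or-equal check is not the regulation's exact legal wording.

```python
# Illustrative only: a toy check of the interconnect-bandwidth criterion described above.
# Threshold and chip values are assumptions for illustration, not regulatory text.

THRESHOLD_GB_S = 600  # bidirectional transfer rate cited in the export-control reports

# Assumed bidirectional interconnect bandwidths in GB/s (publicly reported figures)
CHIPS = {
    "NVIDIA A100": 600,
    "NVIDIA H100": 900,
    "NVIDIA A800": 400,  # China-specific variant designed to fall below the threshold
}

def meets_control_threshold(bandwidth_gb_s: float, threshold: float = THRESHOLD_GB_S) -> bool:
    """Return True if the chip's interconnect bandwidth reaches the cited threshold."""
    return bandwidth_gb_s >= threshold

for name, bw in CHIPS.items():
    status = "subject to controls" if meets_control_threshold(bw) else "below threshold"
    print(f"{name}: {bw} GB/s -> {status}")
```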
Affected by U.S. sanctions, Chinese companies are vigorously developing AI chips. iFlytek is planning to launch a new general-purpose LLM (Large Language Model) in October 2023, and the AI chip Ascend 910B, co-developed with Huawei, is expected to hit the market in the second half of 2024, with performance claimed to rival that of NVIDIA A100. In fact, Huawei had already introduced the Ascend 910, which matched the performance of NVIDIA’s V100, in 2019. Considering Huawei’s Kirin 9000s, featured in the flagship smartphone Mate 60 Pro released in August 2023, it is highly likely that Huawei can produce products with performance comparable to A100.
However, it is important to note that NVIDIA announced the A100 back in 2020, meaning that even if Huawei successfully launches its new AI chip, it will already be four years behind NVIDIA. Given the expected 7nm process for Huawei's Ascend 910B and NVIDIA's plan to release the 3nm-based Blackwell architecture GPU B100 in the second half of 2024, Huawei will also lag two generations behind in chip fabrication technology. With LLM parameter counts doubling annually, the competitiveness of Huawei's new AI chip remains to be seen.
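To make that last point concrete, here is a back-of-the-envelope sketch under the article's assumption that LLM parameter counts double annually: a chip arriving four years after the A100 faces models roughly 2^4 = 16 times larger than those the A100 was built for.

```python
# Back-of-the-envelope sketch of the scaling argument above, under the article's assumption
# that LLM parameter counts double every year. Dates follow the article's timeline.

def model_growth_factor(years_behind: int, doublings_per_year: float = 1.0) -> float:
    """Factor by which LLM parameter counts grow over the stated lag, given annual doubling."""
    return 2.0 ** (doublings_per_year * years_behind)

gap_years = 2024 - 2020  # Ascend 910B (2H 2024, per the article) vs. A100 (announced 2020)
print(f"{gap_years}-year lag -> models ~{model_growth_factor(gap_years):.0f}x larger")
# Expected output: 4-year lag -> models ~16x larger
```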
Despite the active development of AI chips by Chinese IC design houses, NVIDIA's AI chips remain the preferred choice of Chinese cloud companies for training LLMs. Looking at the leading Chinese AI chip company, Cambricon, its revenue for the first half of 2023 was only CNY 114 million, a YoY decrease of 34%. While being added to the U.S. Entity List was a major reason for the decline, NVIDIA's dominance of the vast Chinese AI market is also a contributing factor. NVIDIA's share of the Chinese GPU market for AI training is estimated to have exceeded 95% in the first half of 2023, and in the second quarter of 2023 the Chinese market accounted for 20-25% of NVIDIA's Data Center segment revenue.
The main reason is that the Chinese AI ecosystem remains fragmented and struggles to compete with NVIDIA's CUDA ecosystem. Chinese companies are therefore actively investing in software development, but building an ecosystem attractive enough to win over Chinese CSPs in the short term remains challenging. Consequently, NVIDIA is expected to continue dominating the Chinese market for the next 2-3 years.
(Photo credit: NVIDIA)
News
According to Taiwan's Liberty Times, in response to global supply chain restructuring, electronics manufacturers have in recent years implemented a "China + N" strategy, allocating shipments to customers in different regions. Among them, Inventec continues to strengthen its server production line in Thailand and plans to enter the NVIDIA B200 AI server segment in the second half of next year.
Currently, Inventec's overall server production capacity is distributed as follows: Taiwan 25%, China 25%, and the Czech Republic 15%, while Mexico, with new capacity coming online this quarter, is expected to reach 35%. Next year's capital expenditure is anticipated to increase by 25% to NTD 10 billion, primarily allocated to expanding the server production line in Thailand. The company has already started receiving orders for NVIDIA's B100 AI server water-cooling project and plans to enter the B200 product segment in the second half of next year.
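As a quick sanity check on these figures, a minimal sketch: the regional capacity shares sum to 100%, and a 25% increase to NTD 10 billion implies current-year capital expenditure of roughly NTD 8 billion (a back-calculation, not a number Inventec has reported).

```python
# Sanity check of the reported figures; the implied current-year capex is a back-calculation.

capacity_share = {"Taiwan": 0.25, "China": 0.25, "Czech Republic": 0.15, "Mexico": 0.35}
assert abs(sum(capacity_share.values()) - 1.0) < 1e-9  # regional shares cover all capacity

next_year_capex_ntd_bn = 10.0  # reported plan for next year, in NTD billions
yoy_increase = 0.25            # reported 25% increase
implied_this_year = next_year_capex_ntd_bn / (1 + yoy_increase)
print(f"Implied current-year capex: ~NTD {implied_this_year:.0f} billion")  # ~NTD 8 billion
```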
Inventec's statistics show that its server motherboard shipments account for 20% of the global total. This year the focus has been on shipping H100- and A100-based training AI servers, while next year the emphasis will shift to L40S-based inference AI servers. The overall number of projects next year is expected to surpass this year's.
(Photo credit: Google)