AI chip


2023-10-13

[News] Explosive AI Server Demand Ignites Aggressive Expansion by Wiwynn and Quanta

According to China Times, in response to increased visibility of AI server orders and an optimistic demand outlook, two Taiwan-based ODM-Direct makers, Wiwynn and Quanta, are accelerating the expansion of their server production lines outside China. Both have recently reported progress: Wiwynn has completed the first phase of its self-owned new factory in Malaysia, dedicated to L10 production, while Quanta has further expanded its L10 production line in California, both gearing up for future AI server orders.

Wiwynn’s new server assembly factory, located in Senai Airport City in Johor, Malaysia, was officially inaugurated on the 12th and will provide full cabinet assembly services for large-scale data centers. Additionally, the second phase of the front-end server motherboard production line is expected to be completed and operational next year, allowing Wiwynn to offer high-end AI servers and advanced cooling technology to cloud service providers and customers in the SEA region.

While Wiwynn has experienced some slowdown in shipments and revenue in recent quarters as its customers adjusted inventory and CAPEX, it has chosen to continue its overseas factory expansion. Notably, with the addition of the new factory in Malaysia, Wiwynn’s vision of establishing a one-stop manufacturing, service, and engineering center in the APAC region is becoming a reality.

Especially as we enter Q4, the shipment of AI servers based on NVIDIA’s AI-GPU architecture is expected to boost Wiwynn’s revenue. The market predicts that after a strong fourth quarter, this momentum will carry forward into the next year.

How significant is the demand for AI servers?

According to TrendForce projections, AI server shipments will surge dramatically in 2023, with an estimated 1.2 million units outfitted with GPUs, FPGAs, and ASICs destined for markets around the world, marking robust YoY growth of 38.4%. This aligns with the mounting demand for AI servers and chips: AI servers are poised to constitute nearly 9% of total server shipments this year, a figure projected to rise to 15% by 2026. TrendForce has accordingly revised its CAGR forecast for AI server shipments between 2022 and 2026 upward to an ambitious 29%.
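The figures above can be cross-checked against one another. The following sketch derives the implied 2022 base and 2026 shipment volume from the stated YoY growth and CAGR, then uses the 9%/15% share projections to back out the implied overall server-market growth; the derived totals are assumptions computed from the article's percentages, not TrendForce's own published numbers.

```python
# Consistency check of the TrendForce projections quoted in the article.
# Inputs taken directly from the text:
shipments_2023 = 1.2e6   # AI server units shipped in 2023 (estimate)
yoy_2023 = 0.384         # 38.4% YoY growth in 2023
cagr_2022_2026 = 0.29    # revised CAGR forecast, 2022-2026
share_2023 = 0.09        # AI servers as ~9% of total server shipments in 2023
share_2026 = 0.15        # projected share by 2026

# Back out the 2022 base and project forward four years at the stated CAGR.
shipments_2022 = shipments_2023 / (1 + yoy_2023)
shipments_2026 = shipments_2022 * (1 + cagr_2022_2026) ** 4
print(f"Implied 2026 AI server shipments: {shipments_2026 / 1e6:.2f}M units")

# Use the share figures to infer total server-market sizes and its growth rate.
total_2023 = shipments_2023 / share_2023
total_2026 = shipments_2026 / share_2026
market_cagr = (total_2026 / total_2023) ** (1 / 3) - 1
print(f"Implied overall server market CAGR, 2023-2026: {market_cagr:.1%}")
```

Running this yields roughly 2.4M AI server units in 2026 and a modest single-digit implied growth rate for the overall server market, which is consistent with the article's picture of AI servers growing much faster than servers as a whole.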

Quanta has also been rapidly expanding its production capacity in North America and Southeast Asia in recent years. This year, in addition to establishing new facilities in Vietnam, they have recently expanded their production capacity at their California-based Fremont plant.

The Fremont plant in California has been Quanta’s primary location for its L10 production line in the United States and has been expanded several times in recent years. With Tier 1 CSPs ramping up data center construction, Quanta’s Tennessee plant has also received multiple investments to prepare for operational needs and capacity expansion.

In August of this year, Quanta injected $135 million USD into its California subsidiary, which then leased a nearly 4,500 square-meter site in the Bay Area. More recently, Quanta announced a $79.6 million USD contract awarded to McLarney Construction, Inc. for three construction projects at its new factory locations.

It is expected that Quanta’s new production capacity will gradually come online, with the earliest capacity expected in 2H24, and full-scale production scheduled for 1H25. With the release of new high-end AI servers featuring the H100 architecture, Quanta has been shipping these products since August and September, contributing to its revenue growth. They aim to achieve a 20% YoY increase in server sales for 2023, with the potential for further significant growth in 2024.

2023-10-11

[News] TSMC’s AI Orders Set for a Breakout Year in 2023 – Quanta, Wistron, and More Joining the Ride

In the industry buzz, it’s reported that TSMC expects a significant upswing in the proportion of AI orders within its 2024 revenue, driven by the increased demand for wafer starts from its six key AI customer groups in the coming year.

These six major AI customer groups encompass NVIDIA, AMD, Tesla, Apple, Intel, and international giants developing in-house AI chips, all entrusting TSMC with production. Orders in this domain continue to heat up, not only benefiting TSMC but also signaling a robust year ahead for AI server manufacturers like Quanta and Wistron.

TSMC traditionally refrains from commenting on specific customers and remained silent on market speculation on October 10th. Meanwhile, AI server manufacturers, including Quanta and Wistron, hold a positive outlook for the upcoming year, expecting a continued upward trend in AI-related business.

As the demand for AI wafer starts from key customers intensifies, market experts are keenly watching TSMC’s investor conference on October 19th, anticipating whether TSMC will revise its July forecast by further raising the compound annual growth rate (CAGR) of AI-related product revenue for the next five years.

TSMC categorizes server AI processors as those handling training and inference functions, including CPUs, GPUs, and AI accelerators. This category accounts for approximately 6% of TSMC’s total revenue. At its July investor conference, TSMC projected that demand for AI-related products would grow at a nearly 50% CAGR over the next five years, pushing their revenue share into the low-teens range.

(Photo credit: TSMC)

2023-09-20

[News] Has the AI Chip Buying Frenzy Cooled Off? Microsoft Rumored to Decrease Nvidia H100 Orders

According to a report by Taiwanese media TechNews, industry sources have indicated that Microsoft has recently reduced its orders for Nvidia’s H100 graphics cards. This move suggests that the demand for H100 graphics cards in the large-scale artificial intelligence computing market has tapered off, and the frenzy of orders from previous customers is no longer as prominent.

In this wave of artificial intelligence trends, the major purchasers of related AI servers come from large-scale cloud computing service providers. Regarding Microsoft’s reported reduction in orders for Nvidia’s H100 graphics cards, market experts point to a key factor being the usage of Microsoft’s AI collaboration tool, Microsoft 365 Copilot, which did not perform as expected.

Another critical factor affecting Microsoft’s decision to reduce orders for Nvidia’s H100 graphics cards is the usage statistics of ChatGPT. Since its launch in November 2022, this generative AI application has experienced explosive growth in usage and has been a pioneer in the current artificial intelligence trend. However, ChatGPT experienced a usage decline for the first time in June 2023.

Industry insiders have noted that the reduction in Microsoft’s H100 graphics card orders was predictable. In May, both server manufacturers and direct customers stated that they would have to wait for over six months to receive Nvidia’s H100 graphics cards. However, in August, Tesla announced the deployment of a cluster of ten thousand H100 graphics cards, meaning that even those who placed orders later were able to receive sufficient chips within a few months. This indicates that the demand for H100 graphics cards, including from customers like Microsoft, has already been met, signifying that the fervent demand observed several months ago has waned.

(Photo credit: Nvidia)

2023-09-08

[News] PSMC to Launch Affordable AI Chips Next Year

According to a report by Taiwan’s Commercial Times, the semiconductor market is expected to slow down this year. PSMC Chairman Frank Huang stated that it is estimated that the current wave of semiconductor inventory clearance will not be completed until the end of the first quarter of next year, and the overall market conditions for next year are still not expected to rebound strongly.

When asked about the mature wafer fabs in mainland China aggressively capturing market share this year with low prices, Frank Huang emphasized that this was anticipated. He further stated that PSMC is planning to launch affordable AI chips primarily targeting the consumer market next year, completely differentiating them from Nvidia’s high-priced products. Given the large scale of the consumer market, he expressed optimism regarding future shipment growth.

Huang emphasized that PSMC’s planned AI chips are like miniature computers. International chip manufacturers currently offer AI chips priced as high as $200,000 per unit, making widespread adoption in the consumer market impossible. The AI chips PSMC plans to launch next year will therefore carry lower prices and be tailored specifically for the massive consumer market. He cited examples such as affordable AI features integrated into toys and household appliances: toys, for instance, will be able to recognize their owners and engage in voice interactions.

Huang mentioned that, because they are targeting affordability and mass appeal, these AI chips will be produced using a 28-nanometer process and are expected to contribute to revenue through formal shipments next year. With a focus on the consumer market, Huang is optimistic about the future shipments and business contributions of these AI chips.

2023-09-01

[News] Inventec’s AI Strategy Will Boost Growth of Both NVIDIA’s and AMD’s AI Server Chips

According to Liberty Times Net, Inventec, a prominent player in the realm of digital technology, is making significant strides in research and development across various domains, including artificial intelligence, automotive electronics, 5G, and the metaverse. The company has recently introduced a new all-aluminum liquid-cooled module for its general-purpose graphics processing units (GPGPU) powered by NVIDIA’s A100 chips. Additionally, this innovative technology is being applied to AI server products featuring AMD’s 4th Gen EPYC dual processors, marking a significant step towards the AI revolution.

Inventec has announced that its Rhyperior general-purpose graphics processors previously offered two cooling solutions: air cooling, and a hybrid of air and liquid cooling. The new all-aluminum liquid-cooled module not only reduces material costs by more than 30% compared with traditional copper cooling plates, but also supports 8 graphics processors (GPUs) and includes 6 NVIDIA NVSwitch nodes. This open-loop cooling system eliminates the need for external refrigeration units and cuts fan power consumption by approximately 50%.

Moreover, Inventec’s AI server product, the K885G6, equipped with AMD’s 4th Gen EPYC dual processors, has demonstrated a significant reduction in data center air conditioning energy consumption of approximately 40% after implementing this new cooling solution. The use of water as a coolant, rather than environmentally damaging and costlier chemical fluids, further enhances the product’s appeal, as it can support a variety of hardware configurations to meet the diverse needs of AI customers.

Inventec’s new facility in Mexico has commenced mass production, with plans to begin supplying high-end NVIDIA AI chips, specifically the H100 motherboards, in September. They are poised to increase production further in the fourth quarter. Additionally, in the coming year, the company is set to release more Application-Specific Integrated Circuit (ASIC) products, alongside new offerings from NVIDIA and AMD. Orders for server system assembly from U.S. customers (L11 assembly line) are steadily growing. The management team anticipates showcasing their innovations at the Taiwan Excellence Exhibition in Dongguan, China, starting on October 7th, as they continue to deepen their collaboration with international customers.

(Source: https://ec.ltn.com.tw/article/breakingnews/4412765)