Nvidia


2023-09-05

[News] Taking NVIDIA Server Orders, Inventec Expands Production in Thailand

According to Taiwan’s Liberty Times, in response to global supply chain restructuring, electronics manufacturers have been implementing a “China+N” strategy in recent years, tailoring shipments to customers in different regions. Among them, Inventec continues to strengthen its server production line in Thailand and plans to enter the NVIDIA B200 AI server segment in the second half of next year.

Inventec’s overall server production capacity is currently distributed as follows: Taiwan 25%, China 25%, and the Czech Republic 15%, while Mexico, with new capacity opening this quarter, is expected to account for 35%. Capital expenditure next year is anticipated to increase by 25% to NT$10 billion, primarily allocated to expanding the server production line in Thailand. The company has already begun receiving orders for NVIDIA’s B100 AI server water-cooling project and plans to enter the B200 product segment in the second half of next year.

Inventec’s statistics show that its server motherboard shipments account for 20% of the global total. This year, shipments have focused on H100- and A100-based training AI servers, while next year the emphasis will shift to L40S inference AI servers. The overall project volume for next year is expected to surpass this year’s.

(Photo credit: Google)

2023-09-04

[News] Wistron Secures Both NVIDIA and AMD AI Server Orders

According to a report by Taiwan’s Commercial Times, Wistron’s AI server orders are surging. Following its acquisition of orders for GPU baseboards in NVIDIA’s next-generation DGX/HGX H100 series AI servers, industry sources suggest that Wistron has also secured orders for AMD’s next-generation MI300 series AI server baseboards, with the earliest shipments expected before the end of the year. This would make Wistron the first company to win baseboard orders from both major AI chip providers. Wistron has declined to comment on specific products or individual customers.

The global AI server market is growing rapidly, with industry estimates putting production at 500,000 units next year and market value at over one trillion NT dollars. NVIDIA still dominates the AI chip market with a share of over 90%, but with the launch of AMD’s new products, AMD is poised to capture nearly 10% of the market.

There have been recent reports of production yield issues with AMD’s MI300 series, potentially delaying the originally planned fourth-quarter shipments. Nevertheless, supply chain sources reveal that Wistron has secured exclusive large orders for MI300 series GPU baseboards and will begin supplying AMD in the fourth quarter. Meanwhile, on the NVIDIA L10 (complete server assembly) front, Wistron has recently received an urgent order from a non-U.S. CSP (Cloud Service Provider) for at least 3,000 AI servers, to be delivered in February of next year.

Supply chain analysts note that despite the tight timeline, Wistron is not billing its customers for NRE (Non-Recurring Engineering) fees, indicating its confidence in order visibility and customer demand growth. The company aims to boost revenue and profit contributions through a volume-based approach.

On another front, Wistron is accelerating shipments not only of H100 GPU baseboards for NVIDIA’s DGX/HGX architectures, but also of exclusive supply orders for the NVIDIA DGX architecture and of front-end L6 motherboard (SMT PCBA) orders for Dell-branded AI servers on both NVIDIA and AMD architectures. These orders have been steadily building Wistron’s shipment momentum since the third quarter.

(Photo credit: NVIDIA)

2023-09-01

[News] Inventec’s AI Strategy to Boost Growth for Both NVIDIA’s and AMD’s AI Server Chips

According to Liberty Times Net, Inventec, a prominent player in the realm of digital technology, is making significant strides in research and development across various domains, including artificial intelligence, automotive electronics, 5G, and the metaverse. The company has recently introduced a new all-aluminum liquid-cooled module for its general-purpose graphics processing units (GPGPU) powered by NVIDIA’s A100 chips. Additionally, this innovative technology is being applied to AI server products featuring AMD’s 4th Gen EPYC dual processors, marking a significant step towards the AI revolution.

Inventec states that its Rhyperior general-purpose graphics processing system previously offered two cooling options: air cooling, and air cooling combined with liquid cooling. The new all-aluminum liquid-cooled module reduces material costs by more than 30% compared with traditional copper cooling plates, supports 8 GPUs, and includes 6 NVIDIA NVSwitch nodes. The open-loop cooling design eliminates the need for external refrigeration units and cuts fan power consumption by approximately 50%.

Moreover, Inventec’s AI server product, the K885G6, equipped with AMD’s 4th Gen EPYC dual processors, has demonstrated a significant reduction in data center air conditioning energy consumption of approximately 40% after implementing this new cooling solution. The use of water as a coolant, rather than environmentally damaging and costlier chemical fluids, further enhances the product’s appeal, as it can support a variety of hardware configurations to meet the diverse needs of AI customers.

Inventec’s new facility in Mexico has commenced mass production, with plans to begin supplying high-end NVIDIA AI chips, specifically the H100 motherboards, in September. They are poised to increase production further in the fourth quarter. Additionally, in the coming year, the company is set to release more Application-Specific Integrated Circuit (ASIC) products, alongside new offerings from NVIDIA and AMD. Orders for server system assembly from U.S. customers (L11 assembly line) are steadily growing. The management team anticipates showcasing their innovations at the Taiwan Excellence Exhibition in Dongguan, China, starting on October 7th, as they continue to deepen their collaboration with international customers.

(Source: https://ec.ltn.com.tw/article/breakingnews/4412765)

2023-09-01

[News] With US Expanding AI Chip Control, the Next Chip Buying Frenzy Looms

According to a report by Taiwan’s Commercial Times, NVIDIA is facing repercussions from US chip restrictions, which now control the export of high-end AI GPU chips to certain countries in the Middle East. NVIDIA claims these controls won’t have an immediate impact on its performance, and industry insiders in the Taiwanese supply chain believe the initial effects are minimal. However, judging from the earlier export ban on China, the measures could trigger another wave of preemptive stockpiling.

Industry sources from the supply chain note that following the US restrictions on exporting chips to China last year, the purchasing power of Chinese clients increased rather than decreased, resulting in a surge in demand for secondary-level and below chip products, setting off a wave of stockpiling.

Take NVIDIA’s previous generation A100 chip for instance. After the US implemented export restrictions on China, NVIDIA replaced it with the lower-tier A800 chip, which quickly became a sought-after product in the Chinese market, driving prices to surge. It’s reported that the A800 has seen a cumulative price increase of 60% from the start of the year to late August, and it remains one of the primary products ordered by major Chinese CSPs.

Furthermore, the recently launched L40S GPU server by NVIDIA in August has become a market focal point. While it may not match the performance of systems like HGX H100/A100 in large-scale AI algorithm training, it outperforms the A100 in AI inference or small-scale AI algorithm training. As the L40S GPU is positioned in the mid-to-low range, it is currently not included in the list of chips subject to export controls to China.

Supply chain insiders suggest that even if the controls on exporting AI chips to the Middle East are tightened further, local clients are likely to turn to alternatives such as the A800 and L40S. However, with uncertainty over whether the US will expand the range of controlled chip categories, this could trigger another wave of purchasing and stockpiling.

The primary direct beneficiaries in this scenario are still the chip manufacturers. Within the Taiwanese supply chain, Wistron, which supplies chip brands in the AI server front-end GPU board sector, stands to gain. Taiwanese supply chain companies producing A800 series AI servers and the upcoming L40S GPU servers, such as Quanta, Inventec, Gigabyte, and ASUS, have the opportunity to benefit as well.

(Photo credit: NVIDIA)

2023-08-31

[News] Asus AI Servers Swiftly Seize Business Opportunities

According to Chinatimes, Asus announced on the 30th of this month the release of AI servers equipped with NVIDIA’s L40S GPUs, which are now available for order. NVIDIA introduced the L40S in August to address the shortage of H100 and A100 GPUs. Asus responded by unveiling AI server products in less than two weeks, showcasing its optimism about the imminent surge of AI applications and its eagerness to seize the opportunity.

Solid AI Capabilities of Asus Group

Apart from being among the first manufacturers to introduce the NVIDIA OVX server system, Asus has leveraged resources from its subsidiaries, such as TaiSmart and Asus Cloud, to establish a formidable AI infrastructure. This not only involves in-house innovation like the Large Language Model (LLM) technology but also extends to providing AI computing power and enterprise-level generative AI applications. These strengths position Asus as one of the few all-encompassing providers of generative AI solutions.

Projected Surge in Server Business

Regarding its server business, Asus projects a compound annual growth rate of at least 40% through 2027, targeting fivefold growth over five years. In particular, the data center server business, which caters primarily to Cloud Service Providers (CSPs), is expected to grow tenfold over the same period, driven by the adoption of AI server products.

Asus’s CEO recently emphasized that the company moved quickly into AI server development and collaborated with NVIDIA from the outset. While its product lineup may be more streamlined than those of other OEM/ODM manufacturers, Asus secured numerous GPU orders ahead of the surge in AI server demand. The company is optimistic about shipping momentum and order visibility for the new generation of AI servers in the second half of the year.

Embracing NVIDIA’s Versatile L40S GPU

The NVIDIA L40S GPU, built on the Ada Lovelace architecture, stands out as one of the most powerful general-purpose GPUs in data centers. It offers groundbreaking multi-workload computations for large language model inference, training, graphics, and image processing. Not only does it facilitate rapid hardware solution deployment, but it also holds significance due to the current scarcity of higher-tier H100 and A100 GPUs, which have reached allocation stages. Consequently, businesses seeking to repurpose idle data centers are anticipated to shift their focus toward AI servers featuring the L40S GPU.

Asus’s newly introduced L40S GPU servers include the ESC8000-E11/ESC4000-E11 models with Intel Xeon processors and the ESC8000A-E12/ESC4000A-E12 models with AMD EPYC processors, configurable with four or up to eight NVIDIA L40S GPUs. These configurations help enterprises accelerate training, fine-tuning, and inference workloads for AI model creation, and position Asus’s platforms as a preferred choice for multi-modal generative AI applications.

(Source: https://www.chinatimes.com/newspapers/20230831000158-260202?chdtv)
