Articles


2023-09-01

[News] With US Expanding AI Chip Control, the Next Chip Buying Frenzy Looms

According to a report by Taiwan’s Commercial Times, NVIDIA is facing repercussions from US chip restrictions, which now extend export controls on high-end AI GPU chips to certain countries in the Middle East. NVIDIA claims these controls will not have an immediate impact on its performance, and industry insiders in the Taiwanese supply chain believe the initial effects are minimal. Judging from the earlier export bans on China, however, the move could trigger another wave of preemptive stockpiling.

Industry sources from the supply chain note that after the US restricted chip exports to China last year, purchasing by Chinese clients increased rather than decreased, with demand surging for second-tier and lower chip products and setting off a wave of stockpiling.

Take NVIDIA’s previous generation A100 chip for instance. After the US implemented export restrictions on China, NVIDIA replaced it with the lower-tier A800 chip, which quickly became a sought-after product in the Chinese market, driving prices to surge. It’s reported that the A800 has seen a cumulative price increase of 60% from the start of the year to late August, and it remains one of the primary products ordered by major Chinese CSPs.

Furthermore, the L40S GPU server that NVIDIA launched in August has become a market focal point. While it may not match systems such as the HGX H100/A100 in large-scale AI training, it outperforms the A100 in AI inference and small-scale AI training. Because the L40S GPU is positioned in the mid-to-low range, it is currently not on the list of chips subject to export controls to China.

Supply chain insiders suggest that even if export controls on AI chips to the Middle East are tightened further, local clients are likely to turn to alternatives such as the A800 and L40S. With uncertainty over whether the US will broaden the range of controlled chips, however, this could still trigger another wave of purchasing and stockpiling.

The primary direct beneficiaries in this scenario are still the chip manufacturers. Within the Taiwanese supply chain, Wistron, which supplies GPU boards to chip brands in the AI server sector, stands to gain. Taiwanese companies producing A800-series AI servers and the upcoming L40S GPU servers, such as Quanta, Inventec, Gigabyte, and ASUS, also have the opportunity to benefit.

(Photo credit: NVIDIA)

2023-08-31

[News] Asus AI Servers Swiftly Seize Business Opportunities

According to a report from Chinatimes, Asus announced on August 30 the release of AI servers equipped with NVIDIA’s L40S GPUs, which are now available for order. NVIDIA introduced the L40S GPU in August to address the shortage of H100 and A100 GPUs. Asus responded by unveiling AI server products in less than two weeks, reflecting its optimism about an imminent surge in AI applications and its eagerness to seize the opportunity.

Solid AI Capabilities of Asus Group

Apart from being among the first manufacturers to introduce the NVIDIA OVX server system, Asus has leveraged resources from subsidiaries such as TaiSmart and Asus Cloud to establish a formidable AI infrastructure. This spans in-house innovation, including large language model (LLM) technology, as well as AI computing power and enterprise-level generative AI applications. These strengths position Asus as one of the few end-to-end providers of generative AI solutions.

Projected Surge in Server Business

Regarding its server business, Asus envisions a compound annual growth rate of at least 40% through 2027, with a goal of fivefold growth over five years. In particular, the data center server business, which caters primarily to cloud service providers (CSPs), is expected to grow tenfold within the same timeframe, driven by the adoption of AI server products.
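For reference, a quick back-of-the-envelope check (the five-year horizon here is an assumption taken from the wording above, not an Asus disclosure) shows how these multiples translate into annual growth rates:

# Implied compound annual growth rates from the stated five-year multiples
implied_cagr_server = 5 ** (1 / 5) - 1    # fivefold over five years  -> ~38% per year
implied_cagr_csp    = 10 ** (1 / 5) - 1   # tenfold over five years   -> ~58% per year
print(f"Overall server business: ~{implied_cagr_server:.0%} CAGR")   # consistent with 'at least 40%'
print(f"Data center (CSP) business: ~{implied_cagr_csp:.0%} CAGR")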

Asus’s CEO recently emphasized that the company moved quickly into AI server development and collaborated with NVIDIA from the outset. While its product lineup may be more streamlined than those of other OEM/ODM manufacturers, Asus secured numerous GPU orders ahead of the surge in AI server demand. The company is optimistic about shipping momentum and order visibility for its new generation of AI servers in the second half of the year.

Embracing NVIDIA’s Versatile L40S GPU

The NVIDIA L40S GPU, built on the Ada Lovelace architecture, stands out as one of the most powerful general-purpose GPUs for data centers, handling multiple workloads including large language model inference and training, graphics, and image processing. It not only enables rapid deployment of hardware solutions but also matters because the higher-tier H100 and A100 GPUs are scarce and now on allocation. Consequently, businesses looking to repurpose idle data centers are expected to shift their focus toward AI servers featuring the L40S GPU.

Asus’s newly introduced L40S GPU servers include the ESC8000-E11/ESC4000-E11 models with Intel Xeon processors and the ESC8000A-E12/ESC4000A-E12 models with AMD EPYC processors. Depending on the model, these servers can be configured with up to four or up to eight NVIDIA L40S GPUs. This configuration helps enterprises boost training, fine-tuning, and inference workloads for AI model development, and positions Asus’s platforms as a preferred choice for multi-modal generative AI applications.

(Source: https://www.chinatimes.com/newspapers/20230831000158-260202?chdtv)

2023-08-31

[News] Continued Reduction in NAND Flash Production, Price Recovery Emerging

According to a report from Taiwan’s TechNews, NAND Flash prices are gradually recovering as suppliers continue to cut production. However, restoring a healthy balance of supply, demand, and pricing is expected to take more time and effort.

Regarding the memory market, TrendForce indicates that memory manufacturers’ production-cut strategies for DRAM and NAND Flash are expected to continue into 2024, especially for the heavily loss-making NAND Flash segment. TrendForce projects that visibility into consumer electronics demand for the first half of 2024 remains limited, and with general server capital expenditure still crowded out by AI servers, overall memory demand is expected to stay relatively weak.

Still, TrendForce states that, given the low base in 2023 and the comparatively low price levels of certain memory products, DRAM and NAND Flash are forecast to post year-over-year growth of 13% and 16%, respectively.

On the other hand, even with demand picking up, effectively clearing inventory and restoring supply-demand equilibrium in 2024 hinges on suppliers exercising restraint over production capacity. If suppliers manage capacity appropriately, average memory prices have a chance to rebound.

Nomura Securities notes in its report that NAND Flash prices have seen double-digit increases since late August 2023, largely because NAND Flash production cuts have escalated and downstream inventories of smartphones and related components are low. In addition, various brands have launched new products over the past few months.

Citigroup’s recent update on the global memory average-selling-price outlook points to significant production cuts, including by major manufacturers such as Samsung. Memory makers are expected to rely on these substantial cuts to prevent further price declines, since further declines could threaten NAND Flash cash-cost levels. Samsung’s meaningful reduction in memory output is therefore expected to help stabilize average memory selling prices in 2023 and lay the groundwork for a steady recovery throughout 2024.

2023-08-31

[News] Goldman Sachs: TSMC to Win Big with Intel’s Increased Outsourcing

According to a report by Taiwan’s Commercial Times, Goldman Sachs Securities notes that Intel has been grappling with process-upgrade delays ever since its 10-nanometer node. The company recently decided to establish a foundry-style relationship between its manufacturing groups and internal product business units. With the market growing increasingly substantial, Intel is expected to expand its outsourcing to TSMC in 2024 and 2025, and TSMC stands as the major beneficiary of this rising outsourcing trend.

Goldman Sachs’ analysis puts the total addressable market of Intel’s outsourcing orders at $18.6 billion for 2024 and $19.4 billion for 2025. Of that, the portion addressable by TSMC’s wafer fabrication services amounts to $5.6 billion and $9.7 billion, respectively, or roughly 6.4% and 9.4% of TSMC’s overall revenue in the corresponding years.
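As a quick consistency check on these figures (a back-of-the-envelope calculation, not part of the Goldman Sachs report), the stated dollar amounts and revenue shares imply the following assumptions about TSMC’s annual revenue:

# Back out the TSMC annual revenue implied by the quoted TAM and revenue shares
tsmc_tam      = {"2024": 5.6e9, "2025": 9.7e9}   # addressable value for TSMC, in USD
revenue_share = {"2024": 0.064, "2025": 0.094}   # stated share of TSMC's overall revenue

for year in ("2024", "2025"):
    implied_revenue = tsmc_tam[year] / revenue_share[year]
    print(f"{year}: implied TSMC revenue of about ${implied_revenue / 1e9:.0f} billion")
# 2024: ~$88 billion; 2025: ~$103 billion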

Prominent semiconductor industry analyst Andrew Lu adds that it is Intel’s wafer manufacturing division, not its design division, that competes with TSMC. The design division, fighting for survival in high-performance computing semiconductors, currently hopes for close collaboration with TSMC. Lu even predicts that Intel’s manufacturing and design divisions will inevitably be split into two separate companies a few years down the line.

2023-08-31

Understanding Chiplets, SoC, and SiP: Why TSMC, Intel, Samsung Invest?

Semiconductor process technology is nearing the boundaries of known physics. To keep improving processor performance, chiplet-based design and heterogeneous integration have become a prevailing trend and are regarded as a primary way of extending Moore’s Law. Major industry players such as TSMC, Intel, and Samsung are vigorously developing these technologies.

What are SoC, SiP, and Chiplet?

To understand chiplet technology, we must first clarify two commonly used terms: SoC and SiP. An SoC (System on Chip) redesigns multiple different chips to use the same manufacturing process and integrates them onto a single chip. A SiP (System in Package), by contrast, connects multiple chips built on different manufacturing processes through heterogeneous integration and combines them within a single package.

Chiplet technology uses advanced packaging to build a SiP out of multiple small dies, integrating chiplets with different functions onto a single substrate. While chiplets and SiP may seem similar, a chiplet is itself a chip, whereas SiP refers to the packaging form; the two differ in function and purpose.

Chiplets: Today’s Semiconductor Development Trend

The chiplet design concept offers several advantages over a monolithic SoC, most notably a significant improvement in manufacturing yield. As die sizes grow to boost performance, yield falls because a larger die is more likely to contain a defect. Chiplet technology instead integrates several smaller dies, each with a comparatively high yield, improving both chip performance and overall yield.
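To make the yield argument concrete, here is a minimal sketch using a simple Poisson defect model; the defect density and die areas are illustrative assumptions, not figures from the article.

import math

def die_yield(area_mm2, defect_density_per_mm2):
    # Poisson yield model: probability that a die of the given area is defect-free
    return math.exp(-area_mm2 * defect_density_per_mm2)

D0 = 0.001  # assumed defect density: 0.001 defects per mm^2 (hypothetical)

print(f"One 800 mm^2 monolithic die: {die_yield(800, D0):.0%} yield")   # ~45%
print(f"One 200 mm^2 chiplet:        {die_yield(200, D0):.0%} yield")   # ~82%
# Splitting the design into four 200 mm^2 chiplets means defective dies are
# discarded individually, so far less silicon is wasted per good product than
# with a single large monolithic die.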

Furthermore, Chiplet technology contributes to reduced design complexity and costs. Through heterogeneous integration, Chiplets can combine various types of small chips, reducing integration challenges in the initial design phase and facilitating design and testing. Additionally, since different Chiplets can be independently optimized, the final integrated product often achieves better overall performance.

Chiplets can also lower wafer manufacturing costs. Apart from CPUs and GPUs, many other blocks in a chip perform well without relying on advanced processes. Chiplets allow each functional die to be built on the most suitable manufacturing node, which contributes to cost reduction.
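As a rough illustration of this mixed-node cost argument, the sketch below uses entirely hypothetical per-mm² silicon costs and die areas; none of these numbers come from the article.

# Hypothetical cost per mm^2 of processed silicon (illustrative only)
cost_leading_mm2 = 0.30   # leading-edge node
cost_mature_mm2  = 0.10   # mature node

# Monolithic 600 mm^2 SoC built entirely on the leading-edge node
monolithic_cost = 600 * cost_leading_mm2                        # $180

# Chiplet split: 350 mm^2 compute die on the leading edge plus a 250 mm^2 I/O die on a mature node
chiplet_cost = 350 * cost_leading_mm2 + 250 * cost_mature_mm2   # $105 + $25 = $130

print(f"Monolithic: ${monolithic_cost:.0f}, chiplet split: ${chiplet_cost:.0f}")
# The silicon cost drops even before counting the yield advantage of smaller dies;
# packaging and die-to-die interconnect costs (ignored here) offset part of the gain.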

With the evolution of semiconductor processes, chip design has become more challenging and complex, leading to rising design costs. In this context, Chiplet technology, which simplifies design and manufacturing processes, effectively enhances chip performance, and extends Moore’s Law, holds significant promise.

Applications and Development of Chiplets

In recent years, global semiconductor giants such as AMD, TSMC, Intel, and NVIDIA have recognized the market potential of this field and invested heavily in chiplet technology. For example, AMD’s recent products have benefited from a ‘SiP + chiplet’ manufacturing approach, and Apple’s M1 Ultra chip achieved high performance through a custom UltraFusion packaging architecture. In academia, institutions such as the University of California, Georgia Tech, and European research organizations have begun studying interconnect interfaces, packaging, and applications related to chiplet technology.

In conclusion, because chiplet technology lowers design costs, shortens development time, improves design flexibility and yield, and expands chip functionality, it has become an indispensable approach in the ongoing development of high-performance chips.

This article is from TechNews, a collaborative media partner of TrendForce.
