Insights
With the flourishing development of technologies such as AI, cloud computing, big data analytics, and mobile computing, modern society has an increasingly high demand for computing power.
Moreover, as process technology advances to 3 nanometers and beyond, chip scaling has run into physical limitations and manufacturing costs have climbed. Therefore, besides continuing to develop advanced process nodes, the semiconductor industry is also exploring other ways to keep chip size in check while ensuring high performance.
The concept of “heterogeneous integration” has become a contemporary focus, driving the transition of chips from single-layer packages to advanced packaging in which multiple layers are stacked together.
The term “CoWoS” can be broken down into the following definitions: “CoW” stands for “Chip-on-Wafer,” referring to the stacking of chips on a wafer, while “WoS” stands for “Wafer-on-Substrate,” which involves stacking that wafer onto a substrate.
Therefore, “CoWoS” collectively refers to stacking chips and packaging them onto a substrate. This approach reduces the space required for chips and offers benefits in reducing power consumption and costs.
Such packaging can be further divided into 2.5D horizontal stacking (most famously exemplified by TSMC’s CoWoS) and 3D vertical stacking. In these configurations, various processor and memory dies, or chiplets, are stacked layer by layer. Because its primary application lies in advanced process nodes, it is also referred to as advanced packaging.
TrendForce’s data offers a clear view of how hot the AI chip market has become: in 2023, shipments of AI servers (including those equipped with GPUs, FPGAs, ASICs, etc.) reached nearly 1.2 million units, a 38.4% increase from 2022, accounting for nearly 9% of overall server shipments.
Looking ahead to 2026, the proportion is expected to reach 15%, with a compound annual growth rate (CAGR) of AI server shipments from 2022 to 2026 reaching 22%.
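As a rough, purely illustrative check of what those percentages imply, the minimal Python sketch below back-calculates a 2022 baseline from the 2023 shipment figure and projects it forward at the stated CAGR; the derived unit counts are approximations, not TrendForce forecasts.

```python
# Back-of-the-envelope check of the AI server shipment figures quoted above.
# Inputs are the percentages stated in the text; derived numbers are rough
# illustrations, not TrendForce's own projections.

shipments_2023 = 1_200_000   # "nearly 1.2 million units" shipped in 2023
yoy_growth_2023 = 0.384      # 38.4% increase over 2022
cagr_2022_2026 = 0.22        # 22% CAGR from 2022 to 2026

# Implied 2022 baseline, derived from the 2023 figure and its YoY growth.
shipments_2022 = shipments_2023 / (1 + yoy_growth_2023)

# Projecting that baseline forward four years at the stated CAGR.
shipments_2026 = shipments_2022 * (1 + cagr_2022_2026) ** 4

print(f"Implied 2022 shipments: {shipments_2022:,.0f}")  # roughly 870,000 units
print(f"Implied 2026 shipments: {shipments_2026:,.0f}")  # roughly 1.9 million units
```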
Due to the advanced packaging requirements of AI chips, TSMC’s 2.5D advanced packaging CoWoS technology is currently the primary technology used for AI chips.
GPUs, in particular, utilize higher-specification HBM, which must be integrated with the core dies using 2.5D advanced packaging technology. The initial stage of chip stacking in CoWoS packaging, known as Chip on Wafer (CoW), is primarily carried out at the fab using a 65-nanometer process. Through-silicon via (TSV) processing is then performed, and the finished assemblies are stacked and packaged onto the substrate, a stage known as Wafer on Substrate (WoS).
As a result, the production capacity of CoWoS packaging technology has become a significant bottleneck in AI chip output over the past year, and it remains a key factor in whether AI chip demand can be met in 2024. Foreign analysts have previously pointed out that NVIDIA is currently the largest customer of TSMC’s 2.5D advanced packaging CoWoS technology.
This includes NVIDIA’s H100 GPU, which utilizes TSMC’s 4-nanometer advanced process, as well as the A100 GPU, which uses TSMC’s 7-nanometer process, both of which are packaged using CoWoS technology. As a result, NVIDIA’s chips account for 40% to 50% of TSMC’s CoWoS packaging capacity. This is also why the high demand for NVIDIA chips has led to tight capacity for TSMC’s CoWoS packaging.
TSMC’s Expansion Plans Expected to Ease Tight Supply Situation in 2024
During the earnings call held in July 2023, TSMC announced its plans to double the CoWoS capacity, indicating that the supply-demand imbalance in the market could be alleviated by the end of 2024.
Subsequently, in late July 2023, TSMC announced an investment of nearly NTD 90 billion (roughly USD 2.87 billion) to establish an advanced packaging fab in the Tongluo Science Park, with the construction expected to be completed by the end of 2026 and mass production scheduled for the second or third quarter of 2027.
In addition, during the earnings call on January 18, 2024, TSMC’s CFO, Wendell Huang, emphasized that TSMC would continue its expansion of advanced processes in 2024. Therefore, it is estimated that 10% of the total capital expenditure for the year will be allocated towards expanding capacity in advanced packaging, testing, photomasks, and other areas.
In fact, NVIDIA’s CFO, Colette Kress, stated during an investor conference that the key process of CoWoS advanced packaging has been developed and certified with other suppliers. Kress further anticipated that supply would gradually increase over the coming quarters.
Regarding this, J.P. Morgan, an investment firm, pointed out that the bottleneck in CoWoS capacity is primarily due to the supply-demand gap in the interposer. This is because the TSV process is complex, and expanding capacity requires more high-precision equipment. However, the long lead time for high-precision equipment, coupled with the need for regular cleaning and inspection of existing equipment, has resulted in supply shortages.
Beyond TSMC, which dominates the CoWoS advanced packaging market, other Taiwanese companies such as UMC, ASE Technology Holding, and Powertech Technology are also gradually entering the field.
Among them, UMC expressed during an investor conference in late July 2023 that it is accelerating the deployment of silicon interposer technology and capacity to meet customer needs in the 2.5D advanced packaging sector.
UMC Expands Interposer Capacity; ASE Pushes Forward with VIPack Advanced Packaging Platform
UMC emphasizes that it is the world’s first foundry to offer an open system solution for silicon interposer manufacturing. Through this open system collaboration (UMC+OSAT), UMC can provide a fully validated supply chain for rapid mass production implementation.
On the other hand, in terms of shipment volume, ASE Group currently holds approximately a 32% market share in the global Outsourced Semiconductor Assembly and Test (OSAT) industry and accounts for over 50% of the OSAT shipment volume in Taiwan. Its subsidiary, ASE Semiconductor, also notes the recent focus on CoWoS packaging technology. ASE Group has been strategically positioning itself in advanced packaging, working closely with TSMC as a key partner.
ASE underscores the significance of its VIPack advanced packaging platform, designed to provide vertical interconnect integration solutions. VIPack represents the next generation of 3D heterogeneous integration architecture.
Leveraging advanced redistribution layer (RDL) processes, embedded integration, and 2.5D/3D packaging technologies, VIPack enables customers to integrate multiple chips into a single package, unlocking unprecedented innovation in various applications.
Powertech Technology Seeks Collaboration with Foundries; Winbond Electronics Offers Heterogeneous Integration Packaging Technology
In addition, the OSAT player Powertech Technology is actively expanding its presence in advanced packaging for logic chips and AI applications.
The collaboration between Powertech and Winbond is expected to offer customers various options for CoWoS advanced packaging, indicating that CoWoS-related advanced packaging products could be available as early as the second half of 2024.
Winbond Electronics emphasizes that the collaboration project will involve Winbond Electronics providing CUBE (Customized Ultra-High Bandwidth Element) DRAM, as well as customized silicon interposers and integrated decoupling capacitors, among other advanced technologies. These will be complemented by Powertech Technology’s 2.5D and 3D packaging services.
(Photo credit: TSMC)
News
NVIDIA has begun accepting pre-orders for its customized artificial intelligence (AI) chips tailored for the Chinese market, as per a report from Reuters. The prices of the chips are said to be comparable to those of its competitor Huawei’s products.
The H20 graphics card, designed by NVIDIA exclusively for the Chinese market, is the most powerful of the three chips developed for that market, although its computing power is lower than that of NVIDIA’s own flagship AI chips, the H100 and H800. The H800, also tailored for China, was banned in October last year.
According to industry sources cited in the report, the specifications of the H20 are inferior to Huawei’s Ascend 910B in some critical areas. Additionally, NVIDIA has priced orders from Chinese H20 distributors between $12,000 and $15,000 per unit in recent weeks.
It is noteworthy that servers offered by distributors, pre-configured with 8 of these AI chips, are priced at CNY 1.4 million. In comparison, servers equipped with 8 H800 chips were priced at around CNY 2 million when they launched a year ago.
Furthermore, the report adds that distributors have informed customers they will be able to begin small-scale deliveries of H20 products in the first quarter of 2024, with bulk deliveries starting in the second quarter.
In terms of specifications, the H20 appears to lag behind the 910B in FP32 performance, a critical metric that measures the speed at which chips process common tasks, with the H20’s performance being less than half of its competitor’s.
However, according to the source cited in the report, the H20 seems to have an advantage over the 910B in terms of interconnect speed, which measures the speed of data transfer between chips.
The source further indicates that in applications requiring numerous chips to be interconnected and function as a system, the H20 still possesses competitive capabilities compared to the 910B.
NVIDIA reportedly plans to commence mass production of the H20 in the second quarter of this year. Additionally, the company intends to introduce two other chips targeted at the Chinese market, namely the L20 and L2. However, the status of these two chips cannot be confirmed at the moment, as neither the H20, L20, nor L2 are currently listed on NVIDIA’s official website.
TrendForce believes Chinese companies will continue to buy existing AI chips in the short term. NVIDIA’s AI accelerator GPUs remain a top priority, including the H20, L20, and L2, which were designed specifically for the Chinese market following the ban.
At the same time, major Chinese AI firms like Huawei will continue to develop general-purpose AI chips to provide AI solutions for local businesses. Beyond developing AI chips, these companies aim to establish a domestic AI server ecosystem in China.
TrendForce recognizes that a key factor in their success will be support from the Chinese government through localized projects, such as those involving Chinese telecom operators, which encourage the adoption of domestic AI chips.
(Photo credit: NVIDIA)
News
According to sources cited by the Financial Times, South Korean chip manufacturer SK Hynix is reportedly planning to establish a packaging facility in Indiana, USA. This move is expected to significantly advance the US government’s efforts to bring more artificial intelligence (AI) chip supply chains into the country.
SK Hynix’s new packaging facility will specialize in stacking standard dynamic random-access memory (DRAM) chips to create high-bandwidth memory (HBM) chips. These chips will then be integrated with NVIDIA’s GPUs for training systems like OpenAI’s ChatGPT.
According to one source close to SK Hynix cited by the report, the growing demand for HBM from American customers and the need for close collaboration with chip designers have made establishing advanced packaging facilities in the US essential.
Regarding this, SK Hynix reportedly responded, “Our official position is that we are currently considering a possible investment in the US but haven’t made a final decision yet.”
The report quoted Kim Yang-paeng, a researcher at the Korea Institute for Industrial Economics and Trade, as saying, “If SK Hynix establishes an advanced HBM memory packaging facility in the United States, along with TSMC’s factory in Arizona, this means Nvidia can ultimately produce GPUs in the United States.”
The United States was previously reported to be planning to announce substantial chip subsidies by the end of March. The aim is to pave the way for chip manufacturers like TSMC, Samsung, and Intel by providing them with billions of dollars to accelerate the expansion of domestic chip production.
These subsidies are a core component of the US 2022 “CHIPS and Science Act,” which allocates a budget of USD 39 billion to directly subsidize and revitalize American manufacturing.
(Photo credit: SK Hynix)
News
NVIDIA’s AI chip supply faces constraints, with insufficient CoWoS advanced packaging production capacity at TSMC potentially being the main issue. According to Economic Daily News, Intel is also set to provide advanced packaging services to NVIDIA, with a monthly capacity of about 5,000 units. Intel is expected to join NVIDIA’s advanced packaging supply chain as early as the second quarter of 2024, grabbing a share of TSMC’s related orders.
Industry sources cited by the Economic Daily News believe that Intel’s participation will help alleviate the tight supply of AI chips.
TSMC declined to comment on the rumors on January 30th. As per industry sources cited by Economic Daily News, Intel’s entry into NVIDIA’s advanced packaging supply chain is expected to lead to a significant increase of nearly ten percent in total production capacity.
As per industry analysis cited in the report, even with Intel joining to provide advanced packaging capacity for NVIDIA, TSMC remains NVIDIA’s primary supplier for advanced packaging. When considering the expanded production capacity of TSMC and other related assembly and testing partners, it is estimated that they will supply approximately 90% of advanced packaging capacity for NVIDIA.
Supply chain sources cited by the report further indicate that TSMC is ramping up its advanced packaging production capacity. Production capacity is estimated to increase to nearly 50,000 units in the first quarter of this year, representing a 25% increase from the estimated nearly 40,000 units in December last year.
While Intel may potentially provide NVIDIA with nearly 5,000 units of advanced packaging capacity, this accounts for about 10% of the total. However, Intel is reportedly not involved in NVIDIA’s AI chip foundry orders.
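The capacity figures above can be sanity-checked with a short, purely illustrative Python sketch; the unit counts are the approximate monthly capacities cited in the report, not confirmed numbers.

```python
# Rough check of the CoWoS capacity figures cited above (approximate monthly
# units from the report, not confirmed data).

tsmc_capacity_dec = 40_000   # TSMC's estimated capacity, December last year
tsmc_capacity_q1 = 50_000    # TSMC's estimated capacity, Q1 this year
intel_capacity = 5_000       # capacity Intel may provide to NVIDIA

ramp = tsmc_capacity_q1 / tsmc_capacity_dec - 1       # quarter-over-quarter growth
intel_share = intel_capacity / tsmc_capacity_q1       # Intel's slice relative to TSMC's Q1 level

print(f"TSMC capacity ramp:        {ramp:.0%}")        # 25%, matching the report
print(f"Intel's approximate share: {intel_share:.0%}") # about 10% of the total
```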
Intel has advanced packaging capacity in Oregon and New Mexico in the United States and is actively expanding its advanced packaging capabilities at its new facility in Penang. Notably, Intel has previously stated its intention to offer customers the option of using only its advanced packaging solutions, which is expected to give customers greater production flexibility.
Industry sources also indicate that the previous shortage of AI chips stemmed from three main factors: insufficient capacity in advanced packaging, tight supply of high-bandwidth memory (HBM3), and some cloud service providers placing duplicate orders. However, these bottlenecks have gradually been resolved, and the improvement rate is better than expected.
(Photo credit: Intel)
News
NVIDIA CEO Jensen Huang stated on January 25th that the current phase marks the beginning of AI expansion and growth. He emphasized that AI is set to transform everything and be omnipresent in the future. However, the most significant challenge at present is the ongoing tight supply of AI chips.
According to a report from Economic Daily News, Huang, who recently visited Taiwan, shared his thoughts during an interview before attending the NVIDIA Taiwan branch’s year-end banquet.
He revealed that during his trip to Taiwan, he met with TSMC founder Morris Chang and Chang’s wife, Sophie Chang, as well as with CEO C.C. Wei, his semiconductor manufacturing partner. During the discussion, Huang and Wei talked about the substantial demand for NVIDIA’s products and the collaborative efforts needed with TSMC to address market needs.
Regarding the challenges facing AI development, Jensen Huang believes that one key challenge lies in expanding the production capacity of AI chips. While there is a tight supply of NVIDIA products, the demand is incredibly strong.
As a result, NVIDIA actively collaborates with TSMC and other supply chain partners to meet this demand. The company continues to advance AI technology while also paying attention to related security issues.
When discussing the major trends in AI, Jensen Huang pointed out that the development of AI can help rejuvenate the computer industry. He further indicated that AI will operate in smartphones, computers, robots, automobiles, as well as in the cloud and data centers. Huang emphasized that NVIDIA is a pioneer in accelerating computation and AI computing, and in the next decade, he envisions a reshaping of computation, with every industry being impacted.
Before visiting Taiwan, Huang had also recently traveled to China, a trip seen as an effort to alleviate customers’ concerns about adopting the downgraded versions of NVIDIA’s chips designed for the Chinese market.
(Photo credit: NVIDIA)