

2024-02-22

[News] Hurdles in Acquiring NVIDIA’s High-End Products: Assessing the Progress of Eight Chinese AI Chip Companies in Self-Development

Under the formidable impetus of AI, global enterprises are vigorously strategizing for AI chip development, and China is no exception. Who are the prominent AI chip manufacturers in China presently? How do they compare with industry giants like NVIDIA, and what are their unique advantages? A report from TechNews has compiled an overview of the self-development progress of eight Chinese AI chip manufacturers.

  • An Overview of AI Chips

In broad terms, AI chips refer to semiconductor chips capable of running AI algorithms. However, in the industry’s typical usage, AI chips specifically denote chips designed with specialized acceleration for AI algorithms, capable of handling large-scale computational tasks in AI applications. Under this concept, AI chips are also referred to as accelerator cards.

Technically, AI chips are mainly classified into three categories: GPU, FPGA, and ASIC. In terms of functionality, AI chips encompass two main types: training and inference. Regarding application scenarios, AI chips can be categorized into server-side and mobile-side, or cloud, edge, and terminal.

The global AI chip market is currently dominated by Western giants, with NVIDIA leading the pack. Data from industry sources cited by TechNews indicates that NVIDIA nearly monopolizes the AI chip market, holding roughly an 80% share.

China’s AI industry started relatively late, but in recent years, amid the US-China rivalry and strong support from Chinese policies, Chinese AI chip design companies have gradually gained prominence. They have demonstrated relatively outstanding performance in terminal and large model inference.

However, compared to global giants, they still have significant ground to cover, especially in the higher-threshold GPU and large model training segments.

GPUs are general-purpose chips, currently dominating the usage in the AI chip market. General-purpose GPU computing power is widely employed in artificial intelligence model training and inference fields. Presently, NVIDIA and AMD dominate the GPU market, while Chinese representative companies include Hygon Information Technology, Jingjia Micro, and Enflame Technology.

FPGAs are semi-customized chips known for low latency and short development cycles. Compared to GPUs, they are suitable for multi-instruction, single-data flow analysis, but not for complex algorithm computations. They are mainly used in the inference stage of deep learning algorithms. Frontrunners in this field include Xilinx and Intel in the US, with Chinese representatives including Baidu Kunlunxin and DeePhi.

ASICs are fully customized AI chips with advantages in power consumption, reliability, and integration. Mainstream products include TPU, NPU, VPU, and BPU. Global leading companies include Google and Intel, while China’s representatives include Huawei, Alibaba, Cambricon Technologies, and Horizon Robotics.

In recent years, China has actively invested in the field of self-developed AI chips. Major companies such as Baidu, Alibaba, Tencent, and Huawei have accelerated the development of their own AI chips, and numerous AI chip companies continue to emerge.

Below is an overview of the progress of 8 Chinese AI chip manufacturers:

1. Baidu Kunlunxin

Baidu’s foray into AI chips can be traced back to as early as 2011. After seven years of development, Baidu officially unveiled its self-developed AI chip, Kunlun 1, in 2018. Built on a 14nm process and utilizing the self-developed XPU architecture, Kunlun 1 entered mass production in 2020. It is primarily employed in Baidu’s search engine and Xiaodu businesses.

In August 2021, Baidu announced the mass production of its second-generation self-developed AI chip, Kunlun 2. It adopts a 7nm process and integrates the self-developed second-generation XPU architecture, delivering a performance improvement of 2-3 times compared to the first generation. It also exhibits significant enhancements in versatility and ease of use.

The first two generations of Baidu Kunlunxin products have already been deployed in tens of thousands of units. The third-generation product is expected to be unveiled at the Baidu Create AI Developer Conference scheduled for April 2024.

2. T-Head (Alibaba)

Established in September 2018, T-Head is the semiconductor chip business entity fully owned by Alibaba. It provides a series of products, covering data center chips, IoT chips, processor IP licensing, and more, achieving complete coverage across the chip design chain.

In terms of AI chip deployment, T-Head introduced its first high-performance artificial intelligence inference chip, the HanGuang 800, in September 2019. It is based on a 12nm process and features a proprietary architecture.

In August 2023, Alibaba’s T-Head unveiled its first self-developed RISC-V AI platform, supporting over 170 mainstream AI models, thereby propelling RISC-V into the era of high-performance AI applications.

Simultaneously, T-Head announced the new upgrade of its XuanTie processor C920, which can accelerate GEMM (General Matrix Multiplication) calculations 15 times faster than the Vector scheme.

In November 2023, T-Head introduced three new processors on the XuanTie RISC-V platform (C920, C907, R910). These processors significantly enhance acceleration computing capabilities, security, and real-time performance, poised to accelerate the widespread commercial deployment of RISC-V in scenarios and domains such as autonomous driving, artificial intelligence, enterprise-grade SSD, and network communication.

3. Tencent 

In November 2021, Tencent announced substantial progress in three chip designs: Zixiao for AI computing, Canghai for image processing, and Xuanling for high-performance networking.

Zixiao has completed trial production and been put into use. Reportedly, Zixiao employs an in-house compute-in-memory architecture and proprietary acceleration modules, delivering up to 3 times the computing acceleration performance and over 45% overall cost savings.

Zixiao chips are intended for internal use by Tencent and are not available for external sales. Tencent profits by renting out computing power through its cloud services.

Recently, according to sources cited by TechNews, Tencent is considering using Zixiao V1 as an alternative to the NVIDIA A10 chip for AI image and voice recognition applications. Additionally, Tencent is planning to launch the Zixiao V2 Pro chip optimized for AI training to replace the NVIDIA L40S chip in the future.

4. Huawei

Huawei unveiled its Huawei AI strategy and all-scenario AI solutions at the 2018 Huawei Connect Conference. Additionally, it introduced two new AI chips: the Ascend 910 and the Ascend 310. Both chips are based on Huawei’s self-developed Da Vinci architecture.

The Ascend 910, designed for training, utilizes a 7nm process and boasts computational density that is said to surpass the NVIDIA Tesla V100 and Google TPU v3.

On the other hand, the Ascend 310 belongs to the Ascend-mini series and is Huawei’s first commercial AI SoC, catering to low-power consumption areas such as edge computing.

Based on the Ascend 910 and Ascend 310 AI chips, Huawei has introduced the Atlas AI computing solution. As per the Huawei Ascend community, the Atlas 300T product line includes three models corresponding to the Ascend 910A, 910B, and 910 Pro B.

Among them, the 910 Pro B has already secured orders for at least 5,000 units from major clients in 2023, with delivery expected in 2024. Sources cited by the TechNews report indicate that the capabilities of the Huawei Ascend 910B chip are now comparable to those of the NVIDIA A100.

Due to the soaring demand for China-produced AI chips like the Huawei Ascend 910B in China, Reuters recently reported that Huawei plans to prioritize the production of the Ascend 910B. This move could potentially impact the production capacity of the Kirin 9000s chips, which are expected to be used in the Mate 60 series.

5. Cambricon Technologies

Founded in 2016, Cambricon Technologies focuses on the research and technological innovation of artificial intelligence chip products.

Since its establishment, Cambricon has launched multiple chip products covering terminal, cloud, and edge computing fields. Among them, the MLU 290 intelligent chip is Cambricon’s first training chip, utilizing TSMC’s 7nm advanced process and integrating 46 billion transistors. It supports the MLUv02 expansion architecture, offering comprehensive support for AI training, inference, or hybrid artificial intelligence computing acceleration tasks.

The Cambricon MLU 370 is the company’s flagship product, utilizing a 7nm manufacturing process and supporting both inference and training tasks. Additionally, the MLU 370 is Cambricon’s first AI chip to adopt chiplet technology, integrating 39 billion transistors, with a maximum computing power of up to 256TOPS (INT8).

6. Biren Technology 

Established in 2019, Biren Technology initially focused on general-purpose intelligent computing in the cloud.

It aims to gradually surpass existing solutions in fields such as artificial intelligence training, inference, and graphics rendering, thereby achieving a breakthrough for China-produced high-end general-purpose computing chips.

In 2021, Biren Technology’s first general GPU, the BR100 series, entered trial production. The BR100 was officially released in August 2022.

Reportedly, the BR100 series is developed based on Biren Technology's independently developed chip architecture and utilizes a mature 7nm manufacturing process.

7. Horizon Robotics 

Founded in July 2015, Horizon Robotics is a provider of smart driving computing solutions in China. It has launched various AI chips, notably the Sunrise and Journey series. The Sunrise series focuses on the AIoT market, while the Journey series is designed for smart driving applications.

Currently, the Sunrise series has advanced to its third generation, comprising the Sunrise 3M and Sunrise 3E models, catering to the high-end and low-end markets, respectively.

In terms of performance, the Sunrise 3 achieves an equivalent standard computing power of 5 TOPS while consuming only 2.5W of power, representing a significant upgrade from the previous generation.

The Journey series has now iterated to its fifth generation. The Journey 5 chip was released in 2021, with global mass production starting in September 2022. Each chip in the series boasts a maximum AI computing power of up to 128 TOPS.

In November 2023, Horizon Robotics announced that the Journey 6 series will be officially unveiled in April 2024, with the first batch of mass-produced vehicle deliveries scheduled for the fourth quarter of 2024.

Several automotive companies, including BYD, GAC Group, Volkswagen Group’s software company CARIAD, Bosch, among others, have reportedly entered into cooperative agreements with Horizon Robotics.

8. Enflame Technology

Enflame Technology, established in March 2018, specializes in cloud and edge computing in the field of artificial intelligence.

Over the past five years, it has developed two product lines focusing on cloud training and cloud inference. In September 2023, Enflame Technology announced the completion of a CNY 2 billion Series D funding round.

In addition, according to reports cited by TechNews, Enflame Technology’s third-generation AI chip products are set to hit the market in early 2024.

Conclusion

Looking ahead, the industry remains bullish on the commercial development of AI, anticipating a substantial increase in the demand for computing power, thereby creating a significant market opportunity for AI chips.

Data cited by TechNews indicates that the global AI chip market reached USD 580 billion in 2022 and is projected to exceed one trillion dollars by 2030.

Leading AI chip manufacturers like NVIDIA are naturally poised to continue benefiting from this trend. At the same time, Chinese AI chip companies also have the opportunity to narrow the gap and accelerate growth within the vast AI market landscape.


(Photo credit: iStock)

Please note that this article cites information from TechNews and Reuters.

2024-02-21

[News] Breaking Away from NVIDIA Dependency, Microsoft Reportedly Developing In-House AI Server High-Speed Network Card

Microsoft is reportedly developing a customized network card for AI servers, as per sources cited by global media The Information. This card is expected to enhance the performance of its in-house AI chip Azure Maia 100 while reducing dependency on NVIDIA as the primary supplier of high-performance network cards.

Leading this product initiative at Microsoft is Pradeep Sindhu, co-founder of Juniper Networks. Microsoft acquired Sindhu’s data center technology startup, Fungible, last year. Sindhu has since joined Microsoft and is leading the team in developing this network card.

According to The Information, this network card is similar to NVIDIA's ConnectX-7 interface card, which supports a maximum bandwidth of 400 Gb Ethernet and is sold alongside NVIDIA GPUs.

Developing high-speed networking equipment tailored specifically for AI workloads may take over a year. If successful, it could reduce the time required for OpenAI to train models on Microsoft AI servers and lower the costs associated with the training process.

In November last year, Microsoft unveiled the Azure Maia 100 for data centers, manufactured using TSMC’s 5-nanometer process. The Azure Maia 100, introduced at the conference, is an AI accelerator chip designed for tasks such as running OpenAI models, ChatGPT, Bing, GitHub Copilot, and other AI workloads.

Microsoft is also in the process of designing the next generation of the chip. Not only is Microsoft striving to reduce its reliance on NVIDIA, but other companies including OpenAI, Tesla, Google, Amazon, and Meta are also investing in developing their own AI accelerator chips. These companies are expected to compete with NVIDIA’s flagship H100 AI accelerator chips.


(Photo credit: Microsoft)

Please note that this article cites information from TechNews and The Information.

2024-02-21

[News] Pioneering an AI Era: Assessing the Prosperity and Challenges of NVIDIA

Last year’s AI boom propelled NVIDIA into the spotlight, yet the company finds itself at a challenging crossroads.

According to a report from TechNews, on one hand, NVIDIA dominates in high-performance computing and artificial intelligence, continuously expanding with its latest GPU products. On the other hand, global supply chain instability, rapid emergence of competitors, and uncertainties in technological innovation are exerting unprecedented pressure on NVIDIA.

NVIDIA’s stock price surged by 246% last year, driving its market value past USD 1 trillion and making it the first chip company to achieve this milestone. According to the Bloomberg Billionaires Index, NVIDIA CEO Jensen Huang’s personal wealth has soared to USD 55.7 billion.

However, despite NVIDIA's seemingly radiant outlook, a report from TechNews notes that it still faces uncontrollable internal and external challenges.

  • Internal Concern 1: CoWoS, HBM Capacity Bottlenecks

The most apparent issue lies in capacity constraints.

Currently, NVIDIA’s A100 and H100 GPUs are manufactured using TSMC’s CoWoS packaging technology. However, with the surge in demand for generative AI, TSMC’s CoWoS capacity is severely strained. Consequently, NVIDIA has certified other CoWoS packaging suppliers such as UMC, ASE, and American OSAT manufacturer Amkor as backup options.

Meanwhile, TSMC has relocated its InFO production capacity from Longtan to Southern Taiwan Science Park. The vacated Longtan fab is being repurposed to expand CoWoS capacity, while the Zhunan and Taichung fabs are also contributing to the expansion of CoWoS production to alleviate capacity constraints.

However, during the earnings call, TSMC also stated that despite a doubling of capacity in 2024, it still may not be sufficient to meet all customer demands.

In addition to TSMC’s CoWoS capacity, industry rumors suggest that NVIDIA has made significant upfront payments to Micron and SK Hynix to secure HBM3 memory, ensuring a stable supply. However, the entire HBM capacity of Samsung, SK Hynix, and Micron for this year has already been allocated. Therefore, whether capacity can meet market demand will be a significant challenge for NVIDIA.

  • Internal Concern 2: Major Customers Shifting Towards In-house Chips

While cloud service providers (CSPs) fiercely compete for GPUs, major players like Amazon, Microsoft, Google, and Meta are actively investing in in-house AI chips.

Amazon and Google have respectively introduced Trainium and TPU chips, Microsoft announced its first in-house AI chip Maia 100 along with in-house cloud computing CPU Cobalt 100, while Meta plans to unveil its first-generation in-house AI chip MTIA by 2025.

Although these hyperscale customers still rely on NVIDIA’s chips, in the long run, it may impact NVIDIA’s market share, inadvertently positioning them as competitors and affecting profits. Consequently, NVIDIA finds it challenging to depend solely on these hyperscale customers.

  • External Challenge 1: Export Control Pressures Lead to Loss of Chinese Customers

Due to escalating tensions between the US and China, the US issued new regulations prohibiting NVIDIA from exporting advanced AI chips to China. Consequently, NVIDIA introduced specially tailored versions such as A800 and H800 for the Chinese market.

However, they were ultimately blocked by the US, and products including the A100, A800, H100, H800, and L40S were placed on the export control list. Subsequently, NVIDIA decided to introduce new AI GPUs, namely the HGX H20, L20 PCIe, and L2 PCIe, in compliance with export policies.

However, with only about 20% of the H100’s computing power, these chips are planned for mass production in the second quarter. Due to the reduced performance, major Chinese companies like Alibaba, Tencent, and Baidu have reportedly refused to purchase them, explicitly stating significant order cuts for the year. Consequently, NVIDIA’s revenue prospects in China appear grim, with some orders even being snatched by Huawei.

Currently, NVIDIA’s sales revenue from Singapore and China accounts for 15% of its total revenue. Moreover, the company holds over 90% market share in the AI chip market in China. Therefore, the cost of abandoning the Chinese market would be substantial. NVIDIA is adamant about not easily giving up on China; however, the challenge lies in how to comply with US government policies and pressures while meeting the demands of Chinese customers.

During its last earnings call, NVIDIA CEO Jensen Huang mentioned that US export control measures would have an impact: contributions from China and other affected regions accounted for 20-25% of data center revenue in the last quarter, with a significant decline anticipated this quarter.

He also expressed concerns that besides losing the Chinese market, the situation would accelerate China’s efforts to manufacture its own chips and introduce proprietary GPU products, providing Chinese companies with opportunities to rise.

  • External Challenge 2: Arch-Rivals Intel and AMD Begin Their Offensive

In the race to capture the AI market opportunity, arch-rivals Intel and AMD are in close pursuit of NVIDIA. As NVIDIA pioneered the adoption of TSMC’s 4-nanometer H100, AMD quickly followed suit by launching the first batch of “Instinct MI300X” chips for AI and HPC applications last year.

Shipments of the MI300X commenced this year, with Microsoft’s data center division emerging as the largest buyer. Meta has also procured a substantial amount of Instinct MI300 series products, while LaminiAI stands as the first publicly known company to utilize the MI300X.

According to official performance tests by AMD, the MI300X outperforms the existing NVIDIA H100 80GB available on the market, posing a potential threat to the upcoming H200 141GB.

Additionally, compared to the H100 chip, the MI300X offers a more competitive price for products of the same level. If NVIDIA’s production capacity continues to be restricted, some customers may switch to AMD.

Meanwhile, Intel unveiled the “Gaudi 3” chip for generative AI last year. Although there is limited information available, it is rumored that the memory capacity may increase by 50% compared to Gaudi 2’s 96GB, possibly upgrading to HBM3e memory. CEO Pat Gelsinger directly stated that “Gaudi 3 performance will surpass that of the H100.”

  • External Challenge 3: Startup Underdogs Form AI Platform Alliance in Attempt to Conquer

Several global chip design companies have recently announced the formation of the “AI Platform Alliance,” aiming to promote an open AI ecosystem. The founding members of the AI Platform Alliance include Ampere, Cerebras Systems, Furiosa, Graphcore, Kalray, Kinara, Luminous, Neuchips, Rebellions, and Sapeon, among others.

Notably absent is industry giant NVIDIA, leading to speculation that startups aspire to unite and challenge NVIDIA’s dominance.

However, with NVIDIA holding a 75-90% market share in AI, it remains in a dominant position. Whether the AI Platform Alliance can disrupt NVIDIA’s leading position is still subject to observation.


(Photo credit: NVIDIA)

Please note that this article cites information from TechNews.

2024-02-19

[News] TSMC Reportedly Doubles CoWoS Capacity while Amkor, ASE also Enter Advanced Packaging for AI

The surge in demand for advanced packaging is being primarily propelled by artificial intelligence (AI) chips. According to industry sources cited by CNA, TSMC’s CoWoS production capacity is set to double this year, yet demand continues to outstrip supply. In response, NVIDIA has enlisted the help of packaging and testing facilities to augment its advanced packaging capabilities.

In addition, to address the imbalance between supply and demand for advanced packaging due to AI, outsourced semiconductor assembly and test (OSAT) companies such as ASE Technology Holding (ASE), Powertech Technology, and KYEC have expanded their capital expenditures this year to enhance their advanced packaging capabilities, aligning with the needs of their customers.

AI and high-performance computing (HPC) chips are driving the demand for CoWoS advanced packaging. As per sources interviewed by CNA, from July to the end of last year, TSMC actively adjusted its CoWoS advanced packaging production capacity, gradually expanding and stabilizing mass production.

The source further indicates that in December of last year, TSMC’s CoWoS monthly production capacity increased to 14,000 to 15,000 wafers. It is estimated that by the fourth quarter of this year, TSMC’s CoWoS monthly capacity will significantly expand to 33,000 to 35,000 wafers.

Per an earlier report from Commercial Times, TSMC has been outsourcing part of its CoWoS operations for some time, mainly targeting small-volume, high-performance chips. TSMC maintains in-house production of the CoW, while the back-end WoS is handed over to test and assembly houses to improve production efficiency and flexibility. 

However, the demand for advanced packaging capacity for AI chips still outstrips supply. Sources cited by CNA also reveal that NVIDIA has sought assistance from packaging and testing subcontractors outside of TSMC to augment their advanced packaging capabilities.

Amkor, among others, began gradually providing capacity support from the fourth quarter of last year, while SPIL, a subsidiary of ASE, is slated to commence supply in the first quarter of this year.


(Photo credit: TSMC)

Please note that this article cites information from CNA and Commercial Times.

2024-02-19

[News] CoWoS Capacity Shortage Challenges AI Chip Demand, while Taiwanese Manufacturers Expand to Seize Opportunities

With the flourishing development of technologies such as AI, cloud computing, big data analytics, and mobile computing, modern society has an increasingly high demand for computing power.

Moreover, with the advancement beyond 3 nanometers, wafer sizes have encountered scaling limitations and manufacturing costs have increased. Therefore, besides continuing to develop advanced processes, the semiconductor industry is also exploring other ways to maintain chip size while ensuring high efficiency.

The concept of “heterogeneous integration” has become a contemporary focus, leading to the transition of chips from single-layer to advanced packaging with multiple layers stacked together.

The term “CoWoS” can be broken down as follows: “CoW” stands for “Chip on Wafer,” referring to the stacking of chips on a wafer, while “WoS” stands for “Wafer on Substrate,” which involves stacking the result on a substrate.

Therefore, “CoWoS” collectively refers to stacking chips and packaging them onto a substrate. This approach reduces the space required for chips and offers benefits in reducing power consumption and costs.

Among these, CoWoS can be further divided into 2.5D horizontal stacking (most famously exemplified by TSMC’s CoWoS) and 3D vertical stacking versions. In these configurations, various processor and memory modules are stacked layer by layer to create chiplets. Because its primary application lies in advanced processes, it is also referred to as advanced packaging.

TrendForce data illustrates the strength of the AI chip market. In 2023, shipments of AI servers (including those equipped with GPUs, FPGAs, ASICs, etc.) reached nearly 1.2 million units, a 38.4% increase from 2022, accounting for nearly 9% of overall server shipments.

Looking ahead to 2026, the proportion is expected to reach 15%, with a compound annual growth rate (CAGR) of AI server shipments from 2022 to 2026 reaching 22%.
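As a rough sanity check, the shipment figures above can be combined in a few lines of Python. The 2022 base and 2026 projection below are back-calculated from the cited 1.2 million units, 38.4% year-over-year growth, and 22% CAGR; they are illustrative estimates, not figures from the report:

```python
# Back-calculate the 2022 base from the cited 2023 shipments and
# YoY growth, then project 2026 shipments at the cited 22% CAGR.
units_2023 = 1.2e6          # ~1.2 million AI servers shipped in 2023
yoy_growth = 0.384          # 38.4% increase from 2022
units_2022 = units_2023 / (1 + yoy_growth)   # implied 2022 base (~0.87M)

cagr = 0.22                 # cited 2022-2026 CAGR
units_2026 = units_2022 * (1 + cagr) ** 4    # implied 2026 shipments (~1.9M)

print(f"Implied 2022 base: {units_2022 / 1e6:.2f}M units")
print(f"Implied 2026 shipments: {units_2026 / 1e6:.2f}M units")
```

The implied roughly 1.9 million units in 2026 is consistent with AI servers growing from about 9% to 15% of overall server shipments only if total server shipments stay roughly flat, which matches the report's framing of AI servers as the main growth driver.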

Due to the advanced packaging requirements of AI chips, TSMC’s 2.5D advanced packaging CoWoS technology is currently the primary technology used for AI chips.

GPUs, in particular, utilize higher specifications of HBM, which require the integration of core dies using 2.5D advanced packaging technology. The initial stage of chip stacking in CoWoS packaging, known as Chip on Wafer (CoW), primarily undergoes manufacturing at the fab using a 65-nanometer process. Following this, through-silicon via (TSV) is carried out, and the finalized products are stacked and packaged onto the substrate, known as Wafer on Substrate (WoS).

As a result, the production capacity of CoWoS packaging technology has become a significant bottleneck in AI chip output over the past year, and it remains a key factor in whether AI chip demand can be met in 2024. Foreign analysts have previously pointed out that NVIDIA is currently the largest customer of TSMC’s 2.5D advanced packaging CoWoS technology.

This includes NVIDIA’s H100 GPU, which utilizes TSMC’s 4-nanometer advanced process, as well as the A100 GPU, which uses TSMC’s 7-nanometer process, both of which are packaged using CoWoS technology. As a result, NVIDIA’s chips account for 40% to 50% of TSMC’s CoWoS packaging capacity. This is also why the high demand for NVIDIA chips has led to tight capacity for TSMC’s CoWoS packaging.
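To put that share in wafer terms, here is a back-of-the-envelope sketch combining the 40-50% share above with the Q4 2024 capacity projection of 33,000-35,000 wafers per month cited in the preceding article. Pairing these two figures is an assumption for illustration, not a claim from either source:

```python
# Back-of-the-envelope estimate of NVIDIA's implied CoWoS wafer
# allocation. Both inputs come from this page; combining them is
# an illustrative assumption.
capacity = (33_000, 35_000)    # projected Q4 2024 CoWoS wafers/month
nvidia_share = (0.40, 0.50)    # NVIDIA's cited share of CoWoS capacity

low = capacity[0] * nvidia_share[0]    # conservative end: 13,200
high = capacity[1] * nvidia_share[1]   # upper end: 17,500

print(f"Implied NVIDIA allocation: {low:,.0f} to {high:,.0f} wafers/month")
```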

TSMC’s Expansion Plans Expected to Ease Tight Supply Situation in 2024

During the earnings call held in July 2023, TSMC announced its plans to double the CoWoS capacity, indicating that the supply-demand imbalance in the market could be alleviated by the end of 2024.

Subsequently, in late July 2023, TSMC announced an investment of nearly NTD 90 billion (roughly USD 2.87 billion) to establish an advanced packaging fab in the Tongluo Science Park, with the construction expected to be completed by the end of 2026 and mass production scheduled for the second or third quarter of 2027.

In addition, during the earnings call on January 18, 2024, TSMC’s CFO, Wendell Huang, emphasized that TSMC would continue its expansion of advanced processes in 2024. Therefore, it is estimated that 10% of the total capital expenditure for the year will be allocated towards expanding capacity in advanced packaging, testing, photomasks, and other areas.

In fact, NVIDIA’s CFO, Colette Kress, stated during an investor conference that the key process of CoWoS advanced packaging has been developed and certified with other suppliers. Kress further anticipated that supply would gradually increase over the coming quarters.

Regarding this, J.P. Morgan, an investment firm, pointed out that the bottleneck in CoWoS capacity is primarily due to the supply-demand gap in the interposer. This is because the TSV process is complex, and expanding capacity requires more high-precision equipment. However, the long lead time for high-precision equipment, coupled with the need for regular cleaning and inspection of existing equipment, has resulted in supply shortages.

Apart from TSMC’s dominance in the CoWoS advanced packaging market, other Taiwanese companies such as UMC, ASE Technology Holding, and Powertech Technology are also gradually entering the CoWoS advanced packaging market.

Among them, UMC expressed during an investor conference in late July 2023 that it is accelerating the deployment of silicon interposer technology and capacity to meet customer needs in the 2.5D advanced packaging sector.

UMC Expands Interposer Capacity; ASE Pushes Forward with VIPack Advanced Packaging Platform

UMC emphasizes that it is the world’s first foundry to offer an open system solution for silicon interposer manufacturing. Through this open system collaboration (UMC+OSAT), UMC can provide a fully validated supply chain for rapid mass production implementation.

On the other hand, in terms of shipment volume, ASE Group currently holds approximately a 32% market share in the global Outsourced Semiconductor Assembly and Test (OSAT) industry and accounts for over 50% of the OSAT shipment volume in Taiwan. Its subsidiary, ASE Semiconductor, also notes the recent focus on CoWoS packaging technology. ASE Group has been strategically positioning itself in advanced packaging, working closely with TSMC as a key partner.

ASE underscores the significance of its VIPack advanced packaging platform, designed to provide vertical interconnect integration solutions. VIPack represents the next generation of 3D heterogeneous integration architecture.

Leveraging advanced redistribution layer (RDL) processes, embedded integration, and 2.5D/3D packaging technologies, VIPack enables customers to integrate multiple chips into a single package, unlocking unprecedented innovation in various applications.

Powertech Technology Seeks Collaboration with Foundries; Winbond Electronics Offers Heterogeneous Integration Packaging Technology

In addition, the OSAT player Powertech Technology is actively expanding its presence in advanced packaging for logic chips and AI applications.

The collaboration between Powertech and Winbond is expected to offer customers various options for CoWoS advanced packaging, indicating that CoWoS-related advanced packaging products could be available as early as the second half of 2024.

Winbond Electronics emphasizes that the collaboration project will involve Winbond Electronics providing CUBE (Customized Ultra-High Bandwidth Element) DRAM, as well as customized silicon interposers and integrated decoupling capacitors, among other advanced technologies. These will be complemented by Powertech Technology’s 2.5D and 3D packaging services.


(Photo credit: TSMC)

Please note that this article cites information from TechNews.

