Artificial Intelligence


2023-07-06

ASE, Amkor, UMC and Samsung Getting a Slice of the CoWoS Market from AI Chips, Challenging TSMC

AI Chips and High-Performance Computing (HPC) have been continuously shaking up the entire supply chain, with CoWoS packaging technology being the latest area to experience the tremors.

In the previous piece, “HBM and 2.5D Packaging: the Essential Backbone Behind AI Server,” we discovered that the leading AI chip players, Nvidia and AMD, have been dedicated users of TSMC’s CoWoS technology. Much of the groundbreaking tech in their flagship product series – such as Nvidia’s A100 and H100, and AMD’s Instinct MI250X and MI300 – has its roots in TSMC’s CoWoS tech.

However, with AI’s exponential growth, chip demand has skyrocketed not just from Nvidia and AMD; other giants like Google and Amazon are also catching up in the AI field, bringing an onslaught of chip demand. The surge of orders is already testing the limits of TSMC’s CoWoS capacity. While TSMC is planning to increase its production in the latter half of 2023, there’s a snag – the lead time of the packaging equipment is proving to be a bottleneck, severely curtailing the pace of this necessary capacity expansion.

Nvidia Shakes the Foundation of the CoWoS Supply Chain

In these times of booming demand, maintaining a stable supply is viewed as the primary goal for chipmakers, including Nvidia. While TSMC is struggling to keep up with customer needs, other chipmakers are starting to tweak their outsourcing strategies, moving towards a more diversified supply chain model. This shift is now opening opportunities for other foundries and OSATs.

Interestingly, in this reshuffling of the supply chain, UMC (United Microelectronics Corporation) is reportedly becoming one of Nvidia’s key partners in the interposer sector for the first time, with plans for capacity expansion on the horizon.

From a technical viewpoint, the interposer has always been the cornerstone of TSMC’s CoWoS process and technology progression. As the interposer area enlarges, it allows more HBM stacks and core components to be integrated. This is crucial for increasingly complex multi-chip designs, underscoring Nvidia’s intention to support UMC as a backup resource to safeguard supply continuity.

Meanwhile, as Nvidia secures production capacity, it is observed that the two leading OSAT companies, Amkor and SPIL (as part of ASE), are establishing themselves in the Chip-on-Wafer (CoW) and Wafer-on-Substrate (WoS) processes.

The ASE Group is no stranger to the 2.5D packaging arena. It unveiled its proprietary 2.5D packaging tech as early as 2017, a technology capable of integrating core computational elements and High Bandwidth Memory (HBM) onto a silicon interposer. This approach was once utilized in AMD’s MI200 series server GPUs. Also under the ASE Group umbrella, SPIL boasts unique Fan-Out Embedded Bridge (FO-EB) technology. Bypassing silicon interposers, the platform leverages silicon bridges and redistribution layers (RDL) for integration, giving ASE another competitive edge.

Could Samsung’s Turnkey Service Break New Ground?

In the shifting landscape of the supply chain, the Samsung Device Solutions division’s turnkey service, spanning from foundry operations to Advanced Package (AVP), stands out as an emerging player that can’t be ignored.

After its 2018 split, Samsung Foundry started taking orders beyond System LSI for business stability. In 2023, the AVP department, initially serving Samsung’s memory and foundry businesses, has also expanded its reach to external clients.

Our research indicates that Samsung’s AVP division is making aggressive strides into the AI field. Currently in active talks with key customers in the U.S. and China, Samsung is positioning its foundry-to-packaging turnkey solutions and standalone advanced packaging processes as viable, mature options.

In terms of technology roadmap, Samsung has invested significantly in 2.5D packaging R&D. Mirroring TSMC, the company launched two 2.5D packaging technologies in 2021: I-Cube4, capable of integrating four HBM stacks and one core component onto a silicon interposer, and H-Cube, designed to extend the packaging area by placing an HDI PCB beneath the ABF substrate, primarily for designs incorporating six or more HBM stacks.

In addition, recognizing Japan’s dominance in packaging materials and technologies, Samsung recently launched an R&D center there to swiftly scale up its AVP business.

Given all these circumstances, it seems to be only a matter of time before Samsung carves out its own significant share in the AI chip market. Despite TSMC’s industry dominance and pivotal role in AI chip advancements, the rising demand for advanced packaging is set to undeniably reshape supply chain dynamics and the future of the semiconductor industry.

(Source: Nvidia)

2023-04-25

AI Sparks a Revolution Up In the Cloud

OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Bard, and now Elon Musk’s TruthGPT – what will be the next buzzword in AI? In just under six months, the AI competition has heated up, stirring ripples in the once-calm AI server market as AI-generated content (AIGC) models take center stage.

The unprecedented convenience brought by AIGC has attracted a massive number of users, with OpenAI’s mainstream model, GPT-3, receiving up to 25 million daily visits, often resulting in server overload and disconnections.

As these models evolve, their training parameters and data volumes keep increasing, making computational power even scarcer. OpenAI has reluctantly adopted measures such as paid access and traffic restrictions to stabilize server load.

High-end Cloud Computing is gaining momentum

According to TrendForce, AI servers currently have a penetration rate of merely 1% in global data centers – far from sufficient to cope with the surge in data demand from the usage side. Therefore, besides optimizing software to reduce computational load, increasing the number of high-end AI servers will be another crucial solution.

Take GPT-3 for instance. The model requires at least 4,750 AI servers with 8 GPUs each, and every similarly large language model like ChatGPT will need 3,125 to 5,000 units. Considering ChatGPT and Microsoft’s other applications as a whole, demand for AI servers is estimated to reach some 25,000 units to meet the basic computing power requirement.
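A quick sketch makes the arithmetic explicit. The 8-GPU-per-server configuration and the server counts come from the figures above; the GPU totals are simply derived from them:

```python
import math

GPUS_PER_SERVER = 8  # per the AI server configuration cited above

def servers_needed(total_gpus: int, gpus_per_server: int = GPUS_PER_SERVER) -> int:
    """Round up: a partially filled server still counts as a full unit."""
    return math.ceil(total_gpus / gpus_per_server)

# The 4,750 servers cited for GPT-3 imply 4,750 x 8 = 38,000 GPUs.
assert servers_needed(4_750 * GPUS_PER_SERVER) == 4_750

# 25,000 servers across ChatGPT and Microsoft's other applications
# would imply a fleet on the order of 200,000 GPUs.
print(25_000 * GPUS_PER_SERVER)  # 200000
```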

While the emerging applications of AIGC and their vast commercial potential have revealed the technical roadmap ahead, they have also shed light on bottlenecks in the supply chain.

The down-to-earth problem: cost

Compared to general-purpose servers that use CPUs as their main computational power, AI servers rely heavily on GPUs; Nvidia’s DGX A100 and DGX H100, with computational performance of up to 5 PetaFLOPS, serve as the primary AI server computing platforms. Given that GPUs account for over 70% of server cost, the growing adoption of high-end GPUs has made the architecture far more expensive.

Moreover, a significant amount of data transmission occurs during the operation, which drives up the demand for DDR5 and High Bandwidth Memory (HBM). The high power consumption generated during operation also promotes the upgrade of components such as PCBs and cooling systems, which further raises the overall cost.

Not to mention the technical hurdles posed by the complex design architecture – for example, a new approach to heterogeneous computing is urgently required to enhance overall computing efficiency.

The high cost and complexity of AI servers have inevitably limited their development to only large manufacturers. Two leading companies, HPE and Dell, have taken different strategies to enter the market:

  • HPE has continuously strengthened its cooperation with Google and planned to convert its entire portfolio to an as-a-service model by 2022. It also acquired the startup Pachyderm in January 2023 to launch cloud-based supercomputing services, making it easier to train and develop large models.
  • In March 2023, Dell launched its latest PowerEdge series servers, which offer options equipped with NVIDIA H100 or A100 Tensor Core GPUs and NVIDIA AI Enterprise. They use 4th Gen Intel Xeon Scalable processors and introduce Dell’s Smart Flow software, catering to demands such as data centers, large public clouds, AI, and edge computing.

With the booming market for AIGC applications, we seem to be one step closer to a future metaverse centered around fully virtualized content. However, it remains unclear whether the hardware infrastructure can keep up with the surge in demand. This persistent challenge will continue to test the capabilities of cloud server manufacturers to balance cost and performance.

(Photo credit: Google)

2022-09-22

Global AI Chip Market Estimated to Reach US$39 Billion in 2022, ASIC Chip Sector Set to Grow Fastest

According to TrendForce, the scope of IoT devices continues to expand under a wave of global digitization, with smart machines spanning industrial robots, AGVs/AMRs, smartphones, smart speakers, smart cameras, etc. In addition, the deepening application of technologies such as autonomous driving, image recognition, speech and semantic recognition, and computing in various fields has catalyzed rapid growth of the AI chip and technology markets. The global AI chip market is expected to reach US$39 billion in 2022, with a growth rate of 18.2%.

Since AI chips are currently used mostly in cloud computing, security, robotics, and automotive applications, they will enter a period of accelerated growth in 2023. In particular, cloud computing and automotive applications will lead rapid market growth. By 2025, the global AI chip market is expected to reach US$74 billion, with a CAGR of 23.8% from 2022 to 2025.
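The stated figures are internally consistent; a short check of the implied compound annual growth rate, using only the numbers in the text:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two market-size estimates."""
    return (end / start) ** (1 / years) - 1

# US$39 billion in 2022 growing to US$74 billion in 2025 (3 years)
rate = cagr(39, 74, 2025 - 2022)
print(f"{rate:.1%}")  # 23.8%, matching the stated CAGR
```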

ASIC chips have wide-ranging prospects, with market share in AI chips increasing year by year

From 2020 to 2021, the amount of data generated by data centers and various terminal devices continued to rise, pushing chip technology to its limit, with demand for computing power becoming more difficult to meet. Therefore, many manufacturers have successively invested in high-end IC design and development. With increasing demand from various parties, the AI chip market is set to grow rapidly, with its size expected to reach US$93 billion in 2026. CPUs and GPUs still occupy the lion’s share of the AI chip market and are growing steadily, while the ASIC market has broad prospects: its advantages can help users in data processing, consumer electronics, telecommunication systems, and industrial computing develop product portfolios and shorten the innovation cycle of products, services, and systems.

TrendForce research shows that CPU, GPU, and ASIC chips will account for 33%, 34%, and 26% of the AI market, respectively, in 2026. The ASIC chip market will grow the fastest for two reasons. First, demand in the consumer electronic equipment market has increased and most developers of small and medium-sized equipment prefer 7nm ASICs. Second, workloads and structural demands of 5G, low-orbit satellite communications, cloud, and edge computing continue to increase, as telecommunications systems are the largest end-use market.
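The three shares above do not sum to 100%; the remainder, which the text does not break out (other chip types such as FPGAs would be a plausible assumption, not stated in the source), can be computed directly:

```python
# 2026 AI chip market shares cited by TrendForce, in percent
shares_2026 = {"CPU": 33, "GPU": 34, "ASIC": 26}

other = 100 - sum(shares_2026.values())
print(other)  # 7, the unattributed remainder of the market
```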


(Image credit: Unsplash)

2021-07-30

Domestically Manufactured Substitutes in the Post-Pandemic Era Become Key Enabler of China’s 200,000 Industrial Robots

The onset of the COVID-19 pandemic in 2020 compelled the manufacturing industry to move towards a future of digitization and automation that attempts to reduce labor associated with production and operation. In light of this shift, the use of industrial robots quickly expanded from its earlier applications in the automotive industry to other industries, particularly pharmaceutical production and healthcare, which have grown rapidly in demand in the post-pandemic era.

The Chinese market, more specifically, has seen remarkable growth in industrial robot production, rising from just under 30,000 units in October 2020 to 45,000 units in the months that followed, according to TrendForce’s latest investigations. As of March 2021, about 30,000 industrial robots were produced each month. In addition, annual sales of industrial robots in 2020 reached about 170,000 units, a 15% YoY increase. Non-automotive industries – namely the electronics industry and the metal fabrication industry (which spans robotic machining, freight manufacturing, and rail manufacturing) – accounted for about 70% of industrial robot sales in China.

As labor costs in China gradually increased, the cost advantages associated with domestic production declined accordingly. As such, industrial robots, the production of which began approaching economies of scale, became one of the key drivers of the Chinese manufacturing industry’s shift towards high-end, advanced manufacturing. Companies such as Estun, STEP, GSK, and Inovance have been either increasing their R&D funding or acquiring other companies in order to raise their technological competencies, and their efforts have been accelerating China’s goal of “domestically manufactured substitutes”.

Articulated robots and SCARA robots are, in order, the two most prevalent types of industrial robots

In the industrial robot market, articulated robots are the most widely adopted option, used primarily across three industries: automotive, metal fabrication, and home appliances. SCARA robots, on the other hand, represent the other mainstream type of industrial robot and are mainly used in electronics, lithium-ion battery, and PV panel manufacturing. Aside from these two options, collaborative robots are also used for manufacturing metal products, ICT products, and consumer electronics.

In the Chinese market, for instance, articulated robots from major foreign suppliers have a significant advantage in the automotive, metal fabrication, and home appliances industries. These suppliers had a 73% share in the heavy payload (>20kg) segment and a 51% share in the light payload (≤20kg) segment in the articulated robot market last year, with ABB, FANUC, KUKA, and Yaskawa possessing most of these market shares.

Relatively, Estun, STEP, Siasun, GSK, and other Chinese industrial robot suppliers were instead focused on cultivating their presence among SMEs in tier 2 and tier 3 cities. These companies’ products are now used across a wide variety of applications in the automotive manufacturing (including automotive components and NEVs), metal fabrication, home appliances, and food/beverages sectors.

In particular, industrial robot-based production lines for whole vehicles have already been deployed in these cities’ automotive manufacturing industries. By comparison, major Chinese suppliers had a 20% market share in the heavy payload (>20kg) segment and a 22% share in the light payload (≤20kg) segment last year. Notably, Chinese suppliers possessed a slight advantage in the latter segment because metal fabrication and home appliances manufacturing, compared to automotive manufacturing, have relatively less stringent requirements regarding product compactness and stability.

(Cover image source: International Federation of Robotics, IFR)

