CoWoS


2023-07-31

High-Tech PCB Manufacturers Poised to Gain from Remarkable Increase in AI Server PCB Revenue

Examining the impact of AI server development on the PCB industry: compared with general servers, mainstream AI servers incorporate 4 to 8 GPUs. The demand for high-frequency, high-speed data transmission increases the number of PCB layers and drives an upgrade in the CCL grade adopted. This surge in GPU integration pushes AI server PCB output value several times beyond that of general servers. The advancement also raises technological barriers, presenting an opportunity for high-tech PCB manufacturers to benefit.

TrendForce’s perspective: 

  • The increased value of AI server PCBs primarily comes from GPU boards.

Taking the NVIDIA DGX A100 as an example, its PCB can be divided into CPU boards, GPU boards, and accessory boards. The overall PCB value is about 5 to 6 times that of a general server, with approximately 94% of the incremental value attributed to the GPU boards. This is mainly because general servers typically do not include GPUs, while the NVIDIA DGX A100 is equipped with 8 GPUs.

Further analysis reveals that the CPU section, comprising CPU boards, CPU mainboards, and functional accessory boards, makes up about 20% of the overall AI server PCB value. The GPU section, including GPU boards, NVSwitch, OAM (OCP Accelerator Module), and UBB (Universal Baseboard), accounts for around 79%. Accessory boards, serving components such as power supplies, HDDs, and cooling systems, contribute only about 1% of the overall AI server PCB value.
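The value split above can be sanity-checked with some back-of-envelope arithmetic. The sketch below normalizes a general server's PCB value to 1 unit and picks the 6x multiplier from the cited 5-6x range; both the normalization and the chosen multiplier are illustrative assumptions, so the result lands near, rather than exactly on, the ~94% figure.

```python
# Illustrative arithmetic (hypothetical normalization): set a general
# server's total PCB value to 1 unit and an AI server's to ~6 units,
# then apply the value shares cited above.
general_pcb_value = 1.0
ai_pcb_multiplier = 6.0          # "5 to 6 times" -- upper end chosen here
gpu_board_share = 0.79           # GPU boards, NVSwitch, OAM, UBB
cpu_board_share = 0.20
accessory_share = 0.01

ai_pcb_value = general_pcb_value * ai_pcb_multiplier
gpu_board_value = ai_pcb_value * gpu_board_share

# A general server has no GPU boards, so the entire GPU-board value
# counts toward the increment over a general server.
increment = ai_pcb_value - general_pcb_value
gpu_share_of_increment = gpu_board_value / increment

print(f"GPU boards ≈ {gpu_share_of_increment:.0%} of the added value")  # ~95%
```

At the 6x multiplier this gives roughly 95%, consistent with the approximately 94% attribution cited above.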

  • The technological barriers of AI servers are rising, leading to a decrease in the number of suppliers.

Since AI servers require multiple card interconnections with more extensive and denser wiring than general servers, and AI GPUs have more pins and a greater number of memory chips, GPU board assemblies may reach 20 layers or more. Yield rates decline as the layer count rises.

Additionally, due to the demand for high-frequency and high-speed transmission, CCL materials have evolved from Low Loss grade to Ultra Low Loss grade. As the technological barriers rise, the number of manufacturers capable of entering the AI server supply chain also decreases.

Currently, the suppliers for AI server PCBs are as follows:

  • CPU boards: Ibiden, AT&S, Shinko, and Unimicron; mainboard PCBs: GCE and Tripod.
  • GPU boards: Ibiden. OAM PCBs: Unimicron and Zhending, with GCE, ACCL, and Tripod currently undergoing certification; the CCL supplier is EMC.
  • UBB PCBs: GCE, WUS, and ACCL, with TUC and Panasonic as the CCL suppliers.

Regarding ABF boards, Taiwanese manufacturers have not yet obtained orders for NVIDIA AI GPUs. The main reason for this is the limited production volume of NVIDIA AI GPUs, with an estimated output of only about 1.5 million units in 2023. Additionally, Ibiden’s yield rate for ABF boards with 16 layers or more is approximately 10% to 20% higher than that of Taiwanese manufacturers. However, with TSMC’s continuous expansion of CoWoS capacity, it is expected that the production volume of NVIDIA AI GPUs will reach over 2.7 million units in 2024, and Taiwanese ABF board manufacturers are likely to gain a low single-digit percentage market share.

(Photo credit: Google)

2023-06-26

HBM and 2.5D Packaging: the Essential Backbone Behind AI Server

With the advancements in AIGC models such as ChatGPT and Midjourney, we are witnessing the rise of more super-sized language models, opening up new possibilities for High-Performance Computing (HPC) platforms.

According to TrendForce, by 2025, the global demand for computational resources in the AIGC industry – assuming 5 super-sized AIGC products equivalent to ChatGPT, 25 medium-sized AIGC products equivalent to Midjourney, and 80 small-sized AIGC products – would be approximately equivalent to 145,600 – 233,700 units of NVIDIA A100 GPUs. This highlights the significant impact of AIGC on computational requirements.

Additionally, the rapid development of supercomputing, 8K video streaming, and AR/VR will also lead to an increased workload on cloud computing systems. This calls for highly efficient computing platforms that can handle parallel processing of vast amounts of data.
However, a critical concern is whether hardware advancements can keep pace with the demands of these emerging applications.

HBM: The Fast Lane to High-Performance Computing

While the performance of core computing components like CPUs, GPUs, and ASICs has improved due to semiconductor advancements, their overall efficiency can be hindered by the limited bandwidth of DDR SDRAM.

For example, from 2014 to 2020, CPU performance increased over threefold, while DDR SDRAM bandwidth only doubled. Additionally, the pursuit of higher transmission performance through technologies like DDR5 or future DDR6 increases power consumption, posing long-term impacts on computing systems’ efficiency.
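A quick illustration of the widening gap, using the rounded 3x and 2x figures above: if CPU performance triples while DRAM bandwidth only doubles, the bandwidth available per unit of compute shrinks to about two-thirds of its starting level.

```python
# Memory-wall arithmetic using the rounded figures cited above
# (2014-2020: CPU performance ~3x, DDR SDRAM bandwidth ~2x).
cpu_perf_growth = 3.0
dram_bw_growth = 2.0

bw_per_compute = dram_bw_growth / cpu_perf_growth
print(f"Bandwidth per unit of compute fell to ~{bw_per_compute:.0%} of its 2014 level")
```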

Recognizing this challenge, major chip manufacturers quickly turned their attention to new solutions. In 2013, AMD and SK Hynix debuted High Bandwidth Memory (HBM), a stacked-DRAM technology they had jointly developed that can be integrated alongside GPUs and effectively replace GDDR SDRAM. It was adopted as an industry standard by JEDEC the same year.

In 2015, AMD introduced Fiji, the first high-end consumer GPU with integrated HBM, followed in 2016 by NVIDIA's P100, the first AI server GPU with HBM, marking the beginning of a new era for server GPUs' integration with HBM.

HBM’s rise as the mainstream technology sought after by key players can be attributed to its exceptional bandwidth and lower power consumption when compared to DDR SDRAM. For example, HBM3 delivers 15 times the bandwidth of DDR5 and can further increase the total bandwidth by adding more stacked dies. Additionally, at system level, HBM can effectively manage power consumption by replacing a portion of GDDR SDRAM or DDR SDRAM.
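The bandwidth advantage can be sketched from interface width and per-pin data rate. The speed grades below (HBM3 at 6.4 Gb/s over a 1024-bit interface, a DDR5-6400 64-bit channel) are assumptions chosen for illustration; the exact multiple depends on which grades are compared, which is why this lands near, rather than exactly on, the 15x figure above.

```python
# Peak bandwidth (GB/s) = bus width (bits) x per-pin rate (Gb/s) / 8.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

# Assumed speed grades, for illustration only.
hbm3_stack = peak_bandwidth_gbs(1024, 6.4)    # 1024-bit interface, 6.4 Gb/s/pin
ddr5_channel = peak_bandwidth_gbs(64, 6.4)    # 64-bit channel, DDR5-6400

print(f"HBM3 stack:   {hbm3_stack:.1f} GB/s")     # ~819 GB/s
print(f"DDR5 channel: {ddr5_channel:.1f} GB/s")   # ~51 GB/s
print(f"Ratio: {hbm3_stack / ddr5_channel:.0f}x")
```

The per-stack figure also makes the scaling point concrete: each additional stacked HBM die group adds another full-width interface, so total bandwidth grows with the stack count.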

As computing power demands increase, HBM’s exceptional transmission efficiency unlocks the full potential of core computing components. Integrating HBM into server GPUs has become a prominent trend, propelling the global HBM market to grow at a compound annual rate of 40-45% from 2023 to 2025, according to TrendForce.
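Compounding that CAGR over the two growth years from 2023 to 2025 shows the implied overall expansion of the HBM market:

```python
# Compound the cited 40-45% CAGR across 2023 -> 2025 (two years of growth).
for cagr in (0.40, 0.45):
    growth = (1 + cagr) ** 2
    print(f"CAGR {cagr:.0%}: market ~{growth:.2f}x its 2023 size by 2025")
```

At 40-45% annually, the market roughly doubles (about 1.96x to 2.10x) over the period.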

The Crucial Role of 2.5D Packaging

In the midst of this trend, the crucial role of 2.5D packaging technology in enabling such integration cannot be overlooked.

TSMC has been laying the groundwork for 2.5D packaging technology with CoWoS (Chip on Wafer on Substrate) since 2011. This technology enables the integration of logic chips on the same silicon interposer. The third-generation CoWoS technology, introduced in 2016, allowed the integration of logic chips with HBM and was adopted by NVIDIA for its P100 GPU.

With development in CoWoS technology, the interposer area has expanded, accommodating more stacked HBM dies. The 5th-generation CoWoS, launched in 2021, can integrate 8 HBM stacks and 2 core computing components. The upcoming 6th-generation CoWoS, expected in 2023, will support up to 12 HBM stacks, meeting the requirements of HBM3.

TSMC’s CoWoS platform has become the foundation for high-performance computing platforms. While other semiconductor leaders like Samsung, Intel, and ASE are also venturing into 2.5D packaging technology with HBM integration, we think TSMC is poised to be the biggest winner in this emerging field, considering its technological expertise, production capacity, and ability to secure orders.

In conclusion, the remarkable transmission efficiency of HBM, facilitated by the advancements in 2.5D packaging technologies, creates an exciting prospect for the seamless convergence of these innovations. The future holds immense potential for enhanced computing experiences.


2021-01-13

TSMC to Kick off Mass Production of Intel CPUs in 2H21 as Intel Shifts its CPU Manufacturing Strategies, Says TrendForce

Intel has outsourced the production of about 15-20% of its non-CPU chips, with most of the wafer starts for these products assigned to TSMC and UMC, according to TrendForce’s latest investigations. While the company is planning to kick off mass production of Core i3 CPUs at TSMC’s 5nm node in 2H21, Intel’s mid-range and high-end CPUs are projected to enter mass production using TSMC’s 3nm node in 2H22.

In recent years, Intel has experienced some setbacks in the development of 10nm and 7nm processes, which in turn greatly hindered its competitiveness in the market. With regards to smartphone processors, most of which are based on the ARM architecture, Apple and HiSilicon have been able to announce the most advanced mobile AP-SoC ahead of their competitors, thanks to TSMC’s technical breakthroughs in process technology.

With regards to CPUs, AMD, which is also outsourcing its CPU production to TSMC, is progressively threatening Intel’s PC CPU market share. Furthermore, Intel lost CPU orders for the MacBook and Mac Mini, since both of these products are now equipped with Apple Silicon M1 processors, which were announced by Apple last year and manufactured by TSMC. The aforementioned shifts in the smartphone and PC CPU markets led Intel to announce its intention to outsource CPU manufacturing in 2H20.

TrendForce believes that increased outsourcing of its product lines will allow Intel to not only continue its existence as a major IDM, but also maintain in-house production lines for chips with high margins, while more effectively spending CAPEX on advanced R&D. In addition, TSMC offers a diverse range of solutions that Intel can use during product development (e.g., chiplets, CoWoS, InFO, and SoIC). All in all, Intel will be more flexible in its planning and have access to various value-added opportunities by employing TSMC’s production lines. At the same time, Intel now has a chance to be on the same level as AMD with respect to manufacturing CPUs with advanced process technologies.

(Cover image source: Taiwan Semiconductor Manufacturing Company, Limited)

For more information on reports and market data from TrendForce’s Department of Semiconductor Research, please click here, or email Ms. Latte Chung from the Sales Department at lattechung@trendforce.com
