CoWoS


2023-06-26

HBM and 2.5D Packaging: the Essential Backbone Behind AI Servers

With the advancements in AIGC models such as ChatGPT and Midjourney, we are witnessing the rise of more super-sized language models, opening up new possibilities for High-Performance Computing (HPC) platforms.

According to TrendForce, by 2025, the global demand for computational resources in the AIGC industry – assuming 5 super-sized AIGC products equivalent to ChatGPT, 25 medium-sized AIGC products equivalent to Midjourney, and 80 small-sized AIGC products – would be approximately equivalent to 145,600 – 233,700 units of NVIDIA A100 GPUs. This highlights the significant impact of AIGC on computational requirements.

Additionally, the rapid development of supercomputing, 8K video streaming, and AR/VR will also lead to an increased workload on cloud computing systems. This calls for highly efficient computing platforms that can handle parallel processing of vast amounts of data.
However, a critical concern is whether hardware advancements can keep pace with the demands of these emerging applications.

HBM: The Fast Lane to High-Performance Computing

While the performance of core computing components like CPUs, GPUs, and ASICs has improved due to semiconductor advancements, their overall efficiency can be hindered by the limited bandwidth of DDR SDRAM.

For example, from 2014 to 2020, CPU performance increased more than threefold, while DDR SDRAM bandwidth only doubled. Additionally, pursuing higher transmission performance through technologies such as DDR5 or the future DDR6 comes at the cost of higher power consumption, which poses a long-term challenge to computing systems’ efficiency.

Recognizing this challenge, major chip manufacturers quickly turned their attention to new solutions. In 2013, AMD and SK Hynix introduced their pioneering High Bandwidth Memory (HBM) products, a revolutionary technology that stacks DRAM dies and places them alongside the GPU, effectively replacing GDDR SDRAM. HBM was recognized as an industry standard by JEDEC the same year.

In 2015, AMD introduced Fiji, the first high-end consumer GPU with integrated HBM, followed in 2016 by NVIDIA’s P100, the first AI server GPU with HBM, marking the beginning of a new era of HBM integration in server GPUs.

HBM’s rise as the mainstream technology sought after by key players can be attributed to its exceptional bandwidth and lower power consumption compared with DDR SDRAM. For example, HBM3 delivers 15 times the bandwidth of DDR5 and can further increase total bandwidth by adding more stacked dies. Additionally, at the system level, HBM can effectively manage power consumption by replacing a portion of GDDR SDRAM or DDR SDRAM.
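
To put those figures in perspective, here is a minimal back-of-the-envelope sketch (in Python) of how peak memory bandwidth follows from bus width and per-pin data rate. The 1024-bit HBM3 stack at 6.4 Gb/s per pin and the 64-bit DDR5-6400 channel used below are commonly cited reference figures assumed here for illustration, not numbers taken from this analysis.

# Back-of-the-envelope peak-bandwidth comparison (assumed interface figures for illustration).
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin data rate (Gb/s) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Assumed figures: a 1024-bit HBM3 stack at 6.4 Gb/s per pin vs. a 64-bit DDR5-6400 channel.
hbm3_per_stack = peak_bandwidth_gb_s(1024, 6.4)   # ~819 GB/s per stack
ddr5_per_channel = peak_bandwidth_gb_s(64, 6.4)   # ~51 GB/s per channel

print(f"HBM3 per stack:   {hbm3_per_stack:.1f} GB/s")
print(f"DDR5 per channel: {ddr5_per_channel:.1f} GB/s")
print(f"Ratio:            {hbm3_per_stack / ddr5_per_channel:.0f}x")

# Aggregate bandwidth scales with the number of stacks placed next to the processor,
# e.g. six HBM3 stacks would offer roughly 4.9 TB/s of peak bandwidth.
print(f"Six HBM3 stacks:  {6 * hbm3_per_stack / 1000:.1f} TB/s")

The multiple comes almost entirely from the interface width: an HBM stack exposes a 1024-bit bus through the interposer, versus 64 bits for a conventional DDR channel.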

As computing power demands increase, HBM’s exceptional transmission efficiency unlocks the full potential of core computing components. Integrating HBM into server GPUs has become a prominent trend, propelling the global HBM market to grow at a compound annual rate of 40-45% from 2023 to 2025, according to TrendForce.
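
As a rough sense of what that growth rate implies, the minimal sketch below compounds a 40% and a 45% annual rate over 2024 and 2025; the 2023 base value of 1.0 is an assumed index, not a market-size figure.

# Minimal CAGR sketch: what a 40-45% compound annual growth rate over 2023-2025 implies.
for cagr in (0.40, 0.45):
    size = 1.0  # 2023 market indexed to 1.0 (assumed baseline, not a TrendForce figure)
    for _ in (2024, 2025):
        size *= 1 + cagr
    print(f"CAGR {cagr:.0%}: 2025 market is about {size:.2f}x the 2023 level")

In other words, at that pace the HBM market would roughly double over the two-year span.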

The Crucial Role of 2.5D Packaging

Amid this trend, the crucial role of 2.5D packaging technology in enabling this integration cannot be overlooked.

TSMC has been laying the groundwork for 2.5D packaging technology with CoWoS (Chip on Wafer on Substrate) since 2011. This technology places multiple chips side by side on a shared silicon interposer. The third-generation CoWoS, introduced in 2016, allowed logic chips to be integrated with HBM and was adopted by NVIDIA for its P100 GPU.

As CoWoS technology has advanced, the interposer area has expanded to accommodate more HBM stacks. The 5th-generation CoWoS, launched in 2021, can integrate 8 HBM stacks and 2 core computing components. The upcoming 6th-generation CoWoS, expected in 2023, will support up to 12 HBM stacks, meeting the requirements of HBM3.

TSMC’s CoWoS platform has become the foundation for high-performance computing platforms. While other semiconductor leaders such as Samsung, Intel, and ASE are also venturing into 2.5D packaging with HBM integration, we think TSMC is poised to be the biggest winner in this emerging field, given its technological expertise, production capacity, and ability to secure orders.

In conclusion, the remarkable transmission efficiency of HBM, facilitated by the advancements in 2.5D packaging technologies, creates an exciting prospect for the seamless convergence of these innovations. The future holds immense potential for enhanced computing experiences.

 

2021-01-13

TSMC to Kick off Mass Production of Intel CPUs in 2H21 as Intel Shifts its CPU Manufacturing Strategies, Says TrendForce

Intel has outsourced the production of about 15-20% of its non-CPU chips, with most of the wafer starts for these products assigned to TSMC and UMC, according to TrendForce’s latest investigations. The company is planning to kick off mass production of Core i3 CPUs at TSMC’s 5nm node in 2H21, while its mid-range and high-end CPUs are projected to enter mass production at TSMC’s 3nm node in 2H22.

In recent years, Intel has experienced setbacks in the development of its 10nm and 7nm processes, which have greatly hindered its competitiveness in the market. With regards to smartphone processors, most of which are based on the ARM architecture, Apple and HiSilicon have been able to announce the most advanced mobile AP-SoCs ahead of their competitors, thanks to TSMC’s breakthroughs in process technology.

With regards to CPUs, AMD, which is also outsourcing its CPU production to TSMC, is progressively threatening Intel’s PC CPU market share. Furthermore, Intel lost CPU orders for the MacBook and Mac Mini, since both of these products are now equipped with Apple Silicon M1 processors, which were announced by Apple last year and manufactured by TSMC. The aforementioned shifts in the smartphone and PC CPU markets led Intel to announce its intention to outsource CPU manufacturing in 2H20.

TrendForce believes that increased outsourcing of its product lines will allow Intel to not only continue its existence as a major IDM, but also maintain in-house production lines for chips with high margins, while more effectively spending CAPEX on advanced R&D. In addition, TSMC offers a diverse range of solutions that Intel can use during product development (e.g., chiplets, CoWoS, InFO, and SoIC). All in all, Intel will be more flexible in its planning and have access to various value-added opportunities by employing TSMC’s production lines. At the same time, Intel now has a chance to be on the same level as AMD with respect to manufacturing CPUs with advanced process technologies.

(Cover image source: Taiwan Semiconductor Manufacturing Company, Limited)

For more information on reports and market data from TrendForce’s Department of Semiconductor Research, please click here, or email Ms. Latte Chung from the Sales Department at lattechung@trendforce.com

