Insights
Samsung Electronics has disclosed its financial results for 2Q23, reporting quarterly revenue of 60.01 trillion Korean won. Although the DS division saw a rebound in revenue, falling smartphone shipments led to a 22% YoY decline in overall revenue.
Samsung highlighted an upturn in its memory business in Q2, driven by a concentrated focus on HBM and DDR5 products. The company anticipates strong demand from AI applications, which has helped its DRAM shipments surpass expectations. In panel production, earnings from smartphone panels paralleled those of the first quarter, largely owing to sales of high-end panels, while large-panel production continues to target the high-end QD-OLED market.
On the foundry side, Samsung reported a quarterly revenue increase during the second quarter, which was bolstered by growing sales to certain American clients. Yet, fab expansion and uncertainty in short-term demand contributed to a reduction in the utilization rate, triggering a substantial decrease in operating profit. The smartphone division experienced a drop in market demand, influenced by macroeconomic conditions such as high interest rates and inflation.
TrendForce reports that Samsung projects a rebound in global demand during 2H23, which could boost its earnings. Although potential macroeconomic risks remain on the horizon, the company aims to sustain profitability through sales of high-value products and the launch of innovative new products. Given the uncertainties in demand, however, TrendForce cautions that the recovery is expected to be gradual, with commodity prices improving only if suppliers maintain ongoing production cuts.
(Photo credit: Samsung)
Insights
China’s Automotive Price War Rages On: Since May and June, some automakers have been gradually reclaiming outsourced orders for battery, motor, and electronic control systems, shifting towards in-house production. Recently, they have asked suppliers to requote for second-half orders, with Samsung, Murata, Taiyo Yuden, PSA, and Yageo actively vying for contracts.
Because certification in the automakers’ supply chain is more stringent than that of tier-1 suppliers, the majority of MLCC suppliers for battery, motor, and electronic control systems still come from Taiwan, Japan, and Korea. Among them, Korean manufacturer Samsung has made significant progress in the Chinese automotive market this year. It has been actively providing samples for certification along with competitive pricing, securing a large share of orders and displacing the Japanese manufacturers Murata and TDK, who had long held the lead.
Ongoing negotiations between automakers and suppliers are expected to conclude with finalized orders by the end of August. According to TrendForce’s channel checks, Samsung appears set to maintain its leading position with a low-price strategy, while Murata, unwilling to be drawn into a price war reminiscent of consumer electronics, will price conservatively while defending a substantial market share. Taiyo Yuden, PSA, and Yageo, though limited in their automotive product offerings, have been proactive in their bidding efforts and have secured several orders.
(Photo credit: Yageo)
Insights
Amid a continued sluggish consumer electronics market and the booming era of artificial intelligence, semiconductor manufacturers are actively targeting high-performance chips and intensifying competition over the 2nm process node.
TSMC, Samsung, and the newcomer Rapidus are all actively positioning themselves in the 2nm chip race. Let’s take a look at the progress of these three enterprises.
TSMC: Roadmap for 3nm and 2nm Unveiled
TSMC believes that, at the same power level, 2nm (N2) chip speed can increase by 15% compared to N3E, or, at the same speed, power consumption can be reduced by 30%, with a density 1.15 times that of its predecessor.
TSMC’s current roadmap for the 3nm “family” includes N3, N3E, N3P, N3X, and N3 AE. N3 is the baseline version; N3E is an improved version with further cost optimization; N3P offers enhanced performance and is planned for production in the second half of 2024; N3X focuses on high-performance computing devices and aims for mass production in 2025; and N3 AE, designed for the automotive sector, boasts greater reliability and is expected to help customers shorten their product time-to-market by 2 to 3 years.
As for 2nm, TSMC expects the N2 process to enter mass production in 2025. Media reports from June this year indicate that TSMC is fully committed and has already commenced pre-production work on 2nm chips. In July, TSMC’s supply chain revealed that the company has told equipment suppliers to begin delivering 2nm-related machines in the third quarter of next year.
Samsung Electronics: 2nm Mass Production by 2025
In June this year, Samsung announced its latest foundry technology innovations and business strategies.
Embracing the AI era, Samsung’s foundry business plans to leverage its advanced gate-all-around (GAA) process technology to provide robust support for AI applications. To that end, Samsung unveiled detailed plans and performance targets for 2nm mass production: it aims to bring the 2nm process to the mobile sector by 2025, expanding to HPC and automotive electronics in 2026 and 2027, respectively.
Samsung states that the 2nm process (SF2) offers a 12% performance improvement and 25% power efficiency increase over the 3nm process (SF3), with a 5% reduction in chip area.
Rapidus: 2nm Chip Making Progress
Established in November 2022, Rapidus gained significant attention when eight major Japanese companies (Sony Group, Toyota Motor, SoftBank, Kioxia, Denso, NTT, NEC, and MUFG) jointly announced their investment in the company. Just a month after its founding, Rapidus forged a strategic partnership with IBM to jointly develop 2nm chip manufacturing technology.
According to Rapidus’ plans, 2nm chips are set to begin trial production in 2025, with mass production commencing in 2027.
[Update] Intel: Aiming to Start Mass Production of Its 20A Process in the First Half of 2024
Intel is making a vigorous stride into the semiconductor foundry market, setting its sights on rivals like TSMC and Samsung in the arena of advanced process technologies. Intel’s ambitious roadmap includes kick-starting mass production of its 20A process in the first half of 2024, followed by an 18A process rollout in 2H24. TrendForce points out, however, that Intel has a number of significant hurdles to overcome:
Intel’s longstanding focus on manufacturing CPUs, GPUs, FPGAs, and associated I/O chipsets leaves it short of the specialized processes mastered by other foundries. Therefore, the potential success of Intel’s acquisition of Tower—a move to broaden its product line and market reach—is a matter of crucial importance.
Beyond financial segregation, the division of Intel’s actual manufacturing capabilities poses a pivotal challenge. It remains to be seen whether Intel can emulate the complete separation models like those of AMD/GlobalFoundries or Samsung LSI/Samsung Foundry, staying true to the foundry principle of not competing with clients. Adding complexity to the mix, Intel faces the potential exodus of orders from a key customer—its own design division.
(Photo credit: TSMC)
In-Depth Analyses
AI Chips and High-Performance Computing (HPC) have been continuously shaking up the entire supply chain, with CoWoS packaging technology being the latest area to experience the tremors.
In the previous piece, “HBM and 2.5D Packaging: the Essential Backbone Behind AI Server,” we discovered that the leading AI chip players, Nvidia and AMD, have been dedicated users of TSMC’s CoWoS technology. Much of the groundbreaking tech used in their flagship product series – such as Nvidia’s A100 and H100, and AMD’s Instinct MI250X and MI300 – has its roots in TSMC’s CoWoS tech.
However, with AI’s exponential growth, chip demand has skyrocketed not just from Nvidia and AMD; other giants like Google and Amazon are also catching up in the AI field, bringing an onslaught of chip orders. The surge is already testing the limits of TSMC’s CoWoS capacity. While TSMC plans to increase production in the latter half of 2023, there’s a snag: the lead time of packaging equipment is proving to be a bottleneck, severely curtailing the pace of this necessary capacity expansion.
Nvidia Shakes the Foundation of the CoWoS Supply Chain
In these times of booming demand, maintaining a stable supply is viewed as the primary goal for chipmakers, including Nvidia. While TSMC is struggling to keep up with customer needs, other chipmakers are starting to tweak their outsourcing strategies, moving towards a more diversified supply chain model. This shift is now opening opportunities for other foundries and OSATs.
Interestingly, in this reshuffling of the supply chain, UMC (United Microelectronics Corporation) is reportedly becoming one of Nvidia’s key partners in the interposer sector for the first time, with plans for capacity expansion on the horizon.
From a technical viewpoint, the interposer has always been the cornerstone of TSMC’s CoWoS process and its technological progression. As the interposer area enlarges, more HBM stacks and core components can be integrated. This is crucial for increasingly complex multi-chip designs, underscoring Nvidia’s intention to support UMC as a backup resource to safeguard supply continuity.
Meanwhile, as Nvidia secures production capacity, it is observed that the two leading OSAT companies, Amkor and SPIL (as part of ASE), are establishing themselves in the Chip-on-Wafer (CoW) and Wafer-on-Substrate (WoS) processes.
The ASE Group is no stranger to the 2.5D packaging arena. It unveiled its proprietary 2.5D packaging tech as early as 2017, a technology capable of integrating core computational elements and High Bandwidth Memory (HBM) onto the silicon interposer. This approach was once utilized in AMD’s MI200 series server GPU. Also under the ASE Group umbrella, SPIL boasts unique Fan-Out Embedded Bridge (FO-EB) technology. Bypassing silicon interposers, the platform leverages silicon bridges and redistribution layers (RDL) for integration, which provides ASE another competitive edge.
Could Samsung’s Turnkey Service Break New Ground?
In the shifting landscape of the supply chain, the turnkey service offered by Samsung’s Device Solutions division, spanning from foundry operations to Advanced Package (AVP), stands out as an emerging option that can’t be ignored.
After its split in 2018, Samsung Foundry started taking orders from customers beyond System LSI for business stability. In 2023, the AVP department, initially serving Samsung’s memory and foundry businesses, also expanded its reach to external clients.
Our research indicates that Samsung’s AVP division is making aggressive strides into the AI field. Currently in active talks with key customers in the U.S. and China, Samsung is positioning its foundry-to-packaging turnkey solutions and standalone advanced packaging processes as viable, mature options.
In terms of its technology roadmap, Samsung has invested significantly in 2.5D packaging R&D. Mirroring TSMC, the company launched two 2.5D packaging technologies in 2021: I-Cube4, capable of integrating four HBM stacks and one core component onto a silicon interposer, and H-Cube, designed to extend the packaging area by integrating an HDI PCB beneath the ABF substrate, primarily for designs incorporating six or more HBM stacks.
Moreover, recognizing Japan’s dominance in packaging materials and technologies, Samsung recently launched an R&D center there to swiftly scale up its AVP business.
Given all these circumstances, it seems only a matter of time before Samsung carves out its own significant share of the AI chip market. Despite TSMC’s industry dominance and pivotal role in AI chip advancements, the rising demand for advanced packaging is set to reshape supply chain dynamics and the future of the semiconductor industry.
(Source: Nvidia)
In-Depth Analyses
With the advancements in AIGC models such as ChatGPT and Midjourney, we are witnessing the rise of more super-sized language models, opening up new possibilities for High-Performance Computing (HPC) platforms.
According to TrendForce, by 2025, the global demand for computational resources in the AIGC industry – assuming 5 super-sized AIGC products equivalent to ChatGPT, 25 medium-sized AIGC products equivalent to Midjourney, and 80 small-sized AIGC products – would be approximately equivalent to 145,600 – 233,700 units of NVIDIA A100 GPUs. This highlights the significant impact of AIGC on computational requirements.
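The aggregate above can be sketched as a simple weighted sum over product tiers. The product counts below come from the article, but the A100s-per-product figures are hypothetical placeholders for illustration only; TrendForce's actual per-tier compute assumptions are not given here:

```python
# Estimate total accelerator demand as a weighted sum over AIGC product tiers.
# Product counts are from the article; the A100s-per-product numbers are
# hypothetical placeholders, NOT TrendForce's actual assumptions.
tiers = {
    "super-sized (ChatGPT-class)":     (5, 20_000),   # (products, A100s per product)
    "medium-sized (Midjourney-class)": (25, 3_000),
    "small-sized":                     (80,   500),
}

total_a100s = sum(count * gpus_each for count, gpus_each in tiers.values())
print(total_a100s)  # → 215000
```

With these placeholder inputs the total lands inside the article's 145,600–233,700 range, but that is by construction; changing the per-tier assumptions shifts the result accordingly.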
Additionally, the rapid development of supercomputing, 8K video streaming, and AR/VR will also lead to an increased workload on cloud computing systems. This calls for highly efficient computing platforms that can handle parallel processing of vast amounts of data.
However, a critical concern is whether hardware advancements can keep pace with the demands of these emerging applications.
HBM: The Fast Lane to High-Performance Computing
While the performance of core computing components like CPUs, GPUs, and ASICs has improved due to semiconductor advancements, their overall efficiency can be hindered by the limited bandwidth of DDR SDRAM.
For example, from 2014 to 2020, CPU performance increased over threefold, while DDR SDRAM bandwidth only doubled. Additionally, the pursuit of higher transmission performance through technologies like DDR5 or future DDR6 increases power consumption, posing long-term impacts on computing systems’ efficiency.
Recognizing this challenge, major chip manufacturers quickly turned their attention to new solutions. In 2013, AMD and SK hynix debuted their jointly developed High Bandwidth Memory (HBM), a revolutionary technology that stacks DRAM dies and places them alongside the GPU, effectively replacing GDDR SDRAM. It was recognized as an industry standard by JEDEC the same year.
In 2015, AMD introduced Fiji, the first high-end consumer GPU with integrated HBM, followed by NVIDIA’s release of the P100, the first AI server GPU with HBM, in 2016, marking the beginning of a new era for the integration of HBM into server GPUs.
HBM’s rise as the mainstream technology sought after by key players can be attributed to its exceptional bandwidth and lower power consumption when compared to DDR SDRAM. For example, HBM3 delivers 15 times the bandwidth of DDR5 and can further increase the total bandwidth by adding more stacked dies. Additionally, at system level, HBM can effectively manage power consumption by replacing a portion of GDDR SDRAM or DDR SDRAM.
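The bandwidth gap follows directly from interface width and per-pin data rate. A rough sketch comparing one HBM3 stack (1024-bit interface at 6.4 Gb/s per pin) against a single DDR5 channel; note the exact ratio depends on which DDR5 speed grade is taken as the baseline:

```python
def bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) x (transfers per second)."""
    return bus_width_bits / 8 * data_rate_gtps

hbm3 = bandwidth_gbps(1024, 6.4)  # one HBM3 stack: ~819.2 GB/s
ddr5 = bandwidth_gbps(64, 6.4)    # one DDR5-6400 channel: ~51.2 GB/s

print(hbm3 / ddr5)  # → 16.0
```

Against DDR5-6400 the per-device ratio works out to 16x, on the order of the "15 times" cited above; a slower DDR5 baseline such as DDR5-4800 widens it further, and each additional HBM stack adds another full interface's worth of bandwidth.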
As computing power demands increase, HBM’s exceptional transmission efficiency unlocks the full potential of core computing components. Integrating HBM into server GPUs has become a prominent trend, propelling the global HBM market to grow at a compound annual rate of 40-45% from 2023 to 2025, according to TrendForce.
The Crucial Role of 2.5D Packaging
In the midst of this trend, the crucial role of 2.5D packaging technology in enabling such integration cannot be overlooked.
TSMC has been laying the groundwork for 2.5D packaging technology with CoWoS (Chip on Wafer on Substrate) since 2011. This technology enables the integration of logic chips on the same silicon interposer. The third-generation CoWoS technology, introduced in 2016, allowed the integration of logic chips with HBM and was adopted by NVIDIA for its P100 GPU.
With development in CoWoS technology, the interposer area has expanded, accommodating more stacked HBM dies. The 5th-generation CoWoS, launched in 2021, can integrate 8 HBM stacks and 2 core computing components. The upcoming 6th-generation CoWoS, expected in 2023, will support up to 12 HBM stacks, meeting the requirements of HBM3.
TSMC’s CoWoS platform has become the foundation for high-performance computing platforms. While other semiconductor leaders like Samsung, Intel, and ASE are also venturing into 2.5D packaging technology with HBM integration, we think TSMC is poised to be the biggest winner in this emerging field, considering its technological expertise, production capacity, and ability to secure orders.
In conclusion, the remarkable transmission efficiency of HBM, facilitated by the advancements in 2.5D packaging technologies, creates an exciting prospect for the seamless convergence of these innovations. The future holds immense potential for enhanced computing experiences.