Semiconductors


2023-07-05

Market Demand Remains Weak, No Signs of Rebound in Spot Prices for DRAM and NAND Flash

TrendForce has released the latest spot prices for DRAM and NAND Flash, indicating that market demand remains weak and there are no signs of a rebound in spot prices. The details are as follows.

DRAM Spot Market:

Similar to the contract market, the spot market continues to show weak demand, and spot prices overall have been registering small daily declines, with no indication of a turnaround anytime soon. Spot prices of both DDR4 and DDR5 products are falling as channel customers remain restrained in their stock-up activities. The average spot price of the mainstream chips (i.e., DDR4 1Gx8 2666MT/s) fell by 0.33% from US$1.506 last week to US$1.501 this week.

NAND Flash Spot Market:

Recent spot market transactions remain subdued, with only sporadic inquiries amid sluggish demand, and concluded prices of NAND Flash products are still slowly dropping due to the lack of stimulation from seasonal order pulls. Spot prices of 512Gb TLC wafers have dropped by 0.21% this week, falling to US$1.408.
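As a quick sanity check on the weekly moves quoted above, the percentage changes can be recomputed from the prices themselves. A minimal sketch in Python; the prices are the article's figures, and the helper function name is our own:

```python
# Recomputing the weekly spot-price move quoted above.
# Prices are the figures from the article; the helper is illustrative.

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# DDR4 1Gx8 2666MT/s average spot price: US$1.506 -> US$1.501
ddr4_move = pct_change(1.506, 1.501)
print(f"DDR4 weekly move: {ddr4_move:.2f}%")  # -0.33%, matching the article
```

The NAND figure works the same way in reverse: a 0.21% weekly drop to US$1.408 implies a prior-week price of roughly US$1.411.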

 

2023-07-04

Global GaN Power Device Market Set to Soar, Reaching $1.33 Billion by 2026

According to TrendForce’s “2023 GaN Power Semiconductor Market Analysis Report – Part 1,” the global GaN power device market is projected to grow from $180 million in 2022 to $1.33 billion in 2026, with a compound annual growth rate of 65%.
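The two endpoints are consistent with the stated growth rate, as a quick back-of-the-envelope check shows (a Python sketch using the report's figures; the variable names are our own):

```python
# Implied CAGR from the article's 2022 and 2026 market-size figures.
market_2022 = 0.18   # USD billions (i.e., $180 million), per the report
market_2026 = 1.33   # USD billions, per the report
years = 2026 - 2022

cagr = (market_2026 / market_2022) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 64.9%, i.e. ~65%
```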

The development of the GaN power device market is primarily driven by consumer electronics, with a focus on fast chargers as the core application. Other consumer electronic scenarios include Class D audio and wireless charging.

However, many manufacturers have already shifted their focus to the industrial market, with data centers being a key application. ChatGPT has sparked a wave of AI cloud server deployment, and GaN technology will help data centers reduce operating costs and improve server efficiency.

Simultaneously, the automotive market is also gaining attention, as OEMs and Tier 1 suppliers recognize the potential of GaN. It is expected that by around 2025, GaN will gradually penetrate low-power onboard chargers (OBC) and DC-DC converters. Looking further ahead to 2030, OEMs may consider incorporating GaN technology into traction inverters.

In terms of market competition, based on GaN power device business revenue, Power Integrations ranked first in 2022. The company has been leading the high-voltage market’s development since 2018, and its excellent GaN integrated solutions have gained wide market recognition. Other leading manufacturers include Navitas, Innoscience, EPC, GaN Systems, and Transphorm.

Additionally, the industry has been paying close attention to Infineon’s acquisition of GaN Systems. According to TrendForce’s statistics, the two companies’ combined market share was approximately 15% in 2022.

Turning to the supply chain, as mentioned earlier, the development of the GaN power device market will be driven by consumer electronics for a long time to come. The industry must therefore pursue scale and low cost, necessitating the expansion of wafer sizes. Currently, mainstream GaN power wafers still rely on 6-inch silicon substrates, with only Innoscience, X-FAB, and VIS offering 8-inch options. With a positive outlook for the long-term development of the GaN power market, several wafer manufacturers – including Infineon, STMicroelectronics, and TSMC – have announced plans to shift to 8-inch wafers in the coming years.

Furthermore, Samsung recently announced its entry into the 8-inch market and plans to provide foundry services starting from 2025, a development worth industry attention.

(Photo credit: Navitas)

2023-06-29

AI and HPC Demand Set to Boost HBM Volume by Almost 60% in 2023, Says TrendForce

High Bandwidth Memory (HBM) is emerging as the preferred solution for overcoming the memory transfer bottlenecks that the limited bandwidth of DDR SDRAM imposes on high-speed computation. HBM is recognized for its revolutionary transmission efficiency and plays a pivotal role in allowing core computational components to operate at their maximum capacity. Top-tier AI server GPUs have set a new industry standard by primarily using HBM. TrendForce forecasts that global demand for HBM will grow by almost 60% annually in 2023, reaching 290 million GB, with a further 30% growth in 2024.
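Under those forecast rates, the implied 2022 base and the 2024 volume follow directly. A sketch in Python; the 2023 figure and growth rates come from the forecast, while the derived numbers are our arithmetic, not TrendForce figures:

```python
# Working backward and forward from TrendForce's 2023 HBM demand forecast.
hbm_2023_gb = 290e6      # 290 million GB in 2023, per the forecast
growth_2023 = 0.60       # ~60% YoY growth into 2023
growth_2024 = 0.30       # ~30% YoY growth into 2024

implied_2022 = hbm_2023_gb / (1 + growth_2023)    # derived, not a quoted figure
projected_2024 = hbm_2023_gb * (1 + growth_2024)  # derived, not a quoted figure
print(f"Implied 2022 base: {implied_2022 / 1e6:.0f} million GB")  # ~181M GB
print(f"Projected 2024:    {projected_2024 / 1e6:.0f} million GB")  # ~377M GB
```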

In its forecast for 2025, TrendForce takes into account five large-scale AIGC products equivalent to ChatGPT, 25 mid-size AIGC products equivalent to Midjourney, and 80 small AIGC products; on that basis, the minimum computing resources required globally could range from 145,600 to 233,700 NVIDIA A100 GPUs. Emerging technologies such as supercomputers, 8K video streaming, and AR/VR are also expected to increase the workload on cloud computing systems as demand for high-speed computing escalates.

HBM is unequivocally a superior solution for building high-speed computing platforms, thanks to its higher bandwidth and lower energy consumption compared to DDR SDRAM. This distinction is clear when comparing DDR4 SDRAM and DDR5 SDRAM, released in 2014 and 2020 respectively, whose bandwidths only differed by a factor of two. Regardless of whether DDR5 or the future DDR6 is used, the quest for higher transmission performance will inevitably lead to an increase in power consumption, which could potentially affect system performance adversely. Taking HBM3 and DDR5 as examples, the former’s bandwidth is 15 times that of the latter and can be further enhanced by adding more stacked chips. Furthermore, HBM can replace a portion of GDDR SDRAM or DDR SDRAM, thus managing power consumption more effectively.

TrendForce concludes that the current driving force behind the increasing demand is AI servers equipped with Nvidia A100, H100, AMD MI300, and large CSPs such as Google and AWS, which are developing their own ASICs. It is estimated that the shipment volume of AI servers, including those equipped with GPUs, FPGAs, and ASICs, will reach nearly 1.2 million units in 2023, marking an annual growth rate of almost 38%. TrendForce also anticipates a concurrent surge in the shipment volume of AI chips, with growth potentially exceeding 50%.

2023-06-28

Under the Hood: How is SiC Reshaping the Automotive Supply Chain?

The global automotive industry is pouring billions of dollars into SiC semiconductors, hoping that they could be key to transforming vehicle power systems. This shift is rapidly changing the supply chain at all levels, from components to modules.

In the previous piece “SiC vs. Silicon Debate: Will the Winner Take All?,” we explored SiC’s unique physical properties. Its ability to facilitate high-frequency fast charging, increase range, and reduce vehicle weight has made it increasingly popular in the market of electric vehicles (EVs).

Research from TrendForce shows that the main inverter has become the first area of substantial penetration for SiC modules: in 2022, nearly 90% of all automotive SiC usage was in main inverters. As demand grows for longer range and quicker charging times, vehicle voltage platforms are shifting from 400V to 800V. This evolution makes SiC a strategic focus for automotive OEMs and is likely to make it a standard component in main inverters in the future.

For now, however, SiC power component suppliers commonly fall short of capacity and yield expectations – a challenge that directly affects car production schedules and has triggered a scramble for SiC capacity that is impacting the entire market segment.

Device Level: Burgeoning Strategic Alliances

Given the long-term scarcity of SiC components, leading OEMs and Tier 1 companies are vying to forge strategic partnerships or joint ventures with key SiC semiconductor suppliers, aiming to secure a steady supply of SiC.

In terms of technology, planar SiC MOSFETs currently offer more mature reliability guarantees. However, the future appears to lie in trench technology due to its cost and performance advantages.

Infineon and ROHM are leaders in trench technology, while planar-focused manufacturers like STM, Wolfspeed, and onsemi are gradually transitioning to the new structure in their next-generation products. The pace at which customers embrace this new technology is a trend to watch closely.

Module Level: Highly-customized Solutions

When it comes to key main inverter modules, more automakers prefer to define their own SiC modules: they ask semiconductor suppliers to provide only the bare die, so that chips from various suppliers remain compatible with their custom packaging modules for supply stability.

For instance, Tesla’s TPAK SiC MOSFET module serves as a model case for achieving high design flexibility. The module employs multi-tube parallelism, allowing different numbers of TPAKs to be paralleled in the same package depending on the power level of the EV drive system. The bare dies for each TPAK can be purchased from different suppliers and used across material platforms (Si IGBT, SiC MOSFET, GaN HEMT), establishing a diversified supply ecosystem.

China’s Deep Dive into SiC Module Design

In the vibrant Chinese market, automakers are accelerating investment in SiC power modules and collaborating with domestic packaging factories and international IDMs to build technical barriers.

  • Li Auto has collaborated with San’an Semiconductor to jointly establish a SiC power module packaging production line, expected to go into production in 2024. 
  • NIO is developing its own motor inverters and has signed a long-term supply agreement with SiC device suppliers like ON Semi.
  • Great Wall Motor, amidst its transformation, has also focused on SiC technology as a key strategy. Not only have they set up their own packaging production line, but they’ve also tied up with SiC substrate manufacturers by investing in Tongguang Semiconductor.

Clearly, the rising demand for SiC is redrawing the map of the value chain. We anticipate more automakers and Tier 1 companies creating their own SiC power modules tailored for 800-900V high-voltage platforms. This push will likely catalyze an influx of innovative product solutions by 2025, thereby unlocking significant market potential and ushering in a full-fledged era of EVs.

2023-06-26

HBM and 2.5D Packaging: the Essential Backbone Behind AI Server

With the advancements in AIGC models such as ChatGPT and Midjourney, we are witnessing the rise of more super-sized language models, opening up new possibilities for High-Performance Computing (HPC) platforms.

According to TrendForce, by 2025, the global demand for computational resources in the AIGC industry – assuming 5 super-sized AIGC products equivalent to ChatGPT, 25 medium-sized AIGC products equivalent to Midjourney, and 80 small-sized AIGC products – would be approximately equivalent to 145,600 – 233,700 units of NVIDIA A100 GPUs. This highlights the significant impact of AIGC on computational requirements.

Additionally, the rapid development of supercomputing, 8K video streaming, and AR/VR will also lead to an increased workload on cloud computing systems. This calls for highly efficient computing platforms that can handle parallel processing of vast amounts of data.

However, a critical concern is whether hardware advancements can keep pace with the demands of these emerging applications.

HBM: The Fast Lane to High-Performance Computing

While the performance of core computing components like CPUs, GPUs, and ASICs has improved due to semiconductor advancements, their overall efficiency can be hindered by the limited bandwidth of DDR SDRAM.

For example, from 2014 to 2020, CPU performance increased over threefold, while DDR SDRAM bandwidth only doubled. Additionally, the pursuit of higher transmission performance through technologies like DDR5 or future DDR6 increases power consumption, posing long-term impacts on computing systems’ efficiency.

Recognizing this challenge, major chip manufacturers quickly turned their attention to new solutions. In 2013, AMD and SK Hynix debuted High Bandwidth Memory (HBM), a revolutionary technology that allows memory dies to be stacked and placed alongside GPUs, effectively replacing GDDR SDRAM. It was adopted as an industry standard by JEDEC the same year.

In 2015, AMD introduced Fiji, the first high-end consumer GPU with integrated HBM, followed by NVIDIA’s release of P100, the first AI server GPU with HBM in 2016, marking the beginning of a new era for server GPU’s integration with HBM.

HBM’s rise as the mainstream technology sought after by key players can be attributed to its exceptional bandwidth and lower power consumption when compared to DDR SDRAM. For example, HBM3 delivers 15 times the bandwidth of DDR5 and can further increase the total bandwidth by adding more stacked dies. Additionally, at system level, HBM can effectively manage power consumption by replacing a portion of GDDR SDRAM or DDR SDRAM.

As computing power demands increase, HBM’s exceptional transmission efficiency unlocks the full potential of core computing components. Integrating HBM into server GPUs has become a prominent trend, propelling the global HBM market to grow at a compound annual rate of 40-45% from 2023 to 2025, according to TrendForce.
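Taking the 2023–2025 window as two compounding steps, a 40–45% compound rate implies the HBM market roughly doubles over that span. A quick check in Python; the doubling observation is our arithmetic, not a TrendForce figure:

```python
# Cumulative growth implied by a 40-45% CAGR over two compounding years.
for cagr in (0.40, 0.45):
    multiple = (1 + cagr) ** 2
    print(f"CAGR {cagr:.0%}: market grows {multiple:.2f}x over two years")
# At 40% the market grows ~1.96x; at 45%, ~2.10x.
```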

The Crucial Role of 2.5D Packaging

In the midst of this trend, the crucial role of 2.5D packaging technology in enabling such integration cannot be overlooked.

TSMC has been laying the groundwork for 2.5D packaging technology with CoWoS (Chip on Wafer on Substrate) since 2011. This technology enables the integration of logic chips on the same silicon interposer. The third-generation CoWoS technology, introduced in 2016, allowed the integration of logic chips with HBM and was adopted by NVIDIA for its P100 GPU.

With development in CoWoS technology, the interposer area has expanded, accommodating more stacked HBM dies. The 5th-generation CoWoS, launched in 2021, can integrate 8 HBM stacks and 2 core computing components. The upcoming 6th-generation CoWoS, expected in 2023, will support up to 12 HBM stacks, meeting the requirements of HBM3.

TSMC’s CoWoS platform has become the foundation for high-performance computing platforms. While other semiconductor leaders like Samsung, Intel, and ASE are also venturing into 2.5D packaging technology with HBM integration, we think TSMC is poised to be the biggest winner in this emerging field, considering its technological expertise, production capacity, and order capabilities.

In conclusion, the remarkable transmission efficiency of HBM, facilitated by the advancements in 2.5D packaging technologies, creates an exciting prospect for the seamless convergence of these innovations. The future holds immense potential for enhanced computing experiences.

 

