Semiconductors


2023-09-11

[News] TSMC Intensifies Silicon Photonics R&D, Rumored Collaboration with Broadcom and NVIDIA

According to a report by Taiwan’s Economic Daily, AI is driving a massive demand for data transmission, and silicon photonics and Co-Packaged Optics (CPO) have become new focal points in the industry. TSMC is actively entering this field and is rumored to be collaborating with major customers such as Broadcom and NVIDIA to jointly develop these technologies. The earliest large orders are expected to come in the second half of next year.

TSMC has already assembled a research and development team of over 200 people, aiming to seize the business opportunities in the emerging market of ultra-high-speed computing chips based on silicon photonics, which are expected to arrive gradually starting next year.

Regarding these rumors, TSMC has stated that they do not comment on customer and product situations. However, TSMC has a high regard for silicon photonics technology. TSMC Vice President Douglas Yu recently stated publicly, “If we can provide a good silicon photonics integration system, it can address two key issues: energy efficiency and AI computing capability. This could be a paradigm shift. We may be at the beginning of a new era.”

Silicon photonics was a hot topic at the recent SEMICON Taiwan 2023, with major semiconductor giants like TSMC and ASE giving related keynote speeches. This surge in interest is mainly due to the proliferation of AI applications, which has raised the question of how to make data transmission faster and reduce signal latency. Traditional electrical signal transmission no longer meets these demands, and silicon photonics, which converts electrical signals into faster optical transmission, has become the industry's highly anticipated next-generation technology for boosting high-volume data transmission speeds.

Industry reports suggest that TSMC is currently collaborating with major customers like Broadcom and NVIDIA to develop new products in the field of silicon photonics and Co-Packaged Optics. The manufacturing process technology ranges from 45 nanometers to 7 nanometers, with mass production slated for 2025, which is expected to bring new business opportunities to TSMC.

Industry sources reveal that TSMC has already organized a research and development team of approximately 200 people. In the future, silicon photonics is expected to be incorporated into CPUs, GPUs, and other computing processors. By replacing internal electrical transmission lines with faster optical transmission, computing capabilities are expected to increase by several tens of times compared with existing processors. The technology is still at the research and academic-paper stage, but the industry has high hopes that it will become a new driver of explosive growth for TSMC's operations in the coming years.

(Photo credit: Google)

2023-09-08

[News] PSMC to Launch Affordable AI Chips Next Year

According to a report by Taiwan’s Commercial Times, the semiconductor market is expected to slow down this year. PSMC Chairman Frank Huang estimated that the current wave of semiconductor inventory clearance will not be completed until the end of the first quarter of next year, and that overall market conditions next year are still not expected to rebound strongly.

When asked about the mature wafer fabs in mainland China aggressively capturing market share this year with low prices, Frank Huang emphasized that this was anticipated. He further stated that PSMC is planning to launch affordable AI chips primarily targeting the consumer market next year, completely differentiating them from NVIDIA’s high-priced products. Given the large scale of the consumer market, he expressed optimism regarding future shipment growth.

Huang emphasized that PSMC’s planned AI chips function like miniature computers. Currently, international chip manufacturers offer AI chips with unit prices as high as $200,000, which makes widespread adoption in the consumer market impossible. Therefore, the AI chips PSMC plans to launch next year will have lower prices and will be specifically tailored for the massive consumer market. He gave examples such as affordable AI features being integrated into toys and household appliances; toys, for instance, will be able to recognize their owners and engage in voice interactions.

Huang mentioned that, because they are targeting affordability and mass appeal, these AI chips will be produced using a 28-nanometer process and are expected to contribute to revenue through formal shipments next year. With a focus on the consumer market, Huang is optimistic about the future shipments and business contributions of these AI chips.

2023-09-08

[News] Reportedly, TSMC’s U.S. Factory Plans Small-Scale Trial Line for Q1 2024

According to a report by Taiwan’s Money DJ, the production schedule for TSMC’s semiconductor foundry in the United States has been delayed until 2025, raising concerns among observers. However, Chairman Mark Liu, in an interview on the 6th, stated that there has been significant progress over the past five months and expressed confidence in the project’s success. Industry sources have indicated that TSMC’s U.S. facility may alter its ramp-up strategy by first establishing a mini-line for trial production, with the expectation of having it in place by the first quarter of 2024.

TSMC’s Fab 21 Phase 1 construction began in April 2021, originally slated for early 2024 production. However, challenges such as a shortage of skilled equipment installation personnel, local union protests, and differences in overseas safety regulations have caused delays in equipment installation. This has compelled TSMC to adjust its plans, and the expected production timeline is now set for 2025, representing a one-year delay.

Industry analysts have noted that the efficiency of equipment move-in at TSMC’s Arizona plant is only about one-third that of its Taiwan facilities. At the current pace of progress, the time required from equipment installation to actual production could be substantial. TSMC has therefore decided to change its earlier ramp-up strategy and first establish a mini-line with an initial estimated monthly capacity of about 4,000 to 5,000 wafers. This approach aims to secure some level of production output while mitigating potential contract-breach issues arising from production delays.

(Photo credit: TSMC)

2023-09-08

Continuing Moore’s Law: Advanced Packaging Enters the 3D Stacked CPU/GPU Era

As applications like AIGC, 8K, AR/MR, and others continue to develop, 3D IC stacking and heterogeneous integration of chiplets have become the primary solutions to meet future high-performance computing demands and extend Moore’s Law.

Major companies like TSMC and Intel have been expanding their investments in heterogeneous integration manufacturing and related research and development in recent years. Additionally, EDA leader Cadence has taken the industry lead by introducing the “Integrity 3D-IC” platform, an integrated solution for design planning, realization, and system analysis and simulation tools, marking a significant step toward 3D chip stacking.

Differences between 2.5D and 3D Packaging

The main difference between 2.5D and 3D packaging technologies lies in how the chips are arranged. 2.5D packaging places chips side by side on an interposer or connects them through silicon bridges, and is primarily used for assembling logic processing chips with high-bandwidth memory. 3D packaging, on the other hand, stacks chips vertically and mainly targets high-performance logic chips and SoC manufacturing.

CPU and HBM Stacking Demands

With the rapid development of applications like AIGC, AR/VR, and 8K, it is expected that a significant amount of computational demand will arise, particularly driving the need for parallel computing systems capable of processing big data in a short time. To overcome the bandwidth limitations of DDR SDRAM and further enhance parallel computing performance, the industry has been increasingly adopting High-Bandwidth Memory (HBM). This trend has led to a shift from the traditional “CPU + memory (such as DDR4)” architecture to the “Chip + HBM stacking” 2.5D architecture. With continuous growth in computational demand, the future may see the integration of CPU, GPU, or SoC through 3D stacking.
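
For a rough sense of the bandwidth gap driving this shift (illustrative figures based on representative published specifications, not numbers from the original article):

DDR4-3200, one 64-bit channel: 3,200 MT/s × 8 B ≈ 25.6 GB/s
HBM2E, one 1,024-bit stack at 3.2 Gb/s per pin: (1,024 × 3.2 Gb/s) ÷ 8 ≈ 410 GB/s

This order-of-magnitude gap per device is a key reason HBM has displaced conventional DDR in bandwidth-hungry parallel computing systems.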

3D Stacking with HBM Prevails, but CPU Stacking Lags Behind

HBM was introduced in 2013 as a 3D stacked architecture for high-performance SDRAM. Over time, the stacking of multiple layers of HBM has become widespread in packaging, while the stacking of CPUs/GPUs has not seen significant progress.

The main reasons for this disparity can be attributed to three factors: 1. Thermal conduction, 2. Thermal stress, and 3. IC design. First, 3D stacking has historically performed poorly in terms of thermal conduction, which is why it has been applied mainly to memory stacking; because memory operations generate far less heat than logic operations, the thermal conduction issues in current memory-stacking products can be largely disregarded.

Second, thermal stress issues arise from the mismatch in coefficients of thermal expansion (CTE) between materials, as well as from the stresses introduced by chip thinning and added metal layers. The complex stress distribution in stacked structures has a significant negative impact on product reliability.
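
As a simplified illustration (a standard thin-film approximation, not a formula from the original article), the biaxial stress induced in a bonded layer by a CTE mismatch scales roughly as

σ ≈ (E / (1 − ν)) × (α1 − α2) × ΔT

where E is Young’s modulus, ν is Poisson’s ratio, α1 and α2 are the CTEs of the two bonded materials, and ΔT is the temperature excursion: the larger the CTE mismatch or the thermal swing, the higher the stress in the stacked structure.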

Finally, IC design challenges stem from a lack of EDA tools, as traditional CAD tools are inadequate for handling 3D design rules. Developers must create their own tools to address process requirements, and the complexity of 3D packaging design further increases design, manufacturing, and testing costs.

How EDA Companies Offer Solutions

Cadence, during the LIVE Taiwan 2023 user annual conference, highlighted its years of effort in developing solutions. They have introduced tools like the Clarity 3D solver, Celsius thermal solver, and Sigrity Signal and Power Integrity, which can address thermal conduction and thermal stress simulation issues. When combined with Cadence’s comprehensive EDA tools, these offerings contribute to the growth of the “Integrity 3D-IC” platform, aiding in the development of 3D IC design.

“3D IC” represents a critical design trend in semiconductor development, but it involves greater challenges and complexity than conventional chip design projects. In addition to the challenges of logic IC design, analog and multi-physics simulations are also required, making cross-platform design tools indispensable. The tools provided by EDA leader Cadence are expected to strengthen the 3D IC design tool platform, lowering the technological barriers to stacking CPUs, GPUs, or SoCs to enhance chip computing performance.

This article is from TechNews, a collaborative media partner of TrendForce.

(Photo credit: TSMC)

2023-09-07

Can China’s Indigenous AI Chips Compete with NVIDIA?

In its FY2Q24 earnings report, released in 2023, NVIDIA disclosed that the U.S. government had imposed controls on its AI chips destined for the Middle East. However, on August 31, 2023, the U.S. Department of Commerce stated that it had “not prohibited the sale of chips to the Middle East” and declined to comment on whether new requirements had been imposed on specific U.S. companies. Neither NVIDIA nor AMD has responded to the issue.

TrendForce’s analysis:

  • Close ties between Middle Eastern countries and China raise U.S. concerns:

In NVIDIA’s FY2Q24 earnings report, it mentioned, “During the second quarter of fiscal year 2024, the USG informed us of an additional licensing requirement for a subset of A100 and H100 products destined to certain customers and other regions, including some countries in the Middle East.” It is speculated that the U.S. is trying to prevent high-speed AI chips from flowing into the Chinese market via the Middle East. This has led to controls on the export of AI chips to the Middle East.

Since August 2022, the U.S. has imposed controls on NVIDIA A100, H100, AMD MI100, MI200, and other AI-related GPUs, restricting the export of AI chips with bidirectional transfer rates exceeding 600GB/s to China. Saudi Arabia had already signed a strategic partnership with China in 2022 for cooperation in the digital economy sector, including AI, advanced computing, and quantum computing technologies. Additionally, the United Arab Emirates has expressed interest in AI cooperation with China. There have been recent reports of Saudi Arabia heavily acquiring NVIDIA’s AI chips, which has raised concerns in the U.S.

  • Huawei is expected to release AI chips comparable to NVIDIA A100 in the second half of 2024; competition is yet to be observed:

Affected by U.S. sanctions, Chinese companies are vigorously developing AI chips. iFlytek plans to launch a new general-purpose LLM (Large Language Model) in October 2023, and the Ascend 910B AI chip, co-developed with Huawei, is expected to hit the market in the second half of 2024, with performance claimed to rival that of the NVIDIA A100. In fact, Huawei had already introduced the Ascend 910, which matched the performance of NVIDIA’s V100, back in 2019. Considering that Huawei’s Kirin 9000s chip powers the flagship Mate 60 Pro smartphone released in August 2023, it is highly likely that Huawei can produce products with performance comparable to the A100.

However, it is important to note that NVIDIA announced the A100 back in 2020, meaning that even if Huawei successfully launches its new AI chip, it will already be four years behind NVIDIA. Given the expected 7nm process for Huawei’s Ascend 910B and NVIDIA’s plan to release the 3nm-based Blackwell-architecture B100 GPU in the second half of 2024, Huawei will also lag two generations behind in chip fabrication technology. With LLM parameter counts doubling annually, the competitiveness of Huawei’s new AI chip remains to be seen.

  • China remains NVIDIA’s stronghold in the short term:

Despite the active development of AI chips by Chinese IC design houses, NVIDIA’s AI chips remain the preferred choice for training LLMs among Chinese cloud companies. Looking at the revenue performance of the leading Chinese AI chip company, Cambricon, its revenue for the first half of 2023 was only CNY 114 million, a YoY decrease of 34%. While being added to the U.S. Entity List was a major reason for the revenue decline, NVIDIA’s dominance of the vast Chinese AI market is also a contributing factor. It is estimated that NVIDIA’s market share in the Chinese GPU market for AI training exceeded 95% in the first half of 2023. In fact, in the second quarter of 2023, the China market accounted for 20-25% of NVIDIA’s Data Center segment revenue.

The main reason is that the Chinese AI ecosystem remains quite fragmented, making it difficult to compete with NVIDIA’s CUDA ecosystem. Chinese companies are therefore actively engaged in software development, but building an ecosystem attractive enough to lure Chinese CSPs away in the short term remains challenging. Consequently, NVIDIA is expected to continue to dominate the Chinese market for the next two to three years.

(Photo credit: NVIDIA)
