IC Manufacturing, Package & Test


2023-08-30

[News] Huawei Mate 60’s Kirin 9000S: SMIC Production, Old Tech, or US Restriction Break?

According to a report by Taiwan’s TechNews, the Huawei Kirin 9000S mobile processor, dubbed by Chinese media as “4G technology with 5G speed,” debuted in the Huawei Mate 60 Pro smartphone on August 29. The phone went on sale directly, without a launch event or prior promotion, priced at 6,999 Chinese Yuan, sparking significant industry discussion.

The discussion around the Kirin 9000S stems from the fact that, for the first time since the US-China trade war began, a chip foundry has manufactured chips for Huawei, reportedly on an advanced 5-nanometer process. Does this signify a breakthrough for Chinese chip production amid US restrictions and a leap forward for China’s semiconductor industry? At present, the answer appears to be no.

According to industry insiders, the Mate 60 Pro’s Kirin 9000S chip was manufactured by SMIC. However, key aspects of production remain under US control, making these limitations difficult to break through.

Screenshots shared by users indicate that the Kirin 9000S is built on a 5nm process. Nonetheless, technical experts widely believe it is not; rather, it is built on SMIC’s N+2 process.


SMIC is the only Chinese enterprise capable of mass production with 14-nanometer FinFET technology. Both the N+1 and N+2 processes are improvements built on that 14nm FinFET technology, achieved with DUV lithography to bypass US restrictions. (The most advanced processes currently require EUV lithography machines.)

SMIC has not openly stated that N+1 and N+2 are 7nm processes. However, the chip industry generally considers N+1 roughly equivalent to 7nm LPE (Low Power Early) and N+2 roughly equivalent to the higher-performance 7nm LPP (Low Power Plus). The shipment of the Mate 60 Pro thus appears to reveal that SMIC’s N+2 process has reached maturity and entered mass production.

(Photo credit: Huawei)

2023-08-30

[News] Intel’s Processor Upgrades: Impact on TSMC’s Revenue Awaited

According to Taiwan’s TechNews, Intel has revealed the architecture and supply schedule of its new-generation data center Xeon processors, Sierra Forest and Granite Rapids, and is set to unveil the consumer processor codenamed Meteor Lake in mid-September. However, with the semiconductor market’s recovery still weak, whether Intel’s new processors can drive upgrades and benefit Taiwanese supply chain manufacturers remains uncertain, making it a market focal point.

Regarding the consumer-oriented Meteor Lake processor, industry sources suggest that it will be the first to adopt the “Intel 4” process, Intel’s first node to use EUV lithography, for mass-producing the compute (CPU) tile. TSMC will assist in production, using its 5/6-nanometer processes for the graphics tile (GFX tile), system-on-chip tile (SoC tile), and input/output tile (IOE tile), aiming for higher yields to lower production costs.

Furthermore, the Meteor Lake processor shifts from a traditional monolithic design to chiplet technology. After separating functions such as graphics, system, and I/O into individual chiplets, it employs Foveros 3D advanced packaging: through Foveros interconnects, multiple chiplets are vertically stacked into one chip. This approach increases the yield of critical tiles, reduces costs, and grants Intel greater flexibility in rapidly assembling next-generation chips.

For the upcoming Meteor Lake processor, the direct beneficiary is undoubtedly TSMC, which will produce the graphics, SoC, and I/O tiles on its 5/6-nanometer processes. This collaboration not only boosts revenue but also sustains the ongoing partnership with Intel.

However, despite Taiwanese foundries and board manufacturers securing orders for Intel’s new-generation processors, the current economic environment remains unfavorable. With a cautious and conservative outlook on consumer spending in the global market, the launch of Intel’s new products could either boost supply chain revenue or lead to increased inventory in the next phase, requiring further observation.

(Photo credit: Intel)


2023-08-29

[News] CoWoS Demand Surges: TSMC Raises Urgent Orders by 20%, Non-TSMC Suppliers Benefit

According to a report from Taiwan’s TechNews, NVIDIA has delivered impressive results in its latest financial report, coupled with an optimistic outlook for its financial projections. This demonstrates that the demand for AI remains robust for the coming quarters. Currently, NVIDIA’s H100 and A100 chips both utilize TSMC’s CoWoS advanced packaging technology, making TSMC’s production capacity a crucial factor.

Examining the core GPU market, NVIDIA holds a dominant market share of 90%, while AMD accounts for about 10%. While other companies might adopt Google’s TPU or develop customized chips, they currently lack significant operational cost advantages.

In the short term, the CoWoS shortage has kept chip supply tight. However, according to a recent report by Morgan Stanley Securities, NVIDIA believes TSMC’s CoWoS capacity will not restrict shipments of next quarter’s H100 GPUs, and it anticipates supply increasing each quarter next year. At the same time, TSMC is raising CoWoS prices for rush orders by 20%, indicating that the anticipated CoWoS bottleneck may be easing.

According to industry sources, NVIDIA is actively diversifying its CoWoS supply chain away from TSMC. UMC, ASE, Amkor, and SPIL are significant players in this effort. Currently, UMC is expanding its interposer production capacity, aiming to double its capacity to relieve the tight CoWoS supply situation.

According to Morgan Stanley Securities, TSMC’s monthly CoWoS capacity this year is around 11,000 wafers, projected to reach 25,000 wafers by the end of next year. The non-TSMC CoWoS supply chain’s monthly capacity can reach 3,000 wafers, with a planned increase to 5,000 wafers by the end of next year.
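As a rough check on these figures, the combined industry capacity implied by the Morgan Stanley estimates can be worked out directly (an illustrative calculation using only the numbers cited above, not additional sourced data):

```python
# Illustrative arithmetic from the capacity figures cited above:
# monthly CoWoS wafer capacity, TSMC vs. the non-TSMC supply chain.
tsmc_now, tsmc_next = 11_000, 25_000      # wafers/month: this year vs. end of next year
others_now, others_next = 3_000, 5_000    # non-TSMC suppliers (UMC, ASE, Amkor, SPIL)

total_now = tsmc_now + others_now         # combined capacity today
total_next = tsmc_next + others_next      # combined capacity end of next year
growth = total_next / total_now - 1       # implied industry-wide increase

print(total_now, total_next, f"{growth:.0%}")  # 14000 30000 114%
```

In other words, if both expansion plans materialize, total CoWoS capacity would more than double, consistent with the expectation that the bottleneck eases.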

(Photo credit: TSMC)

2023-08-28

[News] NVIDIA’s Financial Forecast Stands Out, Yet Short-Term Semiconductor Market Weakness Remains

According to Taiwan’s Central News Agency, NVIDIA beat expectations with its Q2 financial results and an optimistic Q3 outlook, but overall short-term semiconductor prospects remain weak.

While the semiconductor industry remains subdued, NVIDIA stands out with robust operational performance and a positive outlook. The company reported Q2 revenue of $13.51 billion, an 88% increase from the previous quarter and double the figure from the same period last year. Net income reached $6.19 billion, translating to $2.48 per share. NVIDIA anticipates Q3 revenue to further reach around $16 billion, marking a 170% YoY increase.

According to research firm TrendForce, NVIDIA’s rapid data center business growth is the primary driver. In FY4Q22, data center revenue accounted for about 42.7% of the total, surpassing gaming. In FY1Q23, it exceeded 45%, and by FY2Q24, data center revenue reached $10.32 billion, up 141% QoQ and 171% YoY, making up more than 76% of total revenue.
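The 76% share follows directly from the two revenue figures reported in this article (a quick sanity check, using only values cited above):

```python
# Reproduce the data center revenue share from the reported figures.
total_revenue = 13.51   # Q2 FY2024 total revenue, US$ billions
dc_revenue = 10.32      # Q2 FY2024 data center revenue, US$ billions

share = dc_revenue / total_revenue
print(f"{share:.1%}")   # 76.4%, consistent with "more than 76%"
```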

TrendForce notes that AI server solutions are pivotal in propelling NVIDIA’s data center growth, including AI accelerator GPUs and AI server reference architecture like HGX.

Arisa Liu, a researcher and director at Taiwan Industry Economics Services, mentioned that NVIDIA’s outstanding performance underscores its solid leadership in the AI market. She emphasized that customer demand for AI-related solutions is consistently on the rise.

Liu also mentioned that NVIDIA’s supply chain is expected to benefit in tandem. Orders for TSMC’s 7nm, 4nm, and 3nm advanced processes might increase. Advanced packaging technologies like CoWoS are expected to remain in high demand. In addition, orders for silicon intellectual property, high-speed transmission components, power supply, PCBs, chassis, and server OEMs are likely to see growth.

However, Liu noted that because AI still accounts for a relatively small share of the semiconductor market, it cannot fully offset sluggish demand in major application markets such as computers, smartphones, and consumer electronics. As a result, short-term semiconductor market conditions are expected to remain weak.

(Photo credit: NVIDIA)

2023-08-25

TrendForce Dives into NVIDIA’s Product Positioning and Supply Chain Shifts Post Earnings Release

NVIDIA’s latest financial report for FY2Q24 reveals that its data center business reached US$10.32 billion—a QoQ growth of 141% and YoY increase of 171%. The company remains optimistic about its future growth. TrendForce believes that the primary driver behind NVIDIA’s robust revenue growth stems from its data center’s AI server-related solutions. Key products include AI-accelerated GPUs and AI server HGX reference architecture, which serve as the foundational AI infrastructure for large data centers.

TrendForce further anticipates that NVIDIA will integrate its software and hardware resources. Utilizing a refined approach, NVIDIA will align its high-end, mid-tier, and entry-level GPU AI accelerator chips with various ODMs and OEMs, establishing a collaborative system certification model. Beyond accelerating the deployment of CSP cloud AI server infrastructures, NVIDIA is also partnering with entities like VMware on solutions including the Private AI Foundation. This strategy extends NVIDIA’s reach into the edge enterprise AI server market, underpinning steady growth in its data center business for the next two years.

NVIDIA’s data center business surpasses 76% of revenue on strong demand for cloud AI

In recent years, NVIDIA has been actively expanding its data center business. In FY4Q22, data center revenue accounted for approximately 42.7%, trailing its gaming segment by about 2 percentage points. However, by FY1Q23, the data center business surpassed gaming, accounting for over 45% of revenue. Starting in 2023, with major CSPs heavily investing in chatbots and various AI services for public cloud infrastructures, NVIDIA reaped significant benefits. By FY2Q24, the data center revenue share had skyrocketed to over 76%.

NVIDIA targets both Cloud and Edge Data Center AI markets

TrendForce observes and forecasts a shift in NVIDIA’s approach to high-end GPU products in 2H23. While the company has primarily focused on top-tier AI servers equipped with the A100 and H100, given positive market demand, NVIDIA is likely to prioritize the higher-priced H100 to effectively boost its data-center-related revenue growth.

NVIDIA is currently positioning the L40s as its flagship mid-tier GPU, a move with several strategic implications. Firstly, the high-end H100 series is constrained by the limited production capacity of current CoWoS and HBM technologies. In contrast, the L40s primarily uses GDDR memory; without the need for CoWoS packaging, it can be rapidly introduced to the mid-tier AI server market, filling the gap left by the A100 PCIe interface in meeting the needs of enterprise customers.

Secondly, the L40s also targets enterprise customers who don’t require large-parameter models like ChatGPT, focusing instead on more compact AI training applications in specialized fields, with parameter counts ranging from tens of billions to under a hundred billion. It can also address edge AI inference or image analysis tasks. Additionally, in light of potential geopolitical issues that might disrupt the supply of high-end H-series GPUs to Chinese customers, the L40s can serve as an alternative. As for lower-tier GPUs, NVIDIA highlights the L4 and T4 series, designed for real-time AI inference or image analysis in edge AI servers; these GPUs emphasize affordability while maintaining a high cost-performance ratio.

HGX and MGX AI server reference architectures are set to be NVIDIA’s main weapons for AI solutions in 2H23

TrendForce notes that recently, NVIDIA has not only refined its product positioning for its core AI chip GPU but has also actively promoted its HGX and MGX solutions. Although this approach isn’t new in the server industry, NVIDIA has the opportunity to solidify its leading position with this strategy. The key is NVIDIA’s absolute leadership stemming from its extensive integration of its GPU and CUDA platform—establishing a comprehensive AI ecosystem. As a result, NVIDIA has considerable negotiating power with existing server supply chains. Consequently, ODMs like Inventec, Quanta, FII, Wistron, and Wiwynn, as well as brands such as Dell, Supermicro, and Gigabyte, are encouraged to follow NVIDIA’s HGX or MGX reference designs. However, they must undergo NVIDIA’s hardware and software certification process for these AI server reference architectures. Leveraging this, NVIDIA can bundle and offer integrated solutions like its Arm CPU Grace, NPU, and AI Cloud Foundation.

It’s worth noting that, with NVIDIA expected to make significant gains in the CSP AI server market from 2023 to 2024, ODMs and OEMs will likely see a boost in overall AI server shipment volume and revenue growth. However, with NVIDIA’s strategic introduction of standardized AI server architectures like HGX and MGX, the core AI server product architecture among ODMs will become more homogenized, intensifying competition as they vie for CSP orders. Furthermore, large CSPs such as Google and AWS are leaning toward adopting in-house ASIC AI accelerator chips in the future, posing a potential threat to a portion of NVIDIA’s GPU market. This is likely one reason NVIDIA continues to roll out GPUs with varied positioning and comprehensive solutions, aiming to aggressively expand its AI business to Tier-2 data centers (like CoreWeave) and edge enterprise clients.

