News
According to Taiwan's Economic Daily, TSMC Chairman Mark Liu stated on September 6 that semiconductor technology development "has reached the exit of the tunnel, and there are more possibilities beyond the tunnel; we are no longer bound by the tunnel."
Regarding TSMC’s progress in establishing a factory in the United States, Liu mentioned that this project has received support from the local government and has made significant progress in recent months. He added, “We will certainly make it very successful.”
As for the recent shortage of AI chips, Liu noted that it is not due to TSMC's wafer manufacturing capacity but rather to a sudden threefold increase in demand for CoWoS (Chip-on-Wafer-on-Substrate) advanced packaging driven by generative AI. TSMC will continue to support this demand in the short term but cannot ramp up production immediately. Liu estimated that TSMC's capacity will catch up with customer demand in about a year and a half, and he considers the capacity bottleneck a short-term phenomenon.
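Purely as an illustration of what that timeline implies: the threefold demand jump and the roughly eighteen-month catch-up window come from Liu's remarks, while the assumption of a steady compound monthly ramp in the sketch below is ours alone.

```python
# Back-of-envelope sketch: the compound monthly capacity growth needed to
# triple CoWoS output in ~18 months. The 3x and 18-month figures come from
# Liu's remarks; the steady-ramp assumption is ours, for illustration only.
demand_multiple = 3.0  # CoWoS demand suddenly tripled
months = 18            # "about a year and a half" to catch up

monthly_growth = demand_multiple ** (1 / months) - 1
print(f"Implied capacity ramp: {monthly_growth:.1%} per month")
# -> about 6.3% per month, compounded
```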
Regarding the planned initial public offering (IPO) by SoftBank Group's subsidiary Arm, Liu revealed that TSMC is evaluating whether to become an investor, with a decision expected in the next one to two weeks. He emphasized Arm's importance within the semiconductor ecosystem and expressed TSMC's desire for a successful Arm IPO.
News
According to a report by Taiwan's Central News Agency, Tien Wu, CEO of semiconductor packaging and testing giant ASE Group, believes the semiconductor industry is still working through inventory adjustments, with uncertainties remaining in the global economy. However, he maintains a positive long-term outlook, asserting that semiconductor demand remains robust. Wu also revealed that ASE Group is expanding its operations in Penang, Malaysia, and expects revenue there to more than double to US$750 million within two to three years.
SEMICON Taiwan 2023 kicks off on September 6. Discussing the economic outlook for the second half of the year, Wu noted that the semiconductor industry is well aware of the current inventory corrections and the lingering global economic uncertainties. Nevertheless, he remains relatively optimistic about the industry's long-term development.
Regarding the company's involvement in advanced packaging such as Chip-on-Wafer-on-Substrate (CoWoS), Wu mentioned that ASE Group offers corresponding services in this field. When asked how much artificial intelligence (AI) applications and advanced packaging contribute to the company's business, he said the contribution is difficult to quantify at this stage, but emphasized that AI is a significant focus for ASE Group.
In response to inquiries about whether customers have requested ASE Group to shift a portion of its production capacity outside of Taiwan (Taiwan+1) to mitigate risks, Wu clarified that there have been no specific requests from customers regarding proportional capacity transfers or deadlines for such transfers. Production capacity adjustments are primarily made flexibly, contingent on the readiness of the local supply chain. He emphasized that customer discussions regarding capacity adjustments are rational and logical.
Wu stressed that customer requests are handled in line with business logic and regulatory considerations. To meet urgent service needs, ASE Group is expanding its operations outside Taiwan; however, this neither signifies a wholesale relocation of Taiwanese production capacity nor indicates that customers have mandated such a shift.
He disclosed that ASE Group's expansion is taking place in Penang, Malaysia, where the first five-story building is expected to be completed by July next year and a second building is planned by 2025. ASE Group's Penang facility currently generates approximately US$350 million in annual revenue, projected to more than double to US$750 million within two to three years.
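For a sense of how aggressive that projection is, the implied annual growth rate can be backed out of the two reported figures. The sketch below is a back-of-envelope illustration only; the compound-annual-growth framing is our assumption, not ASE guidance.

```python
# Implied annual growth rate for ASE's Penang site, using the revenue
# figures reported above. Compound annual growth is assumed for illustration.
current_revenue = 350e6  # ~US$350M annual revenue today
target_revenue = 750e6   # projected within 2 to 3 years

for years in (2, 3):
    cagr = (target_revenue / current_revenue) ** (1 / years) - 1
    print(f"Over {years} years: {cagr:.0%} CAGR")
# -> ~46% per year over 2 years, ~29% per year over 3 years
```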
In addition to its California presence, Wu highlighted that ASE Group subsidiary ISE Labs has expanded its capacity in San Jose to meet customer demand. He emphasized that ASE Group also continues to expand its operations in Taiwan, including sites in Zhongli, Kaohsiung, and Tanzi in Taichung.
(Photo credit: ASE)
News
TSMC's shortage of CoWoS advanced packaging capacity is limiting NVIDIA's AI chip output. Reports are emerging that NVIDIA is willing to pay a premium for manufacturing capacity outside of TSMC, setting off a surge of massive overflow orders. UMC, a supplier of interposer materials for CoWoS, has reportedly raised prices for super hot runs (expedited orders) and initiated plans to double its production capacity to meet client demand. Advanced packaging provider ASE is also reportedly adjusting its pricing.
In response, both UMC and ASE declined to comment on pricing and market rumors. Addressing the CoWoS capacity issue, NVIDIA previously confirmed during its earnings call that it had qualified additional CoWoS packaging suppliers for capacity support and would collaborate with them to increase production, with industry speculation pointing to ASE and other outsourced assembly and test (OSAT) providers.
TSMC CEO C.C. Wei has openly stated that the company's advanced packaging capacity is fully utilized and that, while TSMC actively expands that capacity, it will also outsource to OSAT providers.
It's understood that the overflow effect from TSMC's inadequate CoWoS advanced packaging capacity is gradually spreading. Even as the semiconductor industry as a whole works through inventory adjustments, advanced packaging has become a market favorite.
Industry insiders point out that the interposer, which serves as the communication medium between chiplets, is a critical material in advanced packaging. With demand for advanced packaging broadly picking up, the interposer market is growing in parallel. Faced with high demand and limited supply, UMC has raised prices for super-hot-run interposer orders.
UMC revealed that it offers a comprehensive solution in the interposer field, including carriers, custom ASICs, and memory, with cooperation from multiple partner fabs forming a substantial advantage. Competitors entering this space now may lack UMC's quick responsiveness and abundant peripheral resources.
UMC emphasized that, compared with competitors, its advantage in the interposer field lies in its open architecture. UMC's interposer production currently takes place primarily at its Singapore plant, with a capacity of about 3,000 units, and the company aims to roughly double that to 6,000 to 7,000 to meet customer demand.
Industry analysts attribute TSMC's tight CoWoS advanced packaging capacity to the sudden surge in NVIDIA's orders. TSMC's CoWoS capacity had been allocated primarily to long-term partners with production schedules already set, leaving it unable to provide NVIDIA with additional capacity. Moreover, even with capacity tight, TSMC won't arbitrarily raise prices, as doing so would disrupt existing clients' production schedules. NVIDIA's move to secure additional capacity at a premium therefore likely involves temporary outsourcing partners.
(Photo credit: NVIDIA)
News
According to a report from Taiwan's TechNews, NVIDIA delivered impressive results in its latest financial report along with optimistic guidance, demonstrating that AI demand remains robust for the coming quarters. Currently, NVIDIA's H100 and A100 chips both utilize TSMC's CoWoS advanced packaging technology, making TSMC's production capacity a crucial factor.
In the core GPU market, NVIDIA holds a dominant share of about 90%, while AMD accounts for roughly 10%. Other companies might adopt Google's TPU or develop customized chips, but those alternatives currently lack significant operational cost advantages.
In the short term, the CoWoS shortage has kept chip supply tight. However, according to a recent Morgan Stanley Securities report, NVIDIA believes TSMC's CoWoS capacity won't restrict next quarter's H100 GPU shipments, and the company anticipates supply increasing every quarter next year. Meanwhile, TSMC is raising CoWoS prices by 20% for rush orders, suggesting the anticipated CoWoS bottleneck may ease.
According to industry sources, NVIDIA is actively diversifying its CoWoS supply chain beyond TSMC, with UMC, ASE, Amkor, and SPIL as significant players in this effort. UMC is currently expanding its interposer production capacity, aiming to double it to relieve the tight CoWoS supply situation.
According to Morgan Stanley Securities, TSMC's monthly CoWoS capacity this year is around 11,000 wafers, projected to reach 25,000 wafers by the end of next year. The non-TSMC CoWoS supply chain's monthly capacity can reach 3,000 wafers, with a planned increase to 5,000 wafers by the end of next year.
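Tallying those estimates suggests the total CoWoS pool more than doubles over the period while TSMC's share of it actually rises. The sketch below simply sums the Morgan Stanley figures cited above; treating the two supply pools as directly additive is our simplification.

```python
# Tally of the monthly CoWoS wafer-capacity figures cited above
# (Morgan Stanley estimates); summing the two pools is a simplification.
capacity = {
    "TSMC":     {"this year": 11_000, "end of next year": 25_000},
    "non-TSMC": {"this year":  3_000, "end of next year":  5_000},
}

for period in ("this year", "end of next year"):
    total = sum(fab[period] for fab in capacity.values())
    tsmc_share = capacity["TSMC"][period] / total
    print(f"{period}: {total:,} wafers/month total, TSMC share {tsmc_share:.0%}")
# -> this year: 14,000 wafers/month (TSMC ~79%)
# -> end of next year: 30,000 wafers/month (TSMC ~83%)
```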
(Photo credit: TSMC)
Press Releases
NVIDIA's latest financial report for FY2Q24 reveals that its data center business reached US$10.32 billion, up 141% QoQ and 171% YoY, and the company remains optimistic about its future growth. TrendForce believes the primary driver behind NVIDIA's robust revenue growth is its data center AI server solutions. Key products include AI-accelerated GPUs and the AI server HGX reference architecture, which serve as the foundational AI infrastructure for large data centers.
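As a quick sanity check on those growth rates, they can be inverted to recover the implied prior-period revenue (a 141% increase means 2.41 times the prior figure). The numbers come straight from the report above; the simple inversion is our back-of-envelope.

```python
# Back out the implied prior-period data center revenue from the reported
# growth rates (a 141% increase means 2.41x the prior figure).
revenue_fy2q24 = 10.32e9  # US$10.32 billion
qoq_growth = 1.41         # +141% quarter over quarter
yoy_growth = 1.71         # +171% year over year

prior_quarter = revenue_fy2q24 / (1 + qoq_growth)
prior_year = revenue_fy2q24 / (1 + yoy_growth)
print(f"Implied FY1Q24 data center revenue: ${prior_quarter / 1e9:.2f}B")
print(f"Implied FY2Q23 data center revenue: ${prior_year / 1e9:.2f}B")
# -> roughly $4.28B and $3.81B
```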
TrendForce further anticipates that NVIDIA will integrate its software and hardware resources, segmenting its high-end, mid-tier, and entry-level GPU AI accelerator chips across various ODMs and OEMs and establishing a collaborative system certification model. Beyond accelerating the deployment of CSP cloud AI server infrastructure, NVIDIA is also partnering with companies like VMware on solutions including the Private AI Foundation. This strategy extends NVIDIA's reach into the edge enterprise AI server market, underpinning steady growth in its data center business over the next two years.
NVIDIA's data center business surpasses 76% of total revenue due to strong demand for cloud AI
In recent years, NVIDIA has been actively expanding its data center business. In FY4Q22, data center revenue accounted for approximately 42.7% of the total, trailing the gaming segment by about 2 percentage points. By FY1Q23, however, the data center business had surpassed gaming, accounting for over 45% of revenue. Starting in 2023, as major CSPs invested heavily in chatbots and various AI services for their public cloud infrastructure, NVIDIA reaped significant benefits. By FY2Q24, the data center's share of revenue had skyrocketed to over 76%.
NVIDIA targets both Cloud and Edge Data Center AI markets
TrendForce observes and forecasts a shift in NVIDIA’s approach to high-end GPU products in 2H23. While the company has primarily focused on top-tier AI servers equipped with the A100 and H100, given positive market demand, NVIDIA is likely to prioritize the higher-priced H100 to effectively boost its data-center-related revenue growth.
NVIDIA is currently emphasizing the L40s as its flagship mid-tier GPU, a positioning with several strategic implications. Firstly, the high-end H100 series is constrained by the limited production capacity of current CoWoS and HBM technologies, whereas the L40s primarily utilizes GDDR memory; without the need for CoWoS packaging, it can be brought to the mid-tier AI server market quickly, filling the gap left by the A100 PCIe interface in meeting the needs of enterprise customers.
Secondly, the L40s also targets enterprise customers who don't require large-parameter models like ChatGPT, focusing instead on more compact AI training applications in various specialized fields, with parameter counts ranging from tens of billions to under a hundred billion; it can also handle edge AI inference and image analysis tasks. Additionally, in light of potential geopolitical issues that might disrupt the supply of high-end H-series GPUs to Chinese customers, the L40s can serve as an alternative. As for lower-tier GPUs, NVIDIA highlights the L4 and T4 series, designed for real-time AI inference and image analysis in edge AI servers; these GPUs emphasize affordability while maintaining a high cost-performance ratio.
HGX and MGX AI server reference architectures are set to be NVIDIA’s main weapons for AI solutions in 2H23
TrendForce notes that NVIDIA has recently not only refined the product positioning of its core AI GPUs but also actively promoted its HGX and MGX solutions. Although this approach isn't new to the server industry, it gives NVIDIA an opportunity to solidify its leading position. The key is NVIDIA's absolute leadership, built on the deep integration of its GPUs with the CUDA platform, which has established a comprehensive AI ecosystem and gives the company considerable negotiating power with existing server supply chains. Consequently, ODMs such as Inventec, Quanta, FII, Wistron, and Wiwynn, as well as brands like Dell, Supermicro, and Gigabyte, are encouraged to follow NVIDIA's HGX or MGX reference designs, though they must pass NVIDIA's hardware and software certification for these AI server reference architectures. Leveraging this position, NVIDIA can bundle and offer integrated solutions such as its Arm CPU Grace, NPU, and AI Cloud Foundation.
It's worth noting that for ODMs and OEMs, NVIDIA's expected gains in the CSP AI server market from 2023 to 2024 will likely boost overall AI server shipment volume and revenue. However, as NVIDIA pushes standardized AI server architectures like HGX and MGX, the core product architecture among ODMs will become more homogenized, intensifying competition for CSP orders. Furthermore, large CSPs such as Google and AWS are leaning toward adopting in-house ASIC AI accelerator chips, which poses a potential threat to a portion of NVIDIA's GPU market. This is likely one reason NVIDIA continues to roll out GPUs with varied positioning and comprehensive solutions, aiming to expand its AI business aggressively into Tier-2 data centers (like CoreWeave) and edge enterprise clients.