News
Nvidia CEO Jensen Huang announced on the 11th the company’s intention to deepen collaboration with high-tech companies in Vietnam, with a focus on fostering local expertise in AI and digital infrastructure development. Huang revealed plans to establish a chip center in Vietnam, as reported by Reuters.
According to documents released by the White House in September to enhance bilateral relations, Nvidia has invested USD 250 million in Vietnam. The company has strategically aligned with leading tech companies to implement AI technology in cloud computing, automotive, and healthcare industries.
This marks Huang’s first visit to Vietnam. During an event in Hanoi, he emphasized, “Vietnam is already our partner as we have millions of clients here,” adding, “Vietnam and Nvidia will deepen our relations, with Viettel, FPT, Vingroup, and VNG being the partners Nvidia looks to expand partnerships with.” Huang said Nvidia would also support Vietnam’s AI training and infrastructure.
Vietnam’s Minister of Planning and Investment, Nguyen Chi Dung, highlighted during the meeting on December 11th the country’s ongoing efforts to design mechanisms and incentives to attract investment in semiconductor and AI projects.
During his meeting with Vietnamese Prime Minister Pham Minh Chinh on the 10th, Huang shared the vision of establishing an R&D center, emphasizing that “the base will be for attracting talent from around the world to contribute to the development of Vietnam’s semiconductor ecosystem and digitalization.” Subsequently, on the 11th, Nguyen Chi Dung extended an invitation for Nvidia to consider establishing an R&D base in the country.
On the 11th, Nvidia engaged in discussions with the Vietnamese government and local tech companies regarding semiconductor cooperation agreements. According to insiders, Nvidia may potentially reach a technology transfer agreement with at least one Vietnamese company.
Given the strained trade relations between China and the U.S., Vietnam’s technology and manufacturing sectors are presented with a significant opportunity. The government actively seeks to enhance chip design capabilities and explore avenues for establishing a viable chip manufacturing industry.
Vietnam already serves as a pivotal IC packaging hub for global chip manufacturers. For instance, Intel’s largest IC packaging and testing facility in the world is situated in Vietnam. Despite temporary delays in the expansion of its Vietnamese factory due to power supply and bureaucratic challenges, Intel affirmed in a Reuters interview, “Vietnam will continue to be a critical part of our global manufacturing operations as demand for semiconductors grows.”
Furthermore, several chipmakers have recently set up or expanded production facilities in Vietnam. Major OSAT provider Amkor commenced operations at its new USD 1.6 billion IC packaging plant in Yen Phong 2C Industrial Park, Bac Ninh Province, Vietnam, in October this year. A month earlier, Samsung’s OSAT partner, Hana Micron, announced the inauguration of its USD 600 million IC packaging plant in Bac Giang Province.
(Image: Nvidia)
News
Nvidia CFO Colette Kress recently hinted again that the company’s next-gen chips might be outsourced to Intel Corp. During a call with semiconductor analyst Tim Arcuri at the UBS Global Technology Conference on November 28th, she was asked whether Intel would be considered as a foundry partner for the next-gen chips.
In response, she stated that there are many capable foundries in the market and that TSMC and Samsung Electronics have been great partners. Asked whether Nvidia wants a third partner, she said, “we’d love to have a third one.”
Kress also mentioned that TSMC’s and others’ US fabs may be options as well, noting, “there is nothing necessarily different but again in terms of different region. Nothing will stop us from potentially adding another foundry.”
Kress highlighted that Nvidia’s current data center GPUs designed for AI and high-performance computing (HPC) are predominantly outsourced to TSMC. However, in the previous generation, Nvidia’s gaming GPUs were mainly entrusted to Samsung for fabrication. According to Sedaily, Samsung’s foundry was responsible for manufacturing Nvidia’s GeForce RTX 30 series gaming GPUs based on the Ampere architecture.
Speaking of foundry partners for AI products, Nvidia anticipates that TSMC will remain a crucial foundry partner for producing the Hopper H200 and Blackwell B100 GPUs. Any additional orders might be entrusted to Samsung.
Nvidia CEO previously said Intel’s next-gen process test chips “look good”
Additionally, a report from Barron’s mentioned that on May 30th, during an interaction with journalists in Taiwan, Nvidia CEO Jensen Huang was asked whether Nvidia is considering diversifying its supplier base given the rising tensions between the U.S. and China. In response, Huang referred to Nvidia’s long-standing collaboration with TSMC and Samsung Electronics, stating, “We have a lot of customers depending on us. And so our supply chain resilience is very important to us. We manufacture in as many places as we can.”
At that time, Huang also expressed, “We’re open to manufacturing with Intel. And (Intel CEO) Pat (Gelsinger) has said in the past that we’re evaluating their process, and we’ve recently received the test chip results of their next generation process and the results look good.”
The Nvidia CFO’s remarks in November and the Nvidia CEO’s response in May make it clear that, beyond TSMC and Samsung, Nvidia is considering a potential third foundry partner.
(Image: NVIDIA Hopper Architecture – H100 SXM)
Insights
On November 22, 2023, NVIDIA released its financial report for the third quarter of 2023 (FY3Q24: August 2023 to October 2023), with a revenue of USD 18.1 billion. This represents a quarterly increase of 34% and a yearly increase of 206%.
NVIDIA also provided a revenue guidance for the fourth quarter of 2023 (FY4Q24: November 2023 to January 2024), with a median estimate of USD 20 billion. This reflects a quarterly increase of 10.5% and a yearly increase of 231%.
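The growth rates above imply the revenue of the comparison quarters. A minimal sanity check in Python (the implied prior-period figures are back-calculated from the article’s numbers, not taken from NVIDIA’s filings):

```python
# Back-calculate the prior-period revenue implied by the reported growth rates.
fy3q24_revenue = 18.1  # USD billion, reported for FY3Q24
qoq_growth = 0.34      # +34% quarter over quarter
yoy_growth = 2.06      # +206% year over year

prior_quarter = fy3q24_revenue / (1 + qoq_growth)  # implied FY2Q24 revenue
prior_year = fy3q24_revenue / (1 + yoy_growth)     # implied FY3Q23 revenue

print(f"Implied FY2Q24 revenue: ~USD {prior_quarter:.1f}B")  # ~13.5
print(f"Implied FY3Q23 revenue: ~USD {prior_year:.1f}B")     # ~5.9
```

The implied prior-quarter figure of roughly USD 13.5 billion is consistent with NVIDIA’s reported FY2Q24 results.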
TrendForce’s Insight:
NVIDIA continues its robust performance in the third quarter of 2023 (FY3Q24). The Datacenter division reported a revenue of USD 14.5 billion, marking a 41% quarterly increase and a staggering 279% annual increase. This segment now constitutes 80% of overall revenue, up 4 percentage points from the previous quarter.
The growth is primarily driven by the HGX Hopper GPU, along with the commencement of shipments and revenue recognition for L40S and GH200. Approximately 50% of Datacenter revenue comes from CSP customers, while the remaining 50% is contributed by Consumer Internet and enterprise clients.
In terms of revenue outlook, the China region accounts for approximately 20-25% of NVIDIA’s Datacenter revenue. The management acknowledges the significant impact of the U.S. restrictions on China’s revenue for the fourth quarter of 2023 but expresses confidence that revenue from other regions can offset this impact.
This confidence stems from the current high demand and low supply of AI chips. Notably, NVIDIA’s anticipated release of lower-capacity chips, including the HGX H20, L20 PCIe, and L2 PCIe, originally slated for November 16, 2023, is now expected to be delayed until the end of 2023, presumably due to ongoing negotiations with the U.S. Department of Commerce.
In a significant product announcement, NVIDIA introduced HGX H200 on November 14, 2023. The new offering is a high-capacity version of H100 and is fully compatible with HGX H100 systems. This compatibility ensures that manufacturers using H100 don’t need to modify server systems or software specifications when transitioning to H200.
HGX H200 is slated for release in the second quarter of 2024, with initial customers including AWS, Microsoft Azure, Google Cloud, and Oracle Cloud. Additionally, CoreWeave, Lambda, and Vultr are expected to adopt HGX H200 in their setups.
Comparing H200 SXM with H100 SXM, H200 SXM offers a 76% increase in memory capacity and a 43% increase in bandwidth, upgrading from HBM3 to HBM3e while the remaining specifications stay the same. This indicates that H200 SXM is essentially a high-capacity version of H100 SXM.
Given the sensitivity of inference performance to memory capacity and bandwidth, inference on Llama 2 with 70 billion parameters shows that H200 SXM can reach 1.9 times the performance of H100 SXM.
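The percentage uplifts cited above can be cross-checked against the publicly listed SXM specs. The figures below (80 GB / 3.35 TB/s for H100 SXM, 141 GB / 4.8 TB/s for H200 SXM) are taken from NVIDIA’s product pages rather than from this article, so treat them as assumptions:

```python
# Cross-check the memory uplift of H200 SXM over H100 SXM.
# Spec figures are assumptions from NVIDIA's public product pages.
h100 = {"memory_gb": 80, "bandwidth_tbps": 3.35}   # H100 SXM, HBM3
h200 = {"memory_gb": 141, "bandwidth_tbps": 4.8}   # H200 SXM, HBM3e

mem_gain = h200["memory_gb"] / h100["memory_gb"] - 1
bw_gain = h200["bandwidth_tbps"] / h100["bandwidth_tbps"] - 1

print(f"Memory capacity: +{mem_gain:.0%}")    # ~+76%
print(f"Memory bandwidth: +{bw_gain:.0%}")    # ~+43%
```

Both ratios line up with the 76% and 43% increases stated above.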
Moreover, NVIDIA plans to launch the B100 in 2024, truly representing the next generation of products with the new Blackwell architecture. Utilizing chiplet technology, the design is speculated to transition from H100’s single-die + 6 HBM3 configuration to a dual-die + 8 HBM3e configuration in B100, potentially doubling performance compared to H100.
(Photo credit: NVIDIA)
News
The AI landscape is witnessing a robust surge with the consecutive launches of AMD’s “Instinct MI300” series AI chips and NVIDIA’s upcoming “B100” GPU. This wave of innovation is propelling demand for AI-related outsourced semiconductor assembly and test (OSAT) services, surpassing initial estimates by over 10%. OSAT companies like ASE Holdings, King Yuan Electronics (KYEC), and Sigurd are poised to experience a notable uptick in revenue, as reported by UDN News.
According to reports, AMD is launching the “Instinct MI300” series AI chips this week, and NVIDIA plans to unveil the next-gen “B100” GPU next year. The successive release of new AI products by the two giants is boosting momentum for related OSAT collaborations.
NVIDIA is gearing up for the 2024 launch of its next-gen Blackwell-architecture B100 GPU, which is said to deliver AI performance more than twice that of the Hopper-architecture H200 GPU, signifying a substantial leap in computational prowess.
Positive Outlook in 2024 for OSATs Amid AI Chip Development
Industry sources indicate that, due to AI’s extensive computation requirements, advanced packaging is gradually becoming mainstream. This involves stacking chips and packaging them on a substrate; depending on the arrangement, it is divided into 2.5D and 3D packaging. The advantage of this packaging technology is the ability to reduce chip footprint while also lowering power consumption and costs.
The surge in AI chip orders from AMD and NVIDIA is said to have created a bottleneck in TSMC’s CoWoS advanced packaging capacity. This unexpected demand has exceeded projections for related OSATs, including ASE Holdings, KYEC, and Sigurd.
In the case of ASE Holdings, its subsidiary Siliconware Precision Industries (SPIL) possesses the advanced packaging capacity essential for generative AI chips. Joseph Tung, CFO of ASE Holdings, notes that AI is still in its early stages and is set to drive explosive growth. As AI integrates into existing and new applications, the demand for advanced packaging is expected to fuel the industry’s entry into its next super growth cycle.
For KYEC, a significant expansion in AI chip testing capacity since Q2 this year positions the company to benefit from the surge in demand.
Sigurd’s COO Tsan-Lien Yeh notes that with the release of AI phones, testing time for phone chips, which now carry APUs/NPUs for AI computing, has doubled compared to general 5G chips. Sigurd has accordingly upgraded its equipment to align with future customer needs.
(Image: ASE VIPack’s video)
News
According to a report from Expreview, due to the surge in demand for AI applications and the market’s need for more powerful solutions, NVIDIA plans to shorten the release cycle of new products from 2 years to 1 year. As for its HBM partners, although validations of various samples are still in progress, market indications lean toward SK Hynix securing the first HBM3e supply contract.
In a recent investor presentation, NVIDIA revealed its product roadmap, showcasing the data center plans for 2024 to 2025. The release time for the next-generation Blackwell architecture GPU has been moved up from Q4 2024 to the end of Q2 2024, with plans for the subsequent “X100” after its release in 2025.
According to Business Korea, NVIDIA has already signed a prioritized supply agreement with SK Hynix for HBM3e, intended for the upcoming GPU B100.
While NVIDIA aims for a diversified supply chain, it has received HBM3e samples from Micron and Samsung for verification testing, with formal agreements expected after successful validation. However, industry insiders anticipate that SK Hynix will likely secure the initial HBM3e supply contract, taking the largest share.
With this transaction, SK Hynix’s revenue for the fourth quarter of the 2023 fiscal year is poised to surpass KRW 10 trillion, marking a resurgence after a hiatus of 1 year and 3 months.
Among the NVIDIA products scheduled for release next year, the newly added H200 and B100 will incorporate 6 and 8 HBM3e stacks, respectively. As NVIDIA’s product line transitions to the next generation, the usage of HBM3e is expected to increase, contributing to SK Hynix’s profit potential.
SK Hynix is actively engaged in the development of HBM4, aiming to sustain its competitive edge by targeting completion by 2025.
TrendForce’s earlier research into the HBM market indicates that NVIDIA plans to diversify its HBM suppliers for more robust and efficient supply chain management. The progress of HBM3e, as outlined in the timeline below, shows that Micron provided its 8hi (24GB) samples to NVIDIA by the end of July, SK hynix in mid-August, and Samsung in early October.
Given the intricacy of the HBM verification process—estimated to take two quarters—TrendForce expects that some manufacturers might learn preliminary HBM3e results by the end of 2023. However, it’s generally anticipated that major manufacturers will have definite results by 1Q24. Notably, the outcomes will influence NVIDIA’s procurement decisions for 2024, as final evaluations are still underway.
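The two-quarter verification window above can be sketched as a simple date calculation. A minimal illustration in Python, approximating "two quarters" as 180 days and using the sample-delivery dates from TrendForce's timeline (end of July, mid-August, early October are taken as specific dates for illustration only):

```python
from datetime import date, timedelta

# Approximate sample-delivery dates from the timeline above (illustrative).
samples = {
    "Micron": date(2023, 7, 31),     # end of July
    "SK hynix": date(2023, 8, 15),   # mid-August
    "Samsung": date(2023, 10, 5),    # early October
}

VERIFICATION_WINDOW = timedelta(days=180)  # ~two quarters

for vendor, delivered in samples.items():
    earliest_result = delivered + VERIFICATION_WINDOW
    print(f"{vendor}: earliest definite results ~{earliest_result}")
```

Under this rough estimate, Micron and SK hynix land in 1Q24 while Samsung slips toward early 2Q24, consistent with the expectation that most major manufacturers will have definite results by 1Q24.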
(Photo credit: SK Hynix)