News
According to a report from Wccftech, soaring market demand has driven a significant increase in shipments of NVIDIA’s Blackwell-architecture GB200 AI servers.
NVIDIA claims the Blackwell series will be its most successful product to date. Industry sources cited by Wccftech indicate that NVIDIA’s latest GB200 AI servers are drawing significant orders, with strong demand projected to continue beyond 2025. This ongoing demand is enabling NVIDIA to secure additional orders as its newest AI products remain dominant.
The increasing demand for NVIDIA’s GB200 AI servers has pushed the revenue of Taiwanese suppliers such as Quanta, Foxconn, and Wistron beyond expectations. Reportedly, NVIDIA is expected to ship 60,000 to 70,000 GB200 AI server cabinets, each estimated to cost between USD 2 million and 3 million, which would translate into roughly USD 210 billion in annual revenue from Blackwell servers alone.
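As a rough back-of-the-envelope check (a sketch based only on the figures cited above, not a calculation from the report itself), the implied annual revenue can be computed directly:

```python
# Rough sanity check of the Blackwell revenue estimate using the cited figures.
cabinets_low, cabinets_high = 60_000, 70_000      # estimated GB200 cabinet shipments
price_low, price_high = 2_000_000, 3_000_000      # estimated cost per cabinet, USD

low_end = cabinets_low * price_low                # USD 120 billion
high_end = cabinets_high * price_high             # USD 210 billion
print(f"Implied annual revenue: USD {low_end/1e9:.0f}-{high_end/1e9:.0f} billion")
```

The USD 210 billion quoted in the report corresponds to the upper end of both the shipment and pricing ranges.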
NVIDIA’s GB200 AI servers are offered in NVL72 and NVL36 configurations, and the less powerful NVL36 is seeing greater demand, as a growing number of AI startups opt for the more financially feasible option.
With Blackwell debuting in the market by Q4 2024, NVIDIA is projected to post significant revenue, potentially surpassing the contribution of the previous Hopper architecture. Furthermore, NVIDIA has reportedly placed orders for around 340,000 CoWoS advanced packaging units with TSMC for 2025.
Notably, according to industry sources previously cited in a report from Economic Daily News, TSMC is gearing up to start production of NVIDIA’s latest Blackwell-architecture graphics processors (GPUs) on its 4nm process.
The same report further cited sources revealing that international giants such as Amazon, Dell, Google, Meta, and Microsoft will adopt NVIDIA’s Blackwell-architecture GPUs for their AI servers. As demand exceeds expectations, NVIDIA has reportedly increased its orders with TSMC by approximately 25%.
(Photo credit: NVIDIA)
News
According to industry sources cited in a report from Economic Daily News, TSMC is gearing up to start production of NVIDIA’s latest Blackwell-architecture graphics processors (GPUs) on its 4nm process. In response to strong customer demand, NVIDIA has reportedly increased its orders with TSMC by 25%.
This surge not only underscores the unprecedented boom in the AI market but also provides substantial momentum for TSMC’s performance in the second half of the year, setting the stage for an optimistic annual outlook adjustment, the report notes.
TSMC is set to hold an earnings conference call on July 18, at which it is expected to release its second-quarter financial results as well as guidance for the third quarter.
As TSMC reportedly prepares to begin production of NVIDIA’s Blackwell-architecture GPU, regarded as one of the most powerful AI chips, the ramp is anticipated to be a focal point of discussion at the upcoming earnings call.
Packed with 208 billion transistors, NVIDIA’s Blackwell-architecture GPUs are manufactured on a custom-built TSMC 4NP process, with two reticle-limit GPU dies connected by a 10 TB/s chip-to-chip link into a single, unified GPU.
The report further cited sources revealing that international giants such as Amazon, Dell, Google, Meta, and Microsoft will adopt NVIDIA’s Blackwell-architecture GPUs for their AI servers. As demand exceeds expectations, NVIDIA has reportedly increased its orders with TSMC by approximately 25%.
As NVIDIA ramps up production of its Blackwell-architecture GPUs, shipments of finished server cabinets, including the GB200 NVL72 and GB200 NVL36 models, have risen sharply in tandem. Combined shipments, initially expected to total 40,000 units, have surged to 60,000 units, a 50% increase, with the GB200 NVL36 accounting for the majority at 50,000 units.
Estimates cited in the report put the average selling price of a GB200 NVL36 server cabinet at USD 1.8 million, while a GB200 NVL72 cabinet commands an even higher price of USD 3 million. The GB200 NVL36 integrates 18 GB200 superchips (18 Grace CPUs and 36 B200 GPUs), whereas the GB200 NVL72 integrates 36 GB200 superchips (36 Grace CPUs and 72 B200 GPUs), all of which contributes to TSMC’s momentum.
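For illustration only, a minimal sketch of the cabinet-level revenue implied by these ASPs, assuming the remaining 10,000 of the 60,000 units are NVL72 cabinets (a split the report does not state explicitly):

```python
# Illustrative estimate of cabinet revenue implied by the reported ASPs.
nvl36_units, nvl36_asp = 50_000, 1_800_000   # USD 1.8M per GB200 NVL36 cabinet
nvl72_units, nvl72_asp = 10_000, 3_000_000   # assumed NVL72 share; USD 3M per cabinet

total = nvl36_units * nvl36_asp + nvl72_units * nvl72_asp
print(f"Implied cabinet revenue: USD {total/1e9:.0f} billion")   # ~USD 120 billion
```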
TSMC’s former Chairman Mark Liu, before handing over the reins in June, had already observed that demand for AI applications looked more promising than it did a year earlier. Current Chairman C.C. Wei has likewise indicated that AI applications are only just beginning, and that he is as optimistic as everyone else.
(Photo credit: TSMC)
News
Following Foxconn’s substantial order for the assembly of NVIDIA’s GB200 AI servers, the company has now exclusively secured a major order for the NVLink Switch, a key GB200 component for scaling up computing power, according to a report from Economic Daily News. The order volume is estimated at seven times that of the server cabinets. Not only is this a brand-new order, but it also carries a significantly higher gross margin than server assembly, the report noted.
While Foxconn does not comment on orders and customers, industry sources cited by the same report highlight that NVLink is an exclusive NVIDIA technology consisting of two parts: bridge technology, which connects the central processing unit (CPU) with the AI chip (GPU), and switch technology, which interconnects GPUs, enabling thousands of them to operate together and thereby maximizing their collective computing power.
Industry sources cited by Economic Daily News have stated that the key feature of the GB200 is not just its significant computing power but also its high-speed transmission capabilities. NVLink is considered the magic ingredient for enhancing this computing power.
Reportedly, the primary reason Foxconn secured the exclusive order for NVIDIA’s NVLink Switch is the two companies’ long-standing cooperation and mutual understanding. Foxconn has been a leading manufacturer of network communication equipment for years, making it a natural choice for NVIDIA to entrust with these orders.
Industry sources cited by the report further claim that, as each server cabinet requires seven NVLink switches, every GB200 server cabinet produced translates into an order for seven switches for Foxconn. Given that the profit margin on switches is considerably higher than on server assembly, the order is expected to significantly boost Foxconn’s operations.
Per the report, the world’s top seven switch manufacturers, including Dell, HP, Cisco, Nokia, and Ericsson, are all clients of Foxconn. This has enabled Foxconn to secure over 75% of the global market share in switches, firmly establishing its leading position.
Regarding the AI server market, Foxconn’s Chairman Young Liu previously revealed that the GB200 is in high demand, and he anticipates that Foxconn’s market share in AI servers could reach 40% this year.
(Photo credit: NVIDIA)
News
According to a report from Economic Daily News, Luxshare, a crucial player in the Chinese Apple supply chain, is said to be entering NVIDIA’s supply chain for the GB200, as it has announced the development of various components tailored for NVIDIA’s GB200 AI servers.
These components encompass connectors, power-related items, and cooling products. Sources cited in the same report note that Luxshare’s focus areas align closely with areas of Taiwanese expertise, setting the stage for another direct showdown with Taiwanese manufacturers.
Luxshare, previously not prominent in the server domain, has now reportedly moved into NVIDIA’s top-tier AI products, attracting market attention, especially given how swiftly it previously entered the iPhone supply chain and competed aggressively for orders against Taiwanese Apple suppliers.
As per the same report, Luxshare has revealed in its investor conference records that it has developed solutions corresponding to the NVIDIA GB200 AI server architecture, including products for electrical connection, optical connection, power management, and cooling. The company reportedly expects to offer solutions priced at approximately CNY 2.09 million and anticipates that the total market will reach hundreds of billions of CNY.
If Luxshare adopts a similar strategy of leveraging its latecomer advantage in entering the NVIDIA AI supply chain, it will undoubtedly encounter intense competition.
Industry sources cited by the report also point out that the GB200 components Luxshare claims it will supply fall in areas where Taiwanese suppliers excel.
For instance, while connectors are Luxshare’s core business, Taiwanese firms like JPC Connectivity and Lintes Tech also supply connectors for NVIDIA’s GB200 AI servers and are poised to compete directly with Luxshare in the future.
In terms of power supply, Delta Electronics leverages its expertise in integrating power, cooling, and passive components to provide a comprehensive range of AI power integration solutions, from the grid to the chip. The company handles power supply orders for NVIDIA’s Blackwell-architecture B100, B200, and GB200 servers and will likewise compete with Luxshare in the future.
When it comes to thermal management, Asia Vital Components and Auras Technology are currently the players the market is watching, and they, too, are poised to compete with Luxshare.
(Photo credit: Luxshare)
News
With the Blackwell series chips making a splash in the market, pricing has become a focal point. According to a Commercial Times report citing sources, NVIDIA founder and CEO Jensen Huang revealed in a recent interview that the Blackwell GPU will be priced at approximately USD 30,000 to 40,000, though this is only an approximate figure.
Huang emphasized that NVIDIA customizes pricing based on individual customer needs and system configurations, and that the company does not sell individual chips but rather provides comprehensive data center solutions, including networking and software-related equipment.
Reportedly, Jensen Huang stated that the global data center market is currently valued at USD 1 trillion, with total expenditures on data center hardware and software upgrades reaching USD 250 billion last year alone, a 20% increase from the previous year. He noted that NVIDIA stands to benefit significantly from this USD 250 billion investment in data centers.
According to documents recently released by NVIDIA, 19% of last year’s revenue came from a single major customer, and more than USD 9.2 billion in revenue came from a few large cloud service providers in the most recent quarter alone. Adjusting the pricing of the Blackwell chip series could attract more businesses from various industries to become NVIDIA customers.
As per the report from Commercial Times, Jensen Huang is said to be optimistic about the rapid expansion of the AI application market, emphasizing that AI computing upgrades are just beginning. Reportedly, he believes that future demand will only accelerate, allowing NVIDIA to capture more market share.
According to a previous report from TechNews, the new Blackwell architecture features a massive GPU built on TSMC’s 4-nanometer (4NP) process, integrating two independently manufactured dies with a total of 208 billion transistors. The two dies are stitched together, zipper-like, by a 10 TB/s chip-to-chip connection, officially termed the NV-HBI interface, so that they behave as a single unified GPU. Blackwell’s NVLink 5.0 interface, in turn, provides 1.8 TB/s of bandwidth, doubling the speed of the NVLink 4.0 interface on the previous-generation Hopper-architecture GPUs.
Per a report from Tom’s Hardware, a single B200 GPU can reach 20 petaflops of AI computing performance, whereas the previous-generation H100 offered a maximum of only 4 petaflops. The B200 is also paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.
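Taking the cited figures at face value (a minimal sketch; note that peak-performance numbers for different GPU generations may be quoted at different numeric precisions), the generational uplift works out as follows:

```python
# Simple ratio of the peak AI-compute figures cited above (B200 vs. H100).
b200_pflops, h100_pflops = 20, 4      # petaflops, as reported
print(f"Compute uplift: {b200_pflops / h100_pflops:.0f}x")   # 5x on the cited numbers
```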
NVIDIA invested heavily in developing the GB200. Jensen Huang revealed that its development was a monumental task, with expenditures exceeding USD 10 billion on modern GPU architecture and design alone.
Given the substantial investment, Huang reportedly confirmed that NVIDIA has priced the Blackwell-based GB200, tailored for AI and HPC workloads, at USD 30,000 to 40,000. Industry sources cited by the Commercial Times report point out that NVIDIA is particularly keen on selling supercomputers or DGX B200 SuperPODs, as the average selling price (ASP) is higher in large hardware and software deployments.
(Photo credit: NVIDIA)