News
The market was originally concerned that NVIDIA might face a demand lull during the transition from its Hopper series GPUs to the Blackwell series. However, company executives clearly stated during the latest earnings call that this is not the case.
According to reports from MarketWatch and CNBC, NVIDIA CFO Colette Kress stated on May 22 that NVIDIA’s data center revenue for the first quarter (February to April) surged 427% year-over-year to USD 22.6 billion, primarily due to shipments of Hopper GPUs, including the H100.
On May 22, during the earnings call, Kress also mentioned that Facebook’s parent company, Meta Platforms, announced the launch of its latest large language model (LLM), “Llama 3,” which utilized 24,000 H100 GPUs, calling it a highlight of Q1. She also noted that major cloud computing providers contributed approximately “mid-40%” of NVIDIA’s data center revenue.
NVIDIA CEO Jensen Huang also stated in the call, “We see increasing demand of Hopper through this quarter,” adding that he expects demand to outstrip supply for some time as NVIDIA transitions to Blackwell.
As per a report from MoneyDJ, Wall Street had previously been concerned that NVIDIA’s customers might delay purchases while waiting for the Blackwell series. Sources cited by the report predict that the Blackwell chips will be delivered in the fourth quarter of this year.
NVIDIA’s Q1 (February to April) financial result showed that revenue soared 262% year-over-year to USD 26.04 billion, with adjusted earnings per share at USD 6.12. Meanwhile, NVIDIA’s data center revenue surged 427% year-over-year to USD 22.6 billion.
During Q1, revenue from networking products (mainly InfiniBand) surged more than threefold year-over-year to USD 3.2 billion. Revenue from gaming-related products increased by 18% year-over-year to USD 2.65 billion. Looking ahead to this quarter (May to July), NVIDIA predicts revenue will reach USD 28 billion, plus or minus 2%.
NVIDIA’s adjusted gross margin for Q1 was 78.9%. The company predicts that this quarter’s adjusted gross margin will be 75.5%, plus or minus 50 basis points. In comparison, competitor AMD’s gross margin for the first quarter was 52%.
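As a quick sanity check on the figures above, the guidance range and the implied prior-year base reduce to simple arithmetic (figures in USD billions, taken directly from the article):

```python
# Quarterly figures reported in the article, in USD billions.
q1_revenue = 26.04   # Q1 (February to April) revenue

# Q2 guidance: USD 28 billion, plus or minus 2%.
guidance_mid = 28.0
guidance_low = guidance_mid * (1 - 0.02)   # 27.44
guidance_high = guidance_mid * (1 + 0.02)  # 28.56
print(f"Q2 revenue guidance range: {guidance_low:.2f} to {guidance_high:.2f}")

# A 262% year-over-year increase implies a prior-year base of
# revenue / 3.62 (the original 100% plus the 262% gain).
prior_year_revenue = q1_revenue / 3.62
print(f"Implied Q1 revenue a year earlier: {prior_year_revenue:.2f}")
```

The implied year-earlier base of roughly USD 7.2 billion is consistent with the reported 262% surge.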
(Photo credit: NVIDIA)
News
The semiconductor battleground of the Angstrom Era has commenced earlier than expected, with TSMC advancing its plant expansions in Taiwan. As per Commercial Times citing sources, TSMC is poised to increase its 2024 capital expenditure from the initial estimate of USD 28-32 billion to USD 30-34 billion, marking a hike of over 7%.
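The “over 7%” figure follows from comparing the lower bounds of the two capital expenditure ranges; a quick check:

```python
# Capital expenditure estimates in USD billions, as cited in the report.
old_low, old_high = 28, 32   # initial 2024 estimate
new_low, new_high = 30, 34   # revised estimate

# The cited hike refers to the lower bound of the range.
hike_low = (new_low - old_low) / old_low * 100
print(f"Increase at the lower bound: {hike_low:.1f}%")   # about 7.1%
```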
TSMC’s continuous plant expansion includes the initiation of the first 2-nanometer plant in Hsinchu’s Baoshan facility in April, the addition of another 2-nanometer plant in Kaohsiung, and the commencement of construction for two advanced packaging plants in Chiayi. Furthermore, there are market rumors speculating that TSMC plans to build two more A14 plants in Kaohsiung.
According to industry sources cited by the report, TSMC’s earnings call on April 18th will mark a significant milestone as the company transitions to the next generation of manufacturing processes. Expectations are high for surprises in capital expenditure, second-quarter operating prospects, and the nomination list for new directors.
During TSMC’s January earnings call, they disclosed a capital expenditure estimate of approximately USD 28-32 billion for this year. However, with NVIDIA’s recent unveiling of the Blackwell architecture, advanced packaging has become almost indispensable for next-generation chips. Major customers for advanced packaging, including NVIDIA, Broadcom, Marvell, and AMD, are all closely linked to AI.
Per Commercial Times, citing sources close to TSMC’s clients, the current waiting time remains as long as six months, as capacity ramp-up continues to chase demand. It is widely expected that TSMC will raise its capital expenditure, with the lower bound of the range potentially climbing from USD 28 billion to over USD 30 billion.
From an operational standpoint, TSMC is expected to benefit this year from the surge in demand for artificial intelligence. Analysts predict that AI clients will support TSMC’s second-quarter revenue momentum, with the potential to deliver low single-digit quarterly growth.
Per the report citing sources, the positive outlook for TSMC’s second quarter can be attributed to several factors. These include stable demand for TSMC’s 4nm and 5nm processes with support from NVIDIA’s GPUs. Additionally, it is speculated that the 3nm process will benefit from cryptocurrency clients and early orders for Apple’s AI chips, boosting capacity utilization. Furthermore, there is an upward trend in the mature 16nm and 28nm processes.
Per the industry sources cited by the report, TSMC’s CoWoS capacity is fully booked until the first half of next year. This will drive up the revenue contribution from TSMC’s 3nm process. Furthermore, the outsourcing orders for Intel CPUs this year will further boost revenue growth.
Additionally, on June 4th, TSMC will hold elections for ten directors, including six independent directors. The list of director candidates is about to be announced, attracting significant attention to the new team lineup. With current Chairman Mark Liu set to retire and independent director K.C. Chen also planning to step down, significant changes to the composition of TSMC’s board of directors are anticipated.
(Photo credit: TSMC)
News
With the Blackwell series chips making a splash in the market, pricing becomes a focal point. According to Commercial Times citing sources, Jensen Huang, the founder and CEO of NVIDIA, revealed in a recent interview that the price range for the Blackwell GPU is approximately USD 30,000 to USD 40,000. However, this is just an approximate figure.
Jensen Huang emphasized that NVIDIA customizes pricing based on individual customer needs and different system configurations. NVIDIA does not sell individual chips but provides comprehensive services for data centers, including networking and software-related equipment.
Reportedly, Jensen Huang stated that the global data center market is currently valued at USD 1 trillion, with total expenditures on data center hardware and software upgrades reaching USD 250 billion last year alone, a 20% increase from the previous year. He noted that NVIDIA stands to benefit significantly from this USD 250 billion investment in data centers.
According to documents recently released by NVIDIA, 19% of last year’s revenue came from a single major customer, and more than USD 9.2 billion in revenue came from a few large cloud service providers in the last quarter alone. Adjusting the pricing of the Blackwell chip series could attract more businesses from various industries to become NVIDIA customers.
As per the report from Commercial Times, Jensen Huang is said to be optimistic about the rapid expansion of the AI application market, emphasizing that AI computing upgrades are just beginning. Reportedly, he believes that future demand will only accelerate, allowing NVIDIA to capture more market share.
According to a previous report from TechNews, the new Blackwell architecture boasts a massive GPU, crafted using TSMC’s 4-nanometer (4NP) process technology and integrating two independently manufactured dies totaling 208 billion transistors. These dies are bound together like a zipper through a 10 TB/s die-to-die link, officially termed the NV-HBI interface. Meanwhile, the Blackwell chip’s NVLink 5.0 interface provides 1.8 TB/s of bandwidth, doubling the speed of the NVLink 4.0 interface on the previous-generation Hopper architecture GPU.
Per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.
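To put the reported figures side by side, a brief sketch comparing the generations (values are taken directly from the reports above; this is an illustration, not official benchmark data):

```python
# Peak AI compute per GPU, in petaflops, as reported.
h100_petaflops = 4    # previous-generation Hopper H100
b200_petaflops = 20   # Blackwell B200

speedup = b200_petaflops / h100_petaflops
print(f"B200 vs H100 peak AI compute: {speedup:.0f}x")

# NVLink 5.0 doubles the bandwidth of NVLink 4.0 on Hopper.
nvlink5_tb_s = 1.8
nvlink4_tb_s = nvlink5_tb_s / 2   # implied by "doubling" in the report
print(f"NVLink 4.0 bandwidth (implied): {nvlink4_tb_s:.1f} TB/s")
```

The headline 5x figure refers to peak AI compute; real-world gains depend on workload and precision.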
NVIDIA invested heavily in the development of the GB200. Jensen Huang revealed that developing it was a monumental task, with expenditures exceeding USD 10 billion solely on modern GPU architecture and design.
Given the substantial investment, Huang reportedly confirmed that NVIDIA has priced the Blackwell GPU GB200, tailored for AI and HPC workloads, at USD 30,000 to USD 40,000. Industry sources cited by the report from Commercial Times point out that NVIDIA is particularly keen on selling supercomputers or DGX B200 SuperPODs, as the average selling price (ASP) is higher in situations involving large hardware and software deployments.
(Photo credit: NVIDIA)
News
NVIDIA unveiled its Blackwell architecture and the touted powerhouse AI chip GB200 at GTC 2024 held in San Jose, California, on March 19th. Manufactured using TSMC’s 4-nanometer (4NP) process, it is expected to ship later this year.
According to a report from TechNews, TSMC’s CoWoS technology comes in various forms, including CoWoS-R, CoWoS-L, and CoWoS-S, each differing in cost due to variations in the interposer material. Customers can choose the appropriate technology based on their specific requirements.
CoWoS-R, for instance, integrates InFO technology, utilizing RDL (redistribution layer) wiring in the interposer to connect chips, making it suitable for high-bandwidth memory (HBM) and SoC integration.
On the other hand, CoWoS-L combines the advantages of CoWoS-S and InFO technologies, offering a cost-effective solution with LSI (Local Silicon Interconnect) chips embedded in the interposer for dense chip-to-chip connections. According to market reports, the Blackwell platform adopts CoWoS-L, as this technology is better suited for larger chiplets.
CoWoS-S, utilizing silicon as the interposer material, represents the highest cost variant and is currently the mainstream choice. Notably, NVIDIA’s H100, H200, and AMD’s MI300 chips all employ CoWoS-S.
NVIDIA’s latest Blackwell architecture features AI chips, including the B100, B200, and the GB200 with Grace CPU, all manufactured on TSMC’s 4-nanometer process. As per the industry sources cited by the report, production of the B100 is slated to begin in the fourth quarter of this year, with mass production expected in the first half of next year.
Meanwhile, the B200 and GB200 are set to follow suit with mass production next year. As per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.
TSMC’s advanced manufacturing processes and CoWoS packaging technology are expected to continue benefiting from this wave of demand, particularly with the adoption of CoWoS-L packaging.
(Photo credit: TSMC)
News
Chip giant NVIDIA kicked off its annual Graphics Processing Unit (GPU) Technology Conference (GTC) today, with CEO Jensen Huang announcing the launch of the new artificial intelligence chip, Blackwell B200.
According to a report from TechNews, the new Blackwell architecture boasts a massive GPU, crafted using TSMC’s 4-nanometer (4NP) process technology and integrating two independently manufactured dies totaling 208 billion transistors. These dies are bound together like a zipper through a 10 TB/s die-to-die link, officially termed the NV-HBI interface. Meanwhile, the Blackwell chip’s NVLink 5.0 interface provides 1.8 TB/s of bandwidth, doubling the speed of the NVLink 4.0 interface on the previous-generation Hopper architecture GPU.
As per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.
NVIDIA’s HBM supplier, South Korean chipmaker SK Hynix, also issued a press release today announcing the commencement of mass production of its new high-performance DRAM product, HBM3e, with shipments set to begin at the end of March.
Recently, global tech companies have been heavily investing in AI, leading to increasing demands for AI chip performance. SK Hynix points out that HBM3e is the optimal product to meet these demands. As memory operations for AI are extremely fast, efficient heat dissipation is crucial. HBM3e incorporates the latest Advanced MR-MUF technology for heat dissipation control, resulting in a 10% improvement in cooling performance compared to the previous generation.
Per SK Hynix’s press release, Sungsoo Ryu, the head of HBM Business at SK Hynix, said that mass production of HBM3e has completed the company’s lineup of industry-leading AI memory products.
“With the success story of the HBM business and the strong partnership with customers that it has built for years, SK hynix will cement its position as the total AI memory provider,” he stated.
(Photo credit: NVIDIA)