News
AMD posted third-quarter results on October 29th, reporting quarterly revenue of USD 6.8 billion and net income of USD 771 million, while data center revenue surged 122% year-over-year. With new products such as the MI300X hitting the market, the world’s second-largest data center GPU provider also raised its AI chip sales forecast for this year to USD 5 billion, up from an earlier estimate of USD 4.5 billion, according to a report by CNBC.
For the fourth quarter of 2024, according to the company’s press release, AMD expects revenue to be approximately USD 7.5 billion, plus or minus USD 300 million. At the mid-point of the revenue range, this represents year-over-year growth of approximately 22% and sequential growth of approximately 10%. Non-GAAP gross margin is expected to be approximately 54%.
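As a quick sanity check, the guidance figures are internally consistent. A minimal sketch in Python, using only the numbers quoted above (all values are the company’s stated figures, not independent data):

```python
# Back-of-the-envelope check of AMD's Q4 2024 guidance.
q3_2024_revenue = 6.8   # USD billion, reported Q3 2024 revenue
q4_2024_midpoint = 7.5  # USD billion, midpoint of Q4 2024 guidance

# Sequential growth: 7.5 / 6.8 - 1 ≈ 10.3%, matching the stated ~10%.
sequential_growth = q4_2024_midpoint / q3_2024_revenue - 1
print(f"Sequential growth: {sequential_growth:.1%}")

# The stated ~22% year-over-year growth implies a Q4 2023 base of
# roughly 7.5 / 1.22 ≈ USD 6.1 billion.
implied_q4_2023 = q4_2024_midpoint / 1.22
print(f"Implied Q4 2023 revenue: USD {implied_q4_2023:.2f} billion")
```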
Fourth Quarter Forecast Fails to Impress; Concerns Raised over Capacity Constraints
However, the fourth-quarter forecast is slightly below market expectations, which raises concerns about whether the growth of the AI sector might be slowing down. According to Bloomberg, analysts had an average estimate of USD 7.55 billion.
AMD CEO Lisa Su reiterated that the company still sees robust momentum in AI, as interest from customers and partners in the MI325X is strong, a report by CNBC notes. AMD plans to begin production shipments of the MI325X this quarter, according to Su.
In October, AMD introduced the MI325X, and projected that the AI GPU market could reach USD 500 billion by 2028.
Nonetheless, Su also said that the supply environment will “continue to be tight”, though AMD has planned for significant growth going into 2025, according to Bloomberg. She stated that the company feels good “about our overall supply-chain capability,” Bloomberg indicates.
AMD’s major foundry partner, TSMC, indicated in July that constraints on AI chip production will persist into 2025, which may pose a significant hurdle for clients like AMD, as the company has to compete with NVIDIA not only on product performance but also in the race to secure capacity.
Strong Data Center Revenue with 122% YoY Increase, while Gaming and Embedded Decline
For the third quarter of 2024, AMD delivered a quarterly revenue of USD 6.8 billion, gross margin of 50%, operating income of USD 724 million, net income of USD 771 million and diluted earnings per share of USD 0.47. On a non-GAAP basis, gross margin was 54%, operating income was USD 1.7 billion, net income was USD 1.5 billion and diluted earnings per share was USD 0.92.
AMD’s AI chips are included in its data center segment, where annual sales more than doubled, rising 122% year-over-year to USD 3.5 billion. Su attributed the strong results to higher sales of EPYC and Instinct data center products and robust demand for Ryzen PC processors, according to AMD’s press release.
The company also saw robust growth in its client segment, with revenue of USD 1.9 billion, up 29% year-over-year and 26% sequentially, primarily driven by strong demand for “Zen 5” AMD Ryzen processors.
However, gaming segment revenue was USD 462 million, down 69% year-over-year and 29% sequentially, primarily due to a decrease in semi-custom revenue, which CNBC attributes to lower sales of custom chips used in consoles like the Sony PlayStation 5.
Embedded segment revenue also declined, down 25% year-over-year to USD 927 million, as customers normalized their inventory levels. On a sequential basis, revenue increased 8% as demand improved in several end markets.
(Photo credit: AMD)
News
As AMD unveiled the roadmap for its upcoming AI accelerators at Advancing AI 2024, including the MI325X and MI355X, its longtime foundry partner TSMC is expected to be the major beneficiary, according to reports by the Commercial Times and the Economic Daily News.
TSMC, as AMD’s key chipmaking partner, is expected to benefit the most, as the foundry giant also provides advanced packaging services such as chiplet integration and CoWoS, Commercial Times notes.
Other Taiwanese companies in the supply chain are also expected to benefit, including ASMedia, which provides PCIe Gen5 high-speed interface chips, as well as ASIC firms GUC and Alchip, according to the report. Among server OEM partners, Compal, Wistron, Wiwynn, and Inventec are listed as AMD’s collaborators.
According to the Economic Daily News, AMD’s Instinct MI325X AI accelerator is said to be manufactured on TSMC’s 4nm and 5nm processes, with mass production anticipated to begin this quarter. It is worth noting that the AI GPU would be the first of its kind to be equipped with 256GB of HBM3e memory, according to another report by Wccftech.
On the other hand, to compete with AI chip giant NVIDIA’s GB200, AMD also introduced the MI350 series at the event. According to Commercial Times, the MI355X will be launched in the second half of 2025, leveraging TSMC’s 3nm process and equipped with 288GB of HBM3e memory.
The report by Commercial Times further notes that in addition to its larger memory capacity, the MI355X accelerator also incorporates the CDNA 4 architecture, allowing it to achieve a significant 35x increase in FP8 computational performance.
Built on TSMC’s 3nm node, just as NVIDIA’s Rubin reportedly is, AMD’s MI355X has the potential to catch up with, or even run ahead of, its archrival in terms of product schedule, the report suggests. NVIDIA’s Rubin is reportedly set to be released in the fourth quarter of 2025.
Notably, AMD’s MI300X accelerator has reportedly been adopted by a few tech heavyweights. According to Commercial Times, following Microsoft’s adoption, Samsung has also purchased USD 20 million worth of AMD MI300X units for AI training.
At Advancing AI 2024, which took place on October 10th, AMD also introduced its latest EPYC server processors, EPYC 9005 Series, previously codenamed Turin. According to its press release, the EPYC 9005 Series is built on the latest “Zen 5” architecture, which offers up to 192 cores and will be available in a wide range of platforms from leading OEMs and ODMs. According to the Economic Daily News, EPYC 9005 Series is manufactured with TSMC’s 3nm and 4nm nodes.
(Photo credit: AMD)
News
AI chip giants NVIDIA and AMD have been locked in heated competition for a couple of years. NVIDIA, though it controls the lion’s share of the market for AI computing solutions, was challenged by AMD when the latter launched the Instinct MI300X GPU in late 2023, claiming the product to be the fastest AI chip in the world, beating NVIDIA’s H200 GPUs.
However, months after the launch of the MI300X, an analysis by Richard’s Research Blog indicates that the MI300X’s manufacturing cost is significantly higher than the H200’s, while the H200 outperforms the MI300X by over 40% in inference production applications, which makes NVIDIA’s high margin justifiable.
AMD’s MI300X: More Transistors, More Memory Capacity, More Advanced Packaging…with a Higher Cost
The analysis further compares the chip specifications between the two best-selling products and explores their margins. NVIDIA’s H200 is implemented using TSMC’s N4 node with 80 billion transistors. On the other hand, AMD’s MI300X is built with 153 billion transistors, featuring TSMC’s 5nm process.
Furthermore, NVIDIA’s H200 features 141GB of HBM3e, while AMD’s MI300X is equipped with 192GB of HBM3. Regarding packaging techniques, while NVIDIA uses TSMC’s 2.5D CoWoS for the H200, AMD’s MI300X has moved to 3D CoWoS/SoIC with a total of 20 dies/stacks, which significantly increases its complexity.
According to the analysis, under the same process, the number of transistors in the logic compute die is roughly proportional to the total die size and total cost. AMD’s MI300X, with nearly twice the transistor count of NVIDIA’s H200, is therefore said to cost about twice as much as the latter in this respect.
With 36% more memory capacity and much higher packaging complexity, AMD’s MI300X is said to suffer a significantly higher manufacturing cost than NVIDIA’s H200. It is also worth noting that as NVIDIA is currently the dominant HBM buyer in the market, the company likely enjoys the advantage of lower procurement costs, the analysis suggests.
This is the price AMD has to pay for the high specifications of the MI300X, the analysis observes.
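To make those magnitudes concrete, the ratios implied by the specifications above can be worked out directly. A minimal sketch, using only the figures already cited and the analysis’s assumption that logic die cost scales roughly with transistor count on comparable nodes:

```python
# Rough cost-ratio arithmetic from the chip specifications cited above.
h200_transistors = 80e9     # NVIDIA H200 (TSMC N4)
mi300x_transistors = 153e9  # AMD MI300X (TSMC 5nm)

# Assumption from the analysis: on comparable nodes, logic die cost is
# roughly proportional to transistor count.
die_cost_ratio = mi300x_transistors / h200_transistors
print(f"Implied logic die cost ratio: {die_cost_ratio:.2f}x")  # ~1.91x, "nearly twice"

h200_hbm_gb = 141    # H200: 141GB of HBM3e
mi300x_hbm_gb = 192  # MI300X: 192GB of HBM3
extra_memory = mi300x_hbm_gb / h200_hbm_gb - 1
print(f"Extra memory capacity: {extra_memory:.0%}")  # ~36%, matching the analysis
```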
NVIDIA’s 80% Margin: High at First Glance, but Actually Justifiable
On the other hand, citing MLPerf test results, the analysis notes that in practical deployment for inference production applications, the H200 outperforms the MI300X by over 40%. This means that if AMD wants to maintain a similar cost/performance ratio (which CSP customers will demand), the MI300X must be priced about 30% lower than the H200. This scenario does not take other factors into consideration, including NVIDIA’s familiarity with secondary vendors, its Compute Unified Device Architecture (CUDA), and related software.
Therefore, the analysis further suggests that NVIDIA’s 80% gross margin, though it might seem high at first glance, actually leaves room for its competitors to survive. If NVIDIA were to price its products below a 70% margin, its rivals might struggle with negative operating profits.
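The pricing logic behind those figures can be traced with simple arithmetic. A minimal sketch, normalizing the H200’s price to 1 and assuming the ~40% performance gap and ~2x manufacturing cost gap cited above (all inputs are the analysis’s estimates, not confirmed numbers):

```python
# Hypothetical pricing arithmetic based on the analysis's estimates.
h200_price = 1.0              # normalize NVIDIA H200's price to 1
h200_cost = 0.2 * h200_price  # an 80% gross margin implies cost = 20% of price
performance_gap = 1.4         # H200 ~40% faster in inference, per the cited MLPerf results

# To match the H200's cost/performance, the MI300X must sell for about
# 1 / 1.4 ≈ 0.71 of the H200's price, i.e. roughly 30% lower.
mi300x_price = h200_price / performance_gap
print(f"MI300X price vs H200: {mi300x_price:.2f} (~{1 - mi300x_price:.0%} lower)")

# With a manufacturing cost roughly twice NVIDIA's, AMD's implied gross
# margin is already much thinner than NVIDIA's 80%.
mi300x_cost = 2 * h200_cost
mi300x_margin = (mi300x_price - mi300x_cost) / mi300x_price
print(f"Implied MI300X gross margin: {mi300x_margin:.0%}")  # ~44%

# If NVIDIA dropped to a 70% margin, its price would fall to 0.2 / 0.3 ≈ 0.67,
# dragging the matching MI300X price down and squeezing AMD's gross margin
# toward the mid-teens, before operating expenses.
nvidia_price_at_70 = h200_cost / 0.3
mi300x_price_at_70 = nvidia_price_at_70 / performance_gap
mi300x_margin_at_70 = (mi300x_price_at_70 - mi300x_cost) / mi300x_price_at_70
print(f"MI300X margin if NVIDIA takes 70%: {mi300x_margin_at_70:.0%}")  # ~16%
```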
In addition to achieving better product performance at a lower cost through superior hardware and software technology, NVIDIA benefits from non-technical economic factors: its scale allows R&D and expensive photomask costs to be amortized over far larger volumes, which also affects operating expenditures (OPEX) and cost distribution, while its long-term commitments to clients, customer confidence, and time-to-market play a role as well, the analysis notes.
Regarding the key takeaways from their latest earnings reports, NVIDIA claims the demand for Hopper remains strong, while Blackwell chips will potentially generate billions of dollars in revenue in the fourth quarter. AMD’s Instinct MI300 series, on the other hand, has emerged as a primary growth driver, as it is expected to generate more than USD 4.5 billion in sales this year.
(Photo credit: NVIDIA)