
[News] Marvell’s AI Business Reportedly Accelerates, Potentially Benefiting TSMC


2024-04-17 Semiconductors editor

Driven by AI demand for optical communication and ASICs, Marvell, a major networking IC design company, is accelerating its AI-related business. According to a report from Commercial Times, revenue from this segment is expected to grow from USD 200 million in fiscal year 2023 to USD 550 million in fiscal year 2024.

Marvell previously announced plans to utilize TSMC’s process technology to produce 2-nanometer chips optimized for accelerated infrastructure. Reports suggest that TSMC will be a primary beneficiary of Marvell’s chip fabrication orders.

“The 2nm platform will enable Marvell to deliver highly differentiated analog, mixed-signal, and foundational IP to build accelerated infrastructure capable of delivering on the promise of AI. Our partnership with TSMC on our 5nm, 3nm and now 2nm platforms has been instrumental in helping Marvell expand the boundaries of what can be achieved in silicon,” said Sandeep Bharathi, chief development officer at Marvell, in Marvell’s previous press release.

In addition, Marvell holds a high market share in the global optical communication digital signal processor (DSP) field. Marvell pointed out that AI has accelerated the pace of transmission speed upgrades, shortening the doubling cycle from four years to two, thereby driving rapid growth in the company’s performance.

During Marvell’s AI Day, company management expressed optimism about its AI business outlook and shared the positive news of receiving AI chip orders from a large technology company. At the time, industry sources speculated that this customer could be Microsoft.

Marvell CEO Matt Murphy revealed that the company has acquired its third AI hyperscale customer and is developing an AI accelerator slated for production in 2026. These orders encompass customized AI training accelerators and AI inference accelerators for Customer A, a customized Arm architecture CPU for Customer B, and a new customized AI accelerator for Customer C.

Marvell indicated that the AI training accelerators for Customer A and the Arm architecture CPU for Customer B are currently ramping up production. The AI inference accelerator for Customer A and the AI accelerator for Customer C are scheduled to enter production in 2025 and 2026, respectively.

The report cites sources indicating that Marvell’s Customer B is Google and that the Arm-based CPU in question is the recently unveiled Google Axion. However, Marvell has not commented on this information.

Marvell highlighted advancements in chip technology, including advanced packaging techniques that integrate multiple chips.


(Photo credit: TSMC)

Please note that this article cites information from Commercial Times.
