[News] AI Boom: Alibaba’s ¥380B & Musk’s 1M GPUs Signal a Surge

The global AI boom continues to accelerate, with tech giants pouring massive investments into AI hardware infrastructure and AI models.

Recently, Alibaba announced a RMB 380 billion (approximately $53 billion) investment in cloud and AI hardware infrastructure, while Elon Musk’s xAI is planning to raise another $10 billion and expand its AI computing center to 1 million GPUs. These two major developments have drawn industry-wide attention, sending a clear signal: the AI explosion is surpassing expectations, and the global AI race has reached a fever pitch.

Alibaba’s ¥380 Billion Investment in Cloud and AI Infrastructure

According to Xinhua News Agency, on February 24, Alibaba Group CEO Wu Yongming announced that over the next three years, the company will invest more than ¥380 billion in building cloud and AI hardware infrastructure—exceeding the total investment of the past decade. This marks the largest-ever investment by a private Chinese enterprise in this sector.

Wu Yongming stated, “The AI boom has exceeded expectations, and China’s tech industry is thriving with immense potential. Alibaba is fully committed to accelerating cloud and AI infrastructure development to drive industry-wide ecosystem growth.”

He added that this massive investment will greatly boost confidence in the sector and reaffirms Alibaba’s long-standing commitment to the future of AI.

Earlier, Alibaba’s latest quarterly financial report revealed that Alibaba Cloud generated ¥31.742 billion in revenue, a 13% year-over-year increase. Public cloud revenue continued double-digit growth, while AI-related revenue maintained triple-digit growth for six consecutive quarters.

In terms of AI development, Alibaba Cloud was one of the first major Chinese tech firms to open-source its proprietary AI models. In November 2024, it open-sourced the full series of Qwen2.5-Coder models, featuring six different versions. In January 2025, Alibaba Cloud released the Qwen2.5-1M model, which supports a 1-million-token context window. Most recently, the flagship Qwen2.5-Max model was upgraded, incorporating Alibaba’s latest exploration of Mixture of Experts (MoE) models with over 20 trillion tokens in pre-training data.

Additionally, Alibaba Cloud is rapidly expanding its overseas cloud infrastructure. In February, its second data center in Thailand was officially launched, alongside the opening of a new data center in Mexico. More data centers are planned to meet the growing global AI market demand.

Grok 3 Consumes 200,000 GPUs, Expansion to 1 Million Planned

In late February, Elon Musk’s xAI launched Grok 3, the latest generation of its AI model, which boasts superior reasoning, computation, and adaptability. According to official data, Grok 3 outperformed DeepSeek-v3, GPT-4o, and Gemini-2 Pro in multiple benchmark tests, including mathematical reasoning, scientific logic, and code generation.

Musk hailed Grok 3 as “the smartest AI” and revealed its training cost: the model consumed a total of 200,000 Nvidia GPUs, with training conducted entirely at xAI’s data center.

Industry estimates put the Nvidia H100 GPU at between $25,000 and $30,000 per unit, meaning 200,000 GPUs alone would cost roughly $5 billion to $6 billion. When factoring in servers, networking equipment, power supply, and cooling infrastructure, the total cost of training Grok 3 could reach the $10 billion range.
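The back-of-the-envelope arithmetic above can be sketched as follows; the GPU count comes from xAI’s announcement, while the unit prices are industry estimates rather than confirmed figures:

```python
# Rough estimate of Grok 3's GPU hardware cost, using the article's figures.
# Unit prices are industry estimates, not confirmed numbers.
NUM_GPUS = 200_000                       # GPUs reportedly used to train Grok 3
PRICE_LOW_USD, PRICE_HIGH_USD = 25_000, 30_000  # estimated H100 unit price

gpu_cost_low = NUM_GPUS * PRICE_LOW_USD    # lower bound: $5.0 billion
gpu_cost_high = NUM_GPUS * PRICE_HIGH_USD  # upper bound: $6.0 billion

print(f"GPU cost: ${gpu_cost_low / 1e9:.1f}B to ${gpu_cost_high / 1e9:.1f}B")
```

Servers, networking, power, and cooling are not included here, which is why the article’s total estimate lands in the $10 billion range rather than at $5–6 billion.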

Notably, 200,000 GPUs are not Musk’s final target. According to xAI, the company plans to expand its GPU capacity to 1 million units in the future.

To support this expansion, market reports indicate that xAI is in discussions with potential investors to raise $10 billion in funding. Existing investors, including Sequoia Capital, Andreessen Horowitz, and Valor Equity Partners, are reportedly considering participation in the funding round. If successful, xAI’s valuation could reach $75 billion.

(Photo credit: xAI)
