[News] China’s AI Compute Dilemma: Why Are Advanced GPUs Sitting Unused in Idle Data Centers?


2025-03-12 Semiconductors editor

According to a report from TechNews, citing Forbes, while Chinese AI startup DeepSeek claims its computing resources are limited, numerous idle data centers across China are filled with advanced GPUs waiting to be used, a contrast that highlights the challenges in China’s AI development.

As noted in the report, in response to U.S. export controls, Chinese tech firms and local governments stockpiled NVIDIA chips. Some buyers even sought to bypass U.S. restrictions to acquire NVIDIA’s latest AI chips, including the Blackwell series, through third-party channels. However, the report highlights that little thought was given to how these resources would be utilized.

Fragmented AI Compute: Inefficiencies in Deployment

The report highlights that in 2024, China expanded its compute capacity by adding at least 1 million AI chips. However, they were deployed inefficiently: dispersed across data centers of differing quality, with high-performance chips often placed in regions with minimal demand.

Furthermore, without a clear strategy, Chinese companies and the government have rushed to build AI infrastructure, resulting in the proliferation of low-quality compute data centers, as the report notes.

As the report indicates, some Chinese companies acquired enough GPUs to support a large-scale AI computing center, yet distributed them across multiple small, isolated data centers instead. Without high-speed network connections or a proper software architecture, these scattered GPUs cannot be fully utilized for large-scale computing.

Shifting Demand Creates Challenges for China’s AI Infrastructure

Meanwhile, the shift in demand is another key factor contributing to China’s AI infrastructure challenges. The report mentions that while 2023 saw a surge in efforts to develop foundation AI models, by 2024, many of these initiatives had pivoted toward AI applications. As a result, the demand for model training declined, while the need for inference began to rise.

However, as the report points out, the AI infrastructure China built in 2023 was primarily designed for training. With the market shift in 2024, this imbalance led to an oversupply of training compute while inference compute remained insufficient.

TrendForce highlights that DeepSeek’s influence will drive CSPs toward lower-cost proprietary ASIC solutions, shifting focus from AI training to AI inference. This shift is expected to gradually increase the share of AI inference servers to nearly 50%.

Please note that this article cites information from TechNews and Forbes.

Get in touch with us