According to a report from Tom’s Hardware, one of GPU giant NVIDIA’s key advantages in data centers is not only its leading GPUs for AI and HPC workloads but also its effective scaling of data center processors using its own hardware and software. To compete with NVIDIA’s CUDA ecosystem, Chinese GPU manufacturer Moore Threads has developed networking technology aimed at achieving the same horizontal scaling of GPU compute power across its clusters, addressing market demand.
Moore Threads was founded in 2020 by former senior executives from NVIDIA China. After being added to the U.S. export blacklist, the company lost access to advanced manufacturing processes but has continued to develop gaming GPUs.
Per another report from the South China Morning Post, Moore Threads has upgraded its KUAE AI data center servers, with a single cluster now connecting up to 10,000 GPUs. Each KUAE server integrates eight MTT S4000 GPUs, designed for training and running large language models and interconnected via MTLink, the company’s networking technology analogous to NVIDIA’s NVLink.
These GPUs are built on the MUSA architecture, each featuring 128 tensor cores and 48GB of GDDR6 memory with 768GB/s of bandwidth. A 10,000-GPU cluster therefore totals 1,280,000 tensor cores, though actual performance depends on many factors.
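The aggregate figures above follow directly from the per-GPU specs cited in the article. A minimal sketch of that arithmetic (the helper function and its name are illustrative, not anything from Moore Threads):

```python
# Per-GPU specs for the MTT S4000 as cited in the article,
# plus the eight-GPU-per-server KUAE configuration.
TENSOR_CORES_PER_GPU = 128
MEMORY_GB_PER_GPU = 48
GPUS_PER_SERVER = 8

def cluster_totals(num_gpus: int) -> dict:
    """Theoretical aggregate resources for a cluster of num_gpus GPUs.

    Real-world throughput will be lower: interconnect bandwidth and
    scaling overheads mean performance does not grow linearly.
    """
    return {
        "servers": num_gpus // GPUS_PER_SERVER,
        "tensor_cores": num_gpus * TENSOR_CORES_PER_GPU,
        "memory_tb": num_gpus * MEMORY_GB_PER_GPU / 1024,
    }

# A full 10,000-GPU KUAE cluster:
print(cluster_totals(10_000))
# → {'servers': 1250, 'tensor_cores': 1280000, 'memory_tb': 468.75}
```

This confirms the 1,280,000 tensor-core figure and shows why raw core counts alone say little: the usable fraction of that compute depends on how well MTLink keeps the GPUs fed.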
However, Moore Threads’ products still lag behind NVIDIA’s GPUs in performance. Even NVIDIA’s A100 80GB GPU, launched in 2020, significantly outperforms the MTT S4000 in computing power.
Moore Threads has established strategic partnerships with major telecommunications companies like China Mobile and China Unicom, as well as with China Energy Engineering Group and Big Data Technology Co., Ltd., to develop three new computing clusters aimed at boosting China’s AI development.
Recently, Moore Threads completed a new round of financing, raising CNY 2.5 billion (roughly USD 343.7 million) to support its expansion plans and technological development. However, the inability to access advanced processes from TSMC, Intel, and Samsung presents significant challenges for developing next-generation GPUs.
(Photo credit: Moore Threads)