News
According to a report from Tom’s Hardware, one of GPU giant NVIDIA’s key advantages in data centers is not only its leading GPUs for AI and HPC computing but also its effective scaling of data center processors using its own hardware and software. To compete with NVIDIA’s CUDA ecosystem, Chinese GPU manufacturer Moore Threads has developed networking technology aimed at achieving the same horizontal scaling of GPU compute power with its own clusters, addressing market demand.
Moore Threads was founded in 2020 by former senior executives from NVIDIA China. After being blacklisted under U.S. export restrictions, the company lost access to advanced manufacturing processes but has continued to develop gaming GPUs.
Per another report from the South China Morning Post, Moore Threads has upgraded its KUAE AI data center servers, with a single cluster now connecting up to 10,000 GPUs. Each KUAE server integrates eight MTT S4000 GPUs, which are designed for training and running large language models and are interconnected using MTLink network technology, similar to NVIDIA’s NVLink.
These GPUs use the MUSA architecture, featuring 128 tensor cores and 48GB of GDDR6 memory with a bandwidth of 768GB/s. A cluster with 10,000 GPUs can have 1,280,000 tensor cores, though the actual performance depends on various factors.
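As a rough illustration of how these per-GPU figures scale to the cluster level, the sketch below simply multiplies the quoted specs (128 tensor cores, 48GB of GDDR6, 768GB/s per MTT S4000, eight GPUs per KUAE server) across a 10,000-GPU cluster. This is a back-of-the-envelope calculation only; as noted above, real-world performance does not scale linearly with GPU count.

```python
# Back-of-the-envelope aggregation of the reported per-GPU specs.
# Assumes an 8-GPU KUAE node and a 10,000-GPU cluster; actual cluster
# performance depends on interconnect and software efficiency.

GPUS_PER_NODE = 8              # MTT S4000 GPUs per KUAE server
CLUSTER_GPUS = 10_000          # maximum GPUs per cluster
TENSOR_CORES_PER_GPU = 128
MEMORY_GB_PER_GPU = 48
BANDWIDTH_GBPS_PER_GPU = 768

nodes = CLUSTER_GPUS // GPUS_PER_NODE                               # 1,250 servers
total_tensor_cores = CLUSTER_GPUS * TENSOR_CORES_PER_GPU            # 1,280,000 cores
total_memory_tb = CLUSTER_GPUS * MEMORY_GB_PER_GPU / 1_000          # 480 TB of GDDR6
aggregate_bw_tbps = CLUSTER_GPUS * BANDWIDTH_GBPS_PER_GPU / 1_000   # 7,680 TB/s combined

print(f"{nodes} servers, {total_tensor_cores:,} tensor cores, "
      f"{total_memory_tb:.0f} TB memory, {aggregate_bw_tbps:,.0f} TB/s aggregate bandwidth")
```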
However, Moore Threads’ products still lag behind NVIDIA’s GPUs in performance. Even NVIDIA’s 2020 A100 80GB GPU significantly outperforms the MTT S4000 in computing power.
Moore Threads has established strategic partnerships with major telecommunications companies like China Mobile and China Unicom, as well as with China Energy Engineering Group and Big Data Technology Co., Ltd., to develop three new computing clusters aimed at boosting China’s AI development.
Recently, Moore Threads completed a new round of financing, raising CNY 2.5 billion (roughly USD 343.7 million) to support its expansion plans and technological development. However, the inability to access advanced processes from TSMC, Intel, and Samsung presents significant challenges for developing next-generation GPUs.
(Photo credit: Moore Threads)
News
Following Foxconn’s substantial order to assemble NVIDIA’s GB200 AI servers, a report from Economic Daily News indicates that Foxconn has now exclusively secured a major order for the NVLink Switch, a key component of the GB200 known for enhancing computing power. The volume of this order is estimated at seven times that of the server cabinets. Not only is this a brand-new order, but it also carries a significantly higher gross profit margin than server assembly, the report noted.
While Foxconn does not comment on orders or customers, industry sources cited by the same report note that NVLink is an exclusive NVIDIA technology consisting of two parts. The first is the bridge technology, which connects the central processing unit (CPU) with the AI chip (GPU). The second is the switch technology, which is crucial for interconnecting GPUs, enabling thousands of GPUs to operate together and thereby maximizing their collective computing power.
Industry sources cited by Economic Daily News have stated that the key feature of the GB200 is not just its significant computing power but also its high-speed transmission capabilities. NVLink is considered the magic ingredient for enhancing this computing power.
Reportedly, the primary reason Foxconn has secured the exclusive order for NVIDIA’s NVLink is the two companies’ long-standing cooperation and mutual understanding. Foxconn has been a leading manufacturer of network communication equipment for years, making it a reasonable choice for NVIDIA to entrust with these orders.
Industry sources cited by the report further claim that, as each server cabinet requires seven NVLink switches, the new order means that for every GB200 server cabinet Foxconn produces, it also receives an order for seven switches. Given that the profit margin on switches is considerably higher than on server assembly, this order is expected to significantly boost Foxconn’s operations.
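The cabinet-to-switch relationship reported above reduces to a simple multiplier. The minimal sketch below applies the 7:1 ratio cited in the report; the cabinet count used in the example is purely hypothetical and not a figure from the article.

```python
# Minimal sketch of the reported 7:1 switch-to-cabinet ratio.
SWITCHES_PER_CABINET = 7  # NVLink switches per GB200 server cabinet (per the report)

def nvlink_switch_volume(gb200_cabinets: int) -> int:
    """Estimate NVLink switch order volume for a given number of GB200 cabinets."""
    return gb200_cabinets * SWITCHES_PER_CABINET

# Example with an assumed, purely illustrative cabinet count:
print(nvlink_switch_volume(1_000))  # -> 7000 switches
```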
Per the report, the world’s top seven switch manufacturers, including Dell, HP, Cisco, Nokia, and Ericsson, are all clients of Foxconn. This has enabled Foxconn to secure over 75% of the global market share in switches, firmly establishing its leading position.
Regarding the AI server market, Foxconn’s Chairman Young Liu previously revealed that the GB200 is in high demand, and he anticipates that Foxconn’s market share in AI servers could reach 40% this year.
(Photo credit: NVIDIA)