
[News] SoftBank Acquired Graphcore, Hinting at a Battle between IPU and GPU


2024-07-16 Semiconductors editor

Recently, Reuters reported that SoftBank Group has acquired Graphcore, a company often referred to as the “UK’s NVIDIA,” though the amount of the deal was not disclosed. Graphcore is an artificial intelligence (AI) startup that has designed a new type of processor, the Intelligence Processing Unit (IPU). In certain model tests, its performance has surpassed that of NVIDIA’s GPU systems, and the industry is therefore optimistic about its potential to compete with NVIDIA’s GPUs.

  • The Differences between IPU and GPU

The IPU is a processor designed specifically for AI computation, also known as an AI processor. It excels in fields such as deep learning, machine learning, and natural language processing, and can accelerate a wide range of AI-related tasks.

The GPU, on the other hand, was originally designed for graphics rendering and image processing. With the rapid proliferation of AI and big data technologies, high-performance GPUs, known for their powerful parallel processing capabilities, have gradually been adopted in the AI field, particularly for deep learning and machine learning: they can process many data points and tasks simultaneously, which speeds up both training and inference.
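For readers unfamiliar with this parallel model, the following minimal CUDA sketch (a generic illustration with arbitrary names and values, not vendor reference code) shows the basic idea: the GPU assigns one thread to each data element, so an entire batch is processed concurrently.

```cuda
// Minimal illustration of GPU data parallelism: each thread handles one
// element, so a whole batch of data points is processed concurrently.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale_add(const float* x, const float* b, float* y, int n, float w)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per data point
    if (i < n)
        y[i] = w * x[i] + b[i];                      // all elements computed in parallel
}

int main()
{
    const int n = 1 << 20;                           // ~1 million elements
    float *x, *b, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;       // enough blocks to cover all data
    scale_add<<<blocks, threads>>>(x, b, y, n, 3.0f);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                     // expect 5.0
    cudaFree(x); cudaFree(b); cudaFree(y);
    return 0;
}
```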

Although both the IPU and the GPU can be used in the AI domain, they differ significantly in several respects, such as computational architecture and memory architecture.

Previously, Lu Tao, the President and General Manager of Greater China at Graphcore, explained that the Graphcore C600 has 1,472 processing cores per IPU, capable of running 8,832 independent program threads in parallel. By comparison, an NVIDIA GPU contains around 100 SM (Streaming Multiprocessor) cores, with the exact count varying across product configurations.

In terms of memory architecture, NVIDIA’s GPUs use a two-level structure: the first level is roughly 40-50 MB of memory on the chip itself, backed by external HBM or VRAM. Graphcore’s IPU, by contrast, contains about 900 MB of on-chip SRAM, distributed across its processing cores.
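The practical significance of this difference is easiest to see in code. The sketch below (a generic CUDA illustration assuming a 256-thread block, not vendor reference code) stages data from the large off-chip HBM/VRAM into a small on-chip shared-memory buffer before reusing it; the IPU’s design instead keeps far more of the working set in on-chip SRAM in the first place.

```cuda
// Sketch of the GPU's two-level memory hierarchy: data lives in large
// off-chip global memory (HBM/VRAM) and is staged into small, fast
// on-chip shared memory before being reused.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void block_sum(const float* in, float* out, int n)
{
    __shared__ float tile[256];                      // fast on-chip memory shared by the block

    int i   = blockIdx.x * blockDim.x + threadIdx.x;
    int tid = threadIdx.x;
    tile[tid] = (i < n) ? in[i] : 0.0f;              // one read from off-chip HBM/VRAM
    __syncthreads();

    // Reduce within on-chip memory; no further off-chip traffic is needed.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            tile[tid] += tile[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = tile[0];                   // one write back to off-chip memory
}

int main()
{
    const int n = 1 << 16;
    const int threads = 256, blocks = n / threads;
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    block_sum<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();

    float total = 0.0f;
    for (int b = 0; b < blocks; ++b) total += out[b];
    printf("sum = %f\n", total);                     // expect 65536
    cudaFree(in); cudaFree(out);
    return 0;
}
```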

Lu summarized that, relative to the GPU, the IPU’s architecture offers greater advantages for tasks with demanding sparsity and high dimensionality. For dense matrix operations, its performance is likely to be similar to a GPU’s, or slightly less competitive.
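To make the sparsity point concrete, the sketch below (a generic CUDA example with made-up values, illustrating the workload rather than either vendor’s implementation) performs a sparse matrix-vector multiply in CSR format: the memory accesses depend on the sparsity pattern and are only known at run time, which is the kind of irregular workload where a large, distributed on-chip memory is argued to help.

```cuda
// Sparse matrix-vector multiply in CSR format: the access pattern depends
// on the data, unlike the regular access pattern of dense matrix math.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

__global__ void spmv_csr(const int* row_ptr, const int* col_idx,
                         const float* val, const float* x, float* y, int rows)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rows) {
        float sum = 0.0f;
        // Irregular, data-dependent reads of x[]: which elements are touched
        // is determined by the sparsity pattern at run time.
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += val[j] * x[col_idx[j]];
        y[row] = sum;
    }
}

int main()
{
    // Tiny 3x3 example:  [1 0 2; 0 3 0; 4 0 5]  multiplied by  [1, 1, 1]
    int   h_row_ptr[] = {0, 2, 3, 5};
    int   h_col_idx[] = {0, 2, 1, 0, 2};
    float h_val[]     = {1, 2, 3, 4, 5};
    float h_x[]       = {1, 1, 1};

    int *row_ptr, *col_idx;
    float *val, *x, *y;
    cudaMallocManaged(&row_ptr, sizeof(h_row_ptr));
    cudaMallocManaged(&col_idx, sizeof(h_col_idx));
    cudaMallocManaged(&val, sizeof(h_val));
    cudaMallocManaged(&x, sizeof(h_x));
    cudaMallocManaged(&y, 3 * sizeof(float));
    memcpy(row_ptr, h_row_ptr, sizeof(h_row_ptr));
    memcpy(col_idx, h_col_idx, sizeof(h_col_idx));
    memcpy(val, h_val, sizeof(h_val));
    memcpy(x, h_x, sizeof(h_x));

    spmv_csr<<<1, 32>>>(row_ptr, col_idx, val, x, y, 3);
    cudaDeviceSynchronize();
    printf("y = [%g, %g, %g]\n", y[0], y[1], y[2]);  // expect [3, 3, 9]
    cudaFree(row_ptr); cudaFree(col_idx); cudaFree(val); cudaFree(x); cudaFree(y);
    return 0;
}
```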


(Photo credit: Graphcore)

Please note that this article cites information from the WeChat account DRAMeXchange.
