News
According to sources and documents cited in a Reuters report, two major Chinese chip manufacturers are in the early stages of producing High Bandwidth Memory (HBM) semiconductors, primarily for AI chipsets. Despite U.S. export restrictions, China is making progress, mainly on older generations of HBM, and is gradually reducing its reliance on other global suppliers.
Sources cited in the same report revealed that China’s largest DRAM chip manufacturer, ChangXin Memory Technologies (CXMT), is collaborating with chip packaging and testing company Tongfu Microelectronics to develop HBM chip samples, which are being showcased to potential customers.
On the other hand, Wuhan Xinxin Semiconductor Manufacturing Co., Ltd. (XMC) is constructing a 12-inch wafer plant with a monthly capacity of 3,000 wafers, which is planned to manufacture HBM chips. Per corporate registration documents, the plant is expected to commence operations in February this year.
Sources in the report mentioned that CXMT and other Chinese chip companies regularly hold meetings with semiconductor equipment manufacturers from South Korea and Japan to purchase tools for HBM development. CXMT, Tongfu Microelectronics, and XMC have not yet responded to these reports.
CXMT and XMC are both private companies that have received funding from local governments in China to drive technological development amid the country’s vigorous efforts to develop its semiconductor industry.
There are also reports indicating that Huawei, the Chinese tech giant subject to US sanctions, is looking to collaborate with other local companies to produce HBM2 chips by 2026. According to a report from The Information, the Huawei-led group aiming to produce HBM chips includes Fujian Jinhua Integrated Circuit.
As per market reports cited by Reuters, China’s current focus is on HBM2. While the US has not restricted the export of HBM chips themselves, HBM3 chips are manufactured using US technology, which many Chinese companies, including Huawei, are prohibited from accessing.
According to analysis by TrendForce, the research and manufacturing of HBM involve complex processes and technical challenges, including wafer-level packaging, testing technology, design compatibility, and more. CoWoS is currently the mainstream packaging solution for AI processors, and AI chips built on CoWoS technology typically incorporate HBM as well.
CoWoS and HBM involve processes such as TSV (Through-Silicon Via), bumps, microbumps, and RDL (Redistribution Layer). Among these, TSV accounts for the largest share of HBM’s 3D packaging cost, at close to 30%.
(Photo credit: CXMT)
News
Huawei, in collaboration with Orange Pi, an open-source product brand of Shenzhen Xunlong Software, has unveiled its latest brainchild, the OrangePi Kunpeng Pro development board. Though the specifics of the product have been kept from the public, an AI processor is said to be integrated into the package, indicating that Huawei’s Kunpeng chipsets have been progressing into the AI realm, according to a report from Tom’s Hardware.
The Kunpeng Pro development board, an alternative to the Raspberry Pi, is reported to be powered by a quad-core 64-bit Arm processor with an AI processor integrated into the same package. However, details of these processors remain undisclosed.
Withholding specifications is a tactic Huawei has employed previously to deter Western scrutiny. Huawei has been facing strict sanctions from the U.S. government, restricting its access to certain chips and chip-making technologies. On May 7th, U.S. authorities revoked Intel’s and Qualcomm’s licenses to supply semiconductor chips used in laptops and handsets to Huawei, effective immediately.
The development board is reported to be designed for a diverse user base, including consumers, developers, and students. It comes preinstalled with the openEuler OS, the openGauss database, and a range of internet, productivity, and software development tools.
Tom’s Hardware has learned that the Kunpeng Pro uses a custom Huawei Kunpeng CPU paired with an AI FPGA processor. The CPU is believed to be a quad-core Arm model, while the AI FPGA processor reportedly offers 8 TOPS (Tera Operations Per Second, i.e., trillions of operations per second) of AI computing power.
Qualcomm’s Snapdragon X Elite, launched in late 2023, delivers peak AI computing performance of 45 TOPS, while Apple’s M4, released in early May, is rated at 38 TOPS.
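For context, the gap between these reported figures is straightforward to compute. A minimal sketch, noting that the Kunpeng Pro’s 8 TOPS number is a report-based estimate rather than an official specification:

```python
# Relative peak AI throughput based on the figures reported above.
# The Kunpeng Pro value (8 TOPS) comes from press reports, not an
# official Huawei specification.
chips = {
    "OrangePi Kunpeng Pro (AI FPGA)": 8,
    "Qualcomm Snapdragon X Elite": 45,
    "Apple M4": 38,
}
baseline = chips["OrangePi Kunpeng Pro (AI FPGA)"]
for name, tops in chips.items():
    print(f"{name}: {tops} TOPS ({tops / baseline:.2f}x baseline)")
```

On these numbers, the Snapdragon X Elite and M4 deliver roughly 5.6x and 4.8x the Kunpeng Pro’s reported AI throughput, respectively.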
(Photo credit: Orange Pi)
News
At the Google I/O 2024 developer conference on Tuesday, Google unveiled its 6th-generation custom chip, the Trillium TPU, which is scheduled to hit the market later this year, according to a report by TechCrunch.
According to information provided by Google on its website, Trillium boasts a 4.7x increase in peak compute performance per chip compared to TPU v5e. Google has also doubled the High Bandwidth Memory (HBM) capacity and bandwidth, and likewise doubled the Interchip Interconnect (ICI) bandwidth between chips.
Additionally, Trillium features the third-generation SparseCore, a dedicated accelerator for processing large embeddings, aimed at handling advanced ranking and recommendation workloads. Trillium also achieves 67% higher energy efficiency than TPU v5e.
Trillium can scale up to 256 TPUs within a single high-bandwidth, low-latency pod. It also incorporates multislice technology, allowing Google to interlink thousands of chips into a supercomputer connected by a data center network capable of processing petabits of data per second.
In addition to Google, major cloud players such as AWS, Meta, and Microsoft have also moved to develop their own AI chips.
In late 2023, Microsoft unveiled two custom-designed chips: the Microsoft Azure Maia AI Accelerator, optimized for AI tasks and generative AI, and the Microsoft Azure Cobalt CPU, an Arm-based processor tailored to run general-purpose compute workloads on the Microsoft Cloud. The former is reportedly manufactured using TSMC’s 5nm process.
In May 2023, Meta also unveiled the Meta Training and Inference Accelerator (MTIA) v1, its first-generation AI inference accelerator designed in-house with Meta’s AI workloads in mind.
AWS has also jumped into the AI chip market: in November 2023, it released Trainium2, a chip for training AI models.
(Photo credit: Google)
News
NOR Flash manufacturer Wuhan Xinxin Semiconductor Manufacturing Co. (XMC) recently disclosed an IPO counseling filing with the Hubei Securities Regulatory Bureau, according to the official website of the China Securities Regulatory Commission. Its recently announced bidding project may signal its ambition to become China’s first HBM foundry, according to a report by Chinese media outlet Semi Insights.
As per information from its website, XMC provides 12-inch foundry services for NOR Flash, CIS, and Logic applications with processes of 40 nanometers and above. Originally a wholly-owned subsidiary of Yangtze Memory Technologies (YMTC), XMC announced in March its first external financing round, increasing its registered capital from approximately CNY 5.782 billion to about CNY 8.479 billion. Its IPO counseling filing also indicates that it is still majority-owned by YMTC, with a shareholding ratio of 68.1937%.
According to market sources cited in the same report, XMC’s initiation of external financing and IPO plan is primarily aimed at supporting the significant expansion during a crucial development phase for YMTC. Given the substantial scale of YMTC, completing an IPO within three years poses challenges. Therefore, XMC was chosen as the IPO entity to enhance financing channels.
It is noteworthy that XMC has also announced its latest bidding project for HBM (High Bandwidth Memory)-related advanced packaging technology R&D and production line construction, according to local media.
The project points to the company’s capability to apply three-dimensional integrated multi-wafer stacking technology to develop domestically produced HBM products with higher capacity, greater bandwidth, lower power consumption, and higher production efficiency. With plans to add 16 sets of equipment, XMC’s latest project aims for a monthly output capacity of over 3,000 12-inch wafers, signaling its ambition to become China’s first HBM foundry.
On December 3, 2018, XMC announced the successful development of its three-dimensional wafer stacking technology based on its three-dimensional integration technology platform. This marks a significant advancement for the company in the field of three-dimensional integration technology, enabling higher density and more complex chip integration.
Currently, XMC has made considerable progress in the research and development of three-dimensional integrated multi-wafer stacking technology, as evidenced by the successful development of three-wafer stacking technology, the application of three-dimensional integration technology in back-illuminated image sensors, advancements in HBM technology research and industrialization efforts, and breakthroughs in its 3D NAND project.
(Photo credit: XMC)
News
In 2023, NVIDIA secured a large number of AI chip orders through sales of data center GPUs, making it the most prominent company in the AI field for the year. Against this backdrop, many chip design companies have been eyeing the AI chip market, aiming to seize the opportunities presented by AI development and achieve greater profits.
According to a report from Nikkei Asia, Arm, a subsidiary of SoftBank Group, is developing an AI processor to be used in SoftBank’s data centers. The report states that Arm has already established a dedicated AI processor department at its UK headquarters and hopes to have a prototype ready for official release by spring 2025, with mass production at wafer foundries starting in fall 2025. SoftBank will bear the initial development costs, expected to reach several hundred billion yen.
As per the plan, SoftBank intends to build Arm-based data centers in the United States, Europe, Asia-Pacific, and the Middle East by 2026. Given the high power supply requirements of data centers, SoftBank also plans to expand its presence into the power generation sector, developing wind and solar energy facilities and exploring next-generation nuclear fusion technology.
The report indicates that once Arm’s AI processors enter mass production, the AI chip business might be spun off. Additionally, SoftBank aims to implement a broader strategy in the AI field to enhance the competitiveness of its data center, robotics, and power generation divisions, and to facilitate innovation. SoftBank CEO Masayoshi Son has emphasized that the application of general artificial intelligence will fundamentally transform industries such as shipping, pharmaceuticals, finance, manufacturing, and logistics.
Statistics show that the AI chip market is expanding rapidly, projected to surge from USD 30 billion in 2024 to over USD 100 billion by 2029, and to exceed USD 200 billion by 2032. Therefore, despite NVIDIA’s current lead, various chip companies are striving to meet demand for AI chips across different industries, seeking opportunities in this fast-growing market.
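The growth implied by those projections can be sanity-checked as a compound annual growth rate; a minimal sketch using only the market estimates cited above:

```python
# Implied compound annual growth rates (CAGR) from the projections above:
# USD 30B (2024) -> USD 100B (2029) -> USD 200B (2032).
def cagr(start, end, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

print(f"2024-2029: {cagr(30, 100, 5):.1%} per year")   # about 27% annually
print(f"2029-2032: {cagr(100, 200, 3):.1%} per year")  # about 26% annually
```

Both legs of the projection imply a mid-20s percent annual growth rate, consistent with a market more than tripling over five years and then doubling over the following three.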