News
According to ChinaTimes’ report, Big Fund II is once again making strategic moves, this time targeting a stake in Tsing Micro Technology, a company specializing in reconfigurable computing chips.
Reconfigurable architecture chips possess extensive general computing capabilities, making them essential for addressing high computing power demands. Big Fund II’s investment is a proactive step to position itself in the upcoming computing power market, avoiding potential “bottleneck” crises.
Amidst the pressure of technology restrictions from the United States, Big Fund II, tasked with supporting the development of domestic semiconductor companies in China, has been active in recent investment initiatives.
At the end of October this year, Big Fund II invested CNY 14.5 billion, participating in the capital increase of the memory production project at ChangXin XinQiao.
ChinaFund News reports that Tsing Micro Technology’s main business focuses on innovative research and development, as well as industrial applications of reconfigurable computing chips.
Tsing Micro Technology recently underwent a business change, with the addition of ten institutional shareholders, including National Integrated Circuit Industry Investment Fund Phase II (Big Fund II), GigaDevice, CMB International, and Beijing Zhongguancun Science City Technology Growth Investment Partnership, among others. The registered capital of the company increased from approximately CNY 33.18 million to around CNY 38.9 million, with Big Fund II holding a 5.8824% stake.
The report highlights that Tsing Micro Technology’s technical team originates from the reconfigurable computing team at Beijing Tsinghua University’s Institute of Microelectronics. The team has been selected for three consecutive years (since 2021) in EETimes’s “Silicon 100: Startups Worth Watching” list.
Tsing Micro Technology’s Co-founder and CTO, Peng Ouyang, stated in November 2022 that, facing the explosive growth in demand for computing power driven by artificial intelligence, mainstream GPU chips require significant investment and cannot fill the “computing power black hole” created by the development of large models. He emphasized that reconfigurable computing chips are a solution to this challenge.
Since 2019, Tsing Micro Technology has undergone multiple rounds of financing. In January 2019, angel investors included Baidu and Focus Media. Subsequent Series A funding involved investments from SK Hynix in South Korea, Green Pine Capital Partners, and SenseTime, among others. Series B financing was led by CDB RW Funds, with participation from SenseTime and Legend Capital.
(Photo credit: Tsing Micro Tech)
News
In the global landscape of self-developed chips, the industry has predominantly embraced the Arm architecture for IC design. However, Meta’s decision to employ the RISC-V architecture in its self-developed AI chip has become a topic of widespread discussion. The growing preference for RISC-V is reportedly attributed to three key advantages: low power consumption, high openness, and relatively lower development costs, according to reports from UDN News.
Notably, Meta deploys its in-house AI chip, “MTIA,” exclusively within its data centers to expedite AI computation and inference. In this highly tailored setting, the choice ensures not only robust computational capability but also low power consumption, with anticipated power usage of under 25W per RISC-V core. By strategically combining the RISC-V architecture with GPU accelerators or Arm-based chips, Meta aims to reduce overall power consumption while boosting computing power.
Meta’s confirmation that it is adopting RISC-V architecture from Andes Technology Corporation, a CPU IP and Platform IP supplier from Taiwan, for AI chip development underscores RISC-V’s capability to support high-speed computational tasks and its suitability for integration into advanced manufacturing processes. This move positions RISC-V to make significant inroads into the AI computing market and to stand as the third computing-architecture opportunity, joining the ranks of x86 and Arm.
Regarding the development potential of different chip architectures in the AI chip market, TrendForce points out that GPUs (such as those from NVIDIA and AMD) still dominate the current AI market, followed by the Arm architecture, which is used in major data centers and backed by active investments from NVIDIA, CSPs, and others. RISC-V, on the other hand, represents another niche market, targeting the open-source AI market and niche enterprise applications.
(Image: Meta)
News
The growing importance of advanced processes in wafer foundries is evident, propelled by innovations like AI and high-performance computing. While 3nm chips have entered the consumer market, foundries are already working to advance to 2nm, and recent reports of progress on 1nm chips are further fueling the competition.
2nm Chips: Unveiling in 2025
The race for 2nm chips is in full swing, with major players like TSMC, Samsung, and Rapidus targeting mass production around 2025. TSMC plans to implement GAAFET transistors in its 2nm process by 2025, offering a 15% speed boost and up to a 30% reduction in power consumption compared to N3E, all while increasing chip density by over 15%.
Samsung is on a similar trajectory, planning to unveil its 2nm process by the end of 2025. As reported by media in October, Samsung Foundry said at Semiconductor Expo 2023 in South Korea that it had already initiated discussions with major clients and expects decisions in the near future.
Rapidus aims for trial production of 2nm chips in 2025, scaling up to mass production by 2027. Reports in September indicated that ASML plans to establish a technical support hub in Hokkaido, Japan in 2024. Approximately 50 engineers will be dispatched to Rapidus’ ongoing construction site for the 2nm plant, assisting in the setup of EUV lithography equipment on the trial production line, and providing support for factory activation, maintenance, and inspections.
When Will 1nm Chips Arrive?
Apart from 2nm, the industry’s attention turns to 1nm-level chips. According to industry plans, mass production of 1nm-level chips is expected between 2027 and 2030.
Nikkei recently revealed a collaboration between Japanese chipmaker Rapidus, the University of Tokyo, and the French technological research organization Leti to develop foundational technology for 1nm IC design. Talent exchange and technical sharing are slated to begin in 2024, aiming to establish a supply system for indispensable 1nm chip products, crucial for enhancing autonomous driving and AI performance.
On the other hand, collaborations with IBM for 1nm products are also being considered. The computing performance of 1nm products, anticipated to become mainstream in the 2030s, is expected to surpass 2nm by 10-20%.
TSMC and Samsung are also eyeing 1nm chip development. TSMC’s initial plan to build a 1.4nm process wafer fab in Taiwan faced delays after abandoning the original site selection in October. Samsung aims to launch its 1.4nm process by the end of 2027, with improved performance and power consumption through an increased number of nanosheets per transistor, promising enhanced control over current flow and reduced power leakage.
(Image: TSMC)
Insights
In TrendForce’s report on the self-driving System-on-Chip (SoC) market, the market has witnessed rapid growth and is anticipated to soar to $28 billion by 2026, boasting a Compound Annual Growth Rate (CAGR) of approximately 27% from 2022 to 2026.
In 2022, the global market for self-driving SoCs was approximately $10.8 billion, and it is projected to grow to $12.7 billion in 2023, representing an 18% YoY increase. Fueled by the rising penetration of autonomous driving, the market is expected to reach $28 billion in 2026, with a CAGR of approximately 27% from 2022 to 2026.
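These growth figures can be cross-checked with a quick calculation from the market sizes stated above (the figures below are the report’s published estimates, rounded):

```python
# Cross-check of the self-driving SoC market growth figures cited above.
# Inputs are the market-size estimates from the report, in USD billions.
market_2022 = 10.8
market_2023 = 12.7
market_2026 = 28.0

# Year-over-year growth from 2022 to 2023
yoy = market_2023 / market_2022 - 1  # ~0.18, i.e. the ~18% YoY cited

# Compound annual growth rate over the four years 2022 -> 2026
years = 2026 - 2022
cagr = (market_2026 / market_2022) ** (1 / years) - 1  # ~0.27, i.e. the ~27% CAGR cited

print(f"YoY 2022->2023: {yoy:.1%}")
print(f"CAGR 2022->2026: {cagr:.1%}")
```

Both computed values round to the 18% YoY and 27% CAGR figures quoted in the report.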
Given the slowing growth momentum in the consumer electronics market, self-driving SoC has emerged as a crucial global opportunity for IC design companies.
Due to factors such as regulations, technology, costs, and network speed, most automakers currently operate at Level 2 autonomy. In practical terms, computing power exceeding 100 TOPS (INT8) is sufficient. However, as vehicles typically have a lifespan of over 15 years, future upgrades in autonomy levels will rely on Over-The-Air (OTA) updates, necessitating reserved computing power.
Based on the current choices made by automakers, computing power emerges as a primary consideration. Consequently, NVIDIA and Qualcomm are poised to hold a competitive edge. In contrast, Mobileye’s EyeQ Ultra, set to enter mass production in 2025, offers only 176 TOPS, making it susceptible to significant competitive pressure.
Seamless integration of software and hardware can maximize the computational power of SoCs. Considering the imperative for automakers to reduce costs and enhance efficiency, the degree of integration becomes a pivotal factor in a company’s competitiveness. However, not only does integration matter, but the ability to decouple software and hardware proves even more critical.
Through a high degree of decoupling, automakers can continually update SoC functionality via Over-The-Air (OTA) updates. The openness of the software ecosystem assists automakers in establishing differentiation, serving as a competitive imperative that IC design firms cannot overlook.
News
On the 15th, Microsoft introduced its first in-house AI chip, “Maia.” This move signifies the entry of the world’s second-largest cloud service provider (CSP) into the domain of self-developed AI chips. Concurrently, Microsoft introduced the cloud computing processor “Cobalt,” set to be deployed alongside Maia in selected Microsoft data centers early next year. Both cutting-edge chips are produced using TSMC’s advanced 5nm process, as reported by UDN News.
Amidst the global AI fervor, the trend of CSPs developing their own AI chips has gained momentum. Key players like Amazon, Google, and Meta have already ventured into this territory. Microsoft, positioned as the second-largest CSP globally, joined the league on the 15th, unveiling its inaugural self-developed AI chip, Maia, at the annual Ignite developer conference.
These AI chips developed by CSPs are not intended for external sale; rather, they are exclusively reserved for in-house use. However, given the commanding presence of the top four CSPs in the global market, a significant business opportunity unfolds. Market analysts anticipate that, with the exception of Google—aligned with Samsung for chip production—other major CSPs will likely turn to TSMC for the production of their AI self-developed chips.
TSMC maintains its consistent policy of not commenting on specific customer products and order details.
TSMC’s recent earnings call disclosed that the 5nm process constituted 37% of Q3 shipments this year, the most substantial contribution. Having begun 5nm mass production in 2020, TSMC has introduced various technologies such as N4, N4P, N4X, and N5A in recent years, continually reinforcing its 5nm family capabilities.
Maia is tailored for processing extensive language models. According to Microsoft, it will initially power the company’s own services, such as its $30-per-month AI assistant “Copilot,” and offers Azure cloud customers a customizable alternative to Nvidia chips.
Borkar, Corporate VP, Azure Hardware Systems & Infrastructure at Microsoft, revealed that Microsoft has been testing the Maia chip in Bing search engine and Office AI products. Notably, Microsoft has been relying on Nvidia chips for training GPT models in collaboration with OpenAI, and Maia is currently undergoing testing.
Gulia, Executive VP of Microsoft Cloud and AI Group, emphasized that starting next year, Microsoft customers using Bing, Microsoft 365, and Azure OpenAI services will witness the performance capabilities of Maia.
While actively advancing its in-house AI chip development, Microsoft underscores its commitment to offering cloud services to Azure customers utilizing the latest flagship chips from Nvidia and AMD, sustaining existing collaborations.
Regarding the cloud computing processor Cobalt, a 128-core chip based on the Arm architecture, it boasts capabilities comparable to Intel and AMD offerings. Developed with chip design techniques drawn from devices like smartphones for enhanced energy efficiency, Cobalt aims to challenge major cloud competitors, including Amazon.
(Image: Microsoft)