News
On February 21st, Intel Foundry Direct Connect 2024 took place in San Jose, California. During the conference, Intel announced the launch of Intel Foundry, a system-level foundry tailored for the AI era. The company unveiled seven new process nodes beyond 2024, including the next-generation Intel 14A and 14A-E processes, which will utilize High-NA EUV equipment.
In rebuilding Intel, CEO Pat Gelsinger had also articulated a vision to establish a world-class foundry and become a major provider of chip capacity in the United States and Europe. Now, three years later, this vision is becoming a reality.
The newly introduced Intel Foundry is a rebranded and restructured organizational model. Gelsinger emphasized that Intel is not merely fixing a company but “establishing two vibrant new organizations”: Intel Foundry and Intel Products. Intel Foundry is dedicated to serving both internal and external customers at scale and to establishing a supply chain that ensures capacity.
Gelsinger stated that Intel Foundry is striving to become the world’s second-largest foundry by 2030. According to TrendForce data for the third quarter of 2023, the world’s top three foundries were TSMC, Samsung, and GlobalFoundries, with Intel Foundry Services (IFS) ranking ninth at the time.
During the conference, Intel expanded its process technology roadmap, adding evolved versions of Intel 14A and several specialized nodes.
Intel also confirmed that its “Four Years, Five Process Nodes” roadmap is progressing steadily and that it will be the first in the industry to offer backside power delivery. Intel expects to regain process leadership by 2025 with the Intel 18A process node.
The new roadmap includes evolved versions of Intel 3, Intel 18A, and Intel 14A technologies. For instance, Intel 3-T is optimized with through-silicon via (TSV) technology for 3D advanced packaging designs and is expected to be production-ready soon.
Intel also highlighted its progress in mature process nodes, such as the newly announced 12-nanometer node developed in collaboration with UMC in January.
Regarding this collaboration, TrendForce believes that this partnership, which leverages UMC’s diversified technological services and Intel’s existing factory facilities for joint operation, not only aids Intel in transitioning from an IDM to a foundry business model but also brings a wealth of operational experience and enhances manufacturing flexibility.
Intel Foundry plans to introduce a new node every two years, with evolved node versions along the way, helping customers improve their products through Intel’s leading process technology.
Additionally, Intel Foundry announced the addition of FCBGA 2D+ to the technology portfolio of Intel Foundry Advanced System Assembly and Test (Intel Foundry ASAT), which also includes FCBGA 2D, EMIB, Foveros, and Foveros Direct technologies.
Intel’s clients have reportedly expressed support for Intel’s system-level foundry services. Satya Nadella, Chairman and CEO of Microsoft, announced during the Intel Foundry Direct Connect conference that Microsoft plans to utilize Intel’s 18A process to manufacture a chip of its own design.
Satya Nadella stated, “We are in the midst of a very exciting platform shift that will fundamentally transform productivity for every individual organization and the entire industry.”
Nadella further mentioned, “To achieve this vision, we need a reliable supply of the most advanced, high-performance and high-quality semiconductors. That’s why we are so excited to work with Intel Foundry, and why we have chosen a chip design that we plan to produce on Intel 18A process.”
Intel Foundry has amassed a substantial number of client design cases across various processes, including Intel 18A, Intel 16, and Intel 3, as well as Intel Foundry ASAT, which encompasses advanced packaging.
Overall, the anticipated lifetime deal value for Intel Foundry in wafer manufacturing and advanced packaging surpasses USD 15 billion.
IP (Intellectual Property) and EDA (Electronic Design Automation) partners Synopsys, Cadence, Siemens, Ansys, Lorentz, and Keysight have announced that tools and IP are ready to help foundry customers accelerate advanced chip designs based on Intel’s 18A process, featuring the industry-first backside power delivery solution. Furthermore, these partners have confirmed the availability of their EDA and IP solutions across various Intel node families.
Additionally, several suppliers have announced plans to collaborate on assembly technologies and design flows for Intel’s EMIB 2.5D packaging technology. These EDA solutions will ensure Intel can swiftly develop and deliver advanced packaging solutions to its customers.
Intel has also unveiled the “Emerging Business Initiative” (EBI), which involves collaboration with Arm to provide advanced foundry services for system-on-chip (SoC) designs based on the Arm architecture. This initiative aims to support startups developing Arm-based technology by offering essential IP, manufacturing support, and financial assistance. It provides an important opportunity for both Arm and Intel to foster innovation and development in the industry.
Intel’s system-level foundry model offers optimization from factory networks to software. Intel and its ecosystem provide continuously improving technologies, reference designs, and new standards, enabling customers to innovate at the system level.
Stuart Pann, Senior Vice President of Intel Foundry, stated, “We are offering a world-class foundry, delivered from a resilient, more sustainable and secure source of supply, and complemented by unparalleled systems of chips capabilities. Bringing these strengths together gives customers everything they need to engineer and deliver solutions for the most demanding applications.”
In terms of sustainability, Intel aims to be the leading foundry in the industry. In 2023, Intel’s global factories achieved an estimated 99% renewable energy usage rate, according to preliminary figures.
At the Intel Foundry Direct Connect conference, Intel reiterated its commitment to reaching 100% renewable energy usage, water positive status, and zero landfill waste by 2030. Additionally, Intel emphasized its commitment to achieving net-zero Scope 1 and Scope 2 greenhouse gas (GHG) emissions by 2040 and net-zero upstream emissions of Scope 3 GHG by 2050.
Read more
(Photo credit: Intel)
News
NVIDIA, the global AI chip giant, released its financial report on February 21st, surpassing profit and sales expectations with remarkable 265% year-over-year revenue growth, a historic high. Moreover, the company anticipates that revenue for the current quarter will exceed expectations.
NVIDIA announced fourth-quarter revenue of USD 22.1 billion, exceeding expectations of USD 20.62 billion. As per data from the London Stock Exchange Group (LSEG), adjusted earnings per share for the fourth quarter stood at USD 5.16, surpassing expectations of USD 4.64 per share.
Furthermore, NVIDIA anticipates sales of USD 24 billion for the current quarter, whereas analysts polled by LSEG project earnings per share of USD 5.00 and sales of USD 22.17 billion. Net profit for the quarter amounted to USD 12.29 billion, or USD 4.93 per share, marking a 769% increase from the same period last year, when it was USD 1.41 billion, or USD 0.57 per share.
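For readers who want to check the growth math, the year-over-year comparison can be roughly reproduced from the rounded figures quoted in this article. The short Python sketch below is purely illustrative; because it uses the article’s rounded numbers rather than NVIDIA’s exact reported values, its results land close to, but not exactly on, the official percentages.

    # Reproduce NVIDIA's year-over-year growth from the rounded figures quoted above.
    # Note: these are the article's rounded numbers, not NVIDIA's exact reported values.

    def pct_change(current, base):
        """Percentage change from base to current."""
        return (current - base) / base * 100

    # Q4 net profit: USD 12.29 billion vs. USD 1.41 billion a year earlier.
    print(f"Net profit growth: {pct_change(12.29, 1.41):.0f}%")   # ~772%, close to the reported 769%

    # Q4 revenue: USD 22.1 billion vs. the USD 20.62 billion consensus estimate.
    print(f"Revenue beat vs. consensus: {pct_change(22.1, 20.62):.1f}%")  # ~7.2% above expectations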
NVIDIA attributed its 265% revenue growth compared to a year ago to robust sales of server artificial intelligence chips, especially its “Hopper” chips like the H100.
According to reports cited by Liberty Times Net, NVIDIA CEO Jensen Huang, during a conference call with industry analysts, addressed concerns among investors about the company’s ability to sustain this level of growth or sales throughout the year.
Huang told analysts that “fundamentally, the conditions are excellent for continued growth” in 2025 and beyond. He further cited strong underlying conditions driven by generative AI and a broader industry shift toward NVIDIA’s accelerators, which is expected to keep demand for the company’s GPUs high.
Previously, as reported by Economic Daily News, when discussing the major trends in AI, Huang pointed out that AI will operate in smartphones, computers, robots, automobiles, as well as in the cloud and data centers. Huang emphasized that NVIDIA is a pioneer in accelerating computation and AI computing, and in the next decade, he envisions a reshaping of computation, with every industry being impacted.
Analysts cited in the report from Liberty Times Net anticipate that major supplier TSMC’s capacity expansion in advanced packaging in the first half of the year will help NVIDIA overcome core supply bottlenecks and provide more chips to customers.
Read more
(Photo credit: NVIDIA)
News
Foundry is a crucial sector in the semiconductor industry and a focal point of attention for many industry professionals. Recently, three foundries released their outlooks for the first half of 2024, all signaling caution for the first quarter.
According to Taiwanese news outlet Commercial Times, United Microelectronics Corporation (UMC), Powerchip Semiconductor Manufacturing Corporation (PSMC), and Vanguard International Semiconductor Corporation (VIS) anticipate a subdued first quarter due to factors such as off-season effects and holidays.
With conservative estimates on wafer shipments, average selling prices (ASP), and gross margins for the first quarter, there remains a high likelihood of a continued decline in performance compared to the previous quarter.
VIS stated in a recent conference that semiconductor demand entered the traditional off-season at the beginning of the year.
It is expected that the supply chain will continue inventory adjustments and maintain a cautious approach to orders. Assuming an average exchange rate of NTD 30.9 against the US dollar, shipments are expected to decrease by 6-8% quarter over quarter, with average selling prices roughly flat and gross margins between 21% and 23%.
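Since wafer revenue is essentially shipments multiplied by average selling price, flat pricing means the guided 6-8% drop in shipments flows almost one-for-one into the sequential revenue decline. The sketch below illustrates that reasoning; the prior-quarter revenue figure in it is a hypothetical placeholder, not a number reported by VIS.

    # Translate VIS's Q1 guidance (shipments down 6-8%, ASP roughly flat) into revenue terms.
    # The base revenue below is a hypothetical placeholder used only for illustration.

    prior_quarter_revenue = 100.0           # hypothetical base revenue (arbitrary units)
    asp_change = 0.00                       # ASP guided roughly flat
    for shipment_change in (-0.06, -0.08):  # shipments guided down 6-8% quarter over quarter
        revenue = prior_quarter_revenue * (1 + shipment_change) * (1 + asp_change)
        print(f"Shipments {shipment_change:+.0%} -> revenue {revenue:.1f} "
              f"({revenue / prior_quarter_revenue - 1:+.0%} QoQ)")
    # With ASP flat, revenue falls by roughly the same 6-8% as shipments.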
VIS believes that the industry is still undergoing inventory adjustments, and the overall economic situation remains sluggish. Currently, the visibility of the market is limited to only two to three months. In the first quarter, due to continuous inventory adjustments in the supply chain and a cautious approach to ordering, capacity utilization will decrease to 50%.
In addition, regarding investment in 12-inch fabs, VIS Chairman Leuh Fang stated that, given the significant investment such fabs require, there must be firm demand and a secured technology source before the company decides to proceed with construction.
Currently, the decision is still in the cautious evaluation stage, and no plants will be built hastily until the technology sources are confirmed.
PSMC’s General Manager, Brian Shieh, stated during a mid-January earnings call that the company expects a seasonal decline of approximately 5-6% in revenue for the first quarter due to fewer working days.
Regarding inventory, PSMC noted that client inventory levels are currently normal, with the semiconductor manufacturing segment performing relatively well. It anticipates that capacity utilization for the first quarter of 2024 could rebound to 70-75%, offering promising prospects for operations in the latter half of the year.
Overall, PSMC aims for a capacity utilization rate of over 90% for the full year, with the goal of continuously filling the new capacity at the Tongluo plant in the second half of the year. The company estimates that the Tongluo plant can be fully operational in the latter half of the year, primarily focusing on 55nm and 40nm logic products.
UMC forecasts a modest increase of 2-3% in wafer shipments for the first quarter of 2024, with ASP quoted in USD expected to decrease by 5%, leading to a slight decline in gross margin to around 30%. This is primarily attributed to pricing adjustments and changes in product mix. Capacity utilization is anticipated to remain in the low 60% range.
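Because revenue scales with the product of shipment volume and price, UMC’s guided 2-3% increase in shipments and 5% decline in USD ASP combine into a slight sequential revenue decline rather than growth. A minimal sketch of that combined effect, using only the guidance percentages above:

    # Combined effect of UMC's Q1 2024 guidance: wafer shipments +2-3%, USD ASP -5%.
    # Revenue moves with shipments x ASP, so the two factors multiply.

    asp_change = -0.05                      # ASP in USD guided down 5%
    for shipment_change in (0.02, 0.03):    # wafer shipments guided up 2-3%
        revenue_factor = (1 + shipment_change) * (1 + asp_change)
        print(f"Shipments {shipment_change:+.0%}, ASP {asp_change:+.0%} "
              f"-> revenue {revenue_factor - 1:+.1%} QoQ")
    # Net effect: a sequential revenue decline of roughly 2-3%.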
In terms of production lines, stable demand is projected for communication and consumer sectors, maintaining flat revenue trends, while automotive and industrial segments are expected to undergo inventory adjustments, resulting in a seasonal decline in revenue for the first quarter of 2024.
UMC estimates that the revenue contribution from special processes will reach 30% in the first quarter of this year, with a single major customer contributing significantly to sales.
Regarding the medium to long-term outlook for the full year, UMC stated that the semiconductor market is expected to grow at a mid-single-digit rate annually, while the foundry industry is forecasted to grow at a high single-digit rate, approaching 10%.
UMC’s revenue is expected to align closely with the growth rate of the wafer foundry industry. UMC holds a cautiously optimistic outlook for demand in 2024, as smartphone and PC inventory levels returned to relatively normal levels in the fourth quarter of 2023.
Additionally, on January 25th of this year, Intel and UMC announced a collaboration to develop 12nm technology. Both parties will share the expenses, with Intel taking charge of operating the facility.
UMC stated that the capacity expansion will significantly impact the company’s operational performance once production starts. The technology is expected to enter the Process Design Kit (PDK) stage in 2025, begin trial production in 2026, and commence supply in 2027. The 12nm technology represents a potential market worth billions, and the collaboration does not include IP licensing.
Regarding this matter, TrendForce believes that this partnership, which leverages UMC’s diversified technological services and Intel’s existing factory facilities for joint operation, not only aids Intel in transitioning from an IDM to a foundry business model but also brings a wealth of operational experience and enhances manufacturing flexibility.
For UMC, this collaboration is a game-changer as it allows the company to agilely leverage FinFET capacity without the pressure of heavy capital investments.
This move positions UMC to carve out a unique niche in the fiercely competitive mature process market. Furthermore, by co-managing Intel’s US facility, UMC can expand its global footprint, smartly diversifying geopolitical risks. This partnership is shaping up to be a win-win for both.
Overall, TrendForce views this alliance as a significant step. UMC brings its plentiful experience in mature processes, while Intel contributes its advanced technological prowess.
This partnership is not just about mutual benefits at the 12nm process level; it’s a watchpoint for potentially deeper and more extensive collaboration in their respective fields of expertise. In the dynamic world of semiconductor manufacturing, this Intel-UMC alliance is a fascinating development to keep an eye on.
Read more
(Photo credit: UMC)
News
Under the formidable impetus of AI, global enterprises are vigorously strategizing for AI chip development, and China is no exception. Who are the prominent AI chip manufacturers in China at present? How do they compare with industry giants like NVIDIA, and what are their unique advantages? A report from TechNews has compiled an overview of eight Chinese manufacturers developing their own AI chips.
In broad terms, AI chips refer to semiconductor chips capable of running AI algorithms. However, in the industry’s typical usage, AI chips specifically denote chips designed with specialized acceleration for AI algorithms, capable of handling large-scale computational tasks in AI applications. Under this concept, AI chips are also referred to as accelerator cards.
Technically, AI chips are mainly classified into three categories: GPU, FPGA, and ASIC. In terms of functionality, AI chips encompass two main types: training and inference. Regarding application scenarios, AI chips can be categorized into server-side and mobile-side, or cloud, edge, and terminal.
The global AI chip market is currently dominated by Western giants, with NVIDIA leading the pack. Industry sources cited by TechNews indicate that NVIDIA nearly monopolizes the AI chip market, holding roughly an 80% share.
China’s AI industry started relatively late, but in recent years, amid the US-China rivalry and strong support from Chinese policies, Chinese AI chip design companies have gradually gained prominence. They have demonstrated relatively outstanding performance in terminal and large model inference.
However, compared to global giants, they still have significant ground to cover, especially in the higher-threshold GPU and large model training segments.
GPUs are general-purpose chips and currently account for the bulk of AI chip usage. General-purpose GPU computing power is widely employed in AI model training and inference. At present, NVIDIA and AMD dominate the GPU market, while representative Chinese companies include Hygon Information Technology, Jingjia Micro, and Enflame Technology.
FPGAs are semi-customized chips known for low latency and short development cycles. Compared to GPUs, they are suitable for multi-instruction, single-data flow analysis, but not for complex algorithm computations. They are mainly used in the inference stage of deep learning algorithms. Frontrunners in this field include Xilinx and Intel in the US, with Chinese representatives including Baidu Kunlunxin and DeePhi.
ASICs are fully customized AI chips with advantages in power consumption, reliability, and integration. Mainstream products include TPU, NPU, VPU, and BPU. Global leading companies include Google and Intel, while China’s representatives include Huawei, Alibaba, Cambricon Technologies, and Horizon Robotics.
In recent years, China has actively invested in the field of self-developed AI chips. Major companies such as Baidu, Alibaba, Tencent, and Huawei have accelerated the development of their own AI chips, and numerous AI chip companies continue to emerge.
Below is an overview of the progress of 8 Chinese AI chip manufacturers:
1. Baidu Kunlunxin
Baidu’s foray into AI chips can be traced back to as early as 2011. After seven years of development, Baidu officially unveiled its self-developed AI chip, Kunlun 1, in 2018. Built on a 14nm process and utilizing the self-developed XPU architecture, Kunlun 1 entered mass production in 2020. It is primarily employed in Baidu’s search engine and Xiaodu businesses.
In August 2021, Baidu announced the mass production of its second-generation self-developed AI chip, Kunlun 2. It adopts a 7nm process and integrates the self-developed second-generation XPU architecture, delivering a performance improvement of 2-3 times compared to the first generation, along with significant enhancements in versatility and ease of use.
The first two generations of Baidu Kunlunxin products have already been deployed in tens of thousands of units. The third-generation product is expected to be unveiled at the Baidu Create AI Developer Conference scheduled for April 2024.
2. T-Head (Alibaba)
Established in September 2018, T-Head is Alibaba’s wholly owned semiconductor chip business. It provides a range of products covering data center chips, IoT chips, processor IP licensing, and more, achieving complete coverage across the chip design chain.
In terms of AI chip deployment, T-Head introduced its first high-performance artificial intelligence inference chip, the HanGuang 800, in September 2019. It is based on a 12nm process and features a proprietary architecture.
In August 2023, Alibaba’s T-Head unveiled its first self-developed RISC-V AI platform, supporting over 170 mainstream AI models, thereby propelling RISC-V into the era of high-performance AI applications.
Simultaneously, T-Head announced a new upgrade of its XuanTie C920 processor, which can accelerate GEMM (General Matrix Multiplication) calculations up to 15 times faster than the vector scheme.
In November 2023, T-Head introduced three new processors on the XuanTie RISC-V platform (C920, C907, R910). These processors significantly enhance acceleration computing capabilities, security, and real-time performance, poised to accelerate the widespread commercial deployment of RISC-V in scenarios and domains such as autonomous driving, artificial intelligence, enterprise-grade SSD, and network communication.
3. Tencent
In November 2021, Tencent announced substantial progress in three chip designs: Zixiao for AI computing, Canghai for image processing, and Xuanling for high-performance networking.
Zixiao has successfully undergone trial production and has been activated. Reportedly, Zixiao employs in-house storage-computing architecture and proprietary acceleration modules, delivering up to 3 times the computing acceleration performance and over 45% cost savings overall.
Zixiao chips are intended for internal use by Tencent and are not available for external sales. Tencent profits by renting out computing power through its cloud services.
Recently, according to sources cited by TechNews, Tencent is considering using Zixiao V1 as an alternative to the NVIDIA A10 chip for AI image and voice recognition applications. Additionally, Tencent is planning to launch the Zixiao V2 Pro chip optimized for AI training to replace the NVIDIA L40S chip in the future.
4. Huawei
Huawei unveiled its Huawei AI strategy and all-scenario AI solutions at the 2018 Huawei Connect Conference. Additionally, it introduced two new AI chips: the Ascend 910 and the Ascend 310. Both chips are based on Huawei’s self-developed Da Vinci architecture.
The Ascend 910, designed for training, utilizes a 7nm process and boasts computational density that is said to surpass the NVIDIA Tesla V100 and Google TPU v3.
On the other hand, the Ascend 310 belongs to the Ascend-mini series and is Huawei’s first commercial AI SoC, catering to low-power consumption areas such as edge computing.
Based on the Ascend 910 and Ascend 310 AI chips, Huawei has introduced the Atlas AI computing solution. As per the Huawei Ascend community, the Atlas 300T product line includes three models corresponding to the Ascend 910A, 910B, and 910 Pro B.
Among them, the 910 Pro B has already secured orders for at least 5,000 units from major clients in 2023, with delivery expected in 2024. Sources cited by the TechNews report indicate that the capabilities of the Huawei Ascend 910B chip are now comparable to those of the NVIDIA A100.
Due to the soaring demand for China-produced AI chips like the Huawei Ascend 910B in China, Reuters recently reported that Huawei plans to prioritize the production of the Ascend 910B. This move could potentially impact the production capacity of the Kirin 9000s chips, which are expected to be used in the Mate 60 series.
5. Cambricon Technologies
Founded in 2016, Cambricon Technologies focuses on the research and technological innovation of artificial intelligence chip products.
Since its establishment, Cambricon has launched multiple chip products covering terminal, cloud, and edge computing fields. Among them, the MLU 290 intelligent chip is Cambricon’s first training chip, utilizing TSMC’s 7nm advanced process and integrating 46 billion transistors. It supports the MLUv02 expansion architecture, offering comprehensive support for AI training, inference, or hybrid artificial intelligence computing acceleration tasks.
The Cambricon MLU 370 is the company’s flagship product, utilizing a 7nm manufacturing process and supporting both inference and training tasks. Additionally, the MLU 370 is Cambricon’s first AI chip to adopt chiplet technology, integrating 39 billion transistors, with a maximum computing power of up to 256TOPS (INT8).
6. Biren Technology
Established in 2019, Biren Technology initially focused on general-purpose intelligent computing in the cloud.
It aims to gradually surpass existing solutions in fields such as artificial intelligence training, inference, and graphics rendering, thereby achieving a breakthrough for China-produced high-end general-purpose computing chips.
In 2021, Biren Technology’s first general GPU, the BR100 series, entered trial production. The BR100 was officially released in August 2022.
Reportedly, the BR100 series is developed based on Biren Technology’s independently developed chip architecture and utilizes a mature 7nm manufacturing process.
7. Horizon Robotics
Founded in July 2015, Horizon Robotics is a provider of smart driving computing solutions in China. It has launched various AI chips, notably the Sunrise and Journey series. The Sunrise series focuses on the AIoT market, while the Journey series is designed for smart driving applications.
Currently, the Sunrise series has advanced to its third generation, comprising the Sunrise 3M and Sunrise 3E models, catering to the high-end and low-end markets, respectively.
In terms of performance, the Sunrise 3 achieves an equivalent standard computing power of 5 TOPS while consuming only 2.5W of power, representing a significant upgrade from the previous generation.
The Journey series has now iterated to its fifth generation. The Journey 5 chip was released in 2021, with global mass production starting in September 2022. Each chip in the series boasts a maximum AI computing power of up to 128 TOPS.
In November 2023, Horizon Robotics announced that the Journey 6 series will be officially unveiled in April 2024, with the first batch of mass-produced vehicle deliveries scheduled for the fourth quarter of 2024.
Several automotive companies, including BYD, GAC Group, Volkswagen Group’s software company CARIAD, Bosch, among others, have reportedly entered into cooperative agreements with Horizon Robotics.
8. Enflame Technology
Enflame Technology, established in March 2018, specializes in cloud and edge computing in the field of artificial intelligence.
Over the past five years, it has developed two product lines, focused on cloud training and cloud inference respectively. In September 2023, Enflame Technology announced the completion of a CNY 2 billion Series D funding round.
In addition, according to reports cited by TechNews, Enflame Technology’s third-generation AI chip products are set to hit the market in early 2024.
Conclusion
Looking ahead, the industry remains bullish on the commercial development of AI, anticipating a substantial increase in the demand for computing power, thereby creating a significant market opportunity for AI chips.
Data cited by TechNews indicates that the global AI chip market reached USD 580 billion in 2022 and is projected to exceed a trillion dollars by 2030.
Leading AI chip manufacturers like NVIDIA are naturally poised to continue benefiting from this trend. At the same time, Chinese AI chip companies also have the opportunity to narrow the gap and accelerate growth within the vast AI market landscape.
Read more
(Photo credit: iStock)
News
Microsoft is reportedly developing a customized network card for AI servers, as per sources cited by The Information. The card is expected to enhance the performance of its in-house AI chip, the Azure Maia 100, while reducing dependency on NVIDIA as the primary supplier of high-performance network cards.
Leading this product initiative at Microsoft is Pradeep Sindhu, co-founder of Juniper Networks. Microsoft acquired Sindhu’s data center technology startup, Fungible, last year. Sindhu has since joined Microsoft and is leading the team in developing this network card.
According to The Information, this network card is similar to NVIDIA’s ConnectX-7 interface card, which supports Ethernet bandwidth of up to 400 Gb/s and is sold alongside NVIDIA GPUs.
Developing high-speed networking equipment tailored specifically for AI workloads may take over a year. If successful, it could reduce the time required for OpenAI to train models on Microsoft AI servers and lower the costs associated with the training process.
In November last year, Microsoft unveiled the Azure Maia 100 for data centers, manufactured using TSMC’s 5-nanometer process. Introduced at the Microsoft Ignite conference, the Azure Maia 100 is an AI accelerator chip designed for tasks such as running OpenAI models, ChatGPT, Bing, GitHub Copilot, and other AI workloads.
Microsoft is also in the process of designing the next generation of the chip. Not only is Microsoft striving to reduce its reliance on NVIDIA, but other companies including OpenAI, Tesla, Google, Amazon, and Meta are also investing in developing their own AI accelerator chips. These companies are expected to compete with NVIDIA’s flagship H100 AI accelerator chips.
Read more
(Photo credit: Microsoft)