Maia


2024-05-08

[News] Rise of In-House Chips: 5 Tech Giants In the Front

With the skyrocketing demand for AI, cloud service providers (CSPs) are hastening the development of in-house chips. Apple, making a surprising move, is actively developing a data center-grade chip codenamed “Project ACDC,” signaling its foray into the realm of AI accelerators for servers.

According to a report from The Wall Street Journal, Apple is developing an AI accelerator chip for data center servers under the project name “Project ACDC.” Sources familiar with the matter revealed that Apple is collaborating closely with TSMC, though the timing of the new chip’s release remains uncertain.

Industry sources cited by Commercial Times disclosed that Apple’s AI accelerator chip will be developed on TSMC’s 3-nanometer process. Servers equipped with the chip are expected to debut next year, further enhancing the performance of Apple’s data centers and future cloud-based AI tools.

The same Commercial Times report reveals that cloud service providers (CSPs) frequently choose TSMC’s 5-nanometer and 7-nanometer processes for their in-house chips, capitalizing on TSMC’s mature advanced nodes to protect their profit margins. It also highlights that major industry players including Microsoft, AWS, Google, Meta, and Apple rely on TSMC’s advanced processes and packaging, which contributes significantly to the company’s performance.

Apple has consistently been an early adopter of TSMC’s most advanced processes, relying on their stability and technological leadership. Adopting the 3-nanometer process and CoWoS advanced packaging next year is regarded as the most reasonable path for Apple, one that would also help boost TSMC’s 3-nanometer capacity utilization.

Generative AI models are rapidly evolving, enabling businesses and developers to address complex problems and discover new opportunities. However, large-scale models with billions or even trillions of parameters pose more stringent requirements for training, tuning, and inference.
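For a rough sense of why scale matters, the back-of-the-envelope sketch below estimates the memory needed just to hold a model’s weights at common numeric precisions. The model sizes are arbitrary illustrative examples, not figures from any vendor in this article:

```python
# Back-of-the-envelope memory needed just to store model weights.
# Real deployments also need memory for activations, KV caches, and
# (for training) gradients and optimizer state, so these are lower bounds.

BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Gigabytes required to hold the weights alone."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

# 7B, 70B, and a hypothetical trillion-parameter model
for params in (7e9, 70e9, 1e12):
    row = ", ".join(
        f"{p}: {weight_memory_gb(params, p):,.0f} GB" for p in BYTES_PER_PARAM
    )
    print(f"{params / 1e9:,.0f}B params -> {row}")
```

Even before any computation happens, a trillion-parameter model needs roughly 2 TB of memory at FP16 just for its weights, which is why accelerators, memory capacity, and interconnects must be provisioned together.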

Citing industry sources, Commercial Times notes that Apple’s entry into the in-house chip arena comes as no surprise, given that giants like Google and Microsoft have long deployed in-house chips and have successively launched iterative products.

In April, Google unveiled its next-generation AI accelerator, TPU v5p, aimed at accelerating cloud-based tasks and enhancing the efficiency of online services such as search, YouTube, Gmail, Google Maps, and Google Play Store. It also aims to improve execution efficiency by integrating cloud computing with Android devices, thereby enhancing user experience.

At the end of last year, AWS introduced two in-house chips, Graviton4 and Trainium2, to strengthen energy efficiency and computational performance across a range of generative AI applications.

Microsoft also introduced the Maia chip, designed for processing OpenAI models, Bing, GitHub Copilot, ChatGPT, and other AI services.

Meta, on the other hand, completed its second-generation in-house chip, MTIA, designed for tasks related to AI recommendation systems, such as content ranking and recommendations on Facebook and Instagram.


(Photo credit: Apple)

Please note that this article cites information from The Wall Street Journal and Commercial Times.

2023-11-23

[Insights] Microsoft Unveils In-House AI Chip, Poised for Competitive Edge with a Powerful Ecosystem

Microsoft announced its in-house AI chip, Azure Maia 100, at the Ignite developer conference in Seattle on November 15, 2023. The chip is designed to handle OpenAI models, Bing, GitHub Copilot, ChatGPT, and other AI services. Support for Copilot and the Azure OpenAI Service is expected to commence in early 2024.

TrendForce’s Insights:

  1. Maia 100 Likely Emphasizes Inference; Microsoft’s Robust Ecosystem Advantage Is Poised to Emerge Gradually

Microsoft has not disclosed detailed specifications for Azure Maia 100. Currently, it is known that the chip will be manufactured using TSMC’s 5nm process, featuring 105 billion transistors and supporting at least INT8 and INT4 precision formats. While Microsoft has indicated that the chip will be used for both training and inference, the computational formats it supports suggest a focus on inference applications.

This inference focus is suggested by Maia 100’s support for INT4, a low-precision computational format that is uncommon among other CSP manufacturers’ AI ASICs. Lower precision reduces power consumption and shortens inference times, improving efficiency; the trade-off is a loss of accuracy.
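As a simplified, concrete illustration of that trade-off, the sketch below applies symmetric per-tensor quantization to random weights. This is not Maia 100’s actual arithmetic, which Microsoft has not disclosed; it only shows why INT4 halves storage relative to INT8 at the cost of a larger rounding error:

```python
import numpy as np

def quantize_dequantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric per-tensor quantization: floats -> signed ints -> floats."""
    qmax = 2 ** (bits - 1) - 1           # 127 for INT8, 7 for INT4
    scale = np.abs(x).max() / qmax       # one scale factor for the whole tensor
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                     # dequantize to measure the error

rng = np.random.default_rng(0)
weights = rng.standard_normal(10_000).astype(np.float32)

for bits in (8, 4):
    err = np.abs(weights - quantize_dequantize(weights, bits)).mean()
    print(f"INT{bits}: mean absolute rounding error = {err:.4f}")
```

With 4 bits there are only 15 representable levels instead of 255, so each weight lands farther from its original value; production INT4 schemes typically mitigate this with finer-grained (per-channel or per-group) scale factors.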

Microsoft initiated its in-house AI chip project, “Athena,” in 2019, developing it in collaboration with OpenAI. Like other CSPs’ in-house chips, Azure Maia 100 aims to reduce costs and decrease dependency on NVIDIA. Despite entering the field of proprietary AI chips later than its primary competitors, Microsoft’s formidable ecosystem is expected to gradually translate into a competitive advantage.

  2. U.S. CSP Manufacturers Unveil In-House AI Chips, Meta Exclusively Adopts RISC-V Architecture

Google led the way with its first in-house AI chip, TPU v1, introduced as early as 2016, and has since iterated to the fifth generation with TPU v5e. Amazon followed suit in 2018 with Inferentia for inference, introduced Trainium for training in 2020, and launched the second generation, Inferentia2, in 2023, with Trainium2 expected in 2024.

Meta plans to debut its inaugural in-house AI chip, MTIA v1, in 2025. Given the releases from major competitors, Meta has expedited its timeline and is set to unveil the second-generation in-house AI chip, MTIA v2, in 2026.

Unlike its peers, Meta adopts the RISC-V architecture for both MTIA v1 and MTIA v2, whereas other CSP manufacturers opt for the Arm architecture. RISC-V is a fully open-source architecture that requires no instruction-set licensing fees, and its instruction count (approximately 200) is far lower than Arm’s (approximately 1,000).

This choice allows chips utilizing the RISC-V architecture to achieve lower power consumption. However, the RISC-V ecosystem is currently less mature, resulting in fewer manufacturers adopting it. Nevertheless, with the growing trend in data centers towards energy efficiency, it is anticipated that more companies will start incorporating RISC-V architecture into their in-house AI chips in the future.

  3. The Battle of AI Chips Ultimately Relies on Ecosystems, Microsoft Poised for Competitive Edge

The competition among AI chips will ultimately hinge on ecosystems. NVIDIA introduced the CUDA architecture in 2006, and it is now nearly ubiquitous in educational institutions; as a result, almost all AI engineers encounter CUDA during their academic training.

In 2017, NVIDIA further solidified its ecosystem by launching the RAPIDS AI acceleration integration solution and the GPU Cloud service platform. Notably, over 70% of NVIDIA’s workforce comprises software engineers, emphasizing its status as a software company. The performance of NVIDIA’s AI chips can be further enhanced through software innovations.

Microsoft, for its part, possesses a robust ecosystem of its own in Windows. The Intel Arc A770 GPU recently showcased a 1.7x performance improvement in AI-driven Stable Diffusion when optimized through Microsoft Olive, demonstrating that, like NVIDIA, Microsoft can enhance GPU performance through software.
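As a loose illustration of this kind of software-level tuning, the sketch below loads an ONNX model with ONNX Runtime and selects the best available hardware execution provider; optimization tools such as Microsoft Olive emit ONNX graphs that are consumed this way. The model path is a placeholder, and the code assumes a single float32 input:

```python
import numpy as np
import onnxruntime as ort

# Placeholder path: in practice this graph would come from an optimizer
# such as Microsoft Olive (operator fusion, FP16/INT8 conversion, etc.).
MODEL_PATH = "model.onnx"

# Prefer GPU-backed providers when present; fall back to CPU.
available = ort.get_available_providers()
preferred = [p for p in ("DmlExecutionProvider",   # DirectML (Windows GPUs)
                         "CUDAExecutionProvider",  # NVIDIA GPUs
                         "CPUExecutionProvider") if p in available]

session = ort.InferenceSession(MODEL_PATH, providers=preferred)

# Build a dummy input matching the model's first input signature,
# substituting 1 for any dynamic (symbolic) dimensions.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {inp.name: dummy})
print("Ran on:", session.get_providers()[0],
      "| output shapes:", [o.shape for o in outputs])
```

The same model file runs unchanged across providers, which is the point: the performance gain comes from the software stack (graph optimization plus provider selection) rather than from changing the hardware.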

Consequently, Microsoft’s in-house AI chips are poised to achieve superior performance in software collaboration compared to other CSP manufacturers, providing Microsoft with a competitive advantage in the AI competition.


2023-11-17

[News] Microsoft’s First In-House AI Chip “Maia” Produced on TSMC’s 5nm

On the 15th, Microsoft introduced its first in-house AI chip, “Maia.” The move marks the entry of the world’s second-largest cloud service provider (CSP) into the domain of self-developed AI chips. Concurrently, Microsoft introduced the cloud computing processor “Cobalt,” set to be deployed alongside Maia in selected Microsoft data centers early next year. Both chips are produced on TSMC’s advanced 5nm process, as reported by UDN News.

Amidst the global AI fervor, the trend of CSPs developing their own AI chips has gained momentum. Key players like Amazon, Google, and Meta have already ventured into this territory. Microsoft, positioned as the second-largest CSP globally, joined the league on the 15th, unveiling its inaugural self-developed AI chip, Maia, at the annual Ignite developer conference.

These AI chips developed by CSPs are not intended for external sale; rather, they are exclusively reserved for in-house use. However, given the commanding presence of the top four CSPs in the global market, a significant business opportunity unfolds. Market analysts anticipate that, with the exception of Google—aligned with Samsung for chip production—other major CSPs will likely turn to TSMC for the production of their AI self-developed chips.

TSMC maintains its consistent policy of not commenting on specific customer products and order details.

TSMC’s recent earnings call disclosed that the 5nm process accounted for 37% of shipments in Q3 this year, the largest contribution of any node. Since beginning 5nm mass production in 2020, TSMC has introduced variants such as N4, N4P, N4X, and N5A, continually reinforcing its 5nm family capabilities.

Maia is tailored for running large language models. According to Microsoft, it will initially power the company’s own services, such as the $30-per-month AI assistant “Copilot,” and will later offer Azure cloud customers a customizable alternative to Nvidia chips.

Rani Borkar, Corporate VP of Azure Hardware Systems & Infrastructure at Microsoft, revealed that Microsoft has been testing the Maia chip with its Bing search engine and Office AI products. Microsoft has been relying on Nvidia chips to train GPT models in collaboration with OpenAI, and Maia is currently undergoing testing.

Gulia, Executive VP of Microsoft Cloud and AI Group, emphasized that starting next year, Microsoft customers using Bing, Microsoft 365, and Azure OpenAI services will witness the performance capabilities of Maia.

While actively advancing its in-house AI chip development, Microsoft underscores its commitment to offering cloud services to Azure customers utilizing the latest flagship chips from Nvidia and AMD, sustaining existing collaborations.

The cloud computing processor Cobalt adopts the Arm architecture with 128 cores and boasts capabilities comparable to offerings from Intel and AMD. Developed with chip design techniques honed in devices such as smartphones for greater energy efficiency, Cobalt aims to challenge major cloud competitors, including Amazon.

(Image: Microsoft)

