IC Design


2024-02-20

[News] AI Market: A Battleground for Tech Giants as Six Major Companies Develop AI Chips

In 2023, “generative AI” was undeniably the hottest term in the tech industry.

The launch of the generative application ChatGPT by OpenAI has sparked a frenzy in the market, prompting various tech giants to join the race.

According to a report from TechNews, NVIDIA currently dominates the market with its AI accelerators, but surging demand has led to a shortage of those accelerators. Even OpenAI intends to develop its own chips to avoid being constrained by a tight supply chain.

Meanwhile, under restrictions arising from the US-China tech war, NVIDIA has offered downgraded versions of its products to Chinese clients, but recent reports suggest these versions have found little favor with Chinese customers.

Instead, Chinese firms are turning to Huawei for assistance or developing their own chips, which they expect to keep pace with the continued advancement of large language models.

In the current wave of AI development, NVIDIA undoubtedly stands as the frontrunner in AI computing power. Its A100/H100 series chips have secured orders from top clients worldwide in the AI market.

According to analyst Stacy Rasgon of the Wall Street investment bank Bernstein Research, each ChatGPT query costs approximately USD 0.04. If ChatGPT queries were to scale to one-tenth of Google’s search volume, the initial deployment would require approximately USD 48.1 billion worth of GPUs, plus about USD 16 billion worth of chips per year to sustain operations, along with a similar annual amount for related chips to execute tasks.
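A rough back-of-envelope sketch helps put the Bernstein figures in perspective. The article only gives the USD 0.04 per-query cost, so the Google query volume below is an assumption (a commonly cited public estimate), used purely for illustration:

```python
# Back-of-envelope check of the Bernstein figures quoted above.
# ASSUMPTION: ~8.5 billion Google searches/day is NOT from the article;
# it is a widely cited public estimate used here only for illustration.
cost_per_query = 0.04                      # USD per ChatGPT query (Bernstein estimate)
assumed_google_daily = 8.5e9               # assumed Google searches per day
chatgpt_daily = assumed_google_daily / 10  # one-tenth of Google's volume
annual_cost = chatgpt_daily * cost_per_query * 365

print(f"~USD {annual_cost / 1e9:.1f} billion per year")  # ~USD 12.4 billion per year
```

Under that assumed volume, annual query-serving cost lands in the low tens of billions of dollars, the same order of magnitude as the USD 16 billion annual chip figure cited above.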

Therefore, whether to reduce costs, decrease overreliance on NVIDIA, or even enhance bargaining power further, global tech giants have initiated plans to develop their own AI accelerators.

According to reports by tech media outlet The Information, citing industry sources, six global tech giants, namely Microsoft, OpenAI, Tesla, Google, Amazon, and Meta, are all investing in developing their own AI accelerator chips, which are expected to compete with NVIDIA’s flagship H100 AI accelerator.

Progress of Global Companies’ In-house Chip Development

  • Microsoft

Rumors surrounding Microsoft’s in-house AI chip development have never ceased.

At the annual Microsoft Ignite 2023 conference, the company finally unveiled the Azure Maia 100 AI chip for data centers and the Azure Cobalt 100 cloud computing processor. In fact, rumors of Microsoft developing an AI-specific chip have been circulating since 2019, aimed at powering large language models.

The Azure Maia 100, introduced at the conference, is an AI accelerator chip designed for tasks such as running OpenAI models, ChatGPT, Bing, GitHub Copilot, and other AI workloads.

According to Microsoft, the Azure Maia 100 is the first-generation product in the series, manufactured on a 5-nanometer process. The Azure Cobalt 100 is an Arm-based cloud computing processor with 128 computing cores, offering up to a 40% performance improvement over the current generation of Arm chips used in Azure. It supports services such as Microsoft Teams and Azure SQL. Both chips are produced by TSMC, and Microsoft is already designing second-generation versions.

  • OpenAI

OpenAI is also exploring the production of in-house AI accelerator chips and has begun evaluating potential acquisition targets. According to earlier reports from Reuters citing industry sources, OpenAI has been discussing various solutions to address the shortage of AI chips since at least 2022.

Although OpenAI has not made a final decision, its options include developing its own AI chips or collaborating more closely with chipmakers such as NVIDIA.

OpenAI has not yet provided official comment on the matter.

  • Tesla

Electric car manufacturer Tesla is also actively involved in the development of AI accelerator chips. Tesla primarily focuses on the demand for autonomous driving and has introduced two AI chips to date: the Full Self-Driving (FSD) chip and the Dojo D1 chip.

The FSD chip is used in Tesla vehicles’ autonomous driving systems, while the Dojo D1 chip is employed in Tesla’s supercomputers, serving as the building block of the Dojo AI training system.

  • Google

Google began secretly developing a chip focused on AI machine learning algorithms as early as 2013 and deployed it in its internal cloud computing data centers to replace NVIDIA’s GPUs.

The custom chip, called the Tensor Processing Unit (TPU), was unveiled in 2016. It is designed to execute large-scale matrix operations for deep learning models used in natural language processing, computer vision, and recommendation systems.

In fact, Google had already constructed the TPU v4 AI chip in its data centers by 2020. However, it wasn’t until April 2023 that technical details of the chip were publicly disclosed.

  • Amazon

As for Amazon Web Services (AWS), the cloud computing service provider under Amazon, it has been a pioneer in developing its own chips since the introduction of the Nitro1 chip in 2013. AWS has since developed three product lines of in-house chips, including network chips, server chips, and AI machine learning chips.

Among them, AWS’s lineup of self-developed AI chips includes the inference chip Inferentia and the training chip Trainium.

AWS also unveiled Inferentia 2 (Inf2) in early 2023, designed specifically for artificial intelligence. It triples computational performance over its predecessor while increasing total accelerator memory by a quarter.

It supports distributed inference through ultra-high-speed direct connections between chips and can handle models of up to 175 billion parameters, making it one of the most capable in-house AI chips on the market today.

  • Meta

Until 2022, Meta continued to execute its AI tasks with CPUs and custom-designed chipsets tailored for accelerating AI algorithms.

However, due to the inefficiency of CPUs compared to GPUs in executing AI tasks, Meta scrapped its plans for a large-scale rollout of custom-designed chips in 2022. Instead, it opted to purchase NVIDIA GPUs worth billions of dollars.

Still, amidst the surge of other major players developing in-house AI accelerator chips, Meta has also ventured into internal chip development.

On May 19, 2023, Meta further unveiled its AI training and inference chip project. The chip consumes only 25 watts, 1/20th of the power of comparable NVIDIA products, and uses the RISC-V open-source architecture. According to market reports, the chip will be produced on TSMC’s 7-nanometer process.

China’s Progress on In-House Chip Development

China’s journey in developing in-house chips presents a different picture. In October last year, the United States expanded its ban on selling AI chips to China.

Although NVIDIA promptly tailored new chips for the Chinese market to comply with US export regulations, recent reports suggest that major Chinese cloud computing clients such as Alibaba and Tencent are less inclined to purchase the downgraded H20 chips. Instead, they have begun shifting their orders to domestic suppliers, including Huawei.

This shift indicates a growing reliance by Chinese companies on domestically developed chips, with some orders for advanced semiconductors moving to local suppliers.

TrendForce indicates that currently about 80% of high-end AI chips purchased by Chinese cloud operators are from NVIDIA, but this figure may decrease to 50% to 60% over the next five years.

If the United States continues to strengthen chip controls in the future, it could potentially exert additional pressure on NVIDIA’s sales in China.


(Photo credit: NVIDIA)

Please note that this article cites information from TechNews, Reuters, and The Information.

2024-01-24

[News] OpenAI Reportedly Expected to Gather with Samsung and SK Group for Deepened Chip Collaboration

Sam Altman, CEO of OpenAI, the developer of ChatGPT, is reportedly expected to visit Korea on January 26th. Altman may hold meetings with top executives from Samsung Electronics and SK Group to strengthen their collaboration on High Bandwidth Memory (HBM).

According to sources cited by The Korea Times, Sam Altman is making slight adjustments to the details of potential meetings with Samsung Electronics’ Chairman Lee Jae-yong and SK Group’s Chairman Chey Tae-won.

OpenAI is set to engage in discussions with Samsung Electronics and SK Group to collaboratively develop artificial intelligence (AI) semiconductors, as part of OpenAI’s strategy to reduce heavy reliance on the AI chip leader NVIDIA.

Reportedly, Altman visited Korea in June of last year, and this upcoming visit is expected to last only about six hours. Most of the time is anticipated to be spent in closed-door meetings with leaders of Korean chip companies or other high-profile executives. 

Altman is keen on strengthening relationships with Korean startups and chip industry players, as it contributes to OpenAI’s development of large-scale language models, powering ChatGPT. OpenAI unveiled its latest model, GPT-4 Turbo, at the end of last year and is currently proceeding with planned upgrades to related services.

Regarding this matter, The Korea Times also cited an SK Group spokesman, who did not confirm whether Chey and Altman will meet.

“Nothing specific has been confirmed over our top management’s schedule with Altman,” an official at SK Group said.


(Photo credit: OpenAI)

Please note that this article cites information from The Korea Times.

2024-01-23

[News] Expert Insights on NVIDIA’s AI Chip Strategy – Downgraded Version Targeted for China, High-End Versions Aimed Overseas

NVIDIA CEO Jensen Huang has reportedly gone to Taiwan once again, with reports suggesting a recent visit to China. Industry sources believe NVIDIA is planning to introduce downgraded AI chips to bypass U.S. restrictions on exporting high-end chips to China. Huang’s visit to China is seen as an effort to alleviate concerns among customers about adopting the downgraded versions.

Experts indicate that due to the expanded U.S. semiconductor restriction on China, NVIDIA’s sales in the Chinese market will decline. To counter this, NVIDIA might adjust its product portfolio and expand sales of high-end AI chips outside China.

The export of NVIDIA’s A100 and H100 chips to China and Hong Kong was prohibited in September 2022. Subsequently, the A800 and H800 chips, downgraded versions designed for the Chinese market, were also banned from export to China in October of the previous year.

In November 2023, NVIDIA’s management acknowledged the significant impact of the U.S. restrictions on its China revenue for the fourth quarter of 2023, but expressed confidence that revenue from other regions could offset this impact.

CEO Jensen Huang revealed in December in Singapore that NVIDIA was closely collaborating with the U.S. government to ensure compliance with export restrictions on new chips for the Chinese market.

According to reports in Chinese media outlet The Paper, Jensen Huang recently made a low-profile visit to China. The market is closely watching NVIDIA’s AI chip strategy in China and the company’s subsequent response to U.S. restrictions. The fate of the H20, L20, and L2, AI chips newly designed to comply with U.S. export regulations, remains uncertain and will be closely observed.

Liu Pei-Chen, a researcher and director at the Taiwan Institute of Economic Research, discussed NVIDIA’s plans to introduce downgraded AI chips in China with a CNA reporter.

The most urgent task, according to Liu, is to persuade Chinese customers to adopt these downgraded AI chips, as Chinese clients believe there isn’t a significant performance gap between NVIDIA’s downgraded chips and domestically designed AI chips.

Liu mentioned that this is likely the reason why Jensen Huang visited China. It serves as an opportunity to promote NVIDIA’s downgraded AI chips and alleviate concerns among Chinese customers. 


(Photo credit: NVIDIA)

Please note that this article cites information from CNA.

2024-01-16

[News] Taiwan’s Chip Act Takes Effect in February, TSMC to Benefit from Historic Tax Incentives

According to a report by TechNews, Taiwan has introduced its largest-ever investment deduction incentives under the “Statute for Industrial Innovation,” often referred to as the “Taiwanese Chip Act.” Articles 10-2 and 72 of the statute came into effect, and the Ministry of Economic Affairs announced that it would accept company applications from February 1 to May 31 this year.

The Ministry of Economic Affairs stated that applications for deductions will be accepted starting this February. The tax incentives include a 25% deduction for research and development expenses and a 5% deduction for expenditures on new equipment for advanced processes, all of which can be deducted from the current year’s corporate income tax.

Eligibility criteria include research and development expenses of at least NT$6 billion, a research and development density of 6%, and expenditures of NT$10 billion on equipment for advanced processes, with no restrictions on industry category.
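The deduction arithmetic above can be illustrated with a simple sketch. The figures used here are the statute’s own minimum eligibility thresholds, treated as a hypothetical company’s spending purely for illustration:

```python
# Illustration of the deduction arithmetic, using the statute's minimum
# eligibility thresholds as a hypothetical company's spending (NT$ figures).
rd_expense = 6e9          # NT$6 billion in R&D spending (threshold)
equipment_spend = 10e9    # NT$10 billion on advanced-process equipment (threshold)

rd_deduction = rd_expense * 0.25              # 25% of R&D expenses
equipment_deduction = equipment_spend * 0.05  # 5% of equipment expenditure
total = rd_deduction + equipment_deduction

print(f"NT${total / 1e9:.1f} billion deductible from current-year income tax")
# NT$2.0 billion deductible from current-year income tax
```

At the threshold level, the two incentives together would offset NT$2 billion of the year’s corporate income tax liability.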

The Ministry of Economic Affairs emphasized that, as the parent law already specifies an effective tax rate of 12% for fiscal year 112 (2023) and a threshold of 15% from fiscal year 113 (2024) onwards, this measure aims to encourage businesses that do not yet meet these tax rate qualifications to strive for them and become eligible for the incentives.

A review panel will be formed to assess whether applying companies meet the criteria for a critical position in the international supply chain and other qualification requirements.

The Ministry of Economic Affairs shared that the application period for Article 10-2 of the Statute for Industrial Innovation is from February 1 to May 31 this year. Companies are required to provide explanatory documents and supporting evidence, including data on products, international market share, rankings, import-export trade, and other statistics, serving as indicators for the assessment of technological innovation and critical positions.

According to the 2022 financial reports of publicly listed companies including TSMC, MediaTek, Realtek, Novatek, Delta Electronics, Nanya Technology, Phison, and Winbond, their research and development expenses and R&D density all meet the application thresholds.

(Image: TSMC)

Please note that this article cites information from TechNews.

2024-01-16

[News] Kirin 9010 Shortage Rumors: Huawei P70 May Use Kirin 9000S in Some Models

According to recent reports, Huawei is expected to unveil its flagship P70 series later this year, alongside the introduction of the new Kirin 9010 chipset. However, there are indications that the older Kirin 9000S might be utilized in a specific model.

Wccftech suggests that the P70 series will include the P70, P70 Pro, and P70 Art, followed by the Mate 70 series. Notably, not all P70 models will feature the new Kirin 9010.

As per insights from the Weibo account Smart Pikachu, the P70 series will boast a custom curved display that is easy on the eyes and power-efficient but lacks 2K resolution, and the standard P70 is being tested with the Kirin 9000S. This may dampen the motivation of users who already own the Mate 60, as they might not find sufficient reason to upgrade to the P70.

Wccftech suggests that the adoption of the 9000S in some models could be attributed to the limited supply of the Kirin 9010. The Kirin 9000S, produced by SMIC using a 7nm process, faces production challenges due to the use of older-generation DUV equipment, resulting in a time-consuming and costly manufacturing process with lower yields.

Despite this, there is a glimmer of hope for Huawei’s pricing competitiveness, as the production cost of the Kirin 9000S is expected to be lower than that of the Kirin 9010. This cost advantage could potentially contribute to Huawei’s goal of reaching an estimated shipment volume of 100 million smartphones in 2024, especially considering the company’s historical strength in offering competitive pricing for its base models.

(Image: Huawei)

Please note that this article cites information from Wccftech and the Weibo account Smart Pikachu.
