News
Due to escalating geopolitical risks, Tesla is reportedly asking its suppliers to begin manufacturing components and parts outside of China and Taiwan as early as 2025.
According to a report from Nikkei News on May 23rd, citing sources from the supply chain, suppliers of printed circuit boards, panels, and electronic controllers for models sold outside of China have recently received requests from Tesla to avoid China and Taiwan. The reason cited is the increasing geopolitical risks in the Greater China region prior to the U.S. presidential election. The objective of this move is to create alternative supply sources for markets outside of China to avoid disruptions in the supply chain.
Reportedly, a Taiwan-based supplier of Tesla revealed that Tesla wants all components to be OOC, OOT, meaning ‘out of China’ and ‘out of Taiwan.’ Allegedly, they hope this proposal can be implemented in new projects next year. Tesla is said to have made this request before the US government increased tariffs on Chinese electric cars fourfold to 100%.
Nikkei News’ report also indicates that Tesla has discussed this issue with suppliers in Japan, South Korea, and other Asian countries. A component supplier source cited in the same report mentioned that his company has responded to Tesla’s request by expanding production in Thailand. The source claimed that for many customers like Tesla, the “China Plus One” strategy—which involves diversifying investments beyond China into other countries—also includes avoiding Taiwan.
The report further cited sources indicating that American car manufacturers such as General Motors and Ford are also instructing their suppliers to explore relocating their electronic production lines away from China and Taiwan. However, they have not yet made formal requests similar to Tesla’s.
Another source cited in the report remarked that Tesla is the most proactive among American automakers in wanting to avoid risks associated with China and Taiwan, but implementing the OOC and OOT strategy is indeed challenging and costly.
Tesla has previously placed orders with TSMC for numerous chips related to electric vehicles. For instance, the supercomputer chip “D1” utilizes TSMC’s 7nm technology along with advanced packaging processes.
(Photo credit: Tesla)
News
Tesla’s journey toward autonomous driving necessitates substantial computational power. Earlier today, TSMC confirmed the commencement of production for Tesla’s next-generation Dojo supercomputer training chips, heralding a significant leap in computing power by 2027.
As per a report from TechNews, Elon Musk’s plan reportedly underscores that software is the true key to profitability. Achieving this requires not only efficient algorithms but also robust computing power. In the hardware realm, Tesla takes a dual approach: procuring tens of thousands of NVIDIA H100 GPUs while simultaneously developing its own supercomputing chip, Dojo. TSMC, reportedly responsible for producing the supercomputer chips, has confirmed that production has begun.
As previously reported by TechNews, Tesla primarily focuses on the demand for autonomous driving and has introduced two AI chips to date: the Full Self-Driving (FSD) chip and the Dojo D1 chip. The FSD chip is used in Tesla vehicles’ autonomous driving systems, while the Dojo D1 chip is employed in Tesla’s supercomputers. It serves as a general-purpose CPU for building the AI training chips that power the Dojo system.
At the North American technology conference, TSMC provided detailed insights into semiconductor technology and advanced packaging, enabling the establishment of system-level integration on the entire wafer and creating ultra-high computing performance. TSMC stated that the next-generation Dojo training modules for Tesla have begun production. By 2027, TSMC is expected to offer more complex wafer-scale systems with computing power exceeding current systems by over 40 times.
The core of Tesla’s designed Dojo supercomputer lies in its training modules, wherein 25 D1 chips are arranged in a 5×5 matrix. Manufactured using a 7-nanometer process, each chip can accommodate 50 billion transistors, providing 362 TFLOPs of processing power. Crucially, it possesses scalability, allowing for continuous stacking and adjustment of computing power and power consumption based on software demands.
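As a sanity check on these figures, the aggregate throughput of one training module can be computed directly. This is a back-of-envelope sketch using only the numbers reported above:

```python
# Back-of-envelope estimate of one Dojo training tile's aggregate compute,
# using only the figures cited in the report: 25 D1 chips per tile,
# each rated at 362 TFLOPS.
chips_per_tile = 5 * 5             # D1 chips arranged in a 5x5 matrix
tflops_per_chip = 362              # reported TFLOPS per D1 chip

tile_tflops = chips_per_tile * tflops_per_chip
tile_pflops = tile_tflops / 1000   # 1 PFLOPS = 1000 TFLOPS

print(f"{tile_tflops} TFLOPS per tile = {tile_pflops:.2f} PFLOPS")
# → 9050 TFLOPS per tile = 9.05 PFLOPS
```

The result is consistent with the roughly 9 PFLOPS per training tile commonly cited for Dojo.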
Wafer Integration Offers 40x Computing Power
According to TSMC, the approach for Tesla’s new product differs from the wafer-scale systems provided to Cerebras. Essentially, the Dojo training modules (a 5×5 grid of pretested processors) are placed on a single carrier wafer, with all empty spaces filled in. Subsequently, TSMC’s integrated fan-out (InFO) technology is utilized to apply a layer of high-density interconnects. This process significantly enhances the inter-chip data bandwidth, enabling them to function like a single large chip.
By 2027, TSMC predicts that full wafer-scale integration will offer 40 times the computing power, incorporating more than 40 reticle-sized (optical mask) pieces of silicon and accommodating over 60 HBMs.
As per Musk, if NVIDIA provides enough GPUs, Tesla probably won’t need to develop Dojo on its own. Preliminary estimates suggest that this batch of next-generation Dojo supercomputers will become part of Tesla’s new Dojo cluster, located in New York, with an investment of at least USD 500 million.
Despite substantial computing power investments, Tesla’s AI business still faces challenges. In December last year, two senior engineers responsible for the Dojo project left the company. Now, as Tesla continues downsizing to cut costs, it still needs talented people to contribute their ideas and efforts. This is crucial to ensure the timely launch of self-driving taxis and to enhance the Full Self-Driving (FSD) system.
The next-generation Dojo computer for Tesla will be located in New York, while its Gigafactory headquarters in Texas will house a 100MW data center for training self-driving software, with hardware using NVIDIA’s supply solutions. Regardless of the location, these chips ultimately come from TSMC’s production line. Referring to them as AI enablers is no exaggeration at all.
(Photo credit: TSMC)
News
In 2023, “generative AI” was undeniably the hottest term in the tech industry.
The launch of the generative application ChatGPT by OpenAI has sparked a frenzy in the market, prompting various tech giants to join the race.
As per a report from TechNews, currently, NVIDIA dominates the market by providing AI accelerators, but this has led to a shortage of their AI accelerators in the market. Even OpenAI intends to develop its own chips to avoid being constrained by tight supply chains.
On the other hand, due to restrictions arising from the US-China tech war, NVIDIA has offered cut-down versions of its products to Chinese clients, but recent reports suggest these versions are not favored by Chinese customers.
Instead, Chinese firms are turning to Huawei for assistance or developing their own chips, hoping to keep pace with the continued advancement of large language models.
In the current wave of AI development, NVIDIA undoubtedly stands as the frontrunner in AI computing power. Its A100/H100 series chips have secured orders from top clients worldwide in the AI market.
As per analyst Stacy Rasgon from the Wall Street investment bank Bernstein Research, the cost of each query using ChatGPT is approximately USD 0.04. If ChatGPT queries were to scale to one-tenth of Google’s search volume, the initial deployment would require approximately USD 48.1 billion worth of GPUs for computation, with an annual requirement of about USD 16 billion worth of chips to sustain operations, along with a similar amount for related chips to execute tasks.
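The report does not state Google’s search volume, but the shape of the analyst’s math can be roughly reproduced with an assumed figure. Here, 8.5 billion searches per day is an assumption (a commonly cited estimate, not from the report), used purely for illustration:

```python
# Rough reproduction of the per-query cost math above.
# NOTE: Google's daily search volume is NOT given in the report;
# 8.5 billion/day is an assumed figure used only for illustration.
cost_per_query_usd = 0.04          # Bernstein's estimated cost per ChatGPT query
google_searches_per_day = 8.5e9    # assumption, not from the report
chatgpt_share = 0.10               # one-tenth of Google's search volume

daily_cost = chatgpt_share * google_searches_per_day * cost_per_query_usd
annual_cost = daily_cost * 365

print(f"~${daily_cost/1e6:.0f}M per day, ~${annual_cost/1e9:.1f}B per year")
# → ~$34M per day, ~$12.4B per year
```

The resulting order of magnitude, tens of millions of dollars per day and roughly ten billion per year, is in line with the analyst’s annual chip-spend figures.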
Therefore, whether to reduce costs, decrease overreliance on NVIDIA, or even enhance bargaining power further, global tech giants have initiated plans to develop their own AI accelerators.
Per reports by technology media The Information, citing industry sources, six global tech giants, including Microsoft, OpenAI, Tesla, Google, Amazon, and Meta, are all investing in developing their own AI accelerator chips. These companies are expected to compete with NVIDIA’s flagship H100 AI accelerator chips.
Progress of Global Companies’ In-house Chip Development
Rumors surrounding Microsoft’s in-house AI chip development have never ceased.
At the annual Microsoft Ignite 2023 conference, the company finally unveiled the Azure Maia 100 AI chip for data centers and the Azure Cobalt 100 cloud computing processor. In fact, rumors of Microsoft developing an AI-specific chip have been circulating since 2019, aimed at powering large language models.
The Azure Maia 100, introduced at the conference, is an AI accelerator chip designed for tasks such as running OpenAI models, ChatGPT, Bing, GitHub Copilot, and other AI workloads.
According to Microsoft, the Azure Maia 100 is the first-generation product in the series, manufactured using a 5-nanometer process. The Azure Cobalt is an Arm-based cloud computing processor equipped with 128 computing cores, offering a 40% performance improvement over the current generation of Azure Arm chips. It provides support for services such as Microsoft Teams and Azure SQL. Both chips are produced by TSMC, and Microsoft is already designing the second generation.
OpenAI is also exploring the production of in-house AI accelerator chips and has begun evaluating potential acquisition targets. According to earlier reports from Reuters citing industry sources, OpenAI has been discussing various solutions to address the shortage of AI chips since at least 2022.
Although OpenAI has not made a final decision, options to address the shortage of AI chips include developing their own AI chips or further collaborating with chip manufacturers like NVIDIA.
OpenAI has not provided an official comment on this matter at the moment.
Electric car manufacturer Tesla is also actively involved in the development of AI accelerator chips. Tesla primarily focuses on the demand for autonomous driving and has introduced two AI chips to date: the Full Self-Driving (FSD) chip and the Dojo D1 chip.
The FSD chip is used in Tesla vehicles’ autonomous driving systems, while the Dojo D1 chip is employed in Tesla’s supercomputers. It serves as a general-purpose CPU for building the AI training chips that power the Dojo system.
Google began secretly developing a chip focused on AI machine learning algorithms as early as 2013 and deployed it in its internal cloud computing data centers to replace NVIDIA’s GPUs.
The custom chip, called the Tensor Processing Unit (TPU), was unveiled in 2016. It is designed to execute large-scale matrix operations for deep learning models used in natural language processing, computer vision, and recommendation systems.
In fact, Google had already constructed the TPU v4 AI chip in its data centers by 2020. However, it wasn’t until April 2023 that technical details of the chip were publicly disclosed.
As for Amazon Web Services (AWS), the cloud computing service provider under Amazon, it has been a pioneer in developing its own chips since the introduction of the Nitro1 chip in 2013. AWS has since developed three product lines of in-house chips, including network chips, server chips, and AI machine learning chips.
Among them, AWS’s lineup of self-developed AI chips includes the inference chip Inferentia and the training chip Trainium.
AWS then unveiled the Inferentia 2 (Inf2) in early 2023, designed specifically for artificial intelligence. It triples computational performance while increasing total accelerator memory by a quarter.
It supports distributed inference through direct ultra-high-speed connections between chips and can handle models of up to 175 billion parameters, making it one of the most powerful in-house AI chips on the market today.
Meanwhile, until 2022, Meta continued using CPUs and custom-designed chipsets tailored for accelerating AI algorithms to execute its AI tasks.
However, due to the inefficiency of CPUs compared to GPUs in executing AI tasks, Meta scrapped its plans for a large-scale rollout of custom-designed chips in 2022. Instead, it opted to purchase NVIDIA GPUs worth billions of dollars.
Still, amidst the surge of other major players developing in-house AI accelerator chips, Meta has also ventured into internal chip development.
On May 19, 2023, Meta further unveiled its AI training and inference chip project. The chip boasts a power consumption of only 25 watts, which is 1/20th of the power consumption of comparable products from NVIDIA. It utilizes the RISC-V open-source architecture. According to market reports, the chip will also be produced using TSMC’s 7-nanometer manufacturing process.
China’s Progress on In-House Chip Development
China’s journey in developing in-house chips presents a different picture. In October last year, the United States expanded its ban on selling AI chips to China.
Although NVIDIA promptly tailored new chips for the Chinese market to comply with US export regulations, recent reports suggest that major Chinese cloud computing clients such as Alibaba and Tencent are less inclined to purchase the downgraded H20 chips. Instead, they have begun shifting their orders to domestic suppliers, including Huawei.
This shift in strategy reflects Chinese companies’ growing reliance on domestically developed chips, as some orders for advanced semiconductors are transferred to local suppliers.
TrendForce indicates that currently about 80% of high-end AI chips purchased by Chinese cloud operators are from NVIDIA, but this figure may decrease to 50% to 60% over the next five years.
If the United States continues to strengthen chip controls in the future, it could potentially exert additional pressure on NVIDIA’s sales in China.
(Photo credit: NVIDIA)
News
Recently, information cited by Sina Technology indicates that during an internal event at Xiaomi, executives from Xiaomi’s automotive division disclosed that the team currently comprises 3,700 members. Reportedly, their ambitious goal is to create a Dream Car that can compete with renowned brands like Porsche and Tesla.
According to sources, Xiaomi’s Automotive Vice President and Political Commissar of the Beijing headquarters, Yu Liguo, shared in a recent internal event at Xiaomi, saying, “Mr. Lei (Lei Jun, Xiaomi CEO) often tells us that only those who understand and love cars can make good cars. I believe that among the 3,000-plus people in the automotive department, we have indeed found a group of people who truly understand and love cars.”
Yu further stated that the Xiaomi Automotive Division, established nearly three years ago, currently consists of 3,700 individuals from diverse backgrounds, all sharing a common dream – to create a Dream Car that can rival Porsche and Tesla.
Reportedly, Yu also mentioned that in certain driving scenarios, Xiaomi’s autonomous driving tests have achieved success surpassing Tesla’s current capabilities. Although these scenarios may not be typical for autonomous driving, they reflect the capabilities of Xiaomi’s system.
On the other hand, Lei Jun further emphasized the importance of corporate culture in the internal event. He stated that Xiaomi has clarified its goals for the new decade this year – to become the leader in the new generation of global hardcore technology. If Xiaomi is to succeed in the next decade, it must have a team capable of fighting tough battles.
He gave an example that in recent years, Xiaomi has rapidly assembled large teams, whether in the chip department or the automotive department, reaching scales of two to three thousand people in a very short time.
Quickly uniting everyone into one force depends not only on strategy and incentives but also on corporate culture, which may not be visible in ordinary times and fully manifests itself only when facing difficulties, dangers, or situations that require responsibility.
Previously, Lei Jun announced that Xiaomi would hold a technology launch event for its car on December 28. Lei Jun revealed that the development of Xiaomi’s first car involved a total of 3,400 engineers, with the entire R&D investment exceeding CNY 10 billion. Notably, he emphasized that this event would focus on technology rather than product launches.
Looking at his recent teasers on Weibo, the autonomous driving technology mentioned by Yu Liguo is expected to be featured in the technology release. Additionally, there is anticipation for the debut of Xiaomi’s self-developed operating system, HyperOS, in the automotive context.
(Photo credit: China’s Ministry of Industry and Information Technology)
News
In a bid to compete with rivals like Tesla, who conduct in-house research and development of advanced chips for automotive applications, Japanese automakers have reportedly established a new organization to collaboratively research and develop advanced automotive chips, integrating their technologies and designs.
According to a report by Nikkei, automakers including Toyota have established a new organization called the “Automotive SoC Research Association” (temporarily referred to as ASRA), joining forces to develop advanced chips for applications like autonomous driving.
Established in December in Nagoya, ASRA is set to commence research in 2024 on SoC products built on 10nm or more advanced process nodes. In addition to Toyota, other automakers such as Nissan, Honda, Mazda, and Subaru, as well as Japanese enterprises including Renesas Electronics and Socionext, have also joined the initiative.
According to the report, the trend of automakers intensifying in-house development of automotive chips is growing. The report further indicates that semiconductor giants in the United States, such as NVIDIA and Qualcomm, are also developing high-performance SoCs for automotive use.
Leading electric vehicle manufacturer Tesla opted for in-house development due to dissatisfaction with the limited choices available, and its self-developed SoCs are already actively deployed in its vehicles.
On the other hand, Chinese automaker NIO, for example, possesses semiconductor research and development teams in both China and the United States. They have successfully developed semiconductor products used for controlling Light Detection and Ranging (LiDAR) technology.
(Photo credit: Pixabay)