News
Tesla’s journey toward autonomous driving demands substantial computational power. Earlier today, TSMC confirmed that production of Tesla’s next-generation Dojo supercomputer training chips has begun, heralding a significant leap in computing power by 2027.
As per a report from TechNews, Elon Musk’s plan reportedly underscores that software is the true key to profitability. Achieving this requires not only efficient algorithms but also robust computing power. On the hardware front, Tesla takes a dual approach: procuring tens of thousands of NVIDIA H100 GPUs while simultaneously developing its own supercomputing chip, Dojo. TSMC, which is reportedly responsible for manufacturing the supercomputer chips, has confirmed that production is underway.
As previously reported by TechNews, Tesla primarily focuses on the demands of autonomous driving and has introduced two AI chips to date: the Full Self-Driving (FSD) chip and the Dojo D1 chip. The FSD chip is used in the autonomous driving systems of Tesla vehicles, while the Dojo D1 chip is employed in Tesla’s supercomputers, serving as a general-purpose processor that forms the AI training modules powering the Dojo system.
At its North American technology conference, TSMC provided detailed insights into its semiconductor technology and advanced packaging, which enable system-level integration across an entire wafer to deliver ultra-high computing performance. TSMC stated that production of the next-generation Dojo training modules for Tesla has begun. By 2027, TSMC expects to offer even more complex wafer-scale systems with computing power exceeding current systems by more than 40 times.
The core of Tesla’s Dojo supercomputer lies in its training modules, each of which arranges 25 D1 chips in a 5×5 matrix. Manufactured on a 7-nanometer process, each chip contains 50 billion transistors and delivers 362 TFLOPS of processing power. Crucially, the design is scalable: modules can be stacked continuously, with computing power and power consumption adjusted to match software demands.
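As a rough illustration of the figures quoted above (the 5×5 layout and the 362 TFLOPS per-chip number come from the article; the aggregation is plain arithmetic, not an official Tesla specification, and real-world sustained throughput would be lower):

```python
# Back-of-the-envelope aggregate compute for one Dojo training module,
# using the figures quoted above: 25 D1 chips in a 5x5 grid,
# 362 TFLOPS per chip.
CHIPS_PER_TILE = 5 * 5           # 5x5 matrix of D1 chips
TFLOPS_PER_CHIP = 362            # per-chip figure from the article

tile_tflops = CHIPS_PER_TILE * TFLOPS_PER_CHIP
tile_pflops = tile_tflops / 1000

print(f"One training module: {tile_tflops} TFLOPS (~{tile_pflops:.2f} PFLOPS)")
```

This is why the scalability point matters: capacity grows roughly linearly as more modules are added to the system.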
Wafer Integration Offers 40x Computing Power
According to TSMC, the approach for Tesla’s new product differs from the wafer-scale systems provided to Cerebras. Essentially, the Dojo training modules (a 5×5 grid of pretested processors) are placed on a single carrier wafer, with all empty spaces filled in. Subsequently, TSMC’s integrated fan-out (InFO) technology is utilized to apply a layer of high-density interconnects. This process significantly enhances the inter-chip data bandwidth, enabling them to function like a single large chip.
By 2027, TSMC predicts that full wafer-scale integration will offer 40 times the computing power of current systems, incorporating more than 40 reticles’ worth of silicon and accommodating over 60 HBM chips.
As per Musk, if NVIDIA could supply enough GPUs, Tesla probably would not need to develop Dojo on its own. Preliminary estimates suggest that this batch of next-generation Dojo supercomputers will become part of Tesla’s new Dojo cluster in New York, representing an investment of at least USD 500 million.
Despite substantial investments in computing power, Tesla’s AI business still faces challenges. In December last year, two senior engineers responsible for the Dojo project left the company. Meanwhile, Tesla continues to downsize to cut costs, even as it needs talented engineers to keep the project on track. This is crucial to ensuring the timely launch of self-driving taxis and to enhancing the Full Self-Driving (FSD) system.
Tesla’s next-generation Dojo computer will be located in New York, while its Gigafactory headquarters in Texas will house a 100 MW data center for training self-driving software, with hardware supplied by NVIDIA. Regardless of location, these chips ultimately come from TSMC’s production lines. Calling TSMC an AI enabler is no exaggeration.
(Photo credit: TSMC)
SK Hynix CEO Kwak Noh-Jung announced on May 2nd that the company’s HBM capacity for this year is already fully sold out, and next year’s capacity is nearly sold out as well. On the technology front, SK Hynix plans to provide samples of the world’s highest-performance 12-layer stacked HBM3e products in May this year and is preparing for mass production starting in the third quarter.
SK Hynix recently held a press conference in South Korea, where it disclosed information about its AI memory technology capabilities, market status, and investment plans for future major production sites in Cheongju and Yongin, South Korea, as well as in the United States.
Kwak Noh-Jung pointed out that although AI is currently primarily centered around data centers, it is expected to rapidly expand to on-device AI applications in smartphones, PCs, cars, and other end devices in the future. Consequently, the demand for memory specialized for AI, characterized by “ultra-fast, high-capacity and low-power,” is expected to skyrocket.
Kwak Noh-Jung stated that SK Hynix possesses industry-leading technological capabilities in various product areas such as HBM, TSV-based high-capacity DRAM, and high-performance eSSD. In the future, SK Hynix looks to provide globally top-tier memory solutions tailored to customers’ needs through strategic partnerships with global collaborators.
Looking ahead to AI memory, SK Hynix President Justin Kim pointed out that as we enter the era of AI, the global volume of data generated is expected to grow from 15 Zettabytes (ZB) in 2014 to 660 ZB by 2030. Simultaneously, the proportion of revenue from AI memory is expected to increase significantly. Memory technologies oriented toward AI, such as HBM and high-capacity DRAM modules, accounted for about 5% of the entire memory market in 2023 (by revenue) and are expected to reach 61% by 2028.
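The data-volume figures quoted above imply a steep compound growth rate. A quick check (the 2014 and 2030 endpoints come from the article; the CAGR formula itself is the standard one):

```python
# Implied compound annual growth rate (CAGR) of global data volume,
# from the 15 ZB (2014) and 660 ZB (2030) endpoints quoted above.
start_zb, end_zb = 15, 660
years = 2030 - 2014

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 27% per year
```

A sustained growth rate of this magnitude is what underpins the projected jump in AI memory’s share of revenue.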
Additionally, the company will advance collaboration with top-tier partners in the system semiconductor and foundry fields globally, aiming to timely develop and provide the best products.
Regarding its packaging capabilities, the company highlighted MR-MUF as one of its core packaging technologies. While some have suggested that MR-MUF could become a bottleneck for high-layer stacking, SK Hynix emphasized that this is not the case in practice: the company has already begun mass production of 12-layer stacked HBM3 products using its advanced MR-MUF technology.
Advanced MR-MUF reduces the pressure applied during chip stacking to 6% of that of conventional processes, shortening process time, increasing production efficiency by up to a factor of four, and improving heat dissipation by 45%. Moreover, SK Hynix’s latest MR-MUF technology uses new protective materials, yielding a further 10% improvement in heat dissipation, and employs a superior high-temperature/low-pressure approach to warpage control, making it the most suitable solution for high-layer stacking.
Furthermore, SK Hynix plans to adopt advanced MR-MUF technology in HBM4 to achieve 16-layer stacking and is actively researching hybrid bonding technology. Lastly, in terms of investments in the United States, SK Hynix has confirmed the construction of an advanced packaging production facility for AI memory in Indiana. This facility is scheduled to commence mass production of the next-generation HBM products in the second half of 2028.
(Photo credit: SK Hynix)
AMD benefited from AI demand last quarter (January to March), posting revenue of USD 5.47 billion, beating Wall Street expectations and swinging to a profit from a loss in the same period last year. However, its forecast for the current quarter fell short of market expectations.
AMD achieved a net profit of USD 120 million last quarter, with an adjusted EPS of USD 0.62, surpassing Wall Street’s expected USD 0.61. AMD expects revenue for this quarter to be between USD 5.4 billion and USD 6 billion, with a midpoint of USD 5.7 billion, a 6% increase from the same period last year but slightly below Wall Street’s expectation of USD 5.73 billion.
After enduring a downturn in the semiconductor industry, AMD finally returned to profitability last quarter, largely due to strong sales of its MI300 series AI chips, which drove revenue in the data center division to grow by 80% year-on-year to USD 2.3 billion.
As per a report from the Wall Street Journal, AMD CEO Lisa Su stated that since the launch of the latest MI300X chip at the end of last year, sales have surpassed USD 1 billion, with major customers including Microsoft, Meta, and Oracle, among other tech giants.
In January, Lisa Su forecast that AMD’s AI chip revenue for this year could reach USD 3.5 billion, a figure recently revised upward to USD 4 billion. The AMD MI300 series chips are seen as direct competitors to NVIDIA’s H100 chips. However, NVIDIA announced its next-generation AI chip architecture, Blackwell, in March this year, forcing AMD to accelerate its pace. Lisa Su stated that AMD is already developing its next generation of AI chips.
AMD’s client division, which sells PC chips, has also benefited from the AI wave, with revenue increasing by 85% year-on-year to USD 1.4 billion last quarter, further evidence of the recovery and growth of the global PC market. AMD’s PC chips can execute AI computations locally, targeting the rapidly expanding demand for AI-enabled PCs.
Regarding the applications of AI PCs, Su previously stated in an interview with Sina that she found communication, productivity, and creativity particularly exciting. Many applications are still in their early stages, but she expects to see more developments in the coming years.
However, AMD’s businesses outside of AI chips are facing increasing challenges. Revenue from the gaming console chip division declined by 48% year-on-year to USD 920 million last quarter, falling short of Wall Street’s expectations of USD 970 million. Additionally, the revenue from the embedded chip division, established after AMD’s acquisition of Xilinx in 2022, also decreased by 46% year-on-year to USD 850 million last quarter, similarly below Wall Street’s expectations of USD 940 million.
TrendForce previously issued an analysis in a press release, indicating that the AI PC market is propelled by two key drivers: Firstly, demand for terminal applications, mainly dominated by Microsoft through its Windows OS and Office suite, is a significant factor. Microsoft is poised to integrate Copilot into the next generation of Windows, making Copilot a fundamental requirement for AI PCs.
Secondly, Intel, as a leading CPU manufacturer, is advocating for AI PCs that combine CPU, GPU, and NPU architectures to enable a variety of terminal AI applications.
(Photo credit: AMD)
The demand for AI computing power is skyrocketing, making advanced packaging capacity a key bottleneck. A report from Commercial Times, citing industry sources, points out that TSMC is focusing on the growth potential of advanced packaging.
Southern Taiwan Science Park, Central Taiwan Science Park, and Chiayi Science Park are all undergoing expansion. Chiayi Science Park, approved this year, will see two advanced packaging fabs built ahead of schedule: phase one is set to break ground this quarter, with first tool-in slated for the second half of next year, while phase two is expected to start construction in the second quarter of next year, with first tool-in planned for the first quarter of 2027, continuing to expand TSMC’s share of the AI and HPC markets.
Advanced packaging technology improves performance by stacking chips, thereby increasing input/output density. TSMC recently unveiled numerous next-generation advanced packaging solutions involving various new technologies and processes, including CoWoS-R and SoW (System-on-Wafer).
The development of advanced packaging technology holds significant importance for the advancement of the chip industry. TSMC’s innovative solutions bring revolutionary wafer-level performance advantages, meeting the future AI demands of ultra-large-scale data centers.
Industry sources cited by the same report have stated that TSMC’s system-on-wafer technology enables a 12-inch wafer to accommodate a large number of chips, providing greater computational power while significantly reducing the floor space required in data centers.
It also improves power efficiency. The first commercially available SoW product uses integrated fan-out (InFO) technology, primarily for logic chips, while a stacked-chip version employing CoWoS technology is expected to be ready by 2027.
As stacking technology advances, AI chips keep growing in size, to the point that a single wafer may yield fewer than ten of these super chips, making packaging capacity crucial. The industry sources cited in the Commercial Times report also note that TSMC’s Longtan advanced packaging plant, with a monthly capacity of 20,000 wafers, is already running at full capacity. The Zhunan AP6 plant is currently the main focus of expansion, with equipment installation at the Central Taiwan Science Park facility expected to ramp up in the fourth quarter, accelerating capacity preparation.
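To see why only a handful of very large packages fit on a wafer, the classic dies-per-wafer approximation can be sketched. The ~3,300 mm² footprint below is a hypothetical stand-in for a multi-reticle super chip, not a figure from the report:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic dies-per-wafer approximation: gross wafer area divided by
    die area, minus a correction term for partial dies lost at the edge."""
    d = wafer_diameter_mm
    gross = math.pi * (d / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)
    return max(0, int(gross - edge_loss))

# Hypothetical multi-reticle "super chip" footprint of ~3,300 mm^2
# (roughly four reticle limits); not a figure from the report.
print(dies_per_wafer(3300))   # fewer than ten per 300 mm wafer
```

The approximation makes the economics plain: as package area grows by a factor of a few, good dies per wafer collapse, so packaging throughput, not wafer starts, becomes the constraint.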
TSMC’s SoIC has emerged as a leading solution for 3D chip stacking. AMD is the inaugural customer for SoIC, with its MI300 utilizing SoIC paired with CoWoS.
Apple has also officially entered the generative AI battlefield. According to the sources cited in the same report, Apple’s first 3D-packaged SoIC product will be an ARM-based CPU for AI servers, codenamed M4 Plus or M4 Ultra, expected to debut as early as the second half of next year. The 3D SoIC packaging is projected to extend to consumer-grade MacBook M-series processors by 2026.
NVIDIA, meanwhile, is reportedly set to launch the R100 in the second half of next year, using a chiplet design and the CoWoS-L packaging architecture. The X100 (tentative name), which adopts a 3D packaging solution combining SoIC and CoWoS-L, is not expected until 2026.
As per a recent report from MoneyDJ citing industry sources, SoIC technology is still in its early stages, with monthly production capacity expected to reach around 2,000 wafers by the end of this year, with prospects of doubling thereafter and potentially exceeding 10,000 wafers by 2027.
With support from major players like AMD, Apple, and NVIDIA, TSMC can expand its SoIC capacity with confidence, securing future orders for both high-end chip manufacturing and advanced packaging.
(Photo credit: TSMC)
In a bid to catch up with leading players like TSMC, the South Korean government is said to have approved a national-level initiative aimed at actively promoting the development of advanced chip packaging technologies, according to a report from South Korean media outlet TheElec.
Citing anonymous sources, the report on April 30th indicates that the feasibility of the aforementioned plan has passed the preliminary examination conducted by the Korea Institute of S&T Evaluation and Planning (KISTEP).
According to the report, the preliminary review applies to national-level projects valued above KRW 50 billion with direct government funding exceeding KRW 30 billion. Such projects rarely pass the review on the first attempt, but the chip packaging project proved an exception.
Most of the reviewers at KISTEP have reportedly reached a consensus, recognizing that the project is necessary for South Korea to catch up with advanced packaging leaders such as Taiwan’s TSMC and to become a frontrunner itself.
As per TrendForce’s earlier projection, Korea’s share of advanced process capacity is expected to reach 11.5% by 2027, with room for further growth.
However, the budget for the 7-year project has been reduced from the original KRW 500 billion to KRW 206.8 billion. After passing the preliminary feasibility review, the project is expected to be formally announced later this year (2024) and is scheduled to commence implementation next year.
According to a source involved in the project, cited in the same TheElec report, the budget cut was entirely expected, but the project’s single-pass approval is noteworthy, indicating the government’s deep understanding of the importance of chip packaging.
(Photo credit: TSMC)