News
At TSMC’s North America Technology Symposium, per a report from TechNews, the semiconductor giant unveiled its A16 process, designed to accommodate more transistors, enhance computational performance, and reduce power consumption. Of particular interest is the integration of the Super PowerRail architecture and nanosheet transistors in the A16 chip, driving faster and more efficient development of data center processors.
As Moore’s Law progresses, transistors become smaller and denser, with an increasing number of stacked layers. Power and data signals may need to pass through 10 to 20 layers of stacked interconnect to reach the transistors below, leading to increasingly complex networks of signal and power lines. As electrical signals travel downward through these layers, they suffer IR drop (resistive voltage drop), resulting in power loss.
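The cumulative effect of IR drop follows directly from Ohm's law: each interconnect layer the current traverses adds resistance, and the voltage delivered at the bottom is the supply voltage minus the sum of the I·R losses. A minimal sketch of that arithmetic, with purely hypothetical resistance and current values chosen for illustration:

```python
# Illustrative IR-drop calculation (all values hypothetical, for intuition only).
def delivered_voltage(vdd, current_a, layer_resistances_ohm):
    """Voltage remaining after current passes through each stacked interconnect layer."""
    total_drop = sum(current_a * r for r in layer_resistances_ohm)  # V = I * R per layer
    return vdd - total_drop

# Hypothetical: 10 interconnect layers, 1 milliohm each, 5 A of current.
vdd = 0.75  # supply voltage in volts
v_out = delivered_voltage(vdd, 5.0, [0.001] * 10)
print(f"Delivered: {v_out:.3f} V ({(vdd - v_out) / vdd:.1%} lost)")
```

Routing power on the backside shortens this resistive path, which is why backside power delivery mitigates the drop.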
In addition to power loss, the space occupied by power delivery lines is also a concern. In the later stages of chip manufacturing, the complex routing of power lines often consumes at least 20% of wiring resources. Resolving the contention between signal networks and power delivery networks, while continuing to shrink components, has become a major challenge for chip designers. The industry, per the report, is beginning to explore moving the power delivery network to the backside of the chip.
TSMC’s A16 employs a different wiring scheme: the wires that deliver power to the transistors sit beneath them instead of above, an approach known as backside power delivery.
One way to optimize processors is to mitigate IR drop, which lowers the voltage received by the transistors and thus their performance. A16’s wiring is less prone to voltage drop. Intel likewise introduced backside power delivery in Intel 20A, which not only simplifies power distribution but also allows for denser chip layouts. The goal is to fit more transistors into the processor to enhance computational power.
Transistors consist of four main components: the source, drain, channel, and gate. The source is where current enters the transistor, the drain is where it exits, and the channel and gate orchestrate the movement of electrons.
TSMC’s A16 directly connects the power transmission lines to the source and drain, making it more complex than other backside power delivery methods like Intel’s. However, TSMC states that the decision for a more complex design aims to enhance chip efficiency.
Using the Super PowerRail in A16, TSMC achieves a 10% higher clock speed, or a 15% to 20% decrease in power consumption at the same operating voltage (Vdd), compared to N2P. Moreover, chip density is increased by up to 1.10 times, supporting data center products.
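Taken at face value, the quoted figures translate into simple comparisons against an N2P baseline. A sketch of that arithmetic, using hypothetical baseline clock and power values (the vendor numbers are "up to" figures, not exact):

```python
# Back-of-the-envelope A16 vs. N2P comparison using TSMC's quoted figures.
n2p_clock_ghz = 3.0   # hypothetical N2P baseline clock
n2p_power_w = 10.0    # hypothetical N2P baseline power

a16_clock_same_power = n2p_clock_ghz * 1.10        # up to 10% faster at the same Vdd
a16_power_same_speed = n2p_power_w * (1 - 0.175)   # midpoint of the 15-20% power saving
a16_density_gain = 1.10                            # up to 1.10x chip density

print(f"A16 clock at same power: {a16_clock_same_power:.2f} GHz")
print(f"A16 power at same speed: {a16_power_same_speed:.2f} W (midpoint estimate)")
```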
A16 also incorporates NanoFlex, TSMC’s nanosheet transistor technology. NanoFlex provides chip designers with flexible N2 standard cells, the fundamental building blocks of chip design. Shorter cells save space and offer higher power efficiency, while taller cells maximize performance.
Mixing tall and short cells within the same design block lets designers tune power consumption, performance, and area to the best balance. This capability combines transistor types with different power-efficiency, speed, and size profiles, and the flexibility enables customers to tailor TSMC chips tightly to their requirements, maximizing performance.
TSMC plans to debut NanoFlex in the 2-nanometer process, with mass production scheduled for 2025. A16 is expected to launch in the second half of 2026.
In a bid to seize the AI PC market opportunity, Apple is set to debut its new iPad Pro on May 7th, featuring its in-house M4 chip. Riding the momentum of the M4’s debut, Apple reportedly plans to revamp its entire Mac lineup, with the first batch of M4 Macs expected to reach the market gradually from late this year to early next year.
According to a report from Commercial Times, Apple’s M4 chip adopts TSMC’s N3E process, in line with Apple’s plans for a major Mac performance upgrade, which is expected to boost TSMC’s operations.
Notably, per Wccftech’s previous report, the N3E process is also rumored to be used for products such as the A18 Pro, the upcoming Qualcomm Snapdragon 8 Gen 4, and the MediaTek Dimensity 9400, among other major clients’ products.
Apple will hold an online launch event on May 7th at 10 p.m. Taiwan time. Per industry sources cited by the same report, besides introducing the iPad Pro, iPad Air, and accessories such as the Apple Pencil, the event will mark the debut of the M4 self-developed chip, unveiling the computational capabilities of Apple’s first AI tablet.
With major computer brands and chip manufacturers competing to release AI PCs, such as Qualcomm’s Snapdragon X Elite and X Plus, and Intel introducing Core Ultra into various laptop brands, it is imperative for Apple to upgrade the performance of its products. Therefore, the strategy of highlighting AI performance through the M4 chip comes as no surprise.
According to a report by Mark Gurman of Bloomberg, the M4 chip will be integrated across Apple’s entire Mac product line. The first batch of M4 Macs is expected to debut as early as the end of this year, including new iMac models, the standard 14-inch MacBook Pro, high-end 14-inch and 16-inch MacBook Pro models, and the Mac mini. New products for 2025 will follow gradually: updates to the 13-inch and 15-inch MacBook Air in the spring, the Mac Studio in mid-year, and finally the Mac Pro.
The report from Commercial Times has claimed that the M4 chip will come in three versions: Donan, Brava, and Hidra. The Donan variant is intended for entry-level MacBook Pro, MacBook Air, and low-end Mac mini models. The Brava version is expected to be used in high-end MacBook Pro and Mac mini models, while the Hidra version will be integrated into desktop Mac Pro computers.
Apple’s plan to bring the M4 chip to its Mac series is expected to boost revenue for TSMC’s 3-nanometer family. The report indicates that the M4 chip will still be manufactured on TSMC’s 3-nanometer process, but with enhancements to the neural processing unit (NPU), bringing AI capabilities to Apple’s product line. Additionally, industry sources cited by the same report have revealed that the M4 will utilize TSMC’s N3E process, an improvement over the N3B process used in the M3 series chips.
Meanwhile, TSMC continues to advance optimized versions of its existing advanced process nodes. Among them, the N3E variant of the 3-nanometer family, which entered mass production in the fourth quarter of last year, will be followed by N3P and N3X. N3E is highly likely to be featured in the new-generation iPad Pro.
(Photo credit: Apple)
With AI applications flourishing, the two major AI giants, NVIDIA and AMD, are fully committed to the high-performance computing (HPC) market. According to the Economic Daily News, they have secured TSMC’s advanced CoWoS and SoIC packaging capacity through this year and the next, bolstering TSMC’s AI-related orders.
TSMC holds a highly positive outlook on the momentum brought by AI-related applications. During the April earnings call, CEO C.C. Wei extended the visibility of AI orders and their revenue contribution from the original expectation of 2027 to 2028.
TSMC anticipates that revenue contribution from server AI processors will more than double this year, accounting for a low-teens percentage of the company’s total revenue in 2024. It also expects a 50% compound annual growth rate for server AI processors over the next five years, with these processors projected to contribute over 20% to TSMC’s revenue by 2028.
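The arithmetic behind that projection can be checked directly: a 50% CAGR means AI processor revenue multiplies by 1.5 each year, so its share of total revenue rises as long as the rest of the business grows more slowly. A sketch assuming an illustrative "low-teens" starting share of 13% and a hypothetical 20% growth rate for total revenue (TSMC disclosed neither figure precisely):

```python
# Sanity-check TSMC's projection: 50% CAGR for server AI processor revenue.
ai_share_2024 = 0.13   # assumed "low-teens" share of total revenue in 2024
ai_cagr = 0.50         # stated 50% compound annual growth for AI processors
total_cagr = 0.20      # hypothetical growth rate for TSMC's total revenue

share = ai_share_2024
for year in range(2025, 2029):
    share *= (1 + ai_cagr) / (1 + total_cagr)  # AI grows faster than the whole
print(f"Implied AI share of revenue in 2028: {share:.1%}")
```

Under these assumptions the 2028 share comfortably clears the 20% mark the company projects.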
Industry sources cited by the same Economic Daily News report indicate that strong AI demand has triggered fierce competition among the four global cloud service giants, Amazon AWS, Microsoft, Google, and Meta, to bolster their AI server arsenals. This has resulted in a supply shortage of AI chips from major suppliers like NVIDIA and AMD.
Consequently, these companies have heavily invested in TSMC’s advanced process and packaging capabilities to meet the substantial order demands from cloud service providers. TSMC’s advanced packaging capacity, including CoWoS and SoIC, for 2024 and 2025 has been fully booked.
To address the massive demand from customers, TSMC is actively expanding its advanced packaging capacity. Industry sources cited by the report estimate that by the end of this year, TSMC’s CoWoS monthly capacity could reach between 45,000 and 50,000 units, a significant increase from the 15,000 units in 2023. By the end of 2025, CoWoS monthly capacity is expected to reach a new peak of 50,000 units.
Regarding SoIC, monthly capacity by the end of this year is anticipated to reach 5,000 to 6,000 units, a multiple-fold increase from the 2,000 units at the end of 2023. Furthermore, by the end of 2025, monthly capacity is expected to surge to a scale of 10,000 units.
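The growth multiples implied by those figures are easy to verify. A quick check, taking the midpoints of the quoted ranges (an assumption, since only ranges were reported):

```python
# Implied growth multiples for TSMC's advanced packaging capacity (units/month).
cowos_2023, cowos_2024 = 15_000, 47_500  # 2024 = midpoint of the 45,000-50,000 range
soic_2023, soic_2024 = 2_000, 5_500      # 2024 = midpoint of the 5,000-6,000 range
soic_2025 = 10_000

print(f"CoWoS 2023 -> 2024: {cowos_2024 / cowos_2023:.1f}x")
print(f"SoIC  2023 -> 2024: {soic_2024 / soic_2023:.2f}x")
print(f"SoIC  2023 -> 2025: {soic_2025 / soic_2023:.1f}x")
```

That is roughly a tripling of both CoWoS and SoIC in one year, with SoIC quintupling by the end of 2025.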
It is understood that NVIDIA’s mainstay H100 chip, currently in mass production, utilizes TSMC’s 4-nanometer process and CoWoS advanced packaging, and is paired with SK Hynix’s High Bandwidth Memory (HBM) in a 2.5D package.
As for NVIDIA’s next-generation Blackwell-architecture AI chips, including the B100, B200, and the GB200 with Grace CPU, they also utilize TSMC’s 4-nanometer family, but are produced on an enhanced version known as N4P. Production of the B100, per a previous report from TechNews, is slated for the fourth quarter of this year, with mass production expected in the first half of next year.
Additionally, they are equipped with higher-capacity HBM3e high-bandwidth memory with updated specifications. Consequently, their computational capabilities will see a multiple-fold increase compared to the H100 series.
On the other hand, AMD’s MI300 series AI accelerators are manufactured using TSMC’s 5-nanometer and 6-nanometer processes. Unlike NVIDIA, AMD adopts TSMC’s SoIC advanced packaging to vertically integrate CPU and GPU dies before employing CoWoS advanced packaging with HBM. Production therefore involves an additional advanced packaging step for the SoIC process.
(Photo credit: TSMC)
Tesla’s journey toward autonomous driving requires substantial computational power. Earlier today, TSMC confirmed the commencement of production of Tesla’s next-generation Dojo supercomputer training chips, heralding a significant leap in computing power by 2027.
As per a report from TechNews, Elon Musk’s plan reportedly underscores that software is the true key to profitability. Achieving this requires not only efficient algorithms but also robust computing power. On the hardware side, Tesla takes a dual approach: procuring tens of thousands of NVIDIA H100 GPUs while simultaneously developing its own supercomputing chip, Dojo, whose production TSMC is reportedly responsible for.
As previously reported by TechNews, Tesla primarily focuses on the demands of autonomous driving and has introduced two AI chips to date: the Full Self-Driving (FSD) chip and the Dojo D1 chip. The FSD chip is used in the autonomous driving systems of Tesla vehicles, while the Dojo D1 chip is employed in Tesla’s supercomputers, serving as a general-purpose compute chip from which the AI training tiles that power the Dojo system are built.
At the North American technology conference, TSMC provided detailed insights into its semiconductor technology and advanced packaging, enabling system-level integration across an entire wafer and delivering ultra-high computing performance. TSMC stated that the next-generation Dojo training modules for Tesla have begun production. By 2027, TSMC expects to offer more complex wafer-scale systems with computing power exceeding current systems by over 40 times.
The core of Tesla’s Dojo supercomputer lies in its training modules, each of which arranges 25 D1 chips in a 5×5 matrix. Manufactured on a 7-nanometer process, each chip packs 50 billion transistors and provides 362 TFLOPs of processing power. Crucially, the design is scalable, allowing continuous stacking and adjustment of computing power and power consumption based on software demands.
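Those per-chip figures imply the aggregate throughput of one training module. A quick check of the arithmetic (taking the module total as the simple sum of the 25 chips, which ignores any interconnect overhead):

```python
# Aggregate compute of one Dojo training module: a 5x5 grid of D1 chips.
chips_per_tile = 5 * 5       # 25 D1 chips per training module
tflops_per_chip = 362        # quoted per-chip throughput
transistors_per_chip = 50e9  # 50 billion transistors per D1

tile_tflops = chips_per_tile * tflops_per_chip
print(f"Per module: {tile_tflops} TFLOPs (~{tile_tflops / 1000:.2f} PFLOPs)")
print(f"Transistors per module: {chips_per_tile * transistors_per_chip:.2e}")
```

That works out to roughly 9 PFLOPs per module, which is the scale at which the stacking described above proceeds.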
Wafer Integration Offers 40x Computing Power
According to TSMC, the approach for Tesla’s new product differs from the wafer-scale systems provided to Cerebras. Essentially, the Dojo training modules (a 5×5 grid of pretested processors) are placed on a single carrier wafer, and all empty spaces are filled in. TSMC’s integrated fan-out (InFO) technology is then used to apply a layer of high-density interconnect, significantly enhancing inter-chip data bandwidth so that the dies function like a single large chip.
By 2027, TSMC expects full wafer-scale integration to offer 40 times the computing power, span more than 40 reticles’ worth of silicon, and accommodate over 60 HBM stacks.
As per Musk, if NVIDIA could supply enough GPUs, Tesla probably would not need to develop Dojo on its own. Preliminary estimates suggest that this batch of next-generation Dojo supercomputers will become part of Tesla’s new Dojo cluster in New York, an investment of at least USD 500 million.
Despite substantial investments in computing power, Tesla’s AI business still faces challenges. Last December, two senior engineers responsible for the Dojo project left the company, and Tesla is now downsizing to cut costs even as it needs more talent to ensure the timely launch of its self-driving taxis and to enhance the Full Self-Driving (FSD) system.
Tesla’s next-generation Dojo computer will be located in New York, while its Gigafactory headquarters in Texas will house a 100 MW data center for training self-driving software, built on NVIDIA hardware. Regardless of location, these chips ultimately come from TSMC’s production lines, so calling TSMC an AI enabler is no exaggeration.
(Photo credit: TSMC)
The demand for AI computing power is skyrocketing, making advanced packaging capacity key. A report from Commercial Times citing industry sources points out that TSMC is focusing on the growth potential of advanced packaging.
Southern Taiwan Science Park, Central Taiwan Science Park, and Chiayi Science Park are all undergoing expansion. The Chiayi Science Park, approved this year, is set to construct two advanced packaging factories ahead of schedule. Phase one is scheduled to break ground this quarter, with first tool-in slated for the second half of next year; phase two is expected to start construction in the second quarter of next year, with first tool-in planned for the first quarter of 2027, as TSMC continues to expand its share of the AI and HPC markets.
Advanced packaging technology achieves performance enhancement by stacking, thus increasing the density of inputs/outputs. TSMC recently unveiled numerous next-generation advanced packaging solutions, involving various new technologies and processes, including CoWoS-R and SoW.
The development of advanced packaging technology holds significant importance for the advancement of the chip industry. TSMC’s innovative solutions bring revolutionary wafer-level performance advantages, meeting the future AI demands of ultra-large-scale data centers.
Industry sources cited by the same report have stated that TSMC’s system-on-wafer technology enables a 12-inch wafer to accommodate a large number of chips, providing greater computational power while significantly reducing the space required in data centers.
This advancement also improves power efficiency. The first commercially available SoW product uses integrated fan-out (InFO) technology, primarily for logic chips, while the stacked-chip version employing CoWoS technology is expected to be ready by 2027.
As stacking technology advances, AI chips continue to grow in size, and a single wafer may yield fewer than ten such super chips, making packaging capacity crucial. The industry sources cited in Commercial Times’ report also note that TSMC’s Longtan advanced packaging plant, with a monthly capacity of 20,000 wafers, is already at full capacity. The Zhunan AP6 plant is now the main focus of expansion efforts, and equipment installation at the Central Taiwan Science Park facility is expected to ramp up in the fourth quarter, accelerating capacity preparation.
TSMC’s SoIC has emerged as a leading solution for 3D chip stacking. AMD is the inaugural customer for SoIC, with its MI300 utilizing SoIC paired with CoWoS.
Apple has also officially entered the generative AI battlefield. Sources cited in the same report note that Apple’s first 3D-packaged SoIC product will be its ARM-based CPU for AI servers, codenamed M4 Plus or M4 Ultra, expected to debut as early as the second half of next year. The 3D SoIC packaging is projected to extend further to consumer-grade MacBook M-series processors by 2026.
NVIDIA, on the other hand, is reportedly set to launch the R100 in the second half of next year, utilizing chiplets and the CoWoS-L packaging architecture. Not until 2026 will it officially introduce the X100 (tentative name), which adopts a 3D packaging solution incorporating SoIC and CoWoS-L.
As per a recent report from MoneyDJ citing industry sources, the SoIC technology is still in its early stages, with monthly production capacity expected to reach around 2,000 wafers by the end of this year, with prospects of doubling thereafter and potentially exceeding 10,000 wafers by 2027.
With support from major players like AMD, Apple, and NVIDIA, TSMC is expanding SoIC capacity with confidence, securing future orders for high-end chip manufacturing and advanced packaging.
(Photo credit: TSMC)