News
Taiwanese Minister of Economic Affairs J.W. Kuo, who was visiting Japan by invitation, attended a forum organized by the Taiwan-Japan Research Institute on August 30 and delivered a keynote speech. According to Kyodo News, citing an interview with Kuo, he indicated that TSMC plans to build a third fab in Japan, though the projected timeline would be after 2030.
However, Kuo emphasized that the final decision on whether to proceed with the expansion in Japan rests with TSMC, and he refrained from discussing specific site locations.
In addition, in response to Kuo’s comments, the Ministry of Economic Affairs clarified that any details regarding TSMC’s potential third fab should be confirmed with the foundry giant itself.
Reportedly, Kumamoto Prefecture Governor Takashi Kimura visited TSMC’s headquarters on the afternoon of August 26 and held talks with TSMC’s senior executives.
Notably, Takashi Kimura, who took office in April, said in a Bloomberg report published on May 11 that he would spare no effort to persuade TSMC to establish a third fab in the region, arguing that the preparations for TSMC’s first Kumamoto fab have already given the region better road and water infrastructure and an education system better suited to international students, which could work in its favor.
TSMC’s fab in Kikuyo Town, Kumamoto Prefecture (Kumamoto Fab 1) is set to begin mass production in Q4 (October-December), utilizing 28/22nm and 16/12nm process technologies, with a monthly capacity of 55,000 wafers.
Kumamoto Fab 2 is scheduled to begin construction at the end of 2024, with the goal of starting operations by the end of 2027, focusing on 6/7nm processes. The combined monthly capacity of Kumamoto Fab 1 and Fab 2 is estimated to exceed 100,000 wafers.
TSMC Chairman C.C. Wei mentioned in June that after the successful operation of the first and second fabs, TSMC would consider building a third fab if it receives the approval of the local residents.
Read more
(Photo credit: TSMC)
News
TSMC’s angstrom-level A16 process is creating a buzz even before mass production. According to a report from the Economic Daily News, not only has major client Apple already booked capacity for TSMC’s A16, but OpenAI has also joined in to secure A16 capacity to meet its long-term need for self-developed AI chips.
Regarding this matter, TSMC stated on August 30 that the company does not comment on market rumors or on business dealings with individual customers.
Although TSMC’s A16 process is not scheduled to enter mass production until 2026, the report hints that the first batch of customers has already surfaced.
In addition to Apple, which has been in continuous collaboration with TSMC, the most notable new customer of TSMC’s A16 is said to be OpenAI, the developer of ChatGPT, which is actively investing in the design and development of its own ASIC chips.
Industry sources cited by the same report reveal that OpenAI had initially been in active discussions with TSMC about establishing a dedicated fab. However, after assessing the potential benefits, the plan to build a dedicated facility was shelved.
Strategically, OpenAI is now partnering with U.S. companies such as Broadcom and Marvell to develop its own ASIC chips, potentially making it one of Broadcom’s top four customers.
Since both IC design giants are long-term clients of TSMC, the ASIC chips they are helping OpenAI develop are expected to be produced using TSMC’s 3nm process family and the subsequent A16 process, according to the chip design roadmap.
It is worth noting that OpenAI not only holds a critical position in the development of AI applications beyond Apple’s ecosystem but also contributes to the advancement of AI applications in Apple devices.
In June of this year, Apple unveiled its personal intelligence system, Apple Intelligence, which integrates ChatGPT. This strategic move has led observers to believe that OpenAI plays a key role in Apple’s AI development.
As OpenAI continues to invest in the design and development of its own ASIC chips, it is reportedly expected to maintain its influence in the AI computing field.
TSMC unveiled its angstrom-class A16 advanced process during the company’s 2024 North America Technology Symposium on April 25, set to be mass-produced in 2026.
Per TSMC, compared to its N2P process, A16 offers an 8% to 10% speed increase at the same Vdd (operating voltage), a 15% to 20% reduction in power consumption at the same speed, and a chip density increase of up to 1.1 times, making it well-suited for data center products.
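To make TSMC’s stated ranges easier to read, the short sketch below applies them to a purely hypothetical baseline design; the baseline clock, power, and density values are illustrative assumptions, not TSMC figures.

```python
# Illustrative arithmetic only: the N2P baseline figures below are hypothetical,
# not TSMC data; only the improvement ranges come from TSMC's announcement.
n2p = {"clock_ghz": 3.0, "power_w": 10.0, "density_mtr_per_mm2": 300.0}

speed_gain = (1.08, 1.10)   # 8-10% faster at the same Vdd
power_cut = (0.15, 0.20)    # 15-20% lower power at the same speed
density_gain = 1.1          # up to ~1.1x chip density

a16_clock = [n2p["clock_ghz"] * g for g in speed_gain]
a16_power = [n2p["power_w"] * (1 - c) for c in power_cut]
a16_density = n2p["density_mtr_per_mm2"] * density_gain

print(f"A16 clock at same Vdd:   {a16_clock[0]:.2f}-{a16_clock[1]:.2f} GHz")
print(f"A16 power at same speed: {a16_power[1]:.1f}-{a16_power[0]:.1f} W")
print(f"A16 logic density:       {a16_density:.0f} MTr/mm^2")
```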
Read more
News
TSMC is set to offer a new round of its CyberShuttle prototyping service in September. According to sources cited in a report from Commercial Times, customers typically have two opportunities each year, in March and September, to submit their projects. The highlight this round is expected to be the 2nm process, giving leading companies an opportunity to gain an edge.
TSMC’s 2nm technology is progressing smoothly, with the new Hsinchu Baoshan plant on track for mass production next year. Previously, there were rumors indicating that Apple is considering adopting 2nm chips in 2025, with the iPhone 17 series potentially being among the first devices to use them.
Reportedly, both TSMC’s N2P and A16 technologies are expected to enter mass production in the second half of 2026, offering improvements in power efficiency and chip density.
Even though customer intentions for the first 2nm tape-out remain unconfirmed, ASIC companies are eager to participate in this round of CyberShuttle. The technology is likely to maintain TSMC’s leadership in advanced processes and secure its future technological advantage.
CyberShuttle, also known as MPW (Multi-Project Wafer), refers to the process of placing chips from different customers onto the same test wafer. This approach not only allows for the shared cost of photomasks but also enables rapid chip prototyping and verification, enhancing customers’ cost efficiency and operational effectiveness.
Based on TSMC’s official information, the CyberShuttle prototyping service significantly reduces NRE costs by covering the widest technology range (from 0.5µm to 7nm) and the most frequent launch schedule (up to 10 shuttles per month), all through a convenient online registration system.
The CyberShuttle prototyping service also validates the sub-circuit functionality and process compatibility of IP, standard cell libraries, and I/Os, reducing prototype costs by up to 90%.
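As a rough illustration of how a multi-project wafer reduces NRE, the sketch below splits a single mask-set cost across the projects sharing one shuttle run; the cost figure and project count are hypothetical assumptions, not TSMC pricing.

```python
# Hypothetical illustration of multi-project wafer (MPW) cost sharing.
# The mask-set cost and project count are assumptions, not TSMC pricing.
mask_set_cost = 5_000_000      # assumed full mask-set cost for an advanced node, in USD
shuttle_projects = 20          # assumed number of projects sharing one shuttle run

dedicated_cost = mask_set_cost                     # prototype with its own mask set
shared_cost = mask_set_cost / shuttle_projects     # prototype on a shared shuttle

reduction = 1 - shared_cost / dedicated_cost
print(f"Per-project mask cost, dedicated run: ${dedicated_cost:,.0f}")
print(f"Per-project mask cost, CyberShuttle:  ${shared_cost:,.0f}")
print(f"Prototype mask-cost reduction:        {reduction:.0%}")
```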
TSMC’s 2nm technology is expected to make its CyberShuttle debut in September, offering customers opportunities to run test chips.
Per the report from Commercial Times, IC design companies have pointed out that the industry is transitioning from the familiar FinFET (Fin Field-Effect Transistor) structure to the Gate-All-Around FET (GAAFET) structure, making it crucial for the market to adapt quickly.
This also allows IC design companies to provide related products to end customers, demonstrating their 2nm design capabilities.
ASIC companies have also revealed that, based on CyberShuttle data, the number of advanced process projects below 7nm is relatively small, with mature processes still dominating.
This suggests that future competition will likely focus on a few leading companies. Those who miss the first wave of 2nm technology may fall behind their competitors by up to six months, making securing a spot on the Shuttle even more critical.
Read more
(Photo credit: TSMC)
News
As designing and manufacturing large monolithic ICs has become more complex, yield and cost challenges have mounted for semiconductor companies, boosting the popularity of chiplets. The wave is now spreading to the memory sector. According to a report by TheElec, SK hynix intends to integrate chiplet technology into its memory controllers over the next three years to improve cost management.
In January, the company applied for a brand name called MOSAIC, which represents its chiplet technology, the report notes.
Citing SK hynix Executive Vice President Moon Ki-ill, the report notes that the company currently collaborates with TSMC as the foundry for manufacturing its controllers.
However, within the next two to three years, parts of the controller will be manufactured with advanced nodes, while other sections will use legacy nodes, the report states. Moon added that the company is currently developing technology to connect these different sections.
Moon further explained that while TSMC manufactures the controllers for SK hynix, the memory giant itself is responsible for the packaging work. In the future, under a chiplet structure, a chip can be divided into separate parts with various functions and then reconnected to achieve performance similar to a single integrated chip, according to TheElec.
In this scenario, function A might use TSMC’s 7nm node, while functions B and C could be produced using legacy nodes from TSMC or another foundry. This approach would enable SK hynix to better manage the costs of its DRAM and NAND products, the report notes.
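To make the cost logic of such a split concrete, here is a minimal sketch comparing a monolithic controller with a hypothetical chiplet partition under a simple yield model; all die sizes, wafer costs, defect densities, and packaging overheads are invented for illustration and do not reflect SK hynix or TSMC data.

```python
from math import exp

# Hypothetical illustration of why splitting a controller into chiplets can cut cost.
# All die areas, per-mm^2 wafer costs, defect densities, and packaging overheads
# below are invented for illustration; they are not SK hynix or TSMC figures.

def die_yield(area_mm2: float, defect_density: float) -> float:
    """Simple Poisson yield model: probability that a die has zero defects."""
    return exp(-area_mm2 * defect_density)

def cost_per_good_die(area_mm2: float, cost_per_mm2: float, defect_density: float) -> float:
    """Silicon cost of one die, scaled up by the share of dies lost to defects."""
    return area_mm2 * cost_per_mm2 / die_yield(area_mm2, defect_density)

# Monolithic controller built entirely on an advanced node.
monolithic = cost_per_good_die(area_mm2=120, cost_per_mm2=0.20, defect_density=0.002)

# Chiplet split: only the performance-critical function stays on the advanced node,
# the remaining functions move to a cheaper legacy node, and a fixed cost is added
# for advanced packaging and the die-to-die interconnect.
advanced_chiplet = cost_per_good_die(area_mm2=40, cost_per_mm2=0.20, defect_density=0.002)
legacy_chiplet = cost_per_good_die(area_mm2=80, cost_per_mm2=0.08, defect_density=0.002)
packaging_overhead = 3.0
chiplet_total = advanced_chiplet + legacy_chiplet + packaging_overhead

print(f"Monolithic controller cost per good die: ${monolithic:.2f}")
print(f"Chiplet-based controller cost:           ${chiplet_total:.2f}")
```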
According to the definition of EDA and Intelligent System Design provider Cadence, chiplet technology results in versatile and customizable modular chips, which leads to reduced development timelines and costs.
The creation and adoption of chiplet standards like UCIe enable seamless integration of chiplets into System-on-Chips (SoCs), unlocking new possibilities in computing and technology applications, according to Cadence.
In addition to SK hynix, Samsung has also brought up plans to adopt chiplet technology. In July, in a joint announcement with Japanese startup Preferred Networks, the two companies said they plan to showcase groundbreaking AI chiplet solutions for the next-generation data center and generative AI computing market.
According to an earlier report by Gizmochina, Samsung is also said to be mulling applying 3D chiplet technology to its Exynos mobile APs.
Read more
(Photo credit: SK hynix)
News
NVIDIA’s market leadership has garnered significant attention from other industry players. According to a report from the Financial Times, several smaller companies, including Cerebras, d-Matrix, and Groq, have raised hundreds of millions of dollars and are launching new products, hoping to carve out a niche in the market.
Cerebras, founded in 2016, recently unveiled its new platform, Cerebras Inference, based on its CS-3 chip. The company even claims its solution is 20 times faster than NVIDIA’s current generation Hopper for AI inference, and at a fraction of the cost.
Per another report from the Economic Daily News, in March this year, Cerebras also launched the WSE-3 processor designed for training AI models, manufactured using TSMC’s 5nm process. At that time, Cerebras confirmed plans for an IPO and has confidentially filed a registration statement with the U.S. Securities and Exchange Commission.
Notably, Andrew Feldman, CEO of Cerebras, further noted that the company has already won meaningful customers away from NVIDIA.
d-Matrix, established five years ago, is launching a new funding round with a target of raising over USD 200 million. This follows its USD 110 million Series B round led by Temasek, completed less than a year ago.
The company plans to fully launch its Corsair platform by the end of the year and is integrating its products with open-source software, including Triton, which competes with NVIDIA’s CUDA. Several of NVIDIA’s largest customers support the use of open-source software.
Groq, founded in the same year as Cerebras and led by a team from Google’s Tensor Processing Unit division, recently raised USD 640 million from investors including BlackRock Private Equity Partners, giving it a valuation of USD 2.8 billion.
Despite the rush to find and support the next NVIDIA, semiconductor startups are facing significant challenges, according to the Financial Times.
For example, chipmaker Graphcore was acquired by SoftBank last month for just over USD 600 million, falling short of the approximately USD 700 million it had raised from venture capital since its founding in 2016.
Read more
(Photo credit: Cerebras)