Semiconductors


2024-05-03

[News] PSMC’s New Tongluo Plant Unveiled, CoWoS Packaging Ready to Roll

Powerchip Semiconductor Manufacturing Corporation (PSMC) held the inauguration ceremony for its new Tongluo plant on May 2nd. The project, a 12-inch fab with total investment exceeding NTD 300 billion, has completed installation of its initial equipment and commenced trial production. According to a report from Commercial Times, it will serve as PSMC’s primary platform for advancing process technology and pursuing orders from large international clients.

Additionally, PSMC has ventured into advanced CoWoS packaging, primarily producing Silicon Interposers, with mass production expected in the second half of the year and a monthly capacity of several thousand units.

Frank Huang, Chairman of PSMC, stated that construction of the new Tongluo plant began in March 2021. Despite challenges posed by the pandemic, the plant was completed and commenced operations after a three-year period.

As of now, the investment to date in this 12-inch fab project has exceeded NTD 80 billion, underscoring the significant time, technology, and capital required to establish new semiconductor production capacity. Fortunately, the company decided and acted quickly to build the plant; with recent international inflation driving up the cost of raw materials, constructing the same plant today would undoubtedly cost even more.

The land area of Powerchip Semiconductor Manufacturing Corporation’s Tongluo plant exceeds 110,000 square meters. The first phase of the newly completed plant comprises a cleanroom spanning 28,000 square meters. It is projected to house 12-inch wafer production lines for 55nm, 40nm, and 28nm nodes with a monthly capacity of 50,000 units. In the future, as the business grows, the company can still construct a second phase of the plant on the Tongluo site to continue advancing its 2x nanometer technology.

Frank Huang indicated that the first 12-inch fab in Taiwan was established by the Powerchip group. To date, they have built eight 12-inch fabs and plan to construct four more in the future. Some of these fabs will adopt the “Fab IP” technology licensing model. For example, the collaboration with Tata Group in India operates under this model.

According to a previous report from TechNews, Frank Huang believes that IP transfer will also become one of the important sources of revenue in the future. “Up to 7-8 countries have approached PSMC,” including Vietnam, Thailand, India, Saudi Arabia, France, Poland, Lithuania, and others, showing interest in investing in fabs, indicating optimism for PSMC’s future Fab IP operating model.

PSMC’s Fab IP strategy, according to the same report, leverages its long-term accumulated experience in plant construction and semiconductor manufacturing technology to assist other countries, extending from Japan and India to countries in the Middle East and Europe, in building semiconductor plants while earning royalties for technology transfers.

Looking ahead to the second half of the year, Frank Huang indicated that the current issue lies in the less-than-stellar performance of the economies of the United States and China. While the United States is showing relatively better performance in AI and technology, China’s performance is not as strong.

Huang believes that after the fourth quarter of this year, there is a chance for accelerated deployment of AI application products such as smartphones, PCs, and notebooks. With the explosive demand brought about by AI, 2025 is expected to be a very good year for the semiconductor industry, and PSMC has already seized the opportunity.

In addition, PSMC also mentioned that since last year, there has been a continuous tight supply of advanced CoWoS packaging. In response to the demands of global chip clients, the company has also ventured into CoWoS-related businesses, primarily providing the Silicon Interposer needed for advanced CoWoS packaging. Currently in the validation stage, mass production is expected to commence in the second half of the year, with an initial monthly capacity of several thousand units.


(Photo credit: PSMC)

Please note that this article cites information from Commercial Times and TechNews.

2024-05-03

[News] TSMC Reportedly Commences Production of Tesla’s Next-Generation Dojo Chips, Anticipates 40x Increase in Computing Power in 3 Years

Tesla’s journey toward autonomous driving requires substantial computational power. Earlier today, TSMC confirmed the commencement of production for Tesla’s next-generation Dojo supercomputer training chips, heralding a significant leap in computing power by 2027.

As per a report from TechNews, Elon Musk’s plan reportedly underscores that software is the true key to profitability. Achieving this requires not only efficient algorithms but also robust computing power. In the hardware realm, Tesla takes a dual approach: procuring tens of thousands of NVIDIA H100 GPUs while simultaneously developing its own supercomputing chip, Dojo. TSMC, which is reportedly responsible for producing these supercomputer chips, has confirmed that production is underway.

As previously reported by TechNews, Tesla primarily focuses on the demands of autonomous driving and has introduced two AI chips to date: the Full Self-Driving (FSD) chip and the Dojo D1 chip. The FSD chip is used in Tesla vehicles’ autonomous driving systems, while the Dojo D1 chip is employed in Tesla’s supercomputers, serving as a general-purpose building block for the AI training modules that power the Dojo system.

At its North America Technology Symposium, TSMC provided detailed insights into its semiconductor and advanced packaging technologies, which enable system-level integration across an entire wafer for ultra-high computing performance. TSMC stated that the next-generation Dojo training modules for Tesla have begun production. By 2027, TSMC expects to offer more complex wafer-scale systems with computing power exceeding current systems by over 40 times.

The core of Tesla’s designed Dojo supercomputer lies in its training modules, wherein 25 D1 chips are arranged in a 5×5 matrix. Manufactured using a 7-nanometer process, each chip can accommodate 50 billion transistors, providing 362 TFLOPs of processing power. Crucially, it possesses scalability, allowing for continuous stacking and adjustment of computing power and power consumption based on software demands.
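As a quick sanity check, the per-module throughput implied by these figures can be worked out directly. The following is a back-of-the-envelope sketch using only the numbers cited above (25 D1 chips at 362 TFLOPS each); it is illustrative arithmetic, not an official Tesla or TSMC specification:

```python
# Back-of-the-envelope throughput of one Dojo training module, using only
# the figures cited above: 25 D1 chips (a 5x5 matrix) at 362 TFLOPS each.
D1_TFLOPS = 362           # per-chip processing power cited above
CHIPS_PER_MODULE = 5 * 5  # D1 chips arranged in a 5x5 matrix

module_tflops = D1_TFLOPS * CHIPS_PER_MODULE
print(f"One training module: {module_tflops} TFLOPS "
      f"(~{module_tflops / 1000:.2f} PFLOPS)")
```

That works out to roughly 9 PFLOPS per module before any scaling across multiple modules.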

Wafer Integration Offers 40x Computing Power

According to TSMC, the approach for Tesla’s new product differs from the wafer-scale systems provided to Cerebras. Essentially, the Dojo training modules (a 5×5 grid of pretested processors) are placed on a single carrier wafer, with all empty spaces filled in. Subsequently, TSMC’s integrated fan-out (InFO) technology is utilized to apply a layer of high-density interconnects. This process significantly enhances the inter-chip data bandwidth, enabling them to function like a single large chip.

By 2027, TSMC expects full wafer-scale integration to deliver 40 times the computing power of current systems, spanning more than 40 reticles’ worth of silicon and accommodating over 60 HBM stacks.

As per Musk, if NVIDIA could supply enough GPUs, Tesla probably would not need to develop Dojo on its own. Preliminary estimates suggest that this batch of next-generation Dojo supercomputers will become part of Tesla’s new Dojo cluster in New York, with an investment of at least USD 500 million.

Despite substantial investments in computing power, Tesla’s AI business still faces challenges. In December last year, two senior engineers responsible for the Dojo project left the company. Now, even as Tesla continues downsizing to cut costs, the project needs talented engineers to ensure the timely launch of self-driving taxis and to enhance the Full Self-Driving (FSD) system.

Tesla’s next-generation Dojo computer will be located in New York, while its Gigafactory headquarters in Texas will house a 100 MW data center for training self-driving software, with hardware supplied by NVIDIA. Regardless of location, these chips ultimately come from TSMC’s production lines; calling them AI enablers is no exaggeration at all.


(Photo credit: TSMC)

Please note that this article cites information from TechNews.

2024-05-02

[News] HBM Craze Continues! SK Hynix Reports Sold Out for this Year, Next Year’s HBM Capacity Nearly Fully Booked

SK Hynix CEO Kwak Noh-Jung announced on May 2nd that the company’s HBM capacity for this year has already been fully sold out, and next year’s capacity is also nearly sold out. From a technological perspective, SK Hynix plans to provide samples of the world’s highest-performance 12-layer stacked HBM3e products in May this year and is preparing for mass production starting in the third quarter.

SK Hynix just held a press conference in South Korea, where they disclosed information regarding their AI memory technology capabilities, market status, and investment plans for future major production sites in Cheongju and Yongin, South Korea, as well as in the United States.

Kwak Noh-Jung pointed out that although AI is currently primarily centered around data centers, it is expected to rapidly expand to on-device AI applications in smartphones, PCs, cars, and other end devices in the future. Consequently, the demand for memory specialized for AI, characterized by “ultra-fast, high-capacity and low-power,” is expected to skyrocket.

Kwak Noh-Jung stated that SK Hynix possesses industry-leading technological capabilities in various product areas such as HBM, TSV-based high-capacity DRAM, and high-performance eSSD. In the future, SK Hynix looks to provide globally top-tier memory solutions tailored to customers’ needs through strategic partnerships with global collaborators.

Looking ahead to AI memory, SK Hynix President Justin Kim pointed out that as we enter the era of AI, the global volume of data generated is expected to grow from 15 Zettabytes (ZB) in 2014 to 660 ZB by 2030. Simultaneously, the proportion of revenue from AI memory is also expected to increase significantly. AI-oriented memory technologies, such as HBM and high-capacity DRAM modules, accounted for about 5% of total memory market revenue in 2023 and are expected to reach 61% by 2028.
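The data-volume figures above imply a steep compound growth rate. The following is a minimal sketch of the arithmetic, using only the 15 ZB and 660 ZB figures cited in the report; the growth rate is derived here, not stated by SK Hynix:

```python
# Implied compound annual growth rate (CAGR) of global data volume,
# from the figures cited above: 15 ZB in 2014 growing to 660 ZB by 2030.
start_zb, end_zb = 15, 660
years = 2030 - 2014  # 16 years

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
```

That comes to roughly 27% per year, compounded over 16 years.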

Additionally, the company will advance collaboration with top-tier partners in the system semiconductor and foundry fields globally, aiming to timely develop and provide the best products.

Regarding its packaging capabilities, SK Hynix highlighted MR-MUF as one of its core packaging technologies. While some have suggested that MR-MUF may face bottlenecks in high-layer stacking, SK Hynix emphasized that this is not the case in practice: the company has already begun mass production of 12-layer stacked HBM3 products using advanced MR-MUF technology.

By reducing the pressure applied during chip stacking to 6% of that of conventional processes, MR-MUF has not only shortened process time but also increased production efficiency by up to 4 times while improving heat dissipation by 45%. Moreover, the latest MR-MUF technology from SK Hynix uses new protective materials, yielding a further 10% improvement in heat dissipation. The advanced MR-MUF process also employs high-temperature, low-pressure methods for warpage control, making it well suited to high-layer stacking.

Furthermore, SK Hynix plans to adopt advanced MR-MUF technology in HBM4 to achieve 16-layer stacking and is actively researching hybrid bonding technology. Lastly, in terms of investments in the United States, SK Hynix has confirmed the construction of an advanced packaging production facility for AI memory in Indiana. This facility is scheduled to commence mass production of the next-generation HBM products in the second half of 2028.


(Photo credit: SK Hynix)

2024-05-02

[News] AI Chip Alone Can’t Hold Up? AMD’s Fiscal Forecast This Quarter Reportedly Falls Short of Expectations

AMD benefited from AI demand last quarter (January to March), posting revenue of USD 5.47 billion, surpassing Wall Street expectations and returning to profit after a loss in the same period last year. However, its fiscal forecast for this quarter fell slightly short of market expectations.

AMD achieved a net profit of USD 120 million last quarter, with an adjusted EPS of USD 0.62, surpassing Wall Street’s expected USD 0.61. AMD expects revenue for this quarter to be between USD 5.4 billion and USD 6 billion, with a midpoint of USD 5.7 billion, a 6% increase from the same period last year but slightly below Wall Street’s expected USD 5.73 billion.
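A quick check of the guidance arithmetic cited above can be sketched as follows. This uses only the numbers in this article; the year-ago revenue figure is derived from the stated 6% growth rate, not reported here:

```python
# Midpoint of AMD's guided revenue range, and the year-ago revenue implied
# by the ~6% YoY growth figure cited above (all values in USD billions).
low, high = 5.4, 6.0
midpoint = (low + high) / 2         # guided midpoint
implied_year_ago = midpoint / 1.06  # revenue implied by +6% YoY at midpoint

print(f"Midpoint: USD {midpoint:.1f}B; "
      f"implied year-ago revenue: ~USD {implied_year_ago:.2f}B")
```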

After enduring a downturn in the semiconductor industry, AMD finally returned to profitability last quarter, largely due to strong sales of its MI300 series AI chips, which drove revenue in the data center division to grow by 80% year-on-year to USD 2.3 billion.

As per a report from the Wall Street Journal, AMD CEO Lisa Su stated that since the launch of the latest MI300X chip at the end of last year, sales have surpassed USD 1 billion, with major customers including Microsoft, Meta, Oracle, among other tech giants.

In January, Lisa Su had forecasted that AMD’s AI chip revenue for this year could reach USD 3.5 billion, which was recently revised upwards to USD 4 billion. The AMD MI300 series chips are seen as direct competitors to NVIDIA’s H100 chips. However, NVIDIA announced its new generation AI chip architecture, Blackwell, in March this year, forcing AMD to accelerate its pace. Lisa Su stated that AMD is already developing the next generation of AI chips.

AMD’s client division, which sells PC chips, has also benefited from the AI wave, with revenue increasing by 85% year-on-year to USD 1.4 billion last quarter, once again proving the recovery and growth of the global PC market. AMD’s chips for PCs are capable of executing AI computations locally, targeting the increasingly expanding demand for AI-enabled PCs.

Regarding the applications of AI PCs, Su previously stated in an interview with Sina that she found communication, productivity, and creativity particularly exciting. Many applications are still in their early stages, but she expects to see more developments in the coming years.

However, AMD’s businesses outside of AI chips are facing increasing challenges. Revenue from the gaming console chip division declined by 48% year-on-year to USD 920 million last quarter, falling short of Wall Street’s expectations of USD 970 million. Additionally, the revenue from the embedded chip division, established after AMD’s acquisition of Xilinx in 2022, also decreased by 46% year-on-year to USD 850 million last quarter, similarly below Wall Street’s expectations of USD 940 million.

TrendForce previously issued an analysis in a press release, indicating that the AI PC market is propelled by two key drivers: Firstly, demand for terminal applications, mainly dominated by Microsoft through its Windows OS and Office suite, is a significant factor. Microsoft is poised to integrate Copilot into the next generation of Windows, making Copilot a fundamental requirement for AI PCs.

Secondly, Intel, as a leading CPU manufacturer, is advocating for AI PCs that combine CPU, GPU, and NPU architectures to enable a variety of terminal AI applications.


(Photo credit: AMD)

Please note that this article cites information from Commercial Times, the Wall Street Journal, and Sina.

2024-05-02

[News] TSMC Advanced Packaging Crucial for AI Computing Power

The demand for AI computing power is skyrocketing, making advanced packaging capacity a key bottleneck. As per a report from Commercial Times citing industry sources, TSMC is focusing on the growth potential of advanced packaging.

Southern Taiwan Science Park, Central Taiwan Science Park and Chiayi Science Park are all undergoing expansion. The Chiayi Science Park, approved this year, is set to construct two advanced packaging factories ahead of schedule. Phase one of Chiayi Science Park is scheduled to break ground this quarter, with first tool-in slated for the second half of next year. Phase two of Chiayi Science Park is expected to start construction in the second quarter of next year, with first tool-in planned for the first quarter of 2027, continuing to expand its share in the AI and HPC markets.

Advanced packaging technology achieves performance enhancement by stacking, thus increasing the density of inputs/outputs. TSMC recently unveiled numerous next-generation advanced packaging solutions, involving various new technologies and processes, including CoWoS-R and SoW.

The development of advanced packaging technology holds significant importance for the advancement of the chip industry. TSMC’s innovative solutions bring revolutionary wafer-level performance advantages, meeting the future AI demands of ultra-large-scale data centers.

Industry sources cited by the same report have stated that TSMC’s introduction of system-level wafer technology enables 12-inch wafers to accommodate a large number of chips, providing greater computational power while significantly reducing the space required in data centers.

This advancement also improves power efficiency. The first commercially available SoW product uses integrated fan-out (InFO) technology, primarily for logic chips, while a stacked-chip version employing CoWoS technology is expected to be ready by 2027.

As stacking technology advances, the size of AI chips continues to grow, with a single wafer potentially yielding fewer than ten super chips, making packaging capacity crucial. The industry sources cited in Commercial Times’ report also note that TSMC’s Longtan advanced packaging plant, with a monthly capacity of 20,000 wafers, is already running at full capacity. The Zhunan AP6 plant is currently the main focus of expansion efforts, with equipment installation at the Central Taiwan Science Park facility expected to ramp up in the fourth quarter, accelerating capacity preparation.

TSMC’s SoIC has emerged as a leading solution for 3D chip stacking. AMD is the inaugural customer for SoIC, with its MI300 utilizing SoIC paired with CoWoS.

Apple has also officially entered the generative AI battlefield. According to sources cited in the same report, Apple’s first 3D-packaged SoIC product will be its ARM-based CPU for AI servers, codenamed M4 Plus or M4 Ultra, expected to debut as early as the second half of next year. The 3D SoIC packaging technology is projected to extend further to consumer-grade MacBook M-series processors by 2026.

NVIDIA, on the other hand, is reportedly set to launch the R100 in the second half of next year, utilizing chiplet and the CoWoS-L packaging architecture. It’s not until 2026 that they will officially introduce the X100 (tentative name), which adopts a 3D packaging solution incorporating SoIC and CoWoS-L.

As per a recent report from MoneyDJ citing industry sources, SoIC technology is still in its early stages, with monthly production capacity expected to reach around 2,000 wafers by the end of this year, to double next year, and potentially to exceed 10,000 wafers by 2027.

With support from major players like AMD, Apple, and NVIDIA, TSMC is viewed as expanding SoIC capacity with confidence, securing future orders for high-end chip manufacturing and advanced packaging.


(Photo credit: TSMC)

Please note that this article cites information from Commercial Times and MoneyDJ.

