Articles


2024-05-06

[News] Apple M4 Incoming, Boosting TSMC’s 3nm Production

In a bid to seize the AI PC market opportunity, Apple is set to debut its new iPad Pro on the 7th, featuring its in-house M4 chip. With the momentum of the M4 chip’s strong debut, Apple reportedly plans to revamp its entire Mac lineup. The initial batch of M4 Macs is estimated to hit the market gradually from late this year to early next year.

According to a report from Commercial Times, Apple’s M4 chip adopts TSMC’s N3E process, aligning with Apple’s plans for a major performance upgrade for the Mac, which is expected to boost TSMC’s operations.

Notably, per Wccftech’s previous report, it is rumored that the N3E process is also used for producing products like the A18 Pro, the upcoming Qualcomm Snapdragon 8 Gen 4, and the MediaTek Dimensity 9400, among other major clients’ products.

Apple will hold an online launch event on May 7th at 10 p.m. Taiwan time. Per industry sources cited by the same report, besides introducing products such as the iPad Pro and iPad Air and accessories like the Apple Pencil, the event will mark the debut of the self-developed M4 chip, unveiling the computational capabilities of Apple’s first AI tablet.

With major computer brands and chip manufacturers competing to release AI PCs, such as Qualcomm’s Snapdragon X Elite and X Plus, and Intel introducing Core Ultra into various laptop brands, it is imperative for Apple to upgrade the performance of its products. Therefore, the strategy of highlighting AI performance through the M4 chip comes as no surprise.

According to a report by Mark Gurman from Bloomberg, the M4 chip will be integrated across Apple’s entire Mac product line. The first batch of M4 Macs is expected to debut as early as the end of this year, including new iMac models, the standard 14-inch MacBook Pro, high-end 14-inch and 16-inch MacBook Pro models, and the Mac mini. New products for 2025 will also be released gradually, such as updates to the 13-inch and 15-inch MacBook Air in the spring, the Mac Studio in mid-year, and finally the Mac Pro.

The report from Commercial Times has claimed that the M4 chip will come in three versions: Donan, Brava, and Hidra. The Donan variant is intended for entry-level MacBook Pro, MacBook Air, and low-end Mac mini models. The Brava version is expected to be used in high-end MacBook Pro and Mac mini models, while the Hidra version will be integrated into desktop Mac Pro computers.

Apple’s plan to introduce the M4 chip into its Mac series is expected to boost the revenue of TSMC’s 3-nanometer family. The report has indicated that the M4 chip will still be manufactured using TSMC’s 3-nanometer process, but with enhancements to the neural processing unit (NPU), bringing AI capabilities to Apple’s product line. Additionally, industry sources cited by the same report have revealed that the M4 will utilize TSMC’s N3E process, an improvement over the N3B process used in the M3 series chips.

Meanwhile, TSMC continues to advance optimized versions of its existing advanced process nodes. Among them, the N3E variant of the 3-nanometer family, which entered mass production in the fourth quarter of last year, will be followed by N3P and N3X. Currently, N3E is highly likely to be featured in the new-generation iPad Pro.



(Photo credit: Apple)

Please note that this article cites information from Commercial Times, Wccftech and Bloomberg.

2024-05-06

[News] TSMC’s Advanced Packaging Capacity Fully Booked by NVIDIA and AMD Through Next Year

With the flourishing of AI applications, the two major AI giants, NVIDIA and AMD, are fully committed to the high-performance computing (HPC) market. According to the Economic Daily News, they have secured TSMC’s advanced CoWoS and SoIC packaging capacity through this year and the next, bolstering TSMC’s AI-related business orders.

TSMC holds a highly positive outlook on the momentum brought by AI-related applications. During the April earnings call, CEO C.C. Wei extended the visibility of AI orders and their revenue contribution from the original expectation of 2027 to 2028.

TSMC anticipates that revenue contribution from server AI processors will more than double this year, accounting for a low-teens percentage of the company’s total revenue in 2024. It also expects a 50% compound annual growth rate for server AI processors over the next five years, with these processors projected to contribute over 20% to TSMC’s revenue by 2028.
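As a rough sanity check on these projections (an illustration, not figures from TSMC or the report), one can assume a 13% “low-teens” share in 2024 and a few hypothetical total-revenue growth rates to see whether a 50% CAGR would indeed push the AI share past 20% by 2028:

```python
# Back-of-the-envelope check of the projection cited above (illustrative only).
# The 13% starting share and the total-revenue CAGRs are assumptions, not TSMC guidance.
ai_share_2024 = 0.13      # assumed "low-teens" share of total revenue in 2024
ai_cagr = 0.50            # 50% CAGR for server AI processors cited by TSMC
years = 4                 # 2024 -> 2028

for total_cagr in (0.10, 0.15, 0.20):  # hypothetical CAGRs for TSMC's total revenue
    share_2028 = ai_share_2024 * (1 + ai_cagr) ** years / (1 + total_cagr) ** years
    print(f"total-revenue CAGR {total_cagr:.0%}: implied 2028 AI share ~ {share_2028:.0%}")
```

Under any of these assumed total-revenue growth rates, the implied 2028 share comfortably clears 20%, so the projection is internally consistent.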

Industry sources cited by the same Economic Daily News report have indicated that strong demand for AI has led to fierce competition among the four global cloud service giants, namely Amazon AWS, Microsoft, Google, and Meta, to bolster their AI server arsenals. This has resulted in a supply shortage of AI chips from major manufacturers like NVIDIA and AMD.

Consequently, these companies have heavily invested in TSMC’s advanced process and packaging capabilities to meet the substantial order demands from cloud service providers. TSMC’s advanced packaging capacity, including CoWoS and SoIC, for 2024 and 2025 has been fully booked.

To address the massive demand from customers, TSMC is actively expanding its advanced packaging capacity. Industry sources cited by the report have estimated that by the end of this year, TSMC’s CoWoS monthly capacity could reach between 45,000 and 50,000 units, a significant increase from the 15,000 units in 2023. By the end of 2025, CoWoS monthly capacity is expected to reach a new peak of 50,000 units.

Regarding SoIC, it is anticipated that the monthly capacity by the end of this year could reach five to six thousand units, representing a multiple-fold increase from the 2,000 units at the end of 2023. Furthermore, by the end of 2025, the monthly capacity is expected to surge to a scale of 10,000 units.
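Taking the report’s estimates at face value, a minimal sketch of the implied expansion multiples (midpoints are used where a range is quoted; none of these are confirmed TSMC figures):

```python
# Implied capacity-expansion multiples from the monthly figures cited above.
# Midpoints are used for the quoted ranges (45,000-50,000 CoWoS; 5,000-6,000 SoIC).
capacity = {
    "CoWoS": {"end 2023": 15_000, "end 2024": 47_500, "end 2025": 50_000},
    "SoIC":  {"end 2023": 2_000,  "end 2024": 5_500,  "end 2025": 10_000},
}

for line, series in capacity.items():
    base = series["end 2023"]
    for period, units in series.items():
        print(f"{line} {period}: {units:>6,} units/month ({units / base:.1f}x vs end 2023)")
```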

It is understood that NVIDIA’s mainstay H100 chip, currently in mass production, utilizes TSMC’s 4-nanometer process and adopts CoWoS advanced packaging. Additionally, it is paired with SK Hynix’s High Bandwidth Memory (HBM) in a 2.5D packaging configuration.

As for NVIDIA’s next-generation Blackwell architecture AI chips, including the B100, B200, and the GB200 with Grace CPU, although they also utilize TSMC’s 4-nanometer process, they are produced using an enhanced version known as N4P. The production for the B100, per a previous report from TechNews, is slated for the fourth quarter of this year, with mass production expected in the first half of next year.

Additionally, they are equipped with higher-capacity HBM3e high-bandwidth memory with updated specifications. Consequently, their computational capabilities will see a multiple-fold increase compared to the H100 series.

On the other hand, AMD’s MI300 series AI accelerators are manufactured using TSMC’s 5-nanometer and 6-nanometer processes. Unlike NVIDIA, AMD adopts TSMC’s SoIC advanced packaging to vertically integrate the CPU and GPU dies before employing CoWoS advanced packaging with HBM. Hence, the production flow involves an additional advanced packaging step for the SoIC process.


(Photo credit: TSMC)

Please note that this article cites information from Economic Daily News and TechNews.

2024-05-06

[Insights] Big Four CSPs Continue to Shine in Q1 2024 Financial Reports, AI Returns Garnering Attention

The four major cloud service providers (CSPs), Google, Microsoft, Amazon, and Meta, sequentially released their financial results for the first quarter of 2024 (January to March 2024) at the end of April.

Each company achieved double-digit revenue growth, and all continued to raise capital expenditures with AI as their main development focus. From the previous quarter to date, the market’s attention has remained on whether these AI investments can successfully translate into revenue.

TrendForce’s Insights:

1. Strong Financial Performance of Top Four CSPs Driven by AI and Cloud Businesses

Alphabet, the parent company of Google, reported stellar financial results for the first quarter of 2024. Bolstered by growth in its search engine, YouTube, and cloud services, revenue surpassed USD 80 billion, while profit rose 57%. The company also announced its first-ever dividend payout, further boosting its stock price as all metrics exceeded market expectations and pushing its market capitalization past USD 2 trillion for the first time.

For Google, the current development strategy revolves around its in-house large language model (LLM) Gemini, aimed at strengthening its cloud services, search interaction interfaces, and dedicated hardware development.

Microsoft’s financial performance is equally impressive. This quarter, its revenue reached USD 61.9 billion, marking a year-on-year increase of 17%. Among its business segments, the Intelligent Cloud sector saw the highest growth, with a 21% increase in revenue, totaling USD 26.7 billion. Notably, the Azure division experienced a remarkable 31% growth, with Microsoft attributing 7 percentage points of this growth to AI demand.

In other words, the impact of AI on its performance is even more pronounced than in the previous quarter, prompting Microsoft to focus its future strategies more on the anticipated benefits from Copilot, both in software and hardware.

This quarter, Amazon achieved a remarkable revenue milestone, surpassing USD 140 billion, a year-on-year increase of 17% that exceeded market expectations. Furthermore, its profit reached USD 10.4 billion, far exceeding the USD 3.2 billion recorded in the same period of 2023.

The double-digit growth in its advertising business and AWS (Amazon Web Services) drove this performance, with the latter particularly highlighted for its AI-related opportunities. AWS achieved a record-high operating profit margin of 37.6% this quarter, with annual revenue expected to exceed USD 100 billion, and short-term plans to invest USD 150 billion in expanding data centers.

On the other hand, Meta reported revenue of USD 36.46 billion this quarter, marking a significant year-on-year growth of 27%, the largest growth rate since 2021. Profit also doubled compared to the same period in 2023, reaching USD 12.37 billion.

Meta’s current strategy focuses on allocating resources to areas such as smart glasses and mixed reality (MR) in the short and medium term. The company continues to leverage AI to enhance the user value of the virtual world.

2. Increased Capital Expenditure to Develop AI is a Common Consensus, Yet Profitability Remains Under Market Scrutiny

The financial reports of the major cloud players show increased capital expenditure to solidify their commitment to AI development, a continuation of last quarter’s focus.

In the first quarter of 2024, Microsoft’s capital expenditure surged by nearly 80% compared to the same period in 2023, reaching USD 14 billion. Google expects its quarterly expenditure to remain above USD 12 billion. Similarly, Meta has raised its capital expenditure guidance for 2024 to a range of USD 35 billion to USD 40 billion.

Amazon, which regards its USD 14 billion expenditure in the first quarter as the quarterly minimum for the year, anticipates a significant increase in capital expenditure this year, exceeding the USD 48.4 billion spent in 2023. However, how these increased investments in AI will translate into profitability remains a subject of market scrutiny.
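For context, a quick arithmetic sketch of the figures above (illustrative only; it simply restates the reported numbers, treating the “nearly 80%” surge as exact):

```python
# Quick arithmetic on the capital-expenditure figures cited above (illustrative only).
amazon_q1_2024 = 14.0        # USD billion, stated as the quarterly minimum for 2024
amazon_2023_total = 48.4     # USD billion spent across 2023
print(f"Amazon 2024 capex floor: ~USD {amazon_q1_2024 * 4:.0f}B vs USD {amazon_2023_total}B in 2023")

microsoft_q1_2024 = 14.0     # USD billion
yoy_surge = 0.80             # "nearly 80%" year-on-year increase
print(f"Implied Microsoft Q1 2023 capex: ~USD {microsoft_q1_2024 / (1 + yoy_surge):.1f}B")
```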

While the major cloud players remain steadfast in their focus on AI, market expectations may have shifted. For instance, despite impressive financial reports last quarter, both Google and Microsoft saw their stock prices decline, in contrast to the significant increases this time. This could partly be interpreted as the market now expecting short- to medium-term returns on AI investments from products and services like Gemini and Copilot.

In contrast, Meta, whose financial performance was similarly impressive to that of the other cloud giants, experienced a post-earnings stock drop of over 15%. This may be attributed partly to its conservative financial outlook and partly to the less-than-ideal investment returns from its focus areas of virtual wearable devices and AI value-added services.

Because Meta has a relatively limited base of commercial end-user applications compared to the other three CSPs, its AI development efforts, such as the practical Llama 3 and the value-added Meta AI virtual assistant for its products, have not yet yielded significant benefits. While Llama 3 is free and open-source and Meta AI ships in only limited volumes, these efforts evidently do not yet justify the development costs.

Therefore, Meta still needs to expand its ecosystem to facilitate the promotion of its AI services, aiming to create a business model that can translate technology into tangible revenue streams.

For example, Meta recently opened up Horizon OS, the operating system of its Quest VR devices, to brands like Lenovo and Asus, allowing them to produce their own branded VR/MR devices. The primary goal is to attract developers to enrich the content library and thereby promote industry development.


2024-05-03

[News] NVIDIA Reportedly Fueling Samsung and SK Hynix Competition, Impacting HBM Pricing?

According to a May 2nd report from South Korean media outlet BusinessKorea, NVIDIA is reportedly fueling competition between Samsung Electronics and SK Hynix, possibly in an attempt to lower the prices of High Bandwidth Memory (HBM).

The May 2nd report cited sources indicating that prices of third-generation HBM3 DRAM have soared more than fivefold since 2023. For NVIDIA, such a significant increase in the price of a critical component like HBM is bound to affect research and development costs.

The BusinessKorea report thus accused NVIDIA of intentionally leaking information to play current and potential suppliers against each other, aiming to lower HBM prices. On April 25th, SK Group Chairman Chey Tae-won traveled to Silicon Valley to meet with NVIDIA CEO Jensen Huang, a visit potentially related to these strategies.

Although NVIDIA has been testing Samsung’s industry-leading 12-layer stacked HBM3e for over a month, it has yet to indicate a willingness to collaborate. BusinessKorea’s report has cited sources suggesting this is a strategic move aimed at motivating Samsung Electronics. Samsung only recently announced that it will commence mass production of 12-layer stacked HBM3e in the second quarter.

SK Hynix CEO Kwak Noh-Jung announced on May 2nd that the company’s HBM capacity for 2024 has already been fully sold out, and 2025’s capacity is also nearly sold out. He mentioned that samples of the 12-layer stacked HBM3e will be sent out in May, with mass production expected to begin in the third quarter.

Kwak Noh-Jung further pointed out that although AI is currently primarily centered around data centers, it is expected to rapidly expand to on-device AI applications in smartphones, PCs, cars, and other end devices in the future. Consequently, the demand for memory specialized for AI, characterized by “ultra-fast, high-capacity and low-power,” is expected to skyrocket.

Kwak Noh-Jung also noted that SK Hynix possesses industry-leading technological capabilities in product areas such as HBM, TSV-based high-capacity DRAM, and high-performance eSSD. Going forward, SK Hynix looks to provide globally top-tier memory solutions tailored to customers’ needs through strategic partnerships with global collaborators.


(Photo credit: SK Hynix)

Please note that this article cites information from BusinessKorea.

2024-05-03

[News] PSMC’s New Tongluo Plant Unveiled, CoWoS Packaging Ready to Roll

Powerchip Semiconductor Manufacturing Corporation (PSMC) held the inauguration ceremony for its new Tongluo plant on May 2nd. This 12-inch fab investment project, totaling over NTD 300 billion, has completed the installation of its initial equipment and commenced trial production. According to a report from Commercial Times, it will serve as PSMC’s primary platform for advancing process technology and pursuing orders from large international clients.

Additionally, PSMC has ventured into advanced CoWoS packaging, primarily producing Silicon Interposers, with mass production expected in the second half of the year and a monthly capacity of several thousand units.

Frank Huang, Chairman of PSMC, stated that construction of the new Tongluo plant began in March 2021. Despite challenges posed by the pandemic, the plant was completed and commenced operations after a three-year period.

As of now, the investment in this 12-inch fab project has exceeded NTD 80 billion, underscoring the significant time, technology, and financial requirements for establishing new semiconductor production capacity. Fortunately, the company made swift decisions and took action to build the plant. Otherwise, with the recent international inflation driving up costs of various raw materials, the construction costs of this new plant would undoubtedly be even higher.

The land area of PSMC’s Tongluo plant exceeds 110,000 square meters. The first phase of the newly completed plant comprises a cleanroom spanning 28,000 square meters, projected to house 12-inch wafer production lines for the 55nm, 40nm, and 28nm nodes with a monthly capacity of 50,000 wafers. As the business grows, the company can construct a second phase on the Tongluo site to continue advancing toward its 2Xnm technology.

Frank Huang indicated that the first 12-inch fab in Taiwan was established by the Powerchip group. To date, they have built eight 12-inch fabs and plan to construct four more in the future. Some of these fabs will adopt the “Fab IP” technology licensing model. For example, the collaboration with Tata Group in India operates under this model.

According to a previous report from TechNews, Frank Huang believes that IP transfer will also become one of the important sources of revenue in the future. “Up to 7-8 countries have approached PSMC,” including Vietnam, Thailand, India, Saudi Arabia, France, Poland, Lithuania, and others, showing interest in investing in fabs, indicating optimism for PSMC’s future Fab IP operating model.

PSMC’s Fab IP strategy, according to the same report, leverages its long-term accumulated experience in plant construction and semiconductor manufacturing technology to assist other countries, extending from Japan and India to countries in the Middle East and Europe, in building semiconductor plants while earning royalties for technology transfers.

Looking ahead to the second half of the year, Frank Huang indicated that the current issue lies in the less-than-stellar performance of the economies of the United States and China. While the United States is showing relatively better performance in AI and technology, China’s performance is not as strong.

Huang believes that after the fourth quarter of this year, there is a chance for accelerated deployment of AI application products such as smartphones, PCs, and notebooks. With the explosive demand brought about by AI, 2025 is expected to be a very good year for the semiconductor industry, and PSMC has already seized the opportunity.

In addition, PSMC mentioned that advanced CoWoS packaging has been in continuously tight supply since last year. In response to demand from global chip clients, the company has ventured into CoWoS-related businesses, primarily providing the Silicon Interposers needed for advanced CoWoS packaging. The offering is currently in the validation stage, with mass production expected to commence in the second half of the year at an initial monthly capacity of several thousand units.


(Photo credit: PSMC)

Please note that this article cites information from Commercial Times and TechNews.

