News
The surge in memory prices continues, driven by the AI wave revitalizing the memory market. According to a report from Liberty Times Net, prices of high-performance DRAM are also on the rise. Industry sources cited by the same report indicate that SK Hynix’s LPDDR5, LPDDR4, DDR5, and other DRAM products will see an across-the-board price hike of 15-20%.
A report from Chinese media outlet Wallstreetcn, citing industry sources, notes that SK Hynix’s DRAM prices have been rising steadily month by month since the fourth quarter of last year, with cumulative increases of roughly 60% to 100%. This upward trend in memory prices is expected to continue into the second half of the year.
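For context, steady month-over-month increases compound quickly. A minimal sketch (the monthly rates below are hypothetical illustrations, not figures from the report) shows how single-digit monthly hikes accumulate to the reported 60-100% range over roughly half a year:

```python
# Cumulative effect of compounding monthly price increases.
# Monthly rates here are hypothetical, chosen only to illustrate the math.
def cumulative_increase(monthly_rate: float, months: int) -> float:
    """Return the total fractional increase after compounding."""
    return (1 + monthly_rate) ** months - 1

# ~10% per month for 6 months already exceeds a 60% cumulative rise:
print(round(cumulative_increase(0.10, 6) * 100, 1))  # → 77.2
# ~12% per month for 6 months nearly doubles the price:
print(round(cumulative_increase(0.12, 6) * 100, 1))  # → 97.4
```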
On April 25th, SK Hynix announced its first-quarter financial results, with revenue soaring to KRW 12.42 trillion, a staggering 144.3% increase year over year. Operating profit reached KRW 2.88 trillion, far exceeding the market expectation of KRW 1.8 trillion and marking the second-highest first-quarter figure in the company’s history.
Contrasted with the loss of KRW 3.4 trillion in the same period last year, this performance represents a significant turnaround for SK Hynix, signaling a shift from a prolonged downturn to a broad recovery.
Looking ahead, SK Hynix expressed optimism, stating that the growing demand for memory driven by AI and the recovery of demand for general DRAM products starting from the second half of this year will contribute to a stable growth trend in the memory market for the rest of the year.
Industry sources cited by the report predict that as demand increases for high-end products like HBM, which consume more wafer capacity than general DRAM, the growing output of high-end products will correspondingly reduce the supply of general DRAM. Consequently, both suppliers and customers are expected to draw down their inventories.
In line with the trend of growing memory demand for AI applications, SK Hynix has decided to ramp up the production of its HBM3e products, which began global production in March this year, and expand its customer base. Additionally, the company plans to launch its fifth-generation 10-nanometer class (1b) 32Gb DDR5 DRAM products within this year, aiming to strengthen its market leadership in high-capacity DRAM products for servers.
(Photo credit: SK Hynix)
With the flourishing of AI applications, AI giants NVIDIA and AMD are fully committed to the high-performance computing (HPC) market. According to the Economic Daily News, the two have secured TSMC’s advanced CoWoS and SoIC packaging capacity through this year and the next, bolstering TSMC’s AI-related orders.
TSMC holds a highly positive outlook on the momentum brought by AI-related applications. During the April earnings call, CEO C.C. Wei extended the visibility of AI orders and their revenue contribution from the original expectation of 2027 to 2028.
TSMC anticipates that revenue contribution from server AI processors will more than double this year, accounting for a low-teens percentage of the company’s total revenue in 2024. It also expects a 50% compound annual growth rate for server AI processors over the next five years, with these processors projected to contribute over 20% to TSMC’s revenue by 2028.
Industry sources cited in the same Economic Daily News report indicate that strong AI demand has sparked fierce competition among the four global cloud service giants (Amazon AWS, Microsoft, Google, and Meta) to bolster their AI server arsenals, resulting in a supply shortage of AI chips from major manufacturers such as NVIDIA and AMD.
Consequently, these companies have heavily invested in TSMC’s advanced process and packaging capabilities to meet the substantial order demands from cloud service providers. TSMC’s advanced packaging capacity, including CoWoS and SoIC, for 2024 and 2025 has been fully booked.
To address the massive demand from customers, TSMC is actively expanding its advanced packaging capacity. Industry sources cited by the report estimate that by the end of this year, TSMC’s CoWoS monthly capacity could reach between 45,000 and 50,000 units, a significant increase from 15,000 units in 2023. By the end of 2025, CoWoS monthly capacity is expected to reach a new peak of 50,000 units.
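Taking the report’s estimates at face value, the implied expansion is easy to check with simple arithmetic (a sketch; the capacity figures are the report’s, not confirmed numbers):

```python
# Implied growth in TSMC's CoWoS monthly capacity, using the report's estimates.
capacity_2023 = 15_000             # units/month in 2023 (reported)
capacity_2024 = (45_000, 50_000)   # estimated range by end of 2024 (reported)

low = capacity_2024[0] / capacity_2023
high = capacity_2024[1] / capacity_2023
print(f"Implied expansion: {low:.1f}x to {high:.1f}x")  # → Implied expansion: 3.0x to 3.3x
```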
As for SoIC, monthly capacity is anticipated to reach five to six thousand units by the end of this year, a severalfold increase from roughly 2,000 units at the end of 2023. By the end of 2025, monthly capacity is expected to surge to around 10,000 units.
It is understood that NVIDIA’s mainstay H100 chip, currently in mass production, is built on TSMC’s 4-nanometer process and adopts CoWoS advanced packaging, integrating SK Hynix’s High Bandwidth Memory (HBM) in a 2.5D package.
As for NVIDIA’s next-generation Blackwell-architecture AI chips, including the B100, B200, and the GB200 with its Grace CPU, they are also built on TSMC’s 4-nanometer family, using the enhanced version of the process known as N4P. Production of the B100, per a previous report from TechNews, is slated to begin in the fourth quarter of this year, with mass production expected in the first half of next year.
They are also equipped with higher-capacity, newer-specification HBM3e high-bandwidth memory. Consequently, their computational capabilities will increase severalfold compared to the H100 series.
AMD’s MI300 series AI accelerators, on the other hand, are manufactured on TSMC’s 5-nanometer and 6-nanometer processes. Unlike NVIDIA, AMD uses TSMC’s SoIC advanced packaging to vertically integrate CPU and GPU dies before combining them with HBM via CoWoS packaging. Production therefore involves an additional advanced-packaging step for the SoIC process.
(Photo credit: TSMC)
According to a May 2nd report from South Korean media outlet BusinessKorea, NVIDIA is reportedly fueling competition between Samsung Electronics and SK Hynix, possibly in an attempt to lower the prices of High Bandwidth Memory (HBM).
The report cites sources indicating that prices of third-generation HBM3 DRAM have soared more than fivefold since 2023. For NVIDIA, such a steep increase in the price of a critical component is bound to inflate research and development costs.
The BusinessKorea report thus accuses NVIDIA of intentionally leaking information to pit current and potential suppliers against each other, aiming to lower HBM prices. On April 25th, SK Group Chairman Chey Tae-won traveled to Silicon Valley to meet with NVIDIA CEO Jensen Huang, a visit potentially related to these strategies.
Although NVIDIA has been testing Samsung’s industry-leading 12-layer stacked HBM3e for over a month, it has yet to signal a willingness to collaborate. Sources cited by BusinessKorea suggest this is a strategic move aimed at motivating Samsung Electronics, which only recently announced that it will commence mass production of 12-layer stacked HBM3e in the second quarter.
SK Hynix CEO Kwak Noh-Jung announced on May 2nd that the company’s HBM capacity for 2024 has already been fully sold out, and 2025’s capacity is also nearly sold out. He mentioned that samples of the 12-layer stacked HBM3e will be sent out in May, with mass production expected to begin in the third quarter.
Kwak Noh-Jung further pointed out that although AI is currently primarily centered around data centers, it is expected to rapidly expand to on-device AI applications in smartphones, PCs, cars, and other end devices in the future. Consequently, the demand for memory specialized for AI, characterized by “ultra-fast, high-capacity and low-power,” is expected to skyrocket.
Kwak Noh-Jung also noted that SK Hynix possesses industry-leading technological capabilities across product areas such as HBM, TSV-based high-capacity DRAM, and high-performance eSSD. Going forward, SK Hynix looks to provide globally top-tier memory solutions tailored to customers’ needs through strategic partnerships with global collaborators.
(Photo credit: SK Hynix)
Powerchip Semiconductor Manufacturing Corporation (PSMC) held the inauguration ceremony for its new Tongluo plant on May 2nd. This investment project, totaling over NTD 300 billion for a 12-inch fab, has completed the installation of its initial equipment and commenced trial production. According to a report from Commercial Times, it will serve as PSMC’s primary platform for advancing process technology and pursuing orders from large international clients.
Additionally, PSMC has ventured into advanced CoWoS packaging, primarily producing Silicon Interposers, with mass production expected in the second half of the year and a monthly capacity of several thousand units.
Frank Huang, Chairman of PSMC, stated that construction of the new Tongluo plant began in March 2021. Despite challenges posed by the pandemic, the plant was completed and commenced operations after a three-year period.
To date, investment in this 12-inch fab project has exceeded NTD 80 billion, underscoring the significant time, technology, and capital required to establish new semiconductor capacity. Fortunately, the company decided and acted swiftly in building the plant; otherwise, with recent international inflation driving up raw-material costs, construction would undoubtedly have been even more expensive.
PSMC’s Tongluo site exceeds 110,000 square meters. The first phase of the newly completed plant comprises a 28,000-square-meter cleanroom, projected to house 12-inch wafer production lines for the 55nm, 40nm, and 28nm nodes with a monthly capacity of 50,000 wafers. As the business grows, the company can construct a second-phase plant on the Tongluo site to continue advancing toward 2X-nanometer technology.
Frank Huang indicated that the first 12-inch fab in Taiwan was established by the Powerchip group. To date, they have built eight 12-inch fabs and plan to construct four more in the future. Some of these fabs will adopt the “Fab IP” technology licensing model. For example, the collaboration with Tata Group in India operates under this model.
According to a previous report from TechNews, Frank Huang believes that IP transfer will also become one of the important sources of revenue in the future. “Up to 7-8 countries have approached PSMC,” including Vietnam, Thailand, India, Saudi Arabia, France, Poland, Lithuania, and others, showing interest in investing in fabs, indicating optimism for PSMC’s future Fab IP operating model.
PSMC’s Fab IP strategy, according to the same report, leverages its long-term accumulated experience in plant construction and semiconductor manufacturing technology to assist other countries, extending from Japan and India to countries in the Middle East and Europe, in building semiconductor plants while earning royalties for technology transfers.
Looking ahead to the second half of the year, Frank Huang indicated that the current issue lies in the lackluster performance of the U.S. and Chinese economies. While the United States is performing relatively better in AI and technology, China’s performance remains weaker.
Huang believes that after the fourth quarter of this year, there is a chance for accelerated deployment of AI application products such as smartphones, PCs, and notebooks. With the explosive demand brought about by AI, 2025 is expected to be a very good year for the semiconductor industry, and PSMC has already seized the opportunity.
In addition, PSMC also mentioned that since last year, there has been a continuous tight supply of advanced CoWoS packaging. In response to the demands of global chip clients, the company has also ventured into CoWoS-related businesses, primarily providing the Silicon Interposer needed for advanced CoWoS packaging. Currently in the validation stage, mass production is expected to commence in the second half of the year, with an initial monthly capacity of several thousand units.
(Photo credit: PSMC)
Tesla’s journey toward autonomous driving demands substantial computational power. Earlier today, TSMC confirmed the commencement of production for Tesla’s next-generation Dojo supercomputer training chips, heralding a significant leap in computing power by 2027.
As per a report from TechNews, Elon Musk’s plan reportedly treats software as the true key to profitability. Achieving this requires not only efficient algorithms but also robust computing power. On the hardware side, Tesla takes a dual approach: procuring tens of thousands of NVIDIA H100 GPUs while developing its own supercomputing chip, Dojo, whose production TSMC has now reportedly begun.
As previously reported by TechNews, Tesla focuses primarily on autonomous driving and has introduced two AI chips to date: the Full Self-Driving (FSD) chip and the Dojo D1 chip. The FSD chip is used in Tesla vehicles’ autonomous driving systems, while the D1 chip serves as the building block of Tesla’s AI training hardware, powering the Dojo supercomputer.
At the North American technology conference, TSMC provided detailed insights into semiconductor technology and advanced packaging, enabling the establishment of system-level integration on the entire wafer and creating ultra-high computing performance. TSMC stated that the next-generation Dojo training modules for Tesla have begun production. By 2027, TSMC is expected to offer more complex wafer-scale systems with computing power exceeding current systems by over 40 times.
The core of Tesla’s Dojo supercomputer lies in its training modules, each arranging 25 D1 chips in a 5×5 matrix. Manufactured on a 7-nanometer process, each chip packs 50 billion transistors and delivers 362 TFLOPs of processing power. Crucially, the design is scalable, allowing computing power and power consumption to be stacked and adjusted according to software demands.
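Based on the figures above, the aggregate throughput of one training module is straightforward to estimate. A minimal sketch, assuming ideal linear scaling across the 25 chips (real-world utilization would be lower):

```python
# Aggregate throughput of one Dojo training module (5x5 grid of D1 chips),
# assuming ideal linear scaling of the per-chip figure quoted above.
chips_per_module = 5 * 5      # 25 D1 chips per training module
tflops_per_chip = 362         # TFLOPs per D1 chip (from the article)

module_pflops = chips_per_module * tflops_per_chip / 1000
print(module_pflops)          # → 9.05 PFLOPs per module
```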
Wafer Integration Offers 40x Computing Power
According to TSMC, the approach for Tesla’s new product differs from the wafer-scale systems provided to Cerebras. Essentially, the Dojo training modules (a 5×5 grid of pretested processors) are placed on a single carrier wafer, with all empty spaces filled in. Subsequently, TSMC’s integrated fan-out (InFO) technology is utilized to apply a layer of high-density interconnects. This process significantly enhances the inter-chip data bandwidth, enabling them to function like a single large chip.
By 2027, TSMC expects full wafer-scale integration to offer 40 times the computing power, comprising more than 40 reticles’ worth of silicon and accommodating over 60 HBM stacks.
As per Musk, if NVIDIA provides enough GPUs, Tesla probably won’t need to develop Dojo on its own. Preliminary estimates suggest that this batch of next-generation Dojo supercomputers will become part of Tesla’s new Dojo cluster, located in New York, with an investment of at least USD 500 million.
Despite substantial investments in computing power, Tesla’s AI business still faces challenges. In December last year, two senior engineers responsible for the Dojo project left the company. Tesla is now downsizing to cut costs even as it needs talented engineers to ensure the timely launch of its self-driving taxis and to enhance the Full Self-Driving (FSD) system.
Tesla’s next-generation Dojo computer will be located in New York, while its Gigafactory headquarters in Texas will house a 100 MW data center for training self-driving software, built on NVIDIA hardware. Regardless of location, these chips ultimately come from TSMC’s production lines; calling TSMC an AI enabler is no exaggeration.
(Photo credit: TSMC)