News
While introducing the industry’s first 48GB 16-high HBM3E at the SK AI Summit in Seoul today, South Korean memory giant SK hynix is reportedly seeing strong demand for its next-generation HBM. According to Reuters and South Korean media outlet ZDNet, NVIDIA CEO Jensen Huang has asked SK hynix to bring forward the supply of HBM4 by six months.
The information was disclosed by SK Group Chairman Chey Tae-won earlier today at the SK AI Summit, the reports noted. In October, the company had said it planned to deliver the chips to customers in the second half of 2025.
When asked by ZDNet about HBM4’s accelerated timetable, SK hynix President Kwak Noh-Jung responded by saying “We will give it a try.”
A spokesperson for SK hynix cited by Reuters noted that this new timeline is quicker than their original target, but did not provide additional details.
According to ZDNet, NVIDIA CEO Jensen Huang also made his appearance in a video interview at the Summit, stating that by collaborating with SK hynix, NVIDIA has been able to achieve progress beyond Moore’s Law, and the company will continue to need more of SK hynix’s HBM in the future.
According to the third-quarter financial report released by SK hynix in late October, the company posted record-breaking figures, including revenues of 17.5731 trillion won, an operating profit of 7.03 trillion won (with an operating margin of 40%), and a net profit of 5.7534 trillion won (with a net margin of 33%) for the third quarter of this year.
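The reported margins follow directly from the won figures above. As a quick illustrative sanity check (not from the article itself), the arithmetic works out as follows:

```python
# Sanity-check the reported margins from SK hynix's Q3 figures.
# Figures are in trillions of won, as cited in the article.
revenue = 17.5731
operating_profit = 7.03
net_profit = 5.7534

operating_margin = operating_profit / revenue * 100  # about 40.0%
net_margin = net_profit / revenue * 100              # about 32.7%, reported as 33%

print(f"Operating margin: {operating_margin:.1f}%")
print(f"Net margin: {net_margin:.1f}%")
```

Both percentages round to the figures given in the earnings report.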
In particular, HBM sales showed excellent growth, up more than 70% from the previous quarter and more than 330% from the same period last year.
SK hynix is indeed making strides in HBM: it started mass production of the world’s first 12-layer, 36GB HBM3E product in September. The company has also been developing 48GB 16-high HBM3E in a bid to secure technological stability and plans to provide samples to customers early next year, according to its press release.
Meanwhile, according to another report by Business Korea, Kim Jae-jun, Vice President of the Memory Business Division, stated in the earnings call that the company is mass-producing and selling both HBM3E 8-stack and 12-stack products, and has completed key stages of the quality testing process for a major customer. Though Kim did not specify the customer’s identity, industry analysts suggest it is likely NVIDIA.
To narrow the technology gap with SK hynix, Samsung is reportedly planning to produce next-generation HBM4 products in the latter half of next year.
(Photo credit: NVIDIA)
The latest quarterly reports from the big four cloud service providers (CSPs) have been released in succession. According to a report from Commercial Times, not only has there been significant revenue growth, but capital expenditures for these CSPs have also surged compared to the same period last year, underscoring the ongoing momentum in AI investments.
Industry sources cited by Commercial Times estimate that capital expenditures by CSPs will surpass USD 240 billion in 2025, reflecting an annual increase of over 10%.
The report indicated that the increase in capital expenditures by CSPs is expected to boost demand for Taiwanese companies in the supply chain during the fourth quarter of this year and into next year, benefiting companies such as Quanta, Wistron, Wiwynn, and Inventec.
According to the report, Microsoft’s capital expenditures for the first quarter of fiscal year 2025 (the third quarter of 2024) reached USD 20 billion, up from USD 19 billion in the previous quarter and a 78% increase year-on-year. Microsoft noted that demand for AI now exceeds available capacity, and the company plans to continue increasing investment, expanding data center construction, and promoting AI services.
The report indicated that the market estimates Microsoft’s total expenditures for fiscal year 2025 will reach USD 80 billion, an increase of over USD 30 billion compared to the previous year.
Google’s capital expenditures in the third quarter reached USD 13.1 billion, an annual increase of 62%. The report estimates that Google’s total capital expenditures in 2024 will reach USD 51.4 billion, an annual increase of 59%, and that spending will continue to rise next year.
Amazon’s capital expenditures for the third quarter reached USD 22.62 billion, reflecting an 81% year-on-year increase. This year, Amazon’s total capital expenditures have reached USD 51.9 billion, and full-year investments are projected to be as high as USD 75 billion. Furthermore, capital expenditures for next year are expected to be even higher, as the report indicated.
As for Meta, the report noted that capital expenditures in the third quarter were USD 9.2 billion, an annual increase of 36%. Moreover, Meta raised its capital expenditure forecast for fiscal 2024 to USD 40 billion, and the report indicated that its capital expenditures will continue to grow in 2025.
The report highlighted that AI business opportunities will continue to benefit Taiwan’s major server ODMs. Companies such as Quanta, Wistron, Wiwynn, Inventec, and Foxconn all reported strong results in the third quarter and are optimistic about the fourth quarter and the year ahead.
According to the report, Quanta’s third-quarter revenue reached a record high, driven by strong demand for AI server orders. Quanta Chairman Barry Lam also expressed an optimistic outlook on the future of AI, noting that as large-scale CSPs develop generative AI applications, the scale of AI data centers is continually expanding, leading to a substantial increase in orders.
After demonstrating strong growth momentum in the first half of the year, Wistron has benefited from urgent orders in the second half. Additionally, some B200 series products utilizing the next-generation Blackwell platform are scheduled to be shipped after the fourth quarter. The report indicated that Wistron is quite optimistic about its performance for this quarter and next year.
Inventec plans to ship servers to customers primarily from US-based CSPs in the second half of the year. The report highlighted that orders from Google have increased as the company expands its purchase of AI servers based on its own TPU architecture, in addition to acquiring general-purpose servers for new platforms.
(Photo credit: Microsoft)
South Korean memory giant SK hynix introduced the industry’s first 48GB 16-high HBM3E at the SK AI Summit in Seoul today, the world’s highest HBM layer count, following its 12-high product, according to its press release.
Though the market for 16-high HBM is expected to open up from the HBM4 generation, SK hynix has been developing 48GB 16-high HBM3E in a bid to secure technological stability and plans to provide samples to customers early next year, SK hynix CEO Kwak Noh-Jung noted in the press release.
In late September, SK hynix announced that it has begun mass production of the world’s first 12-layer HBM3E product with 36GB.
Kwak explained that SK hynix expects to apply the Advanced MR-MUF process, which enabled the mass production of its 12-high products, to produce 16-high HBM3E, while also developing hybrid bonding technology as a backup.
According to Kwak, SK hynix’s 16-high products deliver an 18% improvement in training performance and a 32% improvement in inference performance compared with its 12-high products.
Kwak Noh-Jung made the introduction of SK hynix’s 16-high HBM3E during his keynote speech at SK AI Summit today, titled “A New Journey in Next-Generation AI Memory: Beyond Hardware to Daily Life.” He also shared the company’s vision to become a “Full Stack AI Memory Provider”, or a provider with a full lineup of AI memory products in both DRAM and NAND spaces, through close collaboration with interested parties, the press release notes.
It is worth noting that SK hynix highlighted its plan to adopt a logic process for the base die from the HBM4 generation onward, through collaboration with a top global logic foundry, to provide customers with the best products.
A previous press release in April noted that SK hynix had signed a memorandum of understanding with TSMC to collaborate on producing next-generation HBM and to enhance logic and HBM integration through advanced packaging technology. Through this initiative, the company plans to proceed with the development of HBM4, the sixth generation of the HBM family, slated for mass production from 2026.
To further expand its product roadmap, the memory giant is developing LPCAMM2 modules for PCs and data centers, as well as 1cnm-based LPDDR5 and LPDDR6, taking full advantage of its competitiveness in low-power, high-performance products, according to the press release.
The company is also readying PCIe Gen6 SSDs, high-capacity QLC-based eSSDs, and UFS 5.0.
As powering AI systems requires a sharp increase in the capacity of memory installed in servers, SK hynix revealed in the press release that it is preparing CXL Fabrics, which enables high capacity by connecting various memories, while also developing ultra-high-capacity eSSDs that store more data in a smaller space at lower power.
SK hynix is also developing technology that adds computational functions to memory to overcome the so-called memory wall. Technologies such as Processing Near Memory (PNM), Processing in Memory (PIM), and Computational Storage, essential for processing the enormous amounts of data expected in the future, will transform the structure of next-generation AI systems and the future of the AI industry, according to the press release.
(Photo credit: SK hynix)
Driven by booming demand for AI chips, TSMC’s advanced CoWoS (Chip on Wafer on Substrate) packaging faces a significant supply shortage. In response, TSMC is expanding its production capacity and is considering price increases to maintain supply chain stability.
According to a recent report from Morgan Stanley cited by Commercial Times, TSMC has received approval from NVIDIA to raise prices next year, with CoWoS packaging expected to increase by 10% to 20%, depending on capacity expansion.
At TSMC’s Q3 earnings call, Chairman C.C. Wei highlighted that customer demand for CoWoS far outstrips supply. Despite TSMC’s plan to more than double CoWoS capacity in 2024 compared to 2023, supply constraints persist.
To meet demand, TSMC is collaborating closely with packaging and testing firms to expand CoWoS capacity. Industry sources quoted by CNA reveal that ASE Group and SPIL are working with TSMC on the back-end oS (on-Substrate) stage of the CoWoS process. By 2025, ASE may handle 40-50% of TSMC’s outsourced CoWoS oS packaging.
ASE announced investments in advanced packaging, covering CoWoS front-end (Chip on Wafer) and oS processes, along with advanced testing.
SPIL, a subsidiary of ASE, recently invested NT$419 million in land at Central Taiwan Science Park’s Erlin Park, boosting CoWoS capacity. Additionally, SPIL has allocated NT$3.702 billion to acquire property from Ming Hwei Energy in Douliu, Yunlin, for further expansion.
ASE also announced in early October that its new Kaohsiung K28 facility, slated for completion in 2026, will expand CoWoS capacity.
In early October, TSMC announced a partnership with Amkor in Arizona to expand InFO and CoWoS packaging capabilities. Industry sources cited by CNA suggest that Apple, a user of TSMC’s U.S.-based 4nm process for application processors, may leverage Amkor’s CoWoS capacity. Other U.S.-based AI clients utilizing TSMC’s advanced nodes for ASICs and GPUs are also expected to consider Amkor’s CoWoS packaging in the future.
(Photo credit: TSMC)
Ahead of the upcoming presidential election and shortly after the TSMC-Huawei investigation, U.S.-based GlobalFoundries, the world’s third-largest foundry, has been fined USD 500,000 by the U.S. government for unauthorized shipments of chips to SJ Semiconductor, an affiliate of blacklisted Chinese chipmaker SMIC, according to a report by Reuters.
According to a press release by the Bureau of Industry and Security (BIS), GlobalFoundries made 74 shipments, valued at USD 17.1 million, to SJ Semiconductor, an SMIC affiliate, without obtaining the required license. “We want U.S. companies to be hypervigilant when sending semiconductor materials to Chinese parties,” said Assistant Secretary for Export Enforcement Matthew S. Axelrod in the press release.
The report by Reuters notes that both SMIC and SJ Semiconductor were placed on the Entity List, which is a trade restriction list, in 2020 due to SMIC’s alleged ties to China’s military-industrial complex. SMIC has denied any wrongdoing, according to Reuters.
According to BIS’ explanation, GlobalFoundries understood that shipments of items subject to the Export Administration Regulations (EAR) to SJS required a BIS license. Although SJS was not a direct customer of GlobalFoundries, it was the designated third-party outsource assembly and test service provider (OSAT) for one of GlobalFoundries’ customers, which meant it should have been screened by GlobalFoundries’ transaction screening system.
Citing a statement by GlobalFoundries, the report by Reuters states that the foundry giant expressed regret for “the inadvertent action, which was the result of a data-entry error made prior to the entity listing,” leading to the unintentional shipment of legacy chips without a license.
It is worth noting that Washington appears to be gradually tightening its regulations to prevent China from accessing leading-edge semiconductors. Just weeks ago, foundry leader TSMC found itself at the center of a storm when its chips were found in Huawei’s Ascend 910B. The Taiwan-headquartered foundry giant reportedly notified the U.S. afterwards about chips fabricated for a potential Huawei proxy, Sophgo, and halted supply.
According to Reuters, this development also coincides with GlobalFoundries set to receive approximately USD 1.5 billion from the Commerce Department to establish a new semiconductor manufacturing facility in Malta, New York, and to expand its existing operations in Burlington, Vermont.
(Photo credit: GlobalFoundries)