News
In mid-August, Samsung was reported to be accelerating its progress on next-gen HBM, aiming to tape out HBM4 by the end of this year. SK hynix, however, appears to have maintained its competitive edge, as the company aims to tape out HBM4 in October, which will be used to power NVIDIA’s Rubin AI chips, according to reports by Wccftech and ZDNet.
In addition, the reports note that SK hynix also plans to tape out HBM4 for AMD’s AI chips, which is expected to take place a few months later.
To further prepare for the strong demand from AI chip giants’ upcoming product launches, SK hynix is assembling development teams to supply HBM4 to NVIDIA and AMD, according to Wccftech and ZDNet.
Per SK hynix’s product roadmap, the company plans to launch 12-layer stacked HBM4 in the second half of 2025 and a 16-layer version in 2026. NVIDIA’s Rubin series, planned for 2026, is expected to adopt HBM4 12Hi with 8 clusters per GPU.
SK hynix is the major HBM3e supplier for NVIDIA’s AI chips, as the memory giant took the lead by starting to ship the product a few months ago, followed by Micron. Samsung’s HBM3, on the other hand, was cleared by NVIDIA in July, while its HBM3e is still striving to pass NVIDIA’s qualification.
According to the reports, the introduction of HBM4 represents another major milestone for SK hynix, as it offers the fastest DRAM with exceptional power efficiency and higher bandwidth.
HBM4 will feature double the channel width of HBM3e, offering 2048 bits versus 1024 bits. Moreover, it supports stacking 16 DRAM dies, up from 12 in HBM3e, with options for 24Gb and 32Gb layers. This advancement will enable a capacity of 64GB per stack, compared to 32GB with HBM3e, the reports suggest.
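For reference, the 64GB figure follows directly from the reported die options; as a back-of-envelope check, assuming the top configuration stacks sixteen 32Gb dies:

\[
16 \times 32\,\mathrm{Gb} = 512\,\mathrm{Gb} = 64\,\mathrm{GB\ per\ stack}
\]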
On August 19, SK hynix showcased its ambition to secure its leadership in HBM, claiming that the company is developing a product with performance up to 30 times higher than current HBM.
(Photo credit: SK hynix)
News
According to a report from the Commercial Times, SK hynix is expected to announce a plan for closer collaboration with TSMC and NVIDIA during the Semicon Taiwan exhibition in September, which is likely to focus on the development of next-generation HBM. This partnership is expected to further strengthen their leadership in the supply of critical components for AI servers.
Semicon Taiwan will be held from September 4 to 6, and sources cited by the same report indicate that SK hynix President Justin Kim will attend the event and deliver a keynote speech for the first time.
Upon arriving in Taiwan, Justin Kim is expected to meet with TSMC executives. The report, citing rumors, suggests that NVIDIA CEO Jensen Huang might also join the meeting, further strengthening the alliance among the tech giants.
The core of this collaboration will revolve around HBM technology. In the past, SK hynix used its own processes to manufacture base dies up to HBM3e (the fifth-generation HBM).
However, industry sources cited by the report reveal that SK hynix will adopt TSMC’s logic process to manufacture the base die starting from HBM4, which would allow the memory giant to customize products for its clients in terms of performance and efficiency.
Industry sources cited by the report also indicate that SK hynix and TSMC have agreed to collaborate on the development and production of HBM4, scheduled for mass production in 2026.
This collaboration will reportedly involve manufacturing HBM4 interface chips using 12FFC+ (12nm class) and 5nm processes to achieve smaller interconnect spacing and enhance memory performance for AI and high-performance computing (HPC) processors.
Per SK hynix’s product roadmap, the company plans to launch a 12-layer stacked HBM4 in the second half of 2025 and a 16-layer version in 2026. TSMC, on the other hand, is also working to strengthen and expand its CoWoS-L and CoWoS-R packaging capacity to support the large-scale production of HBM4.
SK hynix has been the major supplier of HBM for NVIDIA’s AI GPUs, and the upcoming Rubin series, planned for 2026, is expected to adopt HBM4 12Hi with 8 clusters per GPU. The partnership between SK hynix, TSMC, and NVIDIA is therefore expected to expand their influence and widen the gap with Samsung.
(Photo credit: SK hynix)
News
China’s GPU company Lisuan Technology, based in Shanghai, has averted a bankruptcy crisis, as it has secured around 328 million yuan (nearly USD 46 million) in financing from domestic NAND/DRAM manufacturer Dosilicon and others, according to a report by Chinese media outlet Sina.
On August 20, Dosilicon announced that it plans to invest 200 million yuan of its own funds to increase its stake in Lisuan Tech. By subscribing to an additional 5 million yuan of Lisuan’s newly registered capital, the memory company will hold approximately 37.88% of Lisuan’s equity.
Meanwhile, the report notes that other investors plan to inject a total of 128 million yuan into Lisuan, subscribing to a total of 3.2 million yuan of its newly added registered capital. In total, Lisuan Tech has received 328 million yuan in financing from Dosilicon and others.
Regarding the reasons behind the investment, the report indicates that there is a certain level of synergy between Dosilicon and its target company, Lisuan Tech. As Dosilicon has already established a portfolio of both standard and niche DRAM products, its R&D team can further engage in technical collaboration with the graphics rendering chip design team at Lisuan to enhance the design capabilities of both parties.
The report, citing public information, states that Lisuan Tech, with 20 years of experience in GPU development and design, is one of the few domestic companies in China capable of providing customized high-performance GPU solutions.
The firm’s first 6nm GPU, based on its self-developed ‘Pangu’ architecture, is ready for tape-out, the report suggests. The product is even claimed to offer performance on par with NVIDIA’s high-end graphics cards.
However, due to delays in securing financing, the company had fallen into difficulties, with rumors circulating that it was facing bankruptcy.
According to the data cited by the report, in 2023, Lisuan had no revenue and a net loss of 145 million yuan. In the first half of 2024, it reported no revenue and a net loss of 97.9 million yuan. The bulk of the losses was said to stem from R&D investments.
(Photo credit: Lisuan Tech)
News
On August 19, AMD announced its acquisition of ZT Systems, a cloud computing and AI data center equipment designer, for a total value of USD 4.9 billion. This move is intended to strengthen AMD’s AI computing capabilities, while hinting at its challenge to NVIDIA’s dominance in the AI market.
AMD plans to complete the acquisition in the first half of next year. Industry sources cited in a report from Economic Daily News interpret this move as AMD’s strategy to expand its AI chip market reach, extending its influence from chip design to system integration in the AI sector.
AMD CEO Lisa Su stated in an interview that ZT Systems generates over USD 10 billion in annual revenue, nearly half of AMD’s reported USD 22.7 billion revenue last year.
However, AMD plans to sell off ZT Systems’ manufacturing business after the acquisition is completed, while retaining its system design business. Per a report from Reuters, Su further explained that the decision was made because AMD has no plans to compete with companies like Super Micro Computer.
“Our acquisition of ZT Systems is the next major step in our long-term AI strategy to deliver leadership training and inferencing solutions that can be rapidly deployed at scale across cloud and enterprise customers,” said Lisa Su.
AMD will be able to offer a broader range of chips, software, and system designs to large data center clients like Microsoft and Meta after acquiring ZT Systems.
AMD also noted that once the acquisition is finalized, ZT Systems CEO Frank Zhang will remain in his position. In the statement, Frank Zhang expressed that joining AMD will help ZT Systems play a larger role in designing AI infrastructure that defines the future of computing.
Regarding concerns about the potential impact of AMD’s acquisition of ZT Systems on NVIDIA chip supplies, Inventec, one of ZT Systems’ major shareholders, has offered reassurance that existing orders for H100, H200, and GB200 chips will remain unaffected. Current collaboration projects will continue as planned, and the customer base will not change.
Inventec originally partnered with ZT Systems to focus on contract manufacturing for NVIDIA’s Blackwell servers.
Through its facility in Mexico, Inventec was responsible for assembling the GB200 server motherboards, while ZT Systems handled further assembly, testing, and complete system integration. This collaboration enabled them to secure orders from the four major cloud service providers in North America: Google, Microsoft, Amazon, and Meta.
ZT Systems, founded in 1994 and headquartered in Secaucus, New Jersey, is a privately held company which specializes in designing and manufacturing servers, server racks, and other infrastructure that houses and connects chips for massive data centers, powering AI systems like ChatGPT.
(Photo credit: AMD)
News
As global competition heats up in the AI sector, an emerging player has now joined the battlefield. Ola, an automotive manufacturer in India, plans to launch the country’s first in-house AI chip, based on the ARM architecture, by 2026, according to a report by Wccftech.
Though more details are yet to be revealed, the report notes that Ola did highlight its key chip offerings, featuring the Bodhi series, which would be the nation’s first self-developed AI chips. The company’s product lineup also reportedly includes the Sarv-1 cloud-native CPUs and the Ojas edge AI chip.
When asked about the potential foundry partners in the future, Ola’s CEO Bhavish Aggarwal mentioned that the company plans to collaborate with a global tier I or II foundry, likely TSMC or Samsung, according to the report.
Ola’s AI lineup is expected to start with the Bodhi-1 AI chip, which is specifically designed for large language models (LLMs) with a focus on inferencing workloads, Wccftech suggests. Positioned as a low-to-mid-tier offering from Ola, the chip is expected to launch by 2026, followed by a more powerful successor, the Bodhi-2, slated for release in 2028.
According to Wccftech, it is worth noting that Ola also introduced an edge AI chip named Ojas, which is likely to be integrated into Ola’s next-generation electric vehicles. In addition, the Sarv-1, specifically designed for cloud computing, is expected to feature ARM Neoverse N3 cores, though this hasn’t been confirmed yet, the report states.
As the world’s fifth largest economy, India has been relatively slow in developing its own AI chips. China, the world’s largest developing country, by contrast, has a considerably longer history of developing in-house AI chips.
Chinese tech giant Huawei is said to be testing its latest processor, the “Ascend 910C,” with internet companies and telecom operators recently. Reportedly, the company has informed potential customers that this new chip is comparable to NVIDIA’s H100 GPU, which cannot be directly sold in China.
On the other hand, Baidu’s foray into AI chips can be traced back to as early as 2011. After seven years of development, Baidu officially unveiled its self-developed AI chip, Kunlun 1, in 2018. T-Head, owned by Alibaba, introduced its first high-performance AI inference chip, the HanGuang 800, in September 2019.
(Photo credit: Krutrim)