News
Global Unichip Corp. (GUC), a leading provider of advanced ASIC solutions, announced that its 3nm HBM3E Controller and PHY IP have been adopted by a major cloud service provider and several high-performance computing (HPC) companies. The cutting-edge ASIC is expected to tape out this year, featuring the latest 9.2Gbps HBM3E memory technology.
In the same announcement, GUC highlighted its active collaboration with HBM suppliers like Micron, stating it is developing HBM4 IP for next-generation AI ASICs.
GUC noted that its joint efforts with Micron have demonstrated the ability of GUC’s HBM3E IP to achieve 9.2Gbps with Micron’s HBM3E on both CoWoS-S and CoWoS-R technologies. Test chip results from GUC show successful power integrity (PI) and signal integrity (SI) outcomes, with excellent eye margins across temperature and voltage variations at these speeds.
Moreover, when GUC’s HBM3E IP is integrated with Micron’s HBM3E timing parameters, it improves effective bus utilization, further boosting overall system performance.
“We are thrilled to see our HBM3E Controller and PHY IP being integrated in CSP and HPC ASICs,” said Aditya Raina, CMO of GUC. “This adoption underscores the robustness and advantages of our HBM3E solution, which is silicon-proven and validated across multiple advanced technologies and major vendors. We look forward to continuing our support for various applications, including AI, high-performance computing, networking, and automotive.”
“Memory is an integral part of AI servers and foundational to the performance and advancement of data center systems,” said Girish Cherussery, senior director of Micron’s AI Solutions Group. “Micron’s best-in-class memory speeds and energy efficiency greatly benefit the increasing demands of Generative AI workloads, such as large language models like ChatGPT, sustaining the pace of AI growth.”
(Photo credit: GUC)
News
No eternal allies, no perpetual enemies. The old adage rings true in the semiconductor industry, as the world’s top foundry, TSMC, has announced a collaboration on HBM4 development with its rival Samsung, the second-largest foundry globally, according to reports by Korean media outlets the Korea Economic Daily and Business Korea. Analysts cited by the Korea Economic Daily say it would mark the two companies’ first partnership in the AI chip sector.
Citing remarks made at SEMICON Taiwan by Lee Jung-bae, Head of the Memory Business Division at Samsung Electronics, the reports note that in order to advance in HBM, Samsung is preparing over 20 customized solutions in collaboration with various foundry partners. However, Lee declined to comment on which specific foundries Samsung is partnering with.
The answer was revealed on September 5th, when Dan Kochpatcharin, Head of Ecosystem and Alliance Management at TSMC, confirmed that the two companies are working together on developing a buffer-less HBM4 chip.
According to Business Korea, buffer-less HBM is a product that eliminates the buffer used to prevent electrical issues and manage voltage distribution, which Samsung aims to introduce with HBM4. The innovation is expected to enhance power efficiency by 40% and reduce latency by 10% compared to existing models.
The reports note that Samsung’s main consideration in teaming up with TSMC is the need to incorporate customized features requested by major clients such as NVIDIA and Google.
Although Samsung can offer a full range of HBM4 services, including memory production, foundry, and advanced packaging, the company aims to utilize TSMC’s technology to attract more clients, according to sources cited by the reports.
The manufacturing process for HBM4 differs from previous generations: the logic die, the component that functions as the brain of an HBM chip, may now be produced by foundry companies rather than memory manufacturers.
Earlier in April, SK hynix, the current HBM leader and Samsung’s biggest memory rival, announced a partnership with TSMC on HBM4 development and next-generation packaging technology.
Though months behind SK hynix and Micron, Samsung has reportedly started shipping its 8-layer HBM3e to NVIDIA. The company aims to gain a competitive edge over its rivals with HBM4, targeting mass production by late 2025.
(Photo credit: Samsung)
News
At SEMICON Taiwan 2024, Samsung’s Head of Memory Business, Jung Bae Lee, stated that as the industry enters the HBM4 era, collaboration between memory makers, foundries, and customers is becoming increasingly crucial.
Reportedly, Samsung is prepared with turnkey solutions while maintaining flexibility, allowing customers to design their own base die (foundation die) and not restricting production to Samsung’s foundries.
As per Anue, Samsung will actively collaborate with others, with speculation suggesting this may involve outsourcing orders to TSMC.
Citing sources, Anue reported that SK hynix has signed a memorandum of understanding with TSMC in response to changes in the HBM4 architecture. TSMC will handle the production of SK hynix’s base die using its 12nm process.
This move helps SK hynix maintain its leadership while also ensuring a close relationship with NVIDIA.
Jung Bae Lee further noted that in the AI era, memory faces the dual challenges of high performance and low energy consumption, such as increasing I/O counts and faster transmission speeds. One solution is to outsource the base die to foundries using logic processes, then integrate it with memory through Through-Silicon Via (TSV) technology to create customized HBM.
Lee anticipates that this shift will occur after HBM4, signifying increasingly close collaboration between memory makers, foundries, and customers. With Samsung’s expertise in both memory and foundry services, the company is prepared with turnkey solutions, offering customers end-to-end production services.
Still, Jung Bae Lee emphasized that Samsung’s memory division has also developed an IP solution for the base die, enabling customers to design their own chips. Samsung is committed to providing flexible foundry services, with future collaborations not limited to Samsung’s foundries, and plans to actively partner with others to drive industry transformation.
Reportedly, Samsung is optimistic about the HBM market, projecting it to reach 1.6 billion Gb this year—double the combined figure from 2016 to 2023—highlighting HBM’s explosive growth.
Addressing the matter, TrendForce further notes that for the HBM4-generation base die, SK hynix plans to use TSMC’s 12nm and 5nm foundry services. Meanwhile, Samsung will employ its own 4nm foundry, and Micron is expected to produce in-house using a planar process. These plans are largely finalized.
For the HBM4e generation, TrendForce anticipates that both Samsung and Micron will be more inclined to outsource the production of their base dies to TSMC. This shift is primarily driven by the need to boost chip performance and support custom designs, making further process miniaturization more critical.
Moreover, the increased integration of CoWoS packaging with HBM further strengthens TSMC’s position as it is the main provider of CoWoS services.
(Photo credit: TechNews)
News
Jun He, Vice President of Advanced Packaging Technology and Service at TSMC, stated that 3D IC is a crucial method for integrating AI chip memory with logic chips.
According to a report from TechNews, for the development of 2.5D CoWoS advanced packaging that integrates eight chiplets, TSMC will use the A16 advanced process to manufacture the chiplets and integrate them with 12 HBM4 stacks, with the solution expected to launch in 2027.
Reportedly, in his speech at the SEMICON Taiwan 2024 “3D IC / CoWoS for AI Summit,” He noted that the global semiconductor market is projected to become a trillion-dollar industry by 2030, with HPC and AI as the key drivers accounting for 40% of the market, which in turn makes AI chips crucial drivers of 3D IC packaging.
Customers choose to manufacture AI chips on a 3D IC platform with multi-chiplet designs mainly for the lower costs and reduced design transition burdens.
Jun He explained that by converting a traditional SoC+HBM design to a chiplet-plus-HBM architecture, the new logic chip is the only component that needs to be designed from scratch, while other components such as I/O and SoC can use existing process technologies. This approach reduces mass production costs by up to 76%.
Although the new architecture might increase production costs by 2%, the total cost of ownership (TCO) is improved by 22% due to these efficiencies, He noted.
However, 3D IC still faces challenges, particularly in increasing production capacity. Jun He emphasized that the key to enhancing 3D IC capacity lies in the size of the chips and the complexity of the manufacturing process.
Regarding chip size, larger chips can accommodate more chiplets, improving performance. However, this also increases the complexity of the process, which can be three times more challenging. Additionally, there are risks associated with chip misalignment, breakage, and failure during extraction.
To address these risk challenges, Jun He identified three key factors: tool automation and standardization, process control and quality, and the support of the 3DFabric manufacturing platform.
For tool automation and standardization, TSMC’s differentiated capabilities with its tool suppliers are crucial. With 64 suppliers now involved, TSMC has gained the ability to lead in advanced packaging tools.
In terms of process control and quality, TSMC utilizes high-resolution PnP tools and AI-driven quality control to ensure comprehensive and robust quality management. Finally, the 3DFabric manufacturing platform integrates 1,500 types of materials within the supply chain to achieve optimization.
(Photo credit: TSMC)
News
Among the memory giants accelerating their development of next-generation HBM amid the AI boom, SK hynix, NVIDIA’s major HBM supplier, is at the forefront as it dominates the market. According to Kangwook Lee, Senior Vice President and Vice President of Packaging, while certain startups may forgo HBM in their AI chip designs due to cost considerations, high-performance computing products still require HBM, a report by TechNews notes.
Lee’s appearance marks the first time SK hynix has delivered a keynote speech at SEMICON Taiwan. He gave a presentation on September 3rd at the Heterogeneous Integration Global Summit, sharing the company’s observations on future HBM trends. Here are the key takeaways compiled by TechNews.
Customized HBM Will Be the Future
Citing Lee’s remarks, the report states that customization will be a crucial trend in the HBM sector. Lee further noted that the major difference between standard and customized HBM lies in the base logic die, into which customers’ IP is integrated. The two categories of HBM, though, share similar core dies.
TrendForce also predicted that the HBM industry will become more customization-oriented in the future. Unlike other DRAM products, HBM will increasingly break away from the standard DRAM framework in terms of pricing and design, turning to more specialized production.
SK hynix has been collaborating with TSMC to develop the sixth generation of HBM products, known as HBM4, which is expected to enter production in 2026. Unlike previous generations, which were based on SK hynix’s own process technology, HBM4 will leverage TSMC’s advanced logic process, which is anticipated to significantly enhance the performance of HBM products while enabling the addition of more features at the same time.
SK hynix: Chiplets to Be Applied Not Only in HBM But Also in SSDs
Regarding the challenges of HBM in the future, Lee mentioned that there are many obstacles in packaging and design. In terms of packaging, the main challenge is the limitation on the number of stacked layers.
According to Lee, SK hynix is particularly interested in directly integrating logic chips with HBM stacks. On the other hand, customers are also showing interest in 3D System-in-Package (3D SIP) technology. In sum, 3D SIP, memory bandwidth, alignment with customer needs, and collaboration will be among the challenges going forward.
Per a report by Korean media outlet TheElec, SK hynix intends to integrate chiplet technology into its memory controllers over the next three years to improve cost management, meaning that parts of the controller will be manufactured with advanced nodes, while other sections will use legacy nodes.
In response, Lee stated that this technology will be used not only for HBM but also for SSD SoC controllers.
When asked about whether some startups might choose to forgo HBM in AI chip design due to cost considerations, Lee responded that it largely depends on the product application. Some companies claim that HBM is too expensive, so they may seek alternative solutions without HBM. High-performance computing products, on the other hand, still require HBM.
(Photo credit: SK hynix)