News
To capture the booming demand for AI processors, memory heavyweights have been aggressively expanding HBM (High Bandwidth Memory) capacity while striving to improve yield and competitiveness. The latest development is Micron’s reported new plant in Hiroshima Prefecture, Japan.
The fab, which aims to begin producing DRAM chips for HBM as early as 2027, will reportedly manufacture DRAM on the most advanced “1γ” (gamma; 11–12 nanometer) process using extreme ultraviolet (EUV) lithography equipment.
Why is HBM such a hot topic, and why is it so important?
HBM: Solution to High Performance Computing; Perfectly Fitted for AI Chips
By applying 3D stacking technology, which enables multiple layers of chips to be stacked on top of each other, HBM’s TSV (through-silicon via) process packs more memory dies into a smaller space and shortens the distance data needs to travel. This makes HBM well suited to high-performance computing applications, which require fast data transfer. Additionally, replacing GDDR SDRAM or DDR SDRAM with HBM helps control energy consumption.
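HBM’s speed advantage comes from its very wide interface rather than a high per-pin clock. A rough, illustrative sketch (the HBM2 and GDDR6 figures below are typical published per-device specs, used here only as assumptions, not numbers from this article):

```python
# Peak bandwidth = interface width (bits) x per-pin data rate (Gb/s) / 8.
# Illustrative: an HBM2 stack exposes a 1024-bit interface at a modest pin
# speed, while a GDDR6 chip uses a narrow 32-bit interface at a high pin speed.

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-device bandwidth in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

hbm2 = peak_bandwidth_gbs(1024, 2.4)   # one HBM2 stack
gddr6 = peak_bandwidth_gbs(32, 16.0)   # one GDDR6 chip

print(f"HBM2 stack: {hbm2:.1f} GB/s")  # 307.2 GB/s
print(f"GDDR6 chip: {gddr6:.1f} GB/s") # 64.0 GB/s
```

Despite running its pins several times slower, the 1024-bit interface gives one HBM stack several times the bandwidth of a single GDDR chip, which is also why its I/O power per bit is lower.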
Thus, it was no surprise that AMD, the GPU heavyweight, collaborated with memory leader SK hynix to develop HBM in 2013. In 2015, AMD launched Fiji, the world’s first high-end consumer GPU with HBM, and in 2016, NVIDIA introduced the P100, its first AI server GPU with HBM.
Entering the Era of HBM3e
Years after the first AI server GPU with HBM was launched, NVIDIA has now incorporated HBM3e (the 5th-generation HBM) in its Hopper H200. The GPU giant’s Blackwell GB200 and B100, which will also adopt HBM3e, are on the way and expected to launch in 2H24.
The current HBM3 supply for NVIDIA’s H100 is primarily met by SK hynix. In March, the company reportedly started mass production of HBM3e and secured orders from NVIDIA. In May, yield details regarding HBM3e were revealed for the first time: according to the Financial Times, SK hynix has come close to its target yield of 80%.
Samsung, on the other hand, made it into NVIDIA’s supply chain with its 1Znm HBM3 products in late 2023 and received AMD MI300 certification by 1Q24. In March, Korean media Alphabiz reported that Samsung may exclusively supply its 12-layer HBM3e to NVIDIA as early as September. However, rumor has it that the products failed NVIDIA’s tests, though Samsung denied the claims, noting that testing is proceeding smoothly and as planned.
According to Korea Joongang Daily, Micron is striving to catch up in the heated HBM3e competition. Following the start of mass production in February, it has recently secured an order from NVIDIA for the H200.
On the demand side, TrendForce notes that HBM3e may become the market mainstream in 2024 and is expected to account for 35% of advanced-process wafer input by the end of the year.
HBM4 Coming Soon? Major Players Gear up for Rising Demand
As for the higher-spec HBM4, TrendForce expects a potential launch in 2026. With the push for higher computational performance, HBM4 is set to expand from the current 12-layer (12hi) to 16-layer (16hi) stacks, with 12hi products slated for a 2026 launch and 16hi for 2027.
The Big Three have all revealed product roadmaps for HBM4. SK hynix, according to reports from Wccftech and TheElec, plans to commence large-scale production of HBM4 in 2026. The chip will reportedly be SK hynix’s first made on its 10nm-class Gen 6 (1c) DRAM process.
As the current market leader in HBM, SK hynix has shown its ambition in both capacity expansion and industry collaboration. According to Nikkei News, it is considering expanding investment to Japan and the US to increase HBM production and meet customer demand.
In April, it disclosed details of its collaboration with TSMC, under which SK hynix plans to adopt TSMC’s advanced logic process for HBM4’s base die, so that additional functionality can be packed into the limited space, and to optimize the integration of HBM with TSMC’s CoWoS advanced packaging.
Samsung, on the other hand, plans to introduce HBM4 in 2025, according to Korea Economic Daily. The memory heavyweight stated at CES 2024 that its HBM chip production volume will increase 2.5 times compared to last year and is projected to double again next year. To meet the booming demand, the company spent KRW 10.5 billion to acquire the plant and equipment of Samsung Display in Cheonan, South Korea, for HBM capacity expansion, and plans to invest KRW 700 billion to 1 trillion in building new packaging lines.
Meanwhile, Micron anticipates launching 12-layer and 16-layer HBM4 with capacities of 36GB to 48GB between 2026 and 2027. After 2028, HBM4e will be introduced, pushing maximum bandwidth beyond 2 TB/s and increasing stack capacity to 48GB–64GB.
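Those capacity and bandwidth figures follow from simple stack arithmetic. A minimal sketch (the 24Gb/32Gb die densities, the 2048-bit HBM4 interface width, and the 8 Gb/s pin rate are illustrative assumptions, not figures from this article):

```python
# Stack capacity (GB) = layers x die density (Gb) / 8 bits-per-byte
def stack_capacity_gb(layers: int, die_gbit: int) -> int:
    return layers * die_gbit // 8

# Peak bandwidth (TB/s) = interface width (bits) x pin rate (Gb/s) / 8 / 1000
def peak_bandwidth_tbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits * pin_rate_gbps / 8 / 1000

print(stack_capacity_gb(12, 24))      # 12hi with assumed 24Gb dies -> 36 GB
print(stack_capacity_gb(16, 24))      # 16hi with assumed 24Gb dies -> 48 GB
print(stack_capacity_gb(16, 32))      # 16hi with assumed 32Gb dies -> 64 GB
print(peak_bandwidth_tbs(2048, 8.0))  # assumed 2048-bit @ 8 Gb/s -> 2.048 TB/s
```

Under these assumptions, moving from 12hi to 16hi stacks and from 24Gb to 32Gb dies reproduces the 36GB–64GB capacity range, and a doubled interface width is one route past the 2 TB/s mark.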
Looking back at history: as market demand for AI chips keeps its momentum, GPU companies tend to diversify their sources, while memory giants vie for their favor by improving yield and product competitiveness.
In the HBM3 era, the supply for NVIDIA’s H100 was at first primarily met by SK hynix. Samsung’s subsequent entry into NVIDIA’s supply chain with its 1Znm HBM3 products in late 2023, though initially minor, signified its breakthrough in this segment. This trend of supplier diversification may continue with HBM4. Who will claim the lion’s share of the next-gen HBM market? Time will tell.
(Photo credit: Samsung)
News
As the demand for AI chips keeps booming, memory giants have been aggressive in their HBM roadmaps. SK hynix, with its leading market position in HBM3e, has now revealed more details regarding HBM4e. According to reports by Wccftech and ET News, SK hynix plans to further distinguish itself by introducing an HBM variant capable of supporting multiple functionalities including computing, caching, and network memory.
While this concept is still in the early stages, SK hynix has begun acquiring semiconductor design IPs to support its objectives, the aforementioned reports noted.
According to ET News, the memory giant intends to establish the groundwork for a versatile HBM with its forthcoming HBM4 architecture. The company reportedly plans to integrate a memory controller onboard, paving the way for new computing capabilities with its 7th-generation HBM4e memory.
With SK hynix’s technique, the package becomes a unified unit, which will not only ensure faster transfer speeds due to significantly reduced structural gaps but also lead to higher power efficiency, according to the reports.
Previously, in April, SK hynix announced that it has been collaborating with TSMC to produce next-generation HBM and to enhance logic–HBM integration through advanced packaging technology. Under the initiative, the company plans to proceed with the development of HBM4, slated for mass production from 2026.
With more HBM4 details now revealed, the memory heavyweight seems set to extend its leading market position in HBM3 by addressing the semiconductor aspect of the HBM structure, Wccftech said.
According to TrendForce’s analysis earlier, as of early 2024, the current landscape of the HBM market is primarily focused on HBM3. NVIDIA’s upcoming B100 or H200 models will incorporate advanced HBM3e, signaling the next step in memory technology. The current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix, leading to a supply shortfall in meeting burgeoning AI market demands.
In late May, SK hynix disclosed yield details regarding HBM3e for the first time. According to a report from the Financial Times, the memory giant has successfully reduced the time needed for mass production of HBM3e chips by 50% and is close to achieving the target yield of 80%.
(Photo credit: SK hynix)
News
Currently, low power consumption remains a key concern in the industry. According to a recent report by the International Energy Agency (IEA), an average Google search requires 0.3Wh while each request to OpenAI’s ChatGPT consumes 2.9Wh; if the roughly 9 billion searches conducted daily were all handled this way, they would require an additional 10 terawatt-hours (TWh) of electricity annually. Based on projected AI server sales, the AI industry might see exponential growth in 2026, with power consumption needs at least ten times those of last year.
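The IEA’s headline number can be reproduced from the per-query figures above. A back-of-the-envelope sketch (the "almost 10 TWh" figure rounds up the incremental energy):

```python
# Incremental energy if every search cost as much as a ChatGPT request.
GOOGLE_WH = 0.3        # Wh per average Google search (IEA figure)
CHATGPT_WH = 2.9       # Wh per ChatGPT request (IEA figure)
SEARCHES_PER_DAY = 9e9

extra_wh_per_year = (CHATGPT_WH - GOOGLE_WH) * SEARCHES_PER_DAY * 365
extra_twh = extra_wh_per_year / 1e12   # 1 TWh = 1e12 Wh

print(f"{extra_twh:.1f} TWh/year")     # ~8.5 TWh, i.e. almost 10 TWh
```

The 2.6Wh-per-query gap, scaled to 9 billion queries a day, works out to roughly 8.5 TWh per year, consistent with the report’s "almost 10 TWh" framing.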
Ahmad Bahai, CTO of Texas Instruments, stated in a previous report from Business Korea that AI services have recently expanded beyond the cloud to mobile and PC devices, leading to a surge in power consumption and making low power a hot topic.
In response to market demands, the industry is actively developing semiconductors with lower power consumption. On memory products, the development of LPDDR and related products such as Low Power Compression Attached Memory Module (LPCAMM) is accelerating. These products are particularly suitable for achieving energy conservation in mobile devices with limited battery capacity. Additionally, the expansion of AI applications in server and automotive fields is driving the increased use of LPDDR to reduce power consumption.
Among the major companies, Micron, Samsung Electronics, and SK Hynix are speeding up development of the next generation of LPDDR. Micron recently announced the launch of the Crucial LPCAMM2, a dedicated low-power packaging module incorporating its latest LPDDR products (LPDDR5X); compared to existing modules, it is 64% smaller and 58% more power-efficient. LPCAMM itself was first introduced by Samsung Electronics last year and is expected to enjoy significant market growth this year.
Currently, the Joint Electron Device Engineering Council (JEDEC) plans to complete the LPDDR6 specification within this year. According to industry news cited by the Korean media BusinessKorea, LPDDR6 is expected to reach commercialization next year, and the industry predicts its bandwidth may be more than double that of the previous generation.
(Photo credit: SK Hynix)
News
Samsung’s HBM, according to a report from TechNews, has yet to pass certification by GPU giant NVIDIA, causing it to fall behind its competitor SK Hynix. As a result, the head of Samsung’s semiconductor division was replaced. Although Samsung denies any issues with their HBM and emphasizes close collaboration with partners, TechNews, citing market sources, indicates that Samsung has indeed suffered a setback.
Samsung invested early in HBM development and collaborated with NVIDIA on HBM and HBM2, but sales were modest. Eventually, the HBM team, according to TechNews’ report, moved to SK Hynix to develop HBM products. Unexpectedly, the surge in generative AI led to a sharp increase in HBM demand, and SK Hynix, benefitting from the trend, seized the opportunity with the help of the team.
Yet, in response to the rumors about changes in the HBM team, SK Hynix has denied both the claim that it developed HBM with the help of a former Samsung team and the claim that Samsung’s HBM team transferred to SK Hynix, emphasizing that its HBM was developed solely by its own engineers.
Samsung’s misfortune is evident; despite years of effort, they faced setbacks just as the market took off. Samsung must now find alternative ways to catch up. The market still needs Samsung, as noted by Wallace C. Kou, President of memory IC design giant Silicon Motion.
Kou reportedly stated that Samsung remains the largest memory producer, and as NVIDIA faces a supply shortage for AI chips, the GPU giant is keen to cooperate with more suppliers. Therefore, it’s only a matter of time before Samsung supplies HBM to NVIDIA.
Furthermore, Samsung indicated in a recent statement that it is conducting HBM tests with multiple partners to ensure quality and reliability.
In the statement, Samsung notes that it is optimizing its products through close collaboration with customers, with testing proceeding smoothly and as planned. As HBM is a customized memory product, it requires optimization in line with customers’ needs.
Samsung also states that it is currently partnering closely with various companies to continuously test technology and performance, and to thoroughly verify the quality and performance of its HBM.
On the other hand, NVIDIA has various GPUs adopting HBM3e, including H200, B200, B100, and GB200. Although all of them require HBM3e stacking, their power consumption and heat dissipation requirements differ. Samsung’s HBM3e may be more suitable for H200, B200, and AMD Instinct MI350X.
(Photo credit: SK Hynix)
News
According to a report by Nikkei News, SK Hynix is considering expanding its investment to Japan and the US to increase HBM production and meet customer demand.
Reportedly, the demand for high-bandwidth memory (HBM) is surging thanks to the AI boom. SK Group Chairman and CEO Chey Tae-won stated at the Future of Asia forum in Tokyo on May 23rd that if overseas investment becomes necessary, the company would consider manufacturing these products in Japan and the United States.
Chey Tae-won also mentioned that SK will further strengthen its partnerships with Japanese chip manufacturing equipment makers and materials suppliers, considering increased investments in Japan. He emphasized that collaboration with Japanese suppliers is crucial for advanced semiconductor manufacturing.
When selecting chip manufacturing sites, Chey highlighted the importance of accessing clean energy, as customers are demanding significant reductions in supply chain greenhouse gas emissions.
Additionally, Chey stated that SK intends to enhance R&D collaboration with Japanese partners for next-generation semiconductor products.
Kwon Jae-soon, a senior executive at SK Hynix, stated in a report published by the Financial Times on May 21 that the yield rate of their HBM3e is approaching the 80% target, and the production time has been reduced by 50%.
Kwon emphasized that the company’s goal this year is to produce 8-layer stacked HBM3e, as this is what customers need the most. He noted that improving yield rates is becoming increasingly important to maintain a leading position in the AI era.
SK Hynix’s HBM capacity is almost fully booked through next year. The company plans to collaborate with TSMC to mass-produce more advanced HBM4 chips starting next year.
(Photo credit: SK Hynix)