News
Per a report by BusinessKorea, SK hynix Vice President Ryu Seong-su announced the company’s strategic plan for the HBM field during the SK Group Icheon Forum 2024 held on August 19. SK hynix plans to develop a product that boasts dozens of times the performance of existing HBM technologies.
The report indicates that SK hynix aims to develop a product with performance 20 to 30 times higher than current HBM offerings, achieving product differentiation.
During the forum, Ryu Seong-su emphasized that SK hynix will concentrate on leveraging advanced execution capabilities to provide memory solutions tailored for the AI (Artificial Intelligence) sector to meet the demands of the mass market.
Amid AI advancements, the demand for high-performance HBM has been on the rise, making it a hotspot among global high-tech companies.
According to Ryu Seong-su, Apple, Microsoft, Google’s parent Alphabet, Amazon, NVIDIA, Meta, and Tesla—seven of the world’s tech giants—have all engaged with SK hynix, seeking customized HBM solutions tailored to their specific needs.
Compared to existing HBM products, customized HBM offers clients more options in terms of PPA (Performance, Power, Area), thereby delivering more substantial value.
For example, Samsung believes that the power consumption and area of semiconductors can be largely reduced by stacking HBM memory with custom logic chips in a 3D configuration.
Regarding this trend toward customization in the HBM sector, TrendForce predicts that the HBM industry will become more customization-oriented in the future. Unlike other DRAM products, HBM will increasingly break away from the standard DRAM framework in terms of pricing and design, turning to more specialized production.
SK hynix CEO Kwak Noh-Jung also believes that as HBM4 continues to advance, the demand for customization will grow, which is likely to become a global trend and shift towards a more contract-based model. Moreover, it is expected to mitigate the risk of oversupply in the memory market.
In fact, with the rise of AI, the HBM market is gradually evolving from a “general-purpose” market to a “customization-oriented” one. Going forward, as breakthroughs are made in speed, capacity, power consumption, and cost, HBM is poised to play an even more critical role in the AI sector.
Currently, buyers have already begun making customized requests for HBM4, and both SK hynix and Samsung Electronics have developed strategies to address these demands.
SK hynix has been in collaboration with TSMC to develop the sixth generation of HBM products, known as HBM4, which is expected to enter production in 2026.
Unlike previous generations, including the fifth-generation HBM3E, which were based on SK hynix’s own process technology, HBM4 will leverage TSMC’s advanced logic process, which is anticipated to significantly enhance the performance of HBM products.
Additionally, adopting ultra-fine processing technology for the base die could enable the addition of more features.
SK hynix has stated that with these two major technological upgrades, the company plans to produce HBM products that excel in performance and efficiency, thereby meeting the demand for customized HBM solutions.
Ryu Seong-su believes that as customized products enjoy burgeoning growth, the memory industry is approaching a critical paradigm shift, and SK hynix will continue to take advantage of the opportunities presented by these changes to advance its memory business.
Meanwhile, Samsung Electronics, as a leading IDM semiconductor company with capabilities in wafer foundry, memory, and packaging, is also actively promoting customized HBM AI solutions.
In July 2024, Choi Jang-seok, head of the new business planning group at Samsung Electronics’ memory division, stated at the “Samsung Foundry Forum” that the company intends to develop a variety of customized HBM memory products for the HBM4 generation and announced collaborations with major clients like AMD and Apple.
Choi Jang-seok pointed out that the HBM architecture is undergoing profound changes, with many customers shifting from traditional general-purpose HBM to customized products. Samsung Electronics believes that customized HBM will become a reality in the HBM4 generation.
Read more
(Photo credit: Micron)
Insights
According to TrendForce’s latest memory spot price trend report, regarding DRAM spot prices, though the top three DRAM suppliers have not lowered their official prices, spot prices could weaken further because buyers are more passive than before. On the other hand, the ample supply of reball DDR4 chips has also depressed the momentum for a DDR4 price hike. As for NAND flash, spot traders’ anticipation of a demand revival in 3Q24 is fading, as they have perceived the sluggishness of the contract market. Details are as follows:
DRAM Spot Price:
Continuing from last week, there has been no noticeable demand rebound in the spot market. Although the top three DRAM suppliers have not lowered their official prices, spot prices could weaken further because spot buyers are more passive than before, and the demand situation for consumer electronics has yet to experience a turnaround. It is worth noting that spot prices of DDR4 products have been rising by a smaller margin compared with spot prices of DDR5 products since the start of this year. Consequently, some module houses are showing willingness to restock DDR4 products. However, there is still ample supply of reball DDR4 chips, so spot prices of DDR4 products reveal no indication of hikes. The average spot price of the mainstream chips (i.e., DDR4 1Gx8 2666MT/s) dropped by 0.35% from US$1.985 last week to US$1.978 this week.
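For readers who want to verify the quoted movement, the week-over-week change can be recomputed from the two prices in a few lines of Python (the prices are the ones reported above; the helper function is our own, not part of TrendForce’s report):

```python
def pct_change(last_week: float, this_week: float) -> float:
    """Week-over-week change as a percentage of last week's price."""
    return (this_week - last_week) / last_week * 100

# DDR4 1Gx8 2666MT/s average spot price, per the report above
drop = pct_change(1.985, 1.978)
print(f"{drop:+.2f}%")  # roughly -0.35%, matching the reported figure
```

The same helper reproduces the other quoted moves, e.g. `pct_change(1.989, 1.985)` gives roughly the -0.20% reported elsewhere in this digest.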
NAND Flash Spot Price:
Spot traders’ anticipation of a demand revival in 3Q24 is fading, as they have perceived the sluggishness of the contract market. Wafer prices have dropped compared to last week, which reflects how traders are hoping to minimize their losses from inventory by attempting to finalize transactions as soon as possible amid pessimism. Spot prices of 512Gb TLC wafers have dropped by 1.15% this week, arriving at US$3.234.
News
According to a report from Nikkei, Samsung Electronics, currently lagging behind SK hynix in the HBM market, is said to be betting on next-generation CXL memory, with shipments expected to begin in the second half of this year, anticipating that CXL memory will become the next rising star in AI.
CXL (Compute Express Link) is a cache-coherent interconnect for memory expansion: it maintains memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance.
The CXL module stacks DRAM layers and connects different semiconductor devices like GPUs and CPUs, expanding server memory capacity up to tenfold.
Choi Jang-seok, head of Samsung Electronics’ memory division, explained that CXL technology is comparable to merging wide roads, enabling the efficient transfer of large volumes of data.
As tech companies rush to develop AI models, existing data centers are gradually becoming unable to handle the enormous data processing demands.
As a result, companies are beginning to build larger-scale data centers, but this also significantly increases power consumption. On average, the energy required for a generative AI model to answer a user query is about ten times that of a traditional Google search.
Choi further highlighted that incorporating CXL technology allows for server expansion without the need for physical growth.
In 2021, Samsung became one of the first companies in the world to invest in the development of CXL. This June, Samsung announced that its CXL infrastructure had received certification from Red Hat.
Additionally, Samsung is a member of the CXL Consortium, which is composed of 15 tech companies, with Samsung being the only memory manufacturer among them. This positions Samsung to potentially gain an advantage in the CXL market.
While HBM remains the mainstream memory used in AI chipsets today, Choi Jang-seok anticipates that the CXL market will take off starting in 2027.
Since the surge in demand for NVIDIA’s AI chips, the HBM market has rapidly expanded. SK hynix, which was the first to develop HBM in 2013, has since secured the majority of NVIDIA’s orders, while Samsung has lagged in HBM technology.
Seeing Samsung’s bet on CXL, SK Group Chairman Chey Tae-won remarked that SK hynix should not settle for the status quo and should immediately start seriously considering the next generation of profit models.
Read more
(Photo credit: Samsung)
News
NEO Semiconductor, a company focused on 3D DRAM and 3D NAND memory, has unveiled its latest 3D X-AI chip technology, which could potentially replace the existing HBM used in AI GPU accelerators.
Reportedly, this 3D DRAM comes with built-in AI processing capabilities, enabling it to process and generate outputs in memory rather than shuttling data to an external processor. By cutting down on the large amounts of data transferred between memory and processors, it can alleviate data bus bottlenecks, thereby enhancing AI performance and reducing power consumption.
The 3D X-AI chip has an underlying neuron circuit layer that can process data stored in the 300 memory layers on the same chip. NEO Semiconductor states that with 8,000 neuron circuits performing AI processing in memory, 3D memory performance can be increased by 100 times, with memory density 8 times higher than current HBM. By reducing the amount of data processed in the GPU, power consumption can be reduced by 99%.
A single 3D X-AI die contains 300 layers of 3D DRAM cells and one layer of neural circuits with 8,000 neurons. It also has a capacity of 128GB, with each chip supporting up to 10 TB/s of AI processing capability. Using 12 3D X-AI dies stacked with HBM packaging can achieve 120 TB/s processing throughput. Thus, NEO estimates that this configuration may eventually result in a 100-fold performance increase.
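The headline stack figures follow from simple arithmetic on the per-die numbers quoted by NEO Semiconductor. The sketch below uses only those quoted figures; the derived stack capacity (128GB × 12 dies) is our own inference, not a number NEO has published:

```python
# Per-die figures quoted by NEO Semiconductor for 3D X-AI
DIE_CAPACITY_GB = 128        # 300 layers of 3D DRAM cells per die
DIE_THROUGHPUT_TBPS = 10     # AI processing capability per die
DIES_PER_STACK = 12          # dies stacked with HBM packaging

stack_capacity_gb = DIE_CAPACITY_GB * DIES_PER_STACK        # derived, not quoted
stack_throughput_tbps = DIE_THROUGHPUT_TBPS * DIES_PER_STACK

print(stack_capacity_gb)      # 1536 GB per 12-die stack
print(stack_throughput_tbps)  # 120 TB/s, matching the figure above
```

The 120 TB/s figure thus assumes the per-die throughput scales linearly across the stack.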
Andy Hsu, Founder & CEO of NEO Semiconductor, noted that current AI chips waste significant amounts of performance and power due to architectural and technological inefficiencies. The existing AI chip architecture stores data in HBM and relies on a GPU for all calculations.
He further claimed that the separation of data storage and processing architecture has made the data bus an unavoidable performance bottleneck, leading to limited performance and high power consumption during large data transfers.
The 3D X-AI, as per Hsu, can perform AI processing within each HBM chip, which may drastically reduce the data transferred between HBM and the GPU, thus significantly improving performance and reducing power consumption.
Many companies are researching technologies to increase processing speed and communication throughput. As semiconductor speeds and efficiencies continue to rise, the data bus transferring information between components will become a bottleneck. Therefore, such technologies will enable all components to accelerate together.
As per a report from Tom’s Hardware, companies like TSMC, Intel, and Innolux are already exploring optical technologies, looking for faster communication within the motherboard. By shifting some AI processing from the GPU to the HBM, NEO Semiconductor may reduce the GPU’s workload and potentially achieve better efficiency than current power-hungry AI accelerators.
Read more
(Photo credit: NEO Semiconductor)
Insights
According to TrendForce’s latest memory spot price trend report, regarding DRAM spot prices, demand has yet to show improvement, leading to increased inventory pressure on suppliers, which indicates the potential for larger price drops in the future. As for NAND flash, the overall price trend is still shifting toward a reduction, leading to a small drop in spot prices for packaged dies and wafers. Details are as follows:
DRAM Spot Price:
Continuing from last week, demand has yet to show improvement, leading to increased inventory pressure on suppliers. Consequently, suppliers are more willing to offer price concessions in the spot market. Overall, spot transactions continue to show low volumes. Additionally, the prices that buyers are willing to accept are significantly lower than the official prices set by sellers, resulting in a stalemate. Therefore, there is a potential for larger price drops in the future. The average spot price of mainstream chips (i.e., 1Gx8 2666MT/s) fell by 0.20% from US$1.989 last week to US$1.985 this week.
NAND Flash Spot Price:
Sluggishness persists in spot market transactions after the opening of August, with buyers maintaining a strong on-the-fence sentiment. Despite the emergence of demand from partial stocking orders, the overall price trend is still shifting toward a reduction due to a lack of continuity, which led to a small drop in spot prices for packaged dies and wafers this week, with 512Gb TLC wafers falling by 0.58% to US$3.272.