News
At SEMICON Taiwan 2024, Samsung’s Head of Memory Business, Jung Bae Lee, stated that as the industry enters the HBM4 era, collaboration between memory makers, foundries, and customers is becoming increasingly crucial.
Reportedly, Samsung is prepared with turnkey solutions while maintaining flexibility, allowing customers to design their own base die (foundation die) and not restricting production to Samsung’s foundries.
As per anue, Samsung will actively collaborate with others, with speculation suggesting this may involve outsourcing orders to TSMC.
Citing sources, anue reported that SK hynix has signed a memorandum of understanding with TSMC in response to changes in the HBM4 architecture. TSMC will handle the production of SK hynix’s base die using its 12nm process.
This move helps SK hynix maintain its leadership while also ensuring a close relationship with NVIDIA.
Jung Bae Lee further noted that in the AI era, memory faces the dual challenge of high performance and low energy consumption, such as increasing I/O counts and faster transmission speeds. One solution is to outsource the base die to foundries using logic processes, then integrate it with memory through Through-Silicon Via (TSV) technology to create customized HBM.
Lee anticipates that this shift will occur after HBM4, signifying increasingly close collaboration between memory makers, foundries, and customers. With Samsung’s expertise in both memory and foundry services, the company is prepared with turnkey solutions, offering customers end-to-end production services.
Still, Jung Bae Lee emphasized that Samsung’s memory division has also developed an IP solution for the base die, enabling customers to design their own chips. Samsung is committed to providing flexible foundry services, with future collaborations not limited to Samsung’s foundries, and plans to actively partner with others to drive industry transformation.
Reportedly, Samsung is optimistic about the HBM market, projecting it to reach 1.6 billion Gb this year—double the combined figure from 2016 to 2023—highlighting HBM’s explosive growth.
Addressing the matter, TrendForce further notes that for the HBM4 generation base die, SK hynix plans to use TSMC’s 12nm and 5nm foundry services. Meanwhile, Samsung will employ its own 4nm foundry, and Micron is expected to produce in-house using a planar process. These plans are largely finalized.
For the HBM4e generation, TrendForce anticipates that both Samsung and Micron will be more inclined to outsource the production of their base dies to TSMC. This shift is primarily driven by the need to boost chip performance and support custom designs, making further process miniaturization more critical.
Moreover, the increased integration of CoWoS packaging with HBM further strengthens TSMC’s position as it is the main provider of CoWoS services.
Read more
(Photo credit: TechNews)
News
As per its official release, SK hynix has announced that it has developed the industry’s first 16Gb DDR5 built using its 1c node, the sixth generation of the 10nm process. Reportedly, the company will be ready for mass production of the 1c DDR5 within the year, with volume shipments to start next year.
To reduce potential errors arising from the process migration and to carry over the strengths of the 1b node, the company claims in the release that it extended the 1b DRAM platform for the development of 1c.
As per the press release, the operating speed of the 1c DDR5, expected to be adopted for high-performance data centers, is improved by 11% from the previous generation, to 8Gbps.
With power efficiency also improved by more than 9%, SK hynix expects the adoption of 1c DRAM to help data centers cut electricity costs by as much as 30%, at a time when the advance of the AI era is driving up power consumption.
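As a back-of-the-envelope check (not from the release itself), the quoted 11% speed uplift to 8Gbps implies a previous-generation speed of roughly 7.2Gbps:

```python
# Back-of-the-envelope check on the quoted 1c DDR5 figures.
new_speed_gbps = 8.0   # 1c DDR5 operating speed per the release
speed_uplift = 0.11    # "improved by 11% from the previous generation"

# Solving new = prev * (1 + uplift) for the implied previous-generation speed:
implied_prev = new_speed_gbps / (1 + speed_uplift)
print(f"Implied previous-generation speed: {implied_prev:.1f} Gbps")  # ~7.2
```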
Per a report from Businesskorea, the difficulty of advancing the shrinking process for 10nm-range DRAM technology has increased with each generation.
However, with this official release, SK hynix has become the first in the industry to overcome these technological limitations by achieving a higher level of design completion.
Per another report from Korea JoongAng Daily, this marks a win for SK hynix, as its rival Samsung Electronics had previously outpaced it in the development of the 1b DRAM, which corresponds to nodes in the 12-nanometer range.
Read more
(Photo credit: SK hynix)
News
On August 13th, a report from Wallstreetcn citing industry sources indicated that SK hynix has raised the price of its DDR5 DRAM by 15% to 20%. Per the sources, the price hike by hynix is primarily due to production capacity being squeezed by HBM3/3e. Additionally, increased downstream orders for AI servers have also strengthened SK hynix’s resolve to raise DDR5 prices.
According to industry sources cited by Economic Daily News, for Taiwanese manufacturers, Nanya Technology has recently started mass production of DDR5, just in time to benefit from this price surge. Module makers such as ADATA and Team Group are also likely to see gains from low-cost inventory.
Nanya Technology has begun shipping its 16Gb DDR5, developed using its 1B process. Nanya Technology is optimistic that the DRAM market is on a clear path to recovery. This may be due to last year’s production cuts by the three major memory manufacturers—Samsung, SK hynix, and Micron—as well as the strong demand for HBM driven by generative AI. The resulting chain reaction is expected to positively impact various types of DRAM.
SK hynix previously announced that its entire HBM production capacity for 2024 has been fully booked, with almost all of its 2025 capacity also sold out. To meet customer demand, SK hynix plans to convert over 20% of its existing DRAM production lines to mass-produce HBM.
Samsung, on the other hand, is said to be actively trying to catch up with SK hynix, looking to allocate around 30% of its DRAM production capacity to HBM.
The significant adjustments by Samsung and SK hynix to their production lines have severely squeezed the capacity for DDR4 and DDR5 DRAM, potentially leading to a sharp reduction in supply and causing prices to rise. Reportedly, SK hynix’s price increase for DDR5 primarily targets contract prices.
Read more
(Photo credit: Nanya Technology)
Insights
According to TrendForce’s latest memory spot price trend report, neither DRAM nor NAND spot prices saw much momentum. DDR5 products are relatively stable, while the spot prices of DDR4 products continue to fall gradually due to high inventory levels. As for NAND flash, the spot market saw no apparent changes from last week, with transactions restricted, also due to sufficient inventory. Details are as follows:
DRAM Spot Price:
The market has not shown notable changes in terms of momentum, and spot prices of DDR5 products are relatively stable. As for DDR4 products, spot prices continue to fall gradually due to high inventory levels. Overall, spot trading is quite limited in terms of volume due to the constraint imposed by weak consumer demand. The average spot price of the mainstream chips (i.e., DDR4 1Gx8 2666MT/s) dropped by 0.10% from US$1.991 last week to US$1.989 this week.
NAND Flash Spot Price:
The spot market saw no apparent changes from last week at a restricted level of transactions also due to sufficient inventory. Spot prices of 512Gb TLC wafers have risen by 1.17% this week, arriving at US$3.291.
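The week-over-week moves quoted above can be verified with simple arithmetic (prices in US$, figures from the report; the implied prior NAND price is our own derivation, not a reported number):

```python
# DRAM: DDR4 1Gx8 2666MT/s average spot price, last week vs. this week.
dram_last, dram_now = 1.991, 1.989
dram_change_pct = (dram_now - dram_last) / dram_last * 100
print(f"DRAM week-over-week change: {dram_change_pct:.2f}%")  # -0.10%

# NAND: 512Gb TLC wafers rose 1.17% to US$3.291, so the implied prior
# price is the new price divided by (1 + rise):
nand_now, nand_rise_pct = 3.291, 1.17
nand_prior = nand_now / (1 + nand_rise_pct / 100)
print(f"Implied prior NAND price: US${nand_prior:.3f}")  # ~US$3.253
```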
News
Amidst the tide of artificial intelligence (AI), new types of DRAM represented by HBM are embracing a new round of development opportunities. Meanwhile, driven by server demand, MRDIMM/MCRDIMM have emerged as newly sought-after products in the memory industry, stepping into the spotlight.
According to a report from WeChat account DRAMeXchange, currently, the rapid development of AI and big data is boosting an increase in the number of CPU cores in servers. To meet the data throughput requirements of each core in multi-core CPUs, it is necessary to significantly increase the bandwidth of memory systems. In this context, HBM modules for servers, MRDIMM/MCRDIMM, have emerged.
On July 22, JEDEC announced that it will soon release the DDR5 Multiplexed Rank Dual Inline Memory Modules (MRDIMM) and the next-generation LPDDR6 Compression-Attached Memory Module (CAMM) advanced memory module standards, and introduced key details of these two types of memory, aiming to support the development of next-generation HPC and AI. These two new technical specifications were developed by JEDEC’s JC-45 DRAM Module Committee.
As a follow-up to JEDEC’s JESD318 CAMM2 memory module standard, JC-45 is developing the next-generation CAMM module for LPDDR6, with a target maximum speed of over 14.4GT/s. According to the plan, this module will also provide 24-bit wide subchannels and 48-bit wide channels, and support a “connector array” to meet the needs of future HPC and mobile devices.
DDR5 MRDIMM supports multiplexed rank columns, which can combine and transmit multiple data signals on a single channel, effectively increasing bandwidth without additional physical connections. It is reported that JEDEC has planned multiple generations of DDR5 MRDIMM, with the ultimate goal of increasing its bandwidth to 12.8Gbps, doubling the current 6.4Gbps of DDR5 RDIMM memory and improving pin speed.
In JEDEC’s vision, DDR5 MRDIMM will utilize the same pins, SPD, PMIC, and other designs as existing DDR5 DIMMs, be compatible with the RDIMM platform, and leverage the existing LRDIMM ecosystem for design and testing.
JEDEC stated that these two new technical specifications are expected to bring a new round of technological innovation to the memory market.
In March 2023, AMD announced at the Memcom 2023 event that it is collaborating with JEDEC to develop a new DDR5 MRDIMM standard memory, targeting a transfer rate of up to 17600 MT/s. According to a report from Tom’s Hardware at that time, the first generation of DDR5 MRDIMM aims for a rate of 8800 MT/s, which will gradually increase, with the second generation set to reach 12800 MT/s, and the third generation to 17600 MT/s.
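The roadmap rates above follow a simple doubling pattern from DDR5’s current pin speed, which can be sketched as follows (rates in MT/s, per the figures quoted above):

```python
# DDR5 MRDIMM roadmap rates quoted in the article (MT/s).
ddr5_rdimm = 6400                    # current DDR5 RDIMM rate cited by JEDEC
mrdimm_gens = [8800, 12800, 17600]   # gen 1, 2, 3 targets per Tom's Hardware

# Gen 2 doubles today's 6.4 GT/s RDIMM rate, and gen 3 doubles gen 1:
assert mrdimm_gens[1] == 2 * ddr5_rdimm
assert mrdimm_gens[2] == 2 * mrdimm_gens[0]

for gen, rate in enumerate(mrdimm_gens, start=1):
    print(f"MRDIMM gen {gen}: {rate} MT/s ({rate / ddr5_rdimm:.2f}x RDIMM)")
```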
MRDIMM, short for “Multiplexed Rank DIMM,” integrates two DDR5 DIMMs into one, thereby providing double the data transfer rate while allowing access to two ranks.
On July 16, memory giant Micron announced the launch of the new MRDIMM DDR5, which is currently sampling and will provide ultra-large capacity, ultra-high bandwidth, and ultra-low latency for AI and HPC applications. Mass shipment is set to begin in the second half of 2024.
MRDIMM offers the highest bandwidth, largest capacity, lowest latency, and better performance per watt. Micron said that it outperforms current TSV RDIMM in accelerating memory-intensive multi-tenant virtualization, HPC, and AI data center workloads.
Compared to traditional RDIMM DDR5, MRDIMM DDR5 can achieve an effective memory bandwidth increase of up to 39%, a bus efficiency improvement of over 15%, and a latency reduction of up to 40%.
MRDIMM supports capacity options ranging from 32GB to 256GB, covering both standard and tall form factor (TFF) specifications, suitable for high-performance 1U and 2U servers. The 256GB TFF MRDIMM outperforms TSV RDIMM of similar capacity by 35%.
This new memory product is the first generation of Micron’s MRDIMM series and will be compatible with Intel Xeon processors. Micron stated that subsequent generations of MRDIMM products will continue to offer 45% higher single-channel memory bandwidth compared to their RDIMM counterparts.
As one of the world’s largest memory manufacturers, SK hynix already introduced a product similar to MRDIMM, called MCRDIMM, even before AMD and JEDEC.
MCRDIMM, short for “Multiplexer Combined Ranks Dual In-line Memory Module,” is a module product that combines multiple DRAMs on a substrate and operates the module’s two basic units of information processing, known as ranks, simultaneously.
In late 2022, SK hynix partnered with Intel and Renesas to develop the DDR5 MCR DIMM, which became the fastest server DRAM product in the industry at the time. As per Chinese IC design company Montage Technology’s 2023 annual report, MCRDIMM can also be considered the first generation of MRDIMM.
Traditional DRAM modules can only transfer 64 bytes of data to the CPU at a time, while SK hynix’s MCRDIMM module can transfer 128 bytes by running two memory ranks simultaneously. This increase in the amount of data transferred to the CPU each time boosts the data transfer speed to over 8Gbps, doubling that of a single DRAM.
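The doubling described above follows directly from operating two ranks in lockstep; a toy model of the arithmetic (the per-rank figure and the 8Gbps claim are from the article, the baseline speed is illustrative):

```python
# Toy model of MCRDIMM's combined-rank transfer (figures from the article).
BYTES_PER_RANK = 64   # a traditional DRAM module transfers 64 bytes per access
RANKS_COMBINED = 2    # MCRDIMM operates two ranks simultaneously

bytes_per_transfer = BYTES_PER_RANK * RANKS_COMBINED
print(bytes_per_transfer)  # 128 bytes per transfer, as the article states

# Doubling the data moved per transfer doubles the effective data rate
# at the same clock; the baseline below is illustrative, while SK hynix
# quotes "over 8Gbps" for the combined result.
single_dram_gbps = 4.0
combined_gbps = single_dram_gbps * RANKS_COMBINED
print(combined_gbps)  # 8.0
```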
At that time, SK hynix anticipated that the market for MCR DIMM would gradually open up, driven by the demand for increased memory bandwidth in HPC. According to SK hynix’s FY2024 Q2 financial report, the company will launch 32Gb DDR5 DRAM for servers and MCRDIMM products for HPC in 2H24.
MCRDIMM/MRDIMM adopts the DDR5 LRDIMM “1+10” architecture, requiring one MRCD chip and ten MDB chips. Conceptually, MCRDIMM/MRDIMM allows parallel access to two ranks within the same DIMM, increasing the capacity and bandwidth of the DIMM module by a large margin.
Compared to RDIMM, MCRDIMM/MRDIMM can offer higher bandwidth while maintaining good compatibility with the existing mature RDIMM ecosystem. Additionally, MCRDIMM/MRDIMM is expected to enable much higher overall server performance and lower total cost of ownership (TCO) for enterprises.
MRDIMM and MCRDIMM both fall under the category of DRAM memory modules, which have different application scenarios relative to HBM and occupy their own independent market space. As an industry-standard packaged memory, HBM can achieve higher bandwidth and energy efficiency in a given capacity with a smaller size. However, due to high cost, small capacity, and lack of scalability, its application is limited to a few fields. Thus, from an industry perspective, memory modules remain the mainstream solution for large-capacity, cost-effective, and scalable memory.
Montage Technology believes that, based on its high bandwidth and large capacity advantages, MRDIMM is likely to become the preferred main memory solution for future AI and HPC. As per JEDEC’s plan, the future new high-bandwidth memory modules for servers, MRDIMM, will support even higher memory bandwidth, further matching the bandwidth demands of HPC and AI application scenarios.
Read more
(Photo credit: SK hynix)