News
South Korean memory giant SK hynix, after announcing soaring Q2 financial results and a massive investment in the Yongin Semiconductor Cluster last week, is now reportedly considering another move: a US IPO for its Solidigm subsidiary.
According to reports by Blocks & Files and Korean media outlet Hankyung, Solidigm has achieved its first profit after 12 consecutive quarters of losses. On July 25th, SK hynix announced second-quarter revenue of 16.42 trillion Korean won, a 125% year-on-year increase and a record high. At the same time, profits reached their highest level since 2018. This was mainly due to strong demand for AI memory, including HBM, and overall price increases for DRAM and NAND products.
The reports stated that the rumor of a U.S. IPO seems plausible, as SK hynix had previously planned to spin off Solidigm, and the company's recent rebound makes such a move more feasible. In addition, an IPO would allow SK hynix to obtain cash for part of its stake in Solidigm and help cover its planned capital expenditures, according to the reports.
The company had just announced an ambitious plan to expand its memory manufacturing capacity with an approximately 9.4 trillion won (USD 6.8 billion) investment to build an HBM fabrication plant at the Yongin Semiconductor Cluster in Korea. Construction of the fab will begin in March 2025 and is expected to be completed by May 2027. Following this, SK hynix intends to add three more plants to the cluster.
However, the reports also pointed out that SK hynix’s success in this venture will likely depend on how the new organization is structured—such as which assets are included in Solidigm versus those retained by SK hynix—and how both entities address future technology plans. This is particularly important considering that the current roadmap for the memory giant’s NAND business at Dalian, China, including the QLC components that have contributed to Solidigm’s recent success in high-capacity enterprise SSDs, appears to conclude at 196 layers.
In 2020, SK hynix acquired Intel’s NAND and SSD division through a two-phase deal. The first phase saw SK hynix purchase Intel’s SSD business and its NAND fabrication plant in Dalian, China, for USD 7 billion. In the second phase, SK hynix will pay Intel an additional USD 2 billion in 2025 for intellectual property related to NAND flash wafer manufacturing and design, as well as for R&D employees and the Dalian fab workforce. SK hynix named the acquired business Solidigm in December 2021 and has since developed and launched several successful products, including the D5-P5336 61.44 TB QLC (4 bits/cell) SSD, the reports noted.
Regarding the rumor, SK hynix clarified that Solidigm is exploring various growth strategies, but no decision has been made at this time.
(Photo credit: Solidigm)
News
Amid the wave of artificial intelligence (AI), new types of DRAM, represented by HBM, are seeing a new round of growth opportunities. Meanwhile, driven by server demand, MRDIMM/MCRDIMM have emerged as the memory industry's newest sought-after products.
According to a report from WeChat account DRAMeXchange, the rapid development of AI and big data is driving up the number of CPU cores in servers. To meet the data throughput requirements of each core in multi-core CPUs, memory system bandwidth must increase significantly. In this context, high-bandwidth memory modules for servers, MRDIMM/MCRDIMM, have emerged.
On July 22, JEDEC announced that it will soon release two advanced memory module standards, the DDR5 Multiplexed Rank Dual Inline Memory Module (MRDIMM) and the next-generation LPDDR6 Compression Attached Memory Module (CAMM), and introduced key details of both, aiming to support the development of next-generation HPC and AI. The two technical specifications were developed by JEDEC’s JC-45 DRAM Module Committee.
As a follow-up to JEDEC’s JESD318 CAMM2 memory module standard, JC-45 is developing the next-generation CAMM module for LPDDR6, with a target maximum speed of over 14.4 GT/s. According to the plan, this module will also provide 24-bit-wide subchannels and 48-bit-wide channels, and will support connector arrays to meet the needs of future HPC and mobile devices.
DDR5 MRDIMM supports multiplexed ranks, combining and transmitting multiple data signals over a single channel to effectively increase bandwidth without additional physical connections. JEDEC has reportedly planned multiple generations of DDR5 MRDIMM, with the ultimate goal of raising its pin speed and bandwidth to 12.8 Gbps, double the 6.4 Gbps of current DDR5 RDIMM memory.
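To put those per-pin data rates in perspective, here is a back-of-the-envelope sketch (our own arithmetic, not from the report) that converts them into peak per-module bandwidth, assuming a standard 64-bit (8-byte) DDR5 data bus and ignoring ECC bits and protocol overhead:

```python
# Rough sketch: peak per-DIMM bandwidth = transfers per second * bytes per transfer.
# Assumes a 64-bit (8-byte) data bus; ECC and protocol overhead are ignored.

def peak_bandwidth_gbps(data_rate_gtps: float, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s for a given data rate in GT/s."""
    return data_rate_gtps * bus_bytes

for label, rate in [("DDR5 RDIMM", 6.4), ("MRDIMM gen1", 8.8), ("MRDIMM target", 12.8)]:
    print(f"{label}: {rate} GT/s -> {peak_bandwidth_gbps(rate):.1f} GB/s")
# DDR5 RDIMM: 6.4 GT/s -> 51.2 GB/s
# MRDIMM gen1: 8.8 GT/s -> 70.4 GB/s
# MRDIMM target: 12.8 GT/s -> 102.4 GB/s
```

The 8.8 GT/s figure corresponds to the first-generation MRDIMM rate mentioned below; the doubling from 51.2 GB/s to 102.4 GB/s mirrors the doubling of pin speed described above.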
In JEDEC’s vision, DDR5 MRDIMM will utilize the same pins, SPD, PMIC, and other designs as existing DDR5 DIMMs, be compatible with the RDIMM platform, and leverage the existing LRDIMM ecosystem for design and testing.
JEDEC stated that these two new technical specifications are expected to bring a new round of technological innovation to the memory market.
In March 2023, AMD announced at the MemCon 2023 event that it was collaborating with JEDEC to develop a new DDR5 MRDIMM standard memory, targeting a transfer rate of up to 17600 MT/s. According to a report from Tom’s Hardware at that time, the first generation of DDR5 MRDIMM aims for a rate of 8800 MT/s, which will gradually increase, with the second generation set to reach 12800 MT/s and the third generation 17600 MT/s.
MRDIMM, short for “Multiplexed Rank DIMM,” integrates two DDR5 DIMMs into one, thereby providing double the data transfer rate while allowing access to two ranks.
On July 16, memory giant Micron announced the launch of the new MRDIMM DDR5, which is currently sampling and will provide ultra-large capacity, ultra-high bandwidth, and ultra-low latency for AI and HPC applications. Mass shipment is set to begin in the second half of 2024.
Micron said the MRDIMM offers the highest bandwidth, largest capacity, lowest latency, and best performance per watt, outperforming current TSV RDIMMs in accelerating memory-intensive multi-tenant virtualization, HPC, and AI data center workloads.
Compared to traditional RDIMM DDR5, MRDIMM DDR5 can achieve an effective memory bandwidth increase of up to 39%, a bus efficiency improvement of over 15%, and a latency reduction of up to 40%.
MRDIMM supports capacity options ranging from 32GB to 256GB, covering both standard and tall form factor (TFF) specifications, suitable for high-performance 1U and 2U servers. The 256GB TFF MRDIMM outperforms TSV RDIMMs of similar capacity by 35% in performance.
This new memory product is the first generation of Micron’s MRDIMM series and will be compatible with Intel Xeon processors. Micron stated that subsequent generations of MRDIMM products will continue to offer 45% higher single-channel memory bandwidth compared to their RDIMM counterparts.
As one of the world’s largest memory manufacturers, SK hynix had already introduced a product similar to MRDIMM, called MCRDIMM, even before AMD and JEDEC.
MCRDIMM, short for “Multiplexer Combined Ranks Dual In-line Memory Module,” combines multiple DRAM chips on a substrate and operates the module’s two ranks, its basic information-processing units, simultaneously.
In late 2022, SK hynix partnered with Intel and Renesas to develop the DDR5 MCR DIMM, which became the fastest server DRAM product in the industry at the time. As per Chinese IC design company Montage Technology’s 2023 annual report, MCRDIMM can also be considered the first generation of MRDIMM.
Traditional DRAM modules can only transfer 64 bytes of data to the CPU at a time, while SK hynix’s MCRDIMM module can transfer 128 bytes by running two memory ranks simultaneously. This increase in the amount of data transferred to the CPU each time boosts the data transfer speed to over 8Gbps, doubling that of a single DRAM.
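As a rough illustration of that idea (a minimal sketch under assumed numbers, not SK hynix's actual design), the following shows how serving two ranks per access doubles both the data returned to the host and the effective host-side rate:

```python
# Illustrative sketch only: multiplexing two ranks doubles the data per access
# and the effective host-side rate. The per-rank rate below is a hypothetical
# figure chosen for illustration, not a published specification.

CACHE_LINE_BYTES = 64       # bytes a single rank returns per access
PER_RANK_RATE_GTPS = 4.0    # assumed per-rank DRAM data rate (hypothetical)

def conventional_access():
    # One rank responds: 64 bytes cross the bus at the DRAM's native rate.
    return CACHE_LINE_BYTES, PER_RANK_RATE_GTPS

def mcrdimm_access():
    # Both ranks respond in parallel; the data buffer multiplexes the two
    # 64-byte bursts onto the host bus at twice the per-rank rate.
    return 2 * CACHE_LINE_BYTES, 2 * PER_RANK_RATE_GTPS

print("conventional:", conventional_access())   # (64, 4.0)
print("MCRDIMM:", mcrdimm_access())              # (128, 8.0)
```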
At that time, SK hynix anticipated that the market for MCR DIMM would gradually open up, driven by the demand for increased memory bandwidth in HPC. According to SK hynix’s FY2024 Q2 financial report, the company will launch 32Gb DDR5 DRAM for servers and MCRDIMM products for HPC in 2H24.
MCRDIMM/MRDIMM adopts the DDR5 LRDIMM “1+10” architecture, requiring one MRCD chip and ten MDB chips. Conceptually, MCRDIMM/MRDIMM allows parallel access to two ranks within the same DIMM, substantially increasing the module’s capacity and bandwidth.
Compared to RDIMM, MCRDIMM/MRDIMM can offer higher bandwidth while maintaining good compatibility with the existing mature RDIMM ecosystem. Additionally, MCRDIMM/MRDIMM is expected to enable much higher overall server performance and lower total cost of ownership (TCO) for enterprises.
MRDIMM and MCRDIMM both fall under the category of DRAM memory modules, which serve different application scenarios from HBM and occupy their own market space. As an industry-standard packaged memory, HBM can achieve higher bandwidth and energy efficiency at a given capacity and in a smaller footprint. However, due to its high cost, limited capacity, and lack of scalability, its use remains confined to a few fields. From an industry perspective, memory modules therefore remain the mainstream solution where large capacity, cost-effectiveness, and scalability are required.
Montage Technology believes that, given its high-bandwidth and large-capacity advantages, MRDIMM is likely to become the preferred main memory solution for future AI and HPC. As per JEDEC’s plan, future generations of MRDIMM, the new high-bandwidth memory module for servers, will support even higher memory bandwidth, further matching the demands of HPC and AI workloads.
(Photo credit: SK hynix)
News
Amid the wave of AI applications, demand for high-performance memory continues to mushroom, with DRAM, represented by HBM, gaining significant traction. Meanwhile, to further meet market demand, memory manufacturers are gearing up for a new round of DRAM technology upgrades.
According to a report from Korean media outlet Chosun Biz, Samsung Electronics Vice President Changsik Yoo recently announced that Samsung’s next-generation DRAM technology is progressing well. In addition to the successful mass production of 1b DRAM, the development of 4F² DRAM technology is also proceeding smoothly, with the initial 4F² DRAM sample expected by 2025.
Industry sources cited by WeChat account DRAMeXchange indicate that early DRAM used an 8F² cell structure, while currently commercialized DRAM mainly uses 6F². Compared with these, the 4F² cell employs a vertical channel transistor (VCT) structure, which can reduce the chip surface area by about 30%.
As the cell area decreases, DRAM density and performance increase. Therefore, driven by applications like AI, 4F² technology is increasingly sought after by major memory manufacturers.
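For a rough sense of where that ~30% figure comes from, here is a small sketch (our own arithmetic, assuming the same feature size F for both layouts and ignoring peripheral circuitry, which also affects real-world die area):

```python
# DRAM cell area is conventionally expressed as a layout factor times F squared,
# where F is the minimum feature size. Comparing 6F^2 and 4F^2 at the same F
# gives the approximate area saving cited above.

def cell_area(layout_factor: int, feature_size_nm: float) -> float:
    """Cell area in nm^2: layout factor (e.g. 6 for 6F^2) times F squared."""
    return layout_factor * feature_size_nm ** 2

F = 15.0  # hypothetical feature size in nm, chosen only for illustration
area_6f2 = cell_area(6, F)
area_4f2 = cell_area(4, F)
print(f"A 4F^2 cell is {(1 - area_4f2 / area_6f2) * 100:.0f}% smaller than a 6F^2 cell")
# -> A 4F^2 cell is 33% smaller than a 6F^2 cell, in line with the ~30% cited above.
```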
Previously, Samsung stated that many companies are working to transition their technology to 4F² VCT DRAM, although some challenges need to be overcome, including the development of new materials like oxide channel materials and ferroelectrics.
Industry sources believe that the initial sample of Samsung’s 4F² DRAM in 2025 might be for internal use. Semiconductor equipment maker Tokyo Electron estimates that DRAM using VCT and 4F² technology will arrive between 2027 and 2028.
Furthermore, earlier media reports mentioned that Samsung plans to apply hybrid bonding technology to support the production of 4F² DRAM. Hybrid bonding is a next-generation packaging technology in which chips are stacked vertically to increase cell density and thus improve performance; it will also influence the development of HBM4 and 3D DRAM.
In the era of AI, HBM, particularly HBM3e, has thrived in the memory market, prompting fierce competition among the three major DRAM manufacturers. A new race is now underway, primarily focusing on the next-generation HBM4 technology.
In April of this year, SK Hynix announced a partnership with TSMC to jointly develop HBM4. It is reported that the two companies will first work on performance improvements for the base die fitted at the bottom layer within the HBM package. To focus on the development of next-generation HBM4 technology, Samsung has established a new “HBM Development Team.”
In July, Choi Jang-seok, head of the New Business Planning Group in Samsung Electronics’ memory division, revealed that the company is developing a high-capacity HBM4 memory with a single stack of up to 48GB, expected to go into production next year. Recently, Samsung reportedly plans to use a 4nm advanced process to produce HBM4 logic die. Micron, on the other hand, plans to introduce HBM4 between 2025 and 2027 and transition to HBM4E by 2028.
Aside from manufacturing processes, DRAM manufacturers are actively exploring hybrid bonding technology for future HBM products. Compared to existing bonding processes, hybrid bonding eliminates the need for bumps between DRAM memory layers, instead directly connecting the upper and lower layers, copper to copper. This significantly improves signal transmission speed, better matching the high bandwidth requirements of AI computing.
In April of this year, Korean media outlet The Elec reported that Samsung had successfully manufactured a 16-layer stacked HBM3 memory based on hybrid bonding technology, with the memory sample functioning normally. This 16-layer stacked hybrid bonding technology will be used to produce HBM4 at scale in the future. SK Hynix plans to adopt hybrid bonding in its HBM production by 2026. Micron is also developing HBM4 and is evaluating related technologies, including hybrid bonding, all of which remain at the research stage.
3D DRAM (Three-dimensional dynamic random-access memory) represents a new DRAM technology with a novel memory cell structure. Unlike traditional DRAM, which places memory cells horizontally, 3D DRAM vertically stacks memory cells, greatly increasing storage capacity per unit area and improving efficiency. This makes it a key development for the next generation of DRAM.
In the memory market, 3D NAND Flash has already achieved commercial application, while 3D DRAM technology is still under research and development. However, as AI, big data, and other applications enjoy burgeoning growth, the demand for high-capacity, high-performance memory will surge, and 3D DRAM is expected to become a mainstream product in the memory market.
HBM technology has paved the way for DRAM’s 3D evolution, enabling the transition from traditional 2D to 3D. However, current HBM cannot be considered true 3D DRAM technology. Samsung’s 4F² VCT DRAM is closer to the concept of 3D DRAM, but it is not the only direction or goal; memory manufacturers are exploring a range of other approaches to 3D DRAM.
Samsung plans to commercialize 3D DRAM by 2030. In 2024, Samsung showcased two 3D DRAM technologies: VCT and stacked DRAM. Samsung first introduced VCT technology, and then moves to stacked DRAM by stacking multiple VCT layers together to continuously improve DRAM capacity and performance.
Samsung states that stacked DRAM can fully utilize the Z-axis space, accommodating more memory cells in a smaller area, with a single chip capacity exceeding 100Gb. In May of this year, Samsung noted that it, along with other companies, had successfully manufactured 16-layer 3D DRAM, but emphasized that it is not yet ready for mass production. 3D DRAM is expected to be produced using wafer-to-wafer hybrid bonding, and BSPDN (Backside Power Delivery Network) technology is also being considered.
Regarding Micron, industry sources cited by DRAMeXchange reveal that it has filed a 3D DRAM patent application that differs from Samsung’s, aiming to alter the shape of transistors and capacitors rather than stacking cells.
BusinessKorea reported in June that SK Hynix achieved a manufacturing yield of 56.1% for its 5-layer stacked 3D DRAM. This means that out of around 1000 3D DRAMs produced on a single test wafer, about 561 viable devices were manufactured. The experimental 3D DRAM demonstrated characteristics similar to the currently used 2D DRAM, marking the first time SK Hynix disclosed specific numbers and features of its 3D DRAM development.
In addition, American company NEO Semiconductor is also developing 3D DRAM. Last year, NEO Semiconductor announced the world’s first 3D DRAM prototype, 3D X-DRAM. The technology resembles 3D NAND Flash in that it increases memory capacity by stacking layers, offering high yield, low cost, and remarkably high density.
NEO Semiconductor plans to launch the first generation of 3D X-DRAM in 2025, featuring a 230-layer stack and a core capacity of 128Gb, eight times the 16Gb capacity of today’s 2D DRAM.
(Photo credit: SK Hynix)
News
According to a previous report from Bloomberg, Chinese 3D NAND Flash giant YMTC recently filed a lawsuit against American memory giant Micron in California, accusing Micron of infringing 11 of its patents related to 3D NAND Flash and DRAM products. YMTC is asking the court to order Micron to stop selling the allegedly infringing memory products in the United States and to pay patent royalties.
Established at the end of 2016 in Wuhan, YMTC is a major Chinese manufacturer of NAND Flash memory, supported by significant investments from the “Big Fund.” It has become a representative enterprise in China’s effort to build a local chip supply chain. However, in October 2022, the U.S. Department of Commerce added YMTC to the Entity List, preventing it from obtaining advanced equipment from U.S. companies to manufacture 3D NAND chips with 128 layers or more.
Before facing U.S. export controls, YMTC’s 128-layer 3D NAND chip products had already entered Apple’s supply chain and received technical and quality certification from Apple. At that time, Apple reportedly hoped to use YMTC’s chips not only for cost considerations but also to prevent flash memory from being overly concentrated in the hands of Samsung, SK Hynix, and Micron.
A report from Tom’s Hardware states that YMTC’s current allegations assert that Micron’s 96-layer (B27A), 128-layer (B37R), 176-layer (B47R), and 232-layer (B58R) 3D NAND Flash products, as well as some DDR5 SDRAM products (Y2BM series), infringe on 11 of YMTC’s patents or patent applications filed in the United States.
Notably, last November, YMTC also filed a lawsuit against Micron and its subsidiaries in the U.S. District Court for the Northern District of California, accusing them of infringing on eight U.S. patents related to 3D NAND Flash. Additionally, per a report from South China Morning Post on June 7th of this year, YMTC filed a lawsuit in California, accusing the Denmark-based consulting firm Strand Consult, funded by Micron, of spreading false information that damaged YMTC’s market reputation and business relationships.
Industry sources cited by the Commercial Times also note that in recent years, China’s technological capabilities have significantly improved, and companies have been actively applying for patents domestically and internationally. With the support of the Chinese government, they have also started to frequently engage in patent litigation. Last year, Chinese courts received 5,062 technical intellectual property and monopoly cases.
(Photo credit: YMTC)