DRAM


2024-07-31

[News] SK Hynix Launches the World’s Highest-Performance GDDR7

On July 30, 2024, SK hynix announced the launch of its next-generation memory product, GDDR7, claiming the world’s highest performance.

SK hynix explained that GDDR is characterized by performance designed specifically for graphics processing and by its high speed, and it has been gaining increasing traction among global AI application customers. In response to this trend, the company completed the development of the latest GDDR7 specifications in March this year; the product has now been officially launched and will reach mass production in the third quarter of this year.

SK hynix’s GDDR7 features an operating speed of up to 32Gbps (32 gigabits per second per pin), an increase of more than 60% over the previous generation, and can reach 40Gbps depending on the usage environment. Built into the latest graphics cards, it can support data processing speeds of over 1.5TB per second, equivalent to processing 300 FHD movies (5GB each) in one second.
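The headline numbers above can be sanity-checked with simple arithmetic. The sketch below assumes a 384-bit memory bus, a common width on recent high-end graphics cards but not something the article specifies, to show how a 32Gbps per-pin rate yields roughly 1.5TB/s of system bandwidth:

```python
# Back-of-the-envelope check of the GDDR7 figures above.
# Assumption (not stated in the article): a 384-bit graphics memory bus.
PIN_SPEED_GBPS = 32      # per-pin data rate, gigabits per second
BUS_WIDTH_BITS = 384     # assumed total bus width

# Total bandwidth: bits per second across the whole bus, converted to bytes.
system_bw_gbps = PIN_SPEED_GBPS * BUS_WIDTH_BITS   # 12,288 Gb/s
system_bw_gbs = system_bw_gbps / 8                 # 1,536 GB/s, i.e. ~1.5 TB/s

# 5GB per FHD movie, per the article.
movies_per_second = system_bw_gbs / 5

print(f"{system_bw_gbs:.0f} GB/s, ~{movies_per_second:.0f} movies/s")
```

Under these assumptions the result is 1,536 GB/s and about 307 movies per second, consistent with the article’s “over 1.5TB per second” and “300 FHD movies” figures.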

In addition to faster speeds, GDDR7 boasts energy efficiency 50% higher than the previous generation. To address the chip heating issues caused by ultra-high-speed data processing, SK hynix adopted a new packaging technology in developing this product.

SK hynix’s technical team maintained the product size while increasing the number of heat-dissipating layers in the packaging substrate from four to six, and used a highly thermally conductive epoxy molding compound (EMC) as the packaging material. As a result, the team reduced the thermal resistance of the product by 74% compared to the previous generation.

Lee Sang-kwon, Vice President of SK hynix DRAM PP&E, said that SK hynix’s GDDR7 has achieved the highest performance of existing memory chips with excellent speed and energy efficiency, and its applications will expand from high-performance 3D graphics to AI, HPC, and autonomous driving.

Through this product, the company aims to further strengthen its high-end memory product line and develop into the most trusted AI memory solutions provider for its customers.


(Photo credit: SK hynix)

Please note that this article cites information from WeChat account DRAMeXchange.
2024-07-31

[News] A New Solution for AI, the Power Monster? CRAM Reportedly to Reduce Energy Consumption by 1,000 Times

As AI applications become more widespread, there is an urgent need to improve their energy efficiency. Traditional AI processing is notoriously power-hungry due to the constant transfer of data between logic and memory. However, according to reports by Tom’s Hardware and Innovation News Network, researchers in the U.S. may have come up with a solution: computational random-access memory (CRAM), which is said to reduce AI energy consumption by 1,000 times or more.

According to the reports, researchers at the University of Minnesota, after over 20 years of research, have developed a new generation of computational memory that can significantly reduce energy consumption in AI applications.

Citing the research, Tom’s Hardware explains that in current AI computing, data is frequently transferred between processing components (logic) and storage (memory). This constant back-and-forth movement can consume up to 200 times more energy than the computation itself.

However, with the so-called CRAM, data can be processed entirely within the memory array without having to leave the grid where it is stored. Computations can be performed directly within memory cells, eliminating the slow and energy-intensive data transfers common in traditional architectures.
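A toy model makes the scale of the argument concrete. Assuming, per the figure cited above, that moving an operand between memory and logic costs up to 200 times the energy of the operation itself (a cited upper bound, not a measured constant), eliminating the transfer removes almost all of the per-operation energy:

```python
# Toy per-operation energy model for the data-movement argument above.
# Units are arbitrary; 200x is the upper bound cited by Tom's Hardware.
COMPUTE = 1.0               # energy of the arithmetic itself
TRANSFER = 200 * COMPUTE    # worst-case cost of moving the data

conventional = COMPUTE + TRANSFER   # logic plus memory round trip
cram = COMPUTE                      # computation happens inside the memory array

saving = conventional / cram
print(f"~{saving:.0f}x")            # saving from eliminating transfers alone
```

Data movement alone accounts for a roughly 200x factor in this model; the 1,000x-plus figures reported for CRAM also reflect device-level efficiency gains beyond it.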

According to Innovation News Network, machine learning inference accelerators based on CRAM could achieve energy savings of up to 1,000 times, with some applications realizing reductions of 2,500 and 1,700 times compared to conventional methods.

The reports further note that the patented technology is based on Magnetic Tunnel Junctions (MTJs), nanostructured devices used in hard drives, sensors, and other microelectronic systems, including Magnetic Random Access Memory (MRAM).

It is worth noting that among Taiwanese companies, NOR flash memory maker Macronix may have made the most progress. According to a report by the Economic Daily, Macronix has been collaborating with IBM on phase-change memory technology for over a decade, with AI applications as the main focus. Currently, Macronix is IBM’s sole partner for phase-change memory.

The report notes that the joint development program between Macronix and IBM is organized in three-year phases. At the end of each phase, the two companies decide whether to sign a new agreement based on the situation.


(Photo credit: npj Unconventional Computing)

Please note that this article cites information from Tom’s Hardware, Innovation News Network, and Economic Daily News.
2024-07-29

[News] MRDIMM/MCRDIMM to Be the New Sought-After Products in the Memory Field

Amidst the tide of artificial intelligence (AI), new types of DRAM represented by HBM are embracing a new round of development opportunities. Meanwhile, driven by server demand, MRDIMM/MCRDIMM have emerged as sought-after products in the memory industry, stepping onto the historical stage.

According to a report from WeChat account DRAMeXchange, the rapid development of AI and big data is currently driving an increase in the number of CPU cores in servers. To meet the data throughput requirements of each core in multi-core CPUs, it is necessary to significantly increase the bandwidth of memory systems. In this context, high-bandwidth memory modules for servers, MRDIMM/MCRDIMM, have emerged.

  • JEDEC Announces Details of the DDR5 MRDIMM Standard

On July 22, JEDEC announced that it will soon release the DDR5 Multiplexed Rank Dual Inline Memory Module (MRDIMM) and next-generation LPDDR6 Compression Attached Memory Module (CAMM) advanced memory module standards, and it introduced key details of the two memory types, aiming to support the development of next-generation HPC and AI. Both new technical specifications were developed by JEDEC’s JC-45 DRAM Module Committee.

As a follow-up to JEDEC’s JESD318 CAMM2 memory module standard, JC-45 is developing the next-generation CAMM module for LPDDR6, with a target maximum speed of over 14.4GT/s. According to the plan, this module will also provide 24-bit wide subchannels, 48-bit wide channels, and support for a “connector array” to meet the needs of future HPC and mobile devices.

DDR5 MRDIMM supports multiplexed ranks, which combine and transmit multiple data signals on a single channel, effectively increasing bandwidth without additional physical connections. JEDEC has reportedly planned multiple generations of DDR5 MRDIMM, with the ultimate goal of raising its bandwidth to 12.8Gbps, doubling the current 6.4Gbps of DDR5 RDIMM memory while improving pin speed.

In JEDEC’s vision, DDR5 MRDIMM will utilize the same pins, SPD, PMIC, and other designs as existing DDR5 DIMMs, be compatible with the RDIMM platform, and leverage the existing LRDIMM ecosystem for design and testing.

JEDEC stated that these two new technical specifications are expected to bring a new round of technological innovation to the memory market.

  • Micron’s MRDIMM DDR5 to Start Mass Shipment in 2H24

In March 2023, AMD announced at the MemCon 2023 event that it was collaborating with JEDEC to develop a new DDR5 MRDIMM standard memory, targeting a transfer rate of up to 17600 MT/s. According to a report from Tom’s Hardware at the time, the first generation of DDR5 MRDIMM aims for a rate of 8800 MT/s, which will gradually increase, with the second generation set to reach 12800 MT/s, and the third generation 17600 MT/s.

MRDIMM, short for “Multiplexed Rank DIMM,” integrates two DDR5 DIMMs into one, thereby providing double the data transfer rate while allowing access to two ranks.

On July 16, memory giant Micron announced the launch of the new MRDIMM DDR5, which is currently sampling and will provide ultra-large capacity, ultra-high bandwidth, and ultra-low latency for AI and HPC applications. Mass shipment is set to begin in the second half of 2024.

According to Micron, MRDIMM offers the highest bandwidth, largest capacity, lowest latency, and best performance per watt of its module lineup, outperforming current TSV RDIMMs in accelerating memory-intensive multi-tenant virtualization, HPC, and AI data center workloads.

Compared to traditional RDIMM DDR5, MRDIMM DDR5 can achieve an effective memory bandwidth increase of up to 39%, a bus efficiency improvement of over 15%, and a latency reduction of up to 40%.

MRDIMM supports capacity options ranging from 32GB to 256GB, covering both standard and tall form factor (TFF) specifications, making it suitable for high-performance 1U and 2U servers. The 256GB TFF MRDIMM outperforms a TSV RDIMM of similar capacity by 35%.

This new memory product is the first generation of Micron’s MRDIMM series and will be compatible with Intel Xeon processors. Micron stated that subsequent generations of MRDIMM products will continue to offer 45% higher single-channel memory bandwidth compared to their RDIMM counterparts.

  • SK hynix to Launch MCRDIMM Products in 2H24

As one of the world’s largest memory manufacturers, SK hynix already introduced a product similar to MRDIMM, called MCRDIMM, even before AMD and JEDEC.

MCRDIMM, short for “Multiplexer Combined Ranks Dual In-line Memory Module,” is a module that combines multiple DRAMs on a substrate and operates the module’s two basic information processing units, the ranks, simultaneously.

Source: SK hynix

In late 2022, SK hynix partnered with Intel and Renesas to develop the DDR5 MCR DIMM, which became the fastest server DRAM product in the industry at the time. As per Chinese IC design company Montage Technology’s 2023 annual report, MCRDIMM can also be considered the first generation of MRDIMM.

Traditional DRAM modules can only transfer 64 bytes of data to the CPU at a time, while SK hynix’s MCRDIMM module can transfer 128 bytes by running two memory ranks simultaneously. This increase in the amount of data transferred per access boosts the data transfer speed to over 8Gbps, double that of a single DRAM.
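The doubling described above can be sketched as a simple multiplexing model, in which a data buffer (the role played by the MCR buffer here, or the MRCD/MDB chips in MRDIMM) interleaves one 64-byte burst from each rank onto the channel in a single access. This is an illustrative model, not SK hynix’s actual implementation:

```python
# Illustrative model of rank multiplexing: each rank supplies one
# 64-byte burst, and the data buffer combines them into a single
# wider transfer toward the CPU.
BURST_BYTES = 64

def bytes_per_access(active_ranks: int) -> int:
    """Bytes delivered per memory access when `active_ranks` ranks
    operate simultaneously behind the buffer."""
    return BURST_BYTES * active_ranks

print(bytes_per_access(1))   # traditional module: 64 bytes per access
print(bytes_per_access(2))   # MCRDIMM: 128 bytes per access
```

Because the two ranks work in parallel, the channel delivers twice the data per access, which is where the doubled effective transfer rate comes from.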

At that time, SK hynix anticipated that the market for MCR DIMM would gradually open up, driven by the demand for increased memory bandwidth in HPC. According to SK hynix’s FY2024 Q2 financial report, the company will launch 32Gb DDR5 DRAM for servers and MCRDIMM products for HPC in 2H24.

  • MRDIMM Boasts a Brilliant Future

MCRDIMM/MRDIMM adopts the DDR5 LRDIMM “1+10” architecture, requiring one MRCD chip and ten MDB chips. Conceptually, MCRDIMM/MRDIMM allows parallel access to two ranks within the same DIMM, increasing the capacity and bandwidth of the DIMM module by a large margin.

Compared to RDIMM, MCRDIMM/MRDIMM can offer higher bandwidth while maintaining good compatibility with the existing mature RDIMM ecosystem. Additionally, MCRDIMM/MRDIMM is expected to enable much higher overall server performance and lower total cost of ownership (TCO) for enterprises.

MRDIMM and MCRDIMM both fall under the category of DRAM memory modules, which serve different application scenarios from HBM and have their own independent market space. As an industry-standard packaged memory, HBM can achieve higher bandwidth and energy efficiency at a given capacity and in a smaller size. However, due to its high cost, small capacity, and lack of scalability, its application is limited to a few fields. Thus, from an industry perspective, memory modules remain the mainstream solution where large capacity, cost-effectiveness, and scalability are required.

Montage Technology believes that, based on its high bandwidth and large capacity advantages, MRDIMM is likely to become the preferred main memory solution for future AI and HPC. As per JEDEC’s plan, the future new high-bandwidth memory modules for servers, MRDIMM, will support even higher memory bandwidth, further matching the bandwidth demands of HPC and AI application scenarios.


(Photo credit: SK hynix)

Please note that this article cites information from Tom’s Hardware, Micron and WeChat account DRAMeXchange.

2024-07-26

[News] SK hynix Approves USD 6.8 Billion Investment for its First Fab in Yongin, Targeting Next-gen DRAMs

South Korean memory giant SK hynix announced on July 26th that it has decided to invest about 9.4 trillion won (approximately USD 6.8 billion) in building the first fab and business facilities of the Yongin Semiconductor Cluster after the board resolution today, according to its press release.

The company plans to start construction of the fab in March next year and complete it in May 2027, while the investment period is planned to run from August 2024 to the end of 2028, SK hynix states.

The company will produce next-generation DRAMs, including HBM, at the first fab, and prepare for production of other products in line with market demand at the time of completion.

“The Yongin Cluster will be the foundation for SK hynix’s mid- to long-term growth and a place for innovation and co-prosperity that we are creating with our partners,” said Vice President Kim Young-sik, Head of Manufacturing Technology at SK hynix. “We want to contribute to revitalizing the national economy by successfully completing the large-scale industrial complex and dramatically enhancing Korea’s semiconductor technology and ecosystem competitiveness,” according to the press release.

The Yongin Cluster, which will be built on a 4.15 million square meter site in Wonsam-myeon, Yongin, Gyeonggi Province, is currently under site preparation and infrastructure construction. SK hynix has decided to build four state-of-the-art fabs that will produce next-generation semiconductors, and a semiconductor cooperation complex with more than 50 small local companies.

After the construction of the 1st fab, the company aims to complete the remaining three fabs sequentially to grow the Yongin Cluster into a “Global AI semiconductor production base,” the press release notes.

The 9.4 trillion won investment approved this time includes various construction costs necessary for the initial operation of the cluster, covering auxiliary facilities, business support buildings, and welfare facilities along with the first fab.

In addition, SK hynix plans to build a “Mini-fab” within the first phase to help small businesses develop, demonstrate, and evaluate technologies. Through the Mini-fab, the company will provide small business partners with an environment similar to the actual production site so that they can raise the maturity of their technologies as much as possible.


(Photo credit: SK hynix)

Please note that this article cites information from SK hynix.
2024-07-26

[News] Battle Between Memory Giants Heats up in 2H24 as Samsung and SK hynix Advance in HBM3/HBM3e

As SK hynix and Samsung release their financial results on July 25th and July 31st, respectively, their progress on HBM3 and HBM3e has also been brought into the spotlight. Earlier this week, Samsung was said to have finally passed NVIDIA’s qualification tests for its HBM3 chips. With the Big Three in the memory sector now almost on the same page, the HBM3/HBM3e battle is expected to intensify in the second half of 2024.

Samsung Takes a Big Leap

According to reports from Reuters and the Korea Economic Daily, Samsung’s HBM3 chips have been cleared by NVIDIA, which will initially be used exclusively in the AI giant’s H20, a less advanced GPU tailored for the Chinese market. Citing sources familiar with the matter, the reports note that Samsung may begin supplying HBM3 to NVIDIA as early as August.

However, as the U.S. is reportedly considering implementing new trade sanctions on China in October, looking to further limit China’s access to advanced AI chip technology, NVIDIA’s HGX-H20 AI GPUs might face a sales ban. Whether, and to what extent, Samsung’s momentum would be impacted remains to be seen.

SK hynix Expects HBM3e to Account for Over 50% of Total HBM Shipments

SK hynix, the current HBM market leader, has expressed optimism about defending its throne. According to a report by Business Korea citing Kim Woo-hyun, vice president and chief financial officer of SK hynix, the company significantly expanded its HBM3e shipments in the second quarter as demand surged.

Moreover, SK hynix reportedly expects its HBM3e shipments to surpass those of HBM3 in the third quarter, with HBM3e accounting for more than half of the total HBM shipments in 2024.

SK hynix started mass production of the 8-layer HBM3e for NVIDIA in March, and now it is also confident about the progress on the 12-layer HBM3e. According to Business Korea, the company expects to begin supplying 12-layer HBM3e products to its customers in the fourth quarter. In addition, it projects the supply of 12-layer products to surpass that of 8-layer products in the first half of 2025.

Micron Expands at Full Throttle

Micron, on the other hand, has reportedly started mass production of 8-layer HBM3e in February, according to a previous report from Korea Joongang Daily. The company is also reportedly planning to complete preparations for mass production of 12-layer HBM3e in the second half and supply it to major customers like NVIDIA in 2025.

Targeting to achieve a 20% to 25% market share in HBM by 2025, Micron is said to be building a pilot production line for HBM in the U.S. and is considering producing HBM in Malaysia for the first time to capture more demand from the AI boom, a report by Nikkei notes. Micron’s largest HBM production facility is located in Taichung, Taiwan, where expansion efforts are also underway.

Earlier in May, a report from a Japanese media outlet The Daily Industrial News also indicated that Micron planned to build a new DRAM plant in Hiroshima, with construction scheduled to begin in early 2026 and aiming for completion of plant buildings and first tool-in by the end of 2027.

TrendForce’s latest report on the memory industry reveals that DRAM revenue is expected to see a significant increase of 75% in 2024, driven by the rise of high-value products like HBM. As the market keeps booming, will Samsung come from behind and take the lead on the HBM3e battleground? Or will SK hynix defend its throne? The progress of 12-layer HBM3e may be a key factor to watch.


(Photo credit: Samsung)

Please note that this article cites information from Reuters and Business Korea.
