DRAM


2024-06-06

[News] New Standard for DDR6 Memory to Come out Soon

JEDEC (the Solid State Technology Association) recently confirmed that the long-used SO-DIMM and DIMM memory standards will be replaced by CAMM2 for DDR6 (LPDDR6 included).

According to a report from WeChat account DRAMeXchange, the minimum data rate for DDR6 memory is 8800 MT/s, which can be increased to 17,600 MT/s, with a theoretical maximum of up to 21,000 MT/s, far surpassing DDR4 and DDR5 memory. CAMM2 is a brand-new memory form factor that also supports DDR6 standard memory, making it suitable for larger PC devices such as desktop PCs. JEDEC expects to complete the preliminary draft of the DDR6 memory standard within this year, with the official version 1.0 expected by 2Q25 at the earliest, and specific products likely arriving in 4Q25 or 2026.

LPDDR6 will adopt a new 24-bit wide channel design, with a maximum memory bandwidth of up to 38.4 GB/s, significantly higher than the existing LPDDR5 standard. LPDDR6's data rate ranges from a minimum of 10.667 Gbps to a maximum of 14.4 Gbps; even the lower bound matches the highest rate of LPDDR5X and far exceeds LPDDR5's 6.7 Gbps.
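As a rough illustration of how per-pin data rates relate to channel bandwidth, the sketch below multiplies the per-pin rate by the channel width in bits and divides by 8. This is a back-of-envelope conversion, not the JEDEC definition: the peak bandwidth figures quoted for LPDDR6 depend on how the standard defines channels and sub-channels, so the outputs here are illustrative only.

```python
def channel_bandwidth_gbs(rate_gbps: float, width_bits: int) -> float:
    """Back-of-envelope channel bandwidth in GB/s:
    per-pin data rate (Gbps) x channel width (bits) / 8 bits per byte."""
    return rate_gbps * width_bits / 8

# Illustrative only: the article's 24-bit LPDDR6 channel at its quoted
# minimum and maximum per-pin rates.
for rate in (10.667, 14.4):
    print(f"{rate} Gbps x 24 bits -> {channel_bandwidth_gbs(rate, 24):.1f} GB/s")
```

Note that the raw product of rate and width differs from the 38.4 GB/s peak quoted above, which is presumably defined against a specific channel configuration in the draft standard.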

Reportedly, a true CAMM2-standard LPDDR6 module, in a 32GB configuration for example, costs about USD 500, roughly five times the price of comparable LPDDR5 (SO-DIMM/DIMM) memory.

Considering market adoption, the industry believes that the new CAMM2 standard adopted by DDR6 requires large-scale replacement of existing production equipment, which will bring about a new cost structure. Meanwhile, the transition to new standards in the existing market will face high cost issues, which will restrict the large-scale adoption of DDR6 and LPDDR6.

Currently, upstream manufacturers like Samsung, SK hynix, and Micron already have some memory products supporting the CAMM2 standard. Among downstream brand manufacturers, Lenovo and Dell have also followed suit; Dell reportedly used CAMM2 memory boards in its enterprise product line as early as 2023.


(Photo credit: Samsung)

Please note that this article cites information from WeChat account DRAMeXchange.

2024-06-05

[News] Samsung Announced Breakthrough for Novel Memory Technology

Recently, Samsung Electronics announced that development of its 8nm eMRAM is almost complete and that process upgrades are underway as planned.

According to a report from WeChat account DRAMeXchange, eMRAM is a new type of non-volatile memory based on magnetic principles, falling under the category of embedded MRAM (magnetoresistive random-access memory). Compared to traditional DRAM, eMRAM offers faster access speeds and higher durability. Unlike DRAM, it does not require data refreshing, and its write rate is 1,000 times that of NAND.

Due to these characteristics, the industry holds a positive outlook on the potential of eMRAM, especially in scenarios that demand high performance, energy efficiency, and durability.

Samsung Electronics is one of the main producers of eMRAM and is dedicated to promoting its adoption in the automotive sector. In 2019, Samsung developed and mass-produced the industry’s first eMRAM based on 28nm FD-SOI. After achieving the production capability of 28nm eMRAM, Samsung reportedly plans to mass-produce 14nm eMRAM in 2024, 8nm in 2026, and 5nm in 2027.

Samsung is confident about the application of eMRAM in future automotive uses, stating that its product’s temperature tolerance has reached 150-160°C, which can fully meet the stringent requirements of the automotive industry for semiconductors.

In recent years, the proliferation of big data and artificial intelligence applications has generated massive memory demand and placed higher requirements on memory technologies. Against this backdrop, new memory technologies have continuously emerged. A representative category is SCM (Storage Class Memory), which combines the high-speed read/write performance of DRAM with the persistent storage capability of NAND Flash, potentially addressing DRAM's issues of small capacity, volatility, and high cost. Key SCM products include phase-change memory (PCM), resistive RAM (ReRAM), magnetoresistive RAM (MRAM), and nanotube RAM (NRAM).

Aside from Samsung, companies like Kioxia and ByteDance have also moved aggressively in the new memory field this year. In April, Kioxia's CTO Hidefumi Miyajima stated that, compared to competitors developing both NAND and DRAM, Kioxia is at a competitive disadvantage in business diversity, making the cultivation of new memory businesses such as SCM a necessity. With this goal in mind, Kioxia reorganized its "Memory Technology Research Laboratory" into the "Advanced Technology Research Laboratory."

In March, the South China Morning Post reported that ByteDance had invested in Chinese memory company Innostar, becoming its third-largest shareholder. Innostar focuses on the R&D of new memory technologies such as ReRAM, with related chip products covering three categories: high-performance industrial-control/automotive-grade SoC/ASIC chips, computing-in-memory (CIM) IP and chips, and system-on-memory (SoM) chips.


(Photo credit: Samsung)

Please note that this article cites information from WeChat account DRAMeXchange and South China Morning Post.

2024-06-03

[News] Heated Competition Driven by the Booming AI Market: A Quick Glance at HBM Giants’ Latest Moves, and What’s Next

To capture the booming demand of AI processors, memory heavyweights have been aggressively expanding HBM (High Bandwidth Memory) capacity, as well as striving to improve its yield and competitiveness. The latest development would be Micron’s reported new plant in Hiroshima Prefecture, Japan.

The fab, targeted to start producing chips, including HBM, as early as 2027, is reported to manufacture DRAM on the most advanced "1γ" (gamma; 11-12 nanometer) process, using extreme ultraviolet (EUV) lithography equipment.

Why is HBM such a hot topic, and why is it so important?

HBM: Solution to High Performance Computing; Perfectly Fitted for AI Chips

By applying 3D stacking technology, which enables multiple layers of chips to be stacked on top of each other, HBM's TSV (through-silicon via) process allows more memory chips to be packed into a smaller space, shortening the distance data needs to travel. This makes HBM perfectly suited to high-performance computing applications, which require fast data transfer. Additionally, replacing GDDR SDRAM or DDR SDRAM with HBM helps control energy consumption.

Thus, it is not surprising that AMD, the GPU heavyweight, collaborated with memory leader SK hynix to develop HBM in 2013. In 2015, AMD launched Fiji, the world's first high-end consumer GPU with HBM. In 2016, NVIDIA introduced the P100, its first AI server GPU with HBM.

Entering the Era of HBM3e

Years after the first AI server GPU with HBM was launched, NVIDIA has now incorporated HBM3e (the 5th-generation HBM) in its Hopper H200 model. The GPU giant's Blackwell GB200 and B100, which will also adopt HBM3e, are on the way and expected to launch in 2H24.

The current HBM3 supply for NVIDIA's H100 is primarily met by SK hynix. In March, the company reportedly started mass production of HBM3e and secured orders from NVIDIA. In May, yield details regarding HBM3e were revealed for the first time: according to the Financial Times, SK hynix has achieved a target yield of nearly 80%.

Samsung, on the other hand, made it into NVIDIA's supply chain with its 1Znm HBM3 products in late 2023 and received AMD MI300 certification in 1Q24. In March, Korean media outlet Alphabiz reported that Samsung may exclusively supply its 12-layer HBM3e to NVIDIA as early as September. Rumor has it, however, that the product failed NVIDIA's tests, though Samsung denied the claims, noting that testing is proceeding smoothly and as planned.

According to Korea JoongAng Daily, Micron has moved to catch up in the heated HBM3e competition. Following mass production in February, it has recently secured an order from NVIDIA for the H200.

On the demand side, TrendForce notes that HBM3e may become the market mainstream in 2024 and is expected to account for 35% of advanced process wafer input by the end of 2024.

HBM4 Coming Soon? Major Players Gear up for Rising Demand

As for the higher-spec HBM4, TrendForce expects its potential launch in 2026. With the push for higher computational performance, HBM4 is set to expand from the current 12-layer (12hi) to 16-layer (16hi) stacks. HBM4 12hi products are set for a 2026 launch, with 16hi in 2027.

The Big Three have all revealed product roadmaps for HBM4. SK hynix, according to reports from Wccftech and TheElec, plans to commence large-scale production of HBM4 in 2026. The chip will reportedly be the first from SK hynix made on its 10nm-class Gen 6 (1c) DRAM process.

As the current market leader in HBM, SK hynix is showing ambition in capacity expansion as well as industry collaboration. According to Nikkei, it is considering expanding investment in Japan and the US to increase HBM production and meet customer demand.

In April, it disclosed details of its collaboration with TSMC, under which SK hynix plans to adopt TSMC's advanced logic process (possibly CoWoS) for HBM4's base die so that additional functionality can be packed into the limited space.

Samsung, on the other hand, claims it will introduce HBM4 in 2025, according to Korea Economic Daily. The memory heavyweight stated at CES 2024 that its HBM chip production volume will increase 2.5 times compared to last year and is projected to double again next year. To meet booming demand, the company spent KRW 10.5 billion to acquire the plant and equipment of Samsung Display in Cheonan, South Korea, for HBM capacity expansion, and plans to invest KRW 700 billion to 1 trillion in building new packaging lines.

Meanwhile, Micron anticipates launching 12-layer and 16-layer HBM4 with capacities of 36GB to 48GB between 2026 and 2027. After 2028, HBM4e will be introduced, pushing maximum bandwidth beyond 2 TB/s and increasing stack capacity to 48GB-64GB.

Looking back at history, as market demand for AI chips keeps its momentum, GPU companies tend to diversify their sources, while memory giants vie for their favor by improving yields and product competitiveness.

In the HBM3 era, supply for NVIDIA's H100 solution was at first primarily met by SK hynix. Samsung's subsequent entry into NVIDIA's supply chain with its 1Znm HBM3 products in late 2023, though initially minor, signified a breakthrough in this segment. This trend of diversifying suppliers may continue with HBM4. Who will claim the lion's share of the next-gen HBM market? Time will tell.


(Photo credit: Samsung)

2024-05-30

[News] SK Hynix Disclosed Details Regarding HBM4e, Reportedly Integrating Computing and Caching Functionalities

As the demand for AI chips keeps booming, memory giants have been aggressive in their HBM roadmaps. SK hynix, with its leading market position in HBM3e, has now revealed more details regarding HBM4e. According to reports by Wccftech and ET News, SK hynix plans to further distinguish itself by introducing an HBM variant capable of supporting multiple functionalities including computing, caching, and network memory.

While this concept is still in the early stages, SK hynix has begun acquiring semiconductor design IPs to support its objectives, the aforementioned reports noted.

According to ET News, the memory giant intends to establish the groundwork for a versatile HBM with its forthcoming HBM4 architecture. The company reportedly plans to integrate a memory controller onboard, paving the way for new computing capabilities with its 7th-generation HBM4e memory.

By employing SK hynix’s technique, the package will become a unified unit. This will not only ensure faster transfer speeds due to significantly reduced structural gaps but also lead to higher power efficiencies, according to the reports.

Previously, in April, SK hynix announced that it has been collaborating with TSMC to produce next-generation HBM and to enhance logic-HBM integration through advanced packaging technology. Under the initiative, the company plans to proceed with the development of HBM4, slated for mass production from 2026.

As more details of HBM4 emerge, the memory heavyweight seems poised to extend its leading market position in HBM3 by addressing the semiconductor aspect of the HBM structure, Wccftech said.

According to TrendForce’s analysis earlier, as of early 2024, the current landscape of the HBM market is primarily focused on HBM3. NVIDIA’s upcoming B100 or H200 models will incorporate advanced HBM3e, signaling the next step in memory technology. The current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix, leading to a supply shortfall in meeting burgeoning AI market demands.

In late May, SK hynix disclosed yield details regarding HBM3e for the first time. According to a report from the Financial Times, the memory giant has cut the time needed for mass production of HBM3e chips by 50% and is close to achieving its target yield of 80%.


(Photo credit: SK hynix)

Please note that this article cites information from Wccftech and ET News.

2024-05-29

[Insights] Memory Spot Price Update: Price Dropped Further; NAND Demand Momentum Expected to Pick Up in Q3

According to TrendForce’s latest memory spot price trend report, weak demand is putting more and more pressure on sellers, causing spot prices of modules and chips to record a sharper fall than the week before. As for NAND Flash, spot prices continued to drop this week, and suppliers are hoping that the traditional peak season (3Q24) will generate additional demand. Details are as follows:

DRAM Spot Price:

Trading activities have further slowed down in the spot market compared with last week. Since it is now near the end of May, the weak demand situation is putting more and more pressure on sellers, and spot prices of modules and chips continue to fall. The average spot price of mainstream chips (i.e., DDR4 1Gx8 2666MT/s) has dropped by 0.42% from US$1.917 last week to US$1.909 this week.

NAND Flash Spot Price:

The spot market is seeing sell-offs as suppliers come under funding pressure from excessive inventory, and chaotic price negotiations have widened in scale, though price inquiries and transactions remain limited amid lethargic demand. Suppliers are thus hoping that the traditional peak season (3Q24) will generate additional demand and help the market digest further inventory. Spot prices, as a result, continued to drop this week. 512Gb TLC wafers have dropped by 2.79% in spot price this week, arriving at US$3.479.
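As a quick sanity check on the quoted week-over-week moves, the sketch below recomputes the DRAM percentage change from the two prices given in the text. The prior-week 512Gb TLC wafer price is not stated, so the figure back-calculated for it here is an inference, not a reported number.

```python
def pct_change(prev: float, curr: float) -> float:
    """Week-over-week percentage change; negative indicates a drop."""
    return (curr - prev) / prev * 100

# DDR4 1Gx8 2666MT/s average spot price: US$1.917 -> US$1.909
ddr4_move = pct_change(1.917, 1.909)
print(f"DDR4 chip move: {ddr4_move:.2f}%")  # ~ -0.42%, matching the report

# 512Gb TLC wafer: a 2.79% drop arriving at US$3.479 implies a
# prior-week price of roughly curr / (1 - drop) -- an inferred figure.
implied_prev = 3.479 / (1 - 0.0279)
print(f"Implied prior 512Gb TLC wafer price: ~US${implied_prev:.3f}")
```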
