2024-06-06

[News] New Standard for DDR6 Memory to Come out Soon

JEDEC (the Solid State Technology Association) recently confirmed that the long-used SO-DIMM and DIMM memory standards will be replaced by CAMM2 for DDR6 (LPDDR6 included).

According to a report from WeChat account DRAMeXchange, DDR6 memory will start at a data rate of 8800 MT/s, which can be raised to 17,600 MT/s, with a theoretical maximum of up to 21,000 MT/s, far surpassing DDR4 and DDR5 memory. CAMM2 is a brand-new memory module standard that also supports DDR6, making it suitable for larger PC devices such as desktop PCs. JEDEC expects to complete the preliminary draft of the DDR6 memory standard within this year, with the official version 1.0 expected by 2Q25 at the earliest, and specific products likely arriving in 4Q25 or 2026.

LPDDR6 will adopt a new 24-bit wide channel design, with a maximum memory bandwidth of up to 38.4 GB/s, significantly higher than the existing LPDDR5 standard. LPDDR6's data rate ranges from 10.667 Gbps to 14.4 Gbps, so even its minimum rate matches the highest rate of LPDDR5X and far exceeds LPDDR5's 6.7 Gbps.
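As a back-of-envelope check on how these figures relate, peak channel bandwidth is simply the per-pin data rate times the channel width, divided by eight. Below is a minimal Python sketch using the cited 24-bit channel; note that the per-pin rate implied by the 38.4 GB/s figure is our inference, not a number stated in the report.

```python
# Peak bandwidth of one memory channel, in GB/s:
# bandwidth = per-pin rate (Gbps) * channel width (bits) / 8
def channel_bandwidth_gbs(rate_gbps: float, width_bits: int) -> float:
    return rate_gbps * width_bits / 8

# Cited LPDDR6 rates on the cited 24-bit channel:
print(channel_bandwidth_gbs(10.667, 24))  # ~32.0 GB/s at the minimum rate
print(channel_bandwidth_gbs(14.4, 24))    # ~43.2 GB/s at the maximum rate

# The cited 38.4 GB/s maximum corresponds to 12.8 Gbps on a 24-bit channel
# (12.8 * 24 / 8 = 38.4); this is an inference, since the report does not
# say which per-pin rate that bandwidth figure assumes.
print(channel_bandwidth_gbs(12.8, 24))    # 38.4 GB/s
```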

Reportedly, a genuine CAMM2-standard LPDDR6 module (a 32GB module, for example) costs about USD 500, roughly five times the price of comparable LPDDR5 (SO-DIMM/DIMM) memory.

Considering market adoption, the industry believes that the new CAMM2 standard adopted by DDR6 requires large-scale replacement of existing production equipment, bringing with it a new cost structure. Meanwhile, migrating the existing market to the new standard will involve high costs, which will restrict the large-scale adoption of DDR6 and LPDDR6.

Currently, upstream manufacturers like Samsung, SK hynix, and Micron already have some memory products supporting the CAMM2 standard. Among downstream brand manufacturers, Lenovo and Dell have followed suit; Dell reportedly used CAMM2 memory boards in its enterprise product line in 2023.

(Photo credit: Samsung)

Please note that this article cites information from WeChat account DRAMeXchange.

2024-06-03

[News] Heated Competition Driven by the Booming AI Market: A Quick Glance at HBM Giants’ Latest Moves, and What’s Next

To capture the booming demand for AI processors, memory heavyweights have been aggressively expanding HBM (High Bandwidth Memory) capacity while striving to improve yields and competitiveness. The latest development is Micron's reported new plant in Hiroshima Prefecture, Japan.

The fab, which targets producing chips, including HBM, as early as 2027, is reported to manufacture DRAM on the most advanced "1γ" (1-gamma; 11-12 nanometer) process, using extreme ultraviolet (EUV) lithography equipment.

Why is HBM such a hot topic, and why is it so important?

HBM: A Solution for High-Performance Computing, Perfectly Fitted for AI Chips

By applying 3D stacking technology, which enables multiple layers of chips to be stacked on top of each other, HBM's TSV (through-silicon via) process packs more memory chips into a smaller space, shortening the distance data needs to travel. This makes HBM perfectly fitted to high-performance computing applications, which require fast data transfer. Additionally, replacing GDDR SDRAM or DDR SDRAM with HBM helps control energy consumption.
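To make the wide-versus-fast trade-off concrete, the sketch below compares per-device peak bandwidth for HBM's wide, moderately clocked interface against GDDR's narrow, fast one. The per-pin rates and bus widths are illustrative, generation-typical values (a 1024-bit HBM3 stack at 6.4 Gbps per pin, a 32-bit GDDR6 device at 16 Gbps per pin), not figures from this article.

```python
# Illustrative comparison of interface styles (values are generation-typical
# assumptions, not figures from the article):
#   HBM3:  1024-bit interface per stack at ~6.4 Gbps per pin
#   GDDR6: 32-bit interface per device at ~16 Gbps per pin
def peak_bandwidth_gbs(width_bits: int, rate_gbps: float) -> float:
    return width_bits * rate_gbps / 8

hbm3_stack = peak_bandwidth_gbs(1024, 6.4)  # ~819.2 GB/s per stack
gddr6_chip = peak_bandwidth_gbs(32, 16.0)   # ~64.0 GB/s per device

# A very wide bus at a moderate rate moves far more data than a narrow,
# fast one, which is why TSV-stacked dies suit HPC workloads.
print(f"HBM3 stack: {hbm3_stack:.1f} GB/s; GDDR6 device: {gddr6_chip:.1f} GB/s")
```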

Thus, it is not surprising that AMD, the GPU heavyweight, collaborated with memory leader SK hynix to develop HBM in 2013. In 2015, AMD launched the world's first high-end consumer GPU with HBM, codenamed Fiji, and in 2016, NVIDIA introduced the P100, its first AI server GPU with HBM.

Entering the Era of HBM3e

Years after the first AI server GPU with HBM was launched, NVIDIA has now incorporated HBM3e (the fifth-generation HBM) in its Hopper H200, while its Blackwell GB200 and B100, which will also adopt HBM3e, are on the way and expected to launch in 2H24.

The current HBM3 supply for NVIDIA's H100 is primarily met by SK hynix. In March, the company reportedly started mass production of HBM3e and secured orders from NVIDIA. In May, yield details on HBM3e were revealed for the first time: according to the Financial Times, SK hynix has achieved a target yield of nearly 80%.

On the other hand, Samsung made it into NVIDIA's supply chain with its 1Znm HBM3 products in late 2023 and received AMD MI300 certification by 1Q24. In March, Korean media outlet Alphabiz reported that Samsung may exclusively supply its 12-layer HBM3e to NVIDIA as early as September. However, rumor has it that its chips failed NVIDIA's tests, though Samsung denied the claims, noting that testing is proceeding smoothly and as planned.

According to Korea Joongang Daily, Micron has moved aggressively to catch up in the heated HBM3e competition. Following mass production in February, it has recently secured an order from NVIDIA for the H200.

Regarding demand, TrendForce notes that HBM3e may become the market mainstream in 2024, expected to account for 35% of advanced-process wafer input by the end of the year.

HBM4 Coming Soon? Major Players Gear up for Rising Demand

As for the higher-spec HBM4, TrendForce expects a potential launch in 2026. With the push for higher computational performance, HBM4 is set to expand from the current 12-layer (12hi) stacks to 16-layer (16hi) stacks: 12hi products are slated for a 2026 launch, with 16hi following in 2027.

The Big Three have all revealed product roadmaps for HBM4. According to reports from Wccftech and TheElec, SK hynix plans to commence large-scale production of HBM4 in 2026; the chip will reportedly be SK hynix's first made on its 10nm-class Gen 6 (1c) DRAM process.

As the current market leader in HBM, SK hynix is showing its ambition in capacity expansion as well as industry collaboration. According to Nikkei, it is considering expanding investment in Japan and the US to increase HBM production and meet customer demand.

In April, it disclosed details of its collaboration with TSMC, under which SK hynix plans to adopt TSMC's advanced logic process for HBM4's base die so that additional functionality can be packed into limited space; the partnership also reportedly covers optimizing the integration of HBM4 with TSMC's CoWoS packaging.

Samsung, on the other hand, plans to introduce HBM4 in 2025, according to Korea Economic Daily. The memory heavyweight stated at CES 2024 that its HBM chip production volume will increase 2.5 times compared to last year and is projected to double again next year. To meet booming demand, the company spent KRW 10.5 billion to acquire the plant and equipment of Samsung Display in Cheonan, South Korea, for HBM capacity expansion. It also plans to invest KRW 700 billion to 1 trillion in building new packaging lines.

Meanwhile, Micron anticipates launching 12-layer and 16-layer HBM4 with capacities of 36GB to 48GB between 2026 and 2027. After 2028, HBM4e will be introduced, pushing the maximum bandwidth beyond 2 TB/s and increasing stack capacity to 48GB to 64GB.
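The capacity range quoted for HBM4 follows directly from per-die density times stack height. Here is a quick sanity check, assuming (as the 36GB 12-layer figure implies) 3GB (24Gb) DRAM dies; the article itself does not state the die density.

```python
# Stack capacity = per-die capacity * number of layers.
die_gb = 36 / 12    # 3.0 GB (24Gb) per die, inferred from the 36GB 12hi figure
print(16 * die_gb)  # 48.0 GB for a 16hi stack of the same dies

# The 64GB upper end cited for HBM4e would then imply denser dies,
# e.g. 16 layers of 4GB (32Gb) dies; that is an assumption on our part.
print(16 * 4)       # 64 GB
```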

Looking back at history, as market demand for AI chips keeps its momentum, GPU companies tend to diversify their sources, while memory giants vie for their favor by improving yields and product competitiveness.

In the era of HBM3, the supply for NVIDIA's H100 solution was at first primarily met by SK hynix. Samsung's subsequent entry into NVIDIA's supply chain with its 1Znm HBM3 products in late 2023, though initially minor, signified a breakthrough in this segment. This trend of supplier diversification may continue with HBM4. Who will claim the lion's share of the next-gen HBM market? Time will tell.

(Photo credit: Samsung)

2024-05-29

[News] LPDDR6's Bandwidth Expected to Increase by over 100%

Currently, low power consumption remains a key concern in the industry. According to a recent report by the International Energy Agency (IEA), an average Google search requires 0.3 Wh while each request to OpenAI's ChatGPT consumes 2.9 Wh, so serving the 9 billion searches conducted daily at ChatGPT-like cost would require an additional 10 terawatt-hours (TWh) of electricity annually. Based on projected AI server sales, the AI industry might see exponential growth in 2026, with power consumption needs at least ten times those of last year.
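The IEA figure can be sanity-checked with simple arithmetic: multiply the per-query energy difference by the daily query volume and annualize. Below is a minimal sketch; reading the cited 10 TWh as the cost of serving every search at ChatGPT-like energy per query is our interpretation of the report.

```python
# Sanity check of the cited IEA numbers.
google_wh = 0.3         # Wh per average Google search (cited)
chatgpt_wh = 2.9        # Wh per ChatGPT request (cited)
searches_per_day = 9e9  # daily searches (cited)

extra_twh_per_year = (chatgpt_wh - google_wh) * searches_per_day * 365 / 1e12
print(extra_twh_per_year)  # ~8.5 TWh/year, the same order as the cited 10 TWh
```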

Ahmad Bahai, CTO of Texas Instruments, stated in an earlier Business Korea report that AI services have recently expanded beyond the cloud to mobile and PC devices, leading to a surge in power consumption and making low-power design a hot topic.

In response to market demands, the industry is actively developing semiconductors with lower power consumption. Among memory products, the development of LPDDR and related products such as the Low Power Compression Attached Memory Module (LPCAMM) is accelerating. These products are particularly suitable for energy conservation in mobile devices with limited battery capacity. Additionally, the expansion of AI applications into the server and automotive fields is driving increased use of LPDDR to reduce power consumption.

Among the major companies, Micron, Samsung Electronics, and SK hynix are speeding up development of the next generation of LPDDR. Recently, Micron announced the launch of Crucial LPCAMM2, which is 64% smaller and 58% more power-efficient than existing modules. It is a type of LPCAMM: a dedicated low-power packaging module built around the latest LPDDR products (LPDDR5X). LPCAMM was first introduced by Samsung Electronics last year and is expected to see significant market growth this year.

Currently, the Joint Electron Device Engineering Council (JEDEC) plans to complete the LPDDR6 specification within this year. According to industry news cited by Korean media outlet BusinessKorea, LPDDR6 is expected to enter commercialization next year. The industry predicts that LPDDR6's bandwidth may more than double that of the previous generation.

(Photo credit: SK Hynix)

Please note that this article cites information from WeChat account DRAMeXchange.

2024-05-28

[News] Micron Reportedly Set to Build New DRAM Plant in Hiroshima, Japan, Expected to Be Operational by End of 2027

According to a report from Japanese media outlet The Daily Industrial News, Micron Technology plans to build a new plant in Hiroshima Prefecture, Japan, for the production of DRAM chips, aiming to begin operations as early as the end of 2027.

The report estimates the total investment at between JPY 600 billion and 800 billion (up to roughly USD 5.1 billion). Construction of the new plant is scheduled to begin in early 2026, with the installation of extreme ultraviolet (EUV) lithography equipment.

The Japanese government has approved subsidies of up to JPY 192 billion (roughly USD 1.3 billion) to support Micron’s production of next-generation chips at its Hiroshima plant. The Ministry of Economy, Trade and Industry stated last year that this funding would help Micron incorporate ASML’s EUV equipment, with these chips being crucial for powering generative AI, data centers, and autonomous driving technology.

Micron initially planned to have the new plant operational by 2024, but this schedule has evidently been adjusted due to unfavorable market conditions. Micron, which acquired Japanese DRAM giant Elpida in 2013, employs over 4,000 engineers and technicians in Japan.

Beyond 2025, Japan is set to witness the emergence of several new plants, including Micron Technology’s new 1-gamma (1γ) DRAM production facility in Hiroshima Prefecture.

JSMC, a foundry subsidiary of Powerchip Semiconductor Manufacturing Corporation (PSMC), is collaborating with Japan’s financial group SBI to complete construction by 2027 and begin chip production thereafter.

Additionally, Japanese semiconductor startup Rapidus plans to commence production of 2-nanometer chips in Hokkaido by 2027.

Japan’s resurgence in the semiconductor arena is palpable, with the Ministry of Economy, Trade, and Industry fostering multi-faceted collaborations with the private sector. With a favorable exchange rate policy aiding factory construction and investments, the future looks bright for exports.

However, the looming shortage of semiconductor talent in Japan is a concern. In response, there are generous subsidy programs for talent development.

(Photo credit: Micron)

Please note that this article cites information from The Daily Industrial News.

2024-05-24

[News] Reasons for Samsung’s HBM Chips Failing Nvidia Tests Revealed, Reportedly Due to Heat and Power Consumption Issues

Samsung's latest high bandwidth memory (HBM) chips have reportedly failed Nvidia's tests, with the reasons now revealed for the first time. According to the latest report by Reuters, the failure is said to be due to issues with heat and power consumption.

Citing sources familiar with the matter, Reuters noted that the issues may affect Samsung's HBM3 chips as well as its next-generation HBM3e chips, which the company and its competitors, SK hynix and Micron, plan to launch later this year.

In response to the concerns raised over heat and power consumption in its HBM chips, Samsung stated that its HBM testing is proceeding as planned.

In an official statement, Samsung noted that it is optimizing its products through close collaboration with customers, with testing proceeding smoothly and as planned. The company said that HBM is a customized memory product that requires optimization in tandem with customers' needs.

According to Samsung, the tech giant is currently partnering closely with various companies to continuously test technology and performance, and to thoroughly verify the quality and performance of HBM.

Nvidia, on the other hand, declined to comment.

As Nvidia currently dominates the global GPU market for AI applications with a roughly 80% share, meeting Nvidia's standards is doubtlessly critical for HBM manufacturers.

Reuters reported that Samsung has been attempting to pass Nvidia's tests for HBM3 and HBM3e since last year, with a test of Samsung's 8-layer and 12-layer HBM3e chips reportedly failing in April.

According to TrendForce’s analysis earlier, NVIDIA’s upcoming B100 or H200 models will incorporate advanced HBM3e, while the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix. SK hynix has been providing HBM3 chips to Nvidia since 2022, Reuters noted.

According to a report from the Financial Times in May, SK hynix has successfully reduced the time needed for mass production of HBM3e chips by 50%, while coming close to achieving the target yield of 80%.

Another US memory giant, Micron, stated in February that its HBM3e consumes 30% less power than its competitors', meeting the demands of generative AI applications. Moreover, the company's 24GB 8H HBM3e will be part of NVIDIA's H200 Tensor Core GPUs, breaking SK hynix's previous exclusivity as sole supplier for the H100.

Considering major competitors' progress on HBM3e, if Samsung fails to meet Nvidia's requirements, the industry and investors may grow more concerned about whether the Korean tech heavyweight will fall further behind its rivals in the HBM market.

Please note that this article cites information from Reuters and Financial Times.

(Photo credit: Samsung)
