2024-08-01

[News] Samsung’s 8-layer HBM3e to Start Mass Production in Q3, Driving HBM Sales to Soar 3-5 Times in 2H24

Samsung Electronics, which has been surrounded by concerns that its HBM3e products are still struggling to pass NVIDIA’s qualification, confirmed in its second-quarter earnings call that the company’s fifth-generation 8-layer HBM3e is currently undergoing customer evaluation and is scheduled to enter mass production in the third quarter, according to a report by Business Korea.

TrendForce notes that Samsung’s recent progress on HBM3e qualification appears solid, and both the 8-hi and 12-hi products can be expected to qualify in the near future. The company is eager to win HBM market share from SK hynix, and its 1alpha capacity has been reserved for HBM3e. TrendForce believes Samsung will become a very important supplier in the HBM category.

Driven by this momentum, the Business Korea report, citing an official speaking at the conference call on July 31st, states that HBM3e chips are anticipated to account for more than the mid-10 percent range of Samsung’s HBM sales in the third quarter, a share projected to grow rapidly to 60% by the fourth quarter.

According to Samsung, its HBM sales in the second quarter already grew by around 50% from the previous quarter. Ambitious about its HBM3 and HBM3e sales, Samsung projects that its HBM sales will increase three to five times in the second half of 2024, roughly doubling each quarter.

Samsung has already taken a big leap in HBM, as its HBM3 chips are said to have been cleared by NVIDIA last week. According to a previous report by Reuters, Samsung’s HBM3 will initially be used exclusively in the AI giant’s H20, which is tailored for the Chinese market.

Meanwhile, the South Korean memory giant notes that it has completed preparations for volume production of its 12-layer HBM3e chips. The company plans to expand supply in the second half of 2024 to meet the schedules requested by multiple customers, according to Business Korea. Progress on its sixth-generation HBM4 is also on track, with shipments scheduled to begin in the second half of 2025, Business Korea notes.

Samsung Electronics reported higher-than-expected financial results for the second quarter, with net income soaring more than six-fold year-on-year from KRW 1.55 trillion (USD 1.12 billion) to KRW 9.64 trillion (USD 6.96 billion), as demand for the advanced memory chips crucial for AI training remained strong.

SK hynix, the current HBM market leader, has expressed optimism about defending its throne as well. The company reportedly expects its HBM3e shipments to surpass those of HBM3 in the third quarter, with HBM3e accounting for more than half of total HBM shipments in 2024. In addition, it expects to begin supplying 12-layer HBM3e products to customers in the fourth quarter.

Micron, on the other hand, reportedly started mass production of 8-layer HBM3e as early as February. The company reportedly plans to complete preparations for mass production of 12-layer HBM3e in the second half and supply it to major customers like NVIDIA in 2025.


(Photo credit: Samsung)

Please note that this article cites information from Business Korea and Reuters.

2024-08-01

[News] US Reportedly Weighs Stricter Limits on AI Memory Access for China

According to a report from Bloomberg, the US is reportedly considering new measures and could unilaterally impose restrictions on China as early as late August. These measures would limit China’s access to AI memory and related equipment capable of producing them.

Moreover, another report from Reuters further indicates that shipments from US allies—including semiconductor equipment manufacturers from Japan, the Netherlands, and South Korea, such as major Dutch equipment maker ASML and Japan’s Tokyo Electron—will not be affected. The report also notes that countries whose exports will be impacted include Israel, Taiwan, Singapore, and Malaysia.

Bloomberg, citing sources, revealed that the purpose of these measures is to prevent major memory manufacturers like Micron, SK hynix, and Samsung Electronics from selling high-bandwidth memory (HBM) to China.

These three companies dominate the global HBM market. Regarding the matter, Micron reportedly declined to comment, while Samsung and SK hynix did not immediately respond to requests for comment.

Bloomberg’s sources also emphasized that the US has not yet made a final decision. The sources stated that, if implemented, the new measures would cover chips such as HBM2, HBM3, and HBM3e, as well as the equipment needed to manufacture them.

The sources further revealed that Micron will essentially not be affected by the new regulations, as the company stopped exporting HBM to China after Beijing banned Micron’s memory from critical infrastructure in 2023.

Reportedly, it is still unclear what methods the US will use to restrict South Korean companies. One possibility is the Foreign Direct Product Rule (FDPR). Under this rule, if a foreign-made product uses any US technology, even just a small amount, the US can impose restrictions.

Both SK hynix and Samsung are said to be relying on chip design software and equipment from US companies such as Cadence Design Systems and Applied Materials.


(Photo credit: SK hynix)

Please note that this article cites information from Bloomberg and Reuters.

2024-07-31

[News] Innolux Confirms Sale of Tainan Plant 4, with Micron & TSMC Reportedly in the Bidding Stage

Innolux announced the end of eight consecutive quarters of losses on July 30. According to a report from Economic Daily News, its board of directors decided to authorize Chairman Jim Hung to handle real estate matters, confirming rumors that the buildings at its 4th Plant in Tainan—the 5.5-generation LCD panel plant closed last year—will be sold.

It is reported that two buyers, Micron and TSMC, are still in the bidding stage. Regardless of who wins the bid, Innolux will gain significant non-operating income.

According to Innolux’s announcement, to boost company operations and future development momentum, as well as to enhance operating funds, they plan to dispose of the TAC plant-related real estate at the Southern Taiwan Science Park (STSP) D section. Per a report from anue, the STSP D section refers to the 5.5-generation LCD panel plant that was closed last year.

Innolux has been promoting the transformation of its fully depreciated old plants. The 3.5-generation line at the Tainan facility has been repurposed for advanced packaging with Fan-Out Panel Level Packaging (FOPLP), and the 4-generation line has been converted to produce X-ray sensors (through Raystar Optronics), both of which are related to semiconductor products.

Regarding the 4th Plant developments at Tainan, as per a previous report from the Economic Daily News, Innolux stated on June 16 that, based on flexible strategic planning principles, the company continues to optimize production configurations and enhance overall operational efficiency. Some production lines and products are being adjusted to streamline and strengthen the group’s layout and development.


(Photo credit: Innolux)

Please note that this article cites information from Innolux and Economic Daily News.

2024-07-30

[News] Intel Hires Micron Executive to Lead Its Foundry Business

According to a report from Commercial Times, after suffering a multi-billion-dollar loss in its foundry business, Intel has recruited Naga Chandrasekaran, a veteran responsible for process technology development at Micron, as its Chief Operating Officer.

Intel is reportedly facing setbacks in developing chip manufacturing. After experiencing a staggering USD 7 billion loss in its foundry business in 2023, the company incurred an additional USD 2.5 billion loss in the first quarter of this year.

Thus, to drive the growth of its foundry business, Intel has recruited Naga Chandrasekaran from Micron; he will oversee all of Intel’s manufacturing operations and report directly to CEO Pat Gelsinger.

Chandrasekaran’s appointment will take effect on August 12. He will oversee Intel Foundry’s global manufacturing operations and strategic planning, including assembly and test manufacturing, wafer fabrication, and supply chain management. Essentially, Chandrasekaran will be responsible for all of Intel’s manufacturing activities.

In the announcement of the employment, Intel CEO Pat Gelsinger noted, “Naga is a highly accomplished executive whose deep semiconductor manufacturing and technology development expertise will be a tremendous addition to our team.”

“As we continue to build a globally resilient semiconductor supply chain and create the world’s first systems foundry for the AI era, Naga’s leadership will help us to accelerate our progress and capitalize on the significant long-term growth opportunities ahead,”  Gelsinger said.

As per a report from Tom’s Hardware, Chandrasekaran has spent over 20 years at Micron, holding various management positions. Most recently, he led global technology development and engineering focused on scaling memory devices, advanced packaging, and emerging technology solutions. His extensive background encompasses process and equipment development, device technology, and mask technology.

He will replace Keyvan Esfarjani, who is set to retire at the end of the year. Esfarjani, who has served at Intel for nearly 30 years, will remain with the company to assist with the transition. He has made significant contributions to Intel’s global supply chain resilience and manufacturing operations.

On the other hand, in an attempt to narrow the gap with TSMC, Intel is also said to be recruiting the foundry giant’s senior engineers for its foundry division, according to a report by Commercial Times.


(Photo credit: Intel)

Please note that this article cites information from Commercial Times, Intel, and Tom’s Hardware.

2024-07-29

[News] MRDIMM/MCRDIMM to Become the New Sought-After Products in the Memory Field

Amidst the tide of artificial intelligence (AI), new types of DRAM, represented by HBM, are embracing a new round of development opportunities. Meanwhile, driven by server demand, MRDIMM/MCRDIMM have stepped onto the stage as the memory industry’s new sought-after products.

According to a report from WeChat account DRAMeXchange, the rapid development of AI and big data is driving an increase in the number of CPU cores in servers. To meet the data throughput requirements of each core in multi-core CPUs, memory system bandwidth must increase significantly. In this context, high-bandwidth memory modules for servers—MRDIMM/MCRDIMM—have emerged.

  • JEDEC Announces Details of the DDR5 MRDIMM Standard

On July 22, JEDEC announced that it will soon release the DDR5 Multiplexed Rank Dual Inline Memory Modules (MRDIMM) and the next-generation LPDDR6 Compression-Attached Memory Module (CAMM) advanced memory module standards, and introduced key details of these two types of memory, aiming to support the development of next-generation HPC and AI. These two new technical specifications were developed by JEDEC’s JC-45 DRAM Module Committee.

As a follow-up to JEDEC’s JESD318 CAMM2 memory module standard, JC-45 is developing the next-generation CAMM module for LPDDR6, targeting a maximum speed of over 14.4 GT/s. According to the plan, this module will also provide 24-bit-wide subchannels and 48-bit-wide channels, and will support “connector arrays” to meet the needs of future HPC and mobile devices.

DDR5 MRDIMM supports multiplexed ranks, combining and transmitting multiple data signals over a single channel to increase bandwidth without additional physical connections. JEDEC has reportedly planned multiple generations of DDR5 MRDIMM, with the ultimate goal of raising bandwidth to 12.8 Gbps—double the 6.4 Gbps of current DDR5 RDIMM—while improving pin speed.
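As a rough sanity check on those figures, the per-pin transfer rate maps to peak module bandwidth by multiplying by the data-bus width. The sketch below assumes a standard 64-bit (8-byte) DDR5 data bus and theoretical peaks; it is an illustrative back-of-the-envelope calculation, not from the article:

```python
# Back-of-the-envelope peak-bandwidth estimate for a DDR5 memory module.
# Assumption: a standard 64-bit (8-byte) data bus; values are theoretical peaks.

BUS_WIDTH_BYTES = 8  # 64-bit DDR5 data bus

def peak_bandwidth_gbps(transfer_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s for a given per-pin transfer rate in GT/s."""
    return transfer_rate_gtps * BUS_WIDTH_BYTES

rdimm = peak_bandwidth_gbps(6.4)    # current DDR5 RDIMM: 6.4 GT/s
mrdimm = peak_bandwidth_gbps(12.8)  # planned MRDIMM end goal: 12.8 GT/s

print(f"RDIMM  : {rdimm:.1f} GB/s per channel")
print(f"MRDIMM : {mrdimm:.1f} GB/s per channel")  # exactly double the RDIMM figure
```

Doubling the effective transfer rate thus doubles peak per-channel bandwidth, which is the point of multiplexing two ranks onto one channel.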

In JEDEC’s vision, DDR5 MRDIMM will utilize the same pins, SPD, PMIC, and other designs as existing DDR5 DIMMs, be compatible with the RDIMM platform, and leverage the existing LRDIMM ecosystem for design and testing.

JEDEC stated that these two new technical specifications are expected to bring a new round of technological innovation to the memory market.

  • Micron’s MRDIMM DDR5 to Start Mass Shipment in 2H24

In March 2023, AMD announced at the Memcom 2023 event that it is collaborating with JEDEC to develop a new DDR5 MRDIMM standard memory, targeting a transfer rate of up to 17600 MT/s. According to a report from Tom’s Hardware at that time, the first generation of DDR5 MRDIMM aims for a rate of 8800 MT/s, which will gradually increase, with the second generation set to reach 12800 MT/s, and the third generation to 17600 MT/s.

MRDIMM, short for “Multiplexed Rank DIMM,” integrates two DDR5 DIMMs into one, thereby providing double the data transfer rate while allowing access to two ranks.

On July 16, memory giant Micron announced the launch of the new MRDIMM DDR5, which is currently sampling and will provide ultra-large capacity, ultra-high bandwidth, and ultra-low latency for AI and HPC applications. Mass shipment is set to begin in the second half of 2024.

Micron says MRDIMM offers the highest bandwidth, largest capacity, lowest latency, and better performance per watt, outperforming current TSV RDIMMs in accelerating memory-intensive multi-tenant virtualization, HPC, and AI data center workloads.

Compared to traditional RDIMM DDR5, MRDIMM DDR5 can achieve an effective memory bandwidth increase of up to 39%, a bus efficiency improvement of over 15%, and a latency reduction of up to 40%.

MRDIMM supports capacity options ranging from 32GB to 256GB, covering both standard and tall form factor (TFF) specifications, suitable for high-performance 1U and 2U servers. The 256GB TFF MRDIMM outperforms TSV RDIMMs of similar capacity by 35%.

This new memory product is the first generation of Micron’s MRDIMM series and will be compatible with Intel Xeon processors. Micron stated that subsequent generations of MRDIMM products will continue to offer 45% higher single-channel memory bandwidth compared to their RDIMM counterparts.

  • SK hynix to Launch MCRDIMM Products in 2H24

As one of the world’s largest memory manufacturers, SK hynix already introduced a product similar to MRDIMM, called MCRDIMM, even before AMD and JEDEC.

MCRDIMM, short for “Multiplexer Combined Ranks Dual In-line Memory Module,” is a module that combines multiple DRAMs on a substrate and operates the module’s two ranks—its basic units of information processing—simultaneously.

Source: SK hynix

In late 2022, SK hynix partnered with Intel and Renesas to develop the DDR5 MCR DIMM, which became the fastest server DRAM product in the industry at the time. As per Chinese IC design company Montage Technology’s 2023 annual report, MCRDIMM can also be considered the first generation of MRDIMM.

Traditional DRAM modules can transfer only 64 bytes of data to the CPU at a time, while SK hynix’s MCRDIMM can transfer 128 bytes by running two memory ranks simultaneously. Doubling the data delivered per transfer boosts the data transfer speed to over 8 Gbps, twice that of a single DRAM.
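The mechanism above can be sketched numerically: operating two ranks in parallel doubles the bytes moved per access, and therefore the effective data rate at a fixed access rate. The figures below are illustrative assumptions (the 4 Gbps single-rank baseline is hypothetical), not SK hynix specifications:

```python
# Illustrative sketch of why MCRDIMM's parallel rank access doubles throughput.
# Assumption: a conventional DIMM returns one 64-byte cache line per access.

CACHE_LINE_BYTES = 64

def bytes_per_access(ranks_in_parallel: int) -> int:
    """Data returned to the CPU per access when n ranks operate simultaneously."""
    return CACHE_LINE_BYTES * ranks_in_parallel

conventional = bytes_per_access(1)  # 64 bytes per access
mcrdimm = bytes_per_access(2)       # 128 bytes: two ranks driven at once

# At a fixed access rate, effective data rate scales with bytes per access.
# The 4.0 Gbps single-rank baseline here is a hypothetical value for illustration.
base_rate_gbps = 4.0
effective_rate = base_rate_gbps * (mcrdimm // conventional)
print(effective_rate)  # 8.0
```

The same principle underlies MRDIMM: bandwidth doubles because two ranks are accessed per transaction, not because any individual DRAM runs faster.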

At that time, SK hynix anticipated that the market for MCR DIMM would gradually open up, driven by the demand for increased memory bandwidth in HPC. According to SK hynix’s FY2024 Q2 financial report, the company will launch 32Gb DDR5 DRAM for servers and MCRDIMM products for HPC in 2H24.

  • MRDIMM Boasts a Brilliant Future

MCRDIMM/MRDIMM adopts the DDR5 LRDIMM “1+10” architecture, requiring one MRCD chip and ten MDB chips. Conceptually, MCRDIMM/MRDIMM allows parallel access to two ranks within the same DIMM, increasing the capacity and bandwidth of the DIMM module by a large margin.

Compared to RDIMM, MCRDIMM/MRDIMM can offer higher bandwidth while maintaining good compatibility with the existing mature RDIMM ecosystem. Additionally, MCRDIMM/MRDIMM is expected to enable much higher overall server performance and lower total cost of ownership (TCO) for enterprises.

MRDIMM and MCRDIMM both fall under the category of DRAM memory modules, which serve different application scenarios than HBM and have their own independent market space. As an industry-standard packaged memory, HBM can achieve higher bandwidth and energy efficiency for a given capacity in a smaller size. However, due to high cost, small capacity, and lack of scalability, its application is limited to a few fields. Thus, from an industry perspective, memory modules remain the mainstream solution where large capacity, cost-effectiveness, and scalability are required.

Montage Technology believes that, based on its high bandwidth and large capacity advantages, MRDIMM is likely to become the preferred main memory solution for future AI and HPC. As per JEDEC’s plan, the future new high-bandwidth memory modules for servers, MRDIMM, will support even higher memory bandwidth, further matching the bandwidth demands of HPC and AI application scenarios.


(Photo credit: SK hynix)

Please note that this article cites information from Tom’s Hardware, Micron and WeChat account DRAMeXchange.

