DRAM


2024-08-19

[News] Samsung Reportedly Bets on CXL Memory in the AI Race

According to a report from Nikkei, Samsung Electronics, currently trailing SK hynix in the HBM market, is said to be betting on next-generation CXL memory, with shipments expected to begin in the second half of this year. The company anticipates that CXL memory will become the next rising star in AI.

CXL is a cache-coherent interconnect for memory expansion: it maintains memory coherency between the CPU memory space and memory on attached devices, allowing resources to be shared for higher performance.

The CXL module stacks DRAM layers and connects different semiconductor devices such as GPUs and CPUs, expanding server memory capacity by up to tenfold.

Choi Jang-seok, head of Samsung Electronics’ memory division, explained that CXL technology is comparable to merging wide roads, enabling the efficient transfer of large volumes of data.

As tech companies rush to develop AI models, existing data centers are gradually becoming unable to handle the enormous data processing demands.

As a result, companies are beginning to build larger-scale data centers, but this also significantly increases power consumption. On average, the energy required for a general AI to answer user queries is about ten times that of a traditional Google search.

Choi further highlighted that incorporating CXL technology allows server capacity to be expanded without the need for physical build-out.

In 2021, Samsung became one of the first companies in the world to invest in the development of CXL. This June, Samsung announced that its CXL infrastructure had received certification from Red Hat.

Additionally, Samsung is a member of the CXL Consortium, which is composed of 15 tech companies, with Samsung being the only memory manufacturer among them. This positions Samsung to potentially gain an advantage in the CXL market.

While HBM remains the mainstream memory used in AI chipsets today, Choi Jang-seok anticipates that the CXL market will take off starting in 2027.

Since the surge in demand for NVIDIA’s AI chips, the HBM market has rapidly expanded. SK hynix, which was the first to develop HBM in 2013, has since secured the majority of NVIDIA’s orders, while Samsung has lagged in HBM technology.

In light of Samsung’s bet on CXL, SK Group Chairman Chey Tae-won remarked that SK hynix should not settle for the status quo and should immediately begin seriously considering its next generation of profit models.

Read more

(Photo credit: Samsung)

Please note that this article cites information from Nikkei.
2024-08-16

[News] Booming AI Demand Helps Q2 Profits of South Korea’s Top 500 Companies More Than Double

As global tech giants race to build AI infrastructure, South Korea’s top 500 companies, driven by semiconductor leaders Samsung and SK hynix, saw their profits more than double in the second quarter compared to the same period last year, according to a report from Yonhap News Agency.

According to data released by corporate evaluation website CEO Score on August 15th, 334 of South Korea’s top 500 companies by revenue had reported their second-quarter earnings as of August 14th.

The combined net profit of these companies reached KRW 59.4 trillion (approximately USD 43.6 billion), a 107.1% increase from the KRW 28.7 trillion recorded during the same period last year.
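As a sanity check, the year-over-year growth implied by the rounded trillion-won figures can be computed directly (a minimal sketch; the report’s 107.1% presumably reflects unrounded figures):

```python
# Net profit of South Korea's top 500 companies (KRW, trillions), per the report.
net_profit_2023_q2 = 28.7
net_profit_2024_q2 = 59.4

# Year-over-year growth implied by the rounded figures.
growth_pct = (net_profit_2024_q2 / net_profit_2023_q2 - 1) * 100
print(f"YoY growth: {growth_pct:.1f}%")  # roughly 107%, in line with the reported 107.1%
```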

Their combined revenue, meanwhile, amounted to KRW 779.5 trillion, reflecting 7% year-on-year growth from KRW 728.6 trillion during the same period last year.

This significant growth was driven by the booming HBM demand from tech giants like NVIDIA, the report notes.

According to the Q2 performance report released by Samsung Electronics, the company’s operating profit reached KRW 10.44 trillion (approximately USD 7.5 billion), surging from the KRW 668.5 billion recorded in the same period last year.

Thus, per the report, this surge has solidified Samsung’s position as the most profitable company among South Korea’s top 500 enterprises in the second quarter.

SK hynix, for its part, also returned to profitability in the second quarter, posting an operating profit of KRW 5.5 trillion after a loss of KRW 2.9 trillion in the same period last year.

Reportedly, this strong performance made SK hynix South Korea’s second most profitable company, surpassing automotive giants Hyundai Motor and Kia Corp., which reported Q2 operating profits of KRW 4.3 trillion and KRW 3.6 trillion, respectively.

Meanwhile, SK On, the battery manufacturing arm of SK Group, recorded an operating loss of KRW 460.2 billion in the second quarter, the worst quarterly performance in the company’s history, dragged down by cooling global demand for electric vehicles.

Read more

(Photo credit: Samsung)

Please note that this article cites information from Yonhap News Agency.
2024-08-16

[News] 3D DRAM with Built-in AI Processing – a New Tech That Could Potentially Replace Existing HBM

NEO Semiconductor, a company focused on 3D DRAM and 3D NAND memory, has unveiled its latest 3D X-AI chip technology, which could potentially replace the existing HBM used in AI GPU accelerators.

Reportedly, this 3D DRAM comes with built-in AI processing capabilities, enabling processing and generation within the memory itself. By reducing the large volumes of data that would otherwise be transferred between memory and processors, it can ease data bus bottlenecks, thereby enhancing AI performance and reducing power consumption.

The 3D X-AI chip has an underlying neuron circuit layer that can process data stored in 300 memory layers on the same chip. NEO Semiconductor states that with 8,000 neuron circuits performing AI processing in memory, 3D memory performance can be increased by 100 times, with memory density 8 times higher than current HBM. By reducing the amount of data processed in the GPU, power consumption can be cut by 99%.

A single 3D X-AI die contains 300 layers of 3D DRAM cells and one layer of neural circuits with 8,000 neurons. It also has a capacity of 128GB, with each chip supporting up to 10 TB/s of AI processing capability. Using 12 3D X-AI dies stacked with HBM packaging can achieve 120 TB/s processing throughput. Thus, NEO estimates that this configuration may eventually result in a 100-fold performance increase.
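Taken at face value, the per-stack numbers follow from simple multiplication (a back-of-the-envelope sketch using only the figures quoted above; the 1,536 GB stack capacity is an implied value, not a company claim):

```python
# Per-die figures quoted for NEO Semiconductor's 3D X-AI.
throughput_per_die_tbps = 10   # up to 10 TB/s of AI processing per die
capacity_per_die_gb = 128      # 128 GB per die
dies_per_stack = 12            # 12 dies stacked with HBM packaging

stack_throughput = throughput_per_die_tbps * dies_per_stack  # matches the 120 TB/s claim
stack_capacity = capacity_per_die_gb * dies_per_stack        # implied: 1,536 GB per stack
print(stack_throughput, "TB/s,", stack_capacity, "GB")
```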

Andy Hsu, Founder & CEO of NEO Semiconductor, noted that current AI chips waste significant amounts of performance and power due to architectural and technological inefficiencies. The existing AI chip architecture stores data in HBM and relies on a GPU for all calculations.

He further claimed that the separation of data storage and processing architecture has made the data bus an unavoidable performance bottleneck, leading to limited performance and high power consumption during large data transfers.

The 3D X-AI, as per Hsu, can perform AI processing within each HBM chip, which may drastically reduce the data transferred between HBM and the GPU, thus significantly improving performance and reducing power consumption.

Many companies are researching technologies to increase processing speed and communication throughput. As semiconductor speeds and efficiency continue to rise, the data bus transferring information between components becomes the bottleneck, so technologies like these would allow all components to accelerate together.

As per a report from Tom’s Hardware, companies like TSMC, Intel, and Innolux are already exploring optical technologies in search of faster communication within the motherboard. By shifting some AI processing from the GPU to the HBM, NEO Semiconductor may reduce the GPU’s workload and potentially achieve better efficiency than current power-hungry AI accelerators.

Read more

(Photo credit: NEO Semiconductor)

Please note that this article cites information from NEO Semiconductor and Tom’s Hardware.

2024-08-15

[News] Samsung Likely Emerges as the Pacemaker for the AI Market if It Secures HBM3e Supply to NVIDIA

Samsung Electronics, which has been struggling at the final stage of its HBM3e qualification with NVIDIA, may unexpectedly emerge as the pacemaker for the AI ecosystem: by balancing the market, the company could ease the cost pressure of building AI servers and alleviate the tight HBM supply, according to a recent report by Korean media outlet Invest Chosun.

Samsung confirmed in its second-quarter earnings call that its fifth-generation 8-layer HBM3e is undergoing customer validation. The product is reportedly set to enter mass production as soon as the third quarter.

Invest Chosun notes that while anticipation is growing that NVIDIA could soon conclude its verification of Samsung’s HBM3e, the market’s attitude towards AI has gradually shifted in the meantime, as the main concern now is that semiconductors are becoming too expensive.

The report, citing remarks from a consultant, notes that an NVIDIA chip may cost tens of thousands of dollars, raising concerns that the industry’s overall capex investment cycle might not last more than three years.

In addition, the report highlights that the cost of building an AI server for learning is about 40 times that of a standard server, with over 80% attributed to NVIDIA’s AI accelerators. Due to the cost pressure, big techs have been closely examining the cost structure for building AI servers.

Therefore, NVIDIA has to take its customers’ budgets into consideration when planning its roadmap. This has also sparked speculation that NVIDIA, under pressure to lower product prices, might compromise and bring Samsung onboard as an HBM3e supplier, the report states.

Citing an industry insider, the report highlights the dilemma facing NVIDIA and its HBM suppliers. As the AI giant shortens its product cycle, releasing the Blackwell (B100) series just two years after the Hopper (H100), every HBM supplier except SK hynix, the most experienced among them, has been struggling to keep up.

If Samsung doesn’t join the HBM lineup, the overall supply of NVIDIA’s AI accelerators could be limited, driving prices even higher, the report suggests.

Against this backdrop, Samsung may take on the role of pacemaker in the AI semiconductor market, helping to balance the market at a time when there are concerns about overheating in the AI industry. Moreover, if it can form a strong partnership with NVIDIA by supplying 8-layer HBM3e, its technological gap with competitors will narrow noticeably.

TrendForce notes that Samsung’s recent progress on HBM3e qualification appears solid, and both 8hi and 12hi products can be expected to be qualified in the near future. The company is eager to win HBM market share from SK hynix and has reserved its 1alpha capacity for HBM3e. TrendForce believes Samsung is set to become a very important supplier in the HBM category.

Read more

(Photo credit: Samsung)

Please note that this article cites information from Invest Chosun.
2024-08-14

[Insights] Memory Spot Price Update: NAND Prices for Package Dies and Wafers Dropped Slightly due to Slow Transactions

According to TrendForce’s latest memory spot price trend report, DRAM demand has yet to show improvement, increasing inventory pressure on suppliers and pointing to potentially larger price drops ahead. As for NAND flash, the overall price trend is still downward, leading to a small drop in spot prices for packaged dies and wafers. Details are as follows:

DRAM Spot Price:

Continuing from last week, demand has yet to show improvement, leading to increased inventory pressure on suppliers. Consequently, suppliers are more willing to offer price concessions in the spot market. Overall, spot transactions continue to show low volumes. Additionally, the prices that buyers are willing to accept are significantly lower than the official prices set by sellers, resulting in a stalemate. Therefore, there is a potential for larger price drops in the future. The average spot price of mainstream chips (i.e., 1Gx8 2666MT/s) fell by 0.20% from US$1.989 last week to US$1.985 this week.

NAND Flash Spot Price:

Spot market transactions have remained sluggish since the start of August, with buyers maintaining a strong wait-and-see attitude. Although some demand for partial stocking orders has emerged, it lacks continuity, so the overall price trend is still downward. This led to a small drop in spot prices for packaged dies and wafers this week, with 512Gb TLC wafers falling by 0.58% to US$3.272.
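The quoted moves are consistent with a simple percent-change calculation (a minimal sketch based only on the prices cited in this update):

```python
def pct_change(prev: float, curr: float) -> float:
    """Week-over-week percentage change from prev to curr."""
    return (curr - prev) / prev * 100

# DRAM: mainstream 1Gx8 2666MT/s chip, USD.
dram_change = pct_change(1.989, 1.985)
print(f"DRAM: {dram_change:.2f}%")  # -0.20%

# NAND: 512Gb TLC wafer fell 0.58% to US$3.272; back out last week's implied price.
nand_prev = 3.272 / (1 - 0.0058)
print(f"NAND prior week: US${nand_prev:.3f}")
```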

 
