Semiconductors


2024-07-31

[Insights] Memory Spot Price Update: NAND Spot Prices Lack Momentum due to Absent July Stocking Demand

According to TrendForce’s latest memory spot price trend report, neither DRAM nor NAND spot prices saw much momentum last week. Spot prices of DDR4 and DDR5 products did not fluctuate significantly, as the market has yet to see a demand uptick. As for NAND Flash, the wave of stocking demand that usually emerges in July in response to the third-quarter peak season failed to appear this year. Details are as follows:

DRAM Spot Price:

In the spot market, overall trading volume fell further because demand for consumer electronics has yet to rebound, and Taiwan’s spot trading was suspended for two days (from July 24th to 25th) due to a typhoon. The spot market as a whole has not seen a demand uptick compared with the previous week, and buyers are mostly waiting for further developments. Consequently, spot prices of DDR4 and DDR5 products have not fluctuated significantly. The average spot price of the mainstream chips (i.e., DDR4 1Gx8 2666MT/s) dropped by 0.35%, from US$2.000 last week to US$1.993 this week.
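
As a quick check, the quoted weekly decline follows directly from the two cited price points. The short sketch below simply reproduces that arithmetic; all figures are those cited above.

```python
# Reproduce the week-over-week decline in the average spot price of
# the mainstream DDR4 1Gx8 2666MT/s chip (figures cited above).
last_week = 2.000   # US$, average spot price last week
this_week = 1.993   # US$, average spot price this week

decline_pct = (last_week - this_week) / last_week * 100
print(f"Week-over-week decline: {decline_pct:.2f}%")  # -> 0.35%
```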

NAND Flash Spot Price:

The spot market usually generates a wave of stocking demand during July in response to the peak season in the third quarter of each year, but it has been rather sluggish this year due to sufficient inventory among end clients and weak overall demand. A small number of spot traders tentatively lowered their quotations last week in the hope of revitalizing buyers’ demand, but this proved largely ineffective. Generally speaking, recent spot prices have been somewhat lethargic alongside a continuous shrinkage in transactions. The spot price of 512Gb TLC wafers remains unchanged this week at US$3.253.

2024-07-31

[News] Apple Reportedly Adopts Google’s Chips to Train its AI Models instead of NVIDIA’s GPUs

Apple’s latest technical document reveals that the two main AI models behind Apple Intelligence are trained using Google’s Tensor Processing Units (TPUs) instead of NVIDIA GPUs. According to a report from Commercial Times, this suggests that the demand for NVIDIA chips has outstripped supply, prompting some tech giants to seek alternatives.

Apple first introduced an AI technical document in June, briefly stating that its AI models were trained using TPUs. The latest technical document, which spans 47 pages, provides a detailed explanation of how Apple’s foundation models (AFM), including the AFM-server model, are trained on Cloud TPU clusters. This indicates that Apple rents cloud servers from cloud service providers to train its AI models.

In the document, Apple stated: “This system allows us to train the AFM models efficiently and scalably, including AFM-on-device, AFM-server, and larger models.”

Apple further mentioned that the on-device AFM models for iPhones and iPads are trained using a total of 2,048 TPUv5p chips, which are currently the most advanced TPU chips on the market, while the AFM-server model is trained using a total of 8,192 TPUv4 chips.

Google initially launched TPUs in 2015 for internal training use only and started offering TPU rental services to external clients in 2017. These TPUs are currently the most mature custom chips used for AI training. According to Google’s official website, the rental cost of their most advanced TPUs is approximately USD 2 per hour based on a three-year contract.
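
For a rough sense of scale, the cited chip count and hourly rate can be combined into an illustrative rental-cost estimate. Note that the 30-day training duration below is a hypothetical assumption for illustration only; neither Apple nor Google has disclosed actual training times or costs.

```python
# Illustrative TPU rental-cost estimate. Chip count is from Apple's technical
# document; the hourly rate is Google's cited three-year-contract price.
# The training duration is a hypothetical assumption, not a disclosed figure.
chips = 8192                  # TPUv4 chips used to train the AFM-server model
rate_usd_per_chip_hour = 2.0  # approx. rental cost per chip-hour
assumed_days = 30             # hypothetical training duration

cost_usd = chips * rate_usd_per_chip_hour * assumed_days * 24
print(f"~USD {cost_usd / 1e6:.1f} million for {assumed_days} days")  # -> ~USD 11.8 million
```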

Though NVIDIA’s GPUs currently dominate the high-end AI chip market, the enormous number of chips required for AI model training has led to a severe shortage, as major tech companies like OpenAI, Microsoft, Google, Meta, Oracle, and Tesla all use NVIDIA chips to develop their AI technologies.

Since the rise of ChatGPT at the end of 2022, which spurred the generative AI market, Silicon Valley tech giants have been racing to invest in AI research and development. In contrast, Apple has lagged behind its competitors and now has to intensify its efforts to bolster Apple Intelligence. On July 29th, Apple released a preview version of Apple Intelligence for certain devices.

(Photo credit: NVIDIA)

Please note that this article cites information from Commercial Times and Apple.

2024-07-31

[News] Samsung’s Q2 Profit Soars to USD 7.5 Billion, with Strong Demand Expected for HBM, DDR5 and Server SSDs in 2H24

Samsung Electronics announced its financial results for the second quarter today (July 31st), posting KRW 74.07 trillion in consolidated revenue and an operating profit of KRW 10.44 trillion (approximately USD 7.5 billion). According to its press release, the memory giant’s strong performance can be attributed to favorable memory market conditions, which drove a higher average selling price (ASP), while robust sales of OLED panels also contributed to the results.

In early July, the company estimated a 15-fold YoY increase in second-quarter operating profit, with preliminary numbers for the April-June quarter pointing to a 1,452% jump to KRW 10.4 trillion, the highest since the third quarter of 2022. The actual results are in line with that earlier projection.

Samsung’s DS Division posted KRW 28.56 trillion in consolidated revenue and KRW 6.45 trillion in operating profit for the second quarter, up 23.4% and 237.7% QoQ, respectively.
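
For context, the prior-quarter baselines implied by those growth rates can be backed out from the reported figures alone, as in the sketch below (all inputs are the figures cited above).

```python
# Back out the implied Q1 2024 baselines for Samsung's DS Division
# from the reported Q2 2024 figures and QoQ growth rates cited above.
q2_revenue = 28.56      # consolidated revenue, KRW trillion
q2_op_profit = 6.45     # operating profit, KRW trillion
revenue_growth = 0.234  # +23.4% QoQ
profit_growth = 2.377   # +237.7% QoQ

print(f"Implied Q1 revenue:   ~KRW {q2_revenue / (1 + revenue_growth):.1f} trillion")   # -> ~23.1
print(f"Implied Q1 op profit: ~KRW {q2_op_profit / (1 + profit_growth):.2f} trillion")  # -> ~1.91
```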

Strong Demand for HBM, DDR5 and Server SSDs to Extend in Second Half on AI Applications

Regarding current market conditions, Samsung notes that the memory market as a whole continued its recovery, driven by strong demand for HBM as well as conventional DRAM and server SSDs. This increased demand stems from cloud service providers’ continued AI investments and from businesses’ growing demand for AI in their on-premise servers.

However, Samsung observes that PC demand was relatively weak, while demand for mobile products remained solid on the back of increased orders from Chinese original equipment manufacturer (OEM) customers. Demand from server applications continued to be robust.

Samsung projects that in the second half of 2024, AI servers will take up a larger portion of the market as major cloud service providers and enterprises expand their AI investments. As AI servers equipped with HBM also feature high content-per-box with regard to conventional DRAM and SSDs, demand is expected to remain strong across the board, from HBM and DDR5 to server SSDs.

In response to heated market demand, Samsung plans to actively expand capacity to increase the portion of HBM3e sales. High-density products will be another major focus, such as server modules based on the 1b-nm 32Gb DDR5 for server DRAM.

Samsung has already taken a big leap in HBM, as its HBM3 chips are said to have been cleared by NVIDIA last week; they will initially be used exclusively in the AI giant’s H20, a less advanced GPU tailored for the Chinese market.

For NAND, the company plans to increase sales by strengthening the supply of triple-level cell (TLC) SSDs, which still account for the majority of AI demand, and will address customer demand for quad-level cell (QLC) products, which are optimized for all applications, including server, PC, and mobile.

The ramping of HBM and server DRAM production and sales is likely to further constrain conventional bit supply in both DRAM and NAND, Samsung notes.

Please note that this article cites information from Samsung.

2024-07-31

[News] SK Hynix Launches the World’s Highest-Performance GDDR7

On July 30, 2024, SK hynix announced the launch of its next-generation memory product, GDDR7, featuring the world’s highest performance.

SK hynix explained that GDDR is characterized by performance designed specifically for graphics processing and by high-speed operation, and it has been gaining increasing traction among global AI application customers. In response to this trend, the company completed development of the latest GDDR7 specification in March this year; the product has now been officially launched and will enter mass production in the third quarter of this year.

SK hynix’s GDDR7 features an operating speed of up to 32Gbps (32 gigabits per second per pin), an increase of more than 60% over the previous generation, and can reach 40Gbps depending on the usage environment. Implemented on the latest graphics cards, it can support data processing speeds of over 1.5TB per second, equivalent to processing 300 FHD movies (5GB each) in one second.
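
The card-level figure follows from the per-pin data rate multiplied by the memory bus width. The sketch below assumes a 384-bit bus, a common high-end graphics card configuration that is not part of SK hynix’s announcement.

```python
# Relate GDDR7's per-pin speed to the card-level bandwidth cited above.
# The 384-bit bus width is an assumed example configuration, not from SK hynix.
per_pin_gbps = 32        # Gb/s per pin, as announced
assumed_bus_bits = 384   # hypothetical high-end graphics card bus width

bandwidth_gbytes_s = per_pin_gbps * assumed_bus_bits / 8   # gigabytes per second
print(f"~{bandwidth_gbytes_s / 1000:.2f} TB/s")            # -> ~1.54 TB/s

movie_gb = 5             # one FHD movie, as cited above
print(f"~{bandwidth_gbytes_s / movie_gb:.0f} movies per second")  # -> ~307
```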

In addition to providing faster speeds, GDDR7 boasts energy efficiency 50% higher than the previous generation. To address the chip-heating issue caused by ultra-high-speed data processing, SK hynix adopted new packaging technology in developing this product.

SK hynix’s technical team maintained the product size while increasing the number of heat-dissipating layers in the packaging substrate from four to six, and used a highly thermally conductive epoxy molding compound (EMC) in the packaging materials. As a result, the team successfully reduced the product’s thermal resistance by 74% compared to the previous generation.

Lee Sang-kwon, Vice President of DRAM PP&E at SK hynix, said that SK hynix’s GDDR7 has achieved the highest performance among existing memory chips, with excellent speed and energy efficiency, and that its applications will expand from high-performance 3D graphics to AI, HPC, and autonomous driving.

Through this product, the company will further strengthen its high-end memory product line while developing into the most trustworthy AI memory solution company for customers.

(Photo credit: SK hynix)

Please note that this article cites information from WeChat account DRAMeXchange.

2024-07-31

[News] A New Solution for AI, the Power Monster? CRAM Reportedly Reduces Energy Consumption by 1,000 Times

As AI applications become more widespread, there is an urgent need to improve energy efficiency. Traditional AI processes are notoriously power-hungry due to the constant transfer of data between logic and memory. However, according to reports by Tom’s Hardware and Innovation News Network, researchers in the U.S. may have come up with a solution: computational random-access memory (CRAM), which is said to reduce AI energy consumption by 1,000 times or more.

According to the reports, researchers at the University of Minnesota, after over 20 years of research, have developed a new generation of memory technology that can significantly reduce energy consumption in AI applications.

Citing the research, Tom’s Hardware explains that in current AI computing, data is frequently transferred between processing components (logic) and storage (memory). This constant back-and-forth movement of information can consume up to 200 times more energy than the actual computation.

However, with the so-called CRAM, data can be processed entirely within the memory array without having to leave the grid where it is stored. Computations can be performed directly within memory cells, eliminating the slow and energy-intensive data transfers common in traditional architectures.

According to Innovation News Network, machine learning inference accelerators based on CRAM could achieve energy savings of up to 1,000 times, with some applications realizing reductions of 2,500 and 1,700 times compared to conventional methods.
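
To see why removing data movement yields savings of this magnitude, consider a simplified energy budget built from the up-to-200x transfer-to-compute ratio cited above. The workload split below is purely illustrative, and the larger reported gains also reflect the specific accelerator designs rather than this ratio alone.

```python
# Simplified energy-budget illustration. The up-to-200x transfer overhead is
# cited above; the rest is an illustrative assumption, not measured data.
compute_energy = 1.0                       # arbitrary units per operation
transfer_energy = 200.0 * compute_energy   # moving data costs up to 200x compute

conventional_total = compute_energy + transfer_energy   # logic + data movement
cram_total = compute_energy                             # data never leaves memory

print(f"Savings factor: ~{conventional_total / cram_total:.0f}x")  # -> ~201x
```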

The reports note further that the patented technology is related to Magnetic Tunnel Junctions (MTJs), which are nanostructured devices used in hard drives, sensors, and various microelectronic systems, including Magnetic Random Access Memory (MRAM).

It is worth noting that, among Taiwanese companies, NOR flash memory maker Macronix may be the one making the most progress. According to a report by the Economic Daily, Macronix has been collaborating with IBM on phase-change memory technology for over a decade, with AI applications as their main focus. Currently, Macronix is IBM’s sole partner for phase-change memory.

The report notes that the joint development program between Macronix and IBM is organized in three-year phases. At the end of each phase, the two companies decide whether to sign a new agreement based on the situation.

(Photo credit: npj Unconventional Computing)

Please note that this article cites information from Tom’s Hardware, Innovation News Network, and Economic Daily News.