Semiconductors


2024-08-19

[News] Samsung Reportedly Bets on CXL Memory in the AI Race

According to a report from Nikkei, Samsung Electronics, which currently lags behind SK hynix in the HBM market, is said to be betting on next-generation CXL memory, with shipments expected to begin in the second half of this year. The company anticipates that CXL memory will become the next rising star in AI.

CXL (Compute Express Link) is a cache-coherent interconnect for memory expansion. It maintains memory coherency between the CPU memory space and memory on attached devices, allowing resources to be shared for higher performance.

A CXL memory module stacks DRAM layers and connects different semiconductor devices such as GPUs and CPUs, expanding server memory capacity by up to tenfold.
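To make the expansion mechanism more concrete: on Linux systems, CXL-attached memory is typically surfaced to software as an extra, CPU-less NUMA node rather than through a separate device API. The minimal sketch below is an illustration only, assuming a Linux host and the standard sysfs layout (it is not specific to Samsung’s module); it lists the NUMA nodes and flags the memory-only ones, which is where a CXL expander’s capacity would show up.

```python
import glob
import os

# Illustrative only: on Linux, a CXL Type-3 memory expander typically appears
# as a CPU-less ("memory-only") NUMA node. This walks the standard sysfs
# layout and flags such nodes as candidates for CXL-attached capacity.
for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpulist = f.read().strip()  # empty string means the node has no CPUs
    total_kb = 0
    with open(os.path.join(node, "meminfo")) as f:
        for line in f:
            if "MemTotal" in line:
                total_kb = int(line.split()[-2])
    kind = "memory-only (possible CXL expander)" if not cpulist else "CPU + memory"
    print(f"{os.path.basename(node)}: {total_kb // 1024} MiB, cpus=[{cpulist}] -> {kind}")
```

Applications or the operating system can then place capacity-hungry or colder data on such a node (for instance via numactl or libnuma), which is how the added capacity described above would actually be consumed.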

Choi Jang-seok, head of Samsung Electronics’ memory division, explained that CXL technology is comparable to merging wide roads, enabling the efficient transfer of large volumes of data.

As tech companies rush to develop AI models, existing data centers are gradually becoming unable to handle the enormous data processing demands.

As a result, companies are beginning to build larger-scale data centers, but this also significantly increases power consumption. On average, answering a user query with generative AI consumes about ten times the energy of a traditional Google search.

Choi further highlighted that incorporating CXL technology allows server capacity to be expanded without physically enlarging data centers.

In 2021, Samsung became one of the first companies in the world to invest in the development of CXL. This June, Samsung announced that its CXL infrastructure had received certification from Red Hat.

Additionally, Samsung is a member of the CXL Consortium, a group of 15 tech companies in which it is the only memory manufacturer. This could position Samsung to gain an advantage in the CXL market.

While HBM remains the mainstream memory used in AI chipsets today, Choi Jang-seok anticipates that the CXL market will take off starting in 2027.

Since the surge in demand for NVIDIA’s AI chips, the HBM market has rapidly expanded. SK hynix, which was the first to develop HBM in 2013, has since secured the majority of NVIDIA’s orders, while Samsung has lagged in HBM technology.

Seeing Samsung’s bet on CXL, SK Group Chairman Chey Tae-won remarked that SK hynix should not settle for the status quo and should immediately begin seriously considering its next generation of profit models.

Read more

(Photo credit: Samsung)

Please note that this article cites information from Nikkei.
2024-08-19

[News] Samsung Reportedly to Bring in High-NA EUV Machine as soon as Year-End, as SK hynix Targets 2026

As semiconductor giants, starting with Intel and TSMC, have been bringing in ASML’s High-NA EUV (high-numerical-aperture extreme ultraviolet) equipment to accelerate the development of advanced nodes, the elite group has now reportedly gained two new members: Samsung and SK hynix.

According to reports by Korean media outlets Sedaily and ZDNet, Samsung Electronics’ semiconductor (DS) division is said to be bringing in High-NA EUV equipment as early as the end of 2024. SK hynix’s High-NA equipment, which is expected to be applied to the mass production of advanced DRAM, will reportedly be introduced in 2026.

Samsung to Introduce First High-NA EUV Machine as soon as Year-End, Eyeing Full Commercialization by 2027

Sedaily, citing industry sources on August 13th, notes that Samsung is expected to begin bringing in its first High-NA EUV equipment, ASML’s EXE:5000, between the end of this year and the first quarter of next year. It is worth noting that Samsung’s first High-NA EUV equipment is likely to be used for foundry operations, the report reveals.

Among the semiconductor heavyweights advancing in the foundry business, Intel was the first to order the new High-NA EUV machines from ASML. In May, Intel was said to have secured its first batch of the new High-NA EUV lithography equipment from ASML, allegedly to be used for its 18A (1.8nm) and 14A (1.4nm) nodes.

TSMC, on the other hand, has been more concerned about the new machine’s price, which could reach as much as EUR 350 million (roughly USD 380 million) per unit, according to a previous report by Bloomberg. However, the same report, citing an ASML spokesperson, confirmed that the Dutch chip equipment giant will ship High-NA EUV equipment to TSMC by the end of this year.

Now, following its two major rivals in the foundry sector, Samsung aims to sharpen its competitive edge in advanced nodes by introducing High-NA EUV equipment as soon as year-end.

As the installation process is quite time-consuming, Samsung aims for the full commercialization of High-NA by 2027, supported by its efforts to build the related ecosystem, the report says.

According to the report, Samsung is working with electronic design automation (EDA) companies to design new types of masks, including curvilinear (curved) patterns for High-NA EUV that improve the sharpness of the circuits printed on wafers. This collaboration includes companies like Synopsys, a global leader in semiconductor EDA tools.

SK hynix’s High-NA EUV Reportedly to be Applied to 0a DRAM Production

According to the Sedaily report, ASML has produced eight EXE:5000 High-NA EUV units so far, with Intel taking the lion’s share by securing multiple units. Samsung is said to be the last customer to place an order from ASML’s first batch of units.

On the other hand, SK hynix, Samsung’s major rival in the memory sector, is reported to be bringing in ASML’s next-generation High-NA EUV machine, the EXE:5200, in 2026, ZDNet suggests.

Citing industry sources on August 16th, ZDNet notes that the HBM giant has been expanding its in-house team dedicated to High-NA EUV development.

Although specific plans, such as the fab where the equipment will be installed or the direction of additional investment, have not been disclosed, the technology is expected to be applied to the mass production of 0a (single-digit nanometer) DRAM at the earliest opportunity, the report indicates.

Read more

(Photo credit: ASML)

Please note that this article cites information from Sedaily, ZDNet and Bloomberg.
2024-08-16

[News] Booming AI Demand Helps Q2 Profits of South Korea’s Top 500 Companies Double

As global tech giants race to build AI infrastructure, South Korea’s top 500 companies, driven by semiconductor leaders such as Samsung and SK hynix, saw their profits surge in the second quarter, more than doubling compared with the same period last year, according to a report from Yonhap News Agency.

According to data released by the corporate evaluation website CEO Score on August 15th, 334 of South Korea’s top 500 companies by revenue had reported their second-quarter earnings as of August 14th.

The combined net profit of these companies reached KRW 59.4 trillion (approximately USD 43.6 billion), a 107.1% increase from the KRW 28.7 trillion recorded during the same period last year.

Their combined revenue, meanwhile, amounted to KRW 779.5 trillion, up 7% year-on-year from KRW 728.6 trillion.
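For readers who want to verify the growth rates, they follow directly from the reported totals; the snippet below simply recomputes them from the figures cited above (small differences come from rounding in the published numbers).

```python
# Recompute the year-on-year growth rates from the reported KRW totals
# (figures as cited from CEO Score; rounding explains small differences).
net_profit_2024, net_profit_2023 = 59.4, 28.7   # KRW trillion
revenue_2024, revenue_2023 = 779.5, 728.6       # KRW trillion

profit_growth = (net_profit_2024 / net_profit_2023 - 1) * 100
revenue_growth = (revenue_2024 / revenue_2023 - 1) * 100

print(f"Net profit growth: {profit_growth:.1f}%")   # ~107.0%, matching the reported 107.1%
print(f"Revenue growth:    {revenue_growth:.1f}%")  # ~7.0%, matching the reported 7%
```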

This significant growth was driven by the booming HBM demand from tech giants like NVIDIA, the report notes.

According to the Q2 performance report released by Samsung Electronics, the company’s operating profit reached KRW 10.44 trillion (approximately USD 7.5 billion), surging from the KRW 668.5 billion recorded in the same period last year.

Per the report, this surge solidified Samsung’s position as the most profitable company among South Korea’s top 500 enterprises in the second quarter.

On the other hand, SK hynix also turned a profit in the second quarter, recovering from a loss of KRW 2.9 trillion in the same period last year, with an operating profit of KRW 5.5 trillion.

Reportedly, this strong performance helped SK hynix to become South Korea’s second most profitable company, surpassing automotive giants Hyundai Motor and Kia Corp., which reported operating profits of KRW 4.3 trillion and KRW 3.6 trillion in Q2, respectively.

Meanwhile, SK On, the battery manufacturing arm of SK Group, recorded an operating loss of KRW 460.2 billion in the second quarter, the worst quarterly performance in the company’s history, dragged down by cooling global demand for electric vehicles.

Read more

(Photo credit: Samsung)

Please note that this article cites information from Yonhap News Agency.
2024-08-16

[News] China’s AMEC Sues U.S. for Blacklisting it as a Chinese Military Company

According to a report by Nikkei, Chinese semiconductor equipment manufacturer Advanced Micro-Fabrication Equipment Inc. (AMEC) has announced that it has filed a lawsuit against the U.S. Department of Defense (DOD) in a U.S. court over being blacklisted as a Chinese military-industrial company.

Reportedly, in January, AMEC was placed on the U.S. Department of Defense’s list of Chinese military-industrial enterprises operating in the United States.

The company argues that this action violates procedural due process and has severely harmed its reputation. AMEC asserts that it has never engaged in any military-related activities.

Neither AMEC nor the U.S. Department of Defense has commented on the matter.

The lawsuit comes days after the Financial Times reported that the U.S. Department of Defense planned to remove Chinese automotive LiDAR manufacturer Hesai Technology from the same blacklist.

At that time, per Nikkei’s report, Hesai had sued the DOD in May, and its CEO, David Li, dismissed the allegations of military ties as ridiculous.

AMEC stated that it had previously been listed as a Chinese military-industrial enterprise in January 2021 but was removed from the list in June of the same year, after it asked the U.S. Department of Defense to provide sufficient facts and evidence. The company’s CEO reportedly expressed shock at AMEC’s re-inclusion on the blacklist, calling it a mistake and baseless.

AMEC specializes in chip-making equipment with a focus on etching processes. The company reported first-quarter revenue of CNY 1.6 billion (approximately USD 223 million), a 31% increase compared to the same period in 2023.

Read more

(Photo credit: iStock)

Please note that this article cites information from Nikkei and the Financial Times.

2024-08-16

[News] 3D DRAM with Built-in AI Processing – a New Technology That Could Replace Existing HBM

NEO Semiconductor, a company focused on 3D DRAM and 3D NAND memory, has unveiled its latest 3D X-AI chip technology, which could potentially replace the existing HBM used in AI GPU accelerators.

Reportedly, this 3D DRAM comes with built-in AI processing capabilities, enabling data to be processed and generated inside the memory rather than having the mathematical output produced by an external processor. By cutting the volume of data that must move between memory and processors, it can relieve data bus congestion, thereby enhancing AI performance and reducing power consumption.

The 3D X-AI chip has an underlying neuron circuit layer that can process data stored in the 300 memory layers on the same chip. NEO Semiconductor states that with 8,000 neuron circuits performing AI processing in memory, 3D memory performance can be increased by 100 times, with memory density 8 times higher than current HBM. By reducing the amount of data processed in the GPU, power consumption can be cut by 99%.

A single 3D X-AI die contains 300 layers of 3D DRAM cells and one layer of neural circuits with 8,000 neurons. Each die has a capacity of 128 GB and supports up to 10 TB/s of AI processing throughput. Stacking 12 3D X-AI dies with HBM packaging can achieve 120 TB/s of processing throughput, and NEO estimates that this configuration may eventually deliver a 100-fold performance increase.
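The stack-level figure follows from straightforward multiplication of the per-die numbers quoted above; the short sketch below recomputes it (the per-die values are taken from the article, while the stack capacity is only implied by them rather than stated).

```python
# Back-of-the-envelope check of NEO Semiconductor's published per-die figures;
# the stack totals are simple products of the numbers quoted in the article.
die_capacity_gb = 128          # capacity per 3D X-AI die (GB)
die_throughput_tbps = 10       # AI processing throughput per die (TB/s)
dies_per_stack = 12            # dies stacked with HBM-style packaging

stack_throughput = dies_per_stack * die_throughput_tbps
stack_capacity = dies_per_stack * die_capacity_gb

print(f"Stack throughput: {stack_throughput} TB/s")  # 120 TB/s, matching the article
print(f"Stack capacity:   {stack_capacity} GB")      # 1,536 GB, implied by the per-die figure
```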

Andy Hsu, Founder & CEO of NEO Semiconductor, noted that current AI chips waste significant amounts of performance and power due to architectural and technological inefficiencies. The existing AI chip architecture stores data in HBM and relies on a GPU for all calculations.

He further claimed that separating data storage from processing has made the data bus an unavoidable performance bottleneck, leading to limited performance and high power consumption during large data transfers.

The 3D X-AI, as per Hsu, can perform AI processing within each HBM chip, which may drastically reduce the data transferred between HBM and the GPU, thus significantly improving performance and reducing power consumption.

Many companies are researching technologies to increase processing speed and communication throughput. As semiconductor speeds and efficiencies continue to rise, the data bus that transfers information between components becomes the bottleneck, so technologies of this kind will allow all components to accelerate together.

As per a report from tom’s hardware, companies like TSMC, Intel, and Innolux are already exploring optical technologies, looking for faster communications within the motherboard. By shifting some AI processing from the GPU to the HBM, NEO Semiconductor may reduce the workload and potentially achieve better efficiency than current power-hungry AI accelerators.

Read more

(Photo credit: NEO Semiconductor)

Please note that this article cites information from NEO Semiconductor and tom’s hardware.
