Articles


2024-03-29

[News] US Reportedly Targets Key Chip Manufacturing Equipment, Urges Allies to Tighten Maintenance Services

The US government is asking allies to stop domestic companies from servicing certain chip-making tools for Chinese customers, a US Commerce Department official said on March 27th, according to a report from Reuters.

“We’re pushing for not servicing of those key components, and these are the discussions we are having with our allies,” export controls chief Alan Estevez stated at an annual conference, as reported by Reuters. He added, “We are working with our allies to determine what is important to service and what is not important to service,” hinting that the US is not proposing restrictions on non-core components that Chinese firms can repair themselves.

The recent trigger for heightened vigilance in the US was Huawei’s launch of a new 5G smartphone in August last year, equipped with an advanced 7-nanometer chip manufactured domestically in China. According to a recent Bloomberg report, the chips supplied to Huawei by its partner SMIC are manufactured using equipment from US suppliers such as Applied Materials, as well as from Dutch company ASML.

Since then, the US has reportedly been increasing pressure on allies such as the Netherlands, Germany, South Korea, and Japan, urging them to further tighten restrictions on China’s access to advanced chip technology.

Additionally, the US has barred equipment suppliers like Applied Materials from providing maintenance services to sanctioned entities in China. However, neither the Netherlands nor Japan has imposed similar maintenance bans on their domestic companies, prompting the US to press its allies to follow suit.

Gina Raimondo, the US Secretary of Commerce, previously responded by stating that the US will take “as strong and effective action as possible” to uphold national security interests.

Companies that have been listed on the Entity List by the US Department of Commerce include Huawei, SMIC (Semiconductor Manufacturing International Corporation), and Shanghai Micro Electronics. Additionally, China’s other major memory manufacturer, Yangtze Memory Technology Corp, was added to this restriction list in 2022.


(Photo credit: iStock)

Please note that this article cites information from Reuters and Bloomberg.

2024-03-29

[News] Sales Decline Leads to Sharp Profit Drop for SMIC in 2023

SMIC, the leading semiconductor foundry in China, announced its financial results on March 28th. The company’s revenue for 2023 was USD 6.32 billion, a decrease of 13.1% compared to the previous year. However, the net profit for the full year plummeted by a staggering 50.4%, dropping to USD 903 million.
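For a quick sanity check, the implied 2022 baselines can be backed out from the 2023 figures and decline percentages quoted above (a rough sketch using only the article's numbers, not SMIC's officially reported 2022 results):

```python
# Back out the implied 2022 figures from the reported 2023 results
# and year-on-year declines (figures as stated in the article).
rev_2023 = 6.32e9       # USD, 2023 revenue
rev_decline = 0.131     # 13.1% YoY decrease
profit_2023 = 0.903e9   # USD, 2023 net profit
profit_decline = 0.504  # 50.4% YoY decrease

rev_2022 = rev_2023 / (1 - rev_decline)
profit_2022 = profit_2023 / (1 - profit_decline)

print(f"Implied 2022 revenue:    USD {rev_2022 / 1e9:.2f} billion")
print(f"Implied 2022 net profit: USD {profit_2022 / 1e9:.2f} billion")
```

The asymmetry is the story: a roughly 13% revenue decline translating into a roughly 50% profit decline reflects the operating leverage of a foundry running at 75% utilization.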

According to SMIC’s financial report cited by Sina Finance, SMIC’s gross profit margin for 2023 was 19.3%, with an average annual utilization rate of 75%, essentially meeting the initial guidance for the year. As of the end of 2023, SMIC’s total assets amounted to USD 47.8 billion, and the asset structure remained robust, with monthly capacity equivalent to 806,000 8-inch wafers.

Last year, SMIC’s revenue share from the China region increased from 74.2% to 80.1%, while revenue from the US region decreased from 20.8% to 16.4%, and revenue from the Europe and Asia region fell to 3.5%.

In terms of application categories, revenue from smartphone chip manufacturing dropped from 27% to 26.7%, while revenue from computer and tablet segments increased from 17.5% to 26.7%, and revenue from IoT and wearable devices decreased from 18% to 12.1%.

SMIC produced 6.074 million wafers last year, a 19.1% decrease year-on-year, with inventory increasing by 40.1% to 724,000 wafers. 8-inch wafers accounted for 26.3% of revenue, while 12-inch wafers accounted for 73.7%.

SMIC also announced its guidance for 2024, aiming for sales revenue growth no lower than the industry average, with a mid-single-digit increase expected.

Additionally, the company plans to continue advancing its announced 12-inch plant and capacity construction projects in 2024, with capital expenditure expected to remain roughly the same as the previous year.

SMIC pointed out that the decline in revenue in 2023 was primarily due to a decrease in wafer sales volume. Additionally, the decrease in gross profit was attributed to lower capacity utilization, reduced wafer sales, and changes in product mix. Moreover, the group is in a high investment phase, resulting in higher depreciation compared to 2022.


Regarding the development of China’s foundry industry, TrendForce previously reported that from 2023 to 2027, propelled by policies and incentives promoting local production and IC development, China’s share of global mature process capacity is anticipated to grow from 29% in 2023 to 33% by 2027. Leading the charge are giants like SMIC, HuaHong Group, and Nexchip. Globally, the ratio of mature (28nm and above) to advanced (16nm and below) processes is projected to hover around 7:3.


(Photo credit: SMIC)

Please note that this article cites information from Sina Finance.

2024-03-28

[News] Addressing AI PC Chip Solutions, Intel and Qualcomm Reportedly Unveil Respective Strategies

With the rapid advancement of AI-powered PC chips, industry giants like Intel, AMD, and Qualcomm, alongside various brands, are optimistic that this will be the inaugural year of AI PCs entering the market.

According to a report from Commercial Times, chip manufacturers are showcasing their AI PC chip solutions: newcomer Qualcomm is partnering with Google to launch the Snapdragon X, expected mid-year, while Intel is leveraging both its hardware and software resources.

Per the same report citing sources, laptop brands are beginning to plan AI PC-related products for the second half of the year. Recently, companies like Dell, Lenovo, and HP have held internal meetings with the Taiwan supply chain. In addition to contract manufacturers, IC design is also a key focus, with companies like MediaTek and Realtek being actively engaged.

Reportedly, each company currently has its own perspective on the AI PC, with many opting to integrate AI accelerator chips. However, Microsoft and Intel have jointly defined an AI PC as requiring an NPU, CPU, and GPU, along with support for Microsoft’s Copilot. They are also incorporating a physical Copilot key directly on the keyboard, positioning themselves as the standard setters.

To adapt to significant changes in software and hardware, Intel is expanding its ecosystem. In addition to AI application software, they are incorporating Independent Hardware Vendors (IHVs) into their AI PC acceleration program.

This collaboration assists IHV partners in preparing, optimizing, and leveraging hardware opportunities in AI PC applications. Support is provided from the early stages of hardware solutions and platform development, offering numerous opportunities for IC design companies in Taiwan to enter Intel’s supply chain during the nascent stage of AI PC.

Qualcomm is reportedly maintaining its partnership with Google as it ventures into the AI PC market this year with the Snapdragon X Elite. The two companies have previously collaborated closely in the realm of Android smartphones, with many devices equipped with Snapdragon chipsets already using Google software.

Intel estimates that by the end of this year, the market will introduce over 300 AI acceleration applications, further advancing its AI software framework and enhancing the developer ecosystem. Intel further predicts that by the end of 2025, there will be over 100 million PCs shipped with AI accelerators, indicating immense opportunities in the AI PC market. However, competition is fierce, and success in this market requires innovative products that are differentiated and meet user needs. With both Intel and Qualcomm unveiling unique strategies, the AI PC market is poised for significant developments.

For AI PC, TrendForce believes that due to the high costs of upgrading both software and hardware, early development will be focused on high-end business users and content creators. This targeted group has a strong demand for leveraging AI processing capabilities to improve productivity efficiency and can also benefit immediately from related applications, making them the first-generation primary users.

The emergence of AI PCs is not necessarily expected to stimulate additional PC purchase demand. Instead, most upgrades to AI PC devices will occur naturally as part of the business equipment replacement cycle projected for 2024.

Nevertheless, looking to the long term, the potential development of more diverse AI tools—along with a price reduction—may still lead to a higher adoption rate of consumer AI PCs.


(Photo credit: Intel)

Please note that this article cites information from Commercial Times and Intel.

2024-03-28

[News] AI Confronts an ‘Energy Crisis’? Electricity May Have Emerged as a Challenge

Could AI be heading toward an “energy crisis”? Speculation suggests that a Microsoft engineer involved in the GPT-6 training cluster project has warned that deploying over 100,000 H100 GPUs in a single state might trigger a collapse of the power grid. Despite signs of OpenAI’s progress in training GPT-6, the availability of electricity could emerge as a critical bottleneck.

Kyle Corbitt, co-founder and CEO of AI startup OpenPipe, revealed in a post on social platform X that he recently spoke with a Microsoft engineer responsible for the GPT-6 training cluster project. The engineer complained that deploying InfiniBand-level links between GPUs across regions has been a painful task.

Continuing the conversation, Corbitt asked, “why not just colocate the cluster in one region?” The Microsoft engineer replied, “Oh yeah, we tried that first. We can’t put more than 100K H100s in a single state without bringing down the power grid.”

At the just-concluded CERAWeek 2024, attended by top executives from the global energy industry, discussions revolved around the advancement of AI technology in the sector and the significant demand for energy driven by AI.

As per a report from Bloomberg, during his speech, Toby Rice, chief of the largest US natural gas driller, EQT Corp., cited a forecast predicting AI could gobble up more power than households by 2030.

Additionally, Sam Altman from OpenAI has expressed concerns about the energy, particularly electricity, demands of AI. Per a report from Reuters, at the Davos Forum earlier this year, he stated that AI’s development requires breakthroughs in energy, as AI is expected to bring about significantly higher electricity demands than anticipated.

According to a March 9th report by The New Yorker citing data from Alex de Vries, a data expert at the Dutch National Bank, ChatGPT consumes over 500,000 kilowatt-hours of electricity daily to process around 200 million user requests, more than 17,000 times the daily electricity consumption of an average American household. As for search giant Google, if it were to use AIGC for every user search, its annual electricity consumption would increase to around 29 billion kilowatt-hours, surpassing the annual consumption of countries like Kenya and Guatemala.
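The household comparison above is easy to sanity-check with a few lines of arithmetic; the roughly 29 kWh/day household figure derived below is an implication of the quoted numbers, not a value stated in the report:

```python
# Sanity-check the New Yorker figures quoted above:
# 500,000 kWh/day for ChatGPT vs. "over 17,000 times" a US household.
chatgpt_kwh_per_day = 500_000
household_multiple = 17_000
daily_requests = 200_000_000

implied_household_kwh = chatgpt_kwh_per_day / household_multiple
print(f"Implied household use: {implied_household_kwh:.1f} kWh/day")
# ~29 kWh/day, consistent with the roughly 10,000+ kWh a US household
# consumes per year (an outside assumption used only for this check).

per_request_wh = chatgpt_kwh_per_day * 1000 / daily_requests
print(f"Implied energy per request: {per_request_wh:.1f} Wh")  # 2.5 Wh
```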

Looking back at 2022, before AI had sparked such widespread enthusiasm, data centers in China and the United States accounted for 3% and 4%, respectively, of each country’s total societal electricity consumption.

As global computing power gradually increases, a March 24th research report from Huatai Securities predicts that by 2030, the total electricity consumption of data centers in China and the United States will reach approximately 0.95/0.65 trillion kilowatt-hours and 1.7/1.2 trillion kilowatt-hours respectively, representing over 3.5 times and 6 times that of 2022. In an optimistic scenario, by 2030, the AI electricity consumption in China/US will account for 20%/31% of the total societal electricity consumption in 2022.


(Photo credit: Taiwan Business Topics)

Please note that this article cites information from Kyle Corbitt’s X account, Bloomberg, Reuters, and Liberty Times Net.

2024-03-28

[News] Memory Manufacturers Vie for HBM3e Market

Recently, South Korean media Alphabiz reported that Samsung may exclusively supply 12-layer HBM3e to NVIDIA.

The report indicates that NVIDIA is set to commence large-scale purchases of Samsung Electronics’ 12-layer HBM3e as early as September, with Samsung serving as the exclusive supplier of the 12-layer product.

NVIDIA CEO Jensen Huang, as reported by Alphabiz, left his signature “Jensen Approved” on a physical 12-layer HBM3e product from Samsung Electronics at GTC 2024, which seems to suggest NVIDIA’s recognition of Samsung’s HBM3e product.

HBM is characterized by its high bandwidth, high capacity, low latency, and low power consumption. With the surge in the artificial intelligence (AI) industry, the acceleration of AI large-model applications has driven continuous growth in demand in the high-performance memory market.

According to TrendForce’s data, HBM market value accounted for approximately 8.4% of the overall DRAM industry in 2023, and this percentage is projected to expand to 20.1% by the end of 2024.

Senior Vice President Avril Wu notes that by the end of 2024, the DRAM industry is expected to allocate approximately 250K wafers per month (14% of total capacity) to producing HBM TSV, with an estimated annual supply bit growth of around 260%.
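As a rough check on these figures, the implied industry-wide DRAM capacity can be backed out from the 250K-per-month HBM TSV allocation and its 14% share (a sketch based only on the numbers above, not a TrendForce-published total):

```python
# Back out the implied total DRAM wafer capacity from the article's
# figures: 250K wafers/month allocated to HBM TSV at a 14% share.
hbm_tsv_wpm = 250_000   # wafers per month for HBM TSV
hbm_share = 0.14        # share of total DRAM capacity

total_dram_wpm = hbm_tsv_wpm / hbm_share
print(f"Implied total DRAM capacity: {total_dram_wpm / 1e6:.2f}M wafers/month")
```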

HBM3e: Three Major Original Manufacturers Kick off Fierce Rivalry

Following the debut of the world’s first TSV HBM product in 2014, HBM memory technology has now iterated to HBM3e after nearly 10 years of development.

From the perspective of original manufacturers, competition in the HBM3e market primarily revolves around Micron, SK Hynix, and Samsung. These three major manufacturers reportedly provided 8-hi (24GB) samples in late July, mid-August, and early October 2023, respectively, and this year they have kicked off fierce competition in the HBM3e market by introducing their latest products.

On February 27th, Samsung announced the launch of its first 12-layer stacked HBM3e DRAM–HBM3e 12H, which marks Samsung’s largest-capacity HBM product to date, boasting a capacity of up to 36GB. Samsung stated that it has begun offering samples of the HBM3e 12H to customers and anticipates starting mass production in the second half of this year.

In early March, Micron announced that it had commenced mass production of its HBM3e solution. The company stated that the NVIDIA H200 Tensor Core GPU will adopt Micron’s 8-layer stacked HBM3e memory with 24GB capacity and shipments are set to begin in the second quarter of 2024.

On March 19th, SK Hynix announced the successful large-scale production of its new ultra-high-performance memory product, HBM3e, designed for AI applications. This marks the world’s first customer supply of HBM3e, the highest-performance DRAM currently in existence.

A previous report from TrendForce has indicated that, starting in 2024, the market’s attention will shift from HBM3 to HBM3e, with expectations for a gradual ramp-up in production through the second half of the year, positioning HBM3e as the new mainstream in the HBM market.

TrendForce reports that SK hynix led the way with its HBM3e validation in the first quarter, closely followed by Micron, which plans to start distributing its HBM3e products toward the end of the first quarter, in alignment with NVIDIA’s planned H200 deployment by the end of the second quarter.

Samsung, slightly behind in sample submissions, is expected to complete its HBM3e validation by the end of the first quarter, with shipments rolling out in the second quarter. With Samsung having already made significant strides in HBM3 and its HBM3e validation expected to be completed soon, the company is poised to significantly narrow the market share gap with SK Hynix by the end of the year, reshaping the competitive dynamics in the HBM market.


(Photo credit: SK Hynix)

Please note that this article cites information from DRAMeXchange.

 

