Articles


2024-05-08

[News] Apple Unveiled M4 Chip for AI, Heralding a New Era of AI PC

On May 7 (U.S. time), Apple launched its latest self-developed computer chip, M4, which debuts in the new iPad Pro. Apple claims M4 boasts its fastest-ever Neural Engine, capable of performing up to 38 trillion operations per second, surpassing the neural processing unit of any AI PC available today.

Apple stated that the neural engine, along with the next-generation machine learning accelerator in the CPU, high-performance GPU, and higher-bandwidth unified memory, makes the M4 an extremely powerful AI chip.

  • Teardown of M4 Chip

Internally, M4 consists of 28 billion transistors, slightly more than M3. In terms of process node, the chip is built on the second-generation 3nm technology, functioning as a system-on-chip (SoC) that further enhances the efficiency of Apple’s chips.

Reportedly, M4 utilizes the second-generation 3nm technology in line with TSMC’s previously introduced N3E process. According to TSMC, while N3E’s density isn’t as high as N3B, it offers better performance and power characteristics.

In terms of core architecture, the M4's new CPU features up to 10 cores, comprising 4 performance cores and 6 efficiency cores, two more efficiency cores than the M3.

The new 10-core GPU builds upon the next-generation GPU architecture introduced with M3 and brings Dynamic Caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to the iPad for the first time. M4 significantly improves professional rendering performance in applications like Octane, which now runs 4 times faster than on M2.

Compared to the powerful M2 in the previous iPad Pro generation, M4 boasts a 1.5x improvement in CPU performance. Whether processing complex orchestral files in Logic Pro or adding demanding effects to 4K videos in LumaFusion, M4 can enhance the performance of the entire professional workflow.

As for memory, the M4 chip adopts faster LPDDR5X, achieving a unified memory bandwidth of 120GB/s. LPDDR5X is a mid-cycle update of the LPDDR5 standard, pushing data rates beyond LPDDR5's 6400 MT/s ceiling: LPDDR5X currently reaches up to 8533 MT/s, although the memory in M4 runs at approximately 7700 MT/s.
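As a rough cross-check of these figures, peak memory bandwidth is simply the per-pin data rate multiplied by the bus width. The short sketch below assumes a 128-bit (16-byte) unified memory bus, an assumption made for illustration rather than a confirmed M4 specification:

```python
# Rough cross-check: peak bandwidth = data rate x bus width (in bytes).
# Assumes a 128-bit (16-byte) unified memory bus; this bus width is an
# assumption for illustration, not a confirmed M4 specification.
transfer_rate_mt_s = 7700          # approximate per-pin data rate, MT/s
bus_width_bytes = 128 // 8         # assumed 128-bit bus = 16 bytes per transfer

bandwidth_gb_s = transfer_rate_mt_s * 1e6 * bus_width_bytes / 1e9
print(f"Estimated peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~123 GB/s
```

Under the same assumption, the quoted 120GB/s corresponds to a data rate of 7500 MT/s, which is why the ~7700 MT/s and 120GB/s figures line up only approximately.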

Industry data shows that Apple's M3 supports up to 24GB of memory, but there is no further indication yet of whether Apple will raise the maximum memory capacity with M4. The new iPad Pro models are equipped with 8GB or 16GB of DRAM, depending on the specific model.

The new Neural Engine integrated in the M4 chip has 16 cores and runs at up to 38 trillion operations per second, 60 times faster than the first Neural Engine in the Apple A11 Bionic chip.
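That roughly 60x figure is consistent with the commonly cited rating of 600 billion operations per second for the A11 Bionic's original Neural Engine; the quick back-of-envelope check below is our own calculation, not a figure from the article:

```python
# Back-of-envelope check of the "60 times faster" claim.
m4_ops_per_s = 38e12    # M4 Neural Engine: 38 trillion operations per second
a11_ops_per_s = 600e9   # A11 Bionic Neural Engine: commonly cited 600 billion ops/s

speedup = m4_ops_per_s / a11_ops_per_s
print(f"M4 vs. A11 Neural Engine: ~{speedup:.0f}x")  # ~63x, in line with "60 times"
```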

Additionally, the M4 chip adopts an entirely new display engine, delivering high precision, color accuracy, and brightness uniformity on the Ultra Retina XDR display, which combines the light from two OLED panels in what Apple calls its most advanced display.

Apple’s Senior Vice President of Hardware Technologies, Johny Srouji, stated that M4’s high-efficiency performance and its innovative display engine enable the iPad Pro’s slim design and groundbreaking display. Fundamental improvements in the CPU, GPU, neural engine, and memory system make M4 a perfect fit for the latest AI-driven applications. Overall, this new chip makes the iPad Pro the most powerful device of its kind.

  • 2024 Marks the First Year of AI PC Era

AI has become a focal point worldwide. Beyond markets like servers, the consumer market is embracing a new opportunity: the AI PC.

Previously, TrendForce anticipated that 2024 would mark a significant expansion in edge AI applications, leveraging the groundwork laid by AI servers and branching into AI PCs and other terminal devices. Edge AI applications with demanding requirements will shift to AI PCs, dispersing the workload of AI servers and expanding the potential scale of AI usage. However, the definition of an AI PC remains unclear.

According to Apple, the Neural Engine in M4 is Apple's most powerful to date, outperforming the neural processing unit in any AI PC available today. Tim Millet, Vice President of Apple Platform Architecture, stated that M4 delivers the same performance as M2 while using only half the power, and matches the performance of the latest PC chips found in thin-and-light laptops at only a quarter of the power consumption.

Meanwhile, frequent developments from other major players suggest increasingly fierce competition in the AI PC sector, and the industry holds high expectations for AI PCs. Microsoft has dubbed 2024 the “Year of AI PC.” Based on the estimated product launch timelines of PC brand manufacturers, Microsoft predicts that half of commercial computers will be AI PCs by 2026.

Intel has emphasized that the AI PC will be a turning point for the revival of the PC industry and will play a crucial role among the industry highlights of 2024. Intel CEO Pat Gelsinger previously stated at a conference that, driven by demand for AI PCs and Windows update cycles, customers continue to increase their processor orders with Intel. As such, Intel's AI PC CPU shipments in 2024 are expected to exceed the original target of 40 million units.

TrendForce posits that AI PCs are expected to meet Microsoft's benchmark of 40 TOPS of computational power. With new products meeting this threshold expected to ship in late 2024, significant growth is anticipated in 2025, especially following Intel's release of its Lunar Lake CPU by the end of 2024.

The AI PC market is currently propelled by two key drivers. First, demand for terminal applications, dominated by Microsoft through its Windows OS and Office suite: Microsoft is poised to integrate Copilot into the next generation of Windows, making Copilot a fundamental requirement for AI PCs.

Second, Intel, as a leading CPU manufacturer, is advocating for AI PCs that combine CPU, GPU, and NPU architectures to enable a variety of terminal AI applications.


(Photo credit: Apple)

Please note that this article cites information from WeChat account DRAMeXchange.

2024-05-08

[News] U.S. Imposes Further Sanctions, Revoking Intel and Qualcomm’s License to Supply Chips to Huawei

The U.S. government has reportedly revoked the licenses of Intel and Qualcomm to supply semiconductor chips used in laptops and handsets to Huawei. According to sources cited by Reuters, some companies received notices on May 7th, and the revocation of the licenses took effect immediately.

In April, Huawei unveiled its first AI-capable laptop, the MateBook X Pro, equipped with an Intel Core Ultra 9 processor. The announcement drew criticism from Republican lawmakers in the United States, who objected that the Commerce Department had allowed Intel to export chips to Huawei. Notably, sources cited in a Reuters report on March 12th stated that Intel's competitor AMD had applied for a similar license to sell comparable chips in early 2021 but did not receive approval from the US Department of Commerce.

In response to the matter surrounding Intel and Huawei, the Commerce Department confirmed the revocation of some export licenses to Huawei but declined to provide further details. Still, revoking the licenses not only hurts Huawei but may also impact U.S. suppliers that do business with the company.

According to a report from Bloomberg, Qualcomm, which obtained a license in 2020, has been selling older 4G networking chips to Huawei, but the company expects this business to gradually decline next year.

Another report from Reuters also indicated that Qualcomm continues to license its 5G technology portfolio to Huawei, under which the latter has been able to use HiSilicon's 5G chips since last year, raising concerns about potential violations of U.S. sanctions. Additionally, according to the same report, documents submitted by Qualcomm this month indicated that its patent agreement with Huawei will expire in fiscal year 2025, earlier than expected, prompting negotiations on a renewal to begin sooner. Qualcomm has not responded to these reports.

Due to concerns over potential espionage activities by Huawei, the White House placed Huawei on its trade restriction list in 2019, requiring suppliers to apply for licenses before shipping goods to blacklisted companies. Despite this, Huawei's suppliers have still obtained licenses worth billions of US dollars to sell goods and technology to the Chinese tech giant, including a license allowing Intel to sell CPUs to Huawei starting in 2020.

Republican Representative Elise Stefanik believes that revoking the licenses will strengthen U.S. national security, protect U.S. intellectual property, and thereby weaken Communist China's ability to advance technologically.

Previously, U.S. Commerce Secretary Gina Raimondo pointed out that the new chips introduced by Huawei are not as capable and lag behind U.S. chips by several years in performance, indicating that U.S. export controls on China are effective.


(Photo credit: iStock)

Please note that this article cites information from Reuters and Bloomberg.

2024-05-08

[News] Intel Reportedly Collaborates with 14 Japanese Companies to Develop Semiconductor Backend Process Technology

According to a report from Nikkei News, US chip giant Intel will join forces with 14 Japanese companies to develop automation technology for “backend” semiconductor processes such as packaging. The aim is reportedly to achieve automation by 2028, highlighting efforts by the US and Japan to collaborate on reducing geopolitical risks in the semiconductor supply chain.

Intel’s collaborating partners include Japanese firms such as Omron, Yamaha Motor, Resonac, and Shin-Etsu Polymer, a subsidiary of Shin-Etsu Chemical Industry. The alliance, led by Intel Japan’s Managing Director Kunimasa Suzuki, plans to invest hundreds of billions of Japanese Yen in research and development, aiming to demonstrate technological achievements before 2028.

In the semiconductor field, as “frontend” process technologies such as circuit formation approach physical limits, the focus of technological competition is gradually shifting to “backend” processes such as chip stacking to enhance performance.

Most semiconductor backend processes are currently carried out by manual labor, which has concentrated these factories in China and Southeast Asian countries with abundant labor. To establish plants in countries like the US and Japan, where labor costs are higher, industry players consider automation technology a crucial prerequisite.

Led by Intel, the alliance plans to establish backend production lines in Japan in the coming years, aiming for full automation. They also intend to standardize backend technologies to manage and control manufacturing, inspection, and equipment processing procedures under a single system.

According to data from the Japanese Ministry of Economy, Trade and Industry, Japanese companies currently hold a 30% share of the global semiconductor production equipment market and dominate approximately half of the semiconductor materials market.

It is widely expected that the Japanese Ministry of Economy, Trade and Industry will allocate hundreds of billions of Japanese Yen in subsidies for this project. The Japanese government has allocated approximately JPY 4 trillion (around USD 26 billion) from fiscal year 2021 to 2023 to support key industries contributing to economic security.
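For reference, the quoted conversion implies an exchange rate of roughly JPY 154 per US dollar, consistent with rates in the first half of 2024; the quick check below is our own arithmetic, not a figure from the report:

```python
# Implied exchange rate behind the quoted "JPY 4 trillion (around USD 26 billion)".
subsidy_jpy = 4e12   # approximately JPY 4 trillion
subsidy_usd = 26e9   # approximately USD 26 billion

implied_rate = subsidy_jpy / subsidy_usd
print(f"Implied exchange rate: ~{implied_rate:.0f} JPY per USD")  # ~154 JPY/USD
```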

In April of this year, Japan approved a subsidy of JPY 53.5 billion to Rapidus to assist in backend technology development. Additionally, there are considerations to offer incentives to attract global backend capacity providers to establish operations in Japan.

Japanese and American policymakers are attempting to keep most of the chip manufacturing processes within their own territories, aiming to reduce risks in critical supply chains.

TrendForce has previously reported that Japan’s resurgence in the semiconductor arena is palpable, with the Ministry of Economy, Trade, and Industry fostering multi-faceted collaborations with the private sector. With a favorable exchange rate policy aiding factory construction and investments, the future looks bright for exports.


(Photo credit: Intel)

Please note that this article cites information from Nikkei News.

2024-05-08

[Insights] Memory Spot Price Update: Buyer Shift to Spot Inquiries Due to DRAM Module Contract Price Increase

According to TrendForce's latest memory spot price trend report, a significant increase in DRAM module contract prices has led some buyers to turn to spot inquiries, resulting in partial transactions at lower prices. Meanwhile, NAND Flash spot prices have loosened as certain module manufacturers, more cautious about future wafer price trends, reduce their inventory buildup. Details are as follows:

DRAM Spot Price:

Spot prices of DRAM chips have yet to rebound, and the overall chip transaction volume has been limited due to the tepid demand situation. Regarding DRAM modules, some spot transactions have been arranged in the lower price range as a few buyers experiencing significant increases in the contract market have sought quotes in the spot market. Currently, the May Day holiday is taking place in China, so the spot market has been rather quiet in recent days. Looking ahead, an important market indicator is whether inventory-related preparations for the 618 Sales Event will lead to a notable demand increase. The average spot price of the mainstream chips (i.e., DDR4 1Gx8 2666MT/s) has not changed from last week and is holding steady at US$1.949.

NAND Flash Spot Price:

A number of module houses that are reserved about future wafer price trends are no longer building significant inventory, tightening spending to preserve the cash needed for operations. This has led to a loosening of spot prices. Suppliers are attempting to avoid another predicament of oversupply by controlling product availability, though such action has had only a limited effect on raising packaged die prices. Spot prices of 512Gb TLC wafers have dropped by 0.99% this week, arriving at US$3.708.
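For reference, the reported 0.99% weekly decline implies a prior-week price of roughly US$3.745 for 512Gb TLC wafers; the back-calculation below is our own and assumes the percentage is quoted week over week:

```python
# Back-calculate last week's 512Gb TLC wafer spot price from this week's
# price and the reported 0.99% week-over-week decline.
current_price_usd = 3.708
weekly_decline = 0.0099

previous_price_usd = current_price_usd / (1 - weekly_decline)
print(f"Implied prior-week price: ~US${previous_price_usd:.3f}")  # ~US$3.745
```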

2024-05-08

[News] SK Hynix Plans to Manufacture 3D NAND over 400 Layers at -70°C with the Help of TEL

SK Hynix has been exploring the potential of manufacturing 3D NAND at ultra-low temperatures, which may enable the South Korean memory giant to produce its new-generation product with over 400 layers, South Korea’s media outlet TheElec revealed.

According to the report, instead of testing in its own wafer fabs, SK Hynix has sent test wafers to Tokyo Electron (TEL) to test the performance of the latter’s latest cryogenic etching tool. Unlike existing ones, which usually operate at 0~30°C, the Japanese fab equipment maker’s new etching equipment is capable of performing high-speed etching at -70°C.

According to a press release by TEL, its latest memory channel hole etch technology enables a 10-µm-deep etch with a high aspect ratio in just 33 minutes. It can also reduce the global warming potential by 84% compared with previous technologies.
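Those figures work out to an average etch rate on the order of 0.3 µm per minute over the full channel-hole depth. The rough calculation below is for illustration only; the instantaneous etch rate varies with depth and process conditions:

```python
# Average etch rate implied by TEL's figures: a 10-um-deep channel hole
# etched in 33 minutes. This is only the overall average; the rate at any
# given depth will differ.
etch_depth_um = 10.0
etch_time_min = 33.0

avg_rate_um_per_min = etch_depth_um / etch_time_min
print(f"Average etch rate: ~{avg_rate_um_per_min:.2f} um/min "
      f"(~{avg_rate_um_per_min * 1000:.0f} nm/min)")  # ~0.30 um/min, ~303 nm/min
```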

Industry sources cited by the report indicated that SK Hynix plans to utilize a triple-stack structure for 321-layer NAND. When etching deep channel holes, achieving uniformity is a major challenge, which is why companies usually adopt double or even triple-stack structures for 3D NAND manufacturing rather than attempting to etch the full vertical hole in a single step.

With the help of TEL's new etching equipment, it may become possible to manufacture 3D NAND with over 400 layers using fewer stacked decks, allowing memory manufacturers to reduce costs through simplified processes. SK Hynix aims to produce 3D NAND products with over 400 layers, and depending on their performance, these NAND chips may adopt single- or double-stack structures.

 

(Photo credit: SK hynix)

Please note that this article cites information from TheElec.
