Apple


2024-05-08

[News] Apple Unveiled M4 Chip for AI, Heralding a New Era of AI PC

On May 7 (US time), Apple launched its latest self-developed computer chip, M4, which makes its debut in the new iPad Pro. M4 allegedly boasts Apple’s fastest-ever neural engine, capable of performing up to 38 trillion operations per second, surpassing the neural processing units of any AI PC available today.

Apple stated that the neural engine, along with the next-generation machine learning accelerator in the CPU, high-performance GPU, and higher-bandwidth unified memory, makes the M4 an extremely powerful AI chip.

  • Teardown of M4 Chip

Internally, M4 consists of 28 billion transistors, slightly more than M3. The chip is built on second-generation 3nm technology as a system-on-chip (SoC), further improving the efficiency of Apple’s silicon.

Reportedly, the second-generation 3nm technology used in M4 corresponds to TSMC’s previously introduced N3E process. According to TSMC, while N3E’s density is not as high as N3B’s, it offers better performance and power characteristics.

In terms of core architecture, the M4 chip’s new CPU features up to 10 cores, comprising 4 performance cores and 6 efficiency cores, two more efficiency cores than M3.

The new 10-core GPU builds upon the next-generation GPU architecture introduced with M3 and brings dynamic caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to the iPad for the first time. M4 significantly improves professional rendering performance in applications like Octane, now 4 times faster than the M2.

Compared to the powerful M2 in the previous iPad Pro generation, M4 boasts a 1.5x improvement in CPU performance. Whether processing complex orchestral files in Logic Pro or adding demanding effects to 4K videos in LumaFusion, M4 can enhance the performance of the entire professional workflow.

As for memory, the M4 chip adopts faster LPDDR5X, achieving a unified memory bandwidth of 120GB/s. LPDDR5X is a mid-cycle update of the LPDDR5 standard, which tops out at 6400 MT/s; LPDDR5X currently reaches speeds of up to 8533 MT/s, although the memory clock speed of M4 only reaches approximately 7700 MT/s.
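
For a rough sense of how the data rate and the 120GB/s figure relate, peak bandwidth is simply the per-pin data rate multiplied by the width of the memory bus. The short sketch below assumes a 128-bit (16-byte) unified memory interface, an illustrative assumption rather than a figure from the article.

```python
# Minimal sketch: peak memory bandwidth = data rate x bus width.
# The 128-bit (16-byte) bus width is an illustrative assumption,
# not a specification cited in the article.

def peak_bandwidth_gb_s(data_rate_mt_s: float, bus_width_bits: int = 128) -> float:
    """Peak bandwidth in GB/s for a given data rate (MT/s) and bus width (bits)."""
    bytes_per_transfer = bus_width_bits / 8
    return data_rate_mt_s * 1e6 * bytes_per_transfer / 1e9

print(peak_bandwidth_gb_s(6400))  # LPDDR5 ceiling      -> ~102 GB/s
print(peak_bandwidth_gb_s(7700))  # M4's reported rate  -> ~123 GB/s, roughly the 120GB/s cited above
print(peak_bandwidth_gb_s(8533))  # LPDDR5X ceiling     -> ~137 GB/s
```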

Industry data shows that Apple M3 supports up to 24GB of memory, but there is no further information indicating whether M4 will raise that ceiling. The new iPad Pro models will be equipped with 8GB or 16GB of DRAM, depending on the specific model.

The new neural engine integrated in the M4 chip has 16 cores and can run at a speed of 38 trillion operations per second, which is 60 times faster than the first neural engine on the Apple A11 Bionic chip.
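
As a quick sanity check on that comparison, the arithmetic below uses Apple’s published 600-billion-operations-per-second rating for the A11 Bionic’s original neural engine; that baseline figure comes from Apple’s own specifications rather than this article.

```python
# Baseline from Apple's published A11 Bionic specifications (not from this article).
a11_ops_per_sec = 600e9   # first-generation neural engine, 2017
m4_ops_per_sec = 38e12    # figure cited above for M4

print(m4_ops_per_sec / a11_ops_per_sec)  # ~63x, which Apple rounds to "60 times faster"
```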

Additionally, the M4 chip adopts a new display engine designed with cutting-edge technology, achieving impressive precision, color accuracy, and brightness uniformity on the Ultra Retina XDR display, which combines the light from two OLED panels to create what Apple calls its most advanced display.

Apple’s Senior Vice President of Hardware Technologies, Johny Srouji, stated that M4’s high-efficiency performance and its innovative display engine enable the iPad Pro’s slim design and groundbreaking display. Fundamental improvements in the CPU, GPU, neural engine, and memory system make M4 a perfect fit for the latest AI-driven applications. Overall, this new chip makes the iPad Pro the most powerful device of its kind.

  • 2024 Marks the First Year of the AI PC Era

Currently, AI has emerged as a global focal point. Beyond markets like servers, the consumer market is embracing a new opportunity: the AI PC.

Previously, TrendForce anticipated that 2024 would mark a significant expansion in edge AI applications, leveraging the groundwork laid by AI servers and branching into AI PCs and other terminal devices. Edge AI applications with demanding requirements will shift to AI PCs, dispersing the workload of AI servers and expanding the potential scale of AI adoption. However, the definition of an AI PC remains unclear.

According to Apple, the neural engine in M4 is Apple’s most powerful neural engine to date, outperforming any neural processing unit in any AI PC available today. Tim Millet, Vice President of Apple Platform Architecture, stated that M4 provides the same performance as M2 while using only half the power. Compared to next-generation PC chips in various thin-and-light laptops, M4 delivers the same performance at just a quarter of the power consumption.

Meanwhile, frequent moves by other major players point to increasingly fierce competition in the AI PC sector, and the industry holds high expectations for AI PCs. Microsoft regards 2024 as the “Year of AI PC.” Based on the estimated product launch timelines of PC brand manufacturers, Microsoft predicts that half of commercial computers will be AI PCs by 2026.

Intel has emphasized that the AI PC will be a turning point for the revival of the PC industry and will play a crucial role among the industry highlights of 2024. Intel CEO Pat Gelsinger previously stated at a conference that, driven by demand for AI PCs and Windows update cycles, customers continue to place additional processor orders with Intel. As such, Intel’s AI PC CPU shipments in 2024 are expected to exceed the original target of 40 million units.

TrendForce posits that AI PCs are expected to meet Microsoft’s benchmark of 40 TOPS of computational power. With new products meeting this threshold expected to ship in late 2024, significant growth is anticipated in 2025, especially following Intel’s release of its Lunar Lake CPU by the end of 2024.

The AI PC market is currently propelled by two key drivers. Firstly, demand from terminal applications, dominated mainly by Microsoft through its Windows OS and Office suite, is a significant factor. Microsoft is poised to integrate Copilot into the next generation of Windows, making Copilot a fundamental requirement for AI PCs.

Secondly, Intel, as a leading CPU manufacturer, is advocating for AI PCs that combine CPU, GPU, and NPU architectures to enable a variety of terminal AI applications.


(Photo credit: Apple)

Please note that this article cites information from the WeChat account DRAMeXchange.

2024-05-08

[News] Rise of In-House Chips: 5 Tech Giants at the Forefront

With the skyrocketing demand for AI, cloud service providers (CSPs) are hastening the development of in-house chips. Apple, making a surprising move, is actively developing a data center-grade chip codenamed “Project ACDC,” signaling its foray into the realm of AI accelerators for servers.

As per a report from The Wall Street Journal, Apple is developing an AI accelerator chip for data center servers under the project name “Project ACDC.” Sources familiar with the matter revealed that Apple is collaborating closely with TSMC, but the timing of the new chip’s release remains uncertain.

Industry sources cited in a report from Commercial Times disclosed that Apple’s AI accelerator chip will be developed using TSMC’s 3-nanometer process. Servers equipped with this chip are expected to debut next year, further enhancing the performance of Apple’s data centers and future cloud-based AI tools.

Industry sources cited in Commercial Times‘ report reveal that cloud service providers (CSPs) frequently choose TSMC’s 5 and 7-nanometer processes for their in-house chip development, capitalizing on TSMC’s mature advanced processes to enhance profit margins. Additionally, the same report also highlights that major industry players including Microsoft, AWS, Google, Meta, and Apple rely on TSMC’s advanced processes and packaging, which significantly contributes to the company’s performance.

Apple has consistently been an early adopter of TSMC’s most advanced processes, relying on their stability and technological leadership. Apple’s adoption of the 3-nanometer process and CoWoS advanced packaging next year is deemed the most reasonable solution, which will also help boost TSMC’s 3-nanometer production capacity utilization.

Generative AI models are rapidly evolving, enabling businesses and developers to address complex problems and discover new opportunities. However, large-scale models with billions or even trillions of parameters pose more stringent requirements for training, tuning, and inference.
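
To put those parameter counts in perspective, a minimal back-of-the-envelope sketch is shown below. The 2-bytes-per-weight (FP16/BF16) assumption and the rough per-parameter overhead for optimizer state are common rules of thumb used purely for illustration; they are not figures from the report.

```python
# Back-of-the-envelope memory estimates for large models.
# Assumptions (illustrative, not from the report): 2-byte FP16/BF16 weights for
# inference; roughly 16 bytes per parameter during training (weights + gradients
# + FP32 master weights and Adam optimizer moments).

def inference_weights_gb(num_params: float, bytes_per_param: float = 2) -> float:
    return num_params * bytes_per_param / 1e9

def training_state_gb(num_params: float) -> float:
    return num_params * (2 + 2 + 12) / 1e9  # weights + grads + optimizer state

for n in (7e9, 70e9, 1e12):
    print(f"{n:.0e} params: ~{inference_weights_gb(n):,.0f} GB for weights alone, "
          f"~{training_state_gb(n):,.0f} GB of training state")
```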

Per Commercial Times, citing industry sources, Apple’s entry into the in-house chip arena comes as no surprise, given that giants like Google and Microsoft have long been deploying in-house chips and have successively launched iterative products.

In April, Google unveiled its next-generation AI accelerator, TPU v5p, aimed at accelerating cloud-based tasks and enhancing the efficiency of online services such as search, YouTube, Gmail, Google Maps, and Google Play Store. It also aims to improve execution efficiency by integrating cloud computing with Android devices, thereby enhancing user experience.

At the end of last year, AWS introduced two in-house chips, Graviton4 and Trainium2, to strengthen energy efficiency and computational performance to meet various innovative applications of generative AI.

Microsoft also introduced the Maia chip, designed for processing OpenAI models, Bing, GitHub Copilot, ChatGPT, and other AI services.

Meta, on the other hand, completed its second-generation in-house chip, MTIA, designed for tasks related to AI recommendation systems, such as content ranking and recommendations on Facebook and Instagram.


(Photo credit: Apple)

Please note that this article cites information from The Wall Street Journal and Commercial Times.

2024-05-07

[News] Apple Allegedly Developing AI Processor for Data Centers, with TSMC as Its Foundry Partner

According to a report from Economic Daily News citing The Wall Street Journal, Apple is rumored to be developing its own AI chips tailored for data centers, which could give the world’s top smartphone seller a crucial advantage in the AI arms race. The report, quoting sources familiar with the matter, stated that Apple has been working closely with its chip manufacturing partner TSMC on the initial design and production of these chips. However, it is still unclear whether a final version has been produced yet.

It is suggested that Apple’s server chips may focus on executing AI models, particularly in AI inference, rather than AI training, where Nvidia’s chips currently dominate.

Over the past decade, Apple has gradually become a major player in chip design for products like the iPhone, iPad, Apple Watch, and Mac. The latest project involving Apple chips for data center servers, internally named “Project ACDC” (short for Apple Chips in Data Center), will bring Apple’s IC design capabilities to its server operations, sources said.

The project has been in operation for several years, though the timetable for launching this server chip remains unclear. Apple is expected to unveil more new AI products and AI-related updates at its Worldwide Developers Conference (WWDC) in June.

An Apple spokesperson declined to comment on the reported developments.

According to reports from Wccftech on April 23rd, Apple is said to be working on a self-developed AI server processor using TSMC’s 3-nanometer process, with plans for mass production expected in the second half of 2025.

Please note that this article cites information from The Wall Street Journal and Economic Daily News.

2024-05-06

[News] Apple M4 Incoming, Boosting TSMC’s 3nm Production

In a bid to seize the AI PC market opportunity, Apple is set to debut its new iPad Pro on the 7th, featuring its in-house M4 chip. With the momentum of the M4 chip’s strong debut, Apple reportedly plans to revamp its entire Mac lineup. The initial batch of M4 Macs is estimated to hit the market gradually from late this year to early next year.

A report from Commercial Times indicates that Apple’s M4 chip adopts TSMC’s N3E process, in line with Apple’s plans for a major Mac performance upgrade, which is expected to boost TSMC’s operations.

Notably, per Wccftech’s previous report, it is rumored that the N3E process is also used for producing products like the A18 Pro, the upcoming Qualcomm Snapdragon 8 Gen 4, and the MediaTek Dimensity 9400, among other major clients’ products.

Apple is set to hold an online launch event at 10 p.m. Taiwan time on May 7th. Per industry sources cited by the same report, besides introducing products such as the iPad Pro, iPad Air, and Apple Pencil, the event will mark the debut of the self-developed M4 chip, unveiling the computational capabilities of Apple’s first AI tablet.

With major computer brands and chip manufacturers competing to release AI PCs, such as Qualcomm’s Snapdragon X Elite and X Plus, and Intel introducing Core Ultra into various laptop brands, it is imperative for Apple to upgrade the performance of its products. Therefore, the strategy of highlighting AI performance through the M4 chip comes as no surprise.

According to a report by Mark Gurman of Bloomberg, the M4 chip will be integrated across Apple’s entire Mac product line. The first batch of M4 Macs is expected to debut as early as the end of this year, including new iMac models, the standard 14-inch MacBook Pro, the high-end 14-inch and 16-inch MacBook Pro, and the Mac mini. New products will follow gradually in 2025, with updates to the 13-inch and 15-inch MacBook Air in the spring, the Mac Studio in mid-year, and finally the Mac Pro.

The report from Commercial Times has claimed that the M4 chip will come in three versions: Donan, Brava, and Hidra. The Donan variant is intended for entry-level MacBook Pro, MacBook Air, and low-end Mac mini models. The Brava version is expected to be used in high-end MacBook Pro and Mac mini models, while the Hidra version will be integrated into desktop Mac Pro computers.

Apple’s plan to introduce the M4 chip into its Mac series is expected to boost the revenue of TSMC’s 3-nanometer family. The report has indicated that the M4 chip will still be manufactured using TSMC’s 3-nanometer process, but with enhancements to the neural processing engine (NPU), providing AI capabilities to Apple’s product line. Additionally, industry sources cited by the same report have revealed that the M4 will utilize TSMC’s N3E process, an improvement over the previous N3B process used in the M3 series chips.

Meanwhile, TSMC continues to advance its existing advanced process node optimization versions. Among them, the N3E variant of the 3-nanometer family, which entered mass production in the fourth quarter of last year, will be followed by N3P and N3X. Currently, N3E is highly likely to be featured in the new generation iPad Pro.

Source: TSMC


(Photo credit: Apple)

Please note that this article cites information from Commercial Times, Wccftech and Bloomberg.

2024-04-25

[News] Apple Rumored to Develop In-House AI Processor for Mass Production in the Second Half of 2025

According to reports from global media outlets like MacRumors and Wccftech on April 23rd, Apple is said to be developing its first in-house AI processor for PCs, the M4 chip, and is also working on a self-developed AI server processor using TSMC’s 3-nanometer process, with plans for mass production expected in the second half of 2025.

As per Wccftech’s report, based on the production schedule, Apple’s AI server processor might utilize TSMC’s “N3E” process. It is rumored that the N3E process is also used for producing products like the A18 Pro, the upcoming Qualcomm Snapdragon 8 Gen 4, and the MediaTek Dimensity 9400, among other major clients’ products.

Regarding this matter, a report from Economic Daily News citing sources indicates that Apple’s development of AI server processors will bring new momentum to TSMC’s advanced process orders. Assembly orders for the related AI servers are expected to be undertaken by Foxconn, making the two companies the major beneficiaries among Taiwan’s manufacturers of Apple’s aggressive push into AI.

The source referenced previous reports suggesting that Apple has secured the initial capacity of TSMC’s 3-nanometer process for at least a year. According to TSMC’s financial reports, the revenue contribution from its largest customer exceeded NTD 500 billion in 2022 and is projected to reach NTD 546.5 billion in 2023, setting a new record. The report from Economic Daily News anticipates that this largest customer is Apple.

The same report from Economic Daily News quotes industry sources who revealed that Apple has conducted extensive and highly confidential AI functionality testing, with Apple and Foxconn reportedly engaged in numerous projects and ongoing tests.

With Apple’s full-scale push into the AI field and plans to introduce AI features in this year’s new iPhone models, there are also rumors of Apple possibly launching its own developed AI chip.


(Photo credit: Apple)

Please note that this article cites information from MacRumors, Wccftech, and Economic Daily News.
