News
According to a report from Economic Daily News, while the industry is still focused on the launch of Apple’s new products featuring self-developed M4 chips, Apple is reportedly making significant investments in the development of its next-generation M5 chips to strengthen its position in the AI PC competition with more powerful Arm-architecture processors.
Notably, the report highlighted that Apple will continue to adopt TSMC’s 3nm process, increasing its orders for TSMC’s advanced processes. According to the report, the next-generation M5 chip is expected to launch as early as the second half of next year, and no later than the end of the year.
In the current wave of AI PC competition, tech giants in the x86 camp, such as Intel and AMD, have launched new processors to vie for market share. Meanwhile, Apple, the market leader in the Arm camp, is accelerating its efforts to expand its presence and continue to develop next-generation self-developed chips, as indicated by the report.
According to the report, citing industry sources, Apple’s upcoming M5 chip is expected to deliver enhanced AI performance and computing power, potentially triggering a new wave of device purchases. The report indicated that this will generate substantial foundry orders for TSMC and will also benefit Apple’s end-product partners, such as Foxconn and Quanta.
Regarding Apple’s decision not to use TSMC’s 2nm process for the M5, the report, citing industry sources, noted that this is primarily due to the high costs. However, compared to the M4, the M5 features significant advancements, as it will utilize TSMC’s 3D chip-stacking technology, known as SoIC. This approach allows for better thermal management and reduced leakage compared to traditional 2D designs, as the report pointed out.
Apart from using TSMC’s 3nm process for chip development, the report noted, citing industry sources, that Apple has actively placed orders for TSMC’s 2nm process and the first batch of production capacity of the A16 process.
According to the report, the 2nm process is expected to be introduced as early as next year in the APs for Apple’s iPhone 17 Pro and 17 Pro Max models. As for the rumored ultra-thin iPhone 17 Air model, its AP may continue to use the 3nm process family.
Regarding clients for the 2nm process, the report noted, citing comments from TSMC’s chairman C.C. Wei during a previous earnings call, that inquiries for the 2nm process are outpacing those for the 3nm process. Additionally, the A16 process is considered highly attractive for AI server applications.
Wei noted that high-performance computing (HPC) applications are increasingly moving toward chiplet designs; however, this shift will not impact adoption of the 2nm process. According to the report, current customer demand for the 2nm process exceeds that for the 3nm process, and its production capacity is expected to be higher.
According to an industry source cited by MoneyDJ, TSMC started mass production of its 3nm process in 2022, while 2nm is expected to enter volume production in 2025, indicating that the generation cycle for a node has stretched to three years.
Thus, supported by TSMC’s major clients, the contribution from 3nm will continue to rise next year and remain a key revenue driver in 2026, while the 2nm process is expected to replicate or even surpass the success of 3nm, MoneyDJ notes. According to previous market speculations, tech giants such as Apple, NVIDIA and AMD are believed to be the first batch of TSMC’s 2nm customers.
(Photo credit: Apple)
News
Apple is reportedly pushing the boundaries of AI with the upcoming iPhone 16 series, which is expected to have computational power that surpasses industry expectations. A report from Economic Daily News suggests that Apple is developing the A18 chip for this year’s iPhone 16 models, with performance potentially exceeding that of Apple’s current most powerful AI chip, the M4. This advancement means the iPhone 16 series will be more capable of running AI models on-device, adapting to various AI tasks.
While these applications are primarily aimed at high-end smartphones, the sources cited by the same report expressed optimism that TSMC and Foxconn, as part of Apple’s supply chain, are likely to benefit from this development.
The same report further cites the rumor that the A18 chip developed for the iPhone 16 series will feature a highly powerful neural engine, crucial for handling generative AI functions. To keep up with the AI trend, Apple introduced its proprietary AI application, “Apple Intelligence,” in collaboration with OpenAI at this year’s Worldwide Developers Conference (WWDC). This application is designed for high-end models like the iPhone 15 Pro and Pro Max, with hardware capabilities exceeding expectations.
The iPad Pro unveiled in May is the first device to feature the M4 chip. Compared to its predecessor, the M2, the M4 chip boasts up to a 50% increase in CPU speed. Built with TSMC’s second-generation 3-nanometer technology, the M4 chip includes Apple’s fastest neural engine to date, capable of up to 38 trillion operations per second.
If the A18 chip is equipped with an even more powerful neural engine, its computational speed will surpass that of the M4 chip. This means the iPhone 16 series will be able to run AI models locally with greater efficiency. Reportedly, TSMC is the exclusive supplier of the iPhone 16’s processors and is therefore expected to benefit from the strong demand for the A18.
On the other hand, Foxconn, historically the largest assembler of iPhones, has recently focused on high-end models. As Apple intensifies AI functionality in new devices, the market anticipates a new wave of device upgrades, enhancing Foxconn’s performance in consumer electronics in the latter half of the year.
(Photo credit: Apple)
News
On May 7 (US time), Apple launched its latest self-developed computer chip, the M4, which debuts in the new iPad Pro. The M4 reportedly boasts Apple’s fastest-ever neural engine, capable of performing up to 38 trillion operations per second, surpassing the neural processing units of any AI PC available today.
Apple stated that the neural engine, along with the next-generation machine learning accelerator in the CPU, high-performance GPU, and higher-bandwidth unified memory, makes the M4 an extremely powerful AI chip.
Internally, M4 consists of 28 billion transistors, slightly more than M3. In terms of process node, the chip is built on the second-generation 3nm technology, functioning as a system-on-chip (SoC) that further enhances the efficiency of Apple’s chips.
Reportedly, M4 utilizes the second-generation 3nm technology in line with TSMC’s previously introduced N3E process. According to TSMC, while N3E’s density isn’t as high as N3B, it offers better performance and power characteristics.
In terms of core architecture, the M4’s new CPU features up to 10 cores, comprising 4 performance cores and 6 efficiency cores, two more efficiency cores than the M3.
The new 10-core GPU builds upon the next-generation GPU architecture introduced with M3 and brings dynamic caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to the iPad for the first time. M4 significantly improves professional rendering performance in applications like Octane, now 4 times faster than the M2.
Compared to the powerful M2 in the previous iPad Pro generation, M4 boasts a 1.5x improvement in CPU performance. Whether processing complex orchestral files in Logic Pro or adding demanding effects to 4K videos in LumaFusion, M4 can enhance the performance of the entire professional workflow.
As for memory, the M4 chip adopts faster LPDDR5X, achieving a unified memory bandwidth of 120GB/s. LPDDR5X is a mid-cycle update of the LPDDR5 standard, raising the maximum data rate from LPDDR5’s 6400 MT/s to 8533 MT/s, although the memory clock speed of the M4 only reaches approximately 7700 MT/s.
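As a rough sanity check on these figures, peak memory bandwidth is simply the transfer rate multiplied by the bytes moved per transfer. The sketch below assumes a 128-bit unified memory bus, which is not stated in the article and is included here only as an assumption:

```python
def bandwidth_gbps(transfer_rate_mts: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: (transfers per second) x (bytes per transfer)."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

# Assuming a 128-bit bus: ~7700 MT/s x 16 bytes/transfer ≈ 123 GB/s,
# in line with the 120 GB/s figure quoted for the M4.
print(round(bandwidth_gbps(7700, 128)))  # → 123
```

Under that assumed bus width, the ~7700 MT/s data rate lines up closely with the quoted 120GB/s of unified memory bandwidth.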
Industry data shows that the Apple M3 supports up to 24GB of memory, but there is no further indication of whether Apple will expand this ceiling for the M4. The new iPad Pro models are equipped with 8GB or 16GB of DRAM, depending on the specific model.
The new neural engine integrated in the M4 chip has 16 cores, capable of running at 38 trillion operations per second, 60 times faster than the first neural engine in the Apple A11 Bionic chip.
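The 60x claim is roughly consistent with Apple’s published figure of about 600 billion operations per second (~0.6 TOPS) for the A11’s neural engine; that A11 number is not from the article and is used below only as an assumption:

```python
# Hypothetical comparison: the A11 figure (~0.6 TOPS) is Apple's published
# spec for that chip, not a number given in the article above.
m4_tops = 38.0
a11_tops = 0.6

speedup = m4_tops / a11_tops  # ratio of peak neural-engine throughput
print(round(speedup))  # → 63, the same ballpark as the "60 times faster" claim
```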
Additionally, M4 chip adopts a revolutionary display engine designed with cutting-edge technology, achieving astonishing precision, color accuracy, and brightness uniformity on the Ultra Retina XDR display, which combines the light from two OLED panels to create the most advanced display.
Apple’s Senior Vice President of Hardware Technologies, Johny Srouji, stated that M4’s high-efficiency performance and its innovative display engine enable the iPad Pro’s slim design and groundbreaking display. Fundamental improvements in the CPU, GPU, neural engine, and memory system make M4 a perfect fit for the latest AI-driven applications. Overall, this new chip makes the iPad Pro the most powerful device of its kind.
Currently, AI has emerged as a global focal point. Beyond markets like servers, the consumer market is embracing a new opportunity: the AI PC.
Previously, TrendForce anticipated that 2024 would mark a significant expansion in edge AI applications, leveraging the groundwork laid by AI servers and branching into AI PCs and other terminal devices. Demanding edge AI applications will shift to AI PCs, dispersing workloads from AI servers and expanding the potential scale of AI usage. However, the definition of an AI PC remains unclear.
According to Apple, the neural engine in the M4 is its most powerful to date, outperforming the neural processing unit in any AI PC available today. Tim Millet, Vice President of Apple Platform Architecture, stated that the M4 provides the same performance as the M2 while using only half the power, and matches the performance of the latest PC chips in thin-and-light laptops at only a quarter of the power consumption.
Meanwhile, frequent developments from other major players suggest an increasingly fierce competition in AI PC sector, and the industry also holds high expectations for AI PC. Microsoft regarded 2024 as the “Year of AI PC.” Based on the estimated product launch timeline of PC brand manufacturers, Microsoft predicts that half of commercial computers will be AI PCs in 2026.
Intel once emphasized that the AI PC would be a turning point for the revival of the PC industry and would play a crucial role among the industry highlights of 2024. Intel’s Pat Gelsinger previously stated at a conference that, driven by demand for AI PCs and Windows update cycles, customers continue to place additional processor orders with Intel. As such, Intel’s AI PC CPU shipments in 2024 are expected to exceed the original target of 40 million units.
TrendForce posits that AI PCs are expected to meet Microsoft’s benchmark of 40 TOPS of computational power. With new products meeting this threshold expected to ship in late 2024, significant growth is anticipated in 2025, especially following Intel’s release of its Lunar Lake CPU by the end of 2024.
The AI PC market is currently propelled by two key drivers: Firstly, demand for terminal applications, mainly dominated by Microsoft through its Windows OS and Office suite, is a significant factor. Microsoft is poised to integrate Copilot into the next generation of Windows, making Copilot a fundamental requirement for AI PCs.
Secondly, Intel, as a leading CPU manufacturer, is advocating for AI PCs that combine CPU, GPU, and NPU architectures to enable a variety of terminal AI applications.
(Photo credit: Apple)