News
As Apple keeps advancing in AI while developing its own in-house processors, industry sources indicated that the tech giant’s Chief Operating Officer (COO), Jeff Williams, recently visited TSMC and was personally received by TSMC’s President, C.C. Wei, according to a report by Economic Daily News.
The low-profile visit was reportedly made to secure TSMC’s advanced manufacturing capacity, potentially on the 2nm process, for Apple’s in-house AI chips.
Apple has been collaborating with TSMC for many years on the A-series processors used in iPhones. In recent years, Apple initiated the long-term Apple Silicon project, creating the M-series processors for MacBook and iPad, with Williams playing a key role. Thus, his recent visit to Taiwan has garnered significant industry attention.
Apple did not respond to the rumor. TSMC, on the other hand, has maintained its usual stance, not commenting on market speculations related to specific customers.
According to an earlier report from The Wall Street Journal, Apple has been working closely with TSMC to design and produce its own AI chips tailored for data centers at this initial stage. It is suggested that Apple’s server chips may focus on running AI models, particularly AI inference, rather than AI training, where NVIDIA’s chips currently dominate.
Also, in a bid to seize the AI PC market opportunity, Apple’s new iPad Pro launched in early May has featured its in-house M4 chip. In an earlier report by Wccftech, Apple’s M4 chip adopts TSMC’s N3E process, aligning with Apple’s plans for a major performance upgrade for Mac.
In addition to Apple, amid flourishing AI applications, TSMC has also reportedly been working closely with two other major AI players, NVIDIA and AMD. According to the Economic Daily News, they have secured TSMC’s advanced packaging capacity for CoWoS and SoIC through this year and the next, bolstering TSMC’s AI-related business orders.
(Photo credit: TSMC)
News
Industry sources cited by a report from Economic Daily News have indicated that Apple is accelerating the development of its foldable device, moving up the expected launch from 2026 to 2025. Apple has reportedly placed orders for flexible panels from Samsung, with plans for the foldable device to debut with the iPad before expanding to the iPhone.
Moreover, the smartphone market leader is said to have already secured a supply of flexible panels from Samsung in the first half of this year, hinting at its determination to enter the foldable market.
Hinges are expected to be the most crucial newly added component for Apple’s foldable device, and demand for them is set to surge. Shin Zu Shing, a Taiwanese supplier of foldable smartphone hinges that has cooperated with Apple for many years, stands to benefit greatly.
In addition, other Taiwanese Apple supply chain partners, including Foxconn, Largan Precision, and Pegatron, are anticipated to benefit, as they do from existing iPad and iPhone production. The aforementioned Apple suppliers typically refrain from commenting on individual customers and order dynamics.
A report from SamMobile also indicated that Apple may have signed a contract with Samsung Display (SDC) for the supply of foldable displays. The same report estimates that limited supplies will begin in 2025, ramping up to mass production in 2026. By 2027, supply is expected to reach 65 million units, increasing to 100 million units in 2028.
Additionally, the ordered display sizes are larger than those of existing iPhones, indicating that the display components procured by Apple from Samsung will be used in new foldable device products.
Industry sources cited in the report from Economic Daily News believe that Apple’s first foldable device will be unveiled by the end of 2025 or early 2026, targeting the ultra-high-end market segment. It is expected to come in two sizes: 7.9 inches and 8.3 inches, competing against foldable devices from Samsung and Huawei.
According to an analysis released by TrendForce in the second half of last year, Apple’s development in the foldable field still requires time; its foray into foldables has been tepid, to say the least.
TrendForce reports that global shipments of foldable phones reached 15.9 million units in 2023, marking a 25% YoY increase and accounting for approximately 1.4% of the overall smartphone market. In 2024, shipments are expected to rise to about 17.7 million units, growing by 11% and slightly increasing the market share to 1.5%. However, this growth rate remains below market expectations, with the segment’s share predicted to exceed 2% only by 2025.
(Photo credit: Apple)
News
As a strategic technology empowering a new round of technological revolution and industrial transformation, AI has become one of the key driving forces for the development of new industrialization. Fueled by the ChatGPT craze, AI and its applications are rapidly gaining traction worldwide. From an industrial perspective, NVIDIA currently holds almost absolute dominance in the AI chip market.
Meanwhile, major tech companies such as Google, Microsoft, and Apple are actively joining the competition, scrambling to seize the opportunity. Meta, Google, Intel, and Apple have launched the latest AI chips in hopes of reducing reliance on companies like NVIDIA. Microsoft and Samsung have also reportedly made investment plans for AI development.
Recently, according to multiple global media reports, Microsoft is developing a new AI mega-model called MAI-1. This model far exceeds some of Microsoft’s previously released open-source models in scale and is expected to rival well-known large models like Google’s Gemini 1.5, Anthropic’s Claude 3, and OpenAI’s GPT-4 in terms of performance. Reports suggest that Microsoft may demonstrate MAI-1 at the upcoming Build developer conference.
In response to the growing demand for AI computing, Microsoft recently announced a plan to invest billions of dollars in building AI infrastructure in Wisconsin. Microsoft stated that this move will create 2,300 construction jobs and up to 2,000 data center jobs once construction is complete.
Furthermore, Microsoft will establish a new AI lab at the University of Wisconsin-Milwaukee to provide AI technology training.
Microsoft’s US investment plan amounts to USD 3.3 billion, which, together with the AI-related investments it previously announced in Japan, Indonesia, Malaysia, and Thailand, totals over USD 11 billion.
According to Microsoft’s recent announcements, it plans to invest USD 2.9 billion over the next two years to enhance its cloud computing and AI infrastructure in Japan; USD 1.7 billion over the next four years to expand cloud services and AI in Indonesia, including building data centers; USD 2.2 billion over the next four years in cloud computing and AI in Malaysia; and USD 1 billion to set up its first data center in Thailand, dedicated to providing AI skills training for over 100,000 people.
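As a quick sanity check, the commitments above can be tallied; a minimal sketch, with amounts taken from the reports cited here rather than independently verified:

```python
# Sum Microsoft's reported AI infrastructure commitments (USD billions).
# Figures are those quoted in the article, not independently verified.
investments = {
    "United States (Wisconsin)": 3.3,
    "Japan": 2.9,
    "Indonesia": 1.7,
    "Malaysia": 2.2,
    "Thailand": 1.0,
}

total = sum(investments.values())
print(f"Total: USD {total:.1f} billion")  # Total: USD 11.1 billion
```

The sum comes to USD 11.1 billion, consistent with the “over USD 11 billion” figure reported above.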
Apple has also unveiled its first AI chip, the M4. Apple stated that the neural engine in the M4 chip is the most powerful the company has ever developed, outstripping any neural processing unit in current AI PCs. Apple further emphasized that it will “break new ground” in generative AI this year, bringing transformative opportunities to users.
According to a report from The Wall Street Journal, Apple has been working on its own chips designed to run AI software on data center servers. Sources cited in the report revealed that the internal codename for the server chip project is ACDC (Apple Chips in Data Center). The report indicates that the ACDC project has been underway for several years, but it’s currently uncertain whether this new chip will be commissioned and when it might hit the market.
Tech journalist Mark Gurman also suggests that Apple will introduce AI capabilities in the cloud this year using its proprietary chips. Gurman’s sources indicate that Apple intends to deploy high-end chips (similar to those designed for the Mac) in cloud computing servers to handle cutting-edge AI tasks from Apple devices. Simpler AI-related functions will continue to be processed directly by the chips embedded in iPhone, iPad, and Mac devices.
As per industry sources cited by South Korean media outlet ZDNet Korea, Samsung Electronics’ AI inference chip, Mach-1, is set to begin prototype production using a multi-project wafer (MPW) approach and is expected to be based on Samsung’s in-house 4nm process.
Previously, at a shareholder meeting, Samsung revealed its plan to launch a self-developed AI accelerator chip, Mach-1, in early 2025. A critical step in Samsung’s AI development strategy, the Mach-1 chip is an AI inference accelerator built on an application-specific integrated circuit (ASIC) design and equipped with LPDDR memory, making it particularly suitable for edge computing applications.
Kyung Kye-hyun, head of Samsung Electronics’ DS (semiconductor) division, stated that the development goal of this chip is to use algorithms to reduce the data bottleneck between off-chip memory and computing chips to 1/8, while also achieving an eight-fold improvement in efficiency. He noted that the Mach-1 chip design has been verified on field-programmable gate arrays (FPGAs) and is currently in the physical implementation stage as a system-on-chip (SoC), which is expected to be ready in late 2024, with a Mach-1-driven AI system to be launched in early 2025.
In addition to developing the Mach-1 AI chip, Samsung has established a dedicated research lab in Silicon Valley focusing on artificial general intelligence (AGI) research. The intention is to develop new processors and memory technologies capable of meeting the processing requirements of future AGI systems.
(Photo credit: Pixabay)
News
On May 7 (US time), Apple launched its latest self-developed computer chip, the M4, which debuts in the new iPad Pro. The M4 reportedly boasts Apple’s fastest-ever neural engine, capable of performing up to 38 trillion operations per second, surpassing the neural processing unit of any AI PC available today.
Apple stated that the neural engine, along with the next-generation machine learning accelerator in the CPU, high-performance GPU, and higher-bandwidth unified memory, makes the M4 an extremely powerful AI chip.
Internally, M4 consists of 28 billion transistors, slightly more than M3. In terms of process node, the chip is built on the second-generation 3nm technology, functioning as a system-on-chip (SoC) that further enhances the efficiency of Apple’s chips.
Reportedly, M4 utilizes the second-generation 3nm technology in line with TSMC’s previously introduced N3E process. According to TSMC, while N3E’s density isn’t as high as N3B, it offers better performance and power characteristics.
In terms of core architecture, the M4’s new CPU features up to 10 cores, comprising 4 performance cores and 6 efficiency cores, two more efficiency cores than the M3.
The new 10-core GPU builds upon the next-generation GPU architecture introduced with M3 and brings dynamic caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to the iPad for the first time. M4 significantly improves professional rendering performance in applications like Octane, now 4 times faster than the M2.
Compared to the powerful M2 in the previous iPad Pro generation, M4 boasts a 1.5x improvement in CPU performance. Whether processing complex orchestral files in Logic Pro or adding demanding effects to 4K videos in LumaFusion, M4 can enhance the performance of the entire professional workflow.
As for memory, the M4 chip adopts faster LPDDR5X, achieving a unified memory bandwidth of 120GB/s. LPDDR5X is a mid-cycle update of the LPDDR5 standard, raising speeds beyond LPDDR5’s 6400 MT/s ceiling. LPDDR5X currently reaches up to 8533 MT/s, although the M4’s memory runs at only approximately 7700 MT/s.
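The quoted bandwidth lines up with the reported transfer rate if one assumes a 128-bit unified memory bus (an assumption, as Apple does not publish the bus width), since bandwidth is simply the transfer rate multiplied by the bus width in bytes:

```python
# Back-of-envelope check of the M4's quoted ~120 GB/s unified memory bandwidth.
# The 128-bit bus width is an assumption, not an Apple-published figure.
transfer_rate = 7700e6      # reported effective speed: 7700 MT/s
bus_width_bits = 128        # assumed unified-memory bus width
bytes_per_transfer = bus_width_bits // 8

bandwidth_gb_s = transfer_rate * bytes_per_transfer / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # 123.2 GB/s, close to the quoted 120 GB/s
```

Under that assumption the arithmetic yields roughly 123 GB/s, consistent with the 120GB/s figure Apple quotes.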
Industry data shows that the Apple M3 features up to 24GB of memory, but there is no further indication of whether Apple will expand the maximum memory capacity. The new iPad Pro models will be equipped with 8GB or 16GB of DRAM, depending on the specific model.
The new neural engine integrated in the M4 chip has 16 cores and is capable of 38 trillion operations per second, 60 times faster than the first neural engine on the Apple A11 Bionic chip.
Additionally, the M4 chip adopts a new display engine designed with cutting-edge technology, achieving remarkable precision, color accuracy, and brightness uniformity on the Ultra Retina XDR display, which combines the light from two OLED panels to create what Apple calls its most advanced display.
Apple’s Senior Vice President of Hardware Technologies, Johny Srouji, stated that M4’s high-efficiency performance and its innovative display engine enable the iPad Pro’s slim design and groundbreaking display. Fundamental improvements in the CPU, GPU, neural engine, and memory system make M4 a perfect fit for the latest AI-driven applications. Overall, this new chip makes the iPad Pro the most powerful device of its kind.
Currently, AI has emerged as a superstar worldwide. Apart from markets like servers, the consumer market is embracing a new opportunity: the AI PC.
Previously, TrendForce anticipated that 2024 would mark a significant expansion in edge AI applications, leveraging the groundwork laid by AI servers and branching into AI PCs and other terminal devices. Demanding edge AI applications will shift to AI PCs, dispersing the workload of AI servers and expanding the potential scale of AI usage. However, the definition of an AI PC remains unclear.
According to Apple, the neural engine in the M4 is its most powerful to date, outperforming the neural processing unit in any AI PC available today. Tim Millet, Vice President of Apple Platform Architecture, stated that the M4 provides the same performance as the M2 while using only half the power. Compared to the latest PC chips for lightweight laptops, the M4 delivers the same performance at only a quarter of the power consumption.
Meanwhile, frequent moves by other major players suggest increasingly fierce competition in the AI PC sector, and the industry holds high expectations for AI PCs. Microsoft has dubbed 2024 the “Year of the AI PC.” Based on the estimated product launch timelines of PC brand manufacturers, Microsoft predicts that half of commercial computers will be AI PCs in 2026.
Intel has emphasized that the AI PC will be a turning point for the revival of the PC industry and will play a crucial role among the industry highlights of 2024. Intel’s Pat Gelsinger previously stated at a conference that, driven by demand for AI PCs and Windows update cycles, customers continue to add processor orders, and Intel’s AI PC CPU shipments in 2024 are expected to exceed the original target of 40 million units.
TrendForce posits that AI PCs must meet Microsoft’s benchmark of 40 TOPS of computational power. With new products meeting this threshold expected to ship in late 2024, significant growth is anticipated in 2025, especially following Intel’s release of its Lunar Lake CPU by the end of 2024.
The AI PC market is currently propelled by two key drivers: Firstly, demand for terminal applications, mainly dominated by Microsoft through its Windows OS and Office suite, is a significant factor. Microsoft is poised to integrate Copilot into the next generation of Windows, making Copilot a fundamental requirement for AI PCs.
Secondly, Intel, as a leading CPU manufacturer, is advocating for AI PCs that combine CPU, GPU, and NPU architectures to enable a variety of terminal AI applications.
(Photo credit: Apple)
News
With the skyrocketing demand for AI, cloud service providers (CSPs) are hastening the development of in-house chips. Apple, making a surprising move, is actively developing a data center-grade chip codenamed “Project ACDC,” signaling its foray into the realm of AI accelerators for servers.
As per a report from global media The Wall Street Journal, Apple is developing an AI accelerator chip for data center servers under the project name “Project ACDC.” Sources familiar with the matter revealed that Apple is closely collaborating with TSMC, but the timing of the new chip’s release remains uncertain.
Industry sources cited by a report from Commercial Times disclosed that Apple’s AI accelerator chip will be developed using TSMC’s 3-nanometer process. Servers equipped with this chip are expected to debut next year, further enhancing the performance of Apple’s data centers and future cloud-based AI tools.
Industry sources cited in Commercial Times‘ report reveal that cloud service providers (CSPs) frequently choose TSMC’s 5 and 7-nanometer processes for their in-house chip development, capitalizing on TSMC’s mature advanced processes to enhance profit margins. Additionally, the same report also highlights that major industry players including Microsoft, AWS, Google, Meta, and Apple rely on TSMC’s advanced processes and packaging, which significantly contributes to the company’s performance.
Apple has consistently been an early adopter of TSMC’s most advanced processes, relying on their stability and technological leadership. Apple’s adoption of the 3-nanometer process and CoWoS advanced packaging next year is deemed the most reasonable approach, which would also help boost TSMC’s 3-nanometer capacity utilization.
Generative AI models are rapidly evolving, enabling businesses and developers to address complex problems and discover new opportunities. However, large-scale models with billions or even trillions of parameters pose more stringent requirements for training, tuning, and inference.
Industry sources cited by Commercial Times note that Apple’s entry into the in-house chip arena comes as no surprise, given that giants like Google and Microsoft have long been deploying in-house chips and have successively launched iterative products.
In April, Google unveiled its next-generation AI accelerator, TPU v5p, aimed at accelerating cloud-based tasks and enhancing the efficiency of online services such as search, YouTube, Gmail, Google Maps, and Google Play Store. It also aims to improve execution efficiency by integrating cloud computing with Android devices, thereby enhancing user experience.
At the end of last year, AWS introduced two in-house chips, Graviton4 and Trainium2, to strengthen energy efficiency and computational performance to meet various innovative applications of generative AI.
Microsoft also introduced the Maia chip, designed for processing OpenAI models, Bing, GitHub Copilot, ChatGPT, and other AI services.
Meta, on the other hand, completed its second-generation in-house chip, MTIA, designed for tasks related to AI recommendation systems, such as content ranking and recommendations on Facebook and Instagram.
(Photo credit: Apple)