AI PC


2024-06-07

[News] Qualcomm Reportedly Targets Data Centers as Its Next Step, Expecting Products to Adopt Nuvia

Last year, Qualcomm entered the PC market, sparking an AI PC frenzy in collaboration with Microsoft Copilot+. According to Qualcomm CEO Cristiano Amon, beyond mobile devices, PCs, and automotive applications, Qualcomm is now focusing on data centers. In the long term, these data center products are expected to adopt Qualcomm’s in-house-developed Nuvia architecture.

Amon pointed out that PCs are entering a new cycle in which AI engines bring new experiences. Just as mobile phones must be slim yet avoid overheating or becoming bulky, Qualcomm has always focused on technological innovation rather than merely improving power consumption. While traditional PC leaders may emphasize TOPS (trillions of operations per second), energy efficiency is equally crucial.
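
As a rough illustration of the TOPS-versus-efficiency trade-off Amon describes, the sketch below compares two hypothetical NPUs by TOPS per watt; all figures are invented for illustration and are not vendor-confirmed numbers.

```python
# Hypothetical comparison of raw TOPS versus efficiency (TOPS per watt).
# All numbers are illustrative placeholders, not vendor specifications.
npus = {
    "Chip A (higher peak TOPS)": {"tops": 60, "watts": 15},
    "Chip B (lower peak TOPS)": {"tops": 45, "watts": 5},
}

for name, spec in npus.items():
    efficiency = spec["tops"] / spec["watts"]  # TOPS per watt
    print(f"{name}: {spec['tops']} TOPS at {spec['watts']} W -> {efficiency:.1f} TOPS/W")

# A chip with a lower peak TOPS figure can still deliver more AI work per watt,
# which is what matters most for battery-powered laptops.
```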

Amon stressed the importance of maintaining battery life and integrating functionalities beyond the CPU and GPU, which he believes will be key to defining leadership in the PC market. He also joked that an x86 computer would run out of battery quickly, whereas a new computer (an AI PC) bought next year would last a long time without draining its battery.

Amon noted that Qualcomm’s Snapdragon X Elite and Snapdragon X Plus have been developed with superior NPU performance and battery life. Moreover, Snapdragon X Elite is only the first generation and focuses on performance leadership; upcoming generations may put more emphasis on computational power and on integrating these capabilities into the chip design.

Currently, more than 20 AI PCs equipped with Snapdragon X Elite and Snapdragon X Plus have been launched, including models from 7 OEMs such as Acer, Asus, Dell, and HP.

Amon believes that market penetration will continue to increase next year. He sees AI PCs as a new opportunity, though he suggested that broad adoption may take some time until a new version of Windows for the PC market emerges. However, with Windows 10 support ending, users can transition to new Copilot+ models, which he believes will accelerate adoption considerably.

Amon pointed out that NPUs have already demonstrated their advantages in the PC and automotive chip industries, and these capabilities can be extended to data centers or other technologies.

He then highlighted data centers as a significant opportunity for a transition to the Arm architecture and expressed belief in increased opportunities for edge computing in the future. Amon also mentioned the adoption of the Nuvia architecture in smartphones, data centers, and the automotive industry. Additionally, he disclosed plans to launch mobile products featuring Microsoft processors at the October Snapdragon Annual Summit.

Read more

(Photo credit: Qualcomm)

Please note that this article cites information from TechNews.

2024-06-04

[News] Intel CEO Gelsinger Unveils Lunar Lake Processors, Giving Credit to TSMC

Intel CEO Pat Gelsinger delivered a keynote speech at COMPUTEX Taipei earlier today, unveiling the next-generation client architecture set to launch this year. According to a report from CNA, he expressed gratitude to TSMC for collaborating on the development of the Lunar Lake processors, intended for the next generation of AI PCs. Currently, there are over 80 designs from 20 manufacturers.

Previously, at the IFS Direct Connect event in San Jose, USA, Gelsinger pointed out in an interview that two generations of CPU tiles would be manufactured using TSMC’s N3B process, marking the official arrival of Intel CPU orders for laptop platforms.

Gelsinger’s interview confirms that Intel has indeed expanded its outsourcing orders to TSMC. Currently, TSMC is responsible for producing the CPU, GPU, and NPU tiles for Intel’s Arrow Lake and Lunar Lake platforms.

As per CNA’s report, Gelsinger announced the launch of the Xeon 6 platform and processor family designed to meet the demands of data centers, as well as the Gaudi AI accelerator. He also unveiled details of the Lunar Lake processor architecture.

As the flagship processor for the next generation of AI PCs, Lunar Lake significantly enhances graphics and AI processing, reducing system-on-chip power consumption by 40% and providing over three times the AI computing capability. Lunar Lake processors are expected to start shipping in the third quarter of this year.

Additionally, Intel plans to ship over 40 million Core Ultra processors this year, further solidifying its position in the AI PC field.

Gelsinger noted that in the early days, having 100,000 transistors on a chip was remarkable; today a single chip already carries 1 billion transistors, with the potential to reach even 1 trillion in the future.
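
For context, a quick back-of-envelope calculation shows how many doublings separate those milestones; the two-year doubling cadence used at the end is an assumption for illustration, not something Gelsinger specified.

```python
import math

# Transistor-count milestones mentioned in the speech.
early_chip = 100_000              # ~100,000 transistors on an early chip
current_chip = 1_000_000_000      # ~1 billion transistors cited for today
future_chip = 1_000_000_000_000   # ~1 trillion transistors as a future goal

doublings_so_far = math.log2(current_chip / early_chip)
doublings_to_go = math.log2(future_chip / current_chip)

print(f"Doublings from 100K to 1B transistors: {doublings_so_far:.1f}")  # ~13.3
print(f"Doublings from 1B to 1T transistors:  {doublings_to_go:.1f}")    # ~10.0
# At an idealized two-year doubling cadence, ten more doublings would take
# roughly twenty years.
```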

Contrary to what NVIDIA CEO Jensen Huang recently described in his speech, Gelsinger further indicated that Moore’s Law is alive and well, and that Taiwan continues to play a core role.

According to Gelsinger, Intel has been operating in Taiwan since 1985 and will enter its 40th year of operation next year. The partnership between Intel and Taiwan spans 39 years, and the combined initials of Intel (I) and Taiwan (T) stand for information technology (IT). Together, Intel and its Taiwanese partners can change the world once again.

Read more

(Photo credit: Intel)

Please note that this article cites information from CNA and Intel.

2024-05-22

[News] Intel’s Lunar Lake Bundled Memory Reportedly Causes Uproar in the PC Supply Chain

On May 20th, Intel announced that the release date for its next-generation processor, Lunar Lake, has been moved up, with official shipments expected in the third quarter. The NPU performance is set to reach 45 TOPS. However, per a report from Economic Daily News, the industry is puzzled by the fact that this chip is bundled with 16GB or 32GB of memory, with Intel keeping tight control over the specifications. Reportedly, this move has disrupted the established industry order, and PC manufacturers are said to be privately expressing their dissatisfaction.

It is expected that 20 brands will release 80 models featuring this processor. Combined shipments of Meteor Lake and Lunar Lake this year are projected to reach 40 million units. Unlike the previous generation, Lunar Lake’s packaging design integrates LPDDR5X memory into a single package, emphasizing low power consumption.

On May 20th, Microsoft launched its next-generation AI PCs, equipped with a more powerful AI assistant, Copilot, and new features. It also established a new standard for AI PC architecture, “Copilot+ PC.” The initial products all feature Qualcomm’s “Snapdragon X Elite” processors designed with Arm architecture.

Qualcomm’s CPUs in the new PCs are equipped with a Neural Processing Engine (NPE) designed specifically for AI applications, boasting 45 TOPS. This, as per another report from the Economic Daily News, results in a 58% increase in speed and extended battery life compared to Apple’s latest top-tier MacBook, which uses the M3 chip. Additionally, they support Microsoft’s AI chatbot, Copilot.

Intel, on the other hand, made a rare announcement, revealing that its next-generation Lunar Lake will have a total performance exceeding 100 TOPS, with the NPU alone exceeding 45 TOPS—nearly three times that of the previous generation. Additionally, the CPU and GPU combined computing power will exceed 60 TOPS, making it the second qualified processor for Microsoft’s Copilot+ PC platform.

However, it is important to note that according to Intel’s plans, the new generation processors Ultra 5/7/9 will be bundled with memory and shipped together with the CPU. Specifically, the high-end Ultra 9 will be bundled with 32GB of memory, while the Ultra 5 and Ultra 7 will have 16GB and 32GB versions. Per Microsoft’s recommendations, AI PCs need at least 16GB of memory. While Intel’s approach meets this requirement, it limits the ability of brands to adjust specifications and leaves memory manufacturers out of the loop.

In simpler terms, there is still a demand for 8GB memory in lower-end notebooks, and high-end laptops can require more than 64GB of memory. However, Intel’s Lunar Lake constraints make it difficult to plan both high-end and entry-level versions. Industry sources cited in the same report from Economic Daily News indicate that Intel’s next-generation Arrow Lake will not be bundled with memory.

Reportedly, industry sources also state that procurement contracts with memory suppliers have traditionally been long-term, accounting for annual memory requirements. Now, Intel’s bundling of memory with its single platform changes the industry’s ecosystem. Previously, PC brands would develop various combinations (CPU + memory + SSD capacity) for their product lines. However, with Intel defining five laptop CPU + memory specifications, it limits the customization capabilities of PC brands.
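
To make the constraint concrete, the five bundled combinations implied by the report can be enumerated as follows; this is a sketch inferred from the article, not an official Intel SKU table.

```python
# The five CPU + bundled-memory combinations implied by the report.
# This enumeration is inferred from the article, not an official Intel SKU list.
bundles = [
    ("Core Ultra 9", 32),
    ("Core Ultra 7", 16),
    ("Core Ultra 7", 32),
    ("Core Ultra 5", 16),
    ("Core Ultra 5", 32),
]

for cpu, memory_gb in bundles:
    print(f"{cpu} + {memory_gb}GB on-package LPDDR5X")

print(f"Fixed CPU + memory configurations: {len(bundles)}")  # 5
```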

With Intel launching Lunar Lake early, AMD is set to counter in the fourth quarter with its next-generation Ryzen AI processor series, codenamed Strix Point. The Strix Point processors will feature AI processing power exceeding 50 TOPS, and an APU, Strix Halo, is expected to launch around the end of the year with performance exceeding 60 TOPS, making it a significant player in AI computing.

Intel CEO Pat Gelsinger recently demonstrated the performance of the Lunar Lake processor, emphasizing that its total AI workload capability exceeds 100 TOPS, with the NPU contributing 45 TOPS. The CPU features Lion Cove architecture P-cores and Skymont architecture E-cores, while the GPU and CPU together provide over 60 TOPS of computing power. This means the chip’s AI performance will be more than three times that of Intel’s current products.

Read more

(Photo credit: Intel)

Please note that this article cites information from Intel and Economic Daily News.

2024-05-09

[COMPUTEX 2024] The Rise of Generative AI Sparks Innovation across Industries, with Taiwan-based Companies Leading as Essential Partners in the Global Supply Chain

“The Dawn of Generative AI Has Come!” This new chapter in the course of human technological evolution was first proclaimed by NVIDIA’s founder, Jensen Huang. Qualcomm’s CEO, Cristiano Amon, also shares this optimism regarding generative AI. Amon believes the technology is rapidly evolving and being adopted for applications such as mobile devices, and it has the potential to radically transform the landscape of the smartphone industry. Similarly, Intel has declared the arrival of the “AI PC” era, signaling a major shift in computing-related technologies and applications.

COMPUTEX 2024, the global showcase of AIoT and startup innovations, will run from June 4th to June 7th. This year’s theme, ‘Connecting AI’, aligns perfectly with the article’s focus on the transformative power of Generative AI and Taiwan’s pivotal role in driving innovation across industries.

This year, AI is transitioning from cloud computing to on-device computing. Various “AI PCs” and “AI smartphones” are being introduced to the market, offering a wide range of options. 2024 is even being referred to as the “Year of the AI PC,” with brands such as Asus, Acer, Dell, Lenovo, and LG actively releasing new products to capture market share. With the rapid rise of AI PCs and AI smartphones, revolutionary changes are expected in workplaces and people’s daily lives. Furthermore, the PC and smartphone industries are also expected to be reinvigorated with new sources of demand.

An AI PC refers to a laptop (notebook) computer capable of performing on-device AI computations. Its main difference from regular office or business laptops lies in its CPU, which includes an additional neural processing unit (NPU). Examples of AI CPUs include Intel’s Core Ultra series and AMD’s Ryzen 8040 series. Additionally, AI PCs come with more DRAM to meet the demands of AI computations, thereby supporting related applications like those involving machine learning.
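
As a minimal sketch of what on-device AI computation can look like in practice, the snippet below targets an Intel NPU through the OpenVINO runtime; the model path is a placeholder, the example assumes OpenVINO with NPU drivers installed, and other chip vendors expose their NPUs through different runtimes.

```python
# Minimal sketch: running a model on a laptop NPU through OpenVINO.
# Assumes an OpenVINO runtime with Intel NPU drivers installed;
# "model.xml" is a placeholder path to an OpenVINO IR model.
import numpy as np
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")
compiled = core.compile_model(model, device_name="NPU")  # fall back to "CPU" if no NPU is present

# Build a dummy input matching the model's expected shape and run inference.
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print("Output shape:", result.shape)
```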

Microsoft’s role is crucial in this context, as the company has introduced a conversational AI assistant called “Copilot” that aims to seamlessly integrate itself into various tasks, such as working on Microsoft Office documents, video calls, web browsing, and other forms of collaborative activities. With Copilot, it is now possible to add a direct shortcut button for AI on the keyboard, allowing PC users to experience a holistic collaborative relationship with AI.

In the future, various computer functions will continue to be optimized with AI. Moreover, barriers that existed for services such as ChatGPT, which still require an internet connection, are expected to disappear. Hence, AI-based apps on PCs could one day be run offline. Such a capability is also one of the most eagerly awaited features among PC users this year.

Surging Development of LLMs Worldwide Has Led to a Massive Increase in AI Server Shipments

AI-enabled applications are not limited to PCs and smartphones. For example, an increasing number of cloud companies have started providing services that leverage AI in various domains, including passenger cars, household appliances, home security devices, wearable devices, headphones, cameras, speakers, TVs, etc. These services often involve processing voice commands and answering questions using technologies like ChatGPT. Going forward, AI-enabled applications will become ubiquitous in people’s daily lives.

Not to be overlooked is the fact that, as countries and multinational enterprises continue to develop their large language models (LLMs), the demand for AI servers will increase and thus promote overall market growth. Furthermore, edge AI servers are expected to become a major growth contributor in the future as well. Small-sized businesses are more likely to use LLMs that are more modest in scale for various applications. Therefore, they are more likely to consider adopting lower-priced AI chips that also offer excellent cost-to-performance ratios.

TrendForce projects that shipments of AI servers, including models equipped with GPUs, FPGAs, and ASICs, will reach 1.655 million units in 2024, marking a growth of 40.2% compared with the 2023 figure. Furthermore, the share of AI servers in the overall server shipments for 2024 is projected to surpass 12%.
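
As a quick sanity check on these projections, the implied 2023 baseline and the implied size of the overall server market can be back-calculated from the cited figures; the rounding is approximate and only the numbers quoted above are used.

```python
# Back-of-envelope check on the cited TrendForce figures.
ai_servers_2024 = 1_655_000   # projected 2024 AI server shipments (units)
growth_rate = 0.402           # 40.2% year-over-year growth
ai_share_2024 = 0.12          # AI servers projected to surpass 12% of all server shipments

implied_2023_shipments = ai_servers_2024 / (1 + growth_rate)
implied_total_servers_2024 = ai_servers_2024 / ai_share_2024

print(f"Implied 2023 AI server shipments: ~{implied_2023_shipments:,.0f} units")  # ~1.18 million
print(f"Implied 2024 total server shipments at a 12% share: "
      f"~{implied_total_servers_2024:,.0f} units")                                # ~13.8 million
```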

Regarding the development of AI chips in the current year of 2024, the focus is on the competition among the B100, MI300, and Gaudi series respectively released by NVIDIA, AMD, and Intel. Apart from these chips, another significant highlight of this year is the emergence of in-house designed chips or ASICs from cloud service providers.

In addition to AI chips, the development of AI on PCs and smartphones is certainly another major driving force behind the technology sector in 2024. In the market for CPUs used in AI PCs, Intel’s Core Ultra series and AMD’s Ryzen 8000G series are expected to make a notable impact. The Snapdragon X Elite from Qualcomm has also garnered significant attention as it could potentially alter the competitive landscape in the near future.

Turning to the market for SoCs used in AI smartphones, the fierce competition between Qualcomm’s Snapdragon 8 Gen 3 and MediaTek’s Dimensity 9300 series is a key indicator. Another development that warrants attention is the adoption of AI chips in automotive hardware, such as infotainment systems and advanced driver assistance systems. The automotive market is undoubtedly one of the main battlegrounds among chip suppliers this year.

The supply chain in Taiwan has played a crucial role in providing the hardware that supports the advancement of AI-related technologies. When looking at various sections of the AI ecosystem, including chip manufacturing as well as the supply chains for AI servers and AI PCs, Taiwan-based companies have been important contributors.

Taiwan-based Companies in the Supply Chain Stand Ready for the Coming Wave of AI-related Demand

In the upstream of the supply chain, semiconductor foundries and OSAT providers such as TSMC, UMC, and ASE have always been key suppliers. As for ODMs or OEMs, companies including Wistron, Wiwynn, Inventec, Quanta, Gigabyte, Supermicro, and Foxconn Industrial Internet have become major participants in the supply chains for AI servers and AI PCs.

In terms of components, AI servers are notable for having a power supply requirement that is 2-3 times greater than that of general-purpose servers. The power supply units used in AI servers are also required to offer specification and performance upgrades. Turning to AI PCs, they also have higher demands for both computing power and energy consumption. Therefore, advances in the technologies related to power supply units represent a significant indicator this year with respect to the overall development of AI servers and AI PCs. Companies including Delta Electronics, LITE-ON, AcBel Polytech, CWT, and Chicony are expected to make important contributions to the upgrading and provisioning of power supply units.

Also, as computing power increases, heat dissipation has become a pressing concern for hardware manufacturers looking to further enhance their products. The advancements in heat dissipation made by solution providers such as Sunon, Auras, AVC, and FCN during this year will be particularly noteworthy.

Besides the aforementioned companies, Taiwan is also home to numerous suppliers for other key components related to AI PCs. The table below lists notable component providers operating on the island.

With the advent of generative AI, the technology sector is poised for a boom across its various domains. From AI PCs to AI smartphones and a wide range of smart devices, this year’s market for electronics-related technologies is characterized by diversity and innovation. Taiwan’s supply chain plays a vital role in the development of AI PCs and AI servers, including chips, components, and entire computing systems. As competition intensifies in the realm of LLMs and AI chips, this entire market is expected to encounter more challenges and opportunities.

Join the AI grand event at Computex 2024, alongside CEOs from AMD, Intel, Qualcomm, and ARM. Discover more about this expo! https://bit.ly/44Gm0pK

(Photo credit: Qualcomm)

2024-05-08

[News] Apple Unveiled M4 Chip for AI, Heralding a New Era of AI PC

On May 7 (US time), Apple launched its latest self-developed computer chip, M4, which is integrated into the new iPad Pro as its debut platform. M4 allegedly boasts Apple’s fastest-ever neural engine, capable of performing up to 38 trillion operations per second, surpassing the neural processing units of any AI PC available today.

Apple stated that the neural engine, along with the next-generation machine learning accelerator in the CPU, high-performance GPU, and higher-bandwidth unified memory, makes the M4 an extremely powerful AI chip.

  • Teardown of M4 Chip

Internally, M4 consists of 28 billion transistors, slightly more than M3. In terms of process node, the chip is built on the second-generation 3nm technology, functioning as a system-on-chip (SoC) that further enhances the efficiency of Apple’s chips.

Reportedly, M4 utilizes the second-generation 3nm technology in line with TSMC’s previously introduced N3E process. According to TSMC, while N3E’s density isn’t as high as N3B, it offers better performance and power characteristics.

In terms of core architecture, the M4’s new CPU features up to 10 cores, comprising 4 performance cores and 6 efficiency cores, two more efficiency cores than M3.

The new 10-core GPU builds upon the next-generation GPU architecture introduced with M3 and brings dynamic caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to the iPad for the first time. M4 significantly improves professional rendering performance in applications like Octane, now 4 times faster than the M2.

Compared to the powerful M2 in the previous iPad Pro generation, M4 boasts a 1.5x improvement in CPU performance. Whether processing complex orchestral files in Logic Pro or adding demanding effects to 4K videos in LumaFusion, M4 can enhance the performance of the entire professional workflow.

As for memory, the M4 chip adopts faster LPDDR5X, achieving a unified memory bandwidth of 120GB/s. LPDDR5X is a mid-cycle update to the LPDDR5 standard, which tops out at 6400 MT/s; LPDDR5X currently reaches speeds of up to 8533 MT/s, although the memory clock speed of M4 only reaches approximately 7700 MT/s.
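
The quoted 120GB/s figure is consistent with a 128-bit unified memory interface running at roughly 7,700 MT/s, as the rough calculation below shows; note that the bus width is an assumption used for illustration rather than an Apple-published specification.

```python
# Rough check of the quoted unified memory bandwidth for M4.
# The 128-bit interface width is an assumption for illustration,
# not a figure published by Apple.
transfer_rate_mt_s = 7_700   # approximate memory transfer rate (MT/s)
bus_width_bits = 128         # assumed unified memory interface width
bytes_per_transfer = bus_width_bits / 8

bandwidth_gb_s = transfer_rate_mt_s * 1_000_000 * bytes_per_transfer / 1e9
print(f"Estimated bandwidth: ~{bandwidth_gb_s:.0f} GB/s")  # ~123 GB/s, close to the quoted 120GB/s
```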

Industry data shows that Apple’s M3 supports up to 24GB of memory, but there is no further information indicating whether Apple will raise this ceiling. The new iPad Pro models are equipped with 8GB or 16GB of DRAM, depending on the specific model.

The new neural engine integrated into the M4 chip has 16 cores and is capable of performing 38 trillion operations per second, 60 times faster than the first neural engine on the Apple A11 Bionic chip.
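
As a quick consistency check on the 60x claim, dividing M4’s stated throughput by 60 implies roughly 0.6 trillion operations per second for the A11’s neural engine, which matches the commonly cited figure; that A11 number is background knowledge, not something stated in this article.

```python
# Consistency check: M4 neural engine throughput versus the A11 Bionic's first neural engine.
m4_ops_per_second = 38e12   # 38 trillion operations per second (as stated by Apple)
speedup_vs_a11 = 60         # the "60 times faster" claim

implied_a11_ops = m4_ops_per_second / speedup_vs_a11
print(f"Implied A11 neural engine throughput: ~{implied_a11_ops / 1e12:.2f} trillion ops/s")  # ~0.63
# This lines up with the ~0.6 trillion operations per second commonly cited
# for the A11 Bionic's first-generation neural engine.
```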

Additionally, M4 chip adopts a revolutionary display engine designed with cutting-edge technology, achieving astonishing precision, color accuracy, and brightness uniformity on the Ultra Retina XDR display, which combines the light from two OLED panels to create the most advanced display.

Apple’s Senior Vice President of Hardware Technologies, Johny Srouji, stated that M4’s high-efficiency performance and its innovative display engine enable the iPad Pro’s slim design and groundbreaking display. Fundamental improvements in the CPU, GPU, neural engine, and memory system make M4 a perfect fit for the latest AI-driven applications. Overall, this new chip makes the iPad Pro the most powerful device of its kind.

  • 2024 Marks the First Year of AI PC Era

Currently, AI has emerged as a superstar worldwide. Beyond markets like servers, the consumer market is embracing a new opportunity: the AI PC.

Previously, TrendForce anticipated that 2024 would mark a significant expansion in edge AI applications, leveraging the groundwork laid by AI servers and branching into AI PCs and other terminal devices. Edge AI applications with demanding requirements will shift onto AI PCs, dispersing the workload of AI servers and expanding the potential scale of AI usage. However, the definition of an AI PC remains unclear.

According to Apple, the neural engine in M4 is Apple’s most powerful neural engine to date, outperforming any neural processing unit in any AI PC available today. Tim Millet, Vice President of Apple Platform Architecture, stated that M4 provides the same performance as M2 while using only half the power. Compared to the next-generation PC chips of various lightweight laptops, M4 delivers the same performance with only 1/4 of the power consumption.

Meanwhile, frequent developments from other major players suggest increasingly fierce competition in the AI PC sector, and the industry holds high expectations for AI PCs. Microsoft regards 2024 as the “Year of the AI PC.” Based on the estimated product launch timelines of PC brand manufacturers, Microsoft predicts that half of commercial computers will be AI PCs by 2026.

Intel has emphasized that the AI PC will be a turning point for the revival of the PC industry and will play a crucial role among the industry highlights of 2024. Intel CEO Pat Gelsinger previously stated at a conference that, driven by demand for AI PCs and Windows update cycles, customers continue to add processor orders with Intel. As such, Intel’s AI PC CPU shipments in 2024 are expected to exceed the original target of 40 million units.

TrendForce posits that AI PCs are expected to meet Microsoft’s benchmark of 40 TOPS of computational power. With new products meeting this threshold expected to ship in late 2024, significant growth is anticipated in 2025, especially following Intel’s release of its Lunar Lake CPUs by the end of 2024.

The AI PC market is currently propelled by two key drivers: Firstly, demand for terminal applications, mainly dominated by Microsoft through its Windows OS and Office suite, is a significant factor. Microsoft is poised to integrate Copilot into the next generation of Windows, making Copilot a fundamental requirement for AI PCs.

Secondly, Intel, as a leading CPU manufacturer, is advocating for AI PCs that combine CPU, GPU, and NPU architectures to enable a variety of terminal AI applications.

Read more

(Photo credit: Apple)

Please note that this article cites information from the WeChat account DRAMeXchange.
