News
China aims to establish at least 50 AI standards by 2026, as outlined in a new draft policy from Beijing, according to a report by the South China Morning Post. The draft policy, released on Tuesday by the Ministry of Industry and Information Technology (MIIT), will cover not only training for large language models (LLMs) but also semiconductors.
This initiative is part of China’s effort to catch up with the U.S. in AI development, the report noted. Earlier in April, Alibaba’s chairman, Joe Tsai, mentioned in an interview that China is at least two years behind its leading US counterparts, like OpenAI and Google, in the global AI race.
China’s proposed standards will cover training for large language models (LLMs), which are the foundation of generative AI services like ChatGPT. Additionally, they will address safety, governance, industrial applications, software, computing systems, data centers, and the technical requirements and testing methodologies for semiconductors.
According to MIIT, these standards are expected to apply to at least 1,000 Chinese technology companies. The document also states that China will participate in creating at least 20 international AI standards, the report said.
MIIT’s draft policy identifies 12 critical technologies in the AI supply chain, including LLMs, natural language processing, computer vision, and machine learning, which involves systems performing complex tasks akin to human problem-solving. The draft policy also identifies four layers that comprise China’s AI industry chain: the foundation (including the computing power, algorithms, and data needed to train LLMs), the framework, the model, and applications.
Citing an industry expert, the report indicated that the latest draft policy, unlike the usual command-and-control regulations, adopts a pro-market, soft-law approach to guide and promote China’s AI industry. This stance, which is comparatively innovation-oriented and market-friendly, will not only enable the establishment and development of an AI ecosystem but also benefit other industries.
China’s tech giants, led by Huawei, have been aggressively advancing in the AI arena. Previously, Huawei claimed its second-generation AI chip, the Ascend 910B, could compete with NVIDIA’s A100, and it has been working to displace NVIDIA, which holds over 90% of the market share in China. However, according to ChosunBiz, the chip, manufactured by China’s leading semiconductor foundry SMIC, has been in mass production for over half a year, yet the yield rate remains around 20%.
Meanwhile, in response to US export bans, NVIDIA began selling the H20, its AI chip tailored for the Chinese market, earlier this year.
News
“The Dawn of Generative AI Has Come!” This new chapter in human technological evolution was first proclaimed by NVIDIA’s founder, Jensen Huang. Qualcomm’s CEO, Cristiano Amon, shares this optimism regarding generative AI. Amon believes the technology is rapidly evolving and being adopted for applications such as mobile devices, where it has the potential to radically transform the landscape of the smartphone industry. Similarly, Intel has declared the arrival of the “AI PC” era, signaling a major shift in computing-related technologies and applications.
COMPUTEX 2024, the global showcase of AIoT and startup innovations, will run from June 4th to June 7th. This year’s theme, ‘Connecting AI’, aligns perfectly with the article’s focus on the transformative power of Generative AI and Taiwan’s pivotal role in driving innovation across industries.
This year, AI is transitioning from cloud computing to on-device computing. Various “AI PCs” and “AI smartphones” are being introduced to the market, offering a wide range of options. 2024 is even being referred to as the “Year of the AI PC,” with brands such as Asus, Acer, Dell, Lenovo, and LG actively releasing new products to capture market share. With the rapid rise of AI PCs and AI smartphones, revolutionary changes are expected in workplaces and in people’s daily lives. Furthermore, the PC and smartphone industries are also expected to be reinvigorated by new sources of demand.
An AI PC refers to a laptop (notebook) computer capable of performing on-device AI computations. Its main difference from regular office or business laptops lies in its CPU, which includes an additional neural processing unit (NPU). Examples of AI CPUs include Intel’s Core Ultra series and AMD’s Ryzen 8040 series. Additionally, AI PCs come with more DRAM to meet the demands of AI computations, thereby supporting related applications like those involving machine learning.
Microsoft’s role is crucial in this context, as the company has introduced a conversational AI assistant called “Copilot” that aims to integrate itself seamlessly into various tasks, such as working on Microsoft Office documents, video calls, web browsing, and other forms of collaborative activity. With Copilot, a dedicated AI shortcut key can now be added to the keyboard, allowing PC users to experience a holistic collaborative relationship with AI.
In the future, various computer functions will continue to be optimized with AI. Moreover, barriers that existed for services such as ChatGPT, which still require an internet connection, are expected to disappear. Hence, AI-based apps on PCs could one day be run offline. Such a capability is also one of the most eagerly awaited features among PC users this year.
Surging Development of LLMs Worldwide Has Led to a Massive Increase in AI Server Shipments
AI-enabled applications are not limited to PCs and smartphones. For example, an increasing number of cloud companies have started providing services that leverage AI in various domains, including passenger cars, household appliances, home security devices, wearable devices, headphones, cameras, speakers, TVs, etc. These services often involve processing voice commands and answering questions using technologies like ChatGPT. Going forward, AI-enabled applications will become ubiquitous in people’s daily lives.
Not to be overlooked is the fact that, as countries and multinational enterprises continue to develop their large language models (LLMs), the demand for AI servers will increase and thus promote overall market growth. Furthermore, edge AI servers are expected to become a major growth contributor in the future as well. Small-sized businesses are more likely to use LLMs that are more modest in scale for various applications. Therefore, they are more likely to consider adopting lower-priced AI chips that also offer excellent cost-to-performance ratios.
TrendForce projects that shipments of AI servers, including models equipped with GPUs, FPGAs, and ASICs, will reach 1.655 million units in 2024, marking a growth of 40.2% compared with the 2023 figure. Furthermore, the share of AI servers in the overall server shipments for 2024 is projected to surpass 12%.
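As a rough sanity check on these projections (a back-of-envelope sketch, not TrendForce’s methodology), the 2024 volume and growth rate imply the 2023 base, and the 12% share implies the approximate size of the overall server market:

```python
# Back-of-envelope check of the TrendForce projections quoted above.
# Assumption: the 40.2% growth is measured against the implied 2023 base,
# and the 12% figure is AI servers' share of total 2024 server shipments.
ai_servers_2024_m = 1.655   # million units, projected for 2024
growth_yoy = 0.402          # 40.2% year-on-year growth
ai_share = 0.12             # AI servers' share of overall server shipments

implied_2023_m = ai_servers_2024_m / (1 + growth_yoy)
implied_total_2024_m = ai_servers_2024_m / ai_share

print(f"Implied 2023 AI server shipments: {implied_2023_m:.2f} million")      # ~1.18
print(f"Implied total 2024 server shipments: {implied_total_2024_m:.1f} million")  # ~13.8
```

The implied figures (roughly 1.18 million AI servers in 2023 and just under 14 million servers overall in 2024) are consistent with the quoted growth rate and share.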
Regarding AI chip development in 2024, the focus is on the competition among the B100, MI300, and Gaudi series, released by NVIDIA, AMD, and Intel respectively. Apart from these chips, another significant highlight of this year is the emergence of in-house designed chips, or ASICs, from cloud service providers.
In addition to AI chips, the development of AI on PCs and smartphones is certainly another major driving force behind the technology sector in 2024. In the market for CPUs used in AI PCs, Intel’s Core Ultra series and AMD’s Ryzen 8000G series are expected to make a notable impact. The Snapdragon X Elite from Qualcomm has also garnered significant attention as it could potentially alter the competitive landscape in the near future.
Turning to the market for SoCs used in AI smartphones, the fierce competition between Qualcomm’s Snapdragon 8 Gen 3 and MediaTek’s Dimensity 9300 series is a key indicator. Another development that warrants attention is the adoption of AI chips in automotive hardware, such as infotainment systems and advanced driver assistance systems. The automotive market is undoubtedly one of the main battlegrounds among chip suppliers this year.
The supply chain in Taiwan has played a crucial role in providing the hardware that supports the advancement of AI-related technologies. When looking at various sections of the AI ecosystem, including chip manufacturing as well as the supply chains for AI servers and AI PCs, Taiwan-based companies have been important contributors.
Taiwan-based Companies in the Supply Chain Stand Ready for the Coming Wave of AI-related Demand
In the upstream of the supply chain, semiconductor foundries and OSAT providers such as TSMC, UMC, and ASE have always been key suppliers. As for ODMs or OEMs, companies including Wistron, Wiwynn, Inventec, Quanta, Gigabyte, Supermicro, and Foxconn Industrial Internet have become major participants in the supply chains for AI servers and AI PCs.
In terms of components, AI servers are notable for having a power supply requirement that is 2-3 times greater than that of general-purpose servers. The power supply units used in AI servers are also required to offer specification and performance upgrades. Turning to AI PCs, they also have higher demands for both computing power and energy consumption. Therefore, advances in the technologies related to power supply units represent a significant indicator this year with respect to the overall development of AI servers and AI PCs. Companies including Delta Electronics, LITE-ON, AcBel Polytech, CWT, and Chicony are expected to make important contributions to the upgrading and provisioning of power supply units.
Also, as computing power increases, heat dissipation has become a pressing concern for hardware manufacturers looking to further enhance their products. The advancements in heat dissipation made by solution providers such as Sunon, Auras, AVC, and FCN during this year will be particularly noteworthy.
Besides the aforementioned companies, Taiwan is also home to numerous suppliers for other key components related to AI PCs. The table below lists notable component providers operating on the island.
With the advent of generative AI, the technology sector is poised for a boom across its various domains. From AI PCs to AI smartphones and a wide range of smart devices, this year’s market for electronics-related technologies is characterized by diversity and innovation. Taiwan’s supply chain plays a vital role in the development of AI PCs and AI servers, including chips, components, and entire computing systems. As competition intensifies in the realm of LLMs and AI chips, this entire market is expected to encounter more challenges and opportunities.
Join the AI grand event at Computex 2024, alongside CEOs from AMD, Intel, Qualcomm, and ARM. Discover more about this expo! https://bit.ly/44Gm0pK
(Photo credit: Qualcomm)
News
As the Apple Worldwide Developers Conference (WWDC) in June approaches, recent rumors about Apple’s AI research have resurfaced. According to reports from MacRumors and Tom’s Guide, Apple is reportedly developing a large language model (LLM) comparable to ChatGPT that can run directly on devices without relying on cloud platforms.
In late February of this year, Apple reportedly decided to terminate “Project Titan,” the electric car development project it initiated a decade ago, and redirect research funds and resources into the field of generative AI. This move has drawn significant attention to Apple’s activities in the AI sector.
Moreover, MacRumors also reports that Apple’s AI research team, led by John Giannandrea, began developing conversational AI software, known today as a large language model, four years ago. Apple’s proprietary large language model has reportedly been trained with over 200 billion parameters, making it more powerful than GPT-3.5, the model behind the original ChatGPT.
Previously, Apple disclosed that the iOS 18 operating system, set to launch this year, will incorporate AI capabilities. Recently, tech website Tom’s Guide speculated further that iOS 18 could execute large language models directly on Apple devices. However, whether Apple’s large language model can be successfully integrated into various Apple software services remains to be seen.
Take Apple’s voice assistant Siri as an example: at an AI summit held by Apple in February last year, employees were told that Siri would integrate a large language model in the future. However, former Siri engineer John Burkey told The New York Times that Siri’s programming is quite complex, requiring six weeks of database rebuilding for each new sentence added.
Meanwhile, as Apple’s AI research faces challenges, interest in its Vision Pro headset has begun to wane, with sales cooling rapidly. According to Bloomberg’s Mark Gurman, demand for Vision Pro demos is way down at Apple stores, and sales at some stores have dropped from a few units per day to a few units per week.
(Photo credit: Apple)
News
In the dynamic wave of generative AI, AI PCs have emerged as a focal point of the industry’s development. Technological upgrades across the industry chain and the distinctive features of on-device AI, such as security, low latency, and high reliability, are driving their rapid evolution. AI PCs are poised to become a mainstream category within the PC market, converging with the PC replacement trend, as reported by Jiwei.
On-device AI, driven by technologies like lightweight large language models (LLMs), signifies the next stage in AI development. PC makers aim to propel innovative upgrades in AI PC products by seamlessly integrating resources both upstream and downstream. The pivotal upgrade lies in the chip, though challenges in hardware-software coordination, data storage, and application development are inevitable. Nevertheless, AI PCs are on track to evolve at an unprecedented pace, transforming into a “hybrid” encompassing terminals, edge computing, and cloud technology.
Is the AI PC the Industry’s Savior?
In the face of consecutive quarters of global PC shipment decline, signs of a gradual easing in the downward trend are emerging. The industry cautiously anticipates a potential recovery, considering challenges such as structural demand cooling and supply imbalances.
Traditionally viewed as a mature industry grappling with long-term growth challenges, the PC industry is witnessing a shift due to the evolution of generative AI technology and the extension of the cloud to the edge. This combination of AI technology with terminal devices like PCs is seen as a trendsetter, with the ascent of AI PCs considered an “industry savior” that could open new avenues for growth in the PC market.
Yuanqing Yang, Chairman and CEO of Lenovo, explains how AIGC is stimulating iterative computation and upgrades in AI-enabled terminals. Recognizing that users want the benefits of AIGC while safeguarding their privacy, he considers personal devices or home servers the safest option. Lenovo is poised to invest approximately 7 billion RMB in the AI field over the next three years.
Analysis from Orient Securities, also known as DFZQ, reveals that the surge in consumer demand from the second half of 2020 to 2021 is expected to trigger a substantial PC replacement cycle from the second half of 2024 to 2025, initiating a new wave of PC upgrades.
Undoubtedly, AI PCs are set to usher in a transformative wave and accelerate development against the backdrop of the PC replacement trend. Guotai Junan Securities notes that AI PCs feature processors with enhanced computing capabilities and incorporate multi-modal algorithms. This integration is anticipated to fundamentally reshape the PC experience, positioning AI PCs as a hybrid of terminals, edge computing, and cloud technology that meets the new demands of generative AI workloads.
PC Ecosystem Players Strategically Positioning for Dominance
The AI PC field is experiencing vibrant development, with major PC ecosystem companies actively entering the scene. Companies such as Lenovo, Intel, Qualcomm, and Microsoft have introduced corresponding initiatives. Lenovo showcased the industry’s first AI PC at Lenovo Tech World 2023, Intel launched the AI PC Acceleration Program at Innovation 2023, and Qualcomm introduced the Snapdragon X Elite processor specifically designed for AI at the Snapdragon Summit. Meanwhile, Microsoft is accelerating the optimization of its office software, integrating Bing and ChatGPT into Windows.
While current promotions of AI PC products may outrun actual user experiences, the terminals displayed by Lenovo, Intel’s AI PC Acceleration Program, and a collaboration ecosystem deeply integrated with numerous independent software vendors (ISVs) indicate that on-device AI offers advantages the cloud cannot match, including adapting to the work habits of individual users to provide a personalized, differentiated experience.
Ablikim Ablimiti, Vice President of Lenovo, highlighted five core features of AI PCs: personal large models, natural language interaction, intelligent hybrid computing, open ecosystems, and real privacy and security. He stated that AI large models and PCs are a natural fit, and that terminal makers are leading this innovation by integrating upstream and downstream resources to provide complete intelligent services for AI PCs.
In terms of chips, Intel’s Core Ultra is considered its most significant processor architecture change in 40 years. Built on the Meteor Lake architecture, it fully integrates chipset functions into the processor, incorporates an NPU into a PC processor for the first time, and also integrates Intel’s Arc graphics. This marks a significant milestone in the practical commercial application of AI PCs.
TrendForce: AI PC Demand to Expand from High-End Enterprises
TrendForce believes that due to the high costs of upgrading both software and hardware associated with AI PCs, early development will focus on high-end business users and content creators. This group has a strong demand for leveraging AI processing capabilities to improve productivity and can benefit immediately from related applications, making them the primary users of the first generation. The emergence of AI PCs is not expected to stimulate much additional PC purchase demand; instead, most upgrades to AI PC devices will occur naturally as part of the business equipment replacement cycle projected for 2024.
(Image: Qualcomm)
News
The fusion of AIGC with end-user devices is highlighting the importance of personalized user experiences, cost efficiency, and faster response times in generative AI applications. Major companies like Lenovo and Xiaomi are ramping up their efforts in the development of edge AI, extending the generative AI wave from the cloud to the edge and end-user devices.
On October 24th, Lenovo hosted its 9th Tech World event, Lenovo Tech World 2023, announcing deepening collaborations with companies like Microsoft, NVIDIA, Intel, AMD, and Qualcomm in the areas of smart devices, infrastructure, and solutions. At the event, Lenovo also unveiled its first AI-powered PC, which runs a compact AI model designed for end-user applications and offers features such as photo editing, intelligent video editing, document editing, and automatic task-solving based on user thought patterns.
Smartphone manufacturers are also extending their efforts into edge AI. Xiaomi recently announced its first use of the Qualcomm Snapdragon 8 Gen 3, significantly enhancing its ability to handle LLMs on-device. Xiaomi has also embedded AI LLMs into its HyperOS system to enhance user experiences.
During the 2023 vivo Developer Conference on November 1st, vivo introduced their self-developed Blue Heart model, offering five products with parameters ranging from billions to trillions, covering various core scenarios. Major smartphone manufacturers like Huawei, OPPO, and Honor are also actively engaged in developing LLMs.
Speeding up Practical Use of AI Models in Business
While integrating AI models into end-user devices enhances user experiences and boosts the consumer electronics market, it is equally significant for advancing the practical use of AI models. As reported by Jiwei, Jian Luan, head of the AI Lab Big Model Team at Xiaomi, explains that large AI models have gained attention because they effectively drive the production of large-scale informational content, made possible by extensive training data, tasks, and parameters. The next step, achieving lightweight models that can operate effectively on end-user devices, will be the main focus of industry development.
In fact, combining generative AI with smart terminals has several advantages:
Users used to complain about the lack of intelligence in AI devices, noting that AI systems would reset to a blank state after each interaction. This is a common issue with cloud-based LLMs, and handling such concerns at the end-user device level can simplify the process.
In other words, the expansion of generative AI from the cloud to the edge integrates AI technology with hardware devices like PCs and smartphones. This is becoming a major trend in the commercial application and development of large AI models. It has the potential to enhance or resolve challenges in AI development related to personalization, security and privacy risks, high computing costs, subpar performance, and limited interactivity, thereby accelerating the commercial use of AI models.
Integrated Chips for End-User Devices: CPU+GPU+NPU
The lightweight transformation and localization of AI LLMs rely on advancements in chip technology. Leading manufacturers like Qualcomm, Intel, NVIDIA, AMD, and others have been introducing products in this direction. Qualcomm’s Snapdragon X Elite, the first processor in the Snapdragon X series designed for PCs, integrates a dedicated Neural Processing Unit (NPU) capable of supporting large-scale language models with billions of parameters.
The Snapdragon 8 Gen 3 platform supports over 20 AI LLMs from companies like Microsoft, Meta, OpenAI, Baidu, and others. Intel’s latest Meteor Lake processor integrates an NPU in PC processors for the first time, combining NPU with the processor’s AI capabilities to improve the efficiency of AI functions in PCs. NVIDIA and AMD also plan to launch PC chips based on Arm architecture in 2025 to enter the edge AI market.
Kedar Kondap, Senior Vice President and General Manager of Compute and Gaming Business at Qualcomm, emphasizes the advantages of LLM localization. He envisions highly intelligent PCs that actively understand user thoughts, provide privacy protection, and offer immediate responses. He highlights that addressing these needs at the end-user level provides several advantages compared to solving them in the cloud, such as simplifying complex processes and offering enhanced user experiences.
To meet the increased demand for AI computing when extending LLMs from the cloud to the edge and end-user devices, the integration of CPU+GPU+NPU is expected to be the future of processor development. This underscores the significance of Chiplet technology.
Feng Wu, Chief Engineer of Signal Integrity and Power Integrity at Sanechips/ZTE, explains that by employing Die to Die and Fabric interconnects, it is possible to densely and efficiently connect more computing units, achieving large-scale chip-level hyperscale computing.
Additionally, by connecting the CPU, GPU, and NPU at high speeds in the same system, chip-level heterogeneity enhances data transfer rates, reduces data access power, increases data processing speed, and lowers storage access power to meet the parameter requirements of LLMs.
(Image: Qualcomm)