News
According to sources cited in a Reuters report, IC design giant MediaTek is developing an ARM-based PC chip that will run Microsoft’s Windows operating system.
Last month, Microsoft unveiled a new generation of laptops featuring ARM-based chips, which provide sufficient computing power to run AI applications. Its executives stated that this represents the future trend of consumer computing. MediaTek’s latest development of an ARM-based PC chip is said to be geared toward these types of laptops.
The same report indicates that Microsoft’s move takes aim at Apple, which has been using ARM-based chips in its Mac computers for about four years. Microsoft’s decision to optimize Windows for ARM-based chips could further threaten Intel’s long-standing dominance in the PC market.
Regarding this matter, both MediaTek and Microsoft declined to comment.
According to industry sources, MediaTek’s PC chip is scheduled to launch by the end of next year, coinciding with the expiration of Qualcomm’s exclusive agreement to supply chips for laptops. Because the chip is based on ARM’s existing designs, it requires less design work, which should significantly accelerate development.
It is currently unclear whether Microsoft has approved MediaTek’s PC chip to support the Copilot+ features in Windows.
ARM executives have stated that one of their clients used ready-made components to complete a chip design in about nine months, although this client was not MediaTek. For experienced chip designers, creating and testing advanced chips typically takes more than a year, depending on the complexity.
TrendForce’s latest press release also highlights MediaTek’s strategy in the PC domain. The Arm-based chip co-developed by MediaTek and NVIDIA, which adopts Wi-Fi 7 and 5G, is reportedly slated to take a spot in the AI notebook market from 2Q25 and to initiate a new wave of technical innovation after 2025. TrendForce forecasts that Arm chips are likely to surpass 20% market penetration at an accelerated pace in 2025.
Read more
(Photo credit: MediaTek)
News
Last year, Qualcomm entered the PC market, sparking an AI PC frenzy in collaboration with Microsoft Copilot+. According to Qualcomm CEO Cristiano Amon, beyond mobile devices, PCs, and automotive applications, Qualcomm is now focusing on data centers. In the long term, these products will eventually adopt Qualcomm’s in-house developed Nuvia architecture.
Amon pointed out that PCs are entering a new cycle in which AI engines bring new experiences. Just as mobile phones must be slim yet not overheat or become too bulky, Qualcomm has always focused on technological innovation rather than merely improving power consumption. While traditional PC leaders may emphasize TOPS (trillions of operations per second), energy use and efficiency are also crucial.
Amon stressed the importance of maintaining battery life and integrating functionalities beyond the CPU and GPU, which he believes will be key to defining leadership in the PC market. He also joked that an x86 computer runs out of battery quickly, whereas a new AI PC next year would last a long time without draining its battery.
Amon noted that Qualcomm’s Snapdragon X Elite and Snapdragon X Plus have been developed with superior NPU performance and battery life. Moreover, Snapdragon X Elite is only the first generation and focuses more on outright performance, while upcoming generations may put more emphasis on computational power and on integrating these capabilities into the chip design.
Currently, more than 20 AI PCs equipped with Snapdragon X Elite and Snapdragon X Plus have been launched, including models from 7 OEMs such as Acer, Asus, Dell, HP, and others.
Amon believes that market penetration will continue to increase next year. He sees AI PCs as a new opportunity, though he suggests it may take some time for them to be widely adopted as a new version of Windows for the PC market emerges. However, with the end of Windows 10 support, users can transition to new Copilot+ models, which he believes will drive much faster adoption.
Amon pointed out that NPUs have already demonstrated their advantages in the PC and automotive chip industries, and these capabilities can be extended to data centers or other technologies.
He then highlighted data centers as a significant opportunity for the transition to Arm architecture and expressed belief in growing opportunities for edge computing in the future. Amon also mentioned the adoption of the Nuvia architecture in smartphones, data centers, and the automotive industry. Additionally, he disclosed plans to launch mobile products featuring Nuvia-based processors at the October Snapdragon Annual Summit.
Read more
(Photo credit: Qualcomm)
News
The New York Times reported on June 5th that the U.S. Department of Justice (DOJ) and the Federal Trade Commission (FTC) reached an agreement over the past week, negotiated by senior officials of both agencies. The DOJ will investigate whether NVIDIA has violated antitrust laws, while the FTC will examine the conduct of OpenAI and Microsoft.
Jonathan Kanter, the top antitrust official in the DOJ’s Antitrust Division, reportedly highlighted at an AI conference at Stanford University last week that AI’s reliance on massive amounts of data and computing power gives dominant companies a significant advantage. In a February interview, FTC Chair Lina Khan stated that the FTC aims to identify potential issues in the early stages of AI development.
As per Reuters’ report, Microsoft, OpenAI, NVIDIA, DOJ and FTC did not immediately respond to requests for comment outside regular business hours.
In a May interview with CNBC, Appian co-founder and CEO Matt Calkins stated that AI might not be a winner-take-all market. He suggested that if alliances could secure victory in the AI race, Google would already have won.
Per a report from Roll Call on May 15th, a bipartisan Senate AI working group led by Senate Majority Leader Chuck Schumer released an AI roadmap, calling for the federal government to invest at least USD 32 billion annually in non-defense-related AI systems.
In March, The Information reported that Microsoft does not want its hiring of Inflection AI’s two co-founders and the majority of its 70-member team to be perceived as an acquisition.
Read more
(Photo credit: NVIDIA)
News
On May 20, a report by Reuters revealed that Google plans to invest an additional EUR 1 billion in its data center park in Finland, a move aimed at expanding the facility and boosting the growth of its AI business in Europe.
The report notes that in recent years, many data centers have been established in Nordic countries due to the cool climate, tax incentives, and ample supply of renewable energy. Finland’s wind power capacity has grown significantly, rising 75% to 5,677 megawatts by 2022, which can even push electricity prices into negative territory on particularly windy days.
Data center operators like Google have thus taken advantage of this renewable energy and have already signed long-term wind power purchase agreements in Finland.
Driven by the AI wave, cloud providers such as Microsoft, Google, Meta, and Amazon have an increasingly robust demand for AI servers and data centers.
According to a previous TrendForce forecast on global CSP demand for high-end AI servers (those equipped with NVIDIA, AMD, or other high-end ASIC chips) in 2024, the four major U.S. CSPs, Microsoft, Google, AWS, and Meta, are expected to account for 20.2%, 16.6%, 16%, and 10.8% of global demand, respectively, together commanding more than 60% of the global market.
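As a quick arithmetic check (a minimal illustrative sketch, using only the percentages quoted above), the four shares can be summed to confirm the combined figure:

```python
# TrendForce's estimated 2024 shares of global high-end AI server demand (%)
shares = {"Microsoft": 20.2, "Google": 16.6, "AWS": 16.0, "Meta": 10.8}

combined = sum(shares.values())
print(f"Combined share of the four major U.S. CSPs: {combined:.1f}%")  # prints 63.6%, i.e. more than 60%
```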
Read more
(Photo credit: Google)
News
As a strategic technology powering a new round of technological revolution and industrial transformation, AI has become one of the key driving forces of new industrialization. Fueled by the ChatGPT craze, AI and its applications are rapidly gaining traction worldwide. From an industry perspective, NVIDIA currently holds near-absolute dominance of the AI chip market.
Meanwhile, major tech companies such as Google, Microsoft, and Apple are actively joining the competition, scrambling to seize the opportunity. Meta, Google, Intel, and Apple have each launched their latest AI chips in hopes of reducing reliance on companies like NVIDIA, while Microsoft and Samsung have also reportedly drawn up investment plans for AI development.
Recently, according to multiple global media reports, Microsoft is developing a new large AI model called MAI-1. This model far exceeds some of Microsoft’s previously released open-source models in scale and is expected to rival well-known large models like Google’s Gemini 1.5, Anthropic’s Claude 3, and OpenAI’s GPT-4 in terms of performance. Reports suggest that Microsoft may demonstrate MAI-1 at the upcoming Build developer conference.
In response to the growing demand for AI computing, Microsoft recently announced a plan to invest billions of dollars in building AI infrastructure in Wisconsin. Microsoft stated that this move will create 2,300 construction jobs and could contribute up to 2,000 data center jobs once construction is complete.
Furthermore, Microsoft will establish a new AI lab at the University of Wisconsin-Milwaukee to provide AI technology training.
Microsoft’s U.S. investment plan amounts to USD 3.3 billion; combined with the AI-related investments it previously announced for Japan, Indonesia, Malaysia, and Thailand, the total exceeds USD 11 billion.
Microsoft’s recent announcements show that it plans to invest USD 2.9 billion over the next two years to enhance its cloud computing and AI infrastructure in Japan; USD 1.7 billion over the next four years to expand cloud services and AI in Indonesia, including building data centers; USD 2.2 billion over the next four years in cloud computing and AI in Malaysia; and USD 1 billion to set up its first data center in Thailand, along with AI skills training for over 100,000 people there.
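Tallying these regional commitments together with the U.S. plan likewise confirms the “over USD 11 billion” figure (a minimal illustrative sketch using only the amounts quoted in this article):

```python
# Microsoft's announced cloud/AI investments cited above (USD billions)
investments = {
    "United States (Wisconsin)": 3.3,
    "Japan": 2.9,
    "Indonesia": 1.7,
    "Malaysia": 2.2,
    "Thailand": 1.0,
}

total = sum(investments.values())
print(f"Total announced: USD {total:.1f} billion")  # prints 11.1, i.e. over USD 11 billion
```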
Apple has also unveiled its first AI chip, the M4. Apple stated that the neural engine in the M4 chip is the most powerful the company has ever developed, outstripping any neural processing unit in current AI PCs. Apple further emphasized that it will “break new ground” in generative AI this year, bringing transformative opportunities to users.
According to a report from The Wall Street Journal, Apple has been working on its own chips designed to run AI software on data center servers. Sources cited in the report revealed that the internal codename for the server chip project is ACDC (Apple Chips in Data Center). The report indicates that the ACDC project has been underway for several years, but it’s currently uncertain whether this new chip will be commissioned and when it might hit the market.
Tech journalist Mark Gurman also suggests that Apple will introduce AI capabilities in the cloud this year using its proprietary chips. Gurman’s sources indicate that Apple intends to deploy high-end chips (similar to those designed for the Mac) in cloud computing servers to handle cutting-edge AI tasks on Apple devices. Simpler AI-related functions will continue to be processed directly by chips embedded in iPhone, iPad, and Mac devices.
As per industry sources cited by South Korean media outlet ZDNet Korea, Samsung Electronics’ AI inference chip, Mach-1, is set to begin prototype production using a multi-project wafer (MPW) approach and is expected to be based on Samsung’s in-house 4nm process.
Previously at a shareholder meeting, Samsung revealed its plan to launch a self-made AI accelerator chip, Mach-1, in early 2025. As a critical step in Samsung’s AI development strategy, the Mach-1 chip is an AI inference accelerator built on an application-specific integrated circuit (ASIC) design and equipped with LPDDR memory, making it particularly suitable for edge computing applications.
Kyung Kye-hyun, head of Samsung Electronics’ DS (Semiconductor) division, stated that the chip’s development goal is to use algorithms to reduce the data bottleneck between off-chip memory and the computing chip to 1/8, while also achieving an eight-fold improvement in efficiency. He noted that the Mach-1 design has been verified with field-programmable gate array (FPGA) technology and is currently in the system-on-chip (SoC) physical implementation stage, expected to be ready in late 2024, with a Mach-1-driven AI system to be launched in early 2025.
In addition to developing the Mach-1 AI chip, Samsung has established a dedicated research lab in Silicon Valley focusing on artificial general intelligence (AGI) research. The intention is to develop new processors and memory technologies capable of meeting the processing requirements of future AGI systems.
Read more
(Photo credit: Pixabay)