As a strategic technology powering a new round of technological revolution and industrial transformation, AI has become one of the key driving forces behind new industrialization. Fueled by the ChatGPT craze, AI and its applications are rapidly gaining traction worldwide. From an industrial perspective, NVIDIA currently holds near-absolute dominance in the AI chip market.
Meanwhile, major tech companies such as Google, Microsoft, and Apple are actively joining the competition, scrambling to seize the opportunity. Meta, Google, Intel, and Apple have launched their latest AI chips in hopes of reducing reliance on companies like NVIDIA. Microsoft and Samsung have also reportedly laid out investment plans for AI development.
Recently, according to multiple global media reports, Microsoft is developing a new AI mega-model called MAI-1. This model far exceeds some of Microsoft’s previously released open-source models in scale and is expected to rival well-known large models like Google’s Gemini 1.5, Anthropic’s Claude 3, and OpenAI’s GPT-4 in terms of performance. Reports suggest that Microsoft may demonstrate MAI-1 at the upcoming Build developer conference.
In response to the growing demand for AI computing, Microsoft recently announced a plan to invest billions of dollars in building AI infrastructure in Wisconsin. Microsoft stated that the project will create 2,300 construction jobs and up to 2,000 data center jobs once construction is complete.
Furthermore, Microsoft will establish a new AI lab at the University of Wisconsin-Milwaukee to provide AI technology training.
Microsoft’s US investment plan amounts to USD 3.3 billion; combined with its previously announced investments in Japan, Indonesia, Malaysia, and Thailand, the company’s AI-related commitments total over USD 11 billion.
Microsoft’s recent announcements show that it plans to invest USD 2.9 billion over the next two years to enhance its cloud computing and AI infrastructure in Japan; USD 1.7 billion over the next four years to expand cloud services and AI in Indonesia, including building data centers; USD 2.2 billion over the next four years in cloud computing and AI in Malaysia; and USD 1 billion to set up its first data center in Thailand, along with AI skills training for over 100,000 people.
Apple has also unveiled its first AI chip, the M4. Apple stated that the neural engine in the M4 chip is the most powerful the company has ever developed, outstripping the neural processing unit in any current AI PC. Apple further emphasized that it will “break new ground” in generative AI this year, bringing transformative opportunities to users.
According to a report from The Wall Street Journal, Apple has been working on its own chips designed to run AI software on data center servers. Sources cited in the report revealed that the internal codename for the server chip project is ACDC (Apple Chips in Data Center). The report indicates that the ACDC project has been underway for several years, but it remains uncertain whether the new chip will ultimately be deployed or when it might hit the market.
Tech journalist Mark Gurman also suggests that Apple will introduce cloud-based AI capabilities this year using its proprietary chips. Gurman’s sources indicate that Apple intends to deploy high-end chips (similar to those designed for Macs) in cloud computing servers to handle cutting-edge AI tasks for Apple devices, while simpler AI-related functions will continue to be processed directly by the chips embedded in iPhone, iPad, and Mac devices.
As per industry sources cited by South Korean media outlet ZDNet Korea, Samsung Electronics’ AI inference chip, Mach-1, is set to begin prototype production using a multi-project wafer (MPW) approach and is expected to be based on Samsung’s in-house 4nm process.
Previously, at a shareholder meeting, Samsung revealed its plan to launch a self-developed AI accelerator chip, Mach-1, in early 2025. As a critical step in Samsung’s AI development strategy, the Mach-1 chip is an AI inference accelerator built on an application-specific integrated circuit (ASIC) design and equipped with LPDDR memory, making it particularly suitable for edge computing applications.
Kyung Kye-hyun, head of Samsung Electronics’ DS (Semiconductor) division, stated that the chip’s development goal is to use algorithms to reduce the data bottleneck between off-chip memory and the computing chip to one-eighth, while also achieving an eight-fold improvement in efficiency. He noted that the Mach-1 chip design has been verified using field-programmable gate array (FPGA) technology and is currently in the physical implementation stage as a system-on-chip (SoC), which is expected to be ready in late 2024, with a Mach-1-driven AI system to be launched in early 2025.
In addition to developing the Mach-1 AI chip, Samsung has established a dedicated research lab in Silicon Valley focused on artificial general intelligence (AGI) research, with the aim of developing new processors and memory technologies capable of meeting the processing requirements of future AGI systems.
(Photo credit: Pixabay)