Artificial Intelligence


2023-10-30

[Insights] Apple’s Quiet Pursuit of AI and the Advantage in AI Subscription Models

According to Bloomberg, Apple is quietly catching up with its competitors in the AI field. Beyond acquiring AI-related companies to obtain relevant technology quickly, Apple is now also developing its own large language model (LLM).

TrendForce’s insights:

  1. Apple’s Low-Profile Approach to AI: Seizing the Next Growth Opportunity

As the smartphone market matures, brands are not only focusing on hardware upgrades, particularly camera modules, to stimulate device replacement; many are also keen to introduce new AI functionality in smartphones in a bid to reignite the market's growth potential. Some Chinese brands have made notable progress in the AI field, especially in large language models.

For instance, Xiaomi introduced its large language model MiLM-6B, which ranked tenth on the C-Eval list (a comprehensive evaluation benchmark for Chinese language models developed jointly by Tsinghua University, Shanghai Jiao Tong University, and the University of Edinburgh) and first in its parameter-size category. Meanwhile, Vivo has launched its large model VivoLM, with the VivoLM-7B model securing second position on the C-Eval ranking.

As for Apple, while it may appear to have played a largely observational role as Silicon Valley companies like OpenAI released ChatGPT and Google and Microsoft introduced AI-powered search engines, the reality is that since 2018 Apple has quietly acquired more than 20 AI-related companies. Apple's approach is characterized by extreme discretion, with only a few of these transactions publicly disclosing their final acquisition prices.

On another front, Apple has been discreetly developing its own large language model, called Ajax, spending millions of dollars a day on training it with the aim of outperforming OpenAI's GPT-3.5 and Meta's LLaMA.

  2. Apple’s Advantage in Developing a Paid Subscription Model for Large Language Models Compared to Other Brands

For general consumers, the most common smartphone usage scenarios today revolve around taking photos, communication, and information retrieval. While AI could enhance the user experience in some of these functions, none of them currently qualifies as an “essential AI feature.”

However, if a killer application involving large language models were to emerge on smartphones in the future, Apple is poised to have an exclusive advantage in establishing such a service as a subscription-based model. This advantage is due to recent shifts in Apple’s revenue composition, notably the increasing contribution of “Service” revenue.

In August 2023, Apple CEO Tim Cook highlighted in the company’s third-quarter earnings report that Apple’s subscription services, which include Apple Arcade, Apple Music, iCloud, AppleCare, and others, had achieved record revenue and surpassed 1 billion paid subscriptions.

In other words, Apple is better positioned than other smartphone brands to monetize a large language model service through subscriptions, thanks to its already substantial base of paying subscribers. Other smartphone brands, lacking a similarly extensive subscription base, may find it difficult to win consumer acceptance for a paid LLM service.


2023-10-25

[News] Lenovo’s AI PC and ‘AI Twins’ Unveiled, Market Entry Expected After September

At the Global Tech World event on October 24th, Lenovo Group Chairman and CEO Yuanqing Yang presented AI-powered PCs and enterprise-grade “AI Twins” (AI assistants) to a global audience, heralding a new dawn for personal computers. He revealed that AI PCs are slated to hit the market no sooner than September of the following year.

Yang said the journey of AI PCs involves a maturation process: such products typically start with around a 10% market share but are destined to become the norm, and he envisions a future in which every computer is an AI PC.

Regarding foundation models, Yang pointed out that some companies are hastily jumping on the bandwagon. He emphasized Lenovo’s commitment to not rushing into trends, and noted drawbacks and vulnerabilities in China’s existing public foundation models, including concerns about personal privacy and data security. Lenovo’s focus is on establishing hybrid foundation models.

Given the need to compress models for device deployment, Lenovo is currently concentrating on research related to domain-adaptive model fine-tuning, lightweight model compression, and privacy protection techniques.

Moreover, Yang highlighted Lenovo’s previously announced US$1 billion investment in AI innovation over the next three years. However, he clarified that this amount falls short of actual financial demands, since virtually all of Lenovo’s business domains involve AI and fundamental services requiring substantial financial backing.

Lenovo’s Q1 earnings report from mid-August had unveiled the company’s plan to allocate an additional $1 billion over the next three years for expediting AI technology and applications. This encompasses the development of AI devices, AI infrastructure, and the integration of generative AI technologies like AIGC into industry vertical solutions.

Meanwhile, chip manufacturers such as Intel are joining forces to expedite the development of the AI PC ecosystem, with the objective of bringing AI applications to over 100 million personal computers by 2025. This endeavor has piqued the interest of well-known international brands such as Acer, Asus, HP, and Dell, all of which hold a positive outlook on the potential of AI-powered PCs. AI PCs are anticipated to be a pivotal factor in revitalizing the PC industry’s annual growth in 2024.

Currently, no brand is selling an AI PC in the true sense, but leading manufacturers have already revealed plans for related products. The industry anticipates a substantial wave of AI PC releases in 2024.

(Image: Lenovo)

2023-10-04

[News] Unveiling China’s 14 Major Challenges in Electronic Information Engineering: AI, New Sensors, and Optoelectronic Semiconductors

As the United States intensifies its chip embargo against China, the Chinese Academy of Engineering (CAE) has released an annual report for technological development. This report serves as a strategic guide to navigate the embargo and promote autonomous technological growth comprehensively.

2023-10-03

[News] Is Tenstorrent Setting Its Sights on NVIDIA? Plans to Utilize Samsung’s 4nm Process for Chiplet Production

As reported by China’s Jiwei on October 2nd, Samsung has revealed that its chip manufacturing division has secured an order from AI chip client Tenstorrent to produce chips utilizing its cutting-edge 4nm process.

2023-07-31

High-Tech PCB Manufacturers Poised to Gain from Remarkable Increase in AI Server PCB Revenue

Looking at the impact of AI server development on the PCB industry: mainstream AI servers incorporate 4 to 8 GPUs, unlike general servers. The need for high-frequency, high-speed data transmission increases the number of PCB layers and drives an upgrade in CCL grade, pushing the AI server PCB output value to several times that of a general server. However, this advancement also raises technological barriers, presenting an opportunity for high-tech PCB manufacturers to benefit.

TrendForce’s perspective: 

  • The increased value of AI server PCBs primarily comes from GPU boards.

Taking the NVIDIA DGX A100 as an example, its PCBs can be divided into CPU boards, GPU boards, and accessory boards. The overall PCB value is about 5 to 6 times that of a general server, with approximately 94% of the incremental value attributable to the GPU boards. This is mainly because general servers typically do not include GPUs, while the NVIDIA DGX A100 is equipped with 8.

Further analysis reveals that the CPU board segment, comprising CPU boards, CPU mainboards, and functional accessory boards, makes up about 20% of the overall AI server PCB value. The GPU board segment, including GPU boards, NVSwitch, OAM (OCP Accelerator Module), and UBB (Universal Baseboard), accounts for around 79% of the total. Accessory boards, composed of components such as power supplies, HDDs, and cooling systems, contribute only about 1%.
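The value breakdown above can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below is purely illustrative: the base value of 100 is an arbitrary index (not a figure from the report), and the 5.5x multiple is simply the midpoint of the “5 to 6 times” range cited above.

```python
# Illustrative check of the AI server PCB value breakdown described above.
# The absolute base value is a placeholder; only the ratios come from the text.

GENERAL_SERVER_PCB_VALUE = 100.0   # arbitrary index, not a real dollar figure
AI_MULTIPLE = 5.5                  # midpoint of the "5 to 6 times" range

ai_server_pcb_value = GENERAL_SERVER_PCB_VALUE * AI_MULTIPLE
incremental_value = ai_server_pcb_value - GENERAL_SERVER_PCB_VALUE

# ~94% of the increment is attributed to GPU boards
gpu_board_increment = 0.94 * incremental_value

# Segment shares of the total AI server PCB value (sum to 100%)
segments = {"CPU boards": 0.20, "GPU boards": 0.79, "Accessory boards": 0.01}
for name, share in segments.items():
    print(f"{name}: {share * ai_server_pcb_value:.1f}")

print(f"Total: {ai_server_pcb_value:.1f}")
print(f"Increment over general server: {incremental_value:.1f}")
print(f"GPU-driven increment: {gpu_board_increment:.1f}")
```

On this index, an AI server’s PCB content is worth 550 versus 100 for a general server, and about 423 of the 450-point increment (roughly 94%) sits in the GPU board segment, which is consistent with GPU boards representing ~79% of the total.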

  • The technological barriers of AI servers are rising, leading to a decrease in the number of suppliers.

Since AI servers require multiple card interconnections, with more extensive and denser wiring than general servers, and AI GPUs have more pins and more memory chips, GPU board assemblies may reach 20 layers or more. Yield rates decrease as layer counts rise.

Additionally, due to the demand for high-frequency and high-speed transmission, CCL materials have evolved from Low Loss grade to Ultra Low Loss grade. As the technological barriers rise, the number of manufacturers capable of entering the AI server supply chain also decreases.

Currently, the suppliers for CPU boards in AI servers include Ibiden, AT&S, Shinko, and Unimicron, while the mainboard PCB suppliers consist of GCE and Tripod. For GPU boards, Ibiden serves as the supplier, and for OAM PCBs, Unimicron and Zhending are the suppliers, with GCE, ACCL, and Tripod currently undergoing certification. The CCL suppliers include EMC. For UBB PCBs, the suppliers are GCE, WUS, and ACCL, with TUC and Panasonic being the CCL suppliers.

Regarding ABF boards, Taiwanese manufacturers have not yet obtained orders for NVIDIA AI GPUs. The main reason for this is the limited production volume of NVIDIA AI GPUs, with an estimated output of only about 1.5 million units in 2023. Additionally, Ibiden’s yield rate for ABF boards with 16 layers or more is approximately 10% to 20% higher than that of Taiwanese manufacturers. However, with TSMC’s continuous expansion of CoWoS capacity, it is expected that the production volume of NVIDIA AI GPUs will reach over 2.7 million units in 2024, and Taiwanese ABF board manufacturers are likely to gain a low single-digit percentage market share.

(Photo credit: Google)
