News
Currently, the push for lower power consumption remains a key concern in the industry. According to a recent report by the International Energy Agency (IEA), given that an average Google search requires 0.3 Wh while each request to OpenAI’s ChatGPT consumes 2.9 Wh, handling the 9 billion searches conducted daily at ChatGPT-like cost would require an additional 10 terawatt-hours (TWh) of electricity annually. Based on projected AI server sales, the AI industry could see exponential growth in 2026, with power consumption at least ten times that of last year.
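As a rough sanity check, the arithmetic behind the IEA comparison can be reproduced in a few lines. The inputs below are the report’s own figures (0.3 Wh, 2.9 Wh, 9 billion daily searches), and the result lands in the same ballpark as the roughly 10 TWh cited:

```python
# Back-of-the-envelope check of the IEA comparison cited above.
# All inputs are the report's figures, not independent measurements.
GOOGLE_SEARCH_WH = 0.3     # average energy per Google search (Wh)
CHATGPT_REQUEST_WH = 2.9   # average energy per ChatGPT request (Wh)
SEARCHES_PER_DAY = 9e9     # Google searches handled per day

extra_wh_per_day = SEARCHES_PER_DAY * (CHATGPT_REQUEST_WH - GOOGLE_SEARCH_WH)
extra_twh_per_year = extra_wh_per_day * 365 / 1e12  # Wh -> TWh

print(f"Additional electricity: ~{extra_twh_per_year:.1f} TWh/year")
# ~8.5 TWh/year, the same order of magnitude as the ~10 TWh in the report
```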
According to an earlier report from Business Korea, Ahmad Bahai, CTO of Texas Instruments, noted that AI services have recently expanded beyond the cloud to mobile and PC devices, driving a surge in power consumption and making low-power design a hot topic.
In response to market demands, the industry is actively developing semiconductors with lower power consumption. In memory products, the development of LPDDR and related offerings such as the Low Power Compression Attached Memory Module (LPCAMM) is accelerating. These products are particularly suitable for conserving energy in mobile devices with limited battery capacity. Additionally, the expansion of AI applications into the server and automotive fields is driving increased use of LPDDR to reduce power consumption.
Among major companies, Micron, Samsung Electronics, and SK Hynix are speeding up development of the next generation of LPDDR. Micron recently announced the launch of Crucial LPCAMM2, a dedicated low-power packaging module built around the latest LPDDR products (LPDDR5X); compared to existing modules, it is 64% smaller and 58% more power-efficient. LPCAMM itself was first introduced by Samsung Electronics last year and is expected to see significant market growth this year.
Currently, the Joint Electron Device Engineering Council (JEDEC) plans to complete the LPDDR6 specification within this year. According to industry news cited by the Korean media Business Korea, LPDDR6 is expected to enter commercialization next year, and the industry predicts its bandwidth may be more than double that of the previous generation.
(Photo credit: SK Hynix)
News
According to a report from Bloomberg, SoftBank Group founder Masayoshi Son is planning to raise USD 100 billion to establish an AI chip company that would complement the group’s Arm business.
The report indicates that Masayoshi Son plans to name the new artificial intelligence chip venture “Izanagi,” after the deity of creation and life in Japanese mythology, and Son himself will directly lead the project.
Regarding funding, one scheme reportedly under consideration involves SoftBank providing USD 30 billion, with the remaining USD 70 billion potentially coming from institutions in the Middle East.
However, Masayoshi Son has not yet disclosed the final funding sources or how the funds will be used.
Masayoshi Son is highly optimistic about the development of AI, describing himself in an interview as a heavy user of ChatGPT who converses with it almost every day. In October 2023, Son expressed his belief that within the next decade, artificial intelligence will surpass human intelligence in nearly all domains, reaching the level of artificial general intelligence.
SoftBank’s UK-based chip intellectual property company, Arm, raised approximately USD 4.87 billion in its initial public offering. At the time of Arm’s listing, Son stated that he is a big believer in artificial intelligence and that Arm will be a core beneficiary of the AI revolution.
At the shareholders’ meeting on June 21, 2023, Masayoshi Son presented a chart of human evolution titled “Evolution Speed.”
The chart depicted a flat curve from the birth of humanity to the agricultural revolution, followed by a slight rise through the industrial and information revolutions. By contrast, it showed the curve representing the development of artificial intelligence surging upward within a few years, with its slope approaching a nearly vertical line.
(Photo credit: SoftBank News)
Insights
In late December 2023, reports surfaced indicating OpenAI CEO Sam Altman’s intention to raise funds to construct a semiconductor plant in order to secure the supply of AI chips.
According to a report from the Washington Post on January 24, 2024, Sam Altman has engaged with US congressional members to discuss the construction of the semiconductor plant, including considerations of timing and location, highlighting his increasingly fervent ambition to establish the facility.
TrendForce’s Insights:
The rapid emergence of AI-generated content (AIGC) undoubtedly stood out as a highlight of 2023, and it is closely tied to the quality and efficiency of the large language models (LLMs) behind it. Take OpenAI’s ChatGPT, for instance, which employs GPT-3.5, a refinement of the GPT-3 model released in 2020. With 175 billion training parameters, it is over 100 times larger than its predecessor, GPT-2, which in turn was more than 10 times larger than the original GPT from 2018.
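The quoted ratios can be checked against the commonly cited parameter counts for each generation. The counts below are assumptions for illustration; the text above states only the relative ratios:

```python
# Commonly cited GPT-family parameter counts (assumed for illustration;
# the passage above gives only the relative ratios between generations).
PARAMS = {
    "GPT (2018)": 117e6,
    "GPT-2 (2019)": 1.5e9,
    "GPT-3 (2020)": 175e9,
}

models = list(PARAMS.items())
for (prev_name, prev_n), (name, n) in zip(models, models[1:]):
    print(f"{name} is ~{n / prev_n:.0f}x larger than {prev_name}")
# GPT-2 (2019) is ~13x larger than GPT (2018)    -> "more than 10 times"
# GPT-3 (2020) is ~117x larger than GPT-2 (2019) -> "over 100 times"
```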
In pursuit of better content quality, diversified outputs, and enhanced efficiency, the continuous expansion of model training parameters becomes an inevitable trend. While efforts are made to develop lightweight versions of language models for terminal devices, the cloud-based AI computing arena anticipates a continued expansion of language model training parameters, moving towards the “trillion” scale.
Because AI chip performance is improving at a limited rate, coping with rapidly growing model training parameters and the vast amounts of data generated by flourishing cloud-based AIGC applications inevitably requires more AI chips. This continues to put pressure on the chip supply chain.
Given that the demand for AI computing is escalating faster than the growth rate of chip performance and capacity, it’s understandable why Sam Altman is concerned about chip supply.
The construction of advanced process fabs is immensely costly, with estimates suggesting that a single 3nm fab could cost tens of billions of dollars. Even if Sam Altman manages to raise sufficient funds for plant construction, his venture would still lack advanced semiconductor process and packaging technology, not to mention capacity, yield, and operational efficiency.
Therefore, it is anticipated that Sam Altman will continue to seek collaboration with foundries to achieve his factory construction goal.
Looking at foundries worldwide, TSMC is undoubtedly the preferred partner. After all, TSMC not only holds a leading position in advanced processes and packaging technologies but also boasts the most extensive experience in producing customized AI chips.
While Samsung and Intel are also suitable partners from a localization perspective, considering factors like production schedules and yield rates, choosing TSMC appears to be more cost-effective.
(Photo credit: OpenAI)
News
According to sources cited by the Financial Times, South Korean chip manufacturer SK Hynix is reportedly planning to establish a packaging facility in Indiana, USA. This move is expected to significantly advance the US government’s efforts to bring more artificial intelligence (AI) chip supply chains into the country.
SK Hynix’s new packaging facility will specialize in stacking standard dynamic random-access memory (DRAM) chips to create high-bandwidth memory (HBM) chips. These will then be paired with NVIDIA’s GPUs, which are used to train systems like OpenAI’s ChatGPT.
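A quick illustration of why stacking DRAM pays off for GPU workloads: the very wide interface a stack exposes turns modest per-pin speeds into huge aggregate bandwidth. The figures below are representative HBM3 numbers assumed for illustration only; the article does not specify a generation:

```python
# Why stacked DRAM (HBM) suits GPUs: a very wide bus across the stack
# multiplies per-pin speed into high aggregate bandwidth.
# Representative HBM3 figures, assumed here for illustration only.
INTERFACE_WIDTH_BITS = 1024  # bus width exposed by one DRAM stack
PIN_SPEED_GBPS = 6.4         # data rate per pin (Gb/s)

bandwidth_gb_per_s = INTERFACE_WIDTH_BITS * PIN_SPEED_GBPS / 8  # bits -> bytes
print(f"Per-stack bandwidth: ~{bandwidth_gb_per_s:.0f} GB/s")   # ~819 GB/s
```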
Per a source close to SK Hynix cited in the report, the increasing demand for HBM from American customers and the necessity of close collaboration with chip designers make the establishment of advanced packaging facilities in the US essential.
Regarding this, SK Hynix reportedly responded, “Our official position is that we are currently considering a possible investment in the US but haven’t made a final decision yet.”
The report quoted Kim Yang-paeng, a researcher at the Korea Institute for Industrial Economics and Trade, as saying, “If SK Hynix establishes an advanced HBM memory packaging facility in the United States, along with TSMC’s factory in Arizona, this means Nvidia can ultimately produce GPUs in the United States.”
Earlier reports indicated that the United States would announce substantial chip subsidies by the end of March, aiming to pave the way for chip manufacturers like TSMC, Samsung, and Intel by providing them with billions of dollars to accelerate the expansion of domestic chip production.
These subsidies are a core component of the 2022 US CHIPS and Science Act, which allocates a USD 39 billion budget to directly subsidize and revitalize American manufacturing.
(Photo credit: SK Hynix)
Insights
According to Bloomberg, Apple is quietly catching up with its competitors in the AI field. Beyond acquiring AI-related companies to quickly gain relevant technology, Apple is now developing its own large language model (LLM).
TrendForce’s Insights:
As the smartphone market matures, brands are not only focusing on hardware upgrades, particularly camera modules, to stimulate device replacement; many are also keen to introduce new AI functionalities in smartphones in a bid to reignite the market’s growth potential. Some Chinese brands have already achieved notable progress in the AI field, especially in large language models.
For instance, Xiaomi introduced its large language model MiLM-6B, which ranks tenth on the C-Eval list (a comprehensive evaluation benchmark for Chinese language models developed in collaboration by Tsinghua University, Shanghai Jiao Tong University, and the University of Edinburgh) and tops the list among models of its parameter class. Meanwhile, Vivo has launched the large model VivoLM, whose VivoLM-7B holds second position on the C-Eval ranking.
As for Apple, while it may appear to have played a largely observational role as other Silicon Valley companies like OpenAI released ChatGPT and Google and Microsoft introduced AI-powered search engines, the reality is that since 2018, Apple has quietly acquired over 20 AI-related companies. Apple’s approach is characterized by extreme discretion, with only a few of these transactions publicly disclosing their final acquisition prices.
On another front, Apple has been discreetly developing its own large language model, called Ajax, reportedly spending millions of dollars a day on training with the aim of making its performance more robust than OpenAI’s GPT-3.5 and Meta’s LLaMA.
The most common smartphone usage scenarios among general consumers today revolve around taking photos, communication, and information retrieval. While AI could enhance the user experience in some of these functions, none of them yet qualifies as an “essential AI feature.”
However, if a killer application involving large language models were to emerge on smartphones in the future, Apple would hold a distinct advantage in establishing such a service on a subscription basis. This advantage stems from recent shifts in Apple’s revenue composition, notably the increasing contribution of “Services” revenue.
In August 2023, Apple CEO Tim Cook highlighted in Apple’s third-quarter financial report that Apple’s subscription services, which include Apple Arcade, Apple Music, iCloud, AppleCare, and others, had achieved record-breaking revenue and amassed over 1 billion paying subscribers.
In other words, Apple is better positioned than other smartphone brands to monetize a large language model service through subscriptions, thanks to its already substantial base of paying subscribers. Other brands, lacking a similarly extensive subscription base, may find it challenging to win consumer favor for a paid LLM service.