News
According to a report from Economic Daily News, Luxshare, a key player in Apple's Chinese supply chain, is reportedly entering NVIDIA's supply chain, having announced the development of various components tailored for NVIDIA's GB200 AI servers.
These components encompass connectors, power-related items, and cooling products. Sources cited by the same report note that Luxshare's focus areas align closely with Taiwanese expertise, setting the stage for another direct showdown with Taiwanese manufacturers.
Luxshare, previously not prominent in the server domain, has now reportedly made its move into NVIDIA's top-tier AI products, attracting market attention. The move is especially notable given Luxshare's earlier swift entry into the iPhone supply chain, where it aggressively competed for orders with Taiwanese Apple suppliers.
As per the same report, Luxshare has revealed in its investor conference records that it has developed solutions corresponding to the NVIDIA GB200 AI server architecture, including products for electrical connection, optical connection, power management, and cooling. The company reportedly expects to offer solutions priced at approximately CNY 2.09 million and anticipates a total market size in the hundreds of billions of CNY.
If Luxshare adopts a similar strategy of leveraging its latecomer advantage in entering the NVIDIA AI supply chain, it will undoubtedly encounter intense competition.
Industry sources cited by the report also point out that the components Luxshare claims to supply for NVIDIA's GB200 fall in areas where Taiwanese suppliers excel.
For instance, while connectors are Luxshare's core business, Taiwanese firms like JPC Connectivity and Lintes Tech also serve as suppliers of connectors for NVIDIA's GB200 AI servers. They are poised to compete directly with Luxshare in the future.
In terms of power supply, Delta Electronics leverages its expertise in integrating power, cooling, and passive components to provide a comprehensive range of AI power integration solutions, from the grid to the chip. They cater to orders for power supplies for NVIDIA’s Blackwell architecture series B100, B200, and GB200 servers, and will also compete with Luxshare in the future.
When it comes to thermal management, Asia Vital Components and Auras Technology are currently the anticipated players in the market, and they are also poised to compete with Luxshare.
(Photo credit: Luxshare)
News
Driven by AI demand for optical communication and ASICs, Marvell, a major network IC design company, is accelerating its AI-related business. According to a report from Commercial Times, the revenue from this segment is expected to grow from USD 200 million in fiscal year 2023 to USD 550 million in fiscal year 2024.
Marvell previously announced plans to utilize TSMC's process technology to produce 2-nanometer chips optimized for accelerated infrastructure. Reports suggest that TSMC will be a primary beneficiary of Marvell's chip fabrication orders.
“The 2nm platform will enable Marvell to deliver highly differentiated analog, mixed-signal, and foundational IP to build accelerated infrastructure capable of delivering on the promise of AI. Our partnership with TSMC on our 5nm, 3nm and now 2nm platforms has been instrumental in helping Marvell expand the boundaries of what can be achieved in silicon,” said Sandeep Bharathi, chief development officer at Marvell, in Marvell’s previous press release.
In addition, Marvell holds a high market share in the global optical communication digital signal processor (DSP) field. Marvell pointed out that AI has accelerated the rate of transmission speed upgrades, reducing the doubling cycle from 4 years to 2 years, thereby driving rapid growth in the company’s performance.
During Marvell AI Day, company management expressed optimism about its AI business outlook and shared the positive news of receiving AI chip orders from large technology companies. At the time, industry sources speculated that this customer could be Microsoft.
Marvell CEO Matt Murphy revealed that the company has acquired its third AI hyperscale customer and is developing an AI accelerator slated for production in 2026. These orders encompass customized AI training accelerators and AI inference accelerators for Customer A, a customized Arm architecture CPU for Customer B, and a new customized AI accelerator for Customer C.
Marvell indicates that the AI training accelerators for Customer A and the Arm architecture CPU for Customer B are currently in the ramp-up phase for production. The AI inference accelerator for Customer A and the AI accelerator for Customer C are scheduled for production in 2025 and 2026, respectively.
The report cites sources indicating that Marvell’s customer B is Google, and the Arm-based CPU in question is the recently unveiled Google Axion. However, Marvell has not responded to this information.
Marvell highlighted advancements in chip technology, including advanced packaging techniques that integrate multiple chips.
(Photo credit: TSMC)
News
The sweeping AI wave not only keeps AI chips in the market spotlight but also ushers in a new round of opportunities for the memory market. Recently, Citibank announced that SSDs will replace HDDs in the AI field, citing SSDs' faster speed, which makes them more suitable for AI training. It is reported that data centers of top US tech companies are shifting from HDDs to enterprise SSDs.
From consumer electronics to enterprise markets, and now in the era of AI, the battle between SSDs and HDDs is underway once again.
Industry sources point out that SSDs surpass HDDs by nearly 10 times in terms of access speed, while HDDs boast the advantage of lower cost.
According to a previous report from Nikkei Asia, in recent years, as NAND Flash prices declined in a downward cycle, the cost gap between SSDs and HDDs has begun to narrow, enabling SSDs to gradually replace HDDs in some fields. For instance, in consumer PC storage devices below 2TB, HDDs have been phased out and replaced by SSDs.
This seems to indicate that SSD has significantly outpaced HDD, but it is still difficult to say that SSD will completely replace HDD. After all, compared to consumer products, data centers have higher performance requirements for SSD. Furthermore, from a cost perspective, enterprises face significant pressure if they want to fully substitute SSD for HDD.
The current AI boom has provided opportunities for the development of both HDD and SSD, with a surge in demand for high-capacity products leading to price increases.
Industry sources reveal that HDD manufacturers reduced supply due to poor market conditions last year. With the arrival of the AI wave, demand for HDDs outstripped supply in 2H23, driving prices higher. From 3Q23 to 1Q24, HDD prices have increased by 10-20% overall. The latest reports show that Western Digital has recently notified customers of continuous price increases for HDD products. Industry sources expect HDD market prices to continue to rise in 2Q24, with increases ranging from 5% to 10%.
Likewise, the SSD market is also facing supply shortages, especially in the enterprise SSD segment. TrendForce predicts a strong increase of about 13-18% in NAND Flash contract prices in 2Q24, with enterprise SSD contract prices expected to increase by 20-25% QoQ, the highest among all product lines.
At present, SSDs and HDDs are expected to coexist and progress together. Looking ahead, however, some memory manufacturers hope that SSDs can continue to advance and eventually replace HDDs.
In 2023, Shawn Rosemarin, Vice President of Research and Development at Pure Storage, stated that HDDs would be completely phased out within 5 years. In his view, HDDs consume too much power: data centers account for 3% of global electricity use, and one-third of that consumption comes from storage systems, the majority of which are mechanical hard drives. The cost difference in operating such large-scale deployments is striking. If storage were shifted to SSDs, power consumption would be reduced by 80-90%.
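The figures in this claim can be sanity-checked with simple arithmetic. The Python sketch below chains the cited percentages together; the global electricity baseline of roughly 25,000 TWh per year is an assumed illustrative value, not a figure from the article.

```python
# Back-of-envelope check of the storage power figures cited above.
# Assumption (not from the article): ~25,000 TWh of global annual
# electricity consumption, used only to turn percentages into magnitudes.
global_twh = 25_000                       # assumed global electricity, TWh/year
datacenter_twh = global_twh * 0.03        # data centers: ~3% of global use
storage_twh = datacenter_twh / 3          # storage: ~1/3 of data-center power
ssd_saving_low = storage_twh * 0.80       # 80% reduction if shifted to SSDs
ssd_saving_high = storage_twh * 0.90      # 90% reduction

print(f"Data centers: {datacenter_twh:.0f} TWh/yr")
print(f"Storage systems: {storage_twh:.0f} TWh/yr")
print(f"Potential SSD saving: {ssd_saving_low:.0f}-{ssd_saving_high:.0f} TWh/yr")
```

Under that assumed baseline, storage alone would draw on the order of 250 TWh per year, which conveys why the power argument carries weight in the SSD-versus-HDD debate.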
However, HDD manufacturers have countered this statement. Rainer Kaese, Senior Manager of HDD Business Development Department at Toshiba, believes that HDD will continue to exist for some time. In the long run, they will continue to be cheaper than SSD, and data center engineers will develop more efficient HDD to meet stricter power consumption requirements.
The debate between these two sides reveals the respective strengths and weaknesses of SSD and HDD. As manufacturers continue to enhance performance, reduce costs, and lower power consumption, the competition between SSD and HDD is expected to continue in the future.
(Photo credit: Western Digital)
News
With the rapid advancement of AI-powered PC chips, industry giants like Intel, AMD, and Qualcomm, alongside various brands, are optimistic about the inaugural year of AI PCs entering the market.
According to a report from Commercial Times, chip manufacturers are showcasing their AI PC chip solutions, with newcomer Qualcomm partnering with Google to launch Snapdragon X, expected mid-year, while Intel leverages both hardware and software resources.
Per the same report citing sources, laptop brands are beginning to plan AI PC-related products for the second half of the year. Recently, companies like Dell, Lenovo, and HP have held internal meetings with the Taiwan supply chain. In addition to contract manufacturers, IC design is also a key focus, with companies like MediaTek and Realtek being actively engaged.
Reportedly, each company currently has its own perspective on the AI PC, with many opting to integrate AI accelerator chips. However, Microsoft and Intel have jointly defined an AI PC as requiring an NPU, CPU, and GPU, along with support for Microsoft's Copilot and a physical Copilot key directly on the keyboard, effectively making them the standard setters.
To adapt to significant changes in software and hardware, Intel is expanding its ecosystem. In addition to AI application software, they are incorporating Independent Hardware Vendors (IHVs) into their AI PC acceleration program.
This collaboration assists IHV partners in preparing, optimizing, and leveraging hardware opportunities in AI PC applications. Support is provided from the early stages of hardware solutions and platform development, offering numerous opportunities for IC design companies in Taiwan to enter Intel’s supply chain during the nascent stage of AI PC.
Qualcomm is rumored to maintain its partnership with Google as it ventures into the AI PC market this year with Snapdragon X Elite. Qualcomm and Google have previously collaborated closely in the realm of Android smartphones, with many devices equipped with Snapdragon chipsets already using Google software.
Intel estimates that by the end of this year, the market will introduce over 300 AI acceleration applications, further advancing its AI software framework and enhancing the developer ecosystem. Intel further predicts that by the end of 2025, there will be over 100 million PCs shipped with AI accelerators, indicating immense opportunities in the AI PC market. However, competition is fierce, and success in this market requires innovative products that are differentiated and meet user needs. With both Intel and Qualcomm unveiling unique strategies, the AI PC market is poised for significant developments.
For AI PCs, TrendForce believes that due to the high costs of upgrading both software and hardware, early development will focus on high-end business users and content creators. This group has a strong demand for AI processing capabilities to improve productivity and can benefit immediately from related applications, making them the first-generation primary users.
The emergence of AI PCs is not expected to necessarily stimulate additional PC purchase demand. Instead, most upgrades to AI PC devices will occur naturally as part of the business equipment replacement cycle projected for 2024.
Nevertheless, looking to the long term, the potential development of more diverse AI tools—along with a price reduction—may still lead to a higher adoption rate of consumer AI PCs.
(Photo credit: Intel)
News
Could AI Be Heading Towards an “Energy Crisis”? Speculation suggests that a Microsoft engineer involved in the GPT-6 training cluster project has warned that deploying over 100,000 H100 GPUs in a single state might trigger a collapse of the power grid. Despite signs of OpenAI’s progress in training GPT-6, the availability of electricity could emerge as a critical bottleneck.
Kyle Corbitt, co-founder and CEO of AI startup OpenPipe, revealed in a post on social platform X that he recently spoke with a Microsoft engineer responsible for the GPT-6 training cluster project. The engineer complained that deploying InfiniBand-level links between GPUs across regions has been a painful task.
Continuing the conversation, Corbitt asked, “why not just colocate the cluster in one region?” The Microsoft engineer replied, “Oh yeah, we tried that first. We can’t put more than 100K H100s in a single state without bringing down the power grid.”
At the just-concluded CERAWeek 2024, attended by top executives from the global energy industry, discussions revolved around the advancement of AI technology in the sector and the significant demand for energy driven by AI.
As per a report from Bloomberg, during his speech, Toby Rice, chief of the largest US natural gas driller, EQT Corp., cited a forecast predicting AI could gobble up more power than households by 2030.
Additionally, Sam Altman from OpenAI has expressed concerns about the energy, particularly electricity, demands of AI. Per a report from Reuters, at the Davos Forum earlier this year, he stated that AI’s development requires breakthroughs in energy, as AI is expected to bring about significantly higher electricity demands than anticipated.
According to a March 9th report by The New Yorker citing data from Alex de Vries, a data expert at the Dutch National Bank, ChatGPT consumes over 500,000 kilowatt-hours of electricity daily to process around 200 million user requests, equivalent to over 17,000 times the daily electricity consumption of an average American household. As for search giant Google, if it were to use AIGC for every user search, its annual electricity consumption would increase to around 29 billion kilowatt-hours, surpassing the annual electricity consumption of countries like Kenya and Guatemala.
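The reported figures are internally consistent, which a quick calculation using only the numbers in the report can confirm: dividing daily consumption by daily requests gives the energy per request, and dividing by the 17,000x multiple gives the implied household baseline.

```python
# Consistency check of the ChatGPT electricity figures cited above.
daily_kwh = 500_000           # reported daily consumption, kWh
daily_requests = 200_000_000  # reported user requests per day

# Energy per request, converted from kWh to watt-hours.
wh_per_request = daily_kwh * 1000 / daily_requests
print(f"~{wh_per_request:.1f} Wh per request")

# Implied average US household daily use from the 17,000x comparison.
household_kwh = daily_kwh / 17_000
print(f"~{household_kwh:.1f} kWh per household per day")
```

This works out to roughly 2.5 Wh per request and an implied household figure of about 29 kWh per day, in line with typical US residential consumption.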
Looking back at 2022, when AI hadn’t yet sparked such widespread enthusiasm, data centers in China and the United States respectively accounted for 3% and 4% of their respective total societal electricity consumption.
As global computing power gradually increases, a March 24th research report from Huatai Securities predicts that by 2030, the total electricity consumption of data centers in China and the United States will reach approximately 0.95/0.65 trillion kilowatt-hours and 1.7/1.2 trillion kilowatt-hours respectively, representing over 3.5 times and 6 times that of 2022. In an optimistic scenario, by 2030, the AI electricity consumption in China/US will account for 20%/31% of the total societal electricity consumption in 2022.
(Photo credit: Taiwan Business Topics)