2024-06-07

[News] Qualcomm Reportedly Targets Data Centers as Its Next Step, Expecting Products to Adopt Nuvia

Last year, Qualcomm entered the PC market, sparking an AI PC frenzy in collaboration with Microsoft Copilot+. According to Qualcomm CEO Cristiano Amon, beyond mobile devices, PCs, and automotive applications, Qualcomm is now focusing on data centers. In the long term, these products will eventually adopt Qualcomm’s in-house developed Nuvia architecture.

Amon pointed out that as PCs enter a new cycle and AI engines bring new experiences, the constraints resemble those of mobile phones, which require slim designs yet must not overheat or become bulky. Qualcomm, he said, has always focused on technological innovation rather than merely improving power consumption. While traditional PC leaders may emphasize TOPS (trillions of operations per second), energy efficiency is just as crucial.

Amon stressed the importance of maintaining battery life and integrating functionalities beyond the CPU and GPU, which he believes will be key to defining leadership in the PC market. He also joked that an x86 computer runs out of battery quickly, whereas a new computer (an AI PC) bought next year would last a long time without draining its battery.

Amon noted that Qualcomm’s Snapdragon X Elite and Snapdragon X Plus were developed for superior NPU performance and battery life. Moreover, Snapdragon X Elite is only the first generation, which focuses primarily on raw performance, while upcoming generations may place more emphasis on computational power and on integrating it into the chip design.

Currently, more than 20 AI PCs equipped with Snapdragon X Elite and Snapdragon X Plus have been launched, including models from 7 OEMs such as Acer, Asus, Dell, HP, and others.

Amon believes the market penetration rate will continue to increase next year. He sees AI PCs as a new opportunity, though he suggested it may take some time for them to be widely adopted as a new version of Windows reaches the PC market. However, with Windows 10 support ending, users transitioning to new models can move to Copilot+ PCs, which he believes will speed up adoption considerably.

Amon pointed out that NPUs have already demonstrated their advantages in the PC and automotive chip industries, and these capabilities can be extended to data centers or other technologies.

He then highlighted data centers as a significant opportunity for the transition to Arm architecture and expressed belief in growing opportunities for edge computing in the future. Amon also mentioned the adoption of the Nuvia architecture in smartphones, data centers, and the automotive industry. Additionally, he disclosed plans to launch mobile products featuring Microsoft processors at the Snapdragon Annual Summit in October.


(Photo credit: Qualcomm)

Please note that this article cites information from TechNews.

2023-04-25

AI Sparks a Revolution Up In the Cloud

OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Bard, and Elon Musk’s latest TruthGPT – what will be the next buzzword in AI? In just under six months, the AI competition has heated up, stirring ripples in the once-calm AI server market as AI-generated content (AIGC) models take center stage.

The unprecedented convenience brought by AIGC has attracted a massive number of users, with OpenAI’s mainstream model, GPT-3, receiving up to 25 million daily visits, often resulting in server overload and disconnection issues.

The evolution of these models has increased training parameters and data volume, making computational power even scarcer. As a result, OpenAI has reluctantly adopted measures such as paid access and traffic restrictions to stabilize server load.

High-end cloud computing is gaining momentum

According to TrendForce, AI servers currently account for a mere 1% penetration rate in global data centers, far from sufficient to cope with the surge in data demand from the usage side. Therefore, besides optimizing software to reduce computational load, increasing the number of high-end AI servers on the hardware side will be another crucial solution.

Take GPT-3 for instance. The model requires at least 4,750 AI servers with 8 GPUs each, and every similarly large language model like ChatGPT will need 3,125 to 5,000 units. Considering ChatGPT and Microsoft’s other applications as a whole, the need for AI servers is estimated at some 25,000 units to meet basic computing power requirements.
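The figures above can be sanity-checked with a quick back-of-envelope calculation. The sketch below uses only the numbers cited in this article; the resulting GPU totals are illustrative derivations, not figures from the original report.

```python
# Back-of-envelope check of the AI server estimates cited above.
# All server counts are from the article; GPU totals are derived.

gpus_per_server = 8

# GPT-3 alone: at least 4,750 servers of 8 GPUs each
gpt3_servers = 4750
gpt3_gpus = gpt3_servers * gpus_per_server
print(f"GPT-3: {gpt3_servers} servers -> {gpt3_gpus} GPUs")

# Each similarly large language model: 3,125 to 5,000 servers
per_model_low, per_model_high = 3125, 5000
print(f"Per LLM: {per_model_low * gpus_per_server}"
      f"-{per_model_high * gpus_per_server} GPUs")

# ChatGPT plus Microsoft's other applications combined: ~25,000 servers
total_servers = 25000
total_gpus = total_servers * gpus_per_server
print(f"Combined: {total_servers} servers -> {total_gpus} GPUs")
```

At 8 GPUs per server, the 25,000-server estimate implies on the order of 200,000 high-end GPUs, which helps explain the supply-chain bottlenecks discussed next.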

As the emerging applications of AIGC and their vast commercial potential reveal the technical roadmap ahead, they also shed light on bottlenecks in the supply chain.

The down-to-earth problem: cost

Compared to general-purpose servers that use CPUs as their main computational power, AI servers rely heavily on GPUs; NVIDIA’s DGX A100 and DGX H100, with computational performance of up to 5 PetaFLOPS, serve as the primary AI server computing platforms. Given that GPUs account for over 70% of server costs, the increased adoption of high-end GPUs has made the architecture more expensive.

Moreover, a significant amount of data transmission occurs during the operation, which drives up the demand for DDR5 and High Bandwidth Memory (HBM). The high power consumption generated during operation also promotes the upgrade of components such as PCBs and cooling systems, which further raises the overall cost.

Not to mention the technical hurdles posed by the complex design architecture – for example, a new approach for heterogeneous computing architecture is urgently required to enhance the overall computing efficiency.

The high cost and complexity of AI servers have inevitably limited their development to large manufacturers. Two leading companies, HPE and Dell, have taken different strategies to enter the market:

  • HPE has continuously strengthened its cooperation with Google and planned to convert all its products to an as-a-service model by 2022. It also acquired the startup Pachyderm in January 2023 to launch cloud-based supercomputing services, making it easier to train and develop large models.
  • In March 2023, Dell launched its latest PowerEdge series servers, which offer options equipped with NVIDIA H100 or A100 Tensor Core GPUs and NVIDIA AI Enterprise. They use 4th Gen Intel Xeon Scalable processors and introduce Dell’s Smart Flow software, catering to demands ranging from data centers and large public clouds to AI and edge computing.

With the booming market for AIGC applications, we seem to be one step closer to a future metaverse centered around fully virtualized content. However, it remains unclear whether the hardware infrastructure can keep up with the surge in demand. This persistent challenge will continue to test the capabilities of cloud server manufacturers to balance cost and performance.

(Photo credit: Google)

