At the end of January 2025, Chinese AI startup DeepSeek launched its latest large language model (LLM), DeepSeek-R1, which is both fully open source and notably inexpensive to use. According to TrendForce, DeepSeek’s emergence signals a significant shift in the AI industry from hardware scaling to software optimization. Below, TrendForce analyzes the rise of DeepSeek and its implications for the industry.
1. DeepSeek Delivers Cost-Effective, High-Performance AI, but Practical Applications Remain the Next Challenge
DeepSeek is an open-source AI model company backed by the Chinese quantitative hedge fund High-Flyer. Following the launch of DeepSeek-V3 at the end of 2024, the company introduced the even more cost-effective DeepSeek-R1 in late January 2025. According to Chatbot Arena, a ranking and evaluation platform for AI systems, R1 is tied for third place in the overall ranking alongside OpenAI’s o1; it also ranks second on the Artificial Analysis quality index and holds the top position in highly technical fields such as coding and mathematics.
TrendForce highlights two key factors driving R1’s growing market attention. First, its open-source accessibility allows developers to freely download and easily modify the model. Second, its lower training costs enable API pricing at just 3.7% of OpenAI’s o1. These advantages not only significantly expand R1’s user base but also disrupt the current industry trend of major cloud service providers (CSPs) making heavy capital investments in AI.
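The pricing claim above can be sanity-checked with simple arithmetic. The per-token prices below are assumptions for illustration (approximate launch list prices in USD per million output tokens, not figures from this article):

```python
# Rough check of the "API pricing at ~3.7% of o1" claim.
# Assumed launch list prices (USD per million output tokens);
# these numbers are illustrative, not quoted from the article.
O1_PRICE_PER_M_OUTPUT = 60.00   # assumed OpenAI o1 output price
R1_PRICE_PER_M_OUTPUT = 2.19    # assumed DeepSeek-R1 output price

ratio = R1_PRICE_PER_M_OUTPUT / O1_PRICE_PER_M_OUTPUT
print(f"R1 output tokens cost {ratio * 100:.2f}% of o1's price")
```

Under these assumed prices the ratio works out to roughly 3.7%, consistent with the figure cited above.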
2. DeepSeek’s Applications Extend Beyond Chat-Based AI, as its R1 Model Also Enables “Deep Thinking” Functionality
From a technical perspective, key technologies behind DeepSeek-V3 and R1 include the Mixture of Experts (MoE) architecture and Knowledge Distillation, which enhance processing speed and deployment flexibility.
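To make the MoE idea concrete, here is a minimal, self-contained sketch of top-k expert routing. It is illustrative only and is not DeepSeek’s actual architecture: a gating network scores every expert for each token, but only the top-k experts actually run, so per-token compute stays small even as total parameter count grows.

```python
import numpy as np

# Minimal Mixture-of-Experts routing sketch (illustrative, not
# DeepSeek's implementation). Each "expert" is a tiny linear layer;
# a gate picks the top-k experts per token and mixes their outputs.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

experts = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
           for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

def moe_forward(x):
    """Route each token through its top-k experts, gate-weighted."""
    logits = x @ gate_w                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # top-k expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        weights = np.exp(sel - sel.max())
        weights /= weights.sum()                   # softmax over top-k only
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])      # only k experts run
    return out

tokens = rng.standard_normal((4, d_model))
y = moe_forward(tokens)
print(y.shape)  # (4, 16)
```

Note that only `top_k` of the `n_experts` weight matrices are touched per token, which is the source of the efficiency gains discussed above.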
In fact, Google’s Gemini 1.5, launched in early 2024, also adopted similar technologies. As AI technology advances, DeepSeek has further reduced training costs, reinforcing the shift from large-scale general models to smaller, specialized AI—shifting from hardware scaling to software-driven optimization.
From an application perspective, the AI industry is evolving from instruction-based Generative AI to Agentic AI. This trend is evident in recent developments such as OpenAI’s Operator, introduced in January 2025, which enables ChatGPT to automate routine business tasks, and Perplexity.ai’s Perplexity Assistant. For DeepSeek-R1, expanding beyond simple Q&A interactions into more advanced functionalities will be crucial in sustaining long-term user adoption.
3. DeepSeek Emerges as a “Game Changer” in AI Development, Challenging U.S. Tech Giants’ Dominance
DeepSeek-V3’s final training cost was approximately USD 5–5.6 million, significantly lower than that of competitors like OpenAI and Meta. However, its performance in various benchmark tests matches or even surpasses that of leading models from these U.S. tech giants.
For instance, DeepSeek-V3 was trained on a cluster of only about 2,000 NVIDIA H800 GPUs, demonstrating a much more cost-effective approach. This breakthrough is expected to accelerate the development of cheaper and more efficient AI models, shaking up the AI industry.
The processing capability of DeepSeek-V3 is comparable to OpenAI’s GPT-4o and Amazon-backed Anthropic’s Claude 3.5 Sonnet, and even surpasses Meta’s Llama 3.1. Moreover, the latest DeepSeek-R1 model has outperformed OpenAI’s newest o1 model on several benchmarks, further solidifying its competitive edge.
Although DeepSeek’s next steps are still unfolding, its rise signals a new era of competition, challenging the long-standing dominance of U.S.-based AI companies.
4. DeepSeek-R1’s Open-Source Model is Transforming the AI Industry
One particularly notable aspect of DeepSeek-R1 is its release under the permissive MIT License, which allows free commercial use. This fundamentally challenges the traditional business models of major tech firms, which typically keep their AI models closed source and charge high API usage fees.
Furthermore, DeepSeek provides multiple distilled versions of its model, which require significantly less computing power (GPU capacity) than conventional AI models. According to DeepSeek’s official development data, R1’s training resources amount to just 3–5% of those used for OpenAI’s o1, lowering the barriers historically imposed by limited computing resources. This breakthrough opens new opportunities for researchers and developers to create and deploy high-performance AI models more efficiently.
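Distillation, the generic technique behind such smaller model variants, can be sketched briefly. This is a standard recipe, not DeepSeek’s specific procedure (which this article does not describe): a small “student” model is trained to match the temperature-softened output distribution of a large “teacher” via a KL-divergence loss.

```python
import numpy as np

# Generic knowledge-distillation loss sketch (illustrative; not
# DeepSeek's actual training recipe). The student is penalized for
# diverging from the teacher's softened output distribution.

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Mean KL(teacher || student) at temperature T, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)

teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.0, 0.1]])
aligned = teacher.copy()            # student that matches the teacher
shifted = teacher[:, ::-1].copy()   # student that disagrees

print(distill_loss(aligned, teacher))  # 0: identical distributions
print(distill_loss(shifted, teacher))  # positive: mismatched outputs
```

Because only the teacher’s outputs (not its weights) are needed, the student can be far smaller, which is what makes the distilled variants cheap to run.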
(Photo credit: DeepSeek’s X)