DeepSeek-R1: A Milestone for AGI
As the world accelerates toward Artificial General Intelligence (AGI), the interplay between algorithmic innovation and computational resources has become a cornerstone of the journey. By 2024, a clear market consensus had formed: for the inference market to thrive, the cost of inference must fall significantly. Groundbreaking contributions from teams such as Doubao and DeepSeek have since driven inference costs down sharply, propelling AI toward industrialization. DeepSeek's achievement marks a substantial leap for open-source models, which now compare favorably against proprietary ones, and sets the stage for an exciting future.
DeepSeek V3, built on the Transformer architecture, is a powerful Mixture of Experts (MoE) language model with 671 billion total parameters, of which 37 billion are activated for each token. Its innovations span several levels: structurally, a load-balancing strategy and refined training objectives optimize performance; during pre-training, efficiency saw remarkable gains; and in post-training, knowledge distillation from DeepSeek-R1 caps these innovations. Impressively, the DeepSeek team completed V3's pre-training in only 2.664 million H800 GPU hours on a dataset of 14.8 trillion tokens. DeepSeek-R1, the flagship derivative of the V3 architecture, rivals the capabilities of OpenAI's offerings.
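To make the "671B total, 37B activated" idea concrete, here is a toy sketch of top-k MoE routing in NumPy. All sizes and names here are illustrative only; DeepSeek V3's actual design is far larger and adds shared experts and an auxiliary-loss-free load-balancing bias, none of which is modeled here. The point is simply that a router selects a few experts per token, so only a fraction of the total parameters runs for any given token.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 16    # toy hidden size (illustrative, far smaller than V3)
N_EXPERTS = 8   # toy expert count (illustrative)
TOP_K = 2       # experts activated per token (illustrative)

# Each "expert" is a tiny two-layer feed-forward network: (W_in, W_out).
experts = [
    (rng.standard_normal((D_MODEL, 4 * D_MODEL)) * 0.02,
     rng.standard_normal((4 * D_MODEL, D_MODEL)) * 0.02)
    for _ in range(N_EXPERTS)
]
router = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                                # (tokens, experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)              # softmax gate scores
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(probs[t])[-TOP_K:]            # chosen expert indices
        gates = probs[t][top] / probs[t][top].sum()    # renormalized weights
        for g, e in zip(gates, top):
            w_in, w_out = experts[e]
            h = np.maximum(x[t] @ w_in, 0.0)           # ReLU feed-forward
            out[t] += g * (h @ w_out)                  # weighted expert mix
    return out

tokens = rng.standard_normal((4, D_MODEL))
y = moe_forward(tokens)
print(y.shape)  # (4, 16): same shape as input, but only TOP_K of
                # N_EXPERTS experts did any work per token
```

Because only `TOP_K` experts run per token, compute scales with the activated parameters rather than the total parameter count, which is how a 671B-parameter model can be served at the cost of a much smaller dense one.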
What sets DeepSeek-R1-Zero apart is its strong reasoning capability, achieved through large-scale reinforcement learning (RL) without any supervised fine-tuning (SFT) as a preliminary step. The model spontaneously developed powerful and interesting reasoning behaviors, marking a pivotal milestone for the research community. It is a groundbreaking piece of verified open research, demonstrating that the reasoning capabilities of large language models (LLMs) can be driven by RL alone. The ripple effect of this breakthrough is substantial, potentially paving the way for further advances in the field.
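A key enabler of this pure-RL recipe is that the rewards are simple, rule-based checks rather than a learned reward model: roughly, an accuracy reward for a verifiably correct final answer and a format reward for wrapping reasoning in designated tags. The sketch below is a simplified illustration of that idea; the exact patterns, tag names, and reward weighting are assumptions for demonstration, not DeepSeek's actual implementation.

```python
import re

def format_reward(completion: str) -> float:
    """1.0 if reasoning is wrapped in <think>...</think> followed by an
    answer, else 0.0 (simplified stand-in for R1-Zero's format check)."""
    pattern = r"^<think>.*?</think>\s*\S+"
    return 1.0 if re.match(pattern, completion, re.DOTALL) else 0.0

def accuracy_reward(completion: str, gold_answer: str) -> float:
    """1.0 if the final \\boxed{...} answer matches the reference,
    else 0.0. Real verifiers are more robust; this is illustrative."""
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    return 1.0 if match and match.group(1).strip() == gold_answer else 0.0

def total_reward(completion: str, gold_answer: str) -> float:
    # Equal weighting of the two signals is an assumption here.
    return format_reward(completion) + accuracy_reward(completion, gold_answer)

sample = "<think>2 + 2 equals 4.</think> \\boxed{4}"
print(total_reward(sample, "4"))  # 2.0
```

Because both signals are mechanically checkable, the RL loop needs no human preference labels or SFT data, which is exactly what makes the R1-Zero result notable.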
Amid this surge of innovation, OpenAI has responded vigorously with O3-mini, an iteration with stronger capabilities in physical-world understanding and programming tasks. In tests involving the rotational dynamics of small spheres, it handled challenging physical simulations remarkably well, outperforming DeepSeek R1. Notably, O3-mini can generate complex programs simulating ejection dynamics in four-dimensional space, suggesting considerable room for further development.

Alongside O3-mini, OpenAI unveiled Deep Research, an agent that streamlines research work. From a single prompt, ChatGPT can search, analyze, and synthesize hundreds of online sources into comprehensive reports comparable to those of an expert research analyst. It is powered by a forthcoming version of OpenAI's o3 model tailored for web browsing and data analysis, using reasoning to interpret large corpora of text, images, and PDFs and adjusting its course based on the information it encounters.
The future landscape of AI remains full of unexplored prospects, and the commercialization of AGI appears imminent. The Google DeepMind team has classified AI's evolution into six developmental stages. In specialized fields, certain AI systems have surpassed human capabilities, as seen with AlphaFold, AlphaZero, and Stockfish. From the perspective of general intelligence, however, we are still in the early phases: models like ChatGPT reach only the "Level 1: Emerging" classification, meaning significant progress is still needed to attain true intelligence parity.
Investment in AI is increasingly seen as a critical growth area, with the industry moving far faster than traditional manufacturing sectors. Since the beginning of 2023, scaling laws have fueled a global race for computational resources, and geopolitical factors add urgency, with Chinese teams notably stepping up in response to these competitive pressures. The fundamental truth remains that both algorithmic innovation and computational resources are indispensable on the path to AGI.
There is also a growing view that even if North American technology giants initially concentrate on algorithm development, this should not signal a retreat from investment in computational resources. Quite the contrary: the industrialization of AI is likely to accelerate the influx of resource investment, preventing the cyclical downturns previously experienced when AI initiatives faltered.
The inference market looks promising, with anticipated rapid scaling in various capabilities, including text generation, video creation from text, and cross-modal functionalities—indicating that AI is on the verge of comprehending the physical universe. Applications such as autonomous driving and humanoid robotics herald the onset of a transformative industrial revolution on the horizon.
On the training side, continued exploration of new models demands significant computational investment, while explosive growth in the inference market is expected to open new avenues for that exploration. Advances such as world models may further accelerate breakthroughs in the AI paradigm. A competitive landscape, however, will likely winnow out companies unable to outperform open-source benchmarks, clearing the field for more capable entities.
Mark Zuckerberg articulated a compelling vision at a Meta investor meeting: "Over time, just as every business has a website, a social presence, and an email address, each will inevitably possess an AI agent for customer interaction. Our objective is to realize a scalable AI agent to enhance sales and economize resources." That vision increasingly seems within reach. Historically, as computing evolved from laboratory experiment to ubiquitous household fixture, it did not spell doom for existing enterprises but instead fostered extraordinary new companies. AI appears poised for a similar transformation, so keen attention to investments along the industrial chain is judicious.
Nevertheless, there are risks to consider: macroeconomic downturns, underwhelming downstream demand, and intensifying trade disputes between China and the United States all pose threats that could impact the burgeoning AI landscape.