At the 2025 ZGC Forum in Beijing, held from March 27 to 31, artificial intelligence (AI) dominated the agenda. At the event's flagship "Future AI Pioneer Forum," 01.AI CEO and Sinovation Ventures founder Kai-Fu Lee shared his perspective on the trajectory of generative AI in China.
Just nine months ago, Lee remarked that China was still awaiting its "ChatGPT moment"—a defining breakthrough in large language models (LLMs) capable of powering both enterprise (B2B) and consumer (B2C) use cases. Today, he believes that moment has arrived, dubbing it China's "DeepSeek moment."
DeepSeek narrows AI innovation gap with the US
Lee highlighted DeepSeek's achievement in reverse-engineering and open-sourcing the reasoning training pipeline behind LLMs, calling it a key step in closing the AI capability gap between China and the US. He praised DeepSeek-R1 for mastering chain-of-thought (CoT) reasoning and publicly releasing the methodology—a move he called "stunning."
Lee noted that DeepSeek stands out for its engineering efficiency and uniquely Chinese development model, which diverges sharply from OpenAI's approach. In benchmark comparisons, he claimed that DeepSeek-R1 runs faster and is five to ten times more cost-efficient than comparable US models.
According to Lee, the pace of AI development is accelerating. He pointed to the roughly two-year cycles separating GPT-2, GPT-3, and now GPT-4.5 as evidence. Each leap, he said, has been driven by human-led training, including algorithmic advances, cost optimization, and access to higher-quality data.
AI-to-AI learning signals a new training paradigm
"We've entered an era where AI is teaching AI," Lee said. Advanced models now exhibit slow thinking, self-reflection, and the ability to iteratively improve. He described a shift toward a teacher-student framework, where larger models train smaller ones using techniques like model distillation, labeled datasets, and synthetic data generation to accelerate deployment.
Lee sees DeepSeek's biggest achievement in proving that open-source LLMs can rival—and even commercialize at the level of—state-of-the-art closed models. "The closed-source route is unsustainable," he said. "Open source is the only viable path forward."
Had DeepSeek remained closed, Lee speculated, it would never have reached its current level of global influence. In US-based open-source communities and on social media platforms, DeepSeek is receiving rare levels of enthusiasm—something Chinese software has historically struggled to achieve. Compared with closed platforms like OpenAI's, DeepSeek's openness is winning hearts.
01.AI pivots to enterprise solutions powered by DeepSeek
Riding this momentum, Lee believes China is undergoing its own AI awakening. The "DeepSeek moment," he said, has educated the market at scale, removing key barriers to nationwide AI-first adoption.
Lee disclosed that 01.AI has strategically pivoted to fully integrate DeepSeek, with plans to develop it into an enterprise-grade solution. "We see DeepSeek as the core kernel of an AI 2.0-era Windows system," he said, positioning 01.AI as the architect of that future platform.
Lee closed with a blunt message: the industry must prioritize performance-to-cost optimization. For LLM developers, the focus has shifted from chasing milestones to creating real business value. His slogan for 2025? "Make AI work."
Article edited by Charlene Chen