DeepSeek Has Finally Pushed OpenAI to Its Limit
SUMMARY:
China's open-source models have exploded, and the boom is touching a nerve at OpenAI, and across Silicon Valley.
Phoenix News Technology
Author | Jiang Fan
Editor | Dong Yuqing
In the early morning of August 6th, Beijing time, OpenAI abruptly released GPT-OSS, its first open-source language models, dropping a bombshell on the global tech world.
Even as everyone eagerly awaits GPT-5, which has yet to appear, OpenAI may have just set off another wave of open-source enthusiasm.
Is OpenAI's High-performance Model Now Free?
The release covers two open-source models, gpt-oss-120b and gpt-oss-20b. According to OpenAI co-founder and CEO Sam Altman, they achieve o4-mini-level performance on laptops and phones, which he calls a major technical triumph.
Specifically, gpt-oss-120b uses a mixture-of-experts (MoE) architecture with 117 billion total parameters, about 5.1 billion of which are active per token. It needs only a single 80GB GPU to run, with performance close to the closed-source o4-mini.
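To make "activation parameters" concrete: in a mixture-of-experts layer, a small router network sends each token to only a few of many expert sub-networks, so most of the model's weights sit idle for any given token. The toy PyTorch sketch below illustrates that routing principle only; it is our own illustration with made-up sizes, not OpenAI's actual gpt-oss architecture.

```python
import torch
import torch.nn as nn

# Toy mixture-of-experts layer: many experts exist, but each token is
# routed to only `top_k` of them, so just a fraction of the total
# parameters is "active" per token -- the same principle that lets a
# model hold ~117B parameters while activating only ~5.1B.
# (Illustrative sketch with made-up sizes, not the gpt-oss design.)
class ToyMoE(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.router = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x):                           # x: (tokens, dim)
        scores = self.router(x)                     # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, -1)  # pick top_k experts per token
        weights = weights.softmax(dim=-1)           # normalize their mixing weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = ToyMoE()
print(moe(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```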
Similarly, gpt-oss-20b is also built on the MoE architecture, with 21 billion total parameters and about 3.6 billion active. It runs smoothly on devices with 16GB of memory, with performance close to o3-mini.
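For readers who want to try the smaller model locally, a minimal inference sketch with the Hugging Face transformers library might look like the following. The "openai/gpt-oss-20b" repository ID is our assumption rather than something stated in this article, so check the official release page before running it.

```python
# Minimal local-inference sketch. Assumes the weights are published on
# Hugging Face under "openai/gpt-oss-20b" (unverified assumption) and
# that `transformers` and `accelerate` are installed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed repository ID -- verify before use
    device_map="auto",           # spread weights across available GPU/CPU memory
    torch_dtype="auto",          # load in the checkpoint's native precision
)

result = generator(
    "Explain the mixture-of-experts architecture in one paragraph.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```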
In fact, looking back over the past few years, OpenAI has consistently walked the "closed-source, paid" route.
Whether it was GPT-4 or GPT-4o, the core models always stayed closed. The industry once took it as a given that "the strongest models will never be open-sourced."
But the arrival of GPT-OSS has shattered that consensus.
According to OpenAI, GPT-OSS is a "small but high-performance" family of language models covering multiple languages and domains. More importantly, OpenAI says the models can be used commercially for free, a godsend for Chinese AI startups.
Getting Ready to Declare War on Domestic Models?
As the maker of ChatGPT, OpenAI is signaling a huge shift with this move: from closed-source, paid products to open-source collaboration. Is it preparing to declare war on China's domestic models?
In fact, judging from Sam Altman's recent posts on X, the release of GPT-OSS is no momentary impulse but a well-considered "strategic adjustment."
The most fundamental reason is this:
China's open-source models are developing too fast.
First, DeepSeek set off a tidal wave with R1. In fact, as early as its V2 models, DeepSeek had found a formula for high performance at low cost: groundbreaking innovations in model architecture drove its costs well below what came before.
After that, many Chinese models followed suit and took the open-source path.
Take the fastest-iterating example, Alibaba's Tongyi Qianwen (Qwen), which has contributed substantially to the global open-source community. Phoenix News Technology found that over the past three months, the Qwen open-source family has gone through intense iteration, shipping six major updates and adding more than 55 new model versions spanning foundation models, coding models, embedding models, and edge-optimized models.
At this year's WAIC, there was even talk of open-source models entering their "China moment." As of August 2025, China's open-source large-model ecosystem is thriving, with multiple influential teams covering foundation, multimodal, coding, and lightweight directions. For example, Moonshot AI's Kimi K2 has just sparked heated discussion overseas; Zhipu AI's GLM-4.5 natively supports agent development; and Tencent's HunyuanWorld-1 is the world's first open-source 3D world generation model.
It is hard for such explosive growth not to touch a nerve at OpenAI, and across Silicon Valley.