How DeepSeek is challenging OpenAI’s AI dominance

Unlike OpenAI, which is secretive about its AI training methods, DeepSeek openly shared how it built its reasoning models
An undated image of the DeepSeek logo. — DeepSeek/Canva

DeepSeek, a Chinese AI company, has become the most downloaded app on the Apple App Store, overtaking OpenAI’s ChatGPT. The company’s latest AI models, R1 and R1-Zero, have impressed experts by matching, or even surpassing, OpenAI’s best publicly available models.

What is even more surprising is that DeepSeek achieved this at a far lower cost. The company’s biggest advantage is its efficiency.

While OpenAI reportedly spent over $100 million to train its GPT-4 model, DeepSeek trained its V3 model, the base for both R1 and R1-Zero, for less than $6 million.

The company relied on older NVIDIA chips, which are still legally available in China, instead of the latest high-cost hardware. Experts believe that US restrictions on selling advanced chips to China have forced DeepSeek to focus on optimising its AI instead of relying on expensive computing power.

Unlike OpenAI, which has been secretive about its AI training methods, DeepSeek openly shared how it built its reasoning models. A key advancement was replacing human feedback with a self-learning algorithm that lets the AI recognise and correct its own mistakes. 

This approach, known as pure reinforcement learning, allowed DeepSeek’s AI to perform exceptionally well in maths and coding tasks, where answers can be checked automatically rather than judged by people. The company then refined the model with a small amount of labelled data, making it even more effective.
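To make the idea concrete, here is a minimal sketch of what a rule-based reward for this kind of training might look like. It assumes the model is prompted to wrap its final answer in tags so correctness can be checked mechanically; the function names, tag format, and reward values are illustrative assumptions, not DeepSeek’s actual code.

```python
# Hypothetical sketch of a rule-based reward for pure reinforcement learning.
# Names, tag format, and reward values are illustrative assumptions only.

import re


def extract_final_answer(completion: str) -> str | None:
    """Pull the model's final answer out of a fixed response template.

    Assumes the model is prompted to wrap its answer in <answer> tags,
    so the result can be verified without human feedback.
    """
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    return match.group(1).strip() if match else None


def reward(completion: str, reference_answer: str) -> float:
    """Score a completion with no human in the loop.

    A small bonus for following the required format, plus a larger
    accuracy reward when the final answer is verifiably correct.
    The training loop would then update the model to make
    high-reward completions more likely.
    """
    answer = extract_final_answer(completion)
    if answer is None:
        return 0.0          # no parsable answer: no reward
    score = 0.1             # format bonus: the template was followed
    if answer == reference_answer:
        score += 1.0        # accuracy reward: the answer checks out
    return score


# Example: a maths problem whose answer a machine can verify.
completion = "Let's see: 17 * 24 = 408. <answer>408</answer>"
print(reward(completion, "408"))  # 1.1
```

Because the reward comes from an automatic check rather than human ratings, the model can generate and score vast numbers of its own attempts, which is why this method suits maths and coding far better than open-ended writing.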