DeepSeek rolls out cheaper AI model to compete with OpenAI

New model leverages DeepSeek Sparse Attention, a system designed to reduce computing costs
An undated image of the DeepSeek logo. — DeepSeek/Canva

Chinese AI developer DeepSeek has announced a new experimental language model that it claims will cut API prices by more than 50%, a move aimed at undercutting competitors like OpenAI and intensifying global AI competition.

The model, named DeepSeek-V3.2-Exp, was launched on the developer platform Hugging Face. The company called the model "an intermediate step toward our next generation architecture," which suggests a larger launch could be coming soon.

DeepSeek hinted that the next architecture could be its most important product release since the V3 and R1 models launched earlier this year, releases that rocked Silicon Valley and the global investment community.

The new model leverages DeepSeek Sparse Attention, a system designed to reduce computing costs while improving performance on long text sequences.

According to DeepSeek, this innovation makes training more efficient while also allowing the model to take on larger and more sophisticated workloads.
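The article does not describe how DeepSeek Sparse Attention is implemented, but the general idea behind sparse attention can be illustrated with a toy example: instead of every query token attending to every key (quadratic cost in sequence length), each query attends only to a small subset of keys. The sketch below is a generic top-k sparse attention in NumPy, not DeepSeek's actual method; all function names and the `top_k` parameter are illustrative assumptions.

```python
import numpy as np

def dense_attention(q, k, v):
    # Standard attention: every query attends to every key (O(n^2) work).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def topk_sparse_attention(q, k, v, top_k=4):
    # Sparse variant: each query attends only to its top_k highest-scoring
    # keys, so per-query work scales with top_k rather than sequence length.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    out = np.zeros_like(q)
    for i in range(q.shape[0]):
        idx = np.argpartition(scores[i], -top_k)[-top_k:]  # top_k key indices
        s = scores[i, idx]
        w = np.exp(s - s.max())
        w /= w.sum()
        out[i] = w @ v[idx]
    return out

rng = np.random.default_rng(0)
n, d = 16, 8  # toy sequence length and head dimension
q, k, v = rng.normal(size=(3, n, d))
# With top_k == n the sparse version reduces to dense attention.
print(np.abs(dense_attention(q, k, v) - topk_sparse_attention(q, k, v, top_k=n)).max())
```

For long sequences, keeping `top_k` fixed while `n` grows is what turns the quadratic attention cost into something closer to linear, which is the kind of saving the article attributes to DeepSeek's approach.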

Analysts said that while DeepSeek's latest experimental release is unlikely to be as disruptive as its last major public release in January, it still poses a credible challenge to rivals.

By offering advanced AI capabilities at a fraction of the cost, DeepSeek is positioning itself to pressure both domestic rivals such as Alibaba’s Qwen and international leaders, including OpenAI.

DeepSeek’s strategy has consistently focused on combining high performance with affordability. This approach helped its earlier models gain global attention as some of the most cost-effective yet capable alternatives in the AI race.