
In a move likely triggered by growing friction between the two artificial intelligence (AI) giants, Anthropic has revoked OpenAI’s access to its Claude AI models.
The access termination follows reports from multiple news outlets claiming that OpenAI used Claude through internal tools to benchmark its own AI models on coding, writing, and safety assessments.
The Claude maker confirmed the development in a statement to TechCrunch, saying OpenAI’s technical team had used Claude’s coding capabilities ahead of GPT-5’s launch. That, Anthropic says, breaches terms of service designed to prevent competitors from leveraging Claude to build rival services.
The company clarified, however, that OpenAI’s API access would remain intact for “benchmarking and safety evaluations.”
OpenAI defended its actions as “industry-standard practice,” adding: “While we respect Anthropic’s choice to revoke our API access, it’s regrettable given our API remains open to them.”
The dispute reflects escalating tensions in the AI space. Anthropic had previously restricted access for Windsurf, the coding startup reportedly being acquired by OpenAI at the time, with Anthropic Chief Science Officer Jared Kaplan stating: “Selling Claude to OpenAI would be incongruous.”
Analysts say the move also shows firms becoming more diligent about protecting their proprietary advances.
Anthropic’s dual approach, allowing evaluative access while blocking developmental use, highlights an effort to balance collaboration with competitive safeguards. With GPT-5’s launch pending, scrutiny over cross-company tool usage is likely to intensify.