Meta fixes critical security flaw in AI chatbot

Prompt numbers produced by Meta's servers were 'easily guessable'
An undated image. — Shutterstock

Meta has fixed a serious security flaw in its Meta AI chatbot that let users view other users' private prompts and AI-generated responses.

Appsecure Security founder and CEO Sandeep Hodkasia found the bug and received a $10,000 bug bounty for privately disclosing it.

Hodkasia discovered the vulnerability while analysing how Meta AI's prompt-editing feature works.

When users edit their prompts to regenerate text or images, Meta's servers assign each prompt, along with its AI-generated response, a unique number.

However, Hodkasia found that by changing this unique number, he could retrieve other users' prompts and responses.

The bug arose because Meta's servers did not check that the user requesting a prompt and its response was actually authorised to view them.
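A missing ownership check of this kind is commonly called an insecure direct object reference (IDOR). The sketch below illustrates the pattern in minimal form; all names and data are hypothetical, since Meta's actual implementation is not public.

```python
# Hypothetical in-memory store mapping prompt IDs to owner and content.
PROMPTS = {
    1001: {"owner": "alice", "text": "Draft my resignation letter"},
    1002: {"owner": "bob", "text": "Generate a birthday card image"},
}

def get_prompt_vulnerable(prompt_id, requesting_user):
    # BUG: returns the prompt for any valid ID, never checking ownership.
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id, requesting_user):
    # FIX: confirm the requester owns the prompt before returning it.
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt["owner"] != requesting_user:
        return None  # in a real API: respond with 403/404
    return prompt
```

In the vulnerable version, any logged-in user who guesses the ID 1002 can read Bob's prompt; the fixed version returns nothing unless the requester is Bob.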

The prompt numbers produced by Meta's servers were "easily guessable," Hodkasia said, raising the concern that malicious actors could use automated tools to exploit the flaw and scrape users' original prompts.
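Sequential numeric IDs are "easily guessable" precisely because an attacker who sees one can simply count upward. A common mitigation is to issue long random identifiers instead; the snippet below is an illustrative sketch, not a description of Meta's actual fix, whose details are not public.

```python
import secrets

# Sequential IDs: seeing 1000 lets an attacker enumerate neighbours.
guessable_ids = [1000 + i for i in range(5)]  # 1000, 1001, 1002, 1003, 1004

def new_prompt_id():
    # 16 random bytes (~128 bits), URL-safe: infeasible to enumerate.
    return secrets.token_urlsafe(16)

# Note: unguessable IDs reduce scraping risk, but they are no substitute
# for the server-side authorization check; both defences are needed.
```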

On January 24, 2025, Meta confirmed that it had fixed the bug and discovered no signs of malicious exploitation.

Meta spokesman Ryan Daniels stated: "We rewarded the researcher and found no evidence of abuse."

The company responded quickly to the report, helping to safeguard the security and integrity of user data.

The incident underscores how crucial security is becoming to AI products, especially as tech companies race to launch and refine them.

Meta AI's standalone app, launched earlier this year to compete with rivals such as ChatGPT, ran into early difficulties when some users unintentionally shared their private conversations with the chatbot publicly.