
An undated image of the Meta logo. — Meta/Canva
Meta has placed new limitations on its AI chatbots following an investigation that uncovered disturbing conversations with minors and exploitation of celebrity likenesses.
Notably, Meta says its chatbots are being retrained to avoid engaging with teenagers on sensitive issues such as self-harm, suicide, and eating disorders, topics teenagers may struggle to raise with another person face-to-face. The chatbots will also be instructed not to engage in inappropriate romantic conversations.
Meta described the updates as "interim measures" while more permanent policies are developed.
The limitations follow a Reuters investigation which found that Meta's AI systems could readily be manipulated to produce sexually explicit dialogue with a minor, create shirtless images of underage celebrities, and impersonate public figures such as Taylor Swift, Scarlett Johansson, and Selena Gomez.
Some bots also provided users with fictitious physical addresses, one of which was linked to the death of a 76-year-old man in New Jersey.
Meta spokesperson Stephanie Otway acknowledged that the company had made a mistake in allowing minors to have these interactions in the first place, adding that the AI would now direct minors to expert resources instead.
She also confirmed limitations on access to certain AI characters, including those with "heavily sexualised profiles".
While Meta has removed several bots flagged by Reuters, many remain on its platforms, raising questions about enforcement. Some AI personas were even created by Meta employees, despite company rules banning impersonation and sexually explicit content.
The company now faces scrutiny from US lawmakers, with the Senate and 44 state attorneys general probing its AI safety practices. Meta has not announced whether it plans to expand restrictions beyond interactions with minors.