
Meta, Facebook's parent company, has announced new steps to safeguard teenagers who use its artificial intelligence products.
The company is putting measures in place to stop its AI systems from engaging in inappropriate conversations with children, including discussions of suicide, self-harm, or romantic and sensual topics.
The action follows a Reuters report revealing that Meta's chatbots had been permitted to hold offensive conversations with children. The disclosure sparked intense outrage and led US Senator Josh Hawley to open an investigation into Meta's AI practices.
The company's handling of AI interactions with children has drawn criticism from both Democrats and Republicans in Congress.
Meta spokesperson Andy Stone said the company is implementing temporary measures to limit teen access to certain AI characters while developing longer-term solutions to guarantee safe, age-appropriate AI interactions.
According to Stone, "We are taking these short-term steps while developing longer-term measures to ensure teens have safe, age-appropriate AI interactions."
The safeguards are already being rolled out and will be adjusted as the company refines its systems.
Meta previously confirmed the authenticity of an internal document containing guidelines that allowed chatbots to flirt and engage in romantic role-play with children.
Following enquiries from Reuters earlier this month, however, the company removed those sections.
Stone stated, "The aforementioned notes and examples were and remain incorrect and in conflict with our policies and have been eliminated."
With these new steps, Meta hopes to allay concerns about its AI policies and make its products safer for teens.