
OpenAI is creating a new ChatGPT experience specifically for teenagers, putting safety first because of growing worries about how AI may affect the mental health of young people.
The move follows a lawsuit alleging that ChatGPT's lack of safeguards contributed to a teenager's suicide.
OpenAI CEO Sam Altman stated: "We prioritise safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection."
The teen version will include stricter restrictions, such as content filters that block flirtatious conversations and discussions of self-harm.
If a teen in crisis expresses suicidal thoughts, OpenAI will notify parents or law enforcement. Parents can also link their accounts to their teen's, set guidelines for how ChatGPT responds, and impose "blackout hours" during which the app is inaccessible.
OpenAI will steer minors to the teen mode using age-prediction technology; if the system cannot reliably determine a user's age, ChatGPT will default to the under-18 experience.
The announcement comes ahead of a Senate hearing at which lawmakers will question tech companies about teen safety and AI's potential risks to young people.
The company intends to launch the teen-focused experience, with its expanded parental controls, by the end of 2025. OpenAI's strategy echoes past efforts by companies such as Google, which launched YouTube Kids in response to regulatory pressure and criticism.
With the launch of teen mode, OpenAI aims to balance young users' freedom and privacy with their safety.