
OpenAI has rolled out parental controls for ChatGPT on the web and mobile amid increased scrutiny of artificial intelligence (AI) and a lawsuit brought by the parents of a California teenager who died by suicide after allegedly receiving harmful suggestions from ChatGPT.
With the new settings, parents and teens must link their accounts to enable the stricter controls. Once linked, parents can reduce exposure to sensitive content, disable voice mode and image generation, set quiet hours to limit access, and control whether ChatGPT remembers previous conversations or uses them in the training of OpenAI's models.
Perhaps most importantly, parents will not have direct access to their teen's chat transcripts. However, in the rare event that OpenAI's systems and trained reviewers flag a serious safety concern, the company will notify parents with only the information necessary to keep the teen safe. Parents will also be notified if a teen unlinks their account.
The Microsoft-backed company, which reports approximately 700 million weekly active users, is also working on an age-prediction system that could automatically apply these protections to all users under 18.
The move comes as US regulators ramp up scrutiny of AI platforms over risks to minors. Last month, Meta introduced additional safeguards to prevent its AI products from engaging teenagers in flirty conversations or discussions of self-harm, while temporarily restricting access to some AI characters.