
Although Google's in-house Gemini artificial intelligence (AI) tools have been warmly received wherever they have been made available, Common Sense Media, a nonprofit focused on kids' safety in media and technology, has deemed Google's Gemini AI products unsafe for kids and teens in its latest assessment.
Common Sense found that Gemini does not meaningfully tailor its guidance for younger users, leading it to rate both the "Under 13" and "Teen Experience" tiers as "High Risk."
The evaluation report noted that while Gemini identifies itself as a computer rather than a friend, it has several areas that need improvement, TechCrunch reported.
Why is Gemini AI unsafe for kids and teens?
The report also indicated that both Gemini tiers are essentially the adult version of the AI with only a handful of added safety features layered on top.
Common Sense argues that AI products should be built with child safety in mind from the start, rather than simply modifying existing adult models.
The analysis found that Gemini can still share inappropriate and unsafe content with children, including material related to sex, drugs, and mental health advice, raising concerns for parents.
OpenAI is currently facing a wrongful death lawsuit related to the suicide of a 16-year-old who allegedly consulted ChatGPT about his plans.
The development comes as Apple is reportedly planning to use Gemini to power its upcoming AI-enabled Siri, heightening worries that teens could be exposed to these risks without proper safety measures in place.