
Alphabet-owned Google is set to release new online safety features that make it easier to remove explicit deepfakes from Search at scale and prevent them from appearing in search results in the first place.
When users successfully request the removal of nonconsensual explicit fake content depicting them from Search, Google's systems will aim to filter out all explicit results on similar searches about them and remove any duplicate images.
Google Product Manager Emma Higham said: “These protections have already proven successful in addressing other types of non-consensual imagery, and we’ve now built the same capabilities for fake explicit images as well.
“These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future,” Higham added.
Google is also adjusting Search rankings to better handle queries that carry a higher risk of surfacing explicit fake content. Sites that receive a high volume of removals for fake explicit imagery will be demoted in Search rankings.
Google said that earlier updates have reduced exposure to explicit image results by more than 70% this year on queries specifically seeking such deepfake content.
In addition, the company is working on a way to distinguish real explicit content — such as an actor’s consensual nude scenes — from explicit fake content, so that legitimate images can still surface while deepfakes are downgraded.