YouTube's new AI detection feature lets creators flag AI content using their likeness

YouTube's new AI detection feature works much like its Content ID system, which detects copyright violations
An undated image. — Depositphotos

Amid growing concern over the misuse of AI and the copyright infractions it can lead to, YouTube has launched a new AI detection feature that allows users to identify and report unauthorised uploads that use their likeness.

Starting today, only creators enrolled in YouTube's Partner Program can use the tool to detect content featuring their likeness.

After verifying their identity, creators can review flagged videos in the Content Detection tab in YouTube Studio. If they find unauthorised, AI-generated content, they can request its removal.

Notably, the feature was first rolled out to a group of eligible creators, who were notified via email this morning. It will gradually expand to more users over the coming months, The Verge reported.

Since the tool is still in development, YouTube cautioned that it may flag videos showing a creator's actual face, including clips from their own content.

This system's functionality is similar to YouTube's Content ID, which detects copyright violations.

YouTube first announced the feature last year and began testing it in December with talent from Creative Artists Agency (CAA), aiming to help influential figures manage AI-generated content that features their likeness.

Alongside the new detection tool, YouTube now requires creators to label uploads containing AI-generated or AI-altered content.

The platform has also introduced a strict policy on AI-generated music that mimics an artist's voice.

Together, these measures reflect YouTube and Google's commitment to addressing the challenges AI poses in video creation and editing, helping ensure that creators' identities and original content are protected.