Spotify announces new rules to stop AI clones

Spotify says AI can help enable creative work but also introduces risks of impersonation and spam content
An undated image of Spotify logo. — Spotify Newsroom

Spotify has announced new initiatives to mitigate the risks associated with artificial intelligence (AI) in music, aimed at protecting artists' identities and sustaining user trust on the platform.

The company noted that AI can help enable creative work but also introduces risks of impersonation and spam content that erodes the royalties artists receive for their work.

A key change is a new policy on AI impersonation, which stipulates that Spotify will allow AI clones of an artist's voice only when the artist has given clear permission.

The company is also implementing systems to prevent fraudulent uploads and inaccurate attribution, giving musicians a quicker and easier way to address misuse of their work.

Another major change this year is the development of a spam filter that will detect mass uploads, abnormally short tracks, and other manipulative practices.

According to Spotify, the spam filter will be rolled out cautiously and updated as new methods of abuse emerge, with the aim of ensuring that legitimate creators are not affected.

The platform is also backing an industry group developing a standard for AI disclosures in music credits, allowing listeners and rights holders to understand how AI was used in a track's creation.