AI visionaries call for heightened deepfake regulation in joint statement

Coalition warns that deepfakes today frequently involve sexually explicit material, fraud, or political disinformation
The image shows a man holding a mask. — Freepik

Artificial intelligence luminaries and industry leaders, among them pioneer Yoshua Bengio, have joined forces to call for stronger regulation of deepfakes, citing the risks their proliferation poses to society.

In an open letter spearheaded by Andrew Critch, an AI researcher at UC Berkeley, the coalition highlights a landscape in which deepfakes frequently involve sexually explicit material, fraud, or political disinformation. With rapid advances in AI making deepfakes ever easier to create, the group stresses the need for protective measures.

Deepfakes, which encompass convincingly fabricated images, audio, and video generated by AI algorithms, have reached a level of realism that makes them nearly indistinguishable from authentic, human-created content.


Entitled "Disrupting the Deepfake Supply Chain," the letter proposes a framework for regulating deepfakes, advocating for the complete criminalisation of deepfake child pornography, imposing legal consequences for individuals involved in the creation or propagation of harmful deepfakes, and mandating AI companies to implement safeguards against the generation of harmful deepfakes by their products.

The initiative has garnered support from over 400 signatories as of Wednesday morning, spanning various sectors including academia, entertainment, and politics. Notable endorsers include Harvard psychology professor Steven Pinker, former Estonian presidents, as well as researchers from Google, DeepMind, and OpenAI.

The call for regulation underscores the imperative of ensuring that AI systems do not pose threats to society, echoing earlier warnings from influential figures. Elon Musk, for instance, endorsed a letter last year advocating a six-month moratorium on the development of AI systems more capable than OpenAI's GPT-4 model.