
Microsoft has been knocking on Congress’s door, urging lawmakers to regulate AI (artificial intelligence) deepfake technology to prevent fraud and manipulation.
The company’s Vice-Chair and President, Brad Smith, has called on policymakers to take urgent action to safeguard elections, the elderly, and children.
“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”
Microsoft is rallying behind a ‘deepfake fraud statute’ that would give law enforcement agencies the legal basis to prosecute AI fraudsters. The Vice-Chair continued, “We must ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.”
The Senate has already moved on the issue to some extent, passing a bill that allows creators of sexually explicit AI deepfakes to be prosecuted for the non-consensual use of a person’s likeness.
The bill’s passage followed reports of misconduct in schools, where students were found to have used deepfake technology to produce compromising imagery of female classmates.
Microsoft itself had to go back to the drawing board and add more safety precautions to its AI products after users exploited a loophole in its Designer tool to create explicit imagery of celebrities such as Taylor Swift.
The FCC has already banned robocalls that use AI-generated voices. What’s troubling is how easy generating fake audio and imagery has become, especially against the backdrop of elections.
Earlier, Elon Musk took a shot at Vice President Kamala Harris by sharing a deepfake video of her, a move that appears to violate X’s own manipulated media policies.
Microsoft wants posts like Musk’s to be clearly labelled as deepfakes. “Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content,” says Smith. “This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.”
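For readers wondering what “provenance tooling” means in practice, the idea behind standards such as C2PA is that the generator attaches a signed manifest declaring the content AI-generated, which platforms can verify before labelling a post. The Python sketch below is a simplified illustration of that concept, not any real standard’s API: the manifest fields, the HMAC signature, and the JSON side-car format are all assumptions made for clarity (real systems use certificate-based signatures embedded in the media file).

```python
# Hypothetical sketch of provenance labelling for synthetic content.
# Loosely inspired by C2PA-style manifests; the signing scheme and
# manifest fields here are illustrative simplifications.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"provider-secret-key"  # stand-in for a provider's signing certificate


def make_manifest(content: bytes, generator: str) -> dict:
    """Provider side: record that this content was produced by an AI system."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Platform side: check the signature and hash before labelling a post."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claims["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    image = b"...synthetic image bytes..."
    manifest = make_manifest(image, generator="ExampleImageModel-v1")
    print("label as AI-generated:", verify_manifest(image, manifest))
```

The point of the signature is that a label travels with the content and tampering is detectable: if either the image bytes or the manifest claims are altered, verification fails and the platform knows the provenance can no longer be trusted.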