
Resemble AI at US Senate: Key Learnings and Takeaways from the Senate Hearing on Election Deepfakes

Apr 19, 2024

This week, Resemble AI CEO and founder Zohaib Ahmed was invited to testify before the United States Senate Judiciary Subcommittee on Privacy, Technology, and the Law to discuss the impact that deepfake technology can have on US elections.

Startling incidents like the fake President Biden robocall and the recent deepfake of Arizona Republican Senate candidate Kari Lake highlight the concerning potential for artificial intelligence misuse in politics. These incidents vividly demonstrate how these advanced technologies can spread election misinformation and sow confusion among voters, posing a significant threat to the integrity of elections.

Here are some key takeaways from the recent Senate hearing on election deepfakes:

It’s imperative to establish accountability measures for those who misuse AI.

There was overwhelming support for proposed legislative frameworks that would hold individuals, companies, media outlets, political groups, and international actors accountable for any malicious use of AI in election-related content.

This could include fines, legal action, and the revocation of broadcasting licenses for media outlets that knowingly distribute AI-generated misinformation. By establishing clear consequences for the misuse of AI, we can deter bad actors and create a stronger incentive for compliance with transparency and labeling requirements.

Combating election misinformation is a collaborative effort.

It was clear that no single body or company can solve this problem. It will take a collective approach and agreement that includes experts from the private and public sectors, including representatives from AI companies, government agencies, academic institutions, and civil society organizations.

This group could make up a national task force that would be responsible for developing best practices and standards for the use of AI in election-related content, as well as coordinating efforts to detect and counter AI-generated misinformation.

We need to do something about it NOW.

U.S. elections are quickly approaching, less than six months away, and there was consensus among those who testified and government officials that we need to move quickly and start implementing measures that protect voters and election integrity.

One example is clear labeling of AI-generated content in the election process. Just as disclaimers appear at the end of political ads, voters should be made aware that they are interacting with an AI model or AI-generated content.

At Resemble AI, we know firsthand that deepfake detection technology is a powerful tool, capable of providing crucial context and labeling to identify potentially misleading or AI-generated content. This is also why we made our real-time Deepfake Detector tool available to everyone, so the public can quickly verify the authenticity of widely circulated audio content, making it a valuable asset for journalists, content creators, and the general public who are often on the frontline of combating misinformation.

Furthermore, voter education initiatives are crucial in promoting transparency. While each state has its own election rules and requirements, public awareness campaigns that inform voters about the existence and potential impact of AI-generated content in elections would help equip voters with the tools to critically evaluate the information they receive.

What Resemble AI is Doing To Address Election Misinformation

The team at Resemble AI has spent the last five years developing and researching AI voice technology, and through that work we are uniquely positioned to understand both the remarkable potential and the possible risks associated with the rapid advancement of voice synthesis and cloning capabilities.

We have created innovative solutions to address the emerging challenges posed by unauthorized or unethical uses of voice cloning technology, including:

  • Our PerTh Watermarker, an “invisible watermark” that tackles the malicious use of AI-generated voices. Further research into our watermark supports the traceability of data.
  • Resemble Detect, our advanced deepfake detection AI model that provides 98% accuracy in exposing deepfake audio.
  • Our free, real-time Deepfake Detector tool, to quickly verify the authenticity of widely circulated audio content to combat misinformation. 
  • Recently added real-time deepfake detection for Google Meet, which gives you immediate insight into the authenticity of your communication channels.
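
To make the labeling idea above concrete, here is a minimal, hypothetical sketch of how a journalist or publisher might attach a label to an audio clip based on a detection score. The endpoint URL, request fields, response key, and threshold are illustrative assumptions for this post, not Resemble AI's actual API.

```python
# Hypothetical sketch: labeling audio with a deepfake-detection score.
# The endpoint, credential, request/response fields, and threshold are
# illustrative placeholders, not Resemble AI's actual API.
import requests

DETECT_URL = "https://api.example.com/v1/detect"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                          # placeholder credential

def label_audio(path: str, threshold: float = 0.5) -> str:
    """Send an audio file to a detection service and return a human-readable label."""
    with open(path, "rb") as f:
        response = requests.post(
            DETECT_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
            timeout=30,
        )
    response.raise_for_status()
    score = response.json()["fake_probability"]  # assumed response field

    # Translate the raw score into the kind of disclaimer a voter might see.
    if score >= threshold:
        return f"Likely AI-generated audio (score: {score:.2f})"
    return f"No synthetic speech detected (score: {score:.2f})"

if __name__ == "__main__":
    print(label_audio("robocall_clip.wav"))
```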

We appreciated being able to share our expertise and present our recommendations to the United States Senate Judiciary Subcommittee, and we look forward to facilitating partnerships between the private and public sectors to ensure today’s innovation is used responsibly.
