
Famous Video Game Voice Actors Raise Concerns Over Social Media Deepfakes

Since the beginning of 2023, generative AI has been making profound inroads into various industries. One industry facing notable challenges from voice AI technology is voice acting. Voice actors, the skilled performers behind beloved animated Disney characters and Netflix narrations, are grappling with the spread of AI voice generators into their domain. This post examines a recent Forbes article by Rashi Shrivastava about video game voice actor deepfakes. We’ll look at the deepfake-related concerns of the voices behind top hit games like Genshin Impact and The Elder Scrolls, as well as the protective measures that generative AI apps are taking to safeguard data privacy.

From AI Voice Generator to TikTok Deepfake 

A recent incident that brought the ethical dilemmas of voice actor deepfakes to the forefront was the case of Allegra Clark, the video game voice actor behind Beidou in the hit game Genshin Impact. While scrolling through TikTok, she was shocked to find a video of Beidou making sexually suggestive comments in her voice. Clark later found out that her voice had been cloned without her consent using ElevenLabs, a generative voice AI platform. She promptly requested that the video be taken down, expressing her discomfort at hearing her voice speak words she never uttered, and asked ElevenLabs to block any future cloning of her voice. However, ElevenLabs refused to take action and never followed up with her.

Getting Doxxed on Twitter: The Ultimate Data Privacy Fear

Fellow voice actor Abbey Veffer, known for voicing characters in Genshin Impact and The Elder Scrolls, was recently doxxed by a stranger on Twitter. In case you’re wondering what doxxing means, it is the public release of private information about someone’s identity, such as a home address. In this case, the culprit created a Twitter account with her home address as the handle, then used an AI clone of her voice to generate racist and violent content. The user claimed to have used ElevenLabs for the voice cloning and text-to-speech content generation.

However, when Veffer approached ElevenLabs about her horrific experience, the company denied any involvement in the creation of the voice clone or the deepfake, asserting that the incident was part of an orchestrated campaign to tarnish the startup’s reputation. Twitter, for its part, responded by suspending the account in question and removing the video.

Deepfake Content: Reputation In The Spotlight

In a similar fashion to Clark, another voice actor, Cissy Jones, a member of the National Association of Voice Actors (NAVA), faced major reputation concerns. She found TikTok videos in which fans had used the voice cloning tool Uberduck AI to create deepfake content of her saying inappropriate things. The reality is that AI-generated voices can be manipulated to say things voice actors would never endorse. For both Jones and Allegra Clark, such TikTok deepfakes potentially jeopardize future job opportunities and damage their reputations. Below are examples of current Genshin Impact deepfakes on TikTok made with ElevenLabs and Uberduck.

Genshin Impact Voice Actor Deepfakes on TikTok


The Voice AI Generator: Innovation At Your Fingertips

How do you make a deepfake video? A Google search is all it takes to learn how to create an audio deepfake. The availability of voice AI generators, open-source projects, and similar tools gives nearly anyone access to voice cloning. Users scrape audio data from the internet and upload it to a platform like Uberduck or ElevenLabs, which can then clone the voice. Through text-to-speech synthesis, AI voice content is generated and applied to video, as in the Genshin Impact deepfakes. Nor is this an isolated problem for video game voice actors: deepfakes and other unethical uses of AI have surged across the internet. Tom Hanks and MrBeast, one of the most popular YouTubers, were both recently depicted in fake video advertisements.

Voice Ownership In A World With Voice AI

In light of these deepfake incidents, the core issue that arises is the lack of ownership voice actors have over their own voices. Not only are their voices accessible online through voice AI platforms like Uberduck, but talent contracts in the voice acting industry typically stipulate that producers own the recordings in perpetuity, throughout the known universe, in any technology currently existing or to be developed. This clause significantly restricts voice actors’ control over how their voices are used, even when those recordings are used to train text-to-speech AI models.

How Are Generative AI Platforms Responding?

Generative voice AI platforms like ElevenLabs and the celebrity voice generator Fakeyou have faced scrutiny for enabling misuse of their technology. Their responses to these concerns have varied: some dismiss complaints as smear campaigns, while others, like Uberduck and Fakeyou, have removed voices upon request.

While some may question OpenAI’s ethics, ChatGPT will not provide detailed information when prompted about how to make a deepfake video. Below are LLM responses to two basic prompts related to deepfake content creation. ChatGPT and Google’s Bard declined to respond; ChatGPT goes as far as calling the deepfake prompt unethical and potentially illegal. Anthropic’s Claude, on the other hand, goes into full detail about how to make and distribute a deepfake and the challenges of detecting one. Clearly, continued effort and more robust safeguards are needed to curb the proliferation of deepfake content.

LLM Deepfake AI Safeguards Comparison


How Resemble AI’s Deepfake Detector Stands In Support of Data Privacy

As the voice acting industry grapples with deepfake content, Resemble AI continues to prioritize AI safety against deepfake audio. Not only does our platform require user-recorded consent that must match the uploaded voice data, but our AI security stack also helps safeguard digital identities and reputations. Specifically, Resemble Detect, our deepfake detector, provides individuals and companies alike with real-time content authentication.

Resemble Detect, Definitive Deepfake Detection: Resemble Detect is a real-time deepfake detection model built on an advanced deep neural network that can identify fake AI voice content with over 98% accuracy. In the context of deepfake videos, Resemble Detect plays a pivotal role in catching AI voices early and can prevent the circulation of deepfake videos. From music to podcasts, Detect recognizes deepfake audio across all forms of media and against all modern generative speech synthesis solutions, including AI voices from ElevenLabs, Fakeyou, and Uberduck.
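Resemble Detect’s actual model is a proprietary deep neural network, but the general idea behind audio deepfake detection, extracting acoustic features from a clip and classifying them as real or synthetic, can be illustrated with a toy sketch. The example below uses spectral flatness as a stand-in feature (an assumption for illustration only, not Resemble’s method) and a simple threshold in place of a trained network:

```python
import numpy as np

def spectral_flatness(signal: np.ndarray, eps: float = 1e-10) -> float:
    """Ratio of the geometric mean to the arithmetic mean of the power
    spectrum. Values near 1.0 indicate noise-like audio; values near 0.0
    indicate highly tonal, "too clean" audio."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + eps
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

# Toy 1-second signals at 16 kHz standing in for suspiciously clean
# synthetic speech vs. audio with natural background noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
tonal = np.sin(2 * np.pi * 220 * t)                 # very clean, tonal signal
noisy = tonal + 0.5 * rng.standard_normal(t.size)   # same tone plus noise

# A real detector learns its decision boundary from data; here a fixed
# threshold merely illustrates the classify-from-features step.
print(spectral_flatness(tonal) < spectral_flatness(noisy))  # True
```

In practice no single hand-picked feature is reliable: production detectors like Resemble Detect learn many such cues jointly from large corpora of real and synthetic speech, which is what makes them robust across different voice generators.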


The Future of Video Game Voice Actor Deepfakes

In the face of these challenges, voice actors, industry associations like the National Association of Voice Actors (NAVA), and AI technology companies like Resemble AI must collaborate. Together we can chart a path that safeguards the interests of voice actors and ensures responsible AI. Our commitment to responsible AI through cutting-edge tools like Resemble Detect ensures a safer and more secure digital future. In doing so, we take strides toward preserving our digital identities and upholding the principles of ethical AI use.
