
Greta Thunberg Deepfake


A viral deepfake video has surfaced appearing to show climate activist Greta Thunberg advocating for the use of “sustainable” weapons in war. The manipulated video heightened concerns about how voice cloning and deepfake technology can be misused to spread misinformation. Videos like this, which combine synthetic audio and video, make it seem as though public figures said things they never actually said. We’ll discuss the impact of this video on Greta Thunberg’s reputation, as well as deepfake detection’s role in validating content.

A False Call To Arms

The falsified video, which originated on YouTube, used real footage of Thunberg being interviewed by the BBC about her book and climate activism. Her image was then paired with fabricated audio calling for environmentally friendly weaponry like “vegan grenades” and “biodegradable missiles.” These outrageous statements, never actually made by Thunberg, were intended as satire by the video’s creator, the YouTuber Snicklink. The one-minute video ends with Greta’s deepfake plugging a fake book, “Vegan Wars.” Below is a look at the video as posted by a Twitter account with over 1 million followers.

The real Greta Thunberg BBC interview.

Clearing The Air: Fake vs Real Video  

Thunberg’s deepfake video can be found in the embedded Tweet. Looking closely, you can see that the animation of her mouth has been altered. There is also a platform notification underneath the video giving users context and warning them that the content contains deepfake video and audio.

The video above is the original BBC interview, which covers climate change rather than the bogus subject matter in the deepfake.

A Viral Deepfake That Fooled the Internet

The deepfake audio paired convincingly with the genuine visuals of Thunberg speaking. This realistically forged media fooled viewers unfamiliar with Thunberg’s actual views and statements. The viral circulation of the manipulated footage demonstrated how convincing deepfakes can spread false information under the guise of reality.

Greta Thunberg Deepfake Reputation

Social media backlash in response to Greta Thunberg’s deepfake video.

Without disclaimers clearly identifying the video as satire, many viewers perceived the deepfake as authentic, and some of the social media posts responding to the manipulated video did not hold back. The incident illustrates the need for greater digital media literacy around deepfake technology and, more importantly, for protection against voice cloning misuse. Responsible governance of generative technology remains imperative as deepfakes grow more accessible and more damaging to reputations.

Deepfake Voice Detection 

While government regulation has lagged behind the deepfake dilemma, our team continues to push the limits of audio deepfake detection. Resemble Detect, a deep neural network AI voice detector, can identify deepfake voices with 98% accuracy. The model examines a combination of time- and frequency-domain features to identify artifacts or manipulations in audio. An engineer on the team ran Greta Thunberg’s deepfake audio clip through Resemble Detect, which gave the audio a 100% score. The score is a prediction of how fake the audio is, with 100% meaning absolutely fake. Below is a visual representation of Detect’s model analyzing the audio in 2-second windows; the bold red line in the middle represents the score, or prediction.

Resemble Detect Analysis: Greta Thunberg Deepfake

Resemble Detect’s model predicts that the audio is 100% fake.
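To make the windowed analysis concrete, here is a minimal Python sketch of the general approach described above: split a clip into 2-second windows, convert each window into a joint time/frequency representation, score each window, and average the per-window probabilities into a single clip-level score. This is only an illustration under stated assumptions, not Resemble Detect’s actual implementation; the score_window stub stands in for a trained neural network, and the use of librosa and a log-mel spectrogram are assumptions for the example.

# Hypothetical sketch of windowed deepfake-audio scoring.
# NOT Resemble Detect's implementation; score_window is a placeholder
# for a trained classifier, and the feature choice is an assumption.
import numpy as np
import librosa

WINDOW_SECONDS = 2.0  # the post describes analysis in 2-second windows

def extract_features(window: np.ndarray, sr: int) -> np.ndarray:
    # Log-mel spectrogram: a common joint time/frequency representation
    # in which synthesis artifacts can show up.
    mel = librosa.feature.melspectrogram(y=window, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

def score_window(features: np.ndarray) -> float:
    # Stand-in for a trained neural network. A real detector would return
    # the probability that this window of audio is synthetic.
    return 0.5

def score_clip(path: str) -> float:
    # Average the per-window "fakeness" probabilities into one clip score,
    # where 1.0 would mean "absolutely fake".
    audio, sr = librosa.load(path, sr=16000, mono=True)
    hop = int(WINDOW_SECONDS * sr)
    scores = []
    for start in range(0, max(len(audio) - hop + 1, 1), hop):
        window = audio[start:start + hop]  # trailing remainder is ignored here
        scores.append(score_window(extract_features(window, sr)))
    return float(np.mean(scores))

In this sketch the clip-level prediction is a simple average of the per-window scores; a production detector could just as easily plot the per-window scores over time, which is what the visualization above shows.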

The Solution  

The Greta Thunberg deepfake incident has underscored the urgent need for comprehensive AI safety measures, especially for businesses that deal with voice data. From voicemod and live voice changer applications to more complex systems like AI voice generators, the role of this technology in our lives is undeniable. However, these tools must be approached with caution and ethical responsibility. At Resemble AI, we believe in responsible AI development and data privacy. Learn more about Resemble Detect by clicking the button below.
