The Intersection of AI Advancements and Privacy
The impact of AI technology is being felt across industries, transforming how we engage and interact with businesses. The advanced capabilities and increased efficiencies that deep learning models provide are lucrative for companies and a compelling growth driver. However, as these technologies become more sophisticated, they bring ethical and legal challenges with them. The recent controversy involving a deepfake advertisement featuring Scarlett Johansson shines a spotlight on the complex dynamics of AI, entertainment, and individual rights.
Who Was Involved In The Scarlett Johansson Deepfake?
Scarlett Johansson is a renowned Hollywood actress celebrated for her roles in blockbuster movies like Black Widow and Avengers: Infinity War. She is also a Tony Award winner and has received multiple Academy Award nominations, firmly establishing her presence in the global entertainment landscape.
Sharing the spotlight in this controversy is an AI app, Lisa AI, a platform that enables users to create avatars and images from text prompts. This technology, known as text-to-image synthesis, is at the core of AI tools such as Midjourney and OpenAI's DALL-E.
Lisa AI app in the App Store.
Details Around The Deepfake Ad
Despite offering innovative solutions, Lisa AI has found itself in hot water over the unauthorized use of Scarlett Johansson's likeness. The controversial advertisement seamlessly blends real footage of Johansson from a campaign she spearheaded for the fundraising organization Omaze. Below is the video we believe to be the original content that the deepfake ad manipulated. Partway through the ad, the AI technology from Lisa AI takes the forefront, producing a remarkably realistic representation of Johansson, both visually and audibly. The ad promotes the capabilities of Lisa AI, showcasing how text prompts can generate lifelike avatars. The advertisement did include a disclaimer stating that the generated images were a product of Lisa AI and bore no connection to Scarlett Johansson. Nevertheless, Johansson's attorney, Kevin Yorn, known for representing high-profile clients, is actively pursuing legal action in this case.
Original clip of Scarlett Johansson promoting Omaze’s fundraiser on the set of Black Widow.
Deepfake Voice AI Detection
As copyright infringement and data privacy issues arise, social media platforms like X and TikTok have taken steps to label synthetic or manipulated media as fake. But more comprehensive measures are needed. In the face of the deepfake predicament, Resemble AI offers robust AI Security solutions to stymie the spread of deepfake content. Our AI security stack identifies and authenticates audio data and content. Our AI voice detector, Resemble Detect, is a real-time deepfake detection model that uses an advanced deep neural network to distinguish genuine from manipulated audio and video content. A detector like this could have flagged the video early, preventing its circulation and protecting the reputation of a public figure.
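To make the detection workflow above concrete, here is a minimal sketch of the general inference pattern a real-time audio deepfake detector follows: split the signal into overlapping frames, score each frame, and aggregate the scores into a verdict. This is not Resemble Detect's actual model or API; the per-frame scorer below is a toy energy-based stand-in for the deep neural network, used only so the example is self-contained and runnable.

```python
import math

def frame_signal(samples, frame_len=160, hop=80):
    """Split a 1-D list of audio samples into overlapping frames."""
    return [samples[start:start + frame_len]
            for start in range(0, len(samples) - frame_len + 1, hop)]

def frame_score(frame):
    """Toy per-frame 'synthetic-likelihood' score in (0, 1).

    A production detector would run a trained neural network here;
    this stand-in just squashes short-term energy through a sigmoid.
    """
    energy = sum(s * s for s in frame) / len(frame)
    return 1.0 / (1.0 + math.exp(-(energy - 0.5)))

def detect(samples, threshold=0.5):
    """Score every frame and return (verdict, mean_score)."""
    scores = [frame_score(f) for f in frame_signal(samples)]
    mean = sum(scores) / len(scores)
    return ("fake" if mean > threshold else "real"), mean

# Usage: one second of a 440 Hz tone at an 8 kHz sample rate.
samples = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
verdict, score = detect(samples)
```

The frame/score/aggregate structure is what makes real-time operation possible: each small frame can be scored as it arrives, rather than waiting for the full recording.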
The Spread of Manipulated Media
While Johansson's unauthorized representation is at the crux of this issue, she isn't the lone celebrity to be exploited in AI endorsements. Notable figures like Tom Hanks and CBS News anchor Gayle King have recently sounded the alarm about unauthorized endorsements of products via deepfake AI content. Furthermore, MrBeast, one of the most popular YouTubers, cautioned his followers about a deepfake TikTok scam advertisement. Detecting and addressing such fake content will continue to require expert analysis and ethical intervention. At Resemble AI, we believe in responsible AI development and data privacy. Learn more about Resemble Detect by clicking the button below.