A finance worker at a multinational firm was defrauded of $25 million in an elaborate scam involving deepfake technology. The employee was tricked into joining a video call with several people he believed to be colleagues, including what appeared to be the company’s UK-based chief financial officer. In reality, every other participant on the call was a deepfake recreation. The scam was only discovered when the employee later checked with the real CFO and realized the video call had been fraudulent.
The worker initially became suspicious after receiving a message, seemingly a phishing attempt, that discussed a secret transaction. Despite these doubts, the deepfake participants on the video call appeared convincingly legitimate, and the worker agreed to transfer HK$200 million (approximately US$25.6 million).
Hong Kong police have made six arrests in connection with this and similar scams. They reported that AI deepfakes have been used on at least 20 occasions to trick facial recognition systems, and in this case, eight stolen Hong Kong identity cards were involved.
This incident is among several recent cases where fraudsters have utilized deepfake technology, raising concerns about the potential misuse of AI and the challenges it poses to security and verification processes.
What are the consequences of falling for a deepfake scam?
Falling for a deepfake scam can have a range of serious consequences, both for individuals and organizations. Here are some of the potential consequences:
Financial Loss
The most immediate and obvious consequence is financial loss. As in the case of the finance worker who transferred $25 million, victims can be deceived into making large financial transactions to fraudulent accounts.
Legal Repercussions
Countries such as India have warned that they are prepared to impose legal consequences on tech companies that fail to take action against the spread of deepfakes. This suggests there could be legal repercussions both for failing to prevent deepfake scams and for falling victim to them.
Reputational Damage
Deepfake scams can cause significant reputational damage to individuals and organizations. A deepfake video of a CEO making inappropriate comments, for example, could go viral and harm the company’s brand and customer trust.
Breakdown of Trust
Regularly falling prey to deepfake scams could lead to a breakdown in trust within an organization, as employees become unsure of who and what they can trust.
Identity Theft
Deepfakes can be used to impersonate individuals, as with the deepfake CFO in this case, leading to identity theft. This can have long-term consequences for victims, including credit damage and personal privacy breaches.
What measures can companies take to prevent deepfake scams?
To prevent deepfake scams, companies can take several measures:
Employee Training and Awareness
- Educate Employees: Teach staff about the existence and risks of deepfake technology. Training should cover how to recognize potential deepfakes and the importance of verifying the identity of individuals in communications. Resemble offers red team exercises using its real-time Voice Cloning and speech-to-speech technologies.
- Promote Skepticism: Encourage employees to be cautious with unexpected requests, especially those involving money or sensitive information.
Secure Communication Channels
- Implement Secure Channels: Use encrypted communication methods, password protection, and multi-factor authentication for sensitive transactions and data.
- Deepfake Detection Software: Consider using specialized software or services that can help identify deepfakes. These tools analyze video and audio for signs of manipulation. Resemble Detect enables organizations to catch up to 95% of deepfake attempts against their organization, so impersonations of key personnel, such as a deepfake CFO, can be caught with high accuracy.
- Robust Security Mechanisms: Maintain strong cybersecurity practices to protect against various threats, including deepfakes.
- Update and Adapt: Stay informed about the latest advancements in deepfake technology and update security measures accordingly.
Policies and Procedures
- Develop New Security Standards: Implement new security standards within the company to prevent deepfakes.
- Response Plan: Have a plan in place for when a deepfake is detected, outlining individual responsibilities and required actions.
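As a concrete illustration of the verification policies above, here is a minimal sketch, in Python, of an out-of-band approval rule: any payment request above a threshold, or to a first-seen beneficiary, is blocked until it is confirmed over a second, independent channel. All names, thresholds, and channels here are hypothetical examples, not details from the incident or any specific product.

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 100_000  # hypothetical policy limit, in the firm's base currency

@dataclass
class PaymentRequest:
    beneficiary: str
    amount: float
    requested_via: str                               # channel the request arrived on
    confirmed_via: set = field(default_factory=set)  # channels used to re-confirm it

def needs_out_of_band_check(req: PaymentRequest, known_beneficiaries: set) -> bool:
    """High-value or first-seen-beneficiary transfers require a second channel."""
    return req.amount >= APPROVAL_THRESHOLD or req.beneficiary not in known_beneficiaries

def approve(req: PaymentRequest, known_beneficiaries: set) -> bool:
    """Approve only if confirmation came over a channel other than the request itself."""
    if not needs_out_of_band_check(req, known_beneficiaries):
        return True
    independent = req.confirmed_via - {req.requested_via}
    return len(independent) > 0

# A request that arrived on a video call and was "confirmed" on the same call fails:
req = PaymentRequest("new-vendor", 25_600_000, "video_call", {"video_call"})
print(approve(req, known_beneficiaries={"payroll"}))  # False

# The same request confirmed by a callback to a directory-listed number passes:
req.confirmed_via.add("callback_to_directory_number")
print(approve(req, known_beneficiaries={"payroll"}))  # True
```

The key design point is that the confirming channel must be subtracted from the requesting channel: a deepfake video call cannot vouch for itself, which is exactly the check the defrauded employee performed only after the transfer.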
How would Resemble Detect prevent Generative AI Scams?
Resemble Detect could potentially help prevent scams like the one in which a finance worker was deceived into transferring $25 million by providing real-time detection of deepfake audio. Resemble Detect is a neural model designed to identify and expose deepfake audio across various types of media, analyzing audio frame by frame to detect artificially generated or modified content.
In the context of the scam involving the deepfake “chief financial officer,” Resemble Detect could be used to verify the authenticity of audio during video calls, potentially flagging any discrepancies that indicate the use of deepfake technology. This could help in alerting employees to the possibility of fraud before any transactions are made. Additionally, Resemble AI offers an AI Watermarker to protect intellectual property, which could prevent unauthorized use of a company’s data in creating deepfakes.
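To make the frame-by-frame idea concrete, here is a generic Python sketch of how a frame-based detector might aggregate per-frame scores into a call-level verdict. This is an illustrative assumption about how such systems work in general, not the Resemble Detect API; the frame length, threshold, and stand-in model are all hypothetical.

```python
FRAME_MS = 20          # hypothetical frame length in milliseconds
ALERT_THRESHOLD = 0.5  # flag the call if the mean synthetic score exceeds this

def frame_scores(frames, model):
    """Score each audio frame with a model: 0.0 = real speech, 1.0 = synthetic."""
    return [model(f) for f in frames]

def flag_call(scores, threshold=ALERT_THRESHOLD):
    """Aggregate per-frame scores; a high average suggests deepfake audio."""
    mean = sum(scores) / len(scores)
    return mean > threshold, mean

# Stand-in "model": in practice this would be a trained neural classifier.
fake_model = lambda frame: 0.9   # pretend every frame looks synthetic

# 50 silent 20 ms frames of 16 kHz, 16-bit audio (320 bytes each) as dummy input:
scores = frame_scores([b"\x00" * 320] * 50, fake_model)
flagged, mean = flag_call(scores)
print(flagged, round(mean, 2))  # True 0.9
```

Running such a check continuously during a call is what would let a detector raise an alert before a transfer is approved, rather than after the fraud is discovered.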