LastPass revealed that one of its employees was targeted in a sophisticated voice phishing attack. The attackers used deepfake audio technology to impersonate Karim Toubba, the CEO of LastPass, in an attempt to deceive the employee into divulging sensitive information or taking unauthorized actions. The deepfake audio was delivered via WhatsApp, a channel not typically used for business communications, which raised suspicion and led the employee to report the incident to the internal security team.
The attempt to deceive the employee did not succeed, and LastPass treated the incident as a serious security concern. It served as a reminder of the potential risks associated with deepfake technology and the importance of being vigilant against such sophisticated phishing scams.
Deepfake technology, which encompasses synthetic media such as fabricated audio, images, and video, has become more accessible and less costly due to advancements in artificial intelligence (AI) and machine learning (ML). This technology can be used for various malicious purposes, including executive impersonation, financial fraud, and gaining illegitimate access to internal communications and operations.
The LastPass incident is part of a growing trend of deepfake scams. For example, a multinational company lost more than $25 million when an employee was convinced by deepfake video conference calls to transfer funds to scammers. These incidents highlight the need for organizations to update their security practices and training to account for advanced spear-phishing tactics like deepfakes.
How did LastPass respond to the incident?
LastPass treated the attempted deepfake phishing as a serious security concern. Because the targeted employee recognized the ruse and promptly reported it, no sensitive information was compromised. The incident underscores the growing threat of deepfake technology in cybersecurity and highlights the importance of vigilance and prompt reporting of suspicious activity by employees to mitigate potential risks.
How could LastPass catch deepfake incidents?
Resemble Detect is an advanced AI model designed to identify deepfake audio in real time. Here's how a tool like it could help protect against incidents such as the LastPass CEO audio deepfake attempt:
Real-Time Detection
Resemble Detect operates in real-time, which means it can analyze audio as it is received. This is particularly important in a fast-paced business environment where decisions are often made quickly, and there may not be time for lengthy analyses. If a suspicious audio message is received, Resemble Detect can immediately analyze it to determine its authenticity.
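The streaming workflow described above can be sketched in a few lines. This is not Resemble Detect's actual API (which is not public in the source); the `score_chunk` function below is a hypothetical stand-in that uses a simple spectral-flatness heuristic where a real system would run a trained neural model. The sample rate, chunk size, and threshold are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed telephony-style sample rate
CHUNK_SECONDS = 1.0    # analyze audio one second at a time
THRESHOLD = 0.5        # hypothetical "likely synthetic" cutoff

def score_chunk(chunk: np.ndarray) -> float:
    """Placeholder detector: returns a pseudo 'synthetic likelihood' score.

    A real detector would run a trained neural model here; this stand-in
    just computes the chunk's spectral flatness, which lands in [0, 1].
    """
    spectrum = np.abs(np.fft.rfft(chunk)) + 1e-12
    flatness = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)
    return float(flatness)  # flat (noise-like) spectra score near 1

def stream_monitor(audio: np.ndarray):
    """Yield (chunk_index, score, flagged) as audio 'arrives' chunk by chunk."""
    step = int(SAMPLE_RATE * CHUNK_SECONDS)
    for i in range(0, len(audio) - step + 1, step):
        score = score_chunk(audio[i:i + step])
        yield i // step, score, score > THRESHOLD

# Simulated incoming call: two seconds of a pure tone, then two of noise.
t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
noise = np.random.default_rng(0).normal(0, 0.5, 2 * SAMPLE_RATE)
for idx, score, flagged in stream_monitor(np.concatenate([tone, noise])):
    print(f"chunk {idx}: score={score:.3f} flagged={flagged}")
```

The key design point is that each chunk is scored as soon as it arrives, so a suspicious call can be flagged while the conversation is still in progress rather than after a post-hoc review.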
AI-Based Analysis
The tool uses state-of-the-art neural models to scrutinize audio files. It looks for signs of manipulation by examining the audio data for artifacts that are typically inaudible to humans but can be detected by AI. This includes analyzing the spectrogram of the audio for unusual patterns that may indicate tampering.
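To make the spectrogram-analysis idea concrete, here is a minimal sketch. The source does not describe Resemble Detect's internals, so this uses a hand-crafted cue as a stand-in: some speech synthesizers produce output with unusual energy distribution in the upper spectrum, and comparing a high-band energy ratio against values measured on genuine recordings is one simple heuristic. A production detector would learn such cues automatically with a neural network; all function names and cutoffs here are illustrative assumptions.

```python
import numpy as np

def spectrogram(signal: np.ndarray, n_fft: int = 512, hop: int = 256) -> np.ndarray:
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq_bins, frames)

def high_band_energy_ratio(spec: np.ndarray, cutoff_bin: int) -> float:
    """Fraction of total spectrogram energy above `cutoff_bin`.

    An abnormally low ratio relative to genuine recordings of the same
    speaker can indicate band-limited synthetic audio.
    """
    energy = spec ** 2
    return float(energy[cutoff_bin:].sum() / energy.sum())

rng = np.random.default_rng(1)
full_band = rng.normal(0, 1, 16_000)
# Crude stand-in for vocoder output: zero out the upper frequencies.
spectrum = np.fft.rfft(full_band)
spectrum[2_000:] = 0
band_limited = np.fft.irfft(spectrum)

for name, sig in [("full band", full_band), ("band-limited", band_limited)]:
    ratio = high_band_energy_ratio(spectrogram(sig), cutoff_bin=128)
    print(f"{name}: high-band energy ratio = {ratio:.3f}")
```

In this toy example the band-limited signal's high-band ratio collapses toward zero while the genuine full-band signal keeps roughly half its energy above the cutoff; a neural detector generalizes this idea to many subtler, learned artifacts.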