AI IMPERSONATION AND DETECTION

AI impersonation detection for every conversation.

Executive fraud, fake candidates, and voice scams at scale — Resemble detects deepfake audio, live face swaps, and synthetic voice in real time. Under 300ms. Cloud or on-prem.

EXECUTIVE + WIRE FRAUD

Voice clone + face swap in a single call

CEO/CFO impersonation via coordinated audio and video. Average wire transfer fraud loss: $1.7M per incident.

INTERVIEW + HR FRAUD

Fake candidates passing live screens

Gartner projects 1 in 4 candidate profiles worldwide will be fake by 2028. Deepfake faces and cloned voices defeat standard liveness checks.

VISHING + VOICE FRAUD

Synthetic voice at automated scale

AI voice clones targeting millions of calls per campaign — zero marginal cost per attempt, indistinguishable from the real entity.

THE THREAT IS ALREADY IN YOUR CALLS

The cost of impersonating anyone, in any meeting, is now effectively zero.

A voice clone takes 4 seconds of audio. A face swap runs on a $200 GPU in a browser. The attacks are happening at scale because the tools are cheap, fast, and require no expertise. Detection has to be faster.
ACCESSIBILITY
A voice clone takes 4 seconds of audio. A face swap runs in a browser. Neither requires expertise.
The barrier to impersonating any executive, candidate, or trusted entity is gone. What required a professional studio now requires a recording and an internet connection.
ATTACK SURFACE
BEC starts with email. It ends on a call. Voice and video have become the final and most trusted verification layer.
980 wire transfer fraud cases via live video deepfake in Q3 2025. Fake candidates passing HR screens. Vishing at automated scale. Every conversation is the attack surface.
RESPONSE WINDOW
By the time a wire transfer clears or an offer goes out, the fraud is locked in. Detection happens in the call.
Investigation after the fact costs 50–100x more than real-time detection. The window to stop executive fraud, interview fraud, and vishing is the conversation itself.
ATTACK VECTORS

Passing liveness doesn't mean the person is real.

Three AI impersonation attack types. The delivery method determines which layer of detection applies — and Resemble covers all of them.

HIGH VALUE • TARGETED
Executive + wire fraud
Attack types
BEC + voice layer
CEO / CFO fraud
Real-time video deepfake
Modalities
Image
Video
Audio
Products
Resemble Meetings
Resemble Detect API
What gets faked
Executive voice cloned from 4–20 seconds of public calls, earnings recordings, or media
Face swap applied to live video to defeat liveness checks in the meeting itself
Multiple channels deployed in sequence — WhatsApp, phone, Zoom — to manufacture urgency
How Resemble AI detects it
~4 seconds of audio/video runtime before analysis begins; returns a deepfake detected signal in under 300ms
Audio and video analyzed simultaneously — native to Zoom, Teams, Meet, and Webex
Session log with verdict, modality scores, and Intelligence explanation — audit-ready for incident response
GROWING • SYSTEMATIC
Interview + HR fraud
Attack types
Deepfake interview fraud
Face swap
Liveness bypass
North Korea IT worker scheme
Modalities
Live video
Audio
Products
Resemble Meetings
Resemble Detect API
What gets faked
Face swap superimposed over the candidate's live video feed — defeats standard "camera on" verification
Voice cloned to match the expected identity during the call
Gartner projects 1 in 4 candidate profiles worldwide will be fake by 2028
How Resemble AI detects it
Returns a deepfake detected signal in under 300ms — before the interview ends
Runs natively inside Zoom, Teams, Meet, and Webex — no workflow change required for HR or interviewers
Flag is surfaced during the session, not after an offer has already been extended
HIGH VOLUME • AUTOMATED
Vishing + voice fraud
Attack types
Vishing
Voice spoofing
AI robocall fraud
Synthetic voice scam
Modalities
Audio
Products
Resemble Detect
Resemble Intelligence
What gets faked
Voice cloned from a trusted entity — a bank, government agency, employer, or known contact
Voice spoofing masks caller identity at the network layer
Low value per target, millions of targets — automated delivery makes the volume viable at zero marginal cost
How Resemble AI detects it
Integrates at the network or application layer — telephony formats supported (G.711, G.723.1, PCMu/PCMa)
~4 seconds of audio before analysis begins; returns a deepfake detected signal in under 300ms
Intelligence surfaces broader threat patterns — anomaly signals and natural language explanation of why audio was flagged
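The "~4 seconds of runtime before analysis begins" pattern above is a standard streaming-buffer design. As a rough sketch only — frame size, the callback, and its return shape are assumptions for illustration, not Resemble's actual SDK:

```python
from collections import deque

FRAME_MS = 20     # typical telephony frame size (assumption)
BUFFER_MS = 4000  # ~4 s of audio runtime before analysis begins

class StreamBuffer:
    """Accumulate audio frames until enough runtime exists to analyze.

    `analyze` is a placeholder for a detection call; its signature
    and return value here are hypothetical.
    """
    def __init__(self, analyze):
        self.frames = deque()
        self.buffered_ms = 0
        self.analyze = analyze

    def push(self, frame_bytes):
        self.frames.append(frame_bytes)
        self.buffered_ms += FRAME_MS
        if self.buffered_ms >= BUFFER_MS:
            # Enough runtime collected: flush the buffer and analyze.
            audio = b"".join(self.frames)
            self.frames.clear()
            self.buffered_ms = 0
            return self.analyze(audio)
        return None  # still buffering
```

In this sketch, the first ~4 s of call audio produce no verdict; once the buffer fills, each flush hands a contiguous chunk to the analyzer, which is where the sub-300ms detection budget would apply.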
WHAT DETECTION LOOKS LIKE

From live call to API response

Three outputs across the detection stack — meeting alert, explainability report, and raw API return.
Resemble Meetings: live call alert
Resemble Intelligence: explainability output
Resemble Detect API: raw response
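To picture the raw API response in this gallery, here is an illustrative shape and a minimal consumer. Every field name is an assumption for this sketch, not the actual Detect API schema:

```python
# Hypothetical detection response: a verdict, a per-modality score,
# and a natural-language explanation (field names are assumptions).
sample_response = {
    "verdict": "fake",
    "scores": {"audio": 0.97, "video": 0.91, "image": None},
    "explanation": "Spectral artifacts consistent with neural vocoder output.",
}

def is_flagged(response, threshold=0.9):
    """True if any analyzed modality scores at or above the threshold.

    Modalities not present in the submitted media are None and skipped.
    """
    return any(
        s is not None and s >= threshold
        for s in response["scores"].values()
    )
```

A caller would gate its alerting or session-log workflow on a check like `is_flagged(...)` rather than re-deriving a verdict per modality.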
ONE PLATFORM. EVERY IMPERSONATION VECTOR.

Detect synthetic identity across every conversation channel.

Executive + wire fraud
ATTACK TYPE

CEO/CFO voice clone + real-time face swap

Coordinated audio and video impersonation in a single live call — manufactured to authorize high-value wire transfers.

DETECTION METHOD

Resemble Meetings — native to Zoom, Teams, Meet, Webex

Real-time audio and video stream analysis. Flags synthetic voice and face swap artifacts during the session, not after.

OUTPUT

In-call flag in ~5 seconds

Alert surfaced during the conversation. Session log with verdict, modality scores, and Intelligence explanation for incident response.

Vishing + voice fraud
ATTACK TYPE

Synthetic voice mimicking trusted contacts and institutions

AI voice clone deployed at automated scale — millions of calls per campaign at zero marginal cost per attempt.

DETECTION METHOD

Network or application-layer integration

Supports telephony formats: G.711, G.723.1, PCMu/PCMa, MP3, WAV. Integrates into call center, IVR, or authentication infra.

OUTPUT

Confidence score + Intelligence explanation

Verdict in under 300ms. Intelligence surfaces anomaly signals and explanation of why audio was flagged.

Document fraud
ATTACK TYPE

Fabricated financial documents

Altered bank statements and synthetic income documents on loan or reimbursement applications.

DETECTION METHOD

Document + image analysis

Analyzes PDFs, scanned documents, and images for manipulation and synthesis artifacts.

OUTPUT

Confidence score + metadata

Tamper evidence and anomaly flags per document, structured for compliance reporting.

BUILT FOR EVERY CONVERSATION CHANNEL

One API. Every impersonation vector covered.

Real-time in live calls via Resemble Meetings. Async across recorded sessions, telephony, and uploaded media. Every analysis includes an Intelligence explanation and audit-ready output.
Cloud API or air-gapped deployment
<300ms detection across audio, video + image
51 languages validated for audio detection
160+ generative AI tools covered — including real-time face swap
96%+ accuracy across modalities
Native integrations: Zoom, Teams, Meet, Webex
Zero Retention Mode — submitted media deleted after analysis
Frequently asked questions
How does Resemble detect impersonation during a live Zoom or Teams call?
Resemble Meetings integrates natively with Zoom, Teams, Meet, and Webex. During an active session it analyzes audio and video streams in real time, flagging synthetic voice and face swap activity before the call ends. Detection runs at under 300ms — no post-call processing required.
What's the difference between Resemble Detect and Resemble Meetings?
Resemble Detect is the API. It accepts audio, video, and image inputs and returns a detection result. It integrates into any workflow: authentication pipelines, security dashboards, communication infrastructure. Resemble Meetings is built specifically for live video calls, with native platform integrations and real-time liveness detection during the session itself.
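Since the API accepts audio, video, and image inputs, a caller typically routes media to the right modality before submitting. A minimal sketch, assuming an extension-based mapping (the real API's accepted formats may differ):

```python
from pathlib import Path

# Hypothetical extension-to-modality mapping for routing media
# into a detection request; not the actual Detect API's format list.
AUDIO = {".wav", ".mp3", ".flac"}
VIDEO = {".mp4", ".webm", ".mov"}
IMAGE = {".png", ".jpg", ".jpeg"}

def modality_for(path):
    """Return the detection modality implied by a media file's extension."""
    ext = Path(path).suffix.lower()
    if ext in AUDIO:
        return "audio"
    if ext in VIDEO:
        return "video"
    if ext in IMAGE:
        return "image"
    raise ValueError(f"unsupported media type: {ext}")
```

Raising on unknown extensions keeps unsupported media out of the pipeline instead of letting a request fail later with a less actionable error.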
How do you detect AI-generated candidates in remote job interviews?
Resemble Meetings analyzes the live video and audio feed during the interview session. It detects face swaps and AI-generated or cloned voice audio in real time so interviewers have a signal during the conversation, not after an offer has been made.
Can Resemble detect vishing and synthetic voice fraud on phone calls?
Yes. Resemble Detect supports telephony audio formats including G.711, G.723.1, and PCMu/PCMa, in addition to standard formats like MP3 and WAV. It integrates at the network or application layer to flag synthetic voice in inbound or outbound calls.
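G.711 PCMu is 8-bit mu-law companded audio, and pipelines that only consume linear PCM decode it first. This is the standard ITU-T G.711 mu-law expansion, shown here as a self-contained sketch (the surrounding pipeline is assumed, not part of any Resemble SDK):

```python
import struct

def ulaw_to_pcm16(data: bytes) -> bytes:
    """Decode ITU-T G.711 mu-law (PCMu) bytes to 16-bit linear PCM,
    little-endian, e.g. before handing telephony audio to analysis."""
    out = bytearray()
    for b in data:
        u = ~b & 0xFF                       # mu-law bytes are stored complemented
        t = (((u & 0x0F) << 3) + 0x84) << ((u & 0x70) >> 4)
        sample = (0x84 - t) if (u & 0x80) else (t - 0x84)
        out += struct.pack("<h", sample)
    return bytes(out)
```

The decoded range is ±32124, the defined mu-law maximum; the encoded byte 0xFF maps to silence.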
What does Resemble Intelligence add on top of Detect?
Resemble Detect returns a score. Resemble Intelligence explains it in natural language, with context. It surfaces observable anomalies, provides cited examples of similar synthetic media, and produces output that legal, compliance, and security teams can use directly in investigations and regulatory filings.
Is on-prem deployment available for regulated industries?
Yes. Resemble Detect, Resemble Meetings, and Resemble Intelligence are all available for on-premise and air-gapped deployment. Zero Retention Mode is available for organizations that require submitted media to be permanently deleted after analysis completes — meeting financial services, healthcare, and government data residency requirements.
Get complete generative AI security
Join thousands of developers and enterprises securing their conversations with Resemble AI