Resemble Intelligence

Detection results you can understand

Resemble Intelligence is the explainability layer built to work with DETECT-3B Omni. 

Every detection returns a human-readable forensic breakdown identifying which artifacts triggered the flag, the detected fraud type, and liveness status.

THE PROBLEM

Detection scores flag AI content. They don't offer context or audit trails.

Whether in a live call, compliance review, or onboarding, stakeholders need to document their reasoning and make defensible decisions under time pressure. Intelligence moves beyond a 0–1 verdict to provide a structured case.
STEP 1 - DETECT
Signal acquisition
DETECT-3B Omni returns a confidence score per modality with a binary label.
STEP 2 - ANALYZE
Forensic decomposition
Intelligence examines speaker profile, visual artifacts, fraud patterns, and liveness indicators.
STEP 3 - EXPLAIN
Structured explainability
A structured report is returned containing forensic abnormalities, fraud classification, and a full audit trail.
What intelligence produces

Every detection returns useful insights in one report

Fraud analysts get the attack type and reasoning. Compliance teams get the audit trail. Legal teams get documentation. Trust and safety teams get the triage signal.

Speaker profiling

Identifies language, dialect, emotion, and speaking style to establish a behavioral baseline. This provides the human context required for legal and forensic review, with no prior enrollment or reference audio needed.

Abnormalities

Names specific acoustic and visual artifacts, such as unnatural prosody, timbral inconsistencies, lip-sync irregularities, and skin texture anomalies. The report identifies exactly what was found, not just that a flag was triggered.

Content intelligence

Extracts the core message intent and provides a transcription of the dialogue. Intelligence analyzes the narrative context of the interaction to document exactly what was said and the intent behind the communication.

Expected behavior

Determines liveness to confirm whether a real person was physically present during capture. This includes anti-cheating detection, which screens for behavioral indicators such as neutral affect or irregular eye movements.

Malicious behavior

Attack types and vectors are identified with confidence reasoning, covering virtual kidnapping, executive impersonation, and account takeover. Analysts get a full fraud analysis explaining the intent and confidence level of the detected threat. 

Digital alteration

Flags post-capture manipulation, such as editing, splicing, or tampering, even when the content is not AI-generated. It also checks for misinformation and disinformation, identifying staged recordings or synthetic background replacements.

0 Prior enrollment required
3 Media types: audio, video, image
1 API call for detection and intelligence
Built on Resemble AI

Intelligence: a critical component of complete deepfake detection

Detection answers 'is this synthetic?' Intelligence answers 'why, and what kind?' Access both Detect and Intelligence through the same API call. Used together, they close the loop from flag to decision.
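As an illustration of the one-call pattern, here is a minimal Python sketch. The endpoint URL, request fields, and response fields are assumptions for illustration, not the documented Resemble API contract; the response shape mirrors the sample report further down this page.

```python
# Hypothetical endpoint and payload shape -- the real Resemble API
# contract may differ; consult the official API documentation.
DETECT_URL = "https://app.resemble.ai/api/v2/detect"  # assumed URL


def build_request(media_url: str, include_intelligence: bool = True) -> dict:
    """Build a single request body covering both Detect and Intelligence."""
    return {
        "url": media_url,
        "intelligence": include_intelligence,  # one flag, one call
    }


def summarize(response: dict) -> str:
    """Reduce a detection + intelligence response to a one-line triage signal."""
    verdict = response["result"]             # e.g. "deepfake" or "real"
    score = response["confidence_score"]     # 0-100, as in the sample report
    attack = response.get("intelligence", {}).get("attack_type", "n/a")
    return f"{verdict} ({score}%) attack={attack}"


# Mock response shaped like the sample report shown below on this page
mock = {
    "result": "deepfake",
    "confidence_score": 100,
    "intelligence": {"attack_type": "Political Manipulation"},
}
print(summarize(mock))
```

The point of the sketch: the analyst-facing summary needs no second request, because the Intelligence fields ride along in the same response as the detection verdict.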
DETECT-3B Omni
Resemble's flagship multimodal detection model. Intelligence layers explanation on top of every verdict it produces.
explore detect-3b omni
Resemble Detect
Submit audio, video, and image for detection. Intelligence is available alongside every result.
explore resemble detect
Resemble Meetings
Intelligence runs automatically on every live meeting detection without impacting latency. Every flagged call produces a full forensic report.
explore resemble meetings
SAMPLE REPORT

What an intelligence report looks like

The abbreviated example below is drawn from a real detection run against a video showing President Trump walking unsteadily out of what appeared to be the Walter Reed National Military Medical Center. The video was scored as synthetic and the audio as real.

View full report
Sample report content
Media + overview
Result:
Deepfake
Confidence score:
100%
Risk level:
High
MEDIA TYPE: VIDEO
SUBMITTED: 07/05/26 20:23 UTC
SIZE: 1763.94KB • 6.1s
What is happening
SPEAKER INFO
Visual subject is Donald Trump, no clear verbal communication from him or any other individuals
LANGUAGE, DIALECT
Language = English, Dialect = N/A
CONTEXT
Setting is outside Walter Reed National Military Medical Center; Trump is being physically supported and led by security/medical personnel, suggesting a health crisis
MESSAGE
Aims to convey a narrative of severe physical decline or a medical emergency involving Trump
TRANSCRIPTION/TRANSLATION
[Ambient outdoor noise, shuffling of feet, and muffled background sounds, no intelligible speech]
EMOTION
Subject appears physically distressed, frail, and unstable; assisting individuals appear focused and urgent
What indicators do we see
 
Abnormalities
CONFIDENCE SCORE: 100%

Deformed hands, fingers merging into fabric, distorted facial features, significant lighting inconsistencies, overly smoothed skin, inconsistent signage using an unofficial logo and gibberish text

Assessment: Deepfake

Anatomical distortions
Deformed hands
Distorted faces
Synthetic lighting
Nonsensical artifacts
Illegible text
 
Liveness detection
CONFIDENCE SCORE: 100%

Video lacks biometric consistency; structural failures in the rendering of hands and background faces are definitive indicators of generative AI

Assessment: Not real person

 
Digital alteration
CONFIDENCE SCORE: 100%
  • Exhibits classic generative artifacts and low-fidelity background rendering
  • SynthID detected — a digital watermark used to identify AI-generated media

Assessment: Fully synthetic AI-generated video

What it means
 
Misinformation/disinformation

The video is AI-generated. It does not depict a real event, confirmed by fact-checkers and official sources.

Deepfake: True
 
Anti-cheating detection
CONFIDENCE SCORE: 100%

The video is identified as 100% AI-generated. The use of a deepfake to impersonate a public figure or fabricate a scenario is a definitive indicator of cheating and high-level deception.

Risk level: High
 
Fraud analysis
CONFIDENCE SCORE: 100%

This is a synthetic media file created to impersonate a former president in a compromising medical state. It is designed to spread misinformation regarding his health for political influence or to incite public concern.

Attack type: Political Manipulation
FINAL ASSESSMENT
 
Deepfake, High risk
Frequently asked questions
What is the difference between Resemble Detect and Resemble Intelligence?
Resemble Detect returns a confidence score per modality — a mathematical verdict on whether content is synthetic. Resemble Intelligence produces the human-readable explanation behind that verdict: which artifacts were found, what fraud type was identified, and whether a real person was present. Both are available through the same API call.
Does Intelligence require prior enrollment or context?
No. Speaker profiling, language identification, dialect detection, and liveness assessment all run without any prior enrollment, reference audio, or contextual information about the submission.
What fraud types does Intelligence classify?
Current classification covers virtual kidnapping, executive impersonation, account takeover, synthetic media fraud, and presentation attacks. The classification set expands as new attack vectors are identified.
Is Intelligence available for live call detection in Resemble Meetings?
Yes. Intelligence runs automatically on every detection in Resemble Meetings. Every flagged call produces a full forensic report without any additional configuration.
Can Intelligence detect manipulation that is not AI-generated?
Yes. The digital alteration detection layer flags post-capture manipulation — editing, splicing, or tampering even when the content was not generated by an AI model.
How is the audit trail structured?
Every Intelligence report returns consistent fields — speaker info, abnormalities, fraud classification, liveness status, and confidence per modality in a structured format designed for export to legal, compliance, and regulatory workflows.
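To make the "consistent fields" idea concrete, here is a small Python sketch of consuming such a report. The field names and values are hypothetical (chosen to mirror the sample report on this page), not the documented schema.

```python
# Hypothetical report mirroring the fields named above: speaker info,
# abnormalities, fraud classification, liveness status, confidence per modality.
report = {
    "speaker_info": {"language": "English", "dialect": None},
    "abnormalities": ["deformed hands", "synthetic lighting"],
    "fraud_classification": "political manipulation",
    "liveness": "not_real_person",
    "confidence": {"video": 100, "audio": 12},  # per-modality, 0-100
}


def export_audit_row(r: dict) -> list:
    """Flatten one report into a row for a compliance or legal export."""
    return [
        r["fraud_classification"],
        r["liveness"],
        ";".join(r["abnormalities"]),
        max(r["confidence"].values()),  # worst-case modality score
    ]


print(export_audit_row(report))
```

Because every report carries the same fields, an export pipeline like this stays stable across audio, video, and image submissions.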
Is Intelligence available for real-time voice agent and contact center calls?
Yes. When Resemble Agent Assist flags a synthetic voice on an inbound call, Intelligence provides the forensic context alongside the alert: fraud type, liveness assessment, and confidence reasoning, so agents and supervisors have the full picture before deciding how to respond.
Get complete generative AI security
Join thousands of developers and enterprises securing with Resemble AI