
X (Twitter)

Flag AI-generated audio and video on X in real time — protect public figures, brands, and newsrooms from deepfake disinformation before it goes viral.

How it works

1. Your app — X Post Flow: an uploaded video or audio post triggers an authenticity scan on X
2. Resemble AI — Deepfake Detection: the post's audio is scanned for synthetic voice and disinformation signatures
3. Your app — Watermark Verification: the PerTh watermark is checked to confirm the originating creator and source
4. Output — Trusted Timeline: users see verified posts and flagged synthetic media on X

Overview

X moves fast, and so do deepfakes. Resemble AI gives trust and safety teams, newsrooms, and brands a way to screen video and audio posts for synthetic manipulation using Resemble Detect, our multimodal deepfake detector. Pipe posts through the Detect API and get an authenticity score in seconds.
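
In practice, that step might look like the minimal sketch below. The endpoint URL, request body, and response field names here are assumptions for illustration only, not the documented Detect API shape:

```python
import json
import urllib.request

# Hypothetical endpoint; consult the Detect API docs for the real URL.
DETECT_URL = "https://app.resemble.ai/api/v2/detect"

def classify(score: float, threshold: float = 0.5) -> str:
    """Map an authenticity score to a moderation label.

    The 0.0-1.0 scale (higher = more likely genuine) and the 0.5
    threshold are assumptions, not documented Detect semantics.
    """
    return "verified" if score >= threshold else "flagged-synthetic"

def scan_post_audio(media_url: str, api_token: str) -> str:
    """Submit a post's media URL to the (hypothetical) Detect endpoint
    and return a moderation label for the timeline."""
    payload = json.dumps({"url": media_url}).encode()
    req = urllib.request.Request(
        DETECT_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return classify(body["score"])  # assumed response field
```

The threshold is a policy knob: a newsroom verification desk might raise it to catch borderline clips, while a high-volume moderation queue might lower it to reduce review load.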

For content you publish yourself, PerTh watermarking embeds an imperceptible signature into every audio clip. If your post gets reuploaded, edited, or passed off as someone else's, the watermark survives re-encoding and confirms the original source. Together, detection and watermarking cover both directions of the deepfake problem on a fast-moving platform.
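
One way to combine the two signals into a single trust decision, sketched with an assumed 0.0–1.0 detection score (higher meaning more likely genuine) and a boolean watermark check — both shapes are illustrative, not documented API behavior:

```python
def trust_label(detect_score: float, watermark_found: bool) -> str:
    """Fold a Detect authenticity score and a PerTh watermark check
    into one timeline label.

    A verified watermark is the strongest signal: it proves the clip
    originated from your own audio regardless of re-encoding. Without
    a watermark, fall back to the detection score alone.
    """
    if watermark_found:
        return "verified-original"
    return "likely-genuine" if detect_score >= 0.5 else "flagged-synthetic"
```

Note the precedence: a low detection score on a watermarked clip still resolves to "verified-original", since the watermark confirms provenance even when the audio has been heavily compressed or edited.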

Features

Synthetic media detection

Score any X post's audio or video for signs of AI generation. Catch manipulated clips before they spread.

PerTh watermarking

Watermark original audio before posting. Prove you're the source even after compression and reuploads.

Newsroom workflow fit

Verify trending clips before citing them. Detect runs in seconds, so it fits inside a live editorial cycle.

Brand impersonation defense

Monitor mentions and flag synthetic voices or videos passing as executives, spokespeople, or brand accounts.

Frame-by-frame audio analysis

Detect model analyzes audio at frame resolution. Short inserts or subtle edits inside a real clip still get flagged.
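
Because scoring is frame-level, a short synthetic insert shows up as a localized dip in the per-frame score series rather than dragging down one clip-wide average. A sketch of how a client might localize such a dip, assuming frame scores on a 0–1 scale (an assumption, not the documented output format):

```python
def find_synthetic_spans(frame_scores: list[float],
                         threshold: float = 0.5) -> list[tuple[int, int]]:
    """Return (start, end) frame ranges (end exclusive) where the
    per-frame authenticity score stays below the threshold.

    These ranges point reviewers at the exact seconds of a clip
    suspected of manipulation, rather than flagging the whole post.
    """
    spans: list[tuple[int, int]] = []
    start = None
    for i, score in enumerate(frame_scores):
        if score < threshold and start is None:
            start = i                      # dip begins
        elif score >= threshold and start is not None:
            spans.append((start, i))       # dip ends
            start = None
    if start is not None:                  # dip runs to end of clip
        spans.append((start, len(frame_scores)))
    return spans
```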

API-first integration

Call Detect from any social listening or moderation pipeline. No platform partnership required to screen content.
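
As an example, a social-listening pipeline might batch candidate posts through a scoring callback and surface only the suspect ones. A sketch, where `score_fn` stands in for whatever Detect client wrapper you use, and the post dict shape is assumed for illustration:

```python
from typing import Callable

def flag_suspect_posts(
    posts: list[dict],
    score_fn: Callable[[str], float],
    threshold: float = 0.5,
) -> list[dict]:
    """Return posts whose media scored below the authenticity threshold.

    Each post dict is assumed to carry 'id' and 'media_url' keys, and
    score_fn(media_url) is assumed to return an authenticity score
    (higher = more likely genuine). Flagged posts carry their score
    so downstream reviewers can triage by severity.
    """
    flagged = []
    for post in posts:
        score = score_fn(post["media_url"])
        if score < threshold:
            flagged.append({**post, "score": score})
    return flagged
```

Injecting the scorer as a callback keeps the pipeline testable offline and lets the same code screen content from any moderation queue, not just X.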

Use cases

  • Screen trending X video posts for deepfake manipulation before republishing or citing
  • Watermark official brand and executive audio so impersonation attempts can be proven fake
  • Protect political figures and public officials from synthetic voice disinformation
  • Power newsroom verification desks with a real-time authenticity score for viral clips
  • Monitor brand mentions for AI-generated endorsements or fabricated spokesperson statements
  • Flag coordinated disinformation campaigns that rely on synthetic audio or video posts


Get complete generative AI security
Book a demo with our team and build it your way.