We Built a Deepfake Detection Bot for X, Because We All Deserve to Know What’s Real

Jan 30, 2026

Today we’re releasing @resemble_detect, a free bot that lets anyone on X check whether an image or video has been AI-generated or manipulated.

We’re doing this because social media platforms are flooded with synthetic media and most people have no way to verify what they’re looking at. That’s a problem we can help solve. Here’s how it works, what this tool is for, and what it isn’t.

How It Works

Using @resemble_detect is simple:

  1. Find an image or video on X that you want to verify
  2. Reply to the post and tag @resemble_detect with the phrase “is this fake?”
  3. We’ll analyze the content and reply with a visualization (for images) and a confidence score indicating the likelihood that the content is AI-generated or manipulated

That’s it. Free, public, no account required beyond your existing X profile.
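The trigger logic above can be sketched in a few lines of Python. This is an illustrative sketch only: the `should_analyze` function, its parameters, and the matching rules are hypothetical stand-ins, not the bot's actual implementation.

```python
# Hypothetical trigger check for a reply mentioning the bot.
# The real bot's matching rules may differ; this only mirrors the
# two conditions described above: the tag and the trigger phrase.
TRIGGER_PHRASE = "is this fake?"
BOT_HANDLE = "@resemble_detect"

def should_analyze(reply_text: str, mentioned_handles: list[str]) -> bool:
    """Return True when a reply tags the bot and contains the trigger phrase."""
    tagged = BOT_HANDLE in mentioned_handles
    asked = TRIGGER_PHRASE in reply_text.lower()
    return tagged and asked

# A reply that tags the bot and asks the question triggers an analysis:
print(should_analyze("@resemble_detect is this fake?", ["@resemble_detect"]))  # True
# A reply that tags the bot without the phrase does not:
print(should_analyze("@resemble_detect nice photo", ["@resemble_detect"]))  # False
```

In practice the bot would then fetch the parent post's media and run detection, but those steps depend on platform APIs and are omitted here.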

The detection is powered by Resemble AI’s DETECT technology, the same models that rank at the top of independent benchmarks and process millions of verifications for enterprises, governments, and media organizations.

The Problem Is Bigger Than Deepfakes

The conversation around AI-generated content tends to flatten into a simple binary: real or fake. But the reality is messier.

Content exists on a spectrum of authenticity. An image might be entirely AI-generated, or it might be a real photograph with minor edits, or something in between. A real person’s face swapped onto another body. Authentic footage with manipulated audio. A genuine video clipped and recontextualized to misrepresent what happened. Text, images, audio, video, all of it can be altered in ways ranging from trivial touch-ups to complete fabrication.

Misinformation works the same way. Sometimes it’s entirely invented. Sometimes it’s a real story with one detail changed (a date, a name, a number) and that change transforms truth into falsehood. The manipulation can be subtle or total, like the recent X post by the White House.

Our detection technology handles this spectrum. We can identify fully synthetic content, partially manipulated media, and everything in between. We return a confidence score because the answer often isn’t binary, and unfortunately, this kind of content spreads at the speed of a repost.
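One way to picture a non-binary result is to map a confidence score onto coarse bands along that spectrum. The thresholds and labels below are hypothetical, chosen purely for illustration; they are not the thresholds the DETECT models actually use.

```python
# Hypothetical banding of a manipulation-confidence score (0.0 to 1.0).
# Cutoffs here are illustrative assumptions, not Resemble AI's actual values.
def classify(confidence: float) -> str:
    """Map a confidence score onto a coarse authenticity spectrum."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0.0 and 1.0")
    if confidence >= 0.85:
        return "likely fully synthetic or heavily manipulated"
    if confidence >= 0.40:
        return "possibly partially manipulated"
    return "no strong evidence of manipulation"

print(classify(0.93))  # likely fully synthetic or heavily manipulated
print(classify(0.55))  # possibly partially manipulated
print(classify(0.10))  # no strong evidence of manipulation
```

The point of the sketch is simply that a single score supports graded answers, which matters when content sits somewhere between untouched and fabricated.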

But there’s one dimension where there is no spectrum: consent.

Consent Is Binary

You either have permission to share something or you don’t. There’s no gradient here, no “partially consensual,” no gray area.

This matters because the worst uses of synthetic media aren’t about political misinformation or celebrity face-swaps. They’re about non-consensual intimate imagery. They’re about harassment, exploitation, and abuse. They’re about using AI to create content that real people never agreed to, then posting it publicly to humiliate, threaten, or harm them.

We do not and cannot verify consent.

What we can tell you is whether content appears to be AI-generated or manipulated. What we cannot tell you is whether the subject of that content agreed to its creation or distribution. That determination requires context, investigation, and often legal process that no API can replicate.

This distinction is critical, and we refuse to blur it.

What We Will Not Do

Let us be unambiguous:

We do not condone the creation or distribution of non-consensual synthetic media. Full stop. Whether the content is AI-generated or authentic, if it’s shared without consent, it’s a violation.

We do not process explicit content through this public bot. The @resemble_detect bot is not a tool for analyzing explicit imagery. If you’re trying to use it for that purpose, don’t. Our Terms of Service prohibit it. If this is a recurring issue for you personally, please reach out to us at [email protected] and we will determine whether we can process your requested media.

We find the distribution of CSAM repugnant and will cooperate fully with law enforcement. This should go without saying. We’re saying it anyway.

We built detection technology because we believe people deserve to know when they’re being deceived. We did not build it to enable new forms of abuse. If you’re planning to use our tools to harm someone, find another platform; you’re not welcome here.

Why We’re Releasing This Anyway

We’re aware that any tool capable of detecting synthetic media could theoretically be misused. Someone could use detection to “verify” non-consensual content before sharing it. Someone could use a “real” verdict to add false credibility to manipulated media or claim that if it’s real, it can be posted.

But we believe the alternative is worse. Synthetic media is already everywhere on X. Deepfakes are already being posted, shared, and believed. The asymmetry between creation and detection has left most people defenseless, unable to question what they see, unable to verify what they’re told.

Giving people a free, accessible way to check content doesn’t solve every problem. But it shifts the balance; it creates friction where there was none. It makes “is this real?” a question people can actually answer, instead of something they have to guess at. And just to reiterate: even if something is real or appears authentic, that does not equate to consent.

We’d rather build tools that help people navigate this landscape than pretend the landscape doesn’t exist.

We All Deserve to Know What’s Real

We’re releasing this tool because the current state of synthetic media on social platforms is untenable. People are being deceived at scale, and trust in the media and each other is eroding.

This is our attempt to put detection capabilities directly in the hands of the people who need it most.

It won’t solve everything. It won’t stop bad actors. It won’t verify consent or intent or context. But it will let you ask “is this fake?” and get an actual answer.


Try it now: Tag @resemble_detect on any image or video post with “is this fake?”

Read our Terms of Service: https://www.resemble.ai/resemble-ai-x-bot-terms-of-service/

Questions? Contact us at [email protected].
