The deepfake arms race has created a deeply unfair fight. On one side: increasingly powerful AI creation tools with slick interfaces anyone can use. On the other: detection capabilities hidden behind enterprise paywalls, academic jargon, and technical complexity.
It’s like we’ve handed everyone flamethrowers while locking the fire extinguishers in high-security vaults that require advanced degrees to open.
We’ve had enough of this asymmetry, so we built something that flips the script: a deepfake detection service that works through WhatsApp. Text 218-NO-FAKES, send any suspicious media, and get an immediate analysis that tells you not just whether something is manipulated, but how it was likely created.
The Simplicity Imperative
When we started designing this service, we obsessed over one thing: removing friction. Every step, every requirement, every technical hurdle would mean fewer people using it.
So we opted for the most ubiquitous messaging platform on Earth. No downloads. No accounts. No subscriptions. No specialized knowledge required.
Just send media to 218-NO-FAKES.
Our detection technology works across modalities – analyzing images, video, and audio with 94% accuracy. But accuracy isn’t enough if the tools remain inaccessible.
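For the technically curious, here is a rough sketch of the flow under the hood: the messaging platform delivers incoming media to a webhook, the service routes it to a detector for that modality, and a verdict comes back as a reply. This is an illustration, not our production pipeline; the payload fields (media_type, media_url), the detector bodies, and the JSON reply shape are all hypothetical stand-ins.

```python
# Rough sketch of the service flow (not our production code): a webhook
# receives a media message, routes it to a per-modality detector, and
# replies with a verdict. Payload fields, detector bodies, and the reply
# shape are hypothetical placeholders.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

def detect_image(blob: bytes) -> dict:
    # Placeholder: run the image-forensics model here.
    return {"verdict": "likely AI-generated", "confidence": 0.94}

def detect_video(blob: bytes) -> dict:
    # Placeholder: sample frames, run per-frame and temporal checks.
    return {"verdict": "likely authentic", "confidence": 0.91}

def detect_audio(blob: bytes) -> dict:
    # Placeholder: extract spectral features, run an anti-spoofing model.
    return {"verdict": "likely synthetic", "confidence": 0.89}

DETECTORS = {"image": detect_image, "video": detect_video, "audio": detect_audio}

@app.route("/webhook", methods=["POST"])
def webhook():
    msg = request.get_json(force=True)   # platform-specific payload (assumed shape)
    media_type = msg.get("media_type")   # "image", "video", or "audio"
    media_url = msg.get("media_url")     # in practice, resolved via the platform's media API

    detector = DETECTORS.get(media_type)
    if detector is None:
        return jsonify({"reply": "Please send an image, video, or audio clip."})

    media_bytes = requests.get(media_url, timeout=15).content
    result = detector(media_bytes)
    reply = f"Analysis: {result['verdict']} ({result['confidence']:.0%} confidence)"
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(port=8000)
```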
Real-World Verification in Action
We’ve been testing this service across a spectrum of everyday scenarios where the line between authentic and synthetic content gets blurry. Let me walk you through a few examples:
The Digital Receipt Runaround
We took a ChatGPT-generated receipt – the kind of thing someone might use to fake an expense report or establish a false alibi – and sent it to our detector. Within seconds, it flagged the telltale patterns of AI generation. The subtle pixel-level irregularities that human eyes miss? Our system caught them immediately. No more wondering whether that receipt is legitimate.
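To give a flavor of what “pixel-level irregularities” means without exposing our actual models, here is a classic and much simpler forensic signal called error level analysis: re-save the image as a JPEG at a known quality and look at how different regions respond to recompression. The file name is a placeholder and the score has no fixed cutoff; our detector relies on trained models, not this heuristic alone.

```python
# Toy illustration of a pixel-level forensic signal (error level analysis),
# not the model we run in production. Requires Pillow and NumPy.
import io
import numpy as np
from PIL import Image

def error_level(image_path: str, quality: int = 90) -> float:
    original = Image.open(image_path).convert("RGB")

    # Re-save at a fixed JPEG quality and reload.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")

    # The per-pixel residual highlights regions that respond to
    # recompression differently from the rest of the image.
    diff = np.abs(
        np.asarray(original, dtype=np.int16)
        - np.asarray(recompressed, dtype=np.int16)
    )
    return float(diff.mean())

if __name__ == "__main__":
    # "receipt.jpg" is a placeholder path; compare the score against
    # scores from known-authentic receipts rather than a fixed cutoff.
    print(f"mean error level: {error_level('receipt.jpg'):.2f}")
```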
The Forwarded Fear-Mongering
We’ve all been there – someone forwards an alarming image on iMessage that looks just credible enough to cause anxiety. Is that really a hurricane heading toward your city? Did that politician actually do that embarrassing thing? Our system lets you share that content straight from your messaging apps to 218-NO-FAKES, providing clarity when you need it most.
The “Wait, Was That Really My Bank?” Check
Perhaps most chilling is how easily voice synthesis can now mimic people we trust. We tested a synthetic voicemail that sounded convincingly like it came from a financial institution. Our detector immediately highlighted the voice as synthetic, pointing out the micro-patterns that distinguish AI-generated speech from human voices.
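For readers who want to see what those micro-patterns might look like in practice, the sketch below extracts the kind of spectral statistics that speech anti-spoofing systems often build on, using librosa. It is a feature-extraction toy, not our detector: the file name is a placeholder, and in the real system these features (or raw spectrograms) feed a trained classifier rather than any hand-set rule.

```python
# Toy feature-extraction sketch for synthetic-voice screening, not our
# detector. Requires librosa.
import librosa

def voice_features(path: str) -> dict:
    # Load mono audio at 16 kHz, a common rate for speech models.
    y, sr = librosa.load(path, sr=16000, mono=True)

    # Spectral flatness: synthesized speech can show unusually smooth,
    # noise-like spectra in regions where natural voices do not.
    flatness = librosa.feature.spectral_flatness(y=y)

    # MFCCs summarize the spectral envelope of the voice over time.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)

    return {
        "mean_spectral_flatness": float(flatness.mean()),
        "mean_mfcc_variance": float(mfcc.var(axis=1).mean()),
    }

if __name__ == "__main__":
    # "voicemail.wav" is a placeholder path. In practice these features
    # (or raw spectrograms) would be scored by a trained anti-spoofing model.
    print(voice_features("voicemail.wav"))
```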
Join the Verification Movement
Text 218-NO-FAKES on WhatsApp and try it yourself. Send that suspicious image. Forward that weird audio clip. Check that too-perfect video.
The more people with verification tools, the less incentive there is to create malicious deepfakes in the first place. It’s a simple equation: when fakes are spotted as quickly as they’re made, deception stops paying off.
This is just the beginning. We’re already working on enhanced detection capabilities, additional platforms, and deeper educational resources.
Because in a world increasingly shaped by artificial intelligence, the power to discern truth shouldn’t be artificial, too.
It should belong to all of us.