The Deepfake Watchlist is Resemble AI's weekly surveillance of synthetic media incidents, ongoing cases, and disputed content shaping the news cycle. Each week we track confirmed incidents, emerging attack vectors, and claims under investigation, alongside the provenance, detection, and policy threads running underneath them.
★ Featured: The four AI-generated financial journalists
Press Gazette's investigation Prolific finance journalists facing questions over identities, published May 12, documented that four prolific freelance bylines (Nikolai Kuznetsov, Reuben Jackson, Luis Aureliano, and Joe Liebkind), which together account for more than 1,000 articles across Forbes, HuffPost, CoinTelegraph, VentureBeat, TheStreet, and more than 30 other outlets, used AI-generated or stolen headshots and consistently promoted cryptocurrency clients of the same Israeli PR firm, Market Across.
- Category: Fraud / Impersonation
- Type: Attack
- Modality: Image
- Policy / Regulatory: No US or EU statute currently addresses synthetic journalistic personas; defamation and breach-of-contract law would govern any direct enforcement, and Section 230 likely shields the publishing platforms from liability for the freelance identity verification failures.
- Trend: AI-generated identity fabrication moving up the credibility stack from individual impersonation to byline impersonation, with publishing infrastructure unable to verify freelance contributors at scale.
- Attack vector: AI-generated profile photos paired with sparse but persistent online identities (LinkedIn, X, Muckrack pages), used to accumulate publication credentials over years before being weaponized for coordinated cryptocurrency promotion tied to a single PR firm.
- What we saw in the content: The headshots and identity infrastructure surrounding these bylines carry the forensic signatures we train DETECT-3B Omni to flag, for example:
  - Nikolai Kuznetsov's LinkedIn profile picture scans as 100% AI-generated per the Identifai detection tool, his archived personal site lists a Ramat Gan address shared with InboundJunction (a company that has the same founders and staff as Market Across), and his stated hobby of Brazilian Jiu Jitsu matches the same hobby listed on the LinkedIn page of Market Across co-founder Elad Mor.
  - Reuben Jackson's current X profile picture scans as fully AI-generated per the same detection tool, though older images of the persona do not carry AI hallmarks, consistent with an identity that was retroactively rebuilt with synthetic imagery as the byline scaled.
  - Luis Aureliano's Investing.com profile image scans as 0% AI-generated but bears close resemblance to a chef who lives in Tel Aviv, consistent with stolen-identity reuse rather than AI synthesis.
  - Joe Liebkind's profile image was repurposed from a free-use Flickr photo that has been used in turn on the Wikipedia page for the term "hipster," a credibility shortcut that does not require AI at all, and Investopedia and Tech.EU have now quietly removed Liebkind articles after Press Gazette contacted them.
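The triage logic behind findings like these is simple enough to sketch. A minimal, hypothetical example: `classify_headshot`, `synthetic_score`, and `reverse_image_hits` are our illustrative names, standing in for a real image-forensics model (the Identifai scores above, or DETECT-3B Omni in our pipeline) and a reverse-image search; the scores below are stubbed, not real measurements.

```python
# Hypothetical triage pass over freelance-byline personas. The two input
# signals are stubbed: synthetic_score stands in for an image-forensics
# model's output, reverse_image_hits for a reverse-image search.

def classify_headshot(synthetic_score: float, reverse_image_hits: int) -> str:
    """Map two forensic signals to the failure modes seen in this case."""
    if synthetic_score >= 0.9:
        return "ai-generated"      # the fully synthetic headshot pattern
    if reverse_image_hits > 0:
        return "stolen-or-stock"   # the reused real-person / stock pattern
    return "no-image-flag"         # the identity may still be fake for other reasons

personas = {
    # name: (stubbed synthetic score, stubbed reverse-image match count)
    "persona_a": (1.00, 0),
    "persona_b": (0.00, 3),
    "persona_c": (0.12, 0),
}

for name, (score, hits) in personas.items():
    print(name, classify_headshot(score, hits))
```

The point of the third bucket is the one this case makes: a 0% synthetic score clears nothing, because stolen photographs and free-use stock defeat generation detectors by construction.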

The deepfake conversation in 2026 has centered on individual impersonation: cloned CEO voices, face-swapped video calls, manufactured intimate imagery. This case is different. The operation built four credible-looking journalists from sparse online profiles, AI-generated headshots, and the gradual accumulation of bylines in publications that could not verify their own freelance contributors. Each byline is a credibility deposit that paid out years later when the same writer started recommending specific crypto tokens.
The financial trail is the part that makes this stick. Kuznetsov's archived personal site shares a Ramat Gan address with InboundJunction, a company with the same founders and staff as Market Across, whose blockchain clients these writers repeatedly promoted. Liebkind, in his Investopedia and TechinAsia bylines, pushed Gladius, a crypto startup that raised $12.7 million through an initial coin offering before collapsing in 2017. Investopedia and Tech.EU quietly removed Liebkind articles after Press Gazette contacted them this week, which is the slowest possible form of detection, arriving years after the harm. Press Gazette has now caught two of these in the past few months; the Margaux Blanchard freelancer published in Wired and Business Insider was the first. Two confirmed cases is not yet a pattern, but it is also not a coincidence.
1. Iranian-affiliated channels seeded multiple AI-generated US-military-defeat videos in a single week
Two Misbar fact-checks within a week, Video Claiming U.S. Aircraft Carrier Was Set Ablaze by Iranian Missiles Is AI-Generated on May 10 and AI-Generated Video Does Not Show U.S. Aircraft Damaged by Iranian Fire on May 12, confirmed via Hive Moderation analysis that multiple videos circulating on X as evidence of Iranian strikes against US military assets were AI-generated, with one of the videos amplified by an account branded @IRGC_IRAN_News.
- Category: Political / Electoral
- Type: Attack
- Modality: Video
- Policy / Regulatory: No US platform policy required AI labeling on the videos before they spread, and the Iran-affiliated amplifier accounts remain active.
- Trend: AI-generated battlefield imagery is becoming a routine accompaniment to active geopolitical tension, distinct from slower-burning state-actor influence operations like Storm-1516 because the goal is short-window perception management during live conflict rather than long-arc election interference.
- Attack vector: AI video generation depicting recognizable US military assets in flames, distributed via Iran-affiliated and IRGC-branded X accounts during a week of heightened US-Iran tension.
These are not particularly sophisticated videos. The aircraft carrier clip and the second clip both carry the visual flatness of consumer-grade video generators, and Hive Moderation flagged both in the same week they were posted. What makes them worth tracking is not the technical level but the rhythm of the deployment: multiple AI-generated US-military-defeat clips appearing in a single week on accounts that brand themselves as adjacent to Iranian state media, during a period when an audience was actively scanning social media for confirmation of what was happening on the ground.
The pattern echoes the Storm-1516 playbook Bloomberg documented last month, but at a much faster tempo. Storm-1516 produces near-continuous content for slow-burn election interference. The carrier and aircraft videos are the fast-twitch version, generated and pushed during specific geopolitical flashpoints, and the carrier clip accumulated significant view counts before the fact-check arrived. Article 50 of the EU AI Act, whose enforcement timeline shifted in last week's Omnibus deal, will eventually require AI-generated content to be labeled at the point of distribution. Whether that label arrives before the next clip in the cycle, or after it, is the operational question.
2. Paris prosecutors move from summons to seeking charges against Musk and X
The Paris public prosecutor's office announced on May 7 that it has escalated its investigation into Elon Musk and X to a criminal probe, now seeking charges including complicity in possessing and distributing child sexual abuse material, dissemination of non-consensual images, denial of crimes against humanity, and manipulation of an automated data processing system as part of an organized group, following the April 20 no-show by Musk and former CEO Linda Yaccarino for summoned voluntary interviews.
- Category: CSAM / NCII
- Type: Response
- Modality: Image, Video
- Policy / Regulatory: The manipulation-of-data-processing charge under organized-group structure is unusual for European prosecutors targeting a platform CEO, and it carries materially higher penalties than civil platform-liability frameworks.
- Trend: European prosecutors and regulators moving from platform-level fines (the EU's 120 million euro DSA penalty against X in 2025) to potential criminal charges against named individuals, indicating that the locus of enforcement has shifted from administrative agencies to public prosecutors.
- Attack vector: Not applicable as this is a regulatory response to documented platform failures around the Grok AI system that generated sexualized deepfake images and Holocaust denial content.
Last week we wrote that the Paris prosecution was still open with no resolution. This week the resolution arrived. Prosecutors confirmed the investigation has been escalated to a criminal probe, with charges sought against Musk, X, and others for complicity in possessing and distributing child sexual abuse material, dissemination of non-consensual images, denial of crimes against humanity, and manipulation of an automated data processing system as part of an organized group. The April 20 no-show by Musk and Yaccarino did not stop the process; it accelerated it.
The organized-group framing of the data-processing charge is the part to pay attention to. It is what allows prosecutors to argue that the Grok deepfake controversy and the algorithmic content amplification were not isolated failures but an integrated commercial system, and it is the basis for the March 2026 alert prosecutors sent to the US DOJ and SEC suggesting the Grok deepfake controversy "may have been deliberately orchestrated to artificially boost the value of the companies X and xAI." Whichever defensive playbook X mounts in response, every other US platform will be reading it.
3. UK working group asks schools to take down student photos amid documented AI sextortion campaign
The Guardian first reported, and Police Professional confirmed, that the Early Warning Working Group, a UK coalition that includes the NSPCC, the Internet Watch Foundation, Education Scotland, the Welsh Government, the Northern Ireland Safeguarding Board, and the National Crime Agency, issued guidance this week urging schools to review and remove pupil photos from public-facing websites after a single sextortion incident involving an English secondary school produced 150 manipulated images that the IWF classified as AI-generated CSAM. It is the first time a UK working group has formally recommended de-publishing student imagery as a defensive measure.
- Category: CSAM / NCII
- Type: Response
- Modality: Image
- Policy / Regulatory: The UK in February 2025 became the first country to ban possession of AI tools designed for CSAM, which means the underlying generation is already a criminal offense, but enforcement against international perpetrators remains the gap the guidance addresses; UK Safeguarding Minister Jess Phillips called the pattern a "deeply worrying emerging threat."
- Trend: AI-CSAM reports to the IWF more than doubled year over year, from 199 in 2024 to 426 in 2025, with girls accounting for 94% of victims, and Childline's Report Remove service logged 394 sextortion attempts in 2025, a 34% year-on-year increase.
- Attack vector: Sextortion gangs, with negotiation scripts associated with overseas organized criminal infrastructure, scraping school-published photographs of pupils, generating AI-manipulated nude images, and demanding payment from the depicted children.
The single incident that anchors the guidance is the kind of detail that does not abstract well. One English secondary school. 150 AI-manipulated images of pupils. All of them classified as CSAM under UK law, all of them hashed and shared with major platforms by the IWF. The Confederation of School Trusts said it would carefully consider the guidance with its members, who collectively educate around four million children in England. A school photograph that the school itself published, of a child, became the raw material for organized criminals running scripted negotiation against that child's family.
The guidance asks schools to do something that should not need to be asked, and it is also the kind of intervention that scales poorly, because individual schools cannot be the line of defense for a category of attack that is automated, organized, and international. The UK has been the most aggressive jurisdiction on AI-CSAM legislation; the February 2025 possession ban was a world first. The guidance this week is what the legislative work cannot reach: the photographs already published, of children still in school, of which there are millions.
4. Arctic Wolf publishes the most detailed forensic teardown to date of BlueNoroff's Web3 deepfake meeting campaign
Arctic Wolf Labs' April 27 forensic report BlueNoroff Uses ClickFix, Fileless PowerShell and AI-Generated Zoom Meetings to Target Web3 Sector documented an ongoing North Korean APT campaign in which attackers run multi-month intrusions against Web3 companies, using fake Zoom meetings populated with three classes of participant: stolen footage of previous victims, AI-generated stills with C2PA metadata identifying them as OpenAI GPT-4o outputs, and deepfake composite videos assembled in Adobe Premiere Pro 2021 from a 73-video project file recovered from the attackers' media server.
- Category: Fraud / Impersonation
- Type: Attack
- Modality: Video, Audio, Image
- Policy / Regulatory: Not applicable; this is private-sector forensic intelligence, not a regulatory or legal response.
- Trend: State-actor deepfake operations developing self-reinforcing pipelines where each victim's webcam footage becomes raw material for impersonations in the next attack, distinct from one-shot impersonation fraud because the attack infrastructure compounds with each successful intrusion.
- Attack vector: Multi-stage social-engineering attack starting from a typosquatted Zoom or Teams meeting domain, using deepfake meeting participants generated from GPT-4o stills layered onto motion captured with Windows Game DVR and composited in Adobe Premiere, with payload delivery via fileless PowerShell after victim engagement.
The campaign Arctic Wolf documented began as far back as January 23 against a North American Web3 company, persisted for 66 days, and reached full post-exploitation in under five minutes after the initial click. The attackers identified roughly 100 additional targets, 41 in the US, 11 in Singapore, 7 in the UK, with 80% in crypto, blockchain, or finance, and 45% of named targets at CEO or founder level. The composite pipeline is the part with the most explanatory power: GPT-4o stills for the faces, Windows Game DVR for body motion capture, Adobe Premiere Pro 2021 for the composite, FFmpeg for export, more than 80 typosquatted Zoom and Teams domains for the call infrastructure. The C2PA metadata recovered from the stills identifies OpenAI's GPT-4o as the source generator, the kind of cryptographic confirmation that has been rare in state-actor deepfake reporting.
What this represents at the threat-model level is a self-reinforcing deepfake pipeline. Each successful intrusion produces raw video of a real executive in a real meeting, which becomes source material for the deepfake composite used against the next target in the chain. The attack infrastructure compounds with each intrusion, the persona library grows with every new victim, and the deepfake itself is now one piece of an operating, integrated attack system.
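The first hop in that chain, the typosquatted meeting domain, is the one piece that is mechanically checkable before any video renders. A minimal sketch, assuming a short allowlist of real conferencing hosts; the two-edit threshold is an illustrative choice, not a tuned value, and a production filter would also need to handle subdomain and homoglyph tricks this ignores.

```python
# Flag meeting-invite hosts that sit one or two character edits away from
# a real conferencing domain: the classic typosquat signature.

LEGIT_HOSTS = ["zoom.us", "teams.microsoft.com", "meet.google.com"]

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_typosquatted(host: str, max_edits: int = 2) -> bool:
    """True if host is near, but not equal to, a legitimate meeting domain."""
    return any(0 < edit_distance(host, legit) <= max_edits
               for legit in LEGIT_HOSTS)

print(looks_typosquatted("zoorn.us"))   # lookalike of zoom.us, flagged
print(looks_typosquatted("zoom.us"))    # exact match, not flagged
```

With more than 80 lookalike domains in the BlueNoroff infrastructure, a filter this crude at the invite-link layer would have fired long before the deepfake participants mattered.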
Honorable mentions
LiveLaw, Delhi High Court Protects Entrepreneur Aman Gupta's Personality Rights; Orders Takedown Of AI, Obscene Content, May 11. Two personality-rights rulings landed in the Delhi High Court in a single seven-day window. On May 7, Justice Tushar Rao Gedela granted boAt Lifestyle co-founder Aman Gupta a broad ex parte injunction restraining 44 entities from misusing his likeness, including in AI-generated deepfake content. Days later, Justice Mini Pushkarna of the same court directed X and other platforms to take down AI deepfake videos of Congress MP Shashi Tharoor falsely depicting him praising Pakistan. The closest thing to AI-likeness precedent India has produced so far.
The Aviationist, Russian Victory Day Parade Takes Bizarre Turn with CGI Flyover, May 10. Russian state media broadcast AI-generated CGI footage of the Russian Knights and Swifts during the May 9 Victory Day parade flyover. The NATO flags some viewers flagged as AI errors are in fact real markings the team carries to commemorate past performance locations, but the use of synthetic footage in an official state commemoration is the first instance we have tracked of an authoritarian state self-publishing AI content as part of formal military pageantry.
ComingSoon, Dwayne Johnson Sets the Record Straight Amid AI-Generated Photos of His Wife, May 8. Dwayne Johnson confirmed at the Met Gala that viral AI-generated images of his wife Lauren Hashian appearing pregnant were fabricated. Hashian had already reposted the fakes to her own Instagram with a "Make that TWO! I had TWO BABIES!!" caption. The Brand/Likeness response increasingly looks like this: lighter, faster, public, and absent any legal action because no legal action is meaningfully available at the celebrity-pregnancy-rumor end of the spectrum.
The pattern
- Synthetic identity is moving up the trust chain. The four financial journalists, the Iranian-affiliated military videos, and the BlueNoroff Zoom-meeting personas are not impersonating individuals. They are impersonating institutional categories of trust: the freelance byline, the eyewitness footage of an attack, the executive on a video call. The defensive question shifts accordingly. Detecting whether a face is AI-generated is necessary but no longer sufficient; the harder problem is verifying that the credentialed-looking entity behind the face is real.
- The same forensic primitive is being deployed at radically different attack tempos. AI-generated profile pictures layered on top of accumulated credibility infrastructure drove both the journalist fraud and the BlueNoroff intrusion campaign. Press Gazette caught the journalists after years of byline accumulation. Arctic Wolf caught BlueNoroff during a 66-day intrusion. The same primitive scales from a multi-year credibility play to a 66-day Web3 heist, depending on the patience of the operator and the value of the target. Defensive products need to think in both timeframes.
- The defensive infrastructure is being built one case at a time. This week, Paris prosecutors escalated to seeking criminal charges against Musk and X, UK schools were asked to take down student photos, Delhi's High Court issued two personality-rights takedown orders. None of these is the systemic supply-side ban the EU agreed to last week. All of them are case-by-case, jurisdiction-by-jurisdiction defensive moves. The supply-side ban is the only structural defense any government has tested. Everyone else is fighting it retail, one prosecutor, one school, one trademark filing at a time.
Watching next week
- Press Gazette follow-up. Whether more publications quietly remove byline-attributed articles from the four named journalists, and whether the Margaux Blanchard precedent prompts any affected outlet to publish a formal retraction policy for AI-fabricated freelance bylines.
- Paris prosecutors' next move. Whether formal charges are filed against Musk, Yaccarino, and X following this week's investigation escalation, and whether any US DOJ or SEC public response to the March 2026 alert materializes.
- EU Omnibus formal adoption. The agreed deal must pass the European Parliament and Council before the original August 2026 deadlines kick in by default. Watch for the trilogue calendar and any last-minute changes to Article 50 transparency obligations.
- UK Confederation of School Trusts response. Whether CST and individual academy trusts begin de-publishing pupil imagery in response to the EWWG guidance, or push back on the practicality of the recommendation.
The Deepfake Watchlist publishes every Friday. Subscribe to receive it in your inbox, or follow Zohaib Ahmed on LinkedIn for the weekly social companion. Track every documented incident in the Resemble Deepfake Incident Database, and read the full methodology in our 2025 Deepfake Threat Report.