Resemble AI gives Audible's security team a two-layer defense against AI-generated audio threats. Resemble Detect scans content in real time, flagging synthetic or manipulated voices before they reach listeners, while PerTh embeds an inaudible watermark in every authorized track so provenance can be verified end-to-end.
Because the watermark survives model training, compression, and re-encoding, Audible can prove that a narrator's voice was used without consent if it later appears in unauthorized AI output. The result: a catalog that stays authentic, and narrators whose IP remains protected even as voice-cloning tools proliferate.
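The two-layer decision logic implied above can be sketched as a small truth table. This is an illustrative sketch only: `classify` and its inputs are hypothetical stand-ins for a Resemble Detect result and a PerTh watermark lookup, not real SDK calls.

```python
from enum import Enum


class Verdict(Enum):
    VERIFIED = "authentic narration, watermark intact"
    UNVERIFIED = "human-sounding audio with no provenance mark"
    TRAINING_MISUSE = "AI output carrying a narrator's watermark"
    SYNTHETIC = "unwatermarked AI-generated or manipulated audio"


def classify(is_synthetic: bool, watermark_present: bool) -> Verdict:
    """Combine a detection flag with a watermark check (hypothetical interface).

    is_synthetic      -- what a Resemble Detect scan would report
    watermark_present -- whether a PerTh watermark was recovered from the audio
    """
    if is_synthetic:
        # A watermark inside AI output is evidence the model was trained
        # on watermarked, authorized narration without consent.
        return Verdict.TRAINING_MISUSE if watermark_present else Verdict.SYNTHETIC
    return Verdict.VERIFIED if watermark_present else Verdict.UNVERIFIED
```

The interesting cell is synthetic-plus-watermark: because PerTh survives model training, the watermark resurfacing in cloned audio is itself the proof of misuse.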
Resemble Detect scans incoming and published audio for signs of AI generation or manipulation, flagging threats before listeners encounter them.
PerTh embeds imperceptible markers into every authorized narration; the watermarks persist through compression, re-encoding, and even model training.
Surviving watermarks prove when a narrator's voice has been used to train an unauthorized AI model, giving voice talent confidence that their likeness stays under their control.
Detect monitors an entire audiobook library at scale: batch-scan new releases, and re-scan the back catalog as detection models improve.
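A catalog-scale scan like the one described above can be orchestrated with a few lines of glue code. The detector interface here is a hypothetical stand-in (a plain callable), not Resemble's actual API; a real deployment would call Detect's service in its place.

```python
from typing import Callable, Iterable, List


def batch_scan(catalog: Iterable[str], scan: Callable[[str], bool]) -> List[str]:
    """Run a detector over every track ID and return the flagged ones.

    `scan` stands in for a Resemble Detect call (hypothetical interface);
    it returns True when a track looks AI-generated or manipulated.
    """
    return [track_id for track_id in catalog if scan(track_id)]


# Stub detector for illustration only; flags IDs marked as cloned.
def stub_detector(track_id: str) -> bool:
    return track_id.endswith("-cloned")


flagged = batch_scan(["bk-001", "bk-002-cloned", "bk-003"], stub_detector)
# flagged == ["bk-002-cloned"]
```

Because the scan function is pluggable, the same loop re-runs over the back catalog whenever an improved detection model ships, which is exactly the re-scan workflow described above.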
Detect continuously retrains against the latest deepfake techniques, so defenses improve as attack methods evolve.
SOC 2 Type II certification is in progress, and the infrastructure is GDPR-ready; deployments run in the cloud or on-prem to match Audible's internal security requirements.