LibriVox's strength is its community of volunteer narrators, and Resemble AI exists to protect exactly that. Resemble Detect monitors the catalog for AI-generated or spliced audio, flagging content that doesn't match the original recording for moderator review before it reaches listeners.
PerTh neural watermarking adds an invisible authenticity layer on top. Every authorized recording can be verified as coming from the original contributor, and if a volunteer's voice ever appears in unauthorized AI training data, the watermark makes the misuse provable — giving narrators confidence that their donated work stays theirs.
Watermark each narrator's contribution so that any cloning or repurposing of their voice leaves a verifiable trail.
Resemble Detect continuously scans the LibriVox library for AI-generated inserts or replacements, keeping the catalog authentic.
Confirm that a submitted recording came from the listed volunteer — not a synthetic voice standing in for real narration.
PerTh watermarks persist through MP3 compression, streaming re-encoding, and even subsequent AI model training runs.
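PerTh's actual watermarker is proprietary and perceptual, but the robustness idea can be illustrated with a classic spread-spectrum toy: a low-amplitude keyed pattern is spread across every sample, so even after coarse quantization (a crude stand-in for lossy re-encoding) the mark remains statistically detectable by correlation. Every detail below, including the amplitude and the quantization step, is an illustrative assumption, not PerTh's algorithm.

```python
import random

def chips(key: int, n: int) -> list:
    """Keyed pseudorandom +/-1 spreading sequence (toy example, not PerTh)."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(signal, key, alpha=0.1):
    """Spread a low-amplitude keyed pattern across every sample."""
    return [s + alpha * c for s, c in zip(signal, chips(key, len(signal)))]

def correlate(signal, key):
    """Watermark evidence: a large positive correlation means the mark is present."""
    return sum(s * c for s, c in zip(signal, chips(key, len(signal))))

rng = random.Random(99)
host = [rng.uniform(-1.0, 1.0) for _ in range(4096)]  # stand-in for narration audio
marked = embed(host, key=1234)

# Crude stand-in for lossy re-encoding: quantize samples to a coarse grid.
lossy = [round(x * 16) / 16 for x in marked]

score = correlate(lossy, key=1234)
print(round(score))  # remains far above the noise floor despite quantization
```

The spread-spectrum design choice is what gives robustness: because the mark lives in thousands of samples at once, no single distortion step (compression, resampling, clipping a few seconds) removes enough of it to defeat the correlation detector.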
Detect retrains continuously against the newest speech synthesis systems, staying ahead of novel deepfake techniques.
Simple API integration that respects LibriVox's open ethos. No heavy infrastructure or proprietary player required.
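As a hypothetical sketch of how lightweight that integration could be, the snippet below packages an uploaded recording for an authenticity check before publication. The endpoint URL, field names, and payload shape are illustrative assumptions, not Resemble AI's documented API.

```python
import base64
import json

# Placeholder endpoint, not a real URL; a real integration would use
# the address and credentials from Resemble AI's own documentation.
DETECT_ENDPOINT = "https://example.invalid/detect"

def build_detect_request(audio_bytes: bytes, narrator_id: str) -> dict:
    """Bundle the audio and minimal metadata into a JSON-ready payload.

    Field names here are hypothetical; only the shape of the idea matters:
    one small POST per upload, no extra infrastructure on LibriVox's side.
    """
    return {
        "audio_b64": base64.b64encode(audio_bytes).decode("ascii"),
        "metadata": {"narrator_id": narrator_id, "source": "librivox"},
    }

payload = build_detect_request(b"\x00\x01", "volunteer-42")
print(json.dumps(payload, indent=2))
```

In a real deployment the moderator queue would submit this payload over HTTPS and hold the recording until the authenticity verdict comes back.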