As cliché as it sounds, we understand that with great power comes great responsibility. Ethics and morals are at the core of Resemble, from how we run the company to how we build the technology. When we launched Resemble, we knew we would have to address the concern people have about deepfakes, that is, synthetic media impersonating somebody else. With the rise of synthetic voices, ethics is a serious issue, and we want to share how we at Resemble are approaching it as this technology becomes mainstream.
Resemble’s platform is locked down so that only the intended speaker can clone their own voice. During recording, Resemble requires the user to say a set of specific sentences in their own voice, and attempts to misuse this process are easily detected by our algorithms. Once the voice is created, the user owns all rights to it. We do not use that voice data to train other models, nor do we resell it to third-party companies.
For customized solutions, we work with companies through a rigorous process to make sure that the voice they are cloning is licensed for their use and that the proper consents are in place with voice actors.
Resemble Protect – Resemblyzer
Generative models are advancing at a rapid pace, and it is our duty to create the right tools to prevent misuse of the technology wherever we can. We open-sourced Resemblyzer (https://github.com/resemble-ai/Resemblyzer), a powerful package that uses modern deep learning to analyze and compare voices. Resemblyzer helps tackle fake speech detection, speaker verification, and speaker diarization.
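The core idea behind tasks like speaker verification is to map each utterance to a fixed-size embedding and compare embeddings by cosine similarity. The sketch below illustrates that comparison with synthetic vectors standing in for real embeddings; in practice the embeddings would come from Resemblyzer's `VoiceEncoder`, and the 0.75 threshold is an illustrative assumption, not a value from the library.

```python
import numpy as np

# In practice, embeddings would come from Resemblyzer, roughly:
#   from resemblyzer import VoiceEncoder, preprocess_wav
#   encoder = VoiceEncoder()
#   embed = encoder.embed_utterance(preprocess_wav("sample.wav"))
# Here we use synthetic vectors so the sketch is self-contained.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(embed_a: np.ndarray, embed_b: np.ndarray,
                 threshold: float = 0.75) -> bool:
    """Decide whether two utterance embeddings likely share a speaker.

    The threshold is hypothetical; a real system would tune it on
    held-out verification data.
    """
    return cosine_similarity(embed_a, embed_b) >= threshold

# Synthetic demo: a voice, a slightly perturbed copy (same "speaker"),
# and an independent vector (different "speaker").
rng = np.random.default_rng(0)
voice = rng.normal(size=256)
same = voice + 0.1 * rng.normal(size=256)   # small variation: high similarity
other = rng.normal(size=256)                # unrelated: near-zero similarity

print(same_speaker(voice, same))
print(same_speaker(voice, other))
```

The same embedding-distance idea underlies fake speech detection (does this audio match the claimed speaker's enrolled embedding?) and diarization (cluster utterance embeddings by speaker).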
While it is difficult to prevent every misuse of generative technology, we urge consumers to critically evaluate everything they hear, see, and even read.
OPEN SOURCE CONTRIBUTIONS