Introducing the Deepfake Detection Dashboard


Ethical Statement

As cliché as it sounds, we understand that with great power comes great responsibility. Ethics and morals are at the core of Resemble, from how we run the company to how we build the tech. When we launched Resemble, we knew we would have to address the concerns people have about deepfakes, i.e., people impersonating somebody else. With the rise of synthetic voices, ethics is a serious issue, and we want to share how we at Resemble are approaching it as this technology becomes mainstream.

Resemble’s platform is locked down so that only the intended speaker can clone their own voice. When recording, Resemble requires the user to read a set of specific sentences in their own voice. Attempts to misuse this process can be easily detected by our algorithm. Once the voice is created, the user owns all rights to that voice. We do not use that voice data to train other models, nor do we resell the voice data to third-party companies.

For customized solutions, we work with companies through a rigorous process to make sure that the voice they are cloning is usable by them and that the proper consents are in place with voice actors.

Materials used through your integration of Resemble and related metadata must be produced by the publisher itself, correctly licensed from the third-party rights holder, used as allowed by the rights holder, or otherwise used legally.

You cannot use AI Voices built by Resemble for:

  • claiming to be from any person, company, administration, or entity without explicit authorization to make that statement, and/or impersonating someone to gain information or privileges illegally;
  • propagating hate speech;
  • discrimination, libel, terrorism, or violent activities;
  • spreading unattributed content or misrepresenting sources;
  • exploiting or manipulating children;
  • making unsolicited phone calls, mass communications, postings, or messages;
  • deceiving or deliberately misleading people.

Resemble Protect – Resemblyzer

Generative models are advancing at a rapid pace. It is our duty to ensure that we create the right tools to prevent misuse of technology wherever we can. We open-sourced Resemblyzer, a powerful package that uses modern AI and deep learning to analyze and compare voices. Resemblyzer helps tackle fake speech detection, speaker verification, and diarization.

While it is difficult to prevent all misuse of generative technology, we urge consumers to critically evaluate everything they hear, see, and even read.



A Python package to analyze and compare voices with deep learning. Resemblyzer can be used for speaker verification, diarization, fake speech detection, and more.
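At its core, this kind of voice comparison maps each utterance to a fixed-size embedding vector and measures how close two embeddings are, typically with cosine similarity: embeddings from the same speaker score near 1, while different speakers score lower. The following is a minimal conceptual sketch of that comparison step only; the vectors here are toy stand-ins, not real voice embeddings, and the threshold is illustrative:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors:
    # dot(a, b) / (|a| * |b|), ranging from -1 to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (real voice embeddings are much
# higher-dimensional, e.g. 256-d).
same_speaker_a = [0.9, 0.1, 0.0, 0.4]
same_speaker_b = [0.8, 0.2, 0.1, 0.5]
different_speaker = [-0.3, 0.9, 0.7, -0.1]

print(cosine_similarity(same_speaker_a, same_speaker_b))    # high, same voice
print(cosine_similarity(same_speaker_a, different_speaker)) # low, different voice
```

In a verification setting, a score above some tuned threshold would be treated as "same speaker"; fake speech detection similarly compares a candidate clip's embedding against embeddings of known genuine speech.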