A recent incident at Cal Poly has highlighted the growing concern over AI voice cloning and its use in phone scams. A family associated with Cal Poly was targeted by scammers who used AI voice cloning technology. This technology can create convincing voice replicas from short voice recordings, making it possible for scammers to impersonate individuals known to the victims.
Cal Poly, like many other institutions, has been facing challenges with various forms of phone scams. These scams often aim to trick individuals into giving away personal information or money by pretending to be from trusted entities or even impersonating family members or friends. The university has issued warnings and provided guidelines on how to protect oneself from such scams, emphasizing that Cal Poly will never ask for passwords via email, phone, or non-official web forms.
The use of AI in creating voice clones for scams is a relatively new and alarming development. It represents a significant escalation in the sophistication of phone scams, making them more convincing and harder to detect. The technology behind voice cloning has advanced to the point where very little audio is needed to create a convincing fake voice, which can then be used in scam calls to impersonate someone the victim trusts.
This incident at Cal Poly serves as a stark reminder of the evolving nature of cyber threats and the importance of being vigilant. The university and other institutions continue to educate their communities on the risks of phone scams and the steps individuals can take to protect themselves, such as verifying the identity of callers and being cautious with personal information.
What software or services are used to create AI voice scams?
To create these scams, scammers use AI voice cloning tools that can mimic a person’s voice with a high degree of accuracy. These tools require a sample of the target’s voice, which can be as short as 3 seconds, although 10 seconds or more yields a more realistic clone.
ElevenLabs’ AI speech software is one example of the technology used for voice cloning, in contrast to consent-driven offerings such as Resemble AI’s Voice Cloning solution. Other services include Speechify, as well as various open-source projects. These services are available for various subscription fees, and many offer free trial periods. The technology has become more accessible and affordable, allowing scammers to easily create convincing voice replicas for fraudulent purposes.
To protect oneself from these scams, it is recommended to be cautious with personal information, verify the identity of callers by contacting the person directly through known contact information, and be skeptical of calls asking for money or personal details.
How to detect if someone’s voice has been cloned
Detecting if someone’s voice has been cloned can be challenging, especially as AI technology becomes more sophisticated. However, there are methods and tools available that can help identify synthetic voices. Here are some ways to detect voice cloning:
- Unnatural Speech Patterns: Listen for any unnatural pauses, robotic-sounding speech, or strange pronunciation. AI voice cloning technology has improved, but it may still struggle with certain nuances of human speech.
- Urgent Requests: Be wary of calls that create a sense of urgency or pressure you to act quickly, especially if they involve sending money or providing personal information.
- Voice Cloning Detection Systems: Tools such as Resemble Detect are designed to differentiate between real and synthetic speech patterns.
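Production detection systems like Resemble Detect rely on trained machine-learning classifiers, whose internals are not described here. As a toy illustration of the underlying idea only, that real and synthetic audio can differ in measurable signal statistics, the sketch below compares the spectral flatness of two stand-in signals. This is a teaching example, not a working deepfake detector; the signals, sample rate, and threshold are all assumptions for illustration.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Ratio of the geometric mean to the arithmetic mean of the power spectrum.
    Values near 1.0 indicate noise-like audio; values near 0.0 indicate tonal audio."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # small floor avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

# Toy signals standing in for real recordings (assumed 16 kHz sample rate)
sr = 16_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)       # strongly tonal: flatness near 0
rng = np.random.default_rng(0)
noise = rng.standard_normal(sr)          # noise-like: flatness closer to 1

print(f"tone flatness:  {spectral_flatness(tone):.4f}")
print(f"noise flatness: {spectral_flatness(noise):.4f}")
```

Real detectors combine many such features (and learned representations) and are trained on large corpora of genuine and synthesized speech; a single statistic like this cannot reliably flag a cloned voice on its own.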