Module 6: Emerging Threats (AI Voice/Video)

Introduction to New Technology Threats

Technology is constantly evolving, and unfortunately, so are the methods scammers use to trick people. In this module, we'll look at some of the newest threats emerging from artificial intelligence (AI) technologies, particularly voice and video manipulation. Understanding these new threats now will help you stay protected as they become more common.

 

Important Note:

The technologies we discuss in this module are relatively new and rapidly evolving. While they may seem frightening, remember that awareness is your best defense. By understanding how these technologies work, you'll be better prepared to recognize and avoid scams that use them.

 

Introduction to AI Voice Cloning

AI voice cloning is technology that can create a very realistic copy of someone's voice after analyzing samples of their speech. Once created, this "cloned" voice can be made to say anything, even things the real person never said.

How Voice Cloning Works

Here's a simplified explanation of how voice cloning technology works:

  1. The AI system is given samples of someone's voice (sometimes just a few minutes of audio)
  2. The system analyzes the unique characteristics of that voice (tone, pitch, accent, speech patterns)
  3. Once trained, the system can generate new speech in that person's voice
  4. The generated speech can say anything, even if the real person never said those words
 

Real-World Example:

In 2023, there were multiple reports of scammers using AI-generated voices to impersonate family members in distress. In one case, parents received a frantic call in a voice that sounded exactly like their daughter's, claiming she had been kidnapped and needed ransom money. The voice was entirely generated by AI, yet it sounded so real that the parents were convinced they were speaking to their child.

(additional content coming)

 

Deepfake Videos Explained

Deepfakes are videos where a person's face or body is digitally altered to make it appear as if they are doing or saying something they never actually did. This technology uses artificial intelligence to create very convincing fake videos.

How Deepfakes Work

Here's a simplified explanation of how deepfake technology works:

  1. The AI system is trained on many images or videos of a person's face from different angles
  2. The system learns the details of their facial features and expressions
  3. The AI can then map these features onto another person in a video (face swapping)
  4. Or it can manipulate the original person's mouth and expressions to match new audio
  5. The result is a video that looks like the real person saying or doing things they never did
 

How Deepfakes Compare to Traditional Editing

Traditional Photo/Video Editing

  • Usually requires significant skill
  • Often has visible flaws or inconsistencies
  • Typically static (photos) or limited movement
  • Has existed for decades

AI Deepfakes

  • Can be created with little technical skill
  • Often very realistic and hard to detect
  • Can show natural movement and speech
  • Technology is relatively new and rapidly improving

How These Technologies Are Used in Scams

1. "Grandchild in Trouble" Scams with AI Voices

This is an evolution of the traditional family emergency scam we discussed in Module 4, now carried out with AI-generated voices.

How It Works:

  1. Scammers gather voice samples of your family member from social media videos, voicemails, or other public sources
  2. They use AI to create a voice that sounds like your loved one
  3. They call you claiming to be your grandchild/child in an emergency situation
  4. The familiar voice creates immediate trust and emotional distress
  5. They ask for money to be sent immediately
 

2. Fake Video Calls from "Friends" or "Family"

As video calling becomes more common, scammers are starting to use deepfake technology to impersonate people in video calls.

How It Works:

  1. Scammers collect photos and videos of someone you know from social media
  2. They create a deepfake that can mimic that person's face in real time
  3. They initiate a video call pretending to be your friend or family member
  4. During the call, they might claim to be in trouble and need financial help
  5. Or they might try to get personal information from you
 

3. Fake Videos of Celebrities or Officials

Scammers create deepfake videos of celebrities or government officials promoting scams or spreading misinformation.

How It Works:

  1. Scammers create a deepfake video of a celebrity or official
  2. The video might show them endorsing an investment opportunity or product
  3. Or it might show them making an alarming announcement that could cause panic
  4. These videos are shared on social media or via messaging apps
  5. People who trust the person in the video might follow their advice or instructions
 

How to Verify Caller Identity

With the rise of AI voice technology, it's more important than ever to verify who you're really talking to on the phone.

Verification Strategies:

  1. Ask personal questions that only the real person would know the answer to
    • Avoid questions with answers that might be found on social media
    • Example: "What did we have for dinner when you visited last month?"
  2. Call back using a known number
    • Tell the caller you'll call them right back
    • Hang up and call the person back on a number you already have for them
  3. Verify through another channel
    • If someone calls claiming to be your grandchild, hang up and text or call their parents
    • Or send a text message directly to the person who supposedly called you
  4. Be suspicious of unusual requests or urgency
    • Even if the voice sounds right, be wary if the request is unusual for that person
    • Be especially cautious if they insist on secrecy or immediate action
 

Establishing Family Verification Codes

One of the most effective ways to protect against voice scams is to establish a family verification system using code words or phrases.

How to Create a Family Verification System:

  1. Choose a unique code word or phrase that all family members know
    • Pick something memorable but not obvious (avoid pet names or birthdays)
    • Example: "purple elephant" or "sunshine pancakes"
  2. Establish a question-and-answer system
    • Question: "How's the weather?" Answer: "Better with an umbrella"
    • The specific answer is known only to family members
  3. Agree on when to use the verification
    • Any time money is requested
    • Any time there's an emergency situation
    • Any time something doesn't feel right
  4. Change your code periodically for added security
  5. Make sure all family members understand the system, including grandchildren

Family Meeting Activity:

Consider having a family meeting to discuss these new threats and establish your verification system. Make it a positive, empowering conversation rather than a frightening one. Emphasize that this is something the whole family is doing together to stay safe.

Signs of AI-Generated Content

While AI-generated content is becoming more realistic, there are often still signs that can help you identify it.

Signs of AI-Generated Voice:

  • Unusual cadence or rhythm in speech
  • Slight robotic quality or unnatural pauses
  • Inconsistent breathing patterns
  • Lack of background noise (too "clean" sounding)
  • Voice sounds right but speech patterns or vocabulary seem off
  • Emotional tone doesn't match the situation

Signs of Deepfake Videos:

  • Unnatural eye movements or blinking patterns
  • Blurry or changing areas around the face, especially when moving
  • Inconsistent lighting or skin tone
  • Unnatural head positions or movements
  • Audio and lip movements that don't perfectly sync
  • Hair that moves oddly or has strange boundaries

Important Note:

These signs are becoming harder to detect as the technology improves. Don't rely solely on trying to spot technical flaws. Always verify identity through other means when something important or financial is involved.

Future Trends in Scam Technologies

As technology continues to evolve, we can expect scams to become more sophisticated. Here are some trends to be aware of:

  • More realistic AI voices and videos that are harder to distinguish from real ones
  • AI-generated text messages that mimic the writing style of people you know
  • Interactive deepfakes that can respond to questions in real time
  • Combination attacks using multiple technologies (e.g., fake voice calls followed by fake documentation)
  • Personalized scams using information gathered from data breaches and social media

Staying Protected in the Future:

The best protection against future scams will continue to be:

  • Maintaining healthy skepticism about unexpected communications
  • Verifying requests through multiple channels
  • Establishing verification systems with loved ones
  • Limiting personal information shared online
  • Staying informed about new types of scams
  • Taking time to think before responding to urgent requests

Creating a Family Verification System

Let's put what we've learned into practice by creating a family verification system to protect against voice and video scams.

Family Verification System Template:

  1. Choose a family code word or phrase: ________________________
  2. Create a question-and-answer verification:
    • Question: ________________________
    • Answer: ________________________
  3. List situations when verification should be used:
    • ________________________
    • ________________________
    • ________________________
  4. Alternative verification method (if code word is forgotten):
    • ________________________
  5. Date to review and update this system: ________________________