Fraudsters have learned to fake people's voices: now a fake son or daughter can call you and ask for money - ForumDaily
The article has been automatically translated into English by Google Translate from Russian and has not been edited.


Impersonating someone to steal money is nothing new. Known as impostor scams, these schemes are the most common type of fraud in the United States, according to the Federal Trade Commission. People lost $2.6 billion to impostor scams in 2022, up from $2.4 billion the year before. But with the advent of new technologies, this type of fraud has evolved, Business Insider reported.


New technologies are making fraud even more pernicious. In March, the Federal Trade Commission said scammers are beginning to use artificial intelligence to amplify "family emergency" schemes in which scammers convince people that a family member is in distress in order to get cash or personal information.

In an April survey of adults in seven countries, conducted by global security software company McAfee, a quarter of respondents reported encountering some form of AI voice scam.

For just a small fee and a few minutes with an internet connection, attackers can weaponize AI for personal gain. McAfee's report showed that in some cases, all a scammer needs is three seconds of audio to clone a person's voice. And on social media, it's easy to find a snippet of someone's voice that can then be exploited.

Eddie Cumberbatch was sitting in his Chicago apartment in April when his father called him. As soon as he heard his father's voice, the 19-year-old TikToker knew something was wrong. His father asked whether Eddie was at home and whether everything was all right.


“It was a very strange way to start a conversation,” Eddie said.

When Eddie said he was home and safe, his father asked if he had been in a car accident. Eddie was baffled: not only had he not been in an accident, he hadn't driven in six months. His father was relieved, but Eddie couldn't understand why he had thought there was a car accident at all.

His father explained that someone had called the family's home phone from a foreign number. When Eddie's grandfather answered, the voice on the line sounded just like his grandson's. This "Eddie" said he had been in a car accident and urgently needed money. Fortunately for the family, Eddie's father immediately suspected something was wrong.

He called his son to verify the story: he knew it wasn't in Eddie's nature to ask for money, and besides, Eddie didn't even have a car. The call confirmed that the voice on the phone had not been his son's.

While Eddie and his family were able to avoid the scam, many of the victims of these artificial intelligence scams were less fortunate. And, as AI technology becomes mainstream, these scams will only get more sophisticated.

Advanced Fraud

Impostor scams come in many forms, but they usually work the same way: a scammer pretends to be someone you trust in order to convince you to send them money. According to the FTC, impostor scammers have posed as partners, IRS employees, guardians, computer technicians, and family members.

Most of these scams happen over the phone, but they can also occur on social media, by text message, or by email. In one case, Richard Mendelstein, a software engineer at Google, received a call from what sounded like his daughter Stella asking for help. He withdrew $4,000 in cash as a ransom, and only after sending the money did he realize he had been deceived: his daughter had been safe at school the whole time.

Previous iterations of the virtual kidnapping scam that targeted the Mendelstein family used generic voice recordings that vaguely matched the child's age and gender. The scammers counted on parents panicking at the sound of a frightened child, even if the voice didn't actually match their child's. But with AI, the voice on the other end of the phone can now sound like the real one. The Washington Post reported in March that a Canadian couple was scammed out of $21,000 after they heard an AI-generated voice that sounded like their son's. In another case, scammers cloned the voice of a 15-year-old girl and posed as kidnappers to demand a $1 million ransom.

As an online creator with more than 100,000 followers on TikTok, Eddie knew that fake accounts imitating him would inevitably pop up. The day before the scam call, a fake Eddie account had appeared on Instagram and begun messaging his family and friends. But AI takes these schemes to the next level.

“Faking posts on Instagram is one thing,” Eddie said. “But trying to clone my voice is really weird, and it scares me.”

Eddie called the rest of his family to warn them about the scam and made a TikTok video about the experience to raise awareness.

Most of us probably think we would recognize the voice of a loved one in an instant. But McAfee found that about 70% of adults surveyed weren't sure they could tell a cloned voice from a real one. A 2019 study found that the brain does not register a significant difference between real and computer-generated voices: study subjects misidentified morphed (software-modified) recordings as real 58% of the time, leaving plenty of opportunity for fraudsters to exploit. What's more, people are increasingly handing scammers their real voices: McAfee reported that 53% of adults share their voice data online every week.

Whether it's a kidnapping, a robbery, a car accident, or simply being stranded somewhere with no money to get home, 45% of McAfee survey respondents said they would respond to a voicemail or voice note that sounded like a friend or loved one, especially if the message came from a partner, parent, or child. McAfee found that more than a third of victims lost over $1,000 to AI scams, and 7% lost over $5,000. The FTC reported that victims of impostor scams lost an average of $748 in the first quarter of 2023.

Fake voices

While the AI technology that makes these scams possible is not new, it is improving, becoming cheaper and more accessible.

“The main thrust of AI development is making the technology accessible to a large number of people, but at the same time that increases the number of cyber fraudsters,” said McAfee Chief Technology Officer Steve Grobman. “Cybercriminals can use generative AI to fake voices and deepfakes in ways that previously required much greater sophistication.”

He noted that cybercriminals are like businessmen: they look for the most efficient ways to make money.

“In the past, these impostor scams were lucrative because victims often paid quite significant sums,” Grobman said. “Now there's no need to string someone along for three months in a romance scam to get $10,000, because you can pull off a fake-audio scam that takes 10 minutes and get the same result. It's much more profitable.”

Earlier phone impostor scams relied on the scammer's acting skills or the victim's gullibility, but now AI does most of the work. Popular AI audio platforms such as Murf, Resemble, and ElevenLabs let users create realistic voices through text-to-speech technology.

Most providers offer free trials, and the tools don't require a computer-science degree to figure out, which makes them attractive to scammers. A crook uploads an audio file of someone's voice to one of these sites, and the site builds an AI voice model. With a small snippet of audio, scammers can achieve a 95% voice match. The scammer can then simply type whatever they want, and the AI voice will speak the text in real time.

After committing a crime, voice scammers are hard to catch. Victims often have little information to give police, and since voice scammers operate from all over the world, law enforcement faces many logistical and jurisdictional challenges. With minimal information and limited police resources, most cases go unsolved. In the UK, only one in 1,000 fraud cases results in charges.

However, Grobman believes that if you know these scams exist, you don't need to worry too much. If you receive one of these calls, step back and ask a few questions that only the real person on the other end of the line would know the answer to. Or, as the FTC advises, if a loved one says they need money, put that call on hold and try to call them back yourself to verify the story, just as Eddie's father did.

Even if a suspicious call comes from a family member's number, be wary: the number itself can be spoofed. Another telltale sign is a caller asking for money through shady channels that are hard to trace, such as cryptocurrency or gift cards. Security experts recommend establishing a safe word with loved ones that can be used to distinguish a real emergency from a scam.

AI risks

As AI becomes ubiquitous, these kinds of scams are compromising our ability to trust even our closest family members. Fortunately, the US government is trying to limit the ways in which AI can be used.

In February, Supreme Court Justice Neil Gorsuch pointed out that the legal protections shielding social media platforms from lawsuits may not apply to AI-generated content.


And Vice President Kamala Harris told CEOs of leading technology companies in May that they have a “moral” responsibility to protect society from the dangers of AI. Similarly, the FTC told companies in February, “You should be aware of the reasonably foreseeable risks and impacts of your AI product before you bring it to market.”

Ellie Armeson, executive director of the Cybercrime Support Network (a non-profit organization that helps businesses and people fight cybercrime), agreed that some regulation is needed.

“Generative AI is developing very quickly,” she said. “Like any technology, generative AI can be used inappropriately or even for harmful purposes, so regulation will certainly be needed as these generative AI tools continue to evolve.”

But while a number of AI cybersecurity solutions are being rolled out, Armeson believes that for now the best defense is vigilance and continued discussion: “Until security solutions are available to all consumers, it's up to us as individuals to understand this new dimension of cyber threats and protect ourselves.”

