**Artificial intelligence (AI)-generated phishing calls are a growing threat. Bad actors are using the technology to make fake calls so convincing that they are difficult to distinguish from the real thing.**

According to a report by the US Federal Trade Commission (FTC), impersonation scams caused losses of $2.6 billion in 2022. With the emergence of many new technologies, [scams](https://thanhnien.vn/hacker-bi-ket-an-5-nam-tu-vi-lua-dao-va-tan-cong-twitter-185230626113247327.htm) based on impersonation are getting more and more sophisticated. In March, the FTC said scammers had started using [AI](https://thanhnien.vn/viet-nam-dung-47-the-gioi-ve-nang-luc-ai-1852306229235346909.htm) to fabricate emergencies, convincing victims that a family member is in distress in order to extract money or personal information.

In an April survey of adults in 7 countries conducted by security firm McAfee, a quarter of respondents said they had experienced AI voice fraud, while 15% said an acquaintance of theirs had experienced a similar scam. With just a few minutes online and a small fee, bad actors can "weaponize" AI for personal gain. McAfee's report shows that, in some cases, scammers need only 3 seconds of a subject's audio to clone their voice.

According to *Business Insider*, 19-year-old TikToker Eddie Cumberbatch received a call from his father asking whether he had been in a car accident in Chicago (USA). This surprised Eddie because he hadn't driven in the past 6 months and didn't even have a car in Chicago. After talking with his father, Eddie learned that an impostor had told his family he had been in a traffic accident in order to extort money. Fortunately, his father immediately suspected the call, phoned his son to verify, and realized a scammer had used an AI-generated voice to try to fool the Cumberbatch family.

As an online creator with over 100,000 followers on TikTok, Eddie Cumberbatch knows fake accounts that mimic him are bound to pop up. The day before the scam call, a fake account impersonating Eddie appeared on Instagram and started messaging his family and friends. Alarmed by the scammer's attempt to use AI to copy his voice, Eddie called the rest of his family to warn them about the scam and made a TikTok video about his experience to raise awareness.

While Eddie and his family were able to avoid the scam, many victims of these AI-powered scams weren't so lucky. And as AI technology goes mainstream, the scams will get harder and harder to spot. According to the FTC's website, there are cases of scammers posing as lovers, Internal Revenue Service (IRS) employees, computer technicians, and family members. Most scams happen over the phone, but they can also happen on social media, via text, or by email.

Richard Mendelstein, a software engineer at Google, received a call that sounded like his daughter's cry for help. After Mendelstein transferred $4,000 as ransom, he realized he had been scammed and that his daughter had been safe at school the whole time. Earlier virtual kidnapping scams used voice recordings matched to the age and sex of the victim's children, counting on parents panicking at the sound of a child in distress even if the voice didn't really match their child's. With AI, the voices are getting far harder to distinguish. The *Washington Post* reported in March that a Canadian couple was scammed out of $21,000 after hearing an AI-generated voice that sounded like their son calling for help.
In another case this year, scammers cloned the voice of a 15-year-old girl and posed as kidnappers to demand a $1 million ransom.

Most of us assume we would recognize a family member's voice in an instant. But a McAfee survey found that about 70% of adults are not confident they could distinguish a cloned voice from a real one, and a 2019 study found that the brain does not register a significant difference between a real voice and a computer-generated one. On top of that, more and more people are handing their real voices to scammers: McAfee says 53% of adults share their voice data online weekly.

McAfee also found that more than a third of victims lost over $1,000 in AI scams, with 7% losing more than $5,000. The FTC reported that victims of impersonation scams lost an average of $748 in the first quarter of 2023.

"One of the most important things to note is that this year's AI advances put the technology within reach of many more people, including really enabling it to scale within the cybercriminal community," said McAfee chief technology officer Steve Grobman.

Cybercriminals can use AI to create fake voices and deepfakes more easily than ever before. Instead of spending 3 months "dating" a target online and waiting for them to fall into the trap, they can run a fake audio scam in 10 minutes and get the same results. Earlier phone scams relied on the scammer's acting skills or the victim's gullibility; now AI does most of that work.

Popular AI audio platforms like Murf, Resemble, and ElevenLabs allow users to create realistic voices using text-to-speech technology. Most of these tools are easy to use and offer free trials. Scammers simply upload an audio file of someone's voice to one of these sites and let the AI simulate it.

With scam calls, victims often have very little information to give police investigators, and the perpetrators can operate from anywhere in the world. With scant evidence and limited police resources, most cases go unsolved. In the UK, only 1 in 1,000 fraud cases results in charges.

Still, McAfee's CTO advises that when we receive a suspicious call, we should stay calm and ask questions that only the real person on the other end of the line would know the answers to. The FTC likewise recommends that if a loved one tells you they need money, put that call on hold and try calling the family member directly to verify the story. Even if a suspicious call appears to come from a family member's number, the number itself can be spoofed.

The [US](https://thanhnien.vn/nhieu-co-quan-chinh-phu-my-bit-tan-cong-mang-185230616161332199.htm) government is trying to rein in AI scams. In February, US Supreme Court Justice Neil Gorsuch emphasized that the tools for preventing AI scams are still limited, since most websites are not liable for content posted by third parties. In May, US Vice President Kamala Harris told the CEOs of leading technology companies that it is their responsibility to protect society from the dangers of AI. Similarly, the FTC has told companies that they need to understand the risks and impacts of AI products before bringing them to market.