In recent years, hackers have been using artificial intelligence (AI), tools like ChatGPT, and other advanced technologies to defraud businesses and individuals out of millions of dollars. The FBI has warned that deepfakes, synthetic corporate personas, and impersonations of existing employees can seriously damage victim organizations' finances and reputations. These threats go well beyond the business email compromise and spear-phishing campaigns hackers have traditionally relied on.
But how, you might ask? Attackers have long been able to generate an imitation voice with text-to-speech software, but the result rarely sounded natural. That has changed. Thanks to recent advances, hackers can now convincingly clone a person's voice with AI. They do this by obtaining recordings of the target's voice, for example by compromising their phone, and feeding those recordings to a machine learning model. After enough training, the model can closely imitate the person's voice, allowing the attackers to stage a fake conversation that sounds real.
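To make the idea concrete, here is a minimal sketch of ordinary text-to-speech using the open-source pyttsx3 library (the library choice and the spoken line are illustrative assumptions, not part of the attacks described above). It only plays a generic system voice; the voice-cloning attacks rely on far more capable models trained on recordings of a specific person, but the snippet shows how little code is needed to put arbitrary words into a synthetic voice.

```python
# Minimal text-to-speech sketch using pyttsx3 (generic system voice,
# not voice cloning). The spoken text below is a hypothetical example.
import pyttsx3

engine = pyttsx3.init()          # initialise the platform's default speech engine
engine.setProperty("rate", 160)  # speaking rate in words per minute
engine.say("Hi, this is your director. Please process the payment today.")
engine.runAndWait()              # block until the audio has finished playing
```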
Hackers can use these deepfakes to trick businesses into transferring large sums of money into accounts they control. Posing as a director or another senior executive, they instruct an employee to make a payment, backed by emails or calls that appear legitimate but are entirely fabricated. In one case, a bank manager in Hong Kong was deceived into transferring $35 million to 17 different bank accounts around the world after hackers used deepfaked audio to make it seem he was speaking with a director of the bank.
Then there is ChatGPT, an AI system that can hold full, natural-sounding conversations with humans. ChatGPT is refined with reinforcement learning from human feedback, meaning it learns from human ratings of its responses to produce ever more human-like conversation. While ChatGPT has many legitimate applications, there is a risk that scammers will build fraud-focused bots that deceive people into sharing sensitive information. ChatGPT-style models could also power bot networks that operate around the clock, churning out new malware. In the future, it may even become possible to build an AI that replicates a specific person's personality, which could, for example, let individuals converse with deceased loved ones.
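To illustrate how easily a conversational bot can be wired up, the sketch below assumes the pre-1.0 `openai` Python package and a valid API key; the model name, system prompt, and key are placeholders rather than anything from the original article. The point is that once such a loop exists, only the prompt and the delivery channel need to change for it to run unattended at scale, which is exactly why defenders should assume convincing automated conversations are cheap for attackers.

```python
# Minimal sketch of an automated conversational bot, assuming the
# pre-1.0 "openai" Python package and a valid API key (both assumptions).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

# The running conversation; the system prompt sets the bot's persona.
messages = [{"role": "system", "content": "You are a helpful support agent."}]

while True:
    user_input = input("You: ")
    messages.append({"role": "user", "content": user_input})

    # Ask the model for the next reply given the whole conversation so far.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response["choices"][0]["message"]["content"]

    messages.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```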
In conclusion, cyber threats are becoming more sophisticated, and businesses and individuals need to stay vigilant. Deepfakes and ChatGPT are just two examples of how hackers are using AI to scam people out of money and sensitive information. As AI technology advances, the line between real and fake will only become more blurred. It is therefore essential to be cautious and to verify requests through trusted, secure channels before approving financial transactions or sharing sensitive information, both online and offline.
Sources:
- The FBI’s warning about deep fakes and synthetic personas: https://www.fbi.gov/contact-us/field-offices/portland/news/press-releases/fbi-warns-businesses-of-sophisticated-deep-fake-schemes
- The Hong Kong bank manager scam: https://www.zdnet.com/article/hackers-used-ai-deepfake-to-scam-ceo-out-of-243000/
- ChatGPT and its potential uses: https://openai.com/blog/dall-e-2-and-universal-transformers/
- The potential dangers of ChatGPT and AI: https://www.forbes.com/sites/forbestechcouncil/2021/07/23/the-dark-side-of-ai-the-dangers-of-deepfake-chatbots-and-ai-driven-cybercrime/?sh=177d54c942a1
- The use of reinforcement learning from human feedback to train AI: https://www.techrepublic.com/article/what-is-reinforcement-learning-and-how-does-it-work/