Cybercriminals will stop at nothing in their quest to steal from innocent victims, and AI tools have made launching convincing attacks easier than ever. In a new twist on generative AI's role in phishing, hackers are now impersonating OpenAI itself, the biggest name in the AI landscape.
The New OpenAI Phishing Scam
In this newest widespread phishing scheme, criminals launched an email impersonation attack posing as OpenAI. The fraudulent emails carried an "urgent message" claiming an issue with the recipient's account and urging them to update their payment information by clicking a link in the message.
The message reached over 1,000 inboxes, raising several worrying points about the use of AI tools in phishing scams. Although it had many of the hallmarks of a classic credential-harvesting and payment-theft scam, it was still able to pass the industry-standard email authentication checks (SPF and DKIM) that typically block such messages. In other words, the message came from a sending domain and server that authenticated cleanly, rather than from the spoofed or unauthenticated sources most phishing emails rely on.
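To see what "passing" SPF and DKIM looks like in practice, here is a minimal sketch (a hypothetical Python example using the standard library's email parser, with placeholder domains and headers, not data from the actual incident) that reads the spf= and dkim= verdicts a receiving mail server records in the Authentication-Results header. A filter that trusts these verdicts alone would have waved the OpenAI lookalike message straight through.

```python
# Hypothetical sketch: inspect SPF/DKIM verdicts recorded by the receiving
# mail server in the Authentication-Results header (see RFC 8601).
from email import message_from_string
from email.message import Message

RAW_EMAIL = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=notifications.example.net;
 dkim=pass header.d=notifications.example.net
From: "Billing" <billing@notifications.example.net>
Subject: Action required
Content-Type: text/plain

Please update your payment details.
"""

def auth_summary(msg: Message) -> dict:
    """Pull the spf=, dkim=, and dmarc= verdicts out of Authentication-Results headers."""
    verdicts = {}
    for header in msg.get_all("Authentication-Results", []):
        for part in header.replace("\n", " ").split(";"):
            part = part.strip()
            for mech in ("spf", "dkim", "dmarc"):
                if part.startswith(mech + "="):
                    # Keep only the verdict word (pass, fail, none, ...)
                    verdicts[mech] = part.split("=", 1)[1].split()[0]
    return verdicts

msg = message_from_string(RAW_EMAIL)
print(auth_summary(msg))  # e.g. {'spf': 'pass', 'dkim': 'pass'}
```

The takeaway: a "pass" here only proves the message came from infrastructure the sending domain authorized, not that the sender is who they claim to be.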
The broader security implication is that AI tools are making it increasingly difficult for existing defenses to catch phishing messages before delivery. Machine learning makes it easier for criminals to find and exploit flaws in software platforms, and clever cyber thieves are studying how machine-learning-based threat detection behaves so they can train their own AI models to craft attacks that slip past those same filters.
Deepfake Technology Showing Up in Phishing Attacks
Ambitious fraudsters are also leveraging deepfake technology in phishing scams, cheating businesses worldwide out of millions of dollars. Deepfake fraud uses samples of real audio, video, and images to generate convincing fakes, and many of the most recent attacks involve cloned voices on phone calls.
Ever-evolving machine learning algorithms are making it difficult to stop AI-driven cyber attacks before they hit inboxes. However, since the vast majority of phishing attacks (over 90%) require some form of human action to succeed, it's incumbent upon users to learn how to recognize and avoid a scam.
This includes the following standard protocols:
- Check the sender's email address. Even if it appears legitimate, look closer; scammers often replace letters with digits or slip in slight misspellings to fool even eagle-eyed recipients (a simple automated version of this check is sketched after this list).
- Never click a link in the email; instead, navigate directly to the company's website to avoid landing on a fake page.
- Confirm requests for information or payment with the sender through a separate channel, such as a phone call or text message.
- Stay up-to-date on the most common phishing approaches, knowing criminals will exploit AI tools to create more convincing messages.
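To make the first tip concrete, here is a minimal, hypothetical Python sketch of a lookalike-domain check. The TRUSTED_DOMAINS allow-list, the similarity threshold, and the example addresses are all assumptions for illustration, not part of any real filtering product.

```python
# Hypothetical sketch: flag sender domains that are not on an allow-list
# but closely resemble a trusted domain (digit swaps, near-miss spellings).
from difflib import SequenceMatcher

# Assumed allow-list of legitimate sender domains -- purely illustrative.
TRUSTED_DOMAINS = {"openai.com", "yourcompany.com"}

# Map common digit-for-letter swaps (0 -> o, 1 -> l, 3 -> e, 5 -> s) back to letters.
HOMOGLYPHS = str.maketrans("0135", "oles")

def is_suspicious(sender: str, threshold: float = 0.85) -> bool:
    """Return True if the sender's domain is untrusted but looks like a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    normalized = domain.translate(HOMOGLYPHS)
    return any(
        SequenceMatcher(None, normalized, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_suspicious("billing@openai.com"))   # False: exact trusted match
print(is_suspicious("billing@0penai.com"))   # True: digit-for-letter swap
print(is_suspicious("billing@opennai.com"))  # True: near-miss spelling
```

A check like this is a helpful prompt for closer inspection, not a verdict; the same "look closer" habit still applies to anything it misses.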
In the meantime, OpenAI acknowledges that threat actors attempt to use its platform for malicious purposes and notes that it has already blocked dozens of large-scale threats. Still, the rapid acceleration of AI-powered threats underscores the need to remain vigilant to keep your business safe.