Scammers targeting businesses is nothing new. Whether the aim is to acquire customer data, infect networks with malicious software or simply to defraud them of money, organisations across every sector should train staff in how to prevent, identify and respond to scams.
Artificial intelligence technology is providing opportunities to businesses in a number of ways, including when it comes to cybersecurity, but AI is also set to increase the quantity and quality of scams.
There have already been some high-profile examples.
Benedetto Vigna, CEO of car manufacturer Ferrari, was impersonated in a deepfake messaging scam that was followed by a voice call in which Vigna's voice was recreated—apparently a perfect imitation of his southern Italian accent. The executive who received the call suspected something wasn't right and asked a specific question to verify it was really Vigna, foiling the scam.
Three types of AI scams to watch out for
There are three AI scams on the rise:
- Deepfakes
- ChatGPT phishing
- Verification fraud
Deepfakes
Deepfakes have received a lot of media attention, primarily relating to fake videos of celebrities, politicians or media figures. However, the biggest threat deepfakes pose to businesses is fake audio, as in the Ferrari example above.
While text messages and emails claiming to be from colleagues or management can be easier to spot (if employees have been trained in what to look for), phone calls from senior leadership that sound exactly like the person they purport to be are much harder to identify.
ChatGPT phishing
AI tools like ChatGPT will help scammers with phishing attacks in two ways:
- Improve the spelling, grammar and tone of voice of the attacks
- Increase the volume they’re able to create
One of the biggest giveaways that an email is a phishing attempt is poor spelling and grammar, and an unprofessional tone of voice. However, scammers can now use ChatGPT for free to craft email messages in specific styles that are, importantly, grammatically correct.
They can also create scam messages at scale and test what works more quickly.
Verification fraud
Linked to deepfakes, verification fraud involves creating fake photographs or videos purporting to show a particular person. For example, setting up an account with a bank such as Monzo requires a video of the person saying a set phrase first—a security check a deepfake could circumvent.
What can businesses do to mitigate the risks of AI scams?
An AI-fuelled increase in scams is inevitable, but how your organisation prepares for and responds to the threats is within your control.
Put in place comprehensive training
The best thing your business can do to avoid falling victim to an AI scam is to train every member of staff. There are training providers that specialise in these areas, but if you're a smaller company with a limited budget, you should cover at least the following:
- The types of scam employees might be exposed to
- What to do if they suspect a scam attempt
- What security tools are in place and how to use them
Look at faces and environments to spot deepfakes
Deepfakes are convincing and getting better all the time, but they can still be spotted if you pay attention to faces and environments.
Deepfake videos don't hold up to close scrutiny, so examine whether the small details of the face react the way a real human face would. Deepfakes aren't yet great at creating realistic environments either, so check the shadows, lighting and reflections.
Use common sense
This is easier with comprehensive training, but employees also need to use their common sense. Does it make sense that the CEO is messaging you directly about a high-value project? Are you being asked to sign a document or pay for something that's outside your remit?
The key is to always question what you're seeing when communicating online, particularly when transfers of money or sensitive data are involved.
What to do if you fall victim to a scam
If you've fallen victim to a scam, you should respond in the same way regardless of whether AI was involved: secure as much information as possible and report it to your bank, as well as to services like Action Fraud. You should also forward suspicious emails to the Suspicious Email Reporting Service (SERS) via [email protected], while suspicious texts can be forwarded by text to 7726.