Artificial Intelligence (AI) call fraud is another area of cybercrime on the rise. Criminals are now using AI to generate convincing voice messages that sound like your CEO, your finance director, or your IT provider. These AI-generated calls are being used to commit fraud, bypass normal checks, and pressure employees into fast decisions.
This type of attack, sometimes called ‘voice spoofing’ or ‘deepfake audio’, is no longer a future threat. It is already being used in real scams affecting businesses of all sizes, and because the voice sounds genuine, it is far more difficult to resist.

How AI Voice Fraud Works
AI tools can now clone a person’s voice from just a short clip of audio. The cloned voice can then be used to create messages that sound like someone you know and trust.
Here are a few common tactics:
- Faking the boss – A staff member receives a voicemail or live call that sounds like a senior executive asking for an urgent payment, authorising a transfer, or requesting login details.
- Fake IT support calls – A voice claiming to be from your managed service provider asks for remote access to a machine or requests that security software be temporarily disabled.
- Fake supplier conversations – Scammers impersonate regular vendors to discuss invoices or new banking details, often following up with a forged email for credibility.
Why These Calls Are So Convincing
- The voice sounds familiar
Even if the call is short, the voice pattern, tone, and urgency can feel real.
- The timing is strategic
Calls are often made outside working hours, or during busy periods when staff are more likely to act quickly.
- The context seems legitimate
Criminals often research your business beforehand, using LinkedIn, company websites, or stolen data to make the request sound believable.
Business Risks from AI-Generated Calls
- Fraudulent payments
Staff may be tricked into transferring funds to criminal accounts.
- Credential theft
Attackers may gain remote access to systems by impersonating IT support.
- Bypassed approval processes
A familiar voice can override normal caution, especially when the request is paired with urgency and apparent authority.
- Reputational damage
Falling victim to such scams can affect client trust and internal confidence.

How to Protect Your Business
1. Implement strict authorisation procedures
No financial transaction or system change should ever be approved based solely on a phone call, no matter who it appears to be from.
2. Educate staff about AI voice cloning
Make sure your team knows that these scams exist and that they are becoming more sophisticated.
3. Verify requests through a second channel
Encourage staff to confirm any unusual call by messaging or emailing the person directly through a known, trusted channel.
4. Monitor and log call activity
Use phone systems that log inbound and outbound calls to help identify suspicious patterns, such as repeated out-of-hours calls from unknown numbers (see the sketch after this list).
5. Review supplier and payment protocols
Ensure banking changes or payment requests are always verified in writing and through secure channels.
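For teams that can export call records, the pattern-spotting in step 4 can be partly automated. The Python sketch below is illustrative only: it assumes a hypothetical CSV export named call_log.csv with timestamp, direction, and number columns, plus a hand-maintained set of trusted numbers. Real phone systems will differ, so treat it as a starting point rather than a finished tool.

```python
# Minimal sketch: flag potentially suspicious inbound calls in an exported log.
# Assumes a hypothetical "call_log.csv" with ISO-format "timestamp",
# "direction" ("inbound"/"outbound"), and "number" columns. The column names,
# trusted-number set, and business-hours window are all illustrative.
import csv
from datetime import datetime

TRUSTED_NUMBERS = {"+442071234567"}  # example entry: numbers your business already knows
BUSINESS_HOURS = range(8, 18)        # 08:00 to 17:59 local time

def is_suspicious(row):
    """Flag inbound calls from unknown numbers outside business hours,
    the timing-plus-unfamiliarity pattern described in this article."""
    called_at = datetime.fromisoformat(row["timestamp"])
    return (
        row["direction"] == "inbound"
        and row["number"] not in TRUSTED_NUMBERS
        and called_at.hour not in BUSINESS_HOURS
    )

with open("call_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if is_suspicious(row):
            print(f"Review: {row['timestamp']} from {row['number']}")
```

A report like this does not prove fraud; it simply gives staff a short list of calls worth verifying through a second channel, as in step 3.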
Want to stay ahead of evolving threats?
Amicis Group helps UK SMEs defend against modern social engineering risks using practical staff training, smart policies, and tailored cybersecurity solutions.
Contact us today or call 0333 305 5348 to safeguard your business from AI-driven fraud.