Strategic Briefing

Deepfakes & CEO Fraud 2.0

AI-cloned voices are now authorizing wire transfers. Here's how sophisticated criminals impersonate your executives, and the concrete steps you can take to protect your organization.

A True Story

February 2024. The video conference looked completely normal. The CFO of a major multinational was on screen, speaking clearly, gesturing naturally. Several other senior executives appeared alongside him. The Hong Kong-based finance worker had been trained to verify unusual requests — so he had asked for exactly this: a live video call before authorizing any large transaction.

What he didn't know was that every face on that call was a fabrication. Every voice was generated by artificial intelligence. Within hours, he had authorized 15 separate wire transfers totaling $25 million. By the time the engineering giant Arup discovered the fraud, the money had vanished into criminal accounts. Source: Arup / CNN, February 2024

The Arup case didn't just represent a financial loss. It shattered the last "safe" verification method executives trusted: seeing and hearing the person make the request. If a live video call with your CFO can now be entirely fabricated in real time, the rules of authorization have changed permanently. This is CEO Fraud 2.0 — and it's targeting financial and business executives right now.

Quick Stats to Understand the Scope of the Threat

  • 3,000% — surge in deepfake attacks on businesses, 2022–2023 (Onfido)
  • $40B — projected AI fraud losses in the U.S. by 2027 (Deepstrike)
  • 4X — increase in deepfakes detected worldwide, 2023–2024 (PR Newswire)

How the Attack Works

The blueprint is deceptively simple. Criminals source audio of your CEO from a LinkedIn video, a podcast appearance, a company webinar — even a 30-second earnings call clip. Modern AI voice-cloning tools can build a convincing replica from as little as 3 seconds of audio, achieving an 85% voice match to the original speaker. From there, attackers research your org chart to identify who controls wire transfers, craft a plausible pretext ("confidential acquisition," "emergency vendor payment"), and call the right employee at the right moment.

What makes this generation of fraud so dangerous is the psychological layer. A 2024 Medius survey found that over 50% of finance professionals in the US and UK had been targeted by a deepfake scam, and a striking 43% admitted they fell for it. These aren't unsophisticated employees — they are trained CFOs, controllers, and treasury directors.

"The emotional realism of a cloned voice removes the mental barrier to skepticism. If it sounds like your boss, your rational defenses tend to shut down."

The threat is accelerating at a pace that should alarm every C-suite. According to Deloitte's Center for Financial Services, generative AI-driven fraud losses in the U.S. are projected to climb from $12.3 billion in 2023 to $40 billion by 2027 — a compound annual growth rate above 30%. In Q1 2025 alone, documented losses from deepfake-enabled fraud in North America exceeded $200 million. Deepfake incidents increased by 257% between 2023 and 2024, and in the first quarter of 2025, the volume already surpassed all of 2024 combined.

The Ferrari Test

In July 2024, Ferrari senior executives received WhatsApp messages from someone impersonating CEO Benedetto Vigna — complete with a deepfake voice that replicated his distinctive southern Italian accent. Rather than comply, a sharp executive asked the caller a personal question: what book had Vigna recently recommended to them? The impersonator had no answer and hung up. Ferrari lost nothing. The lesson: pre-shared, relationship-specific information that no AI can access is now a frontline defense tool.

Compare that outcome to what happened at Arup, where $25 million disappeared, or to a Singapore-based firm in March 2025, where a finance director authorized a $499,000 transfer after a fabricated video call.

On Erosion of Trust

Deepfakes — once a curiosity of Hollywood face-swaps and viral internet jokes — have become one of the most destabilizing tools in modern politics.

The Munich Security Conference (2024) warns that the most corrosive effect may not be any single piece of disinformation, but the cumulative erosion of public trust — leaving citizens so overwhelmed by uncertainty that they disengage from the political process altogether.

The rise of deepfake-enabled CEO fraud is not just a financial threat; it's an erosion of trust at the highest levels of business. When executives can no longer be sure that the voice on the other end of the line is real, it undermines confidence in communication channels and decision-making processes.

Five Actions Every Executive Should Take Now

  • Establish verbal code words. Create pre-agreed passphrases between executives and finance teams. AI cannot know what was whispered privately between two people.
  • Mandate dual-channel verification. Any wire transfer request received by phone or video must be confirmed via a separate, independent channel: a known direct number, an in-person check, or a secure messaging platform. One channel is never enough.
  • Require multi-person authorization for large transfers. No single employee should be able to execute a significant wire transfer without a second authorized approver, regardless of who appears to be asking.
  • Train your team to treat urgency as a red flag. Deepfake fraudsters exploit time pressure and secrecy. Legitimate executives don't demand that employees bypass controls. Build a culture where "my boss said to do it immediately" is a trigger for pause, not action.
  • Audit your executives' public digital footprint. Every video, podcast, and webinar featuring your leadership is raw material for voice cloning. Implement a policy on minimizing accessible audio/video, and assume anything public can be weaponized.
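The first three controls above can be expressed as a simple policy gate. The sketch below is illustrative only: the function names, the $50,000 threshold, and the channel labels are assumptions, not a real treasury system's API. It checks that a request has an independent second-channel confirmation, a verified pre-shared code word, and (for large amounts) two distinct approvers.

```python
from dataclasses import dataclass, field

# Assumed policy threshold in USD; any real figure would come from your
# organization's own authorization matrix.
LARGE_TRANSFER_THRESHOLD = 50_000

@dataclass
class TransferRequest:
    amount: float
    request_channel: str                # channel the request arrived on, e.g. "video_call"
    confirmation_channels: set = field(default_factory=set)
    approvers: set = field(default_factory=set)
    code_word_verified: bool = False    # pre-shared passphrase checked verbally

def authorization_errors(req: TransferRequest) -> list:
    """Return the list of unmet controls; an empty list means the request may proceed."""
    errors = []
    # Dual-channel rule: at least one confirmation must arrive on a channel
    # other than the one the request itself came in on.
    if not (req.confirmation_channels - {req.request_channel}):
        errors.append("no independent second-channel confirmation")
    # Verbal code word: an AI clone cannot know a privately agreed passphrase.
    if not req.code_word_verified:
        errors.append("code word not verified")
    # Multi-person rule: large transfers need two distinct human approvers.
    if req.amount >= LARGE_TRANSFER_THRESHOLD and len(req.approvers) < 2:
        errors.append("large transfer needs two distinct approvers")
    return errors
```

Applied to an Arup-style scenario, a $25 million request arriving on a video call with no second channel, no code word, and a single employee acting alone would fail all three checks; the same request confirmed on a known desk phone, with the passphrase verified and two approvers on record, would pass.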


Works Cited

1. Onfido, Identity Fraud Report 2024
2. Deepstrike, Deepfake Statistics 2025
3. PR Newswire, deepfake detection statistics, 2023–2024

Next Issue:

Breach Notification Deadlines (The 72-Hour Rule).