December 8, 2025

The new era of financial fraud: 7 steps to thwart deepfakes, scams, and takeovers

Last year, fraudsters tricked a finance worker at a multinational firm in Hong Kong into transferring $25 million to scammers. It started with a message that looked like it came from the company’s CFO. The employee hesitated — until a video call with people who looked and sounded like colleagues eased their concerns. Minutes later, the money was gone.

Cases like this are no longer edge scenarios. Fraudsters have moved past clumsy phishing emails and now use AI to generate lifelike voices, video calls, and documents that blend into normal workflows. Deloitte expects AI-driven fraud losses to hit $40 billion by 2027, more than triple 2023 levels.

“It’s impossible for the human eye to tell the difference anymore. The majority of our team is PhD researchers, engineers from phenomenal schools, and they can't tell the difference anymore,” says Ben Colman, CEO of Reality Defender, a deepfake detection tech firm. “It's not a question of what percentage of tier one, tier two and tier N corporates that have faced this issue, or CFOs — it’s a question of how often.”

The new reality is that AI-generated scams hide inside your existing approval paths: vendor changes, urgent requests, password resets, even routine expense submissions. If you run finance, you’re now responsible for building processes where a convincing fake can’t quietly move money.

“It’s impossible for the human eye to tell the difference anymore. The majority of our team is PhD researchers, engineers from phenomenal schools, and they can't tell the difference anymore."
Ben Colman, CEO, Reality Defender

We’ll break down three categories of AI-driven fraud you’re most likely to face and what you can do to make your team harder to trick, including:

  • The red flags your team should watch for.
  • How to tighten access and improve processes.
  • How to leverage AI to fight AI-driven fraud.

Three attack vectors to watch

AI-generated fraud typically shows up in three places: executive impersonation, account takeovers, and AI-generated documents. Each slips through everyday interactions, which makes detecting them harder and raises the stakes for finance teams. This is happening because tools like Google's Veo 3, HeyGen, and ElevenLabs make producing convincing fakes faster, easier, and cheaper than ever.

1. Executive impersonation

Deepfake voices, videos, and messages now look and sound real enough to fool even the most cautious employees. Attackers study how senior leaders communicate and mimic their tone, timing, and urgency. 

“It looks like your CFO, your long-term banking partner, it comes through trusted channels,” says Avani Desai, CEO of cybersecurity assessment firm Schellman.

2. Account takeovers

Takeovers usually start with a password-reset email. Once an employee enters their credentials, attackers can slip into inboxes, redirect emails, or modify payment instructions.

With stolen login information, attackers can log into a system, add a payee, or initiate a transfer, says Jim Mortensen, strategic advisor in the fraud and anti-money laundering practice at Datos Insights. Attackers can also create forwarding rules that send all emails to an outside address without the employee ever noticing.

AI amplifies this pattern by generating realistic emails and automatically monitoring inboxes.

3. AI-generated documents and receipts

Fake receipts and vendor paperwork create opportunities for both outside fraudsters and insiders looking to push fraudulent payments through your process.

Colman says he sees this frequently.

“Every week I get what looks like an accounts receivable note … sometimes it’s $10 at a time, sometimes it’s $10,000 or more,” he says.

Using AI, fraudsters can scale these attacks. They can generate hundreds of thousands of forgeries in minutes and send them to anyone who handles reimbursements or vendor onboarding.

7 defense tactics that keep deepfakes at bay

Once you understand the attack vectors, the next step is tightening the controls inside workflows. Here are safeguards CFOs and finance teams can put in place to make fraud harder to pull off and easier to catch early:

1. Lock down access

Since account takeovers start with access to credentials, organizations need to cut off the entry points where possible. You can:

  • Set up multifactor authentication for critical technology systems, including email, payments, invoicing software, and more. An extra login step beyond a password chokes off one of the most common pathways for fraudsters to break into a system.
  • Add device-bound passkeys as an extra layer and, depending on the transaction’s risk level, require confirmation through another channel like a phone call or Slack message.
  • Monitor for suspicious mailbox rules, such as auto-forwarding to an email address you don’t recognize, says Mortensen.
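As a minimal sketch of that mailbox-rule check, here is how a script could audit inboxes for rules that forward mail off-domain. It assumes a Microsoft 365 environment, a Microsoft Graph access token with permission to read mailbox rules, and a placeholder company domain and mailbox list; adapt the idea to whatever mail platform you actually run.

```python
"""Flag inbox rules that forward or redirect mail to external addresses.

A minimal sketch, assuming Microsoft 365 and a Graph API token with
mailbox-rule read access. COMPANY_DOMAIN and the audited mailbox are
hypothetical placeholders.
"""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
COMPANY_DOMAIN = "example.com"   # hypothetical: your own mail domain
ACCESS_TOKEN = "..."             # obtained from your identity provider

def external_forwarding_rules(user: str) -> list[dict]:
    """Return inbox rules for `user` that forward or redirect mail off-domain."""
    resp = requests.get(
        f"{GRAPH}/users/{user}/mailFolders/inbox/messageRules",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    flagged = []
    for rule in resp.json().get("value", []):
        actions = rule.get("actions") or {}
        recipients = (
            actions.get("forwardTo", [])
            + actions.get("forwardAsAttachmentTo", [])
            + actions.get("redirectTo", [])
        )
        external = [
            r["emailAddress"]["address"]
            for r in recipients
            if not r["emailAddress"]["address"].lower().endswith("@" + COMPANY_DOMAIN)
        ]
        if external:
            flagged.append({"rule": rule.get("displayName"), "forwards_to": external})
    return flagged

if __name__ == "__main__":
    for mailbox in ["ap.clerk@example.com"]:  # hypothetical mailbox to audit
        for hit in external_forwarding_rules(mailbox):
            print(f"{mailbox}: rule '{hit['rule']}' forwards to {hit['forwards_to']}")
```

Running a check like this on a schedule, and alerting on any hit, turns the "unnoticed forwarding rule" pattern Mortensen describes into something your team sees within hours instead of months.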

2. Add guardrails around vendor setups and changes

Fraudulent transactions can slip through when new vendors are added without secondary confirmation. Here are a few first principles:

  • Don’t add a new vendor without verifying their identity and bank account.
  • Centralize the new vendor onboarding process to keep track of due diligence. “Procurement really owns that vendor setup validation,” says Desai.
  • Flag any vendor asking to change payment information for additional review. Companies can also check with the financial institution to confirm whether the bank account belongs to the vendor, says Chris Haller, principal security consultant at Omada Technologies.
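To illustrate that last point, here is a hedged sketch of a rule that holds any banking change for manual verification instead of letting it flow straight into payments. The data model, field names, and review logic are hypothetical stand-ins for whatever your ERP or AP tool actually exposes.

```python
"""Hold vendor bank-detail changes for out-of-band verification.

A minimal sketch; the VendorChange structure and review queue are
hypothetical placeholders, not a specific ERP's API.
"""
from dataclasses import dataclass

@dataclass
class VendorChange:
    vendor_id: str
    field: str          # e.g. "bank_account", "remit_to_address", "contact_email"
    old_value: str
    new_value: str
    requested_via: str  # e.g. "portal", "email", "phone"

# Fields that can redirect money and therefore always require a callback
HIGH_RISK_FIELDS = {"bank_account", "routing_number", "remit_to_address"}

def review_required(change: VendorChange) -> bool:
    """Flag changes that must be confirmed with the vendor on a known-good number."""
    if change.field in HIGH_RISK_FIELDS:
        return True
    # Email-initiated changes are the classic business-email-compromise pattern
    if change.requested_via == "email":
        return True
    return False

change = VendorChange("V-1042", "bank_account", "****1234", "****9876", "email")
if review_required(change):
    print("HOLD: verify with the vendor at the phone number on file before paying.")
```

The point is not the specific code but the placement: the hold happens inside the vendor-master workflow, before the new details can ever be used on a payment run.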

3. Tighten approvals based on dollar amount and risk

A foundational rule is to never let one person create and release a big payment alone, says Desai.

You should also ratchet up approval friction based on the size and risk of the transaction. The framework could look like:

  • Under $5,000: Desai says manager approval is fine unless something feels off, including a new vendor, bank account change, after-hours timing, unusual urgency, or being pushed outside the standard tool.
  • $5,000 to $20,000: Escalate if any red flags appear, says Desai. You may want to reach the vendor by phone and loop in extra reviewers if needed, says Haller. “Hey, did you actually move your bank account to Cyprus?” is the type of question you might ask.
  • $20,000-plus: At least two approvers, “with one approver outside of the requester’s chain, plus an out-of-band check,” says Desai.
  • $100,000-plus: At least three approvers, and get confirmation through a known channel, says Desai. For transactions that exceed $500,000, “it's probably worth a few grand to fly out there” before moving the money, says Haller.
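To make the framework concrete, here is a minimal sketch of how those tiers might be encoded as a payment-release policy. The thresholds mirror the figures above; the red-flag fields, class names, and function are illustrative assumptions, not a prescribed implementation.

```python
"""Map a payment's amount and red flags to required approvals.

A minimal sketch of the tiered framework described above; field names and
red-flag checks are illustrative.
"""
from dataclasses import dataclass, field

@dataclass
class Payment:
    amount: float
    new_vendor: bool = False
    bank_details_changed: bool = False
    after_hours: bool = False
    urgent_or_off_channel: bool = False

@dataclass
class ApprovalPolicy:
    approvers_required: int
    out_of_band_check: bool = False           # call a known number, not one in the request
    approver_outside_requester_chain: bool = False
    notes: list[str] = field(default_factory=list)

def policy_for(p: Payment) -> ApprovalPolicy:
    red_flags = any([p.new_vendor, p.bank_details_changed,
                     p.after_hours, p.urgent_or_off_channel])
    if p.amount < 5_000:
        pol = ApprovalPolicy(approvers_required=1)
        if red_flags:
            pol.notes.append("Something feels off: escalate beyond manager approval.")
    elif p.amount < 20_000:
        pol = ApprovalPolicy(approvers_required=2 if red_flags else 1,
                             out_of_band_check=red_flags)
    elif p.amount < 100_000:
        pol = ApprovalPolicy(approvers_required=2, out_of_band_check=True,
                             approver_outside_requester_chain=True)
    else:
        pol = ApprovalPolicy(approvers_required=3, out_of_band_check=True,
                             approver_outside_requester_chain=True)
        if p.amount >= 500_000:
            pol.notes.append("Consider an in-person visit before releasing funds.")
    return pol

print(policy_for(Payment(amount=45_000, bank_details_changed=True)))
```

Encoding the tiers this way, whether in code or simply in your payment tool’s workflow settings, removes the judgment call from the person under pressure in the moment.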

4. Fight AI with AI — embed deepfake detection into the workflow

Traditional systems can confirm whether a voice or face matches a profile, but not whether the media itself is real. With AI deepfakes getting more convincing, companies can embed AI-based detection tools inside workflows, so suspicious video, voice, and text is flagged automatically.

“CFOs should demand that communications, collaboration, video conferencing, telephony, email — all of that needs to be scanned,” Reality Defender’s Colman says. “Treat everything like it is [by] default risky until proven otherwise.”

AI-based deepfake detection tools analyze audio, video, images, and documents to spot cloned voices, synthetic faces, frame-level anomalies, or manipulated files in real time. Embedding these checks directly into finance workflows adds an automated alarm system for high-risk steps.

“It used to be trust and verify. Now it’s never trust and always verify.”
Ben Colman, CEO, Reality Defender

How to do this:

  • Automatically scan everything at the point of entry. Deepfake detection tools that sit inside the tools teams already use — video meeting platforms, approval-call systems, document-upload portals, vendor-onboarding flows, and know-your-customer software — can automatically scan audio, video, images, and PDFs the moment they enter the workflow.
  • Use free tools to establish a baseline. Deepfake detection tools like Reality Defender can run in the background and flag suspicious files before a human reviews them — and they don’t require changes to how teams work.
  • Add checks where fraud hides most often. Bake detection straight into routine flows, such as invoices, onboarding docs, and vendor bank account change requests.
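One way to picture the "scan at the point of entry" idea is a gating step in the document-intake flow that calls your detection service before a file ever reaches an approver. The sketch below is generic: scan_for_manipulation is a hypothetical placeholder, not a real vendor API; Reality Defender and similar products expose their own SDKs and endpoints.

```python
"""Gate document intake on an AI-manipulation scan before human review.

A minimal sketch: `scan_for_manipulation` stands in for whichever detection
vendor's API you actually integrate, and the verdict fields are hypothetical.
"""
from pathlib import Path

def scan_for_manipulation(file_path: Path) -> dict:
    """Hypothetical wrapper around a deepfake/forgery detection service."""
    # In practice this would call your vendor's SDK or REST endpoint and
    # return its verdict; hard-coded here so the sketch runs on its own.
    return {"verdict": "suspicious", "confidence": 0.87}

def intake_document(file_path: Path, queue_for_review, quarantine) -> None:
    """Scan an uploaded invoice, receipt, or KYC doc before it reaches an approver."""
    result = scan_for_manipulation(file_path)
    if result["verdict"] == "suspicious" and result["confidence"] >= 0.7:
        quarantine(file_path, reason=f"Possible manipulation ({result['confidence']:.0%})")
    else:
        queue_for_review(file_path)

# Example wiring with simple stand-in handlers
intake_document(
    Path("invoice_10492.pdf"),
    queue_for_review=lambda p: print(f"{p}: queued for AP review"),
    quarantine=lambda p, reason: print(f"{p}: quarantined; {reason}"),
)
```

The design choice that matters is the ordering: detection runs before the document enters the approval queue, so a flagged file never competes for an approver’s attention alongside legitimate ones.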

5. Train continuously, not annually

Annual training won’t keep up. You need quick, recurring modules with short updates, real examples, and refreshers built into the way your team already works. This can include controlled deepfake drills. Colman says he sees the impact in the ones he runs for companies.

“With their permission, we’ll deepfake their CFO or their CEO … we want to scare you and show you how bad it can be,” he says. Haller notes that these drills work best when they’re opt-in and framed to build trust, not embarrass people.

6. ‘Throw up the timeout’ if needed

Deepfakes often look right but act wrong, so the workflow itself needs built-in friction that catches behavior that feels off.

Desai says that leaders need to make the culture explicit: “If it sounds like me but breaks [the] process, assume it’s not me. You won’t get in trouble for slowing down to verify.”

Haller encourages employees to “throw up the timeout” when something doesn’t make sense, supported by prompts that ask whether a request fits established protocol or needs escalation or a cooling-off period. You might add a step to confirm an executive’s identity, or treat every urgent, off-channel request as high risk, regardless of the amount. It also helps to flag any invoice, receipt, or vendor that does not match historical patterns.

“A little process friction in the right spots kills most of the risk.”
Avani Desai, CEO, Schellman

In short, it should feel normal and safe for people to pause, question, and escalate before money moves.

7. Build horizontal accountability

Deepfake risk shouldn’t sit with one department.

Desai recommends a cross-functional fraud council that could include IT, internal audit, compliance, procurement, customer support, and finance ops. This group should meet regularly and review fraud attempts, vendor setup, access governance, and escalation protocols. The council gives leaders shared oversight without dumping responsibility on one process owner and keeps governance improvements moving across the organization.

As Desai puts it, “A little process friction in the right spots kills most of the risk.”

There is no single silver bullet against deepfakes and other AI scams. Companies that mitigate these threats build systems where identity is verified more than once, risky steps force a pause, and urgent requests don’t get a free pass. When finance, IT, procurement, and security move in lockstep — and employees feel safe raising a hand — even sophisticated fakes hit a wall.

“It used to be trust and verify,” Colman says. “Now it’s never trust and always verify.”

For more on how expense fraud can drive up costs and slow operations, read our in-depth report, Fraud, fines, and forensic audits.

Suman Bhattacharyya, Contributing Writer, Ramp
Suman Bhattacharyya is a business and technology writer who covers financial services, enterprise technology, retail, management, and related fields. He has written for American Banker, The Wall Street Journal, The San Francisco Business Times, Industry Dive and other outlets.
Ramp is dedicated to helping businesses of all sizes make informed decisions. We adhere to strict editorial guidelines to ensure that our content meets and maintains our high standards.