From Trash Talk to Trickery: How Moderating Toxicity in Games Helped Us Detect Real-World Fraud

June 2, 2025
Mike Pappas
(HE/HIM/HIS)

A surprising connection between gaming and fraud detection

Most people wouldn’t expect a tool built for gaming to stop scammers in the real world. But that’s exactly what happened when our voice moderation tool, ToxMod, uncovered a massive blind spot in fraud detection.

This is the story of how battling toxicity in Call of Duty led us to build VoiceVault, a dedicated solution for detecting voice-based fraud across the gig economy, fintech, and contact centers.

What is ToxMod?

First: back to basics. ToxMod is our proprietary voice moderation platform, designed to keep online multiplayer spaces safer by analyzing live voice chat and flagging toxic and harmful behavior (think threats, sexual harassment, attempts at grooming). It was built on cutting-edge machine learning and emotional intelligence models with one goal: help gaming communities thrive by surfacing context-rich and actionable signals.

Here’s how it works (a simplified code sketch follows the list):

  1. Triage: Sifts through live voice chat data to identify conversations worth investigating.
  2. Analyze: Considers tone, intent, and context — not just keywords — to assess toxicity.
  3. Act: Helps moderators take targeted, meaningful action — without over-policing normal trash talking.
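
To make that pipeline concrete, here’s a minimal sketch in Python of what a triage-then-analyze-then-act loop could look like. Every name, signal, and threshold below is an illustrative assumption, not ToxMod’s actual architecture or API.

```python
# Illustrative only: hypothetical names, signals, and thresholds,
# not ToxMod's actual architecture or API.
from dataclasses import dataclass


@dataclass
class VoiceClip:
    speaker_id: str
    transcript: str    # assume speech-to-text has already run
    loudness: float    # 0.0 (calm) to 1.0 (shouting)
    negativity: float  # 0.0 (friendly) to 1.0 (hostile tone)


def triage(clip: VoiceClip) -> bool:
    """Cheap first pass: flag only clips worth deeper analysis."""
    return clip.loudness > 0.6 or clip.negativity > 0.5


def analyze(clip: VoiceClip) -> float:
    """Deeper pass: weigh tone and content together, not just keywords."""
    banned = {"<offensive-term>"}  # placeholder word list
    keyword_hit = any(word in banned for word in clip.transcript.lower().split())
    return 0.7 * clip.negativity + 0.3 * (1.0 if keyword_hit else 0.0)


def act(clip: VoiceClip, score: float) -> str:
    """Route to moderators only when the evidence is strong."""
    if score > 0.8:
        return f"escalate {clip.speaker_id} to a human moderator"
    if score > 0.5:
        return f"log {clip.speaker_id} for repeat-offender tracking"
    return "no action (reads as normal trash talk)"


clip = VoiceClip("player_42", "get wrecked, noob", loudness=0.7, negativity=0.4)
if triage(clip):
    print(act(clip, analyze(clip)))  # -> no action (reads as normal trash talk)
```

The point of the staged design is cost and fairness: the cheap triage pass keeps the expensive analysis off most conversations, and the final action step leaves ordinary trash talk alone.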

In short, ToxMod empowers game studios to protect players while preserving what makes gaming fun. And when players feel safe and supported, they’re more likely to stick around — because strong, inclusive communities don’t just foster better experiences, they drive retention.

In fact, 67% of multiplayer gamers say they’d likely stop playing if another player acted toxically. But identifying bad actors is complex — because in gaming, trash talk and toxicity can sound almost identical.

ToxMod became the first voice-native tool that could tell the difference, and it worked. Today, ToxMod powers voice moderation for some of the biggest games in the world, including Call of Duty, which has over 100 million monthly active players.

Here’s what we’ve seen:

  • 25% decrease in toxicity exposure
  • 23% of player reports had actionable violations — proving the need for smarter, proactive tools
  • Repeat offenders dropped by 8% month-over-month

It’s not just about blocking bad behavior — it’s about understanding it at scale. And that insight led us somewhere we didn’t expect.

From gamers to gig workers

In early 2024, we started working with gig economy platforms to identify when phone calls between delivery drivers and customers were going south (you might be surprised at how people react when their pizza is five minutes late!). These platforms rely heavily on voice communication, especially when a call precedes a physical interaction. Initially, clients used ToxMod to flag verbal abuse toward gig workers: drivers and couriers face harassment from customers with little recourse, and ToxMod gave platforms a way to detect, intervene, and protect their workforce. So how did we uncover scams and fraud when we had set out only to look for escalating harms?

Discovering voice fraud detection

About a week into a new trial, we met with a gig platform client to review the first round of data. We expected to discuss moderation wins. Instead, their team looked stunned — and thrilled. They shared that ToxMod had detected over 5x more attempted fraud than their existing fraud tools, without even trying.

How? By identifying the same signals we use to flag toxicity: manipulation, distrust, and emotional tension. These traits show up in the voices of both scammers and their victims just as clearly as they do in toxic players.

But could we build a version of ToxMod specifically for fraud? Challenge accepted.

Building voice fraud detection – fast

We didn’t have to start from scratch. ToxMod’s modular architecture meant we could quickly configure its detection pipeline for a new purpose: identifying voice-based fraud in real time. While top AI companies often need months or even years to train new models, our team launched a fraud-specific version in under a month.
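
To illustrate what “modular” buys you, here’s a hedged sketch of a detection pipeline whose modules can be swapped per deployment. The detector names, cue lists, and scoring below are hypothetical stand-ins, not Modulate’s actual code.

```python
# Hypothetical illustration of a modular detection pipeline, not
# Modulate's actual code: the skeleton stays fixed and the
# detectors are swapped per use case.
from typing import Callable

Detector = Callable[[str], float]  # transcript -> risk score in [0.0, 1.0]


def urgency_pressure(transcript: str) -> float:
    """Fraud cue: manufactured urgency to rush a decision."""
    cues = ("right now", "immediately", "before it's too late")
    return 1.0 if any(cue in transcript.lower() for cue in cues) else 0.0


def credential_probing(transcript: str) -> float:
    """Fraud cue: fishing for account or identity details."""
    cues = ("account number", "one-time code", "verify your identity")
    return 1.0 if any(cue in transcript.lower() for cue in cues) else 0.0


def run_pipeline(transcript: str, detectors: list[Detector]) -> float:
    """Shared skeleton: score with whichever detectors are plugged in."""
    return max(detector(transcript) for detector in detectors)


# Same skeleton, new configuration: fraud detectors instead of toxicity ones.
fraud_detectors = [urgency_pressure, credential_probing]
print(run_pipeline("Read me the one-time code right now", fraud_detectors))  # 1.0
```

The design point: retargeting the system means plugging new detectors into an existing skeleton rather than rebuilding it, which is why a fraud-specific version could ship in weeks rather than months.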

We call it VoiceVault.

What is voice fraud?

Voice fraud is the act of using spoken communication — typically over phone or voice chat — to deceive, manipulate, or impersonate someone for malicious gain. Unlike phishing emails or fake websites, voice fraud exploits tone, emotion, urgency, and real-time dialogue to manipulate people and bypass traditional security:

  • Impersonation: Posing as a customer or authority to access accounts
  • Emotional manipulation: Faking emergencies to rush decisions
  • Social engineering: Building false trust to extract information
  • Pretexting: Inventing believable scenarios to justify access

What does voice fraud look like? Here are a few examples:

  • A scammer impersonates a customer to reroute a food delivery.
  • A fraudster pressures a rideshare driver for a refund using a fabricated crisis.
  • A caller pretends to be a bank customer to change account details.
  • A claimant lies to their insurance provider to obtain a payout.

And these types of scams are growing fast, especially where trust and speech go hand in hand. Voice fraud is hard to detect and harder to prevent in industries where voice interactions are integral to the user experience:

  • Gig economy: Fake calls to manipulate deliveries or extract discounts
  • Contact centers: Impersonators resetting passwords or diverting funds
  • Fintech: Fraudulent calls authorizing transactions under false pretenses

Why does voice fraud slip through the cracks? Because voice fraud isn’t just about what is said; it’s about how it’s said. Traditional fraud systems rely on post-event evidence: transaction monitoring, geolocation mismatches, and victim claims filed after the harm is done. They don’t pick up on the key indicators of fraud as it happens, like tone, stress, urgency, and manipulation. But VoiceVault does.
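
Here’s a minimal sketch of that difference: instead of reviewing evidence after a transaction completes, an in-call scorer updates risk as each utterance arrives and can alert a live agent mid-conversation. The signals and weights are assumptions for illustration (“stress” and “urgency” stand in for acoustic and prosodic features), not VoiceVault’s implementation.

```python
# A sketch of in-call scoring under assumed signals: "stress" and
# "urgency" stand in for acoustic/prosodic features. This is not
# VoiceVault's implementation.
def in_call_risk(utterances: list[dict[str, float]]) -> None:
    """Re-score the call after every utterance, not after the fact."""
    risk = 0.0
    for i, utterance in enumerate(utterances, start=1):
        risk = max(risk, 0.5 * utterance["stress"] + 0.5 * utterance["urgency"])
        if risk > 0.75:
            print(f"utterance {i}: alert the agent mid-call (risk={risk:.2f})")
            return
    print(f"call ended with no alert (risk={risk:.2f})")


in_call_risk([
    {"stress": 0.2, "urgency": 0.1},  # routine opening
    {"stress": 0.7, "urgency": 0.9},  # "I need this reversed right now"
])
```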

A tool that started by moderating voice chat in multiplayer games is now protecting people across industries from real-world fraud. That’s the power of cross-industry innovation, and a reminder that solving hard problems in one space can unlock breakthroughs in another.

At Modulate, we’re committed to building voice intelligence that keeps people safe, whether they’re in a multiplayer gaming environment, on a delivery route, or calling their bank. Reach out for a demo or learn more about how Modulate is using voice AI to protect real people in real time.