What is Proactive Voice Moderation?

A few short years ago, the majority of game developers focused on creating experiences for individual players. But today, games are primarily social experiences, with features designed first and foremost to enable connecting, making friends, and finding a sense of community. While this new form of community has become a valuable part of many players’ worlds, bringing them closer to others with similar hobbies and interests, it also has a dark side: more connectedness means that harmful behavior – cyberbullying, verbal abuse, and general toxicity – has become more prevalent in gaming.

A 2020 study revealed just how common this toxicity has become: it found that 65% of surveyed players had experienced some form of severe harassment, including physical threats, stalking, and sustained harassment. It turns out, though, that only a small percentage of players are actually the cause of this toxic behavior, meaning that if games step up their initiatives for flagging and banning these instigators, gaming overall will become safer and more inclusive. Proactive voice moderation is a relatively new tactic for combating this type of toxic behavior, focused on catching offenses as they happen in real time. Read on to find out more about how it works and how it has the potential to change the gaming community for the better!

Toxicity in games: What it looks like 

Toxicity isn’t just limited to specific games or types of people; 83% of adult gamers report facing toxic behavior online, across every demographic of player. But the truth is that although anyone can be targeted, players who are part of minority groups end up being the targets of online harassment more frequently. According to ADL’s survey, 38% of women and 35% of LGBTQ+ players in the U.S. face targeted toxicity based on their gender or sexual orientation. In addition, roughly one-quarter to one-third of players who are Black or African American (31%), Hispanic/Latinx (24%), or Asian American (23%) face racial harassment online.

Reactive Moderation

Many game studios and developers are aware of the harm happening online and have put what’s known as “reactive moderation” measures in place as a response. Here’s how this usually works (a simplified sketch follows the list):

  • A studio sets up channels and methods for players to report bad behavior.
  • A player experiences toxic behavior, and has to either pause their game or remember to file a report at a convenient time.
  • The community or moderation teams at that studio then examine each report individually.
  • If the report was accurate and indeed flagged toxic behavior, the moderator then decides whether the offending player should be warned, flagged, banned, or suspended.
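To make that workflow concrete, here is a minimal Python sketch of a reactive report queue. Everything in it is hypothetical – the PlayerReport fields, the Action options, and the moderator_decision callback are illustrative assumptions, not any studio’s actual tooling – but it shows how the burden falls on the player to file a report before a moderator can even look at the incident.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, List


class Action(Enum):
    NO_ACTION = auto()
    WARN = auto()
    SUSPEND = auto()
    BAN = auto()


@dataclass
class PlayerReport:
    reporter_id: str
    reported_id: str
    description: str   # whatever the reporting player remembers to write down
    timestamp: float


@dataclass
class ReportQueue:
    """Reports sit here until a human moderator gets to them."""
    pending: List[PlayerReport] = field(default_factory=list)

    def submit(self, report: PlayerReport) -> None:
        # Steps 1-2: the player must notice the behavior, pause or finish
        # their game, and remember to file the report.
        self.pending.append(report)

    def review_next(self, moderator_decision: Callable[[PlayerReport], Action]) -> Action:
        # Steps 3-4: a moderator examines each report individually and
        # decides whether to warn, suspend, ban, or take no action.
        report = self.pending.pop(0)
        return moderator_decision(report)
```

Notice that nothing happens at all unless a report lands in the queue – which is exactly the weakness described below.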

Although a good idea in theory, reactive moderation sadly uncovers only a tiny fraction of offending players. Few players submit reports (between 5% and 25% of players, depending on the title and genre), which means that at least three out of four instances of toxicity will be missed. This leaves toxic players unaddressed and unreported, continuing to foster a poor gaming environment for the whole community – or worse yet, it leaves serious incidents of child predation, violent radicalization, or influence toward self-harm unaddressed. In addition, it only takes one negative experience for the majority of players to decide a game is not for them, driving attrition from the game.

Proactive Voice Moderation for Building Safer Online Communities

Proactive moderation, in contrast to reactive moderation, is all about looking for signs of toxicity as it happens. Rather than relying on players to send incident reports, proactive voice moderation notices when a conversation begins to take a dark turn and automatically captures the key data to escalate to moderators, enabling them to respond faster and more comprehensively to any unfolding toxicity. It’s the way of the future as gaming communities continue to grow – especially with the rise of virtual reality and the metaverse.
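By way of illustration only, the Python sketch below shows what an escalation loop of this kind might look like. The names here (VoiceClip, toxicity_score, the 0.8 threshold) are assumptions made up for the example and do not describe Modulate’s actual system; the point is simply that conversations are scored as they unfold and the worst moments are handed to a human moderator with context attached.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VoiceClip:
    speaker_id: str
    transcript: str     # text produced by a speech-to-text step
    captured_at: float


@dataclass
class Escalation:
    clip: VoiceClip
    score: float        # estimated likelihood the clip is toxic, 0 to 1


def monitor_conversation(
    clips: List[VoiceClip],
    toxicity_score: Callable[[str], float],
    threshold: float = 0.8,
) -> List[Escalation]:
    """Score each clip as it arrives and capture anything that crosses the
    threshold so a moderator can review it without waiting for a player report."""
    escalations: List[Escalation] = []
    for clip in clips:
        score = toxicity_score(clip.transcript)
        if score >= threshold:
            escalations.append(Escalation(clip=clip, score=score))
    return escalations
```

A tunable threshold like this is one plausible way a studio could decide how aggressively incidents get surfaced, trading moderator workload against coverage.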

Today’s machine learning technology also enables proactive voice moderation tools to stay on top of evolving context, slang, cultural norms, and history between participating players. And when the system encounters something it doesn’t understand, it can learn from the moderators analyzing its reports – and sometimes help the moderators themselves catch new jargon as it emerges within the community! In other words, proactive voice moderation systems work synergistically with moderation teams, each learning from the other and together ensuring the platform is truly safe for everyone. This modern approach makes moderation teams more efficient, takes the onus off the players to act as de facto community managers, and greatly reduces toxicity in a game community. The ultimate goal: to create safe, inclusive, and positive online experiences for players.
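A rough sketch of that feedback loop – again using hypothetical names (ModeratorLabel, FeedbackLoop, training_batch) rather than any real API – might look like the following: each moderator decision becomes a labeled example the scoring model can later be retrained on, which is how new slang and edge cases gradually stop slipping through.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ModeratorLabel:
    transcript: str
    is_toxic: bool      # the human moderator's final judgment


@dataclass
class FeedbackLoop:
    """Collects moderator decisions so the scoring model can periodically be
    retrained on cases it previously missed or flagged with low confidence
    (for example, newly emerging jargon)."""
    labeled_examples: List[ModeratorLabel] = field(default_factory=list)

    def record_decision(self, transcript: str, is_toxic: bool) -> None:
        # Every reviewed escalation feeds back into the training data.
        self.labeled_examples.append(ModeratorLabel(transcript, is_toxic))

    def training_batch(self) -> List[ModeratorLabel]:
        # Hand the accumulated labels to whatever retraining job the
        # platform runs; the model and the moderators improve together.
        return list(self.labeled_examples)
```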

To learn more about proactive voice moderation, download our white paper or get in touch with the team of voice chat experts at Modulate.