How ToxMod and Modulate Support Studios On Their Compliance Journeys

Disclaimer: This blog post is not, and is not intended as, legal advice; all information herein is for general information purposes only.

As we’ve discussed previously, there’s a major shift underway in the regulatory landscape around online safety and privacy. For most online platforms, this raises serious questions about how they can stay - or become - compliant.

The good news is that, despite there being a double-digit number of relevant regulations to be aware of, most of them share very similar themes and requirements. The better news is that customers using ToxMod, Modulate’s voice-native, proactive moderation platform, are already extremely well-positioned to meet those requirements. The best news is that we have built further expertise from working with other studios on these challenges, and can support you across your wider compliance journey as well.

The major requirements for compliance are as follows:

  1. Writing a clear Code of Conduct
  2. Producing regular Transparency Reports
  3. Minimizing terrorism, CSAM, and grooming content
  4. Minimizing other types of harmful content
  5. Offering a User Report portal and responding in a timely manner
  6. Offering an Appeals portal and responding in a timely manner
  7. Explaining to users why they have been actioned
  8. Conducting regular Risk Assessments
  9. Configuring privacy by default for all kids

Let’s discuss how Modulate can help with each of these.

1. Writing a clear Code of Conduct

The common theme behind these requirements is that most platforms today do not provide sufficient clarity in their Codes of Conduct. In particular, a Code of Conduct should explain:

  • Precisely which types of behaviors are harmful and could result in an action
  • What types of actions can be taken, and in what circumstances
  • What recourse players have if they experience harmful content, feel they’ve been wrongfully actioned, or want to limit the use of their personal data

While these are now regulatory requirements, Modulate customers will likely agree that they are also just good sense. The vast majority of misbehavior by players stems from unclear guidance on what’s permissible, and much of the trust lost between studios and their users comes from “black box” reporting, appeals, or actioning processes.

As such, Modulate has long advocated for platforms to provide clear, candid explanations of their Codes of Conduct. Our in-house marketing and community experts consult with customers to help them draft compliant terms, and continue working with studios through the rollout of the updated Code of Conduct to their users. The Modulate team can even help answer questions from users as they arise, should the studio desire us to do so. For platforms wondering where to start, we can also point to good examples of compliant Codes of Conduct used by other studios.

2. Producing regular Transparency Reports

Similar to the Code of Conduct requirements, Transparency Reports are meant to fill a void where regulators and consumers feel platforms have been insufficiently open about the severity and prevalence of harmful content on their platforms, and about what measures they are taking to resolve these issues.

Modulate customers enjoy several advantages in meeting these requirements. First, ToxMod automatically tracks action rates, how frequently appeals result in overturned decisions, and the accuracy of player reports. And because it’s a proactive voice moderation solution, it also provides insight into the total number of harmful behaviors across the platform, and even the number of individuals exposed to illegal content - both crucial components of an effective transparency report, which most studios currently lack the tools to measure.

In addition, Modulate can assist studios in generating the transparency reports themselves. Our templates leave room for studios to discuss their unique philosophy and community goals while filling in the key details, sparing your team days or even weeks of data analysis and cleaning.
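To make the data side concrete, here’s a minimal sketch of how a studio might roll raw moderation events up into the headline numbers a transparency report asks for. This is purely illustrative - the event types, field names, and metrics below are assumptions made for the example, not ToxMod’s actual schema or output.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical moderation event records; the fields are illustrative only.
@dataclass
class ModerationEvent:
    kind: str            # "detection", "user_report", "action", or "appeal"
    category: str        # e.g. "harassment", "hate_speech"
    upheld: bool = True  # for appeals: was the original action upheld?
    valid: bool = True   # for user reports: did review confirm a violation?

def summarize(events):
    """Aggregate raw events into the counts and rates a transparency report needs."""
    actions = [e for e in events if e.kind == "action"]
    appeals = [e for e in events if e.kind == "appeal"]
    reports = [e for e in events if e.kind == "user_report"]
    detections = [e for e in events if e.kind == "detection"]

    return {
        "actions_by_category": dict(Counter(e.category for e in actions)),
        "appeal_overturn_rate": (
            sum(not e.upheld for e in appeals) / len(appeals) if appeals else 0.0),
        "user_report_accuracy": (
            sum(e.valid for e in reports) / len(reports) if reports else 0.0),
        "total_detected_harms": len(detections),
    }
```

The point is simply that once detections, actions, reports, and appeals are logged consistently, the transparency-report numbers fall out of a straightforward aggregation rather than weeks of manual analysis.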

3. Minimizing terrorism, CSAM, and grooming content

Most platforms today rely primarily on user reports to identify harmful content, especially for media like voice or images which are difficult to assess automatically. However, regulators see this as insufficient protection, especially when it comes to harms like terrorism, child sexual abuse material (CSAM), and grooming which often take place in less public areas or involve victims who are not in a mental state to submit a report. As such, regulators will be pushing for platforms to be more proactive in detecting these types of harmful behaviors.

Thankfully, this is exactly what ToxMod is built for! As a proactive voice moderation system, ToxMod scans all voice chat content across the platform (while remaining compliant with relevant privacy laws) and identifies any conversations in which harmful or illegal discussions are taking place. These conversations are then logged and escalated to the studio to take action on, preserving a human-in-the-loop to make the final determination about any offenses detected. 

ToxMod’s design ensures that it can notice illegal behavior across the entire platform, not just what’s reported by users, and enables it to track how many users have been negatively impacted by that toxicity as well. Finally, regulators are aware of ToxMod and recognize it as best-in-class; enforcement agencies deciding which platforms to pursue often weigh whether the platform is making a genuine effort to take action, so merely deploying ToxMod may already position your studio more securely, while genuinely helping to protect your community!

4. Minimizing other types of harmful content

While illegal content is the most common focus, several active or proposed laws additionally take steps to hold studios responsible for broader types of harmful conduct on their platform, such as harassment and cyberbullying. In particular, where studios previously could rely on the excuse that they didn’t know what wasn’t reported to them, regulators and enforcement agencies alike are beginning to shift towards a “harm standard”, wherein sufficiently bad outcomes for users can create liability for a studio even if they never received user reports that this was happening.

Once again, the good news is that this is exactly what ToxMod is built for. As a cost-effective, privacy-respecting proactive moderation solution, ToxMod can identify toxic voice chat from across your ecosystem, categorize it, and hand you a prioritized queue of the very worst stuff your users are up to. Playful trash talk, reclamation of slurs, or villainous roleplay can all be distinguished from true harassment and bullying - ensuring you spend your time taking action against the most egregious bad actors, and can swiftly put a stop to anyone seeking to do harm to your community members.
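As a rough illustration of what building such a prioritized queue involves (a hypothetical sketch - the categories, record fields, and severity scores are assumptions for the example, not ToxMod’s actual output), the core idea is to filter out benign categories and sort what remains by severity:

```python
# Hypothetical detection records; field names and scores are illustrative only.
detections = [
    {"player": "p1", "category": "friendly_trash_talk", "severity": 0.2},
    {"player": "p2", "category": "harassment", "severity": 0.9},
    {"player": "p3", "category": "violent_speech", "severity": 0.8},
]

# Only categories the Code of Conduct actually prohibits belong in the queue.
ACTIONABLE = {"harassment", "violent_speech", "hate_speech"}

# Worst offenses first, so moderators spend their time where it matters most.
queue = sorted(
    (d for d in detections if d["category"] in ACTIONABLE),
    key=lambda d: d["severity"],
    reverse=True,
)

for item in queue:
    print(item["player"], item["category"], item["severity"])
```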

5. Offering a User Report portal and responding in a timely manner

Many regulations include a specific requirement that platforms offer a clear portal through which users can report illegal or harmful content. Most platforms already offer this.

Some of these new regulations also require that platforms respond to every single report, do so in a timely way, and include context about what decision has been made and why. Most platforms do NOT do this yet, as the sheer burden of assessing player reports can be quite costly, especially given that many users submit false reports out of malice or mischievousness. 

ToxMod solves this problem thanks to its proactive design. By default, ToxMod gathers basic context about every conversation on the platform; thus, when a user submits a report, ToxMod can “zoom in” on the conversation in question and quickly and automatically assess whether there is anything worth acting on. If not, the ticket can be closed, with the user informed there was no evidence of a violation; if ToxMod does see something, the report is escalated to the studio’s moderation team, with a guarantee that those moderators will have enough evidence to (efficiently!) make an accurate call about the situation.
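Sketched out, that triage flow looks roughly like the following. This is a hypothetical illustration of the logic only, not ToxMod’s API - the function, threshold, and scoring callback are all assumptions:

```python
from enum import Enum

class Outcome(Enum):
    CLOSE_NO_VIOLATION = "close_no_violation"
    ESCALATE_TO_MODERATOR = "escalate_to_moderator"

# Assumed threshold for illustration; a real system would tune this per harm category.
ESCALATION_THRESHOLD = 0.7

def triage_user_report(conversation_id, harm_score_lookup):
    """Use proactive context already gathered about a conversation to decide whether
    a user report can be closed automatically or needs human review."""
    score = harm_score_lookup(conversation_id)  # 0.0 (benign) .. 1.0 (severe)
    if score < ESCALATION_THRESHOLD:
        # Close the ticket and tell the reporter no violation was found.
        return Outcome.CLOSE_NO_VIOLATION
    # Attach the conversation evidence and hand it to a human moderator.
    return Outcome.ESCALATE_TO_MODERATOR

# Example usage with a stand-in scoring function:
print(triage_user_report("conv-123", lambda conv_id: 0.85))  # ESCALATE_TO_MODERATOR
```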

6. Offering an Appeals portal and responding in a timely manner

Similar to user reports, these regulations also require a portal for users to appeal actions that have been taken against them or their content. Once again, ToxMod helps here by ensuring the studio has clear evidence of the conversation which led to the action taken. When a user appeals a decision, the studio’s moderators will reliably be able to examine exactly what took place and determine whether or not to uphold the decision.

(Oh, and for anyone worried about ToxMod’s automated reports increasing the number of appeals to process, let your mind be at ease! ToxMod exceeds 98% accuracy in each of the categories of harm it detects. In other words, while ToxMod often identifies as much as 10x the number of bad actors as player reports alone, it is also tens of times less likely to make an error in judgment than user reports are to be false.)

7. Explaining to users why they have been actioned

Another common complaint from users historically has been that they are simply told they are being banned or muted, without clear context as to why. As a result, multiple regulations include provisions requiring studios to explain, when they action a user, which part of the Code of Conduct was violated.

ToxMod makes this process easy, by automatically categorizing harmful behavior into types like Sexual Vulgarity, Violent Speech, Racial/Cultural Hate Speech, etc. ToxMod can also highlight elements of the conversation that were particularly concerning, including keywords or phrases as well as aggressive emotion or interruptiveness. This information can then be used by studios to provide clear justifications for any punishments, which not only ensures compliance but also, according to EA, can massively reduce repeat offenses from players!
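To illustrate how a categorized detection can turn into the kind of user-facing explanation these rules require, here’s a toy example. The Code of Conduct section names, message template, and function below are hypothetical - they show the pattern, not Modulate’s implementation:

```python
# Hypothetical mapping from detection categories to Code of Conduct sections.
CODE_OF_CONDUCT_SECTIONS = {
    "sexual_vulgarity": "Section 2.1: Sexual content and vulgarity",
    "violent_speech": "Section 2.3: Threats and violent speech",
    "racial_cultural_hate_speech": "Section 2.4: Hate speech",
}

def explain_action(category, action, evidence_summary):
    """Build a user-facing notice naming the violated policy and the action taken."""
    section = CODE_OF_CONDUCT_SECTIONS.get(category, "our Code of Conduct")
    return (
        f"Your account received the following action: {action}. "
        f"This was based on behavior that violates {section}. "
        f"Specifically: {evidence_summary}. "
        "If you believe this was a mistake, you can appeal through the Appeals portal."
    )

# Example usage with made-up inputs:
print(explain_action(
    "violent_speech",
    "3-day voice chat mute",
    "repeated threats directed at another player during a match",
))
```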

8. Conducting regular Risk Assessments

Modulate helps platforms understand their risk posture in a variety of ways. ToxMod offers comprehensive protections to your players while complying with privacy regulations, has proven accuracy across major types of harm like harassment, and is battle-tested by other top game studios - so simply using ToxMod is already a great step toward reducing your risk along a variety of dimensions.

ToxMod can also provide insights into the behaviors of your players. Is toxicity mostly coming from new or old players? Kids or adults? Repeat offenders, or a wider swathe of players? These insights can help studios understand the greatest risks to their players’ safety and experience, and think through design improvements as well as moderation strategies to attack the problem at its source. 
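For a sense of how simple these breakdowns can be once consistent incident data exists, here’s a small illustrative example; the incident records and fields are made up for the sketch:

```python
from collections import Counter

# Hypothetical incident records; the fields are assumptions for illustration only.
incidents = [
    {"player_tenure_days": 3, "is_repeat_offender": False},
    {"player_tenure_days": 400, "is_repeat_offender": True},
    {"player_tenure_days": 2, "is_repeat_offender": False},
]

# Where is toxicity coming from: new players or veterans?
by_tenure = Counter(
    "new (<30 days)" if i["player_tenure_days"] < 30 else "veteran"
    for i in incidents
)
# How much of the problem traces back to repeat offenders?
repeat_share = sum(i["is_repeat_offender"] for i in incidents) / len(incidents)

print(by_tenure)
print(f"{repeat_share:.0%} of incidents involve repeat offenders")
```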

Finally, beyond ToxMod itself, Modulate’s experts make it their business to know best practices not just for voice moderation (sign up for our newsletter to hear our latest insights), but for all elements of online platforms, and we work closely with vendors, studios, and regulators to ensure we can provide the best advice to those looking to mitigate risk. If you’re trying to understand where you could be liable, or simply where your players may be facing harm, we are always happy to talk!

9. Configuring privacy by default for all kids

California’s Age-Appropriate Design Code Act includes a potent requirement that platforms ensure children start with the strictest possible privacy protections enabled. While this is ultimately just a UI update for the platforms, it does raise an important question - how do you know which users are children?

Typical age assurance methods like ID or payment checks work okay, but everyone - regulators included - knows they’re not good enough. The recent Epic Games / FTC case also proves that if you know there are kids on your platform, you need to go the extra mile to identify and protect them. Modulate helps with this by offering a voice-based analysis to identify underage users. While voice analysis, too, can be fooled, Modulate’s system has reached over 98% accuracy, and it only gets more confident the more it hears a user speak.
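As a toy illustration of why confidence grows the more a user speaks (this is a generic probabilistic sketch, not a description of Modulate’s actual model), consider combining independent per-utterance estimates that a speaker is a minor:

```python
def combined_minor_probability(per_utterance_probs, prior=0.5):
    """Naive-Bayes-style combination of independent per-utterance estimates.
    The more utterances point the same way, the further the combined estimate
    moves from 50/50 - i.e. the system becomes more confident."""
    odds = prior / (1 - prior)
    for p in per_utterance_probs:
        p = min(max(p, 1e-6), 1 - 1e-6)  # keep the odds ratio finite
        odds *= p / (1 - p)
    return odds / (1 + odds)

# Three utterances that each look ~70% like a minor combine into a much stronger signal.
print(combined_minor_probability([0.7, 0.7, 0.7]))  # ~0.93
```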

We strongly recommend including a more traditional method of age assurance early in the setup process; but adding Modulate’s age detection to catch kids who bypass that check takes your studio to the next level of safety and protects you from becoming an ‘example’ for regulators.

Preparing for Compliance

In conclusion, game studios need to be aware of the changing regulatory landscape around online safety and privacy, and take proactive steps as outlined here to comply with the various requirements. Modulate can help developers and publishers along their compliance journey with our voice-native, proactive moderation platform, which greatly minimizes harmful content, as well as by supporting studios in writing a clear Code of Conduct, producing regular Transparency Reports, and conducting regular Risk Assessments. By partnering with Modulate, studios of all sizes can not only meet regulatory standards, but also create a safer and more enjoyable online experience for their users.