Back in March, we announced that ToxMod now detects 18 languages, including Spanish, Portuguese, Korean, and many others. Understanding language, especially spoken language, goes beyond spelling, vocabulary, and syntax. Language is both a reflection and a product of culture. Regional dialects and subcultures use phrases and terminology differently, and these nuances can make understanding difficult. For example, Korean pejorative and derogatory terms are often rooted in familial contexts, a pattern that is less common in English and doesn't carry the same connotation there. Modulate is based in the US, and we want to do our best to avoid applying an American, culture-centric point of view to non-English languages. So, to detect harmful speech accurately, we build and improve ToxMod and its multilingual capabilities with cultural awareness top of mind.
In today’s blog post I want to share the processes that the Machine Learning team and I undertake to research, validate, and implement new languages in ToxMod, as well as the ongoing systems of checks and balances we’ve put in place to ensure our multilingual capabilities are constantly improving.
Identifying Language Priorities
To kickstart the process of adding a new language to ToxMod, our team listens carefully to customer feedback and suggestions. Our Product and Accounts teams play a crucial role in this phase, helping the Machine Learning team to prioritize languages based on customer input.
For instance, we announced the launch of ToxMod in a handful of Activision’s Call of Duty titles in late August. When the Activision team expressed a need for Spanish language support, it became a top priority for our team to re-validate our Spanish language models. This approach ensures that we cater to the specific needs of our customers.
The heart of ToxMod's language expansion lies in data. We start with open-source models: although they are trained on out-of-domain data, they give us a valuable baseline understanding of a particular spoken language. But we also want our models to be culturally competent and grounded in real video game data, so we then build on that initial open-source model by incorporating our proprietary, in-domain voice chat data.
This robust data foundation is vital for creating accurate language models that can detect and moderate toxicity effectively.
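To make the layering idea concrete, here is a deliberately simplified sketch, not ToxMod's actual model (ToxMod learns from voice data rather than keyword lists, and every term and weight below is a hypothetical example): a generic baseline lexicon extended with in-domain gaming terms.

```python
# Toy illustration of extending a generic baseline with in-domain knowledge.
# All terms and weights are invented for this example.

BASELINE_TERMS = {"idiot": 2, "trash": 1}         # hypothetical generic lexicon
IN_DOMAIN_TERMS = {"rage quit": 1, "feeding": 2}  # hypothetical gaming-specific terms

def severity(transcript: str) -> int:
    """Naive severity score: sum the weights of every matched term."""
    text = transcript.lower()
    # In-domain entries layer on top of (and can override) the baseline.
    lexicon = {**BASELINE_TERMS, **IN_DOMAIN_TERMS}
    return sum(weight for term, weight in lexicon.items() if term in text)

print(severity("Nice rage quit, you idiot"))  # 3
```

A real system learns these associations from audio and context rather than string matching, but the principle is the same: the generic baseline gets you started, and the in-domain data is what makes detection accurate for gaming speech.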
To make sure we get it right, we work directly with native language experts who have knowledge of gaming or consider themselves avid gamers. Luckily, we have many in-house Modulators who provide valuable insights into their native language and culture as we validate and develop new models. For example, when we developed Korean language capabilities, we were fortunate to have the expertise of Mina, our full stack engineer, who provided an initial list of common Korean toxic terms and phrases.
To better understand subcultures and the ways they use terms and phrases, we extend our reach to gaming communities, online resources like Discord groups, and even Boston-area international students. By engaging a wide array of native language experts and gaming enthusiasts, we aim to ensure that our language models capture cultural nuances, colloquialisms, and the unique voice of each language.
Testing and Validation
Before a new language goes live in ToxMod, thorough testing and validation are crucial. We source target language data and run it through our scoring algorithm. Consultants then rate both the accuracy of the transcriptions and the sample scores, flagging bugs or inaccuracies in the scoring. This process ensures that our models perform accurately and equitably across languages.
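One standard way to quantify transcription accuracy during this kind of validation is word error rate (WER), the word-level edit distance between a reference transcript and the model's output, divided by the reference length. This is an illustrative sketch with an invented Spanish example, not a description of ToxMod's internal tooling:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over word sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1] / len(ref)

# One substitution plus one dropped word out of five reference words.
print(word_error_rate("el jugador abandona la partida",
                      "el jugador abandono partida"))  # 0.4
```

Metrics like this catch transcription drift, but they can't judge whether a score is culturally appropriate, which is exactly why human consultants review the samples as well.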
Applying cultural knowledge and language expertise ensures our system is sensitive to the unique characteristics of each language, making moderation more effective.
Launching New Languages
Once our language model is validated with native speakers, our production team swings into action to implement the new language in ToxMod consoles. But implementation is only one part of accurate detection. We also share best practices and advice with customers on how to best utilize multilingual detection.
For example, Modulate encourages customers to hire native speakers for their moderation teams. Since ToxMod is a tool for human moderators to use, those moderators should also have knowledge of and experience in the language and culture they are moderating. This type of customer partnership helps to guarantee that the language support provided by ToxMod is maximally effective.
In the fast-paced world of online gaming and virtual communities, we understand the urgency of adapting to emerging language needs. The English language alone is constantly evolving, with the Merriam-Webster Dictionary adding 690 new words in September 2023. The ToxMod team can complete the research, development, validation, testing, and launch of a new language in as little as two weeks, ensuring rapid response to shifts in language.
While English is currently our most extensive language offering (over 1.5 billion people speak English worldwide, according to a 2023 Statista survey), we are committed to developing and maintaining other languages, helping gaming communities worldwide stay safe and fun.
The addition of new languages to ToxMod is a complex and meticulous process, involving linguistic expertise, data-driven development, and agile implementation. It's all part of our commitment to providing a safe and inclusive online environment for gamers from diverse linguistic backgrounds. As we continue to grow and adapt, we look forward to expanding our linguistic support even further, making online interactions enjoyable for everyone, regardless of their language.