Making Ethics Actionable

In recent years, some truly amazing leaps have been made in all kinds of machine learning technologies - from facial recognition to text generation to voice analysis and much more. These new innovations enable a variety of rich and exciting applications, but at the same time they introduce ethical questions that we, as an industry, have a responsibility to take seriously.

Modulate has put ethics at its heart from the beginning. When we first started developing VoiceWear, it was obvious that the revolutionary technology also came with a set of novel risks, and our early team members spent huge amounts of time not only debating these tradeoffs internally, but also going out into the world, interviewing experts and ordinary people alike, to better understand concerns that might have lived in our blind spots. You’re never done figuring out the ethical questions around a new technology, but over a couple of years we found ourselves landing in a largely stable place, with a good understanding of how to mitigate the risks VoiceWear could present while still preserving the benefits of such a novel tool.

About a year and a half ago, we began development on ToxMod, a new service within our platform that performs real-time, proactive voice moderation. Like VoiceWear, ToxMod was (and is!) a service capable of tremendous good, helping online communities everywhere build a more inclusive and safe environment. But it also introduced new ethical questions of its own. What factors should be considered when assessing whether someone is acting poorly? Can we consider, say, their gender, especially if the conversation seems to involve sexual harassment? And how would we classify gender in the first place? You can’t reliably know someone’s gender identity from their voice, and naive models that classify users as either male or female could erase the reality of LGBTQ+ users of the platform. And so on.

The difference from when we started out, though, is that Modulate is now a 16-person company. It’s no longer so easy to guarantee that every member of the company can understand and trust how these decisions are being made - and it’s crucial to us that every employee knows they can trust Modulate to operate ethically. So we realized it was time to build a more systematic process.

The process we designed is meant to be simple enough that anyone can clearly understand and follow it, but at the same time powerful enough that it can drive real change and build the kind of trust from our team and our community that we’re committed to earning. And, excitingly, it doesn’t rely too heavily on the specifics of the ethical quandary at hand - meaning that the same process could be used by a huge variety of other companies that may be dealing with their own uncertainties.

So how does it work? The first step is Brainstorming. This phase is focused on answering the question “what harms could this system possibly cause, if misused, misconfigured, or otherwise put toward a problematic purpose?” During this phase, there are no bad ideas, and no discussion of whether a harm is ‘real,’ ‘actionable,’ or ‘important.’ Anything anyone can think of goes onto a sheet of paper, until you feel confident you’ve really explored the space.

It should go without saying that diversity of thought and perspective is absolutely crucial during the Brainstorming phase. Individuals will identify risks based on their own experiences and mindset, and can also help to direct the group’s attention to nuances that might otherwise have been missed. If you don’t already have a sufficiently broad group to create meaningful coverage of different viewpoints, then reach out to folks who can shed additional light on the problem. In our case, while we’ve generally been able to build a rich and diverse team, we didn’t have anyone in the room who identified as Black, and we knew that Black player communities have their own norms and challenges, so we reached out to experts from those communities to lend their perspective.

Once you’ve finished Brainstorming, you might think that your next task would be to brainstorm mitigations - but not yet! The reality is that not everything on the list you created is necessarily equal - you’ll likely need to prioritize which risks you mitigate first. Beyond that, even if you’re a giant corporation with the resources to tackle everything at once, you’ll still need to be able to tell whether or not you’ve actually solved each problem! Thus, the second phase is Measurement. This doesn’t necessarily mean measuring how bad the problem is in the wild - in fact, it generally shouldn’t mean that. If you’ve already shipped your product to your community when you’re only starting the second phase of thinking about the ethical implications, then you’ve gone out of order. (That said, even if you did go out of order, it’s still much better to dive into those ethics conversations today than to do nothing!)

Instead, Measurement is about defining how, in practice, you would identify the severity of each individual risk, and the likelihood of the bad outcomes you’ve predicted coming to pass. Sometimes you can run smaller tests to get an accurate estimate of these numbers, and begin iterating on solutions that way. Other times, you can measure the severity of the problem in other, related spaces, and use that to get a sense of the priority of each risk you’ve identified. But one way or another, your job is to find a way to quantify each risk. (The metric doesn’t have to be perfect, and it won’t be! But at minimum, the number you’ve chosen should increase if your system is causing harm, and should decrease or stay the same if you’ve successfully mitigated its risks.) Once you’ve finished this phase, you should have a metric matched to each of the risks you identified - and a prioritized list of which problems you’re going to tackle first.
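To make that concrete, here’s a minimal sketch of what the output of the Measurement phase might look like as a simple risk register. This is an illustrative assumption rather than Modulate’s actual tooling: the example risks, metrics, and 1-5 scoring scale are invented, and the familiar severity-times-likelihood score is just one reasonable way to turn rough estimates into a prioritized list.

```python
# Hypothetical sketch of a risk register: each risk carries the metric you'd
# track plus rough severity and likelihood estimates, and the product of the
# two drives prioritization. All entries and scales below are illustrative.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str   # the harm identified during Brainstorming
    metric: str        # how you'd measure whether the harm is occurring
    severity: int      # 1 (minor) to 5 (severe) - a rough, imperfect estimate
    likelihood: int    # 1 (unlikely) to 5 (very likely)

    @property
    def priority(self) -> int:
        # Classic risk-matrix score: higher means tackle it sooner.
        return self.severity * self.likelihood

risks = [
    Risk("Misreading a user's gender while assessing harassment",
         "rate of gender-related false flags on labeled test conversations", 4, 3),
    Risk("Over-flagging particular dialects or accents as toxic",
         "false-positive rate broken down by speaker community", 5, 3),
    Risk("Moderators acting on automated flags without review",
         "share of enforcement actions taken without human review", 3, 4),
]

# The prioritized list that closes out the Measurement phase.
for risk in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"[{risk.priority:2d}] {risk.description} -> track: {risk.metric}")
```

The particular scoring scheme matters far less than the discipline it enforces: every risk gets a number you can watch move up or down as you ship mitigations.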

And this finally brings us to Mitigations. During this conversation, you’ll bring in technical experts as well as product and UX designers, and ask “what can we actually do differently with this product in order to mitigate these risks?” Ideally, you’ll come up with clever solutions that circumvent the whole issue entirely, but most of the time it will be more about making tradeoffs that decrease the risk to a more tolerable level. And then, of course, you’ll actually have to build and test the ideas you come up with.

I should be clear that the end result of this process isn’t that your product is suddenly risk-free and ethically optimal. The real point of the process is clarity. Once you’ve completed these three phases - which might each be a single meeting, or might each take weeks or months to resolve - you and your team will all actually be on the same page about the risks of your technology, and how you’re going to handle them. This is fantastic, not only because it builds trust, but because it enables your team to hold you accountable to following through and really doing the work. On top of this, being able to speak publicly about each of these concerns, knowing that you’ve genuinely worked through the best way to handle them, makes you much better equipped to succeed in a market that’s become increasingly aware of the importance of equity, safety, and inclusivity.

To reiterate one final point - you’re never done thinking about the ethics of your technology, because the world is never done changing. But by running a simple three-step process, you can improve your ability to notice issues you would otherwise miss; get your team and your community aligned on how you’re thinking about the balance between those risks and the benefits of your technology; and ultimately ship a product that has a far more profound positive impact on the world.