Practicing Ethical Approaches in Machine Learning and AI Research

At Modulate, we care deeply about the ethical use of machine learning. While the field of machine learning has experienced remarkable growth in recent years, ethical guidelines and research have often lagged behind. The Modulate machine learning team recently attended the 2023 International Conference on Machine Learning (ICML), where we uncovered a treasure trove of new research exploring the ethical aspects of AI development. Since returning from ICML, we've been dissecting this research with the goal of continuing to improve Modulate's ethical approach to ML model development.

In this blog post, we'll delve into some of the latest data and insights we've gathered, explore how these findings can positively influence our ongoing work (particularly the ethical development of ToxMod), and share what we hope to do to move the needle on ethics in AI.

Our Approach to Ethics in AI

At Modulate, we place great emphasis on ensuring the fairness of our internal model design and performance. As we developed our signal models, we also began to investigate how to ethically implement and evaluate such models in order to properly protect all gaming communities. This initiative spurred ongoing company-wide ethics discussions around large-scale changes to ToxMod, as well as our models' Equity Evaluation process. We use this framework to evaluate whether our models perform consistently across different minority demographic axes, using balanced in-domain datasets, and we closely monitor these metrics before considering any model update.
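To make the idea concrete, here is a minimal sketch of the kind of per-group comparison an equity evaluation like this involves: compute a metric (accuracy here, for simplicity) for each demographic group on a balanced dataset and report the largest gap between groups. The function name, metric choice, and data are illustrative, not Modulate's actual evaluation code.

```python
from collections import defaultdict

def equity_gap(labels, predictions, groups):
    """Compute per-group accuracy and the largest gap between any two groups.

    labels, predictions: sequences of 0/1 outcomes.
    groups: sequence of demographic-group identifiers, one per example.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, p, g in zip(labels, predictions, groups):
        total[g] += 1
        correct[g] += int(y == p)
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Toy example: a balanced dataset with two groups of equal size.
labels      = [1, 0, 1, 0, 1, 0, 1, 0]
predictions = [1, 0, 1, 1, 1, 0, 0, 0]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]
acc, gap = equity_gap(labels, predictions, groups)
```

In practice you would track several metrics per group (false-positive rate matters at least as much as accuracy for moderation models) and flag any model update that widens the gap.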

We've also been actively monitoring cutting-edge methods for uncovering and mitigating model biases. Here are some of the noteworthy resources that are guiding our efforts:

  • "Fair and Optimal Classification via Post-Processing" (Read Paper)
    This paper discusses novel post-processing methods for selecting fair and optimal classifiers. This research highlights new techniques that we at Modulate can explore implementing for fairer signal modeling.
  • HuggingFace Stable Bias (Explore)
    The HuggingFace Stable Bias initiative focuses on exposing biases in text-to-image systems. This exposure allows us to think more deeply about how to quantify similar biases within models in the audio domain.
  • "Men Also Do Laundry: Multi-Attribute Bias Amplification" (Read Paper)
    This research expands on current bias amplification metrics by introducing a new metric that quantifies bias amplification across multiple attributes in the image domain, prompting us to explore how models could exploit correlations among multiple attributes in the audio domain.
  • "Bias-to-Text: Debiasing Unknown Visual Biases through Language Interpretation" (Read Paper)
    This study investigates the debiasing of unknown visual biases in image models by leveraging language interpretation, encouraging us to consider widening our understanding of the unknown space of bias in the audio domain.
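To illustrate the post-processing idea from the first paper above in its simplest form: after a classifier is trained, one can adjust its decision thresholds per group so that selection rates are equalized, without retraining the model. The sketch below implements this demographic-parity-style variant only; the paper's method is more general, and all names and numbers here are hypothetical.

```python
import numpy as np

def fit_group_thresholds(scores, groups, target_rate):
    """Pick a per-group decision threshold so that each group's
    positive-prediction rate matches target_rate (a simple
    post-processing fairness adjustment; no retraining needed)."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    thresholds = {}
    for g in np.unique(groups):
        # The (1 - target_rate) quantile of this group's scores flags
        # roughly the top target_rate fraction of that group.
        thresholds[str(g)] = float(np.quantile(scores[groups == g], 1.0 - target_rate))
    return thresholds

def predict(scores, groups, thresholds):
    return [int(s >= thresholds[g]) for s, g in zip(scores, groups)]

# Toy example: raw scores are distributed differently across two groups,
# but after thresholding, both groups are flagged at the same 50% rate.
scores = [0.1, 0.4, 0.6, 0.9, 0.2, 0.3, 0.7, 0.8]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
thresholds = fit_group_thresholds(scores, groups, target_rate=0.5)
preds = predict(scores, groups, thresholds)
```

Equalized selection rates are only one fairness criterion among several (equalized odds, for example, constrains error rates instead), and which criterion is appropriate depends on the application.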

Collaboration and Improving Access to Data

In December 2023, I'll have the privilege of co-organizing the Audio in Machine Learning Workshop at NeurIPS. During this event, we'll provide access to two substantial datasets from Modulate, which are particularly valuable given the scarcity of data in the audio research domain.

We recognize the scarcity of audio research data in the field, and our aim is to encourage further exploration and research into ethical and effective machine learning models in the audio space. The workshop will cover a wide array of topics, including speech modeling, environmental sound generation, transcription, source separation, and more.

Our overarching goal is to foster collaboration, discussion, and the forging of new research directions within this critical area of study. By improving access to extensive audio datasets and promoting collaboration, we believe we can drive the development of ethical and impactful machine learning models in the audio domain.

At Modulate, ethics are at the core of our AI journey. We are dedicated to advancing the field responsibly and ensuring that our technology benefits society as a whole. Stay tuned for more updates on our ethical AI endeavors as we continue to evolve and grow.