Microsoft Research Webinar Series
Available on-demand. Register now.
Fairness-related harms in AI systems: Examples, assessment, and mitigation
AI has transformed modern life with feats once thought impossible, from machines that can master the ancient board game Go and self-driving cars to developments we experience more routinely, such as virtual agents and personalized product recommendations. At the same time, these new opportunities have raised new challenges, most notably the potential for AI systems to cause fairness-related harms. Indeed, the fairness of AI systems is one of the key concerns facing society as AI continues to influence our lives in new ways.
In this webinar, Microsoft researchers Hanna Wallach and Miroslav Dudík explain how AI systems can lead to a variety of fairness-related harms. They then take a closer look at assessing and mitigating two specific types: allocation harms and quality-of-service harms. Allocation harms occur when AI systems allocate resources or opportunities in ways that can have significant negative impacts on people’s lives, often in high-stakes domains like education, employment, finance, and healthcare. Quality-of-service harms occur when AI systems, such as speech recognition or face detection systems, fail to provide a similar quality of service to different groups of people.
Together, you’ll explore:
- Examples of fairness-related harms and where these harms originate
- Assessment methods for allocation harms and quality-of-service harms
- Unfairness mitigation algorithms, including when they can and can’t be used and what their advantages and disadvantages are
*This on-demand webinar features a previously recorded Q&A session and open captioning.