Microsoft Research Webinar Series
Transparency and Intelligibility Throughout the Machine Learning Life Cycle
People play a central role in the machine learning life cycle. Consequently, building machine learning systems that are reliable, trustworthy, and fair requires that relevant stakeholders—including developers, users, and anybody affected by these systems—have at least a basic understanding of how they work.
In this webinar, Microsoft researcher Jenn Wortman Vaughan examines how best to incorporate transparency into the machine learning life cycle, illustrating three components of transparency with examples: traceability, communication, and intelligibility.
The second part of the webinar dives deeper into intelligibility. Building on recent research, it covers the importance of evaluating intelligibility methods in context with relevant stakeholders, how to empirically test whether intelligibility techniques help users achieve their goals, and why our concept of intelligibility should extend beyond machine learning models to other aspects of machine learning systems, such as datasets and performance metrics.
Together, you’ll explore:
- Traceability: documenting goals, definitions, design choices, and assumptions of machine learning systems
- Communication: being open about the ways machine learning technology is used and the limitations of this technology
- Intelligibility: giving people the ability to understand and monitor the behavior of machine learning systems to the extent necessary to achieve their goals
- The intelligible machine learning landscape: the diverse ways that needs for intelligibility can arise, along with techniques proposed in the machine learning literature for achieving and evaluating intelligibility
Dr. Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City, and Co-Chair of the Microsoft Aether working group on Intelligibility and Explanation. In recent years, she has turned her attention to fair and intelligible machine learning as part of the Microsoft Research FATE group.
She completed her Ph.D. at the University of Pennsylvania and subsequently spent time as a Computing Innovation Fellow at Harvard and an Assistant Professor at UCLA. She is the recipient of Penn's 2009 Morris and Dorothy Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, and a Presidential Early Career Award for Scientists and Engineers (PECASE). She is co-founder of the Annual Workshop for Women in Machine Learning, which has been held each year since 2006.