When algorithms fall short of articulated expectations, people get hurt.
To hold those who build AI systems accountable for the consequences of their actions, we can operationalize a system for auditing algorithmic systems. Algorithm audits have for years been part of the conversation around online platforms, but they are only beginning to emerge as a mode of external oversight and evaluation for deployed "automated decision systems" (ADS), and are making their way into critical policy proposals as a primary mechanism for algorithmic accountability. However, as we have learned from many other industries, not all audits are created equal. In this talk, Raji will discuss the ongoing challenges involved in executing algorithm audits effectively and highlight the technical, legal, and institutional design interventions necessary for audits to serve as a meaningful approach to accountability.


Deborah Raji is a Mozilla Fellow and CS PhD student at the University of California, Berkeley, whose research focuses on algorithmic auditing and evaluation. In the past, she worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products. She has also worked with Google’s Ethical AI team and been a research fellow at the Partnership on AI and the AI Now Institute at New York University, working on various projects to operationalize ethical considerations in ML engineering practice. Recently, she was named to Forbes 30 Under 30 and MIT Technology Review's 35 Innovators Under 35.