Blog Post
from Belfer Center for Science and International Affairs, Harvard Kennedy School

Series Explores AI and Algorithm Regulations and Practices

Josephine Wolff, Associate Professor of Cybersecurity Policy at The Fletcher School at Tufts University, discussed the history of cyberinsurance and war exclusions at an AI Cyber Lunch on November 16, 2022.

This fall, the Belfer Center’s Science, Technology, and Public Policy Program (STPP) brought back the popular AI Cyber Lunch seminar series to explore issues at the forefront of technology and, increasingly, public policy. 

The hybrid seminar series, organized by Cyber Project Fellow and HKS Adjunct Lecturer in Public Policy Bruce Schneier and STPP Fellow Cathy O’Neil, brought a wide range of speakers to Harvard Kennedy School to discuss how new and emerging technologies can be harnessed to enhance, rather than harm, society.

While the series marked a return to form after the pandemic for the Belfer Center, which has hosted cybersecurity lunches for many years, this semester was the first to explicitly focus on the theme of artificial intelligence and algorithms. “In doing so, it expanded the landscape of ‘security’ in ways that were useful and meaningful,” said Schneier. “It's not enough to secure physical computers or databases. You have to think about the people and social systems that are affected by them. Or, to put it another way, computer security doesn't end at the keyboard and chair.”

“It's not enough to secure physical computers or databases. You have to think about the people and social systems that are affected by them...Computer security doesn't end at the keyboard and chair.” 

– Bruce Schneier

“[Algorithms] are not pet projects—they are forming the future of our world,” O’Neil remarked during the final lunch of the semester. “A company like LinkedIn, for instance, has a tremendous amount of power, but we don’t even know how to define fairness as it relates to their algorithms.” Many of the speakers touched upon the challenge of defining algorithmic fairness, as well as the systemic rights and dignity violations that can result from even well-intentioned algorithms, particularly for marginalized groups like people of color, women, and people with disabilities. 

Gallery

"[Bug bounty hunting] is highly contingent work: you may or may not get paid depending on if the bug is deemed valid or not. It's displacing the risk away from companies and onto the lives of workers themselves." – Ryan Ellis, Associate Professor of Communications Studies, Northeastern University

"Locating the responsibility to protect one's privacy in the individual is wrong. These are societal problems that need systemic solutions. However, individuals can make it more difficult for their privacy to be violated by law enforcement." – Cindy Cohn, Executive Director, Electronic Frontier Foundation

"The decisions being made by AI systems are life-affecting... Certain uses of AI must be banned outright, like one-to-many facial recognition and emotion recognition. No amount of oversight can mitigate the harms of these technologies." – Caitriona Fitzgerald, Deputy Director, Electronic Privacy Information Center

Schneier and O’Neil are already planning the slate of speakers for next semester, with a continued emphasis on the voices of women in AI and cybersecurity—an unintentional but welcome feature of this semester’s lineup, according to the organizers. “I only noticed after I had invited the most qualified people in my field,” said O’Neil. “The truth is, my field of AI fairness is heavily female—and, in fact, heavily women of color.”

“Expertise cuts across a wide diversity of individuals, but sometimes it takes a little more effort to look beyond the obvious names and access that diversity,” added Schneier. “But if you do that, you get a much better conversation.”


To receive announcements about future AI Cyber Lunches, join our distribution list or follow @BelferSTPP on Twitter.

Recommended citation

Hanlon, Elizabeth. “Series Explores AI and Algorithm Regulations and Practices.” Belfer Center Newsletter, Belfer Center for Science and International Affairs.