What if racism, sexism, and ableism aren't just glitches in mostly functional machinery—what if they're coded into our technological systems? In this talk, data scientist and journalist Meredith Broussard discusses why neutrality in tech is a myth and how algorithms can be held accountable.
Broussard, one of the few Black female researchers in artificial intelligence, explores a range of examples: from facial recognition technology trained only to recognize lighter skin tones, to mortgage-approval algorithms that encourage discriminatory lending, to the dangerous feedback loops that arise when medical diagnostic algorithms are trained on insufficiently diverse data. Even when such technologies are designed with good intentions, Broussard shows, fallible humans develop programs that can result in devastating consequences.
Broussard argues that the solution isn't to make omnipresent tech more inclusive, but to root out the algorithms that target certain demographics as “other” to begin with. She explores practical strategies to detect when technology reinforces inequality, and offers ideas for redesigning our systems to create a more equitable world.
Meredith Broussard is an associate professor at New York University. Her books include More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech and Artificial Unintelligence: How Computers Misunderstand the World. Her research focuses on artificial intelligence in investigative reporting, with particular interests in AI ethics and using data analysis for social good. She appears in the Emmy-nominated documentary “Coded Bias” and serves as research director at the NYU Alliance for Public Interest Technology.