The rapid progress of artificial intelligence and AI-based technologies is occurring on many different fronts. Nations, industries, and corporations are breaking new ground with these advances, and the public entrusts these actors to apply their capabilities responsibly and with accountability. Given the broad power these technologists wield, the development of artificial intelligence—an underpinning of our future lives—ought to anticipate societal impacts and be guided by principles rooted in ethics.
This discussion will explore the following questions: What are the existing principles guiding the development and use of AI? What are the current gaps in AI governance? And how do we ensure responsible innovation moving forward?
Opening Remarks & Keynote
- Ash Carter, Director, Belfer Center for Science and International Affairs
- Joi Ito, Director, MIT Media Lab
Moderated By: Laura Manley, Director, Technology and Public Purpose Project
- Danny Hillis, Co-Founder, Applied Invention
- Milo Medin, VP, Wireless Services, Google
- Richard Murray, Professor, Caltech
- Other speakers to be confirmed
As space is limited for this event, RSVPs will be accepted on a first come, first served basis.