Artificial Intelligence (AI) sits right at the top of today's sexiest technologies list, with governments, policymakers and industry excited about finding difficult problems it can solve, building collaborative partnerships for technical development (e.g., through AUKUS and the Quad), and setting moral and ethical standards for its use in society.
However, a significant challenge for AI adoption, particularly in the Defense and National Security domains, is security against cyber and other malicious attacks. In addition to the usual cyber security risks associated with digital technologies, AI and decision-making systems are uniquely vulnerable: they must learn from somewhere and make decisions based on their "experience", which opens up a range of additional threat vectors. Learning systems, like young children, need increased protection as they grow, as their learning is only as good as the information provided, how it's tailored to their needs and capabilities, and the environment they operate in. And, of course, there is no such thing as 'AI school' with associated standards to ensure that learning experience produces safe, resilient, and well-rounded systems that contribute positively to society when they hit the real world. Emerging fields such as adversarial machine learning (AML) are developing ways to test AI systems for vulnerabilities and inoculate systems against certain types of attacks, but this is a new space and there is a huge amount of work to be done to move towards holistic AI security that will stand up against sophisticated cyber attacks.
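As a rough illustration of why the "learning from somewhere" point matters (a toy sketch for this write-up, not material from the talk itself), the short Python example below flips a fraction of training labels on a synthetic dataset and compares the resulting model to one trained on clean labels. The dataset, model choice and 20% flip rate are all assumptions made purely for the example.

```python
# Illustrative only: a toy label-flipping "poisoning" demo on synthetic data.
# The dataset, model and flip rate are assumptions made for this sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for "training data from somewhere".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Clean model: learns from trustworthy labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: an attacker silently flips 20% of the training labels.
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_poisoned), size=int(0.2 * len(y_poisoned)), replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model typically performs worse, despite identical inputs at test time.
print("clean test accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```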
This talk will introduce key types of attacks against AI systems across the AI development chain, such as poisoning, evasion, information inference and degradation/disruption. It will also discuss how to holistically protect systems of national significance, such as future Defense capabilities where there are significant national security and potential threat-to-life risks. The talk will (try to) stay high level and will not require a background in AI or cyber security (though one will make it more interesting!). It will focus on the kinds of problems we are trying to solve and why they're important, some examples of interesting research, and aspects of the AI ecosystem that need more investment and wider participation beyond the tech community.
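For a flavour of what an evasion attack looks like in practice (again an illustrative sketch, not content from the talk), the example below nudges a single input to a toy logistic-regression classifier in the direction of its loss gradient, in the style of the fast gradient sign method, so that the prediction flips. The dataset, the choice of a near-boundary point and the perturbation size eps are assumptions made for the sketch.

```python
# Illustrative only: a minimal evasion (FGSM-style) sketch against a toy
# logistic-regression classifier. Dataset and eps are assumptions for the example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pick a correctly classified input near the decision boundary so a small
# perturbation is enough to change the model's answer.
probs = model.predict_proba(X)[:, 1]
correct = np.where(model.predict(X) == y)[0]
i = correct[np.argmin(np.abs(probs[correct] - 0.5))]
x, label = X[i], y[i]

# Perturb the input in the direction that increases the model's loss:
# x_adv = x + eps * sign(d loss / d x), where for logistic regression
# the gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
grad = (sigmoid(w @ x + b) - label) * w
eps = 0.5
x_adv = x + eps * np.sign(grad)

print("original prediction:   ", model.predict(x.reshape(1, -1))[0], "(true label:", label, ")")
print("adversarial prediction:", model.predict(x_adv.reshape(1, -1))[0])
```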
For those who are currently taking or have taken Bruce Schneier's cyber security class, this talk can be thought of as an applied cyber security case study for AI in the context of national security.
Karly Winkler is a Recanati-Kaplan Fellow from Australia with the Intelligence Project and has over 22 years of experience in signals intelligence and cyber security. In a previous role, she ran a team in the Australian DoD that focused on using AML techniques to identify and mitigate vulnerabilities in AI systems, working with Defense AML researchers and engaging with system owners to provide security advice.