The AI Race is On
Intelligence agencies around the world are racing to adopt the capabilities of Artificial Intelligence (AI). AI tools have already been deployed or are in development by defense, intelligence, and law enforcement organizations for a range of functions, including image recognition (facial recognition, object detection), language translation, and insider threat detection. The FBI is using AI-enabled technology to evaluate tips to ensure they are accurately identified, prioritized, and processed in a timely manner. The CIA’s Open Source Enterprise launched an internal “ChatGPT-style” AI tool to give analysts “better access to open-source intelligence,” and the NSA opened an “Artificial Intelligence Security Center,” focused on defending “the Nation’s AI through Intel-Driven collaboration with industry, academia, the Intelligence Community (IC), and other government partners.”
The US is not alone in pursuing the benefits of AI in the security space. China’s intelligence services are already using AI to identify foreign intelligence officers, and the US is working to deny China and other adversaries access to the most advanced AI-related technology and services. Non-state actors, such as fraudsters, are leveraging AI to advance their schemes through AI-generated phishing attacks and voice-cloning scams, likely just a sampling of the schemes to come.
As the US moves to adopt AI technology in an expanding number of security applications, how can the IC best position its analysts to use this technology for intelligence analysis, and how will the use of these tools affect analysts’ ability to meet existing analytic standards? This paper considers the implications for meeting IC Analytic Standards when using an AI tool, explores several of the key issues involved, and makes recommendations for amending the current standards, anticipating challenges, and setting the conditions for the IC to succeed in adopting AI.