Policy Brief

Action on AI: Unpacking the Executive Order’s Security Implications and the Road Ahead

Nov. 08, 2023

On October 30, 2023, President Biden issued Executive Order (14110) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[1] Directing agencies to take action within existing resources and authorities, this order seeks to empower the U.S. to realize the full benefits of AI, while still mitigating critical risks. Alongside a focus on civil rights, consumers, workers, competition and privacy, the Executive Order also aims to promote national security by addressing AI safety and security risks, bolstering U.S. technology leadership, enhancing government AI capacity, and leading international cooperation efforts. 

This article provides an overview of key national security initiatives contained in the new Executive Order and explores issues relevant to their implementation, including: 

  1. Ensuring Safe and Secure AI 
  2. Maintaining U.S. Tech Leadership
  3. Building Government Capacity

1. Ensuring Safe and Secure AI

The Executive Order charts a path towards a strong safety and security foundation for AI models impacting the national security landscape, with a strong focus on cyber, nuclear, biological, and chemical threats.

Dual-use Foundation Models

Dual-use foundation models are defined as AI models that are trained on broad datasets, contain at least tens of billions of parameters, and pose (or can be modified to pose) serious risks to security, public health, or safety. Specifically, these models are expected to be capable of lowering the barrier to entry for non-experts to design chemical, biological, radiological, or nuclear weapons, as well as enabling cyber attacks against critical infrastructure and permitting the evasion of human oversight through deception. Given these powerful implications, much of the Executive Order focuses on addressing dual-use foundation models, and companies developing such models are now required to inform the federal government of their activities with detailed reports on results and performance (within 90 days of the order’s issuance). 

The Department of Commerce is tasked with analyzing the ways in which these dual-use models can constitute national security hazards by making critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks. Biosecurity threats and opportunities feature prominently in the Order, especially in connection with biological weapons, pathogens, and omics studies (i.e., those pertaining to the biomolecules that make up cellular systems). (For more information on the intersection of AI and biosecurity, see our primer here.) Cyber threats, including malicious cyber-enabled activities leveraging U.S. Infrastructure as a Service (IaaS) products as well as labeling and detection efforts[2] to mitigate harmful synthetic content, are also regarded with concern.

Reporting Mandates 

Stringent reporting mandates now apply to companies developing foundation models trained using computing power greater than 10^26 integer or floating-point operations (the basic mathematical operations a machine carries out to execute any calculation), or using a single cluster of machines connected through data center networking of over 100 Gbit/s. While the Secretary of Commerce will be reviewing and updating these guidelines in the coming months, debate exists within the scientific community as to whether these thresholds, which appeared in previous research[3] and are very close to the compute currently utilized by models like OpenAI’s GPT-4[4], represent the most effective way to establish which models need higher scrutiny and oversight. At least initially, these thresholds will effectively capture the next generation of foundation models. Given the potential for these models to be used for a vast array of tasks and to sometimes possess unexpected capabilities, these requirements are a useful first step in identifying and addressing safety and misuse risks.[5] However, these fixed thresholds might miss smaller yet still powerful models, and could easily become obsolete as algorithms become more and more efficient in their use of compute.[6] 
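To make the 10^26-operation threshold concrete, the back-of-the-envelope check below uses the common ~6 × parameters × tokens approximation for the total operations of a dense-transformer training run. The parameter and token counts are illustrative assumptions, not figures from the Executive Order, so this is a rough sketch of how such a threshold would be applied, not an official methodology.

```python
def training_operations(parameters: float, tokens: float) -> float:
    """Approximate training compute using the common rule of thumb of
    ~6 floating-point operations per parameter per training token."""
    return 6 * parameters * tokens

# Reporting threshold from the Executive Order: 10^26 operations.
THRESHOLD = 1e26

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens
# (illustrative numbers only).
ops = training_operations(1e12, 20e12)
print(f"{ops:.2e} operations -> reportable: {ops > THRESHOLD}")
```

Under these assumed figures the run lands at roughly 1.2 × 10^26 operations, just over the line, which illustrates why the threshold is expected to capture the next generation of frontier models while leaving most of today's deployed systems below it.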

While these principles represent a good starting point in the establishment of safe and secure AI systems, the actual guidelines proposed by the agencies involved will give a much better sense of what to expect in more practical terms on tools, plans, and AI testbeds for model evaluation. It will also be crucial to see how the principles of continuous testing, monitoring, benchmarking, documentation, and certification will be applied to guarantee resilience and trustworthiness of national security applications.  

Establishing an Artificial Intelligence Safety and Security Board

It is encouraging to see the establishment of an Artificial Intelligence Safety and Security Board – a committee including experts from the private sector, academia, and government – that will hopefully remain independent and actively engaged in decision-making processes on security, resilience, and disaster response (rather than just intermittently tapped as advisors). As the Order places a strong focus on the protection of human rights, building on previous work such as the OSTP’s Blueprint for an AI Bill of Rights[7] and the NIST AI Risk Management Framework[8], it will be critical to ensure that national security concerns are balanced with the crucial need for privacy, including minimum risk-management practices for government uses of AI that impact people’s rights or safety.

2. Maintaining U.S. Tech Leadership

“When it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyze global action and build global consensus in a way that no other country can.” - Vice President Harris[9]

The Executive Order also outlines a range of initiatives designed to strengthen the U.S. position as a leader in AI, in recognition that technology leadership is crucial for national security, economic, and geopolitical interests. These include establishing additional National AI Research Institutes, enhancing training programs for researchers, and stimulating regional research and development.

National AI Research Resource 

A standout initiative is the pilot of the National AI Research Resource (NAIRR), which brings together public and private computational, data, training, and model resources to provide the research community with the tools to undertake innovative AI research. Such a program is vital for researchers to meaningfully contribute to frontier AI research, where the costs are becoming prohibitive – for example, the final training run of OpenAI’s GPT-4 is estimated to have cost $50 million.[10] Broadening access to AI resources, particularly the costly compute power, will help strengthen academia’s contribution to AI advancements and could help ensure developments are geared towards national interests (including security interests), rather than commercial objectives. Reducing reliance on a small number of industry leaders could also enable more informed policy development and help mitigate risks of regulatory capture. However, the success of the NAIRR pilot may be jeopardized due to funding shortfalls. The taskforce that developed the NAIRR plan recommended agencies seek funding from Congress to enable its implementation,[11] but the Executive Order does not provide any. In the context of various competing AI priorities, careful oversight will be key to ensuring the NAIRR pilot remains ambitious and is effectively delivered from within agency resources.

Attracting AI Talent 

The Executive Order also recognizes the critical role of international workers in maintaining the U.S.'s AI prowess, and as such, it tasks relevant Secretaries with considering updates to a number of existing visas intended to attract and retain international AI talent. In addition to established visa channels, the Secretaries could also consider the feasibility of attracting AI talent from key strategic competitors. In particular, China has been prolific in its use of talent recruitment programs, actively recruiting scientists and experts from the U.S. and other countries.[12] Targeted recruitment of AI students and experts seeking to permanently move from competitor states to the U.S. could, when carefully undertaken, help the U.S. retain its technological advantage.

3. Building Government Capacity

Government capacity in AI is key to enhancing security, as well as enabling regulation that balances security with the innovation needed to maintain AI leadership.

Direction to Government Agencies 

The Executive Order sets out the path for agencies to capitalize on AI’s potential to improve cybersecurity. It orders the Secretary of Defense (for national security systems) and the Secretary of Homeland Security (for non-national security systems) to pilot using AI capabilities to discover and help remediate cyber vulnerabilities in critical government software, systems, and networks. In light of emerging AI-enabled cyber threats, ensuring that the government has the potential to benefit from AI-enabled cyber defense will be critical. The Executive Order also directs the development of a National Security Memorandum to explore the governance of AI for national security, intelligence and military missions, as well as to defend against potential use of AI by adversaries.

Ensuring the government’s competence in the realm of AI is essential for setting controls that promote security and safety without unduly restricting the U.S.’s ability to lead in AI innovation. The Executive Order sets significant goals in this regard. In particular, it orders a national recruitment surge to get AI talent into government, empowered by sharing best practices in recruitment and training and increased flexibility in pay and recruitment for AI-related roles. Despite these efforts, the mission to rapidly recruit AI talent will face significant challenges. Government cannot match the salaries and conditions offered by the private sector, where competition for top talent is already fierce. In the longer term, expanded training courses and study pathways will help ensure greater access to skilled AI experts. Until then, leveraging engagement with industry, including through the Executive Order-established AI Safety and Security Board, will be key to accessing expertise despite the shortfall.

A New Era of International AI Cooperation

The Executive Order recognizes the need for greater global coordination and positions the U.S. to play a  leading role. The importance of this focus is underscored by growing international momentum for action on AI.

In the last few weeks, we’ve seen multiple initiatives gain traction: the U.K.’s AI Safety Summit gathered 28 countries in London to sign the Bletchley Declaration[13] on the urgency of monitoring AI threats and managing potential risks. The United Nations recently established a high-level Advisory Body on Artificial Intelligence[14] featuring experts from the international community, and the G7 Leaders established an AI Code of Conduct[15] during the Hiroshima AI Process. The EU has also been drafting its own AI Act[16] since 2021, and it is expected to come into effect no earlier than 2026. Importantly, the Executive Order, as well as the U.K. AI Summit, open the door for greater engagement with China on shared AI safety and security interests. As the world’s two leading AI powers, U.S.-China cooperation will be critical to ensuring a responsible global approach to AI. Yet in the context of U.S.-China competition, this engagement will not be without its challenges. The Department of State will need to work closely with its Chinese counterparts to establish channels of engagement that are resilient and robust in the face of strategic tensions.


The Executive Order outlines a tranche of objectives to simultaneously address the national security challenges and opportunities posed by burgeoning AI innovations. While a commendable first step, the effectiveness of these efforts will hinge on the details of their implementation, which agencies have been tasked to flesh out in coming months. The Executive Order’s lack of funding and enforcement measures will also limit progress in the near term. As President Biden foreshadows, ‘more action will be required.’[17]

The views and judgments in this article are those of the individual writers and do not represent the views or judgments of any current or former employers.

[1]“Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The White House, 30 Oct. 2023, www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence. 

[2] Ryan-Mosley, Tate. “The Race to Find a Better Way to Label AI.” MIT Technology Review, 28 July 2023, www.technologyreview.com/2023/07/31/1076965/the-race-to-find-a-better-way-to-label-ai.

[3] Anderljung et al. “Frontier AI Regulation: Managing Emerging Risks to Public Safety.” arXiv.org, 6 July 2023, arxiv.org/abs/2307.03718.

[4] Bommasani et al. “Decoding the White House AI Executive Order’s Achievements.” Stanford HAI, hai.stanford.edu/news/decoding-white-house-ai-executive-orders-achievements.

[5] Anderljung et al. “Frontier AI Regulation: Managing Emerging Risks to Public Safety.” arXiv.org, 6 July 2023, arxiv.org/abs/2307.03718.

[6]  Egan, Janet, and Lennart Heim. “Oversight for Frontier AI Through a Know-Your-Customer Scheme for Compute Providers.” arXiv.org, 20 Oct. 2023, arxiv.org/abs/2310.13625.

[7] “Blueprint for an AI Bill of Rights." The White House, 16 Mar. 2023, www.whitehouse.gov/ostp/ai-bill-of-rights.

[8] “AI Risk Management Framework.” NIST, 30 Mar. 2023, www.nist.gov/itl/ai-risk-management-framework.

[9]“Remarks by President Biden and Vice President Harris on the Administration’s Commitment to Advancing the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The White House, 1 Nov. 2023, www.whitehouse.gov/briefing-room/speeches-remarks/2023/10/30/remarks-by-president-biden-and-vice-president-harris-on-the-administrations-commitment-to-advancing-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.

[10] “AI Trends.” Epoch, 11 Apr. 2023, epochai.org/trends.

[11] “Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem.” The White House, 24 Jan. 2023, www.whitehouse.gov/ostp/news-updates/2023/01/24/strengthening-and-democratizing-the-u-s-artificial-intelligence-innovation-ecosystem.

[12] Zhu et al. “Insight: China Quietly Recruits Overseas Chip Talent as US Tightens Curbs.” Reuters, 24 Aug. 2023, www.reuters.com/technology/china-quietly-recruits-overseas-chip-talent-us-tightens-curbs-2023-08-24.

[13] “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023.” GOV.UK, 1 Nov. 2023, www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.

[14] "High-Level Advisory Body on Artificial Intelligence" United Nations Office of the Secretary-General’s Envoy on Technology,www.un.org/techenvoy/ai-advisory-body.

[15]“G7 Leaders’ Statement on the Hiroshima AI Process." The White House, 30 Oct. 2023, www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/g7-leaders-statement-on-the-hiroshima-ai-process.

[16] "EU AI Act: First Regulation on Artificial Intelligence." European Parliament. 6 Aug. 2023, www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. 

[17] “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” The White House, 30 Oct. 2023, www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence.

For more information on this publication: Belfer Communications Office
For Academic Citation: Egan, Janet and Diletta Milana. “Action on AI: Unpacking the Executive Order’s Security Implications and the Road Ahead .” Policy Brief, November 8, 2023.
