This is part three of a CETaS Explainer series on the national security implications of international AI regulation initiatives.

All views expressed in this CETaS Explainer are those of the author, and do not necessarily represent the views of The Alan Turing Institute or any other organisation.

This work is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original author and source are credited.

Introduction


The US Executive Order on Artificial Intelligence (AI) has already had a significant impact, promoting responsible AI adoption across the US. The publication of the National Security Memorandum on AI marks another step forward, providing a detailed vision for how the national security community should harness AI safely and ethically for its own objectives, while also setting out ambitious goals for US leadership on AI innovation.


This CETaS Explainer identifies key considerations for the global security community regarding the implementation of the US Executive Order on AI, focusing especially on the newly published National Security Memorandum on AI.

 

What is the US Executive Order on AI?


The US Executive Order on AI (published as the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence; henceforth the ‘Executive Order’) was published in October 2023.[1] It triggered key processes across the US Government to advance trustworthy AI, including the creation of new guidelines for responsible AI acquisition in government,[2] and the introduction of new requirements for AI developers to share results from safety testing with the US Government.[3] Notably, it included a promise to publish a ‘National Security Memorandum’, detailing commitments for responsible AI deployment in national security contexts.

 

What is the National Security Memorandum on AI?


The National Security Memorandum on AI (henceforth the ‘Memorandum’) was published in October 2024. It is the first document of its kind, setting out the commitments of the US national security community to uphold democratic values through its use of AI. It also looks at how the US should approach competition with adversaries on AI and how national security agencies can become more effective at harnessing cutting-edge AI.

 

What are the key provisions of the Memorandum?


The Memorandum covers a wide range of topics, focusing on three core aims:

  1. Ensure the US leads the world in developing safe AI.

  2. Enable the US to harness cutting-edge AI, while protecting human rights and democratic values.

  3. Advance international consensus on AI governance.[4]

It is accompanied by a Framework to Advance AI Governance and Risk Management in National Security (henceforth the ‘Framework’).[5] The Framework is designed to enable US Government agencies to use AI in a manner that upholds human rights and civil liberties.

 

What is the scope of the Memorandum?
 

  • The directives set out in the Memorandum are aimed largely at US federal agencies, ranging from the core national security departments to the US AI Safety Institute, the Department of Energy and the Department of Commerce.  

  • However, the intended audience is broader than this and includes AI companies, US allies and adversaries.[6] 

  • The Memorandum focuses on frontier AI,[7] defined as general-purpose AI systems near the cutting edge of performance. The accompanying Framework on AI risk management considers the impact of AI systems in a more context-specific way, looking beyond frontier AI.[8]

  • The Memorandum looks to the long-term future of AI. However, given its publication so close to the US election, there are significant uncertainties concerning its implementation by the next Administration. The Memorandum is based on an executive order rather than legislation passed by Congress, and the entirety of the Executive Order can be overturned by a sitting US president. 

Where does the Memorandum sit in the broader regulatory landscape?
 

  • The Memorandum is the first initiative of its kind to deal in detail with national security.

  • Several other regulatory efforts globally will have significant impacts, but do not grapple substantively with national security uses of AI:

    • The EU AI Act came into force on 1 August 2024.[9]

    • The Council of Europe Convention on AI was opened for signature on 5 September 2024.[10]

    • A UK AI Bill is forthcoming but expected to have a narrow focus.[11]

 

The National Security Memorandum


The Memorandum has three main objectives, each accompanied by specific actions that government agencies must take to achieve them. Table 1 provides a summary of the most important directives outlined in the Memorandum.

 

Table 1: Summary of the directives set out in the US National Security Memorandum on AI 

 

Core Aims of the Memorandum

Specific Directives Included in the Memorandum

  1. Ensure the US leads the world’s development of safe, secure, and trustworthy AI


Directs US Government departments to:

  • Improve the security of chip supply chains.

  • Support AI developers in keeping their innovations secure from adversaries.

  • Update practices to attract AI talent, including through streamlined visa processes.

  • Introduce simplified mechanisms for partnerships between the US AI Safety Institute and national security agencies.

  • Strengthen the National AI Research Resource, to ensure civil society, universities and small businesses have the resources they need to conduct impactful technical research.

  • Coordinate an economic assessment of US competitive advantage on AI. 

  2. Enable the US Government to harness AI, while protecting human rights and democratic values


Directs US Government departments to:

  • Publish the ‘Framework to Advance AI Governance and Risk Management in National Security’ and update it where needed to keep pace with technological change.

  • Propose streamlined procurement practices for AI, including with non-traditional vendors.

  • Review existing legal obligations and revise policies as appropriate to enable responsible use of AI.

  3. Advance international consensus and governance around AI


Directs US Government departments to:

  • Collaborate with allies to establish a responsible and rights-respecting AI governance framework.

  • Promote responsible use of AI in national security contexts on a global stage, in accordance with both the Memorandum and the Political Declaration on Responsible Military Use of AI and Autonomy.

 

Source: Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence.
 

The Framework to Advance AI Governance and Risk Management in National Security


The Framework is based on four pillars and introduces numerous commitments to ensure AI use within national security agencies is compatible with human rights and civil liberties. The measures introduced by the Framework are summarised in Table 2. 


Table 2: Summary of commitments set out in the Framework to Advance AI Governance and Risk Management in National Security
 

Pillars of the Framework

Specific Provisions of the Framework 

  1. AI use restrictions

The Framework introduces three categories of AI use case: ‘prohibited AI use cases’, ‘high-impact AI use cases’ and ‘AI use cases impacting federal personnel’.

  • ‘Prohibited AI use cases’ must not be deployed by national security agencies. Examples include use cases relating to the suppression of free speech and use cases that disadvantage individuals on the basis of protected characteristics.

  • ‘High-impact AI use cases’ are governed by additional safeguards. For example, uses of AI to track individuals in real time or determine an individual’s immigration classification. 

  • ‘AI use cases impacting federal personnel’ are also governed by additional safeguards. For example, uses of AI to make hiring decisions or determine performance reviews. 

  2. Minimum risk management practices


Specific safeguards are introduced:

  • For ‘high-impact AI’, safeguards include AI risk assessments, testing requirements, and mandatory training for AI operators.

  • For ‘AI use cases impacting federal personnel’, safeguards include consultation with the workforce and the need to notify individuals about AI use. 

  • Chief AI Officers can waive risk-management practices for specific AI applications in exceptional circumstances.

  3. Cataloguing and monitoring AI use


Agencies are required to:

  • Conduct an annual inventory of high-impact AI use cases.

  • Update data management policies and procedures to address concerns raised by AI.

  • Appoint a Chief AI Officer and establish an AI Governance Board.

  4. Training and accountability


Agencies are required to:

  • Establish standard training requirements on responsible use of AI.

  • Update policies to ensure adequate accountability for those developing and deploying AI.

  • Update whistleblower protections to clarify procedures for AI systems so that personnel who use AI have sufficient routes to report concerns around human rights and civil liberties. 


Source: Framework to Advance AI Governance and Risk Management in National Security. 

 

How do provisions for national security compare to other sectors?


Until now, global efforts to regulate AI have largely overlooked the national security sector,[12] raising concerns among civil liberties groups about potential discrepancies in human rights protections between national security organisations and the rest of the public sector.[13] 


The publication of the Memorandum by the US Government goes against this trend, to some extent bringing the national security sector in line with other departments when it comes to responsible use of AI. There is now strong alignment between the Framework, designed for national security organisations, and pre-existing guidance for responsible AI across the whole US Government, produced by the Office of Management and Budget.[14]


Nevertheless, some concerns have been raised about gaps in this national security Framework: 

  1. Waivers: The Memorandum and Framework both propose waiver mechanisms whereby risk management practices can be skipped in exceptional circumstances. These have been critiqued by experts concerned that waivers will be overused.[15] That said, the Framework does contain a transparency commitment requiring organisations to publish an unclassified report of the total number of waivers and how many are currently active.[16]

  2. Lack of external oversight: The Memorandum has been criticised for an overreliance on internal oversight mechanisms, in particular the Chief AI Officers and AI Governance Boards that are set to be introduced,[17] rather than independent oversight or transparency mechanisms.[18]

 

National Security Implications


If the Memorandum is implemented in full, its implications for US national security will be widespread. Irrespective of the next US administration, the Memorandum and accompanying Framework offer numerous lessons for the global security community as countries around the world continue to grapple with the challenges AI brings. 

 

Guardrails for AI with global relevance


While the Memorandum itself sets in motion many important processes across the US Government, it is the Framework that provides especially relevant lessons for the global security community. 


The US commitment not only to introduce risk assessments and mandatory testing across the board for ‘high-impact AI’, but also to consult on ‘AI use cases impacting federal personnel’ is notable. And, perhaps most significantly, the designation of ‘prohibited’ use cases is a step forward. Previous attempts to set out prohibited AI use cases, for example within the EU AI Act, have exempted national security altogether,[19] making this step from the US national security community unprecedented.


Furthermore, the internal oversight structures proposed in the Framework are as relevant internationally as they are in the US: Chief AI Officers and AI Governance Boards might prove effective in spotting AI risks early on.

 

Call for global action


The audience for this Memorandum is global. For allies, it serves as a call to cooperate on AI governance, encouraging collaborative initiatives that build on the work of the UN, G7, the UK AI Safety Summit and the recent AI Seoul Summit. Conversely, for adversaries, it clearly signals the US’s intention to take the lead on AI and outlines its approach to doing so, seeking to show that the US can set bold targets for innovation and competitiveness without compromising on responsible innovation.[20]


However, some critics argue that the Memorandum has overlooked a key opportunity by not placing sufficient emphasis on the US’s role in developing AI ecosystems in other countries, particularly in the Global South, which they see as essential for advancing national security objectives.[21]

 

Uncertain implications for the future


Despite the ambition of the Memorandum, its implications are uncertain. Given its publication in the lead-up to the US election, most of the directives it sets out will remain incomplete when a new President takes office. And, while there are some policies that would likely be continued under either candidate, such as those focused on maintaining a competitive edge over China,[22] many of the ambitious provisions may well be scrapped. More time is therefore needed to fully evaluate the extent to which this Memorandum will shape the global discourse on AI and its national security implications.

 

Conclusion


If this Memorandum is implemented in the US, there will soon be many more lessons to learn as government agencies begin to deliver on the tasks they have been set. In any case, the ambitious policies it sets out can still be built upon by governments globally as they continue to consider how national security intersects with their AI regulation strategies. This may signal a period of more proactive thinking globally about how the use of AI in national security is governed, which should be a positive development for consensus-building and public trust in AI systems. 


 

References

[1] The White House, “Fact sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” 30 October 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

 

[2] The White House, “Fact sheet: OMB Issues Guidance to Advance the Responsible Acquisition of AI in Government,” 3 October 2024, https://www.whitehouse.gov/omb/briefing-room/2024/10/03/fact-sheet-omb-issues-guidance-to-advance-the-responsible-acquisition-of-ai-in-government/.

 

[3] The White House, “Fact sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” 30 October 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

 

[4] The White House, “Fact Sheet: Biden-Harris Administration Outlines Coordinated Approach to Harness Power of AI for US National Security,” 24 October 2024, https://www.whitehouse.gov/briefing-room/statements-releases/2024/10/24/fact-sheet-biden-harris-administration-outlines-coordinated-approach-to-harness-power-of-ai-for-u-s-national-security/.

 

[5] The White House, “Framework to Advance AI Governance and Risk Management in National Security,” 24 October 2024, https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf.

 

[6] Gregory C. Allen and Isaac Goldston, “The Biden Administration’s National Security Memorandum on AI Explained,” CSIS Analysis, 25 October 2024, https://www.csis.org/analysis/biden-administrations-national-security-memorandum-ai-explained. 

 

[7] Gregory C. Allen and Isaac Goldston, “The Biden Administration’s National Security Memorandum on AI Explained,” CSIS Analysis, 25 October 2024, https://www.csis.org/analysis/biden-administrations-national-security-memorandum-ai-explained.

 

[8] The White House, “Framework to Advance AI Governance and Risk Management in National Security,” 24 October 2024, https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf.

 

[9] European Commission, “AI Act,” https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

 

[10] Rosamund Powell, “The Council of Europe Convention on AI: National Security Implications,” CETaS Explainers (September 2024), https://cetas.turing.ac.uk/publications/council-europe-convention-ai-national-security-implications.

 

[11] Jakob Mokander et al., “Getting the UK’s Legislative Strategy for AI Right,” Tony Blair Institute for Global Change Paper, 12 September 2024, https://institute.global/insights/tech-and-digitalisation/getting-the-uks-legislative-strategy-for-ai-right.

 

[12] Rosamund Powell, “The Council of Europe Convention on AI: National Security Implications,” CETaS Explainers (September 2024), https://cetas.turing.ac.uk/publications/council-europe-convention-ai-national-security-implications; Rosamund Powell, “The EU AI Act: National Security Implications,” CETaS Explainers (August 2024), https://cetas.turing.ac.uk/publications/eu-ai-act-national-security-implications.

 

[13] Access Now, “Joint statement – A dangerous precedent: how the EU AI Act fails migrants and people on the move,” 13 March 2024, https://www.accessnow.org/press-release/joint-statement-ai-act-fails-migrants-and-people-on-the-move/; European Disability Forum, “EU’s AI Act fails to set gold standard for human rights,” 3 April 2024, https://www.edf-feph.org/publications/eus-ai-act-fails-to-set-gold-standard-for-human-rights/; European Center for Not-for-Profit Law, “Council of Europe approves AI Convention, but not many reasons to celebrate,” EDRi, 10 July 2024, https://edri.org/our-work/council-of-europe-approves-ai-convention-but-not-many-reasons-to-celebrate/.

 

[14] The White House, “Fact sheet: OMB Issues Guidance to Advance the Responsible Acquisition of AI in Government,” 3 October 2024, https://www.whitehouse.gov/omb/briefing-room/2024/10/03/fact-sheet-omb-issues-guidance-to-advance-the-responsible-acquisition-of-ai-in-government/.

 

[15] Just Security, “The US National Security Memorandum on AI: Leading Experts Weigh In,” 25 October 2024, https://www.justsecurity.org/104242/memorandum-ai-national-security/. 

 

[16] The White House, “Framework to Advance AI Governance and Risk Management in National Security,” 24 October 2024, https://ai.gov/wp-content/uploads/2024/10/NSM-Framework-to-Advance-AI-Governance-and-Risk-Management-in-National-Security.pdf.

 

[17] Just Security, “The US National Security Memorandum on AI: Leading Experts Weigh In,” 25 October 2024, https://www.justsecurity.org/104242/memorandum-ai-national-security/.

 

[18] ACLU, “ACLU warns that Biden-Harris administration rules on AI in national security lack key protections,” ACLU Press Release, 24 October 2024, https://www.aclu.org/press-releases/aclu-warns-that-biden-harris-administration-rules-on-ai-in-national-security-lack-key-protections.

 

[20] Mohar Chatterjee and Joseph Gedeon, “New Biden policy takes a big swing at AI – and sets political traps,” Politico, 24 October 2024, https://www.politico.com/news/2024/10/24/biden-ai-policy-national-security-00185407. 

 

[21] Just Security, “The US National Security Memorandum on AI: Leading Experts Weigh In,” 25 October 2024, https://www.justsecurity.org/104242/memorandum-ai-national-security/.

 

[22] Gregory C. Allen and Isaac Goldston, “The Biden Administration’s National Security Memorandum on AI Explained,” CSIS Analysis, 25 October 2024, https://www.csis.org/analysis/biden-administrations-national-security-memorandum-ai-explained. 


Citation information

Rosamund Powell, "The US Executive Order on AI: National Security Implications," CETaS Explainers (November 2024).