Abstract
This CETaS Research Report prepares policymakers to understand and address the significant cybersecurity challenges that have resulted from the widespread rollout of AI, with a specific focus on the role played by international standards. Our findings indicate that many of the challenges associated with securing AI systems are not new and can be resolved by updating existing cybersecurity standards. However, several challenges are new and evolving rapidly, such as those relating to the security of AI supply chains and new threats which have emerged in the context of generative AI. Standards development organisations (SDOs) have begun to introduce international standards which help protect AI systems from attack, but these standards are at a nascent stage. We recommend governments redouble their efforts to support SDOs and to ensure crucial international standards are made available and accessible to those who need to implement them. We also recommend further investment in related research to advance understandings of adversarial AI, ensuring that future international standards don’t just focus on identifying vulnerabilities, but instead offer robust and specific mitigation strategies.
This work is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original authors and source are credited. The license is available at: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.
Executive Summary
This CETaS Research Report prepares policymakers to understand and address the significant cybersecurity challenges that have resulted from the widespread rollout of AI, with a specific focus on the role played by international standards.
The range of AI security risks is broad, including instances of privacy attacks on AI systems (e.g. model inversion where sensitive or personal data from training sets can be reconstructed from an AI model) and attacks which result in failed model outputs (e.g. evasion attacks leading autonomous vehicles to not recognise signs). As capabilities progress, these risks will evolve, with new vulnerabilities already emerging in generative AI models, particularly regarding prompt injection.
In response to these risks, some safeguards for secure AI have been proposed – with 2023 seeing new official guidance in both the UK and US on secure AI and AI risk management. However, despite these high-level guidance documents, there remains a lack of specific, operationally focused guidelines for securing AI.
In this context, international standards have significant potential to promote AI security best practice. International standards already play a foundational role informing cybersecurity approaches. Furthermore, they are becoming increasingly essential in AI governance, serving a critical translation function between high-level AI policies and practical implementation.
Our findings indicate that many of the challenges associated with securing AI systems are not new and can be resolved by updating existing cybersecurity standards. However, several challenges are new and evolving rapidly. These include difficulties associated with the security of AI supply chains and new threats which have emerged in the context of generative AI.
Standards development organisations (SDOs) have recognised that securing AI presents a new and important challenge and have begun to introduce international standards which help protect AI systems from attack. These standards are at a nascent stage.
We recommend governments redouble their efforts to support SDOs and to ensure crucial international standards are made available and accessible to those who need to implement them. We also recommend further investment in related research to advance understandings of adversarial AI, ensuring that future international standards don’t just focus on identifying vulnerabilities, but instead offer robust and specific mitigation strategies.
Finally, we recommend governments recognise that international standards will not resolve all of their AI governance challenges but must be accompanied by adjacent efforts such as upskilling initiatives and more agile technical solutions. Without these combined efforts, AI systems’ vulnerabilities to attack will likely increase further, preventing UK society from being able to fully and safely harness the opportunities these technologies bring.
Summary Recommendations
For international standards to meet AI security needs, SDOs, governments, industry, and academia must work together. We make recommendations to each group, focusing on five objectives:
- Set a clear roadmap for future international standards on AI security. We recommend resources within SDOs are prioritised to focus initially on expanding the scope of existing AI terminology standards, on creating new process standards, and on introducing AI security threat mapping standards. Subsequently, attention should turn to mitigation techniques, measurement standards and sector-specific standards. Where possible, existing standards should be updated rather than starting from scratch, and more time should be dedicated to coordination across SDOs to avoid duplication and to maximise alignment in terminology and frame of reference between standards.
- Improve fundamental understandings of how to secure AI. Research funding should be dedicated to fundamental AI security questions. Technical research should focus on identifying new methods to protect AI systems from attack. Social research should explore the human factors preventing AI practitioners from implementing security-aware best practices and aim to improve understandings of who will be most impacted by failures to address these challenges.
- Foster a responsive standardisation ecosystem that is better equipped to tackle the challenges AI brings. We recommend national governments are proactive in international standardisation. Interventions should include government-backed horizon scanning to identify key trends early on and government funding for civil society and small and medium size enterprise (SME) participation in standardisation.
- Introduce incentives to encourage uptake of international cybersecurity and AI standards. Existing cybersecurity incentives should be expanded, with increased focus on accountability. In expanding incentives, we recommend greater integration of UK incentives with international standards. For instance, the UK National Cyber Security Centre (NCSC) should consider introducing an AI-specific tier within the Cyber Essentials accreditation scheme. This new certification tier should be informed by international standards.
- Develop guidelines on AI security, which bring international standards together with analogous resources. We recommend the NCSC works with the Department for Science, Innovation and Technology (DSIT) to produce guidelines on AI security that integrate international standards with national policies, technical solutions, and industry cybersecurity guidelines. Ideally, these guidelines should be produced in collaboration with international partners.
Glossary
Definitions:
AI: AI is defined in line with the OECD as any ‘machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.’
Cybersecurity: In line with the NCSC, we define cybersecurity simply as the means through which individuals and organisations reduce the risk of cyber-attack.[2]
AI Security: We define AI security as the process of managing the design, implementation and operation of AI models, systems, and data throughout their lifecycle, to reduce the risk of harm either from deliberate, unwanted, hostile or malicious acts, or failures to act.
Standards: Documents that set repeatable voluntary guidelines for how things should be done (as defined by the AI Standards Hub). Any such reference document may be considered a standard, but our research is focused on those created by standards development organisations (SDOs).
Standards Development Organisation (SDO): SDOs are recognised bodies that develop and publish standards. They are composed of technical committees that oversee the development of standards within a given remit, via procedures designed to achieve consensus among committee members (as defined by NIST).
CIA Triad: The confidentiality, integrity, and availability triad is a commonly used model to determine cyber threats and cybersecurity best practices.
Abbreviations:
ETSI: European Telecommunications Standards Institute
CEN: European Committee for Standardisation
CENELEC: European Committee for Electrotechnical Standardisation
ISO: International Organisation for Standardisation
IEC: International Electrotechnical Commission
NIST: National Institute of Standards and Technology
OWASP: Open Worldwide Application Security Project
BSI: British Standards Institute
NPL: National Physical Laboratory
Introduction
Rapid and widespread adoption of AI, and of large language models (LLMs) in particular, has caused concern in the academic community that we have entered into a ‘cybersecurity crisis of artificial intelligence’.[5] And, while much attention has been paid to the threat of malicious actors using AI to boost their cyberattack capabilities,[6] insufficient emphasis has so far been placed on attacks that target AI systems themselves. These cyberattacks can be associated with a range of harms globally, from the disruption of people’s access to critical services, to the economic impact on businesses who have been targeted, and the harms caused to people whose personal data may have been impacted.
Those working in the field of ‘adversarial AI’ have studied this threat in detail, cataloguing the vulnerabilities of AI systems to a range of attacks – from evasion and poisoning to privacy and abuse attacks.[7] Given the scale and urgency of the threat, especially in high-risk sectors such as defence and critical infrastructure, governments have been called on to implement stricter measures, for example through mandating compliance with robust codes of practice on AI security.[8] However, developing codes of practice that are sufficiently robust, while also being usable by developers, is a significant challenge.
Near-term solutions are likely to come from international SDOs who already play a critical role in operationalising high level AI policies in the form of implementable technical and sociotechnical guidance.[9] For decades, international standards have provided consensus-driven best practice guidance on topics from information security to technological resilience.[10]
SDOs have already released crucial standards on AI, for example specifying how to create an AI management system,[11] and defining key terminology such as AI bias and reliability.[12] Further international standards covering AI security explicitly will be released in the coming months and years as SDOs, including ETSI, CEN/CENELEC and ISO/IEC, have identified this as a priority.[13]
But relying on international standards alone for the security of AI is not a dependable strategy. This is because:
- Significant gaps remain in the international standards landscape. For example, there is minimal coverage of generative AI, and of specific attacks (e.g. model inversion).
- The highly lengthy and resource-demanding process of standards development prevents SDOs from keeping pace with cutting-edge techniques to defend AI against attack.
- Concern about the domination of industry in standards-setting is widespread. Trust in standards must be bolstered to ensure they can be relied on for AI security needs.
- Even when standards are available, they are voluntary. Current UK cybersecurity incentives do not sufficiently cover AI, nor do they introduce sufficiently robust lines of accountability.
- More must be done to integrate international standards with more agile resources (e.g. national policies, academic and industry approaches) as standards alone will not meet everyone’s needs.
This report addresses these obstacles to maximise the efficacy of international standards for AI security. We do so by answering the following research questions:
- Why is AI security such a hard problem for SDOs to tackle? (Section 1)
- How have SDOs addressed the challenge so far and what threats should they tackle next? (Sections 2 & 3)
- How might implementation of international standards be improved, both through changes to how standards are designed and how uptake is incentivised? (Sections 4 & 5)
- How can developers, procurement teams, product owners and others use international standards alongside other AI security techniques to implement a whole-lifecycle approach to securing AI? (Section 6)
International standards stand to play a critical role in minimising harms from cyberattacks on AI systems, and more must be done to ensure their widespread implementation. However, it will also be essential to integrate international standards with agile technical solutions that can be updated as and when new threats to AI systems emerge, and to complement these efforts with upskilling initiatives that provide teams with the skills they need to implement standards effectively.
Research methodology and limitations
Data collection for this study was conducted over a four-month period from October 2023 – January 2024 including four core research activities:
- Literature review covering academic and policy literature on topics such as AI governance, cybersecurity, and adversarial AI.
- Standards mapping to assess current coverage of AI security concerns by SDOs, focusing on existing standards across ISO/IEC, ETSI, CEN/CENELEC, and ITU.
- Semi-structured interviews with 31 participants across government, SDOs, industry and academia.
- Research workshop attended by more than 30 experts on international AI security standards.
The scope of this report is limited to considerations around the cybersecurity of AI systems (i.e. protecting the AI models, systems and data). AI-enabled cyberattacks and AI agents for cyber-defence are not addressed. It is beyond our scope to provide an exhaustive mapping of individual international standards. We do not cover national standards in depth. Finally, bodies producing standards for internal or private use, such as companies and military agencies whose protocols for AI security may be proprietary or confidential, are out of scope.
AI technologies bring new challenges for cybersecurity experts for a number of reasons. For example, the probabilistic nature of many AI systems leads to inherent non-determinism which can be exploited by malicious actors.[14] Furthermore, AI system performance is intrinsically dependent on both training data and input data, meaning it is possible to manipulate the behaviour of the system by indirectly manipulating properties of training or input data.[15] This section explores how novel properties and capabilities of AI systems necessitate new definitions and new methods from those deployed in other cybersecurity domains.
1. Why is Securing AI Such a Challenge?
AI technologies bring new challenges for cybersecurity experts for a number of reasons. For example, the probabilistic nature of many AI systems leads to inherent non-determinism which can be exploited by malicious actors. Furthermore, AI system performance is intrinsically dependent on both training data and input data, meaning it is possible to manipulate the behaviour of the system by indirectly manipulating properties of training or input data. This section explores how novel properties and capabilities of AI systems necessitate new definitions and new methods from those deployed in other cybersecurity domains.
1.1 Defining the scope of AI security
A clear definition of AI security is essential to support the consensus-driven process of standards making.[16] Establishing such a definition is no easy task. The term AI security is already being used in several ways, with different implications for the breadth of our report. AI security can mean anything from:
- A subset of cybersecurity, focusing on vulnerabilities to adversarial attack present within machine learning (ML) systems,[17] or
- A topic extending beyond cybersecurity to overlap with AI safety.[18]
Depending on context, the latter definition can be particularly relevant to cyber—physical systems (CPS) where AI interacts with the control of a physical entity.[19] However, it can also be relevant to harms associated with malicious uses of generative AI, for instance in creating disinformation.
Throughout research engagements, participants frequently preferred the use of narrower definitions. But, while favouring narrower definitions to specifically highlight adversarial threats, it is necessary to acknowledge that AI security is still a sociotechnical rather than purely technical challenge. It is therefore important to look beyond just technical controls to the physical, personnel, and process aspects of security, as well as considering the security of the information employed in AI processes.
Bearing these conflicting priorities in mind, we define AI security throughout as:
‘Managing the design, implementation and operation of AI models, systems, and data throughout their lifecycle, to reduce the risk of harm either from unwanted, hostile or malicious acts, or failures to act by accountable actors (e.g. developers, deployers, procurers, end users).’
1.2 How does AI security differ from cybersecurity?
There is agreement in the academic literature that a significant overlap exists between traditional cybersecurity and the security of AI.[20] The common ground typically relates to security of the computational infrastructure on which AI systems operate, and the protection of models and data. However, experts disagree about the extent to which AI security is new.[21] Below, we explore four ways in which AI security significantly diverges from cybersecurity.
New goals for secure AI
We can no longer rely upon the conventional security goals of confidentiality, integrity and availability (CIA triad) as the sole basis for securing AI applications. For example, integrity addresses the completeness of information not its authenticity, i.e. the difference between completeness and validity. Likewise, there is a difference between availability and utility, the former concerning usability and the latter usefulness.
However, there is still disagreement as to the specific properties that should characterise secure AI. Such disagreement reopens the debate about AI system characteristics such as trustworthiness, dependability, resilience, and robustness, raising questions such as: ‘Should robustness come under security?’ and ‘Is resilience a better characterisation?’[22]
The specific characteristics of secure AI can also differ depending on context. For instance, in discussions around NIST’s AI risk management framework, the goal of ‘secure and resilient’ AI has been broken down simply into the sub-goals of confidentiality, integrity, availability, and security-by-design.[23] In contrast, the properties of secure AI in the context of CPS have been broken down to focus on the more complex goals of confidentiality, integrity, availability, authenticity, safety, resilience, utility, interoperability, and control.[24] Standardisation will be dependent on broad agreement on whether we must look beyond the CIA triad, and if so which additional characteristics are critical for secure AI.
New modes of attack
Some definitions of AI security focus not on defining the properties of secure AI, but instead on cataloguing new ways AI systems can be attacked. Examples include open-source vulnerabilities, supply chain attacks, data vulnerabilities, data poisoning and prompt injection.[25] Some have categorised these new modes of attack according to whether they affect generative or predictive AI systems.[26] Attackers regularly find new ways to undermine AI security, creating difficulties for SDOs wishing to cover the full spectrum of attacks.
New secure-by-design methods to implement
The opacity of complex AI systems makes it more challenging to identify if, and how, secure inputs guarantee secure outputs. Secure-by-design remains a critical principle but must be updated to account for new complex AI systems.
Many approaches to AI security focus on linking solutions to specific stages of the AI lifecycle. For example, NCSC’s principles for the security of machine learning encourage design for security.[27] This can require an in-depth understanding of the relationship represented by input data and the desired behaviours or performance of the wider system. NCSC and the National Protective Security Authority (NPSA) both recommend that organisations developing innovative technologies should manage their assets and should also secure their infrastructure and supply chains.[28] For an AI system, these assets and supply chains will differ from other software.
New governance challenges
Traditional security awareness practices are insufficiently embedded within AI development communities – often due to a widespread culture of rapid innovation within the developer community. One industry respondent suggested that ‘AI practitioners don’t hold themselves to the same rigour as engineers in more regulated, safety crucial industries.’[29]Additionally, because of the pace and scale of AI adoption, particularly generative AI, rapidly emerging security challenges are already causing real-world harm.[30] This poses significant challenges for both developers and security professionals.
2. The Role of AI Security Standards
Initial progress towards securing AI has already been made thanks to international standards. However, this work is nascent and remains fragmented, with distinct SDO committees each focusing on differing aspects of AI and cybersecurity.
2.1 Background on standardisation
What is an SDO, and which SDOs are leading work on AI standardisation?
SDOs are recognised bodies, composed of technical committees, that develop and publish standards, following procedures designed to achieve consensus among committee members.[31]
There are national, regional, and international SDOs. Key international players in AI standardisation include ISO, IEC, IEEE, and, increasingly, ITU.[32] At the European level, the main standards bodies working on AI are the European Standards Organisations (ESOs) CEN, CENELEC, and ETSI.[33] While ETSI is technically a European SDO, its membership model allows its standards to have global reach. The likely division of responsibility between the ESOs, as signalled in the European Commission’s draft standardisation request, will involve the bulk of European AI standards coming from CEN-CENELEC, with some specific AI security work from ETSI.[34]
National standards bodies (NSBs) send delegates to represent them at international SDOs, while at the same time developing national standards and adopting international standards for national use. In the UK, the British Standards Institution (BSI) is leading AI standardisation. The UK’s AI Standards Hub, a partnership between the Alan Turing Institute, BSI, and the National Physical Laboratory (NPL) is further advancing the role of international AI standards via research, stakeholder engagement and capacity-building initiatives.
Many of these SDOs have a committee focused on developing AI standards:
- ISO/IEC JTC 1/SC 42: AI subcommittee of the joint ISO/IEC technical committee on information technology[35]
- BSI ART/1: ‘mirrors’ the work of SC 42 in addition to producing its own standards[36]
- BSI ART/1: ‘mirrors’ the work of SC 42 in addition to producing its own standards[36]
- IEEE AI Standards Committee[37]
- CEN-CENELEC JTC 21: joint committee on AI[38]
- ETSI’s Securing AI: works exclusively on security issues[39]
In addition to AI-focused committees, these SDOs have workstreams on fundamental topics related to AI security, such as cybersecurity, information security, and hardware integrity.
In some cases, key AI standardisation activities are being led by bodies that are not formally SDOs. In the US, for example, ANSI serves as the national standards body, but NIST[40] has published a variety of technical reference materials and other forms of guidance on the use of AI. [41] These are not formally considered standards but are often treated as though they were by governments, industry, and academia. Other bodies that are not SDOs but produce ‘standard-like’ forms of technical guidance on AI include not-for-profit organisations like OWASP.[42]
How are standards developed?
Members of a technical committee work to achieve consensus on a standard’s content, structure, and language. In general, the process involves: proposing and approving a topic for a new standard; drafting the standard; discussing and revising the draft; and in some cases making the standard available for public comment. Once published, standards are periodically updated to reflect scientific and technical advancements.[43] Before publication, committee members must vote to approve a draft standard. At ISO, there must be ‘general agreement’ among the committee members as to the content of the standard, with ‘no sustained opposition to substantial issues’.[44]
SDOs permit a range of stakeholders to participate in standards development and are generally encouraging of multistakeholder involvement.[45] In practice, committees are often dominated by industry,[46] although this varies a lot across SDOs (for example, committees at the ITU have much more government presence).
How are standards used?
Anybody involved in the design, development, deployment, assessment or use of an AI system might use an AI standard. Some standards bodies (ISO/IEC, IEEE, CEN-CENELEC, BSI) make standards available for purchase, while others (ITU, ETSI) make standards free to access. Standards are non-binding and voluntary.
Governments can play an important role in shaping how standards are used. While compliance with particular standards is voluntary, in some jurisdictions, mechanisms exist to strongly incentivise the use of standards, or else to ‘harmonise’ particular standards with regulatory requirements, making adopting those standards the easiest way to comply with the relevant regulation.[47] Use of these mechanisms to incentivise adoption of specific standards is considered in many jurisdictions to be an important component of public policy, especially where environmental, public health, or safety interests are at play.[48]
2.2 What are standards and why are they helpful?
Standards are documents that set detailed voluntary guidelines for how things should be done: how sizes and quantities should be measured, how tools and devices should be constructed, how processes should be undertaken and overseen.[49] Any such reference document may be considered a standard, but our research is focused on those created by SDOs, as these are agreed through expert-led, consensus-driven processes.[50]
In general, SDO standards are seen as industry tools: by setting common rules and expectations for products and services, standards can lower trade barriers, create market efficiencies, and increase reliability and consumer trust.[51] SDO standards bring several advantages (see Table 1).
Table 1: Benefits of international standards
Stakeholder type | Benefits |
Consumers |
|
Industry |
|
Government |
|
Different types of standards serve different functions. For the purposes of this research, four types of standards (adapted from the AI Standards Hub) will be particularly relevant.[52] The purpose of each is illustrated in Figure 1.[53]
Figure 1: Different types of standards and their functions (adapted from the AI Standards Hub)
There is a growing consensus among AI policy experts that SDO standards will be important tools for operationalising complex AI governance objectives.[54] Where regulators may lack the requisite expertise to set detailed guidelines for creating ‘safe’, ‘fair’, or ‘secure’ AI systems, they can point to standards that have been developed by committees of technical AI experts instead. Already, governments have set their sights on key standards bodies to begin this work. In 2022, the European Commission issued a request to the European Standards Organisations CEN and CENELEC to develop standards ‘in support of safe and trustworthy artificial intelligence’, to underpin key requirements of the forthcoming EU AI Act.[55] In the US, meanwhile, NIST has been at the forefront of setting technical frameworks for national AI policy and has received instructions from both Congress[56] and the President[57] to develop such resources.
2.3 Progress towards standards for secure AI
Despite the nascency of ‘AI security’ as a topic for standardisation, with the first official SDO technical committee covering AI security only launched at the end of 2023,[58] several relevant international standards already exist. We do not provide a detailed mapping of existing standards, instead highlighting some of the most important standards.
Cybersecurity standards
Cybersecurity standardisation has been ongoing for decades and is relatively mature. While many of the challenges of securing AI will be new, traditional cybersecurity practice can provide useful groundwork for those wishing to secure AI systems.[59] Key standards include:
- ISO 27000 series which provides guidance on information security management.[60] This series defines a set of common terms and processes for organisations to secure the data they own and handle, promoting long-term cyber-resilience. These standards can support AI security needs by ensuring that, at a baseline, the information used to train an AI system and the information produced by an AI system are secured.
- ISO/IEC 29147:2018 on vulnerability disclosure in the field of information security outlines key requirements for organisations seeking to investigate, disclose, and remedy security vulnerabilities in their IT systems.[61]
- ETSI TC Cyber is also working to produce relevant cybersecurity standards. For example, ETSI TS 103 485 provides guidance on privacy assurance while ETSI TS 103 458 covers attribute-based encryption.[62]
- NIST cybersecurity resources[63] can also be relevant, including their computer security publication series, SP 800;[64] their Cybersecurity Framework (CSF) (last updated in February 2024);[65] and their cybersecurity Risk Management Framework (RMF), published as SP 800-37.[66]
AI standards
Increasingly, SDOs are producing standards on responsible and trustworthy AI. Much of this work has focused on defining key concepts, addressing broader characteristics like ‘ethics’, ‘trustworthiness’, and ‘transparency’, and providing overarching risk management guidance.
Within SC 42, the ISO/IEC AI committee, key foundational and terminology standards for AI have already been agreed, helping to establish a shared vocabulary and conceptual framework for future standards to build on:
- ISO/IEC 22989:2022 on AI concepts and terminology.[67]
- ISO/IEC 23053:2022, a ‘Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)’ provides a more detailed overview of machine learning tasks, as well as the development pipeline.[68] As both were published in 2022, there is limited coverage of more recent developments (e.g. transformer architectures which underpin LLMs).
More recently, two vital process and management AI standards have been published. These standards build on existing ISO management system and risk management standards, including ISO 9001:2015 and ISO 31000:2018.
ISO/IEC 42001:2023 provides requirements for implementing an AI ‘management system’ within organisations that provide or use AI systems.[69] The standard defines organisational responsibilities and is centred around three key activities: risk assessment, risk treatment, and system impact assessment.
- ISO/IEC 23894:2023 focuses on ‘risk management’ for AI systems, pointing to key sources of risk (data, personnel) and explaining how to address risks once identified.[70] Achieving system-level security is considered to be an ‘objective’ of ISO/IEC 23894, which refers to the 27000 series for a definition of information security risk management and to ISO/IEC TR 24028:2020 (Information technology — Artificial Intelligence — Overview of trustworthiness in artificial intelligence)[71] for a taxonomy of key AI-specific security vulnerabilities.[72]
Other important resources relating to AI standardisation include:
NIST’s AI Risk Management Framework (RMF) 1.0 which provides instructions for organisations to identify (‘map’), assess (‘measure’), and manage risks posed by their AI systems and includes guidance on withdrawing systems in case of unacceptable risks. The AI RMF requires risks to be identified separately according to each of seven trustworthiness characteristics of AI systems, one of which is ‘security and resilience’,[73] with trade-offs between the categories to be assessed in order to create a holistic risk management plan.
- The IEEE 7000 series for AI Ethics and Governance, released between 2020 and 2022.[74] These standards cover topics like ethical design (IEEE 7000-2021), transparency (IEEE 7001-2021), and data privacy (IEEE 7002-2022). While they do not specifically tackle AI cybersecurity concerns – except insofar as data privacy and data governance (IEEE 7005-2021) practices contribute to overall security – these standards form part of a growing body of international standards that support responsible AI.
AI security standards
Standards focused explicitly on security of AI remain somewhat rare. An important exception to this is the work of the ETSI Technical Committee (TC) Securing AI (SAI), which is developing a series of technical reference materials to address various AI security aspects.[75]
Before becoming a TC, ETSI’s ‘industry specification group’ (ISG) SAI published several ‘group reports’ (GRs) covering a variety of topics within AI security:[76]
- ETSI GR SAI 001: AI Threat Ontology[77]
- ETSI GR SAI 002: Data Supply Chain Security[78]
- ETSI GR SAI 004: Problem Statement[79]
- ETSI GR SAI 005: Mitigation Strategy Report[80]
- ETSI GR SAI 006: The role of hardware in security of AI[81]
- ETSI GR SAI 007: Explicability and transparency of AI processing[82]
- ETSI GR SAI 009: Artificial Intelligence Computing Platform Security Framework[83]
- ETSI GR SAI 011: Automated Manipulation of Multimedia Identity Representations[84]
- ETSI GR SAI 013: Proof of Concepts Framework[85].
These GRs provide informative content on key terms, concepts, and approaches in AI security.[86] It will be crucial to raise awareness of this work among AI practitioners, as many perceive these topics to be unaddressed by international standards.
Since becoming a TC, SAI has published ‘ETSI TR SAI 104: Collaborative Artificial Intelligence’, a technical report which provides an overview of AI security and performance issues that may stem from interaction or collaboration between AI systems, their stakeholders, and each other.[87]
Beyond ETSI, ISO/IEC have released a few standards covering AI security:
ISO/IEC 27563:2023 (Security and privacy in artificial intelligence use cases — Best practices) outlines security and privacy risks stemming from the AI use cases delineated in ISO/IEC TR 24030:2021 (Information technology — Artificial intelligence — Use cases). This foundational standard provides a taxonomy of risks and controls, outlining how to begin developing a ‘security and privacy plan’. Detailed implementation guidance is not provided.[88]
ISO/IEC 24029 (Artificial intelligence — Assessment of the robustness of neural networks) series, of which two parts have already been published. ISO/IEC 24029-1:2021 (Part 1: Overview) analyses the robustness concept and defines key statistical, formal, and empirical methods of assessing the robustness of neural networks. Detailed guidance on these formal assessment methods is provided in ISO/IEC 24029-2:2023 (Part 2: Methodology for the use of formal methods), with a final standard, ISO/IEC AWI 24029-3 (Part 3: Methodology for the use of statistical methods), currently under development.[89]
- ISO/IEC TR 29119-11:2020 (Software and systems engineering: Software testing) offers guidelines on testing AI-based systems, with an updated version expected in 2024, maps specific AI testing processes to the verification and validation stages of the AI lifecycle.[90]
Two forthcoming standards from ISO/IEC are also set to be particularly relevant:
ISO/IEC CD 27090 (Cybersecurity: Artificial intelligence) is set to offer guidance on identifying and mitigating security threats throughout the AI lifecycle.[91]
- ISO/IEC WD 27091.2 (Cybersecurity and Privacy: Artificial intelligence) is set to help organisations identify privacy risks throughout the AI lifecycle and to treat these risks.[92]
So far, the SDOs producing the most relevant work for those interested in securing AI have been ISO/IEC and ETSI. The work of these SDOs is likely to remain relevant, as many of the key AI standards they are producing will underpin requirements of the forthcoming EU AI Act. CEN-CENELEC, which has been tasked with standards drafting around the EU AI Act, has adopted several ISO AI standards with plans to adopt more,[93] while ETSI’s work will increase in relevance following the Securing AI Group’s elevation from an ‘industry specification group’ to an official ‘technical committee’.
Even as the field of AI standardisation is relatively immature, with considerable work anticipated over the next several years, relevant standards straddling two key areas of interest – cybersecurity and AI – can support stakeholders to secure their AI systems today. A key problem so far has been raising awareness of these standards, which is made harder by their fragmentation across multiple SDO committees and by the cost of many of these standards (excluding those released by ETSI and NIST which are free to access).
3. A Roadmap for Future AI Security Standards
Despite the progress made by SDOs so far, key gaps remain. This section offers a high-level overview of significant standardisation gaps and priorities for future standards development.
3.1 Key gaps in AI security standards
To identify standardisation gaps for AI security, we asked experts to classify perceived gaps according to four major types of international standard. The results of this exercise (shown in Figure 2) demonstrate the need to increase awareness around existing standards at the same time as filling gaps, as several topics cited as future priorities have in fact already been standardised (for example, terminology standards for AI).
Figure 2: Perceived gaps in international standards for AI security.
For more granular analyses of these standards gaps, see the OWASP AI exchange[94] (especially for analysis of ISO standards) and ENISA[95] (especially for analysis of the EU AI Act).
3.2 Priority topics for future standards
SDOs are under pressure, given demand for standards across such a range of topics,[96] and given tight deadlines set by political commitments, particularly the EU AI act and US Executive Order. Consequently, it will be essential to prioritise resources, focusing on topics which are:
- Of high strategic importance (due to the likelihood and severity of impact if not addressed).
- Mature enough to standardise (due to the availability of pre-standardisation research).
- Best addressed by international SDOs, over and above alternative responsible bodies (e.g. national standards bodies, national governments, industry, academia).
Below, we prioritise topics for AI security standardisation, grouping them into five categories:
- Adapt existing standards: Topics where closely related international standards are available and can be expanded.
- Standardise now: Topics which are high priority, where pre-standardisation technical research is available and broad consensus among experts is achievable.
- Standardise in the future: Topics which are of high strategic importance, but where significant research is needed and there is widespread disagreement about best practice.
- Address through other means: Topics which are of high strategic importance, but are better addressed through government policies, or research by standards-adjacent organisations, academic researchers, or industry.
Adapt existing standards:
Participants identified four key standards as having the greatest potential to be adapted to address AI security concerns:
There are limitations associated with this approach, given the extent to which AI security differs from traditional cybersecurity,[97] given that general-purpose AI standards (e.g. ISO/IEC 42001 and ISO/IEC 23894) often only superficially address security considerations, and given the limited coverage of application security in ISO/IEC 27001.
An immediate priority should be to update terminology standards to focus more on security-specific terminology (e.g. confidentiality, integrity, availability, authenticity, safety, resilience, utility, interoperability, and control). Research participants cited this as a gap, highlighting limited awareness of work that has already been done (e.g. ISO/IEC 22989 already covers resilience and robustness).[98] |
Standardise now:
|
Standardise in the future:
|
Address through other means:
|
4. Persistent Challenges to Creating AI Security Standards
To fill these standardisation gaps, several challenges within the standards making process will need to be overcome. This section focuses on two crucial tensions to be resolved: optimising timelines and promoting multistakeholder inclusivity.
4.1 Optimising timelines for AI security standards
The creation of effective standards requires a degree of technological maturity. With a novel research field like AI, it can be challenging to develop standards that ‘meet the practical need of implementers and adopters’, creating a risk that early standardisation will lead to wasted resources from a lack of uptake.[108] Premature standards can also require regularly updates, making it challenging for businesses to keep up.[109] This creates a dilemma for SDOs who would normally ‘wait for a lot of maturity’ before standardisation, but are facing pressure to standardise now.[110]
Research engagements show that while much of the AI security field is insufficiently mature for the development of standards, some efforts can be undertaken immediately. For instance, foundational and terminology standards are already emerging, while process standards covering risk management for AI security are also planned for publication.
Further to disagreements over whether AI security is sufficiently mature to warrant standardisation, many perceive the standards-setting process itself as too slow, arguing the consensus-building requirements risk any new standards becoming quickly outdated.[111] For example, in ISO the average time from first proposal to standard publication is 3 years.[112]
Yet this perception of ‘slowness’ may misrepresent the importance of careful deliberation in standards-setting.[113] Firstly, rushing the process can risk creating standards which are not robust leading some to argue that ‘premature release without enough consultation, consensus and expertise, is the bigger problem’. [114] Second, moving too quickly could create a perceived lack of time for sustainable adoption internationally.[115]This would have knock-on effects on developers who would regularly need to redesign systems,[116] and on government procurement because systems could become ‘non-compliant very quickly’.[117]
Despite disagreement on optimal timelines for standards-setting, several suggestions were made for improving the agility of existing processes. For instance, moving away from consensus-based agreements and towards majority-based voting[118] and introducing streamlined approaches to submitting comments on draft standards.[119] Not all SDOs operate according to strict timelines, and SDOs can often release guidance more quickly if it is not classified explicitly as a standard. For instance, ETSI includes a technical specification route in 12-15 months, which can then be upgraded into a formalised standard.[120]
In some cases, rather than looking to modify the procedures of international SDOs, efforts can be supplemented by national standards, new technical specifications and other AI security research:
BSI: offers a ‘Flex’ standard which operates on a faster timescale, with new iterations of flex standards typically generated within 6 months.[121]
NIST: uploads publicly available guidelines and technical specifications.[122]
- OWASP: guidance can be published in 4 months, since it is based on open-source tools (e.g. Slack and GitHub) and anyone can get involved, including people who have limited time to commit. OWASP has also provided work free of copyright and attribution, to be used by SDOs.[123]
While domestic efforts (e.g. NIST and BSI) may be beneficial in improving agility, they should wherever possible remain aligned with the wider international community.[124] National standards can also themselves be internationalised, with ISO 27001 on information security being a key example of this.[125] Furthermore, while standards-adjacent research is useful in offering a more agile approach to AI security, SDOs must be proactive in coordinating with organisations like OWASP, for example through liaison schemes.[126]
Rather than focusing purely on making SDOs more agile, the most effective solution is to make the content of standards future-proof. This will not be easy, owing to trade-offs between specificity and longevity.[127] The more specific a standard is, ‘the easier it is to use, but the more quickly it becomes obsolete’.[128] To improve the longevity of standards, we recommend starting with high-level standards before narrowing down into specifics.[129] In doing so, SDOs must avoid the proliferation of taxonomies which vary from one another, instead ensuring consistency and longevity.[130] Where possible, standards should not focus ‘on the technology [itself]’ but rather on the ‘areas where the technology is being used’ (e.g. healthcare or defence).[131] However, this will not always be possible as certain AI techniques (e.g. large language models) are associated with specific attack vectors (e.g. prompt injection), requiring technology-specific standards.
Several issues and dilemmas outlined above could be mitigated through horizon-scanning functions being integrated into standards bodies working on AI security. By coordinating working groups across relevant SDOs, key state-of-the-art trends may be identified to inform new standards proposals.[132]
4.2 Multistakeholder inclusivity and accessibility of AI standards
SDOs often suffer from a lack of stakeholder diversity. This has been outlined as a problem in the area of privacy standards,[133] and similar problems are observed in AI security.[134]
In the experience of research participants, industry representatives tend to be the most prominent members of secure AI WGs,[135] and involvement is not evenly distributed across industry. SMEs were highlighted as struggling to shape AI security standards, due to the cost of memberships in certain standards bodies and the amount of time or resources required to devote efforts to the process.[136] Aside from SMEs, civil society organisations face similar barriers to accessing SDO WGs.[137] Commenting on the composition of SDO WGs, one interviewee acknowledged ‘it is diverse to the extent you have government, industry and academia. It is probably fair to say civil society isn’t there’.[138]
Several schemes do exist to improve diversity at SDOs. For instance, ANEC seeks to represent consumer voices in ETSI and CEN-CENELEC,[139] and the Systers programme is intended to promote the role of women and non-binary participants at the IETF.[140] BSI runs a Consumer and Public Interest Network (CPIN) to boost civil society participation and engages with organisations representing SMEs (e.g. the British Chambers of Commerce and the Federation for Small Businesses) to boost SME participation.[141]
Yet, despite these schemes, and despite detailed research addressing the lack of multistakeholder participation in standards setting,[142] there has not been consistent reporting year-on-year detailing the multi-stakeholder composition at SDOs. Further efforts should be made by SDOs to report on what ‘good’ looks like to them when it comes to multistakeholder participation so that in future, the public can obtain a better understanding of the extent to which targets are being met.
When there is a lack of inclusivity within standards-setting processes, the end products can suffer. One of the most obvious concerns is that the perceived lack of involvement by key actors will undermine uptake, which risks leading to inconsistent approaches to AI security.[143] The current dominance of major industry players could also undermine the inclusion of socio-technical perspectives.[144] Finally, these imbalances at SDOs can result in biases regarding which global regions are represented in discussions.[145]
Yet in SDO debates that were perceived as highly technical, the dominance of tech companies was felt to be justified by some interviewees, allowing resources for civil society participation to be directed towards more sociotechnical standards work.[146]
Further action should be taken to improve diversity in standardisation. Firstly, national governments could provide greater support to underrepresented groups, including SMEs, AI researchers and civil society organisations. This could include covering membership fees and raising awareness on how to participate in standards bodies.[147] Secondly, marginalised stakeholders could steer their efforts towards organisations which utilise public engagement mechanisms. The EU has launched open consultations on certain aspects of standards, such as setting a strategic roadmap,[148] while institutions like NIST and OWASP incorporate accessible communication platforms and host multistakeholder workshops to facilitate wider public involvement.[149]
Beyond diversity at the standards development stage, further issues arise when standards are published but remain expensive to access, as is the case for ISO and CEN/CENELEC standards. In accounting for the cost to buy standards, BSI note that ‘on average each standard costs £15,000’ to develop and that they do not make profits from standards development.[150] Furthermore, where standards are free to access, as is the case with ETSI standards, costs can be hidden elsewhere, for instance in the fees paid by industry to contribute to standards setting.[151] Given the challenges around making standards completely free to access without raising costs elsewhere, we recommend government prioritises support for international standards access towards SMEs and non-profits. Where possible this should be done through supporting existing efforts so as not to duplicate work unnecessarily. For example, both OWASP and the AI Standards Hub already do a lot to educate a range of stakeholders on which standards are available.[152]
5. Incentivising, Enforcing and Assessing Standards Adoption
Even when high quality standards are available, adoption is often lacking. We explore the potential for incentives, enforcement, and certification to increase standards adoption.
5.1 The two-fold challenge of standards adoption
Standards adoption is a longstanding challenge in cybersecurity. In 2022, research by the Department for Digital, Culture, Media and Sport found that awareness of cybersecurity standards across industry was low, contributing to limited uptake and certification (only 8% of businesses reported following ISO 27001, 6% adhered to Cyber Essentials, 1% to Cyber Essentials Plus, and 4% followed some form of NIST standard).[153] Despite these figures, there is a lack of broader data available on uptake of specific standards. Our research suggests that even in cases where standards are said to have been followed, their implementation can often be ineffective or insufficient.[154] Incentives for standards adoption must therefore address the numerous factors which lead to either ineffective or insufficient standards adoption (see Table 2).
Table 2: factors hindering adoption of international standards for AI security
Reasons for insufficient standards adoption | Reasons for ineffective standards adoption |
Awareness from developers of standards is insufficient.[155] | Standards implementation is often viewed as a box-ticking exercise.[156] |
The culture of AI is organised to focus on rapid innovation, not robust cybersecurity.[157] | Standards implementation is frequently outsourced.[158] |
Customers for AI don’t know how to ask suppliers for international standards implementation.[159] | Standards represent best practice (rather than strict requirements) and for SMEs can be too resource intensive.[160] |
Without enforcement, the motivation for industry to comply is insufficient. [161] | Standards shopping hinders efficacy as people find standards to suit their priorities.[162] |
5.2 UK strategy on cybersecurity incentives
Several options are available for governments wishing to incentivise cybersecurity, ranging from highly interventionist strategies such as regulation to no intervention at all (see Figure 3).[163]
Figure 3: Incentive hierarchy for standards adoption
The UK Government has long acknowledged it ‘cannot leave cybersecurity solely to the marketplace,’ and must introduce the ‘right mix of regulation and incentives.’[164] As the scale of the cyber threat has grown, so too has the Government’s willingness to intervene.
The UK’s current approach to cybersecurity incentives focuses on four pillars:
- Foundations: Providing guidance on cyber risk management.
- Capabilities: Supporting skilled professionals to implement guidance.
- Market incentives: Creating incentives for organisations to invest in cybersecurity.
- Accountability: Holding organisations accountable for managing their cyber risk.[165]
The emphasis is on education and upskilling over enforcement and regulation.[166] This is mirrored in the UK Government Cyber Security strategy,[167] and is consistent with the government’s pro-innovation AI strategy.[168]
5.3 Encouraging adoption: expanding incentives to cover international standards
Despite acknowledging that cybersecurity incentives need to be strengthened, in part due to new AI-specific security risks, cybersecurity incentives have not been consistently integrated with incentives for standards implementation, nor have they addressed AI-specific concerns.[169] Due to the pace of AI adoption and claims this has led to a ‘cybersecurity crisis’,[170] the need to expand government incentives, bringing them more in line with international standards, should be urgently considered. Our research suggests there is consensus on the need to expand each pillar from the UK’s cybersecurity incentive strategy to explicitly encourage uptake of international standards that cover AI security concerns (see Table 3).
Table 3: New incentives for AI security standards
Incentive pillar | Rationale for expansion | Recommendation for new incentives |
Foundations and capabilities |
|
|
Market incentives and accountability |
|
|
Effective incentivisation will require a context- and sector-specific approach. Different levels of intervention and different international standards will be relevant to distinct sectors. In the immediate term standards incentivisation in high-risk sectors such as security, defence, healthcare, and critical national infrastructure should be prioritised.
5.4 Enforcing adoption: the role of regulation
Regulation can go even further to incentivise adoption of international standards. But the role played by regulation in standards incentivisation can take a number of forms.
In the EU, ‘harmonised standards’ are set to play a critical role in the implementation of the EU AI act.[173] Legislation sets out high-level legal requirements for AI developers which are clarified by secondary legislation. The plan is that standards will then fill the implementation gap by specifying regulatory requirements in the form of best practice guidance.[174] The EU AI act is accompanied by a standardisation request to European SDOs with CEN/CENELEC asked to draft standards which enable organisations to demonstrate they have taken reasonable steps to comply with the Act.[175]
The EU approach comes with pros and cons. On the one hand, it helps SDOs design their workplans, and promotes a ‘good separation of work between policymakers, industry, and technical experts’.[176] However, regulation creates challenges for SDOs, with one expert noting that it will be ‘quite challenging’ to ensure all relevant standards are available by the end of the 3-year transition period.[177] There is also potential for the ‘harsh regulatory environment’ of the EU to stifle innovation,[178] and for regulation to cause EU standards to diverge from global ones.[179]
This strategy can be contrasted with the UK’s sector-specific approach to regulation, and the UK approach to ‘designated standards.’[180] Designated standards in the UK are standards recognised by the government as providing evidence that a particular product or service complies with UK law.[181] The designation process does not involve UK government sending out standardisation requests to SDOs. Instead, the focus is on identifying existing standards that closely align with UK law and then considering whether they are suitable for designation.[182] To support this approach, the UK’s AI Strategy announced the role of the AI Standards Hub to coordinate UK engagement with international standardisation and support AI stakeholders to engage with the standards ecosystem.[183]
At present, our analysis suggests there is limited potential to incentivise AI security standards through regulation in the UK. This is due to both the nascency of AI security standards and the lack of interest from UK government in AI regulation in general, at least for the present.[184] We therefore recommend UK decisionmakers focus on alternative means through which to increase accountability for security of AI (e.g. through broader regulation on cybersecurity and through alternative non-regulatory incentives on AI security), while closely tracking international regulatory approaches. The UK should closely track the EU AI act, particularly its influence on SDO workplans, as there will be knock-on impacts for companies operating in the UK, and on the direction of standardisation more broadly.
5.5 Assessing adoption: the role of certification
In cybersecurity, there has been a long-standing challenge around certification of best practice. Research has found that 32% of businesses and 29% of charities in the UK invest in some form of cybersecurity certification, with the two most common being NCSC’s Cyber Essentials Scheme and compliance with ISO 27001.[185]
Furthermore, external certification is far from a catch-all solution. Outsourcing compliance checks to external consultancies can result in a lack of understanding internally, which does little to help secure AI systems in the long-term.[186] Human factors research suggests that certification around standards such as ISO 27001 has become a ‘box ticking exercise’ with limited quality control and behaviour change from companies claiming to have implemented the standard.[187] Compliance audits need to be robust and independent, but this requires further investment in AI skills for those responsible for certification.[188]
The UK Government is in many ways already a frontrunner on cybersecurity certification thanks to the Cyber Essentials scheme. Recent updates to explicitly support SMEs working on fundamental AI research to become certified under the Cyber Essentials Plus scheme are particularly promising.[189]
Rather than starting from scratch, Cyber Essentials should be updated when relevant standards for secure AI are released. This could involve introducing an additional tier for Cyber Essentials, focusing on AI vulnerabilities explicitly.
When expanding Cyber Essentials, government should carefully consider when and how to align with international standards. Already, NCSC receive regular inquiries about the overlaps and divergences between Cyber Essentials and alternative standards for cybersecurity.[190] On the one hand, divergence between Cyber Essentials and international standards (e.g. ISO 27001) can be viewed as unhelpful, contributing to fragmentation.[191] On the other hand, such divergence can be considered necessary to allow for a more streamlined approach compared to the document-heavy ISO 27001 certification. We recommend that in expanding Cyber Essentials to cover AI explicitly, UK Government prioritises international cooperation to avoid further fragmentation, aligning with international standards on AI wherever possible while aiming to streamline the certification process.
6. Situating Standards within a Broader AI Security Toolkit
This section considers how a holistic AI assurance process, which brings international standards together with more agile AI security techniques, can help to ensure we are not overly reliant on international standards for AI security needs.
6.1 Why are alternative levers for AI security needed?
Decisionmakers involved with the EU AI Act, the US Executive Order, and to a lesser extent, the UK AI Safety Institute, have each placed significant pressure on SDOs to resolve their implementation challenges by operationalising high-level policies in the form of best practice guidance.[192] At the same time, the ‘people steeped in standards development tend to be less ambitious.’[193] They acknowledge that existing standardisation gaps will persist until robust scientific consensus is reached on how we can best secure AI systems from attack.
The urgency of the security challenge in AI, particularly in high-risk sectors, means we cannot simply wait for SDOs to fill all gaps. What is urgently needed is a holistic and whole-lifecycle AI assurance process which incorporates not just standards, but further levers for AI security such as agile technical solutions (e.g. the MITRE ATT&CK framework),[194] government policies (e.g. NCSC AI Security Guidelines),[195] and industry standards (e.g. Google’s Secure AI Framework).[196] Even when standards on AI security are mature, these related techniques for securing AI will continue to be relevant to developers, alongside standards.
Table 4 summarises the advantages and weaknesses of these alternative levers, compared to international standards.[197] Across the board, the key advantage is pace and agility, while the key weakness is the creation of a fragmented landscape containing contradictory guidance.
Table 4: Alternative levers for AI security
Alternative lever for AI security | Advantages compared to standardisation | Weaknesses compared to standardisation |
Non-regulatory guidelines developed by government e.g. NCSC AI security guidelines, NCSC principles for the security of machine learning.[198] |
|
|
Industry consortia and specific industry standards e.g. Frontier AI Forum,[199] Google Secure AI Framework[200] |
|
|
Standards adjacent research e.g. OWASP AI exchange,[201] NIST adversarial AI publications,[202] |
|
|
6.2 Integrating AI security techniques through AI assurance
Each of these levers for secure AI can be complementary to international standards. But to enable a whole-lifecycle approach to AI cybersecurity, information on standards should be collated with analogous resources.[203] AI assurance can help to achieve this goal, bringing international standards for secure AI together with agile technical solutions, national policies, academic research and industry approaches. Assurance can also help to ensure that the cybersecurity of AI systems is not considered in isolation but is weighed up alongside other important concerns, such as fairness, transparency, performance and explainability.
AI assurance is a method for evaluating and communicating reliable evidence about an AI system’s properties.[204] Recent CETaS research has proposed a new AI assurance methodology for use in the national security domain. This method centres on a system card template which, when filled out by an AI developer, compiles all relevant evidence that an AI system is ethical, legally compliant, reliable, and secure.[205] This system card template explicitly asks developers to ‘detail all available evidence that AI security has been considered throughout the project lifecycle’. It also offers examples of what appropriate evidence could look like, including certification of international standards compliance, evidence that national policies have been followed, details of adversarial testing, or references to protocols laid out by OWASP for securing AI systems.[206] This approach should be expanded to cover security concerns in more depth, bringing together an even broader range of levers for securing AI.
The UK Government approach to AI assurance, led by the Responsible Technology Adoption Unit, emphasises the central role that international standards can play, as they ‘provide a consistent baseline’ of evidence that due diligence has been done.[207] Nevertheless, they also acknowledge the need for freedom for stakeholders to determine which approach to AI trustworthiness is most appropriate in their context.[208] The same must be true for AI security.
In light of this, the UK Government should offer guidelines on AI security which collate all of these levers for AI security in one place, detailing their advantages and weaknesses. Wherever possible, these guidelines should direct users towards those frameworks for securing AI systems that have been most widely recognised. Often, this will be international standards. Nevertheless, the final decision of which combination of international standards, national policies, and agile technical solutions to use must be taken on a case-by-case basis, by those who understand the security risks of the specific AI system. We propose any such guidelines on securing AI systems should build directly on DSIT’s initial guidance for regulators ‘implementing the UK’s AI regulatory principles’ which already links to a number of AI safety-related international standards.[209]
Finally, even when all available security levers are deployed to defend an AI system from attack, residual risks will remain, both due to the unpredictability of AI systems and due to the immaturity of adversarial AI as a research field. This inevitability of residual security risks should be emphasised in any government guidance on AI security, and developers of AI systems should be encouraged to set stringent red lines, meaning that in some instances, if there is no adequate method available to secure a system, the AI system should not be deployed at all.[210]
Recommendations
Set a clear roadmap for future international standards on AI security.
Primary target: International SDOs and adjacent research organisations.
- Where possible, existing cybersecurity standards should be updated for AI rather than starting from scratch, to minimise workload and avoid fragmentation. This work should include consideration of a broader set of security goals as proposed in Section 1.
- A cross-SDO coordination function should be established through a consortium of SDO representatives to avoid duplication of work (in particular between those working on AI and those working on cybersecurity), and to avoid contradictions between standards (particularly where there are trade-offs between distinct goals such as security, privacy, explainability and performance).
- SDO resources should be targeted to topics we identify as ready for standardisation now, focusing particularly on developing a taxonomy of threats, and releasing AI-security specific process standards. SDOs should create a secure-by-design standard for AI, building on NCSC and CISA’s AI security guidelines (in addition to NPSA’s information management guidance).
- Standards-adjacent organisations, for example NIST, OWASP and NPL, should prioritise topics we identify as priorities for future standardisation, focusing on developing measurement standards and on building consensus around mitigation techniques. SDOs should coordinate closely with these groups to support them in identifying high-priority security challenges where faster processes could be utilised.
Improve fundamental understandings of how to secure AI.
Primary target: Academic researchers and research funders.
- EPSRC (the Engineering and Physical Sciences Research Council) and STFC (the Science and Technology Facilities Council) should create research grants dedicated to securing AI, addressing questions such as, ‘How can AI vulnerabilities be mapped?’ And ‘How can we prevent AI models from leaking confidential information?’ In doing so, they should encourage multi-disciplinary research methodologies.
- Beyond technical research, academic funding (for example from ESRC) should be directed towards behavioural science research examining why security campaigns so often fail to change human behaviour, with specific research into the human factors leading AI practitioners and data scientists to deprioritise security considerations.
Foster a responsive standardisation ecosystem that is better equipped to tackle AI-specific cybersecurity challenges.
Primary target: SDOs, national governments and industry representatives.
- National governments and industry representatives should encourage SDOs to expand schemes which are more agile than the traditional standards development workflow.
- Horizon scanning functions should be incorporated within SDOs to cover AI security. This horizon scanning function should focus on predicting future developments which will be relevant to SDO committees. It will be essential that SDOs coordinate globally on this horizon-scanning capability as it may be resource intensive, so is only worthwhile if it is designed to save subsequent work across SDO committees.
- SDOs should prioritise international standards which are likely to remain futureproof. Process standards which are made to be technology-agnostic are more likely to withstand rapid technological development in the AI field.
- UK Government standards representatives should identify SDOs which are of the highest strategic importance for engagements on AI security. They should consider which SDOs have expertise on AI security, have agile processes, and are accessible and diverse. For AI security, ETSI should be prioritised, but close attention should be paid to CEN/CENELEC and ISO.
- Dedicated funding should support SMEs, civil society and academics to devote the time required to be involved in standards setting. Where resources are limited, funding should focus on involving civil society in sociotechnical standards committees (e.g. on AI ethics, privacy) rather than technical AI security committees.
Introduce new incentives to encourage adoption of international standards.
Primary target: UK Government.
- NCSC should expand cybersecurity training schemes to directly target AI developers and data scientists who may be unfamiliar with the fundamentals of security.
- NCSC and the Cabinet Office should strengthen cybersecurity incentives to focus on accountability as well as education. In doing so, the UK Government should integrate existing cybersecurity policies and training schemes more closely with available international standards to avoid contradictions.
- AI procurement processes should be updated to ensure there is sufficient focus on AI security concerns and the assurance of AI systems that fulfil public administration functions. Invitations to tender should reference specific, relevant international standards (e.g. ISO 42001), and procurement teams should be upskilled on standards.
- The Cyber Essentials programme should be updated to ensure it A) is consistent with international standards and B) begins to address AI-specific concerns through a third certification track (in addition to Cyber Essentials and Cyber Essentials Plus).
Develop guidelines on AI Security which integrate insights from international standards with more agile levers for secure AI.
Primary target: UK Government.
- DSIT (working together with NCSC and NPSA) should introduce guidelines on AI security which bring international standards together with complementary AI security levers. These guidelines should include information on what national policies, international standards, agile technical solutions, and industry best practice tools can be drawn on by developers throughout the AI project lifecycle.
References
[1] “OECD AI Principles Overview,” OECD, https://oecd.ai/en/ai-principles.
[2] HM Government, “What is cyber security?” National Cyber Security Centre, https://www.ncsc.gov.uk/section/about-ncsc/what-is-cyber-security.
[3] “Standards at a glance,” AI Standards Hub, https://aistandardshub.org/resource/main-training-page-example/4-the-main-stages-of-standards-development/.
[4] “Definition of SDO,” NIST, https://csrc.nist.gov/glossary/term/SDO.
[5] Andreas Tsamados, Luciano Floridi and Mariarosaria Taddeo, “The Cybersecurity Crisis of Artificial Intelligence: Unrestrained Adoption and Natural Language-Based Attacks,” SSRN (September 2023), https://ssrn.com/abstract=4578165.
[6]HM Government, The near-term impact of AI on the cyber threat (National Cyber Security Centre: 2024), https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat#section_3.
[7] NIST, “NIST identifies types of cyberattacks that manipulate behavior of AI systems,” NIST News, 4 January 2024, https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems.
[8] Marcus Comiter, Attacking Artificial Intelligence: AI’s Security Vulnerability and what Policymakers Can Do About It (Belfer Center for Science and International Affairs: August 2019), https://www.belfercenter.org/publication/AttackingAI.
[9] Hadrien Pouget, “What will the role of standards be in AI governance?,” Ada Lovelace Institute Blog, 5 April 2023, https://www.adalovelaceinstitute.org/blog/role-of-standards-in-ai-governance/.
[10] “Cyber Security Standard: the most popular cyber security standards explained,” IT Governance, https://www.itgovernance.co.uk/cybersecurity-standards; Karen Scarfone, Dan Benigni and Tim Grance, Cyber Security Standards (NIST: 2009), https://www.nist.gov/publications/cyber-security-standards.
[11] “ISO/IEC 42001:2023,” ISO, https://www.iso.org/standard/81230.html.
[12] “ISO/IEC 22989:2022,” ISO, https://www.iso.org/standard/74296.html.
[13] “ISO/IEC JTC 1/SC 42,” ISO, https://www.iso.org/committee/6794475.html; “Technical Committee (TC) Securing Artificial Intelligence (SAI),” ETSI, https://www.etsi.org/committee/technical-committee-tc-securing-artificial-intelligence-sai; “CEN-CENELEC JTC 21,” CEN-CENELEC, https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/.
[14] Mariarosaria Taddeo et al., “Artificial Intelligence for national security: the predictability problem,” CETaS Research Reports (September 2022).
[15] HM Government, Guidelines for secure AI system development (National Cyber Security Centre: 2023), https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development.
[16] Before considering the scope of ‘AI security’, clarity is required about what is meant by security itself. This report adopts the UK Engineering Council’s definition, where security is defined as ‘the state of relative freedom from threat or harm caused by deliberate, unwanted, hostile or malicious acts’. See: www.engc.org.uk/security.
[17] Micah Musser, Adversarial Machine Learning and Cybersecurity: Risks, Challenges and Legal Implications (CSET: April 2023), https://cset.georgetown.edu/publication/adversarial-machine-learning-and-cybersecurity/.
[18] Jessica Newman, “Towards AI Security: Global Aspirations for a More Resilient Future,” CLTC White Paper (February 2019) https://cltc.berkeley.edu/publication/toward-ai-security-global-aspirations-for-a-more-resilient-future/.
[19] While this report discusses AI systems, with a particular focus on ML and neural networks, the authors are cognisant that, where an AI system forms part of a cyber-physical system, there are potential vulnerabilities in the interface between the AI system and its interactions with the CPS.
[20] ENISA, Cybersecurity of AI and Standardisation (ENISA: March 2023), https://www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation.
[21] Interview with academic expert, 1 October 2023.
[22] Interview with industry expert (1), 13 November 2023; Interview with academic expert, 14 November 2023; Interview with industry experts, 16 November 2023.
[23] Jessica Newman, “A Taxonomy of Trustworthiness for Artificial Intelligence,” CLTC White Paper Series (January 2023), 26, https://cltc.berkeley.edu/wp-content/uploads/2023/01/Taxonomy_of_AI_Trustworthiness.pdf.
[24] These goals are based on BSI PAS 1192-5, ISO 19650-5, BSI PAS 185, IET/NCSC Code of Practice covering cybersecurity in the built environment, the latest NPSA CAPSS guidance.
[25] Andrew Lohn, “Poison in the Well: Securing the Shared Resources of Machine Learning,” CSET Policy Brief (June 2021), https://cset.georgetown.edu/wp-content/uploads/CSET-Poison-in-the-Well.pdf.
[26] Apostol Vassilev et al., Adversarial Machine Learning: A Taxonomy of Terminology of Attacks and Mitigations (NIST: January 2024), https://csrc.nist.gov/pubs/ai/100/2/e2023/final.
[27] HM Government, Principles for the security of machine learning (National Cyber Security Centre: August 2022), https://www.ncsc.gov.uk/collection/machine-learning.
[28] HM Government, Principles for the security of machine learning (National Cyber Security Centre: August 2022), https://www.ncsc.gov.uk/collection/machine-learning; “Secure Innovation,” NPSA, https://www.npsa.gov.uk/secure-innovation.
[29] Interview with industry expert, 3 November 2023.
[30] Andreas Tsamados, Luciano Floridi and Mariarosaria Taddeo, “The Cybersecurity Crisis of Artificial Intelligence: Unrestrained Adoption and Natural Language-Based Attacks,” SSRN (September 2023), https://ssrn.com/abstract=4578165.
[31] “Definition of SDO,” NIST, https://csrc.nist.gov/glossary/term/SDO.
[32] Peter Cihon, Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development (University of Oxford: April 2019), https://www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf.
[33] “Standard Setting,” EU AI Act, https://artificialintelligenceact.eu/standard-setting/.
[34] “Standard Setting,” EU AI Act, https://artificialintelligenceact.eu/standard-setting/; ENISA, Cybersecurity of AI and Standardisation (ENISA: March 2023), https://www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation.
[35] ISO/IEC JTC 1/SC 42, https ://www.iso.org/committee/6794475.html.
[36] BSI, “ART/1 – Artificial Intelligence,” https://standardsdevelopment.bsigroup.com/committees/50281655.
[37] IEEE SA, “Artificial Intelligence Standards Committee,”https ://sagroups.ieee.org/ai-sc/.
[38] CEN/CENELEC, “Artificial Intelligence,”https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/.
[39] ETSI TC SAI, https://www.etsi.org/committee/technical-committee-tc-securing-artificial-intelligence-sai.
[40] NIST’s role as a key American standards body precedes its AI-related work. While not an SDO, NIST coordinates federal government policy on the use of standards and oversees key conformity assessment procedures: See NIST, “What we do,” https://www.nist.gov/standardsgov/what-we-do.
[41] NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST: January 2023).
[42] OWASP, “Project spotlight – AI security and privacy guide,” https://owasp.org/projects/spotlight/.
[43] AI Standards Hub, “Standards at a glance,” https://aistandardshub.org/resource/main-training-page-example/4-the-main-stages-of-standards-development/.
[44] “Glossary,” ISO, https://www.iso.org/glossary.html.
[45] “Opportunities to participate,” IEEE, https://standards.ieee.org/participate/; “ISO’s committee on consumer policy,” ISO, https://www.iso.org/copolco.html.
[46] Christine Galvagna, “Discussion paper: inclusive AI governance,” Ada Lovelace Institute Discussion Paper (March 2023) https://www.adalovelaceinstitute.org/report/inclusive-ai-governance/.
[47] “Standardisation policy,” European Commission, https://single-market-economy.ec.europa.eu/single-market/european-standards/standardisation-policy_en; Office of the Federal Register, “Incorporation by Reference Handbook,” (June 2023) https://www.archives.gov/files/federal-register/write/handbook/ibr.pdf.
[48] ISO, Standards and public policy: a toolkit for national standards bodies (ISO: August 2023), https://www.iso.org/files/live/sites/isoorg/files/publications/en/ISO_Public-Policy-Toolkit.pdf.
[49] “Standards at a glance: What are standards,” AI Standards Hub, https://aistandardshub.org/resource/main-training-page-example/1-what-are-standards/.
[50] Ibid.
[51] “Brokering Standards by Consensus,” ITU, July 2021, https://www.itu.int/en/mediacentre/backgrounders/Pages/standardization.aspx.
[52] For our analysis, we have grouped together ‘product testing and performance standards’ with ‘measurement standards’ as these types of standards serve similar functions when it comes to AI security, with both of these standard types still being at a particularly nascent stage for AI systems.
[53] “Standards at a glance: Different types of standards,” AI Standards Hub, https://aistandardshub.org/resource/main-training-page-example/2-different-types-of-standards/.
[54] Hadrien Pouget, “What will the role of standards be in AI governance?,” Ada Lovelace Institute Blog, 5 April 2023. https://www.adalovelaceinstitute.org/blog/role-of-standards-in-ai-governance/.
[55] European Commission, Draft standardisation request to the European Standardisation Organisations in support of safe and trustworthy AI (December 2022), https://ec.europa.eu/docsroom/documents/52376.
[56] House of Representatives, William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, Division E: National Artificial Intelligence Initiative Act’, Pub. L. No. H. R. 6395, § Division E, p.1164 (2022), https://www.congress.gov/116/crpt/hrpt617/CRPT-116hrpt617.pdf#page=1210.
[57] US Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 2023, https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.
[58] Sophia Antipolis, “ETSI’s Securing AI group becomes a technical committee to help ETSI to answer the EU AI Act,” ETSI News, 17 October 2023, https://www.etsi.org/newsroom/news/2288-etsi-s-securing-ai-group-becomes-a-technical-committee-to-help-etsi-to-answer-the-eu-ai-act.
[59] Interview with government representative 1 October 2023, Interview with academic expert, 7 November 2023.
[60] “ISO/IEC 27000 family,” ISO, https://www.iso.org/standard/iso-iec-27000-family.
[61] “ISO.IEC 29147:2018,” ISO, https://www.iso.org/standard/72311.html.
[62] “ETSI TC Cyber,” ETSI, https://www.etsi.org/technologies/cyber-security.
[63] NIST, “Cybersecurity and Privacy Program,” Extended Fact Sheet, July 2022, https://www.nist.gov/system/files/documents/2022/07/21/Extended%20Cybersecurity%20Vitals%20Fact%20Sheet.pdf.
[64] NIST, “Information Technology Laboratory,” 21 May 2018, https://www.nist.gov/itl/publications-0/nist-special-publication-800-series-general-information.
[65] NIST, Cybersecurity Framework 2.0 (February 2024), https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf.
[66] NIST, Risk Management Framework for Information Systems and Organizations (December 2018), https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-37r2.pdf.
[67] “ISO/IEC 22989:2022,” ISO, https://www.iso.org/standard/74296.html.
[68] “ISO/IEC 23053:2022,” ISO, https://www.iso.org/standard/74438.html.
[69] “ISO/IEC 42001:2023,” ISO, https://www.iso.org/standard/81230.html.
[70] “ISO/IEC 23894:2023, ISO, https://www.iso.org/standard/77304.html.
[71] “ISO/IEC TR 24028:2020,” ISO, https://www.iso.org/standard/77608.html.
[72] “ISO/IEC 23894:2023,” ISO, Annex A, https://www.iso.org/standard/77304.html.
[73] NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST: January 2023).
[74] “GET Program for AI Ethics and Governance Standards,” IEEE, https://ieeexplore.ieee.org/browse/standards/get-program/page/series.
[75] TC SAI looks at AI security in four different ways: 1. Security of AI systems themselves, 2. Security from malicious AI (or AI-enhanced) systems, 3. Security of systems with AI techniques, and 4. Broader security concerns related to the use of AI. We are most interested in their third objective. See: https://www.etsi.org/technologies/securing-artificial-intelligence.
[76] ETSI, ISG SAI Activity Report (ETSI: 2022), https://www.etsi.org/committee-activity/activity-report-sai; Sophia Antipolis, “ETSI releases three reports on securing artificial intelligence for a secure, transparent and explicable AI system,” ETSI News, 11 July 2023, https://www.etsi.org/newsroom/press-releases/2259-etsi-releases-three-reports-on-securing-artificial-intelligence-for-a-secure-transparent-and-explicable-ai-system.
[77] “ETSI GR SAI 001,” ETSI, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/001/01.01.01_60/gr_SAI001v010101p.pdf.
[78] “ETSI GR SAI 002,” ETSI, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/002/01.01.01_60/gr_SAI002v010101p.pdf.
[79] “ETSI GR SAI 004,” ETSI, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/004/01.01.01_60/gr_SAI004v010101p.pdf.
[80] “ETSI GR SAI 005,” ETSI, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/005/01.01.01_60/gr_SAI005v010101p.pdf.
[81] “ETSI GR SAI 006,” ETSI, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/006/01.01.01_60/gr_SAI006v010101p.pdf.
[82] “ETSI GR SAI 007,” ETSI, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/007/01.01.01_60/gr_SAI007v010101p.pdf.
[83] “ETSI GR SAI 009,” ETSI, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/009/01.01.01_60/gr_SAI009v010101p.pdf.
[84] “ETSI GR SAI 011,” ETSI, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/011/01.01.01_60/gr_SAI011v010101p.pdf.
[85] “ETSI GR SAI 013,” ETSI, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/013/01.01.01_60/gr_SAI013v010101p.pdf.
[86] “Types of standards,” ETSI, https://www.etsi.org/standards/types-of-standards.
[87] “ETSI TR 104 032,” ETSI, https://www.etsi.org/deliver/etsi_tr/104000_104099/104032/01.01.01_60/tr_104032v010101p.pdf.
[88] “ISO/IEC TR 27563:2023,” ISO, https://www.iso.org/standard/80396.html.
[89] “ISO/IEC TR 24029-1:2021,” ISO, https://www.iso.org/standard/77609.html; ISO/IEC 24029-2:2023, https://www.iso.org/standard/79804.html.
[90] “ISO/IEC TR 29119-11:2020,” ISO https://www.iso.org/standard/79016.html.
[91] “ISO/IEC CD 27090,” ISO, https://www.iso.org/standard/56581.html.
[92] “ISO/IEC WD 27091.2,” ISO, https://www.iso.org/standard/56582.html.
[93] ENISA, Cybersecurity of AI and Standardisation (ENISA: March 2023), https://www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation.
[94] “OWASP AI Exchange,” OWASP, https://owaspai.org/.
[95] ENISA, Cybersecurity of AI and Standardisation (ENISA: March 2023), https://www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation.
[96] CETaS workshop, 17 January 2024.
[97] ENISA, Cybersecurity of AI and Standardisation (ENISA: March 2023), https://www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation.
[98] “ISO/IEC 22989: 2022,” ISO, https://www.iso.org/standard/74296.html.
[99] HM Government, Guidelines for secure AI system development (National Cyber Security Centre: 2023), https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development.
[100] Apostol Vassilev et al., Adversarial Machine Learning: A Taxonomy of Terminology of Attacks and Mitigations (NIST: January 2024), https://csrc.nist.gov/pubs/ai/100/2/e2023/final.
[101] “ISO/IEC 29147:2018,” ISO, https://www.iso.org/standard/72311.html.
[102] “ETSI GR SAI 002, Securing Artificial Intelligence (SAI) Data Supply Chain Security,” ETSI, August 2021, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/002/01.01.01_60/gr_SAI002v010101p.pdf.
[103] Rosamund Powell and Marion Oswald, “Assurance of Third-Party AI Systems for UK National Security,” CETaS Research Report (January 2024), https://cetas.turing.ac.uk/publications/assurance-third-party-ai-systems-uk-national-security; Ian Brown, “Expert Explainer: allocating accountability in AI supply chains,” Ada Lovelace Institute Paper (June 2023), https://www.adalovelaceinstitute.org/resource/ai-supply-chains/; Jennifer Cobbe, Michael Veale and Jatinder Singh, “Understanding accountability in algorithmic supply chains,” in FaccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (New York: Association for Computing Machinery, 2023), 1186-1197.
[104] Semiconductor devices,” IEC TC 47, IEC, https://www.iec.ch/dyn/www/f.
[105] “ETSI GR SAI 006, Securing Artificial Intelligence: The role of hardware in security of AI,” ETSI, March 2022, https://www.etsi.org/deliver/etsi_gr/SAI/001_099/006/01.01.01_60/gr_SAI006v010101p.pdf.
[106] “PWI NWIP AI system logging AI system logging,” CEN, April 2023, https://standardsdevelopment.bsigroup.com/projects/9023-08548#/section.
[107] Apostol Vassilev et al., Adversarial Machine Learning: A Taxonomy of Terminology of Attacks and Mitigations (NIST: January 2024), https://csrc.nist.gov/pubs/ai/100/2/e2023/final.
[108] Charles M. Schmidt, Best Practices for Technical Standard Creation (MITRE: April 2017), 1, https://www.mitre.org/sites/default/files/publications/17-1332-best-practices-for-technical-standard-creation.pdf.
[109] Interview with standards experts (2), 1 December 2023; Interview with academic expert, 31 October 2023.
[110] Interview with academic expert, 16 November 2023.
[111] Interview with government representative, 23 October 2023; Interview with academic expert, 7 November 2023; Interview with standards body representative, 7 November 2023; Interview with regulator, 9 November 2023; Interview with academic expert, 14 November 2023; Interview with standards expert, 20 November 2023.
[112] “Developing Standards,” ISO, https://www.iso.org/developing-standards.html.
[113] Interview with regulator, 9 November 2023; Interview with academic expert, 14 November 2023.
[114] Interview with government standards expert, 13 November 2023.
[115] Interview with government representative (1), 24 October 2023; Interview with industry expert, 17 November 2023.
[116] Interview with industry expert, 31 October 2023.
[117] Interview with industry expert, 31 October 2023.
[118] Interview with academic expert, 3 November 2023.
[119] Interview with standards expert, 27 October 2023.
[120] Interview with government representative, 1 October 2023; Interview with government expert, 21 November 2023.
[121] Interview with government representative, 1 October 2023; BSI, “Principles of BSI Flex Standardisation,” 2021, https://www.bsigroup.com/siteassets/pdf/en/insights-and-media/insights/brochures/bsi-flex-0-v1.0-2021-11.pdf.
[122] Interview with industry experts, 16 November 2023.
[123] Interview with standards body representative, 7 November 2023.
[124] Interview with government expert, 21 November 2023; Interview with standards expert, 20 November 2023.
[125] “The History of ISO 27001,” SecureFrame, https://secureframe.com/hub/iso-27001/history
[126] Interview with standards expert, 7 November 2023.
[127] Interview with standards experts, 1 December 2023; Interview with standards body expert, 7 November 2023.
[128] Interview with government security expert, 3 November 2023.
[129] Interview with industry expert, 31 October 2023; Interview with academic expert, 16 November 2023; Interview with academic expert (2), 16 November 2023.
[130] Jeferson O. Batista et al., “Ontologically correct taxonomies by construction,” (May 2022), https://www.sciencedirect.com/science/article/abs/pii/S0169023X22000246.
[131] Interview with industry expert, 3 November 2023.
[132] Interview with academic expert (2), 16 November 2023.
[133] Sam Stockwell et al., “The Future of Privacy by Design Technology: Policy Implications for UK Security,” CETaS Research Reports (September 2023): 30-33, https://cetas.turing.ac.uk/publications/future-privacy-design-technology.
[134] Hadrien Pouget, “What will the role of standards be in AI governance?,” Ada Lovelace Institute Blog, 5 April 2023, https://www.adalovelaceinstitute.org/blog/role-of-standards-in-ai-governance/.
[135] Interview with government representative, 23 October 2023.
[136] Interview with government security expert, 3 November 2023; Interview with government standards expert, 13 November 2023; Interview with academic expert, 16 November 2023; Interview with academic expert (2), 16 November 2023.
[137] Interview with standards expert, 27 October 2023; Interview with government standards expert, 13 November 2023.
[138] Interview with government expert, 21 November 2023.
[139] “About ANEC,” ANEC, https://www.anec.eu/priorities/digital-society.
[140] “IETF Systers,” IETF, https://www.ietf.org/about/groups/ietf-systers/.
[141] “Join the BSI Consumer & Public Interest Network (CPIN),” BSI, https://www.bsigroup.com/en-HK/About-BSI/uk-national-standards-body/how-to-get-involved-with-standards/Become-a-consumer-representative/.
[142] Christine Galvana, Inclusive AI governance (Ada Lovelace Institute: March 2023), https://www.adalovelaceinstitute.org/report/inclusive-ai-governance/.
[143] Interview with government representative, 1 October 2023.
[144] Hadrien Pouget, “What will the role of standards be in AI governance?,” Ada Lovelace Institute Blog, 5 April 2023, https://www.adalovelaceinstitute.org/blog/role-of-standards-in-ai-governance/; Interview with academic expert, 16 November 2023.
[145] Interview with government security expert, 3 November 2023.
[146] Interview with academic expert, 31 October 2023; Interview with government expert, 21 November 2023.
[147] Neil Brown et al., The Role of Standardisation in Support of Emerging Technologies in the UK (Department for Business, Energy & Industrial Strategy: May 2022), 9-10, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1080614/role-of-standardisation-in-support-of-emerging-technologies-uk.pdf; Peter Cihon, Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development (Future of Humanity Institute: April 2019), 3, https://www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf; Interview with government representative, 1 December 2023.
[148] Interview with standards expert, 27 October 2023.
[149] Interview with academic expert, 1 November 2023; Interview with standards body representative, 7 November 2023.
[150] “How to buy and access standards,” BSI, https://www.bsigroup.com/en-IL/Standards/how-to-buy-and-access-international-standards-and-regulatory-information/.
[151] “What does membership cost?,” ETSI, https://www.etsi.org/membership/dues.
[152] “OWASP AI Exchange,” OWASP, https://owaspai.org/; “About the AI Standards Hub,” AI Standards Hub, https://aistandardshub.org/the-ai-standards-hub/.
[153] These figures do rise for large businesses (more than 250 employees) where 23% have adhered to ISO 27001, 20% to a NIST standard, 35% to cyber essentials and 17% to cyber essentials plus. See: HM Government, Cyber Security Breaches Survey 2022 (Department for Digital, Culture, Media & Sport: 2022), https://www.gov.uk/government/statistics/cyber-security-breaches-survey-2022/cyber-security-breaches-survey-2022#chapter-2-profiling-uk-businesses-and-charities.
[154] Interview with academic expert, 14 November 2023; Interview with legal expert, 2 November 2023; Interview with standards body expert, 7 November 2023; Interview with standards experts, 1 December 2023.
[155] Interview with academic expert, 14 November 2023.
[156] Interview with legal expert, 2 November 2023.
[157] Interview with standards body expert, 7 November 2023.
[158] Interview with academic expert, 14 November 2023.
[159] Interview with legal expert, 7 November 2023.
[160] Interview with standards experts, 1 December 2023.
[161] Interview with legal expert, 2 November 2023.
[162] Interview with academic expert, 3 November 2023.
[163] A range of stakeholders may use these levers to increase standards uptake, for instance standards development bodies or multilateral organisations. We focus primarily on the role the UK government can and should play to increase uptake of international standards for AI security.
[164] HM Government, 2022 cyber security incentives and regulation review (Department for Digital Culture Media & Sport: January 2022), https://www.gov.uk/government/publications/2022-cyber-security-incentives-and-regulation-review/2022-cyber-security-incentives-and-regulation-review.
[165] HM Government, Cyber Security Regulation and Incentives Review (December 2016), https://assets.publishing.service.gov.uk/media/5a7f944940f0b62305b87ffb/Cyber_Security_Regulation_and_Incentives_Review.pdf.
[166] Within government, enforcement of cybersecurity is more widespread as government departments must base their practices on policies published by both the National Cybersecurity Centre (NCSC) and the Government Security Group (GSG).
[167] HM Government, Government Cyber Security Strategy (Cabinet Office: 2022), https://assets.publishing.service.gov.uk/media/61f0169de90e070375c230a8/government-cyber-security-strategy.pdf.
[168] HM Government, A pro-innovation approach to AI regulation (Department for Science Innovation and Technology: March 2023), https://assets.publishing.service.gov.uk/media/64cb71a547915a00142a91c4/a-pro-innovation-approach-to-ai-regulation-amended-web-ready.pdf.
[169] The notable exception is government support for the AI Standards Hub. The AI Standards Hub hosts an observatory of AI standards, helps track existing and forthcoming standards, and provides research and training on international AI standards. See: https://aistandardshub.org.
[170] Andreas Tsamados, Luciano Floridi and Mariarosaria Taddeo, “The Cybersecurity Crisis of Artificial Intelligence: Unrestrained Adoption and Natural Language-Based Attacks,” SSRN (September 2023), https://ssrn.com/abstract=4578165.
[171] CETaS workshop, 17 January 2024.
[172] Interview with regulator, 9 November 2023.
[173] Claire O’Brien, Bennett Borden, Mark Rasdale and Daisy Wong, “The role of harmonised standards as tools for AI act compliance,” (DLA Piper: January 2024), https://www.dlapiper.com/en-ae/insights/publications/2024/01/the-role-of-harmonised-standards-as-tools-for-ai-act-compliance.
[174] “Standard Setting,” EU AI Act, https://artificialintelligenceact.eu/standard-setting/.
[175] CEN-CENELEC, “ETUC’s position on the draft standardisation request in support of safe and trustworthy AI,” CEN-CENELEC News, 1 June 2022, https://www.cencenelec.eu/news-and-events/news/2022/newsletter/issue-34-etuc-s-position-on-the-draft-standardization-request-in-support-of-safe-and-trustworthy-ai/.
[176] Interview with industry expert, 17 November 2023.
[177] Interview with standards expert, 27 October 2023.
[178] Interview with legal expert, 2 November 2023.
[179] Interview with government expert, 21 November 2023.
[180] HM Government, A pro-innovation approach to AI regulation: government response (Department for Science, Innovation and Technology: February 2024), https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response.
[181] “Designated Standards,” HM Government, Department for Business and Trade and Office for Product Safety and Standards, 3 December 2020, https://www.gov.uk/guidance/designated-standards.
[182] Ibid.
[183] HM Government, National AI Strategy (September 2021), https://assets.publishing.service.gov.uk/media/614db4d1e90e077a2cbdf3c4/National_AI_Strategy_-_PDF_version.pdf.
[184] HM Government, A pro-innovation approach to AI regulation: government response (Department for Science, Innovation and Technology: February 2024), https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response.
[185] HM Government, Cyber security longitudinal survey: wave 1 (DCMS: January 2022), https://www.gov.uk/government/publications/cyber-security-longitudinal-survey-wave-one/cyber-security-longitudinal-survey-wave-1; Note, these figures differ somewhat from those included from the “Cyber Security Breaches Survey” as they are based on distinct research from UK government.
[186] Interview with academic expert, 14 November 2023.
[187] Interview with academic expert, 14 November 2023.
[188] Interview with standards body expert, 7 November 2023.
[189] “Funded Cyber Essentials Programme,” HM Government, NCSC, 19 December 2022, https://www.ncsc.gov.uk/information/funded-cyber-essentials-programme.
[190] Chris Ensor, “Cyber Essentials: are there any alternative standards?” NCSC Blog, 23 January 2024, https://www.ncsc.gov.uk/blog-post/cyber-essentials-are-there-any-alternative-standards.
[191] CETaS workshop, 17 January 2024.
[192] Interview with academic expert (1), 16 November 2023.
[193] Ibid.
[194] “MITRE ATT&CK,” MITRE, https://attack.mitre.org/.
[195] HM Government, Guidelines for secure AI system development (National Cyber Security Centre: 2023), https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development.
[196] “Google’s Secure AI Framework,” Google, https://safety.google/cybersecurity-advancements/saif/.
[197] Findings in Table 4 are partially based on a CETaS workshop, 17 January 2024.
[198] HM Government, Guidelines for secure AI system development (National Cyber Security Centre: 2023), https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development.
[199] “How we work,” Frontier AI Forum, https://www.frontiermodelforum.org/how-we-work/.
[200] “Google’s Secure AI Framework,” Google, https://safety.google/cybersecurity-advancements/saif/.
[201] “OWASP AI Exchange,” OWASP, https://owaspai.org/.
[202] Apostol Vassilev et al., Adversarial Machine Learning: A Taxonomy of Terminology of Attacks and Mitigations (NIST: January 2024), https://csrc.nist.gov/pubs/ai/100/2/e2023/final.
[203] CETaS workshop, 17 January 2024.
[204] HM Government, The roadmap to an effective AI assurance ecosystem (CDEI: December 2021), https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem.
[205] Rosamund Powell and Marion Oswald, “Assurance of third-party AI for UK national security,” CETaS Research Reports (January 2024), https://cetas.turing.ac.uk/publications/assurance-third-party-ai-systems-uk-national-security.
[206] Ibid.
[207] HM Government, The roadmap to an effective AI assurance ecosystem (CDEI: December 2021), https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem.
[208] Ghazi Ahamat, Madeleine Chang and Christopher Thomas, “Types of assurance in AI and the role of standards,” CDEI Blog, 17 April 2021, https://cdei.blog.gov.uk/2021/04/17/134/.
[209] HM Government, Implementing the UK’s AI Regulatory Principles: Initial Guidance for Regulators (Department for Science, Innovation & Technology: February 2024), https://assets.publishing.service.gov.uk/media/65c0b6bd63a23d0013c821a0/implementing_the_uk_ai_regulatory_principles_guidance_for_regulators.pdf.
[210] Interview with academic expert (1), 16 November 2023.
Authors
Citation information
Rosamund Powell, Sam Stockwell, Nalanda Sharadjaya and Hugh Boyes, "Towards Secure AI: How far can international standards take us?," CETaS Research Reports (March 2024).