Abstract

This report focuses on the risks related to the potential lack of predictability of AI systems – referred to as the predictability problem – and its implications for the governance of AI systems in the national security domain. The predictability of an AI system indicates the degree to which one can answer the question: what will the system do? The predictability problem can refer to both correct and incorrect outcomes of an AI system, as the issue is not whether the outcomes follow logically from the workings of the system, but whether it is possible to foresee them at the time of deployment.

In this report, we first analyse the predictability problem from technical and socio-technical perspectives and then focus on relevant UK, EU and US policies to consider whether and how they address this problem. From a technical perspective, we argue that, given the multi-faceted process of designing, developing, and deploying an AI system, it is not possible to account for all the sources of error or emergent behaviours that could result. Moreover, even in an ideal scenario where no errors are assumed or detected at the design or development stage, a deployed AI system may still produce formally correct (but unwanted) outcomes that were not foreseeable at the time of deployment.

Our policy analysis includes eight recommendations to mitigate the risks related to the predictability problem. The key suggestions are to centre governance approaches on human-machine teams (HMT-AI), rather than on AI systems alone, and to conceptualise the predictability problem as multi-dimensional, with solutions focussed on shared standards and criteria for the composition of HMT-AI. Among these standards and criteria, requirements of trustworthy AI are particularly relevant and should be coupled with standards and certification schemes assessing the predictability of AI systems, as well as procedures to audit HMT-AI. Cost-benefit analyses and impact assessments underpinning the decision to use HMT-AI in national security should account for the predictability problem, its potential impact on human rights and democratic values, and the risk of unintended consequences. To ensure sufficient risk management when deploying potentially unpredictable AI systems, we suggest adapting the ALARP principle – as low as reasonably practicable – as a foundation for developing an AI-specific risk assessment framework for the predictability problem in HMT-AI.

The proposed ALARP-based framework would offer useful practical guidance, but alone would not be sufficient to identify and mitigate the risks posed by the predictability problem. Additional policy, guidance and training are required to account fully for these risks. The higher the impact of the decisions that an AI system supports, the greater the duty of care on those designing, developing, and using that system, and the lower the acceptable risk threshold. The analysis and recommendations should be read as actionable insights and practical suggestions to support relevant stakeholders in fostering socially acceptable and ethically sound uses of AI in the national security context.

This publication is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original authors and source are credited.

Citation information

Mariarosaria Taddeo, Marta Ziosi, Andreas Tsamados, Luca Gilli and Shalini Kurapati, "Artificial Intelligence for National Security: The Predictability Problem," CETaS Research Reports (September 2022).