This publication is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original author and source are credited.
‘When I went to the ethics committee, I actually had my eyes opened as a police officer.’
Introduction
The West Midlands Office of the Police and Crime Commissioner (WMOPCC) and West Midlands Police (WMP) have for the past five years maintained an independent Data Ethics Committee (the Committee) to advise on the design, development and deployment of advanced data analytics and AI capabilities. The Committee’s members are drawn from backgrounds in academia, industry, the public/third sector and policing, with expertise in law, computer science, ethics, social impact and victims’ interests.
It is tasked with providing a thorough ethical analysis of the projects that come before it, placing ‘rights at the heart’ of ethical review and ‘provid[ing] practical and independent advice’, as the WMOPCC’s Terms of Reference put it. The Committee’s remit calls for a form of review that extends beyond a narrowly conceived notion of ethics: the Terms of Reference mandate an analysis grounded in legal considerations (especially as regards human rights), good data science, an outcome-oriented analysis of benefits and harms, and an eye to public acceptability and engagement. Since 2019, the Committee has met at least quarterly, advising and making recommendations on many projects and proposed tools, ranging from in-principle ideas to tools ready for operational use. Its papers and minutes are published via the WMOPCC.
The Committee’s role is purely advisory; the Chief Constable and/or the Police and Crime Commissioner (PCC) remain responsible for decisions as to whether a project proceeds. The secretariat is provided by the PCC, which means the Committee is effectively part of the PCC’s strategic oversight function, although it has no independent existence, budget or powers.
Impact of the Data Ethics Committee
The Lords Justice and Home Affairs Committee acknowledged in 2022 that the Data Ethics Committee makes an innovative and transparent contribution to responsible AI in policing. Until now, however, the impact of the Committee’s activities had not been assessed systematically. A six-month scoping project, funded through the Arts and Humanities Research Council’s Bridging Responsible AI Divides (BRAID) programme, brought together a team of researchers in law, computer science, social innovation and policing with extensive experience of the theory and practice of real-world ethical approaches to data analytics and AI in sensitive contexts. The research team’s partnership with the WMOPCC and the WMP presented a unique opportunity to analyse the impact of the Committee’s advice on the operationalisation of AI tools in policing, with a specific focus on engagement with vulnerable groups.
This interdisciplinary research used a mixed-methods approach, including technology observations and 26 interviews with police, Committee and community representatives, and produced recommendations on how to use independent advisers in this context. Two of the researchers involved in the project are members of the Committee (the author of this publication is the Chair), which could have introduced an element of bias. As one of several steps to mitigate this risk, the interviews were analysed by research team members who had no direct link with the Committee.
Selected research findings
Influence on technology
‘Nothing springs to mind immediately where there was a recommendation, and I thought, “no, I’m not going to do anything about that.” There were lots and lots of examples of where there were recommendations that I did do something about.’
– Data Analytics Lab and Police Representative 1
The research found that the Committee’s exchanges with the WMP Data Analytics Lab influenced the design, operationalisation, transparency and good practice of data analytics projects in policing. The WMP had made changes to projects as a result of engagement with the Committee.
This engagement also influenced the design and operationalisation of projects in more subtle ways. The Data Analytics Lab now anticipates the concerns of the Committee in its approach to new projects. Senior police officers responsible for the delivery of projects spoke of insights they gained from these exchanges. However, this influence was not immediately apparent from a cursory reading of the minutes, and steps could be taken to increase transparency in this regard.
Furthermore, the technical detail of the machine learning projects, systems and techniques presented (such as feature engineering), and therefore the implications of their outputs, demanded considerable time and expertise of Committee members. There is an ongoing need for Committee members to understand the significance of the range of performance metrics (accuracy, sensitivity, precision, specificity) for predictive models; how these metrics are assessed; why the Data Analytics Lab may have favoured one metric over another; when one metric should be preferred over another; and what that selection might mean for the impact of the tool and the purposes for which it should be used. A simple worked example of these metrics is set out below.
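To make the distinction between these metrics concrete, the short Python sketch below computes all four from a 2x2 confusion matrix. It is a minimal illustration only: the figures are invented for the purpose and are not drawn from any WMP project or tool.

```python
# Minimal sketch: computing the four performance metrics discussed above
# from a 2x2 confusion matrix. All numbers are hypothetical.

def metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return the four headline metrics for a binary predictive model."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),  # overall correctness
        "sensitivity": tp / (tp + fn),                   # share of true positives the model finds
        "precision":   tp / (tp + fp),                   # share of flagged cases that are genuine
        "specificity": tn / (tn + fp),                   # share of true negatives correctly cleared
    }

# Hypothetical model flagging 'high risk' cases: of 1,000 individuals,
# 50 are genuinely high risk. The model finds 30 of them (20 missed)
# but also raises 60 false alarms.
for name, value in metrics(tp=30, fp=60, fn=20, tn=890).items():
    print(f"{name:>11}: {value:.2f}")
```

On these invented figures, accuracy is a reassuring 0.92, yet sensitivity is only 0.60 and precision 0.33: two in every three flagged individuals are false alarms. A review that relied on accuracy alone would miss exactly the trade-off that determines how, and whether, such a tool should be used.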
Human rights
‘Perhaps, you know, it's maybe ... a bit of chicken and egg. Which came first, you know: is the lab only doing that because they know the Committee will look at it, or are they doing it because they want to do it and they think it's right to do, and then there's an additional level of assurance? Argue it either way, but ... certainly if you ... took a slice through it now rather than, you know, day one, you would see that kind of human rights consideration in there.’
– Data Analytics Lab and Police Representative 4
The research found that the Committee has a strong human rights focus. Data Analytics Lab and police representatives described how engagement with the Committee enhanced or transformed their understanding of issues with rights implications and informed their approach to the development of projects.
However, the research highlighted concerns that the Committee’s human rights discourse may not always be productive, and that it may focus too narrowly on rights relating to privacy and protection from discrimination. Interviewees, including community representatives, spoke of the need to broaden rights-based conversations to take account of other rights, mentioning the right to a fair trial and the state’s positive obligations for public safety under the prohibition of torture and the right to life.
Committee representatives also expressed concerns about a lack of information that would allow them to assess the real-world outcomes resulting from the operationalisation of AI projects. This links to concern about when projects would return to the Committee for discussion and advice after operationalisation. A lack of information about outcomes could limit the Committee’s ability to assess the human rights and ethical implications of the deployment of AI projects, and to understand technical successes and challenges.
Some interviewees were concerned about the difficulty of defining the human rights concepts of necessity and proportionality, and the risk that these concepts could become no more than a ‘ritual incantation’ (Ethics Committee Representative 2). A structured framework for assessing the proportionality of automated analytics in national security and law enforcement – such as the one proposed in CETaS research in 2023 – could provide the ‘foundation stones’ for review (Ethics Committee Representative 4) and help provide ‘evidence’ for why a tool was being used (Community Representative 9).
Vulnerable groups and community representation
‘Everybody that’s within the system, they're all comfortable with it because they know how it’s working, whereas the [average] person knows absolutely nothing. They are going to be very uncomfortable with it because they know nothing about it.’
– Community Representative 2
The research found little or no awareness of the work of the Committee among the community representatives interviewed, and a concern that bodies of this type might be a cost-saving exercise or might merely pay lip service to wider engagement. It also found significant barriers to improving the representation of vulnerable communities within the Committee. The Committee does, however, consider the interests of the community in its discussions around privacy, disproportionality and safeguarding.
In this context, the research identified three stages of community representation that needed to be addressed:
• Accessibility and definition. Community representatives were unsure of the role of the Committee within the policing ecosystem and how it might have a positive impact on the community. Community representatives were more familiar with the role of scrutiny panels, for example, and saw involvement in those as a more effective use of their time.
• Capacity and influence. Community representatives said that a lack of technological knowledge would restrict their ability to engage in discussions. However, the other stakeholder groups did not consider this to be an issue, and suggested that contextual conversations and input based on lived experience should be encouraged.
• Dissemination and development. The research suggested that it may be beneficial for community representatives who join the Committee to act as community advocates, disseminating information about the Committee to the wider community.
Recommendations for a national approach
‘When I went to the ethics committee, I actually had my eyes opened as a police officer ... There were some considerations that came our way for our review that, originally, I didn’t really understand. But, as I got more into the project, I could absolutely see the relevance of why we were considering them.’
– Data Analytics Lab and Police Representative 1
The Committee’s advice and recommendations have influenced the development of data analytics tools, with changes being made in response to the advice. At the same time, by working with operational police officers, Committee members gained a deeper understanding of the pressures on the police, the impact of crime on victims and the potential of data analytics and AI to improve the police’s service to the community. Consultation and advice from diverse independent voices can contribute to the robustness of technology implementation, from technical, human rights, operational and community perspectives. For such advice to add value, however, it must be incorporated fully into implementation and oversight processes, and not regarded as an add-on or tick-box exercise. Lessons from the Committee’s experience can inform best practice and feed into a framework for responsible AI in policing, including a national model for independent advice and oversight.
Although it takes time and effort to construct, embed and refine such an advisory process, doing so can contribute to a positive culture in which the police develop knowledge and understanding of the issues likely to be raised by independent oversight, enabling them to anticipate and consider those issues in planning and implementation. There can be tension between Committee members and police staff over the extent to which operational decisions fall within the Committee’s remit. Committee members maintain, however, that how AI outputs are deployed in practice must be understood if potential benefits, risks/harms and proportionality are to be assessed. The regular involvement of operational police officers, to discuss and explain the operational priorities and actions behind the AI tools, can help resolve these tensions. Furthermore, it is important to pay attention to police responsibilities for public safety (and how AI may support those responsibilities) as well as to risks related to privacy, fair trial and freedom of expression.
A data ethics process cannot by itself improve trust in policing AI, especially among vulnerable groups. For that to happen, the voices of the community must be incorporated into the process, and the work of the Committee must be known, and its input respected and influential.
It would be beneficial to highlight and disseminate information about the work of the Committee and the Data Analytics Lab in a manner that could influence other police authorities, ethics committees, policymakers, oversight bodies and international stakeholders. This information covers technical successes, challenges and methods, such as efforts to address problems encountered in the development of a particular offender ‘harm scoring’ model, and a dashboard that sets out error and probability metrics for another tool – the Crime Seasonality Planner (see the ‘Technical observations’ section of the main report for further discussion).
The Committee’s experience shows that there are often no ‘black and white’ answers to questions about ethical, legal or technical challenges, or the key issue of proportionality. However, the research indicated that a structured framework such as that cited above might improve the productiveness, robustness and objectivity of deliberations about necessity and proportionality, and might assist in the practical application of the proportionality test when dealing with technical issues of data analytics and AI.
Building a culture of responsible AI in policing depends on time, resource, commitment, knowledge and collaborative communication. It is important for police authorities to be aware of the potential issues that may be raised by independent oversight bodies, so they can plan and prepare for related conversations. Meanwhile, those involved in oversight need to be aware of the operational purposes and objectives of AI projects. This requires an openness to both teach and learn from other groups, and an investment of time and resource in relationship-building.
Final thoughts
This research focused on the work of a data ethics oversight model in policing. However, the assessment of the scientific validity, proportionality and human rights implications of using new methods of data analytics and AI in defence and national security is equally fraught with ambiguity and challenges linked to public trust. Despite the sensitivity of national security work, structured independent advice can help bridge the gap between ethical reflection, scientific validity and human rights considerations. The research discussed in this article provides new evidence for the potential benefits of the implementation of such a committee or panel, and ways that it could be improved.
The project discussed in this article is part of the BRAID programme and funded by the Arts and Humanities Research Council, part of UK Research and Innovation.
The views expressed in this article are those of the author, and do not necessarily represent the views of The Alan Turing Institute or any other organisation.
References
[1] Ardi Janjeva, Muffy Calder and Marion Oswald, "Privacy Intrusion and National Security in the Age of AI: Assessing proportionality of automated analytics," CETaS Research Reports (May 2023): 3.
[2] HM Government, Technology Rules? The advent of new technologies in the justice system (Justice and Home Affairs Committee: March 2022), https://committees.parliament.uk/publications/9453/documents/163029/default/.
[3] Marion Oswald et al., Ethical review to support Responsible Artificial Intelligence (AI) in policing: A preliminary study of West Midlands Police’s specialist data ethics review committee (Northumbria University: September 2024), https://researchportal.northumbria.ac.uk/en/publications/ethical-review-to-support-responsible-artificial-intelligence-ai-.
Authors
Marion Oswald
Citation information
Marion Oswald, "Bridging the Gap: How independent data ethics committees can support responsible AI use in policing," CETaS Expert Analysis (September 2024).