AI Risk and Risk Management
AI's impact on elections is being overblown
"While Meta has a vested interest in minimizing AI’s alleged impact on elections, it is not alone. Similar findings were also reported by the UK’s respected Alan Turing Institute in May. Researchers there studied more than 100 national elections held since 2023 and found “just 19 were identified to show AI interference.” Furthermore, the evidence did not demonstrate any “clear signs of significant changes in election results compared to the expected performance of political candidates from polling data.”"
- MIT Tech Review, 3 September 2024.
Is AI a threat to the UK general election?
- The Science or Fiction Podcast (KCL), 30 June 2024.
How much is AI meddling in elections?
- Reuters, 27 June 2024.
This election is a maze of confusing policies, but here's how AI could help
- Evening Standard, 21 June 2024.
How Worried Should We Actually Be About Election Interference?
- The Huffington Post, 19 June 2024.
OpenAI is very smug after thwarting five ineffective AI covert influence ops
"OpenAI's determination that these AI-powered covert influence campaigns were ineffective was echoed in a May 2024 report on UK election interference by The Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute.
"The current impact of AI on specific election results is limited, but these threats show signs of damaging the broader democratic system," the CETaS report found, noting that of 112 national elections that have either taken place since January 2023 or will occur in 2024, AI-based meddling was detected in just 19 and there's no data yet to suggest election results were materially swayed by AI.
That said, the CETaS report argues that AI content creates second-order risks, such as sowing distrust and inciting hate, that are difficult to measure and have uncertain consequences."
- The Register, 30 May 2024.
Electoral Commission to warn voters of online disinformation amid foreign interference election fears
"A new hub will be created on the Commission’s website and include information urging voters to think critically about information they may see or hear online, particularly on social media.
It comes after The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, warned about the potential threats of artificial intelligence (AI) during the election campaign.
A new study from the institute said there was little evidence that AI had directly impacted election results. There have, however, been early signs of the damage the technology had caused to democratic systems more broadly through a “polarised information space”.
This included confusion over whether AI-generated content is real, damaging trust in online sources; deepfakes inciting online hate against political figures, threatening their personal safety; and politicians exploiting AI disinformation for potential electoral gain."
- The Independent, 30 May 2024.
General Election 2024: How to spot misinformation and fakes
"This all comes as the Alan Turing Institute warned that action was needed to protect the UK election from AI disinformation, calling on the Electoral Commission and Ofcom to create guidelines and get agreements from political parties on how AI might be used in campaigning.
However, the study did find that there was "limited evidence" that AI would impact election results, but that it could still be used to incite hate and spread disinformation online."
- ITV, 30 May 2024.
More than 90% of UK public have encountered misinformation online, study says
- Yahoo!News, 30 May 2024.
Election risks, safety summits and Scarlett Johansson: the week in AI – podcast
- The Guardian Science Weekly Podcast, 30 May 2024.
Deepfakes and AI 'unlikely to swing the election result'
- The Times, 29 May 2024 [print].
"The use of deepfakes and AI to spread misinformation is not likely to affect the outcome of the general election, experts have concluded, but could still "erode trust in democracy". The Alan Turing Institute analysed 112 elections around the world and found attempts to use AI to trick voters. A deepfake is a video or audio clip that has been manipulated using AI tools to replicate a person's face or voice. However, the researchers said "to date there is limited evidence that AI has prevented a candidate from winning compared to the expected result". The study said: "The current impact of AI on specific election results is limited but these threats show signs of damaging the broader democratic system.""
Tories fail to dent support for Labour
- The Times, 29 May 2024 [print].
Poll briefing: AI 'threat' to election
- Daily Mail, 29 May 2024 [print].
Warning over political deepfakes
- the i, 29 May 2024 [print].
AI institute raises alarm about election deepfakes
- The Daily Telegraph, 29 May 2024 [print].
AI disinformation 'could disrupt election'
- Western Daily Press, 29 May 2024 [print].
Warning on use of AI to mislead voters
- Yorkshire Post, 29 May 2024 [print].
BBC Radio 5 Live - 29 May 2024
Interview with Sam Stockwell on the threat of AI to the UK general election.
BBC Radio Scotland - Good Morning Scotland - 29 May 2024
Interview with Sam Stockwell on the threat of AI to the UK general election.
BBC Radio 4 - Today - 29 May 2024
Interview with Sam Stockwell on the threat of AI to the UK general election.
LBC News with Martin Stanford - 29 May 2024
Interview with Sam Stockwell on the threat of AI to the UK general election [segment: 30:55–36:38].
Action needed to protect election from AI disinformation, study says
- Evening Standard, 29 May 2024.
"In its study, CETaS said it had created a timeline of how AI could be used in the run-up to an election, suggesting it could be used to undermine the reputation of candidates, falsely claim that they have withdrawn or use disinformation to shape voter attitudes on a particular issue.
The study also said misinformation around how, when or where to vote could be used to undermine the electoral process.
Sam Stockwell, research associate at the Alan Turing Institute and the study’s lead author, said: “With a general election just weeks away, political parties are already in the midst of a busy campaigning period. Right now, there is no clear guidance or expectations for preventing AI being used to create false or misleading electoral information. That’s why it’s so important for regulators to act quickly before it’s too late.”
Dr Alexander Babuta, director of CETaS, said: “While we shouldn’t overplay the idea that our elections are no longer secure, particularly as worldwide evidence demonstrates no clear evidence of a result being changed by AI, we nevertheless must use this moment to act and make our elections resilient to the threats we face. Regulators can do more to help the public distinguish fact from fiction and ensure voters don’t lose faith in the democratic process.""
AI-generated photos and videos pose threat to General Election as 'deep-fake' images could be used to attack politicians' characters, spread hate, and erode trust in democracy
- Daily Mail, 29 May 2024.
UK Government Urged to Publish Guidance for Electoral AI
- Bank InfoSecurity, 29 May 2024.
Alan Turing Institute warns of AI threats to general election
- UK Authority, 29 May 2024.
AI deepfakes and election misinformation
- Science Media Centre, 29 May 2024.
Time running out for regulators to tackle AI threat ahead of general election, researchers warn
- Sky News, 29 May 2024.
"Sam Stockwell, research associate at the Alan Turing Institute and lead author of the report, said online harassment of public figures who are subject to deepfake attacks could push some to avoid engaging in online forums.
He said: "The challenge in discerning between AI-generated and authentic content poses all sorts of issues down the line… It allows bad actors to exploit that uncertainty by dismissing deepfake content as allegations, there's fake news, it poses problems with fact-checking, and all of these things are detrimental to the fundamental principles of democracy."
The report called for Ofcom, the media regulator, and the Electoral Commission, to issue joint guidance and request voluntary agreements on the fair use of AI by political parties in election campaigning.
It also recommended guidance for the media on reporting about AI-generated fake content and making sure voter information includes guidance on how to spot AI-generated content and where to go for advice."
'Unintended harms' of generative AI pose national security risk to UK, report warns
- Tech Monitor, 16 December 2023.
UK AI National Institute Urges 'Red Lines' For Generative AI
- GovInfoSecurity, 15 December 2023.
House of Lords AI in Weapon Systems Committee report 'Proceed with Caution: Artificial Intelligence in Weapon Systems'
- UK Parliament, 1 December 2023.
Frontier AI: capabilities and risks
- Gov.uk, 25 October 2023.
Future risks of frontier AI
- Gov.uk, 25 October 2023.
UN Report on 'AI and International Security: Understanding the Risks and Paving the Path for Confidence-Building Measures'
- UNIDIR, 12 October 2023.
Intelligence Tradecraft
Report sees AI as key to national security decision making
- ADS News, 25 April 2024.
AI Key for National Security, Says New Turing Institute Report
- Digit News, 24 April 2024.
Alan Turing Institute report advises AI to be embraced in security
- UK Authority, 25 April 2024.
AI is coming to help national security – but could bring major risks, official report warns
- The Independent, 23 April 2024.
Using AI for national security decision making vital but risky, report warns
- The Standard, 23 April 2024.
“We are already taking decisive action to ensure we harness AI safely and effectively, including hosting the inaugural AI Safety Summit and the recent signing of our AI Compact at the Summit for Democracy in South Korea,” said Oliver Dowden, the deputy prime minister. "We will carefully consider the findings of this report to inform national security decision makers to make the best use of AI in their work protecting the country.”
“Our research has found that AI is a critical tool for the intelligence analysis and assessment community. But it also introduces new dimensions of uncertainty, which must be effectively communicated to those making high-stakes decisions based on AI-enriched insights,” said Alexander Babuta, director of The Alan Turing Institute’s Centre for Emerging Technology and Security. “As the national institute for AI, we will continue to support the UK intelligence community with independent, evidence-based research, to maximise the many opportunities that AI offers to help keep the country safe.”
“AI is not new to GCHQ or the intelligence assessment community, but the accelerating pace of change is,” said Anne Keast-Butler, director of GCHQ. “In an increasingly contested and volatile world, we need to continue to exploit AI to identify threats and emerging risks, alongside our important contribution to ensuring AI safety and security.”
Alan Turing Institute says UK’s national security decision makers need training on AI's limits
- The Stack, 23 April 2024.
AI will be key to future national security decision making – but brings its own risks
- Gov.uk, 23 April 2024.
House of Lords Communications and Digital Committee’s report on Large Language models and Generative AI
- UK Parliament, 2 February 2024.
AI chief urges Britain to build a rival to ChatGPT for UK's security service
- Daily Mail, 29 October 2023.
James Bond’s job safe as GCHQ scientist says AI can only do ‘extremely junior’ spying
- Telegraph, 6 August 2023.
"According to a paper jointly written by the chief data scientist at GCHQ, Britain's Cheltenham-based eavesdropping agency, chatbots such as ChatGPT are only good enough to replace "extremely junior" intelligence analysts.
The GCHQ official, identified only as Adam C, and Richard Carter, a computer scientist at The Alan Turing Institute, said the software technology on which chatbots are based - known as a large language model - is not ready to be widely deployed in the secret world of intelligence gathering."
Regulation and Governance
Police use of facial recognition in Britain is spreading
- The Economist, 3 September 2024.
“Surveys suggest that Britons accept the arguments for facial recognition. A poll taken in March by the Centre for Emerging Technology and Security and the Alan Turing Institute found that 60% of Britons are comfortable with the police’s use of the technology in real time to identify criminals in a crowd.”
UK needs new biometrics strategy: Scotland Biometrics Commissioner
- Biometric Update, 9 July 2024.
“The research reinforces my own view that the UK’s legal framework (and strategy) for biometrics is inadequate and in need of reform principally because it is failing to keep pace with rapid changes to biometric technology,” Plastow writes. “The research also highlights evidence of public anxiety over the adequacy of safeguards to protect individuals from a range of risks, such as data misuse and the discriminatory implications of certain ‘novel’ emerging use cases.”
Looking forward: Sam Stockwell and Megan Hughes explore future Biometric trends for policing and law enforcement
- International Security Journal, 13 May 2024.
Commissioner welcomes CETaS report
- Scottish Biometrics Commissioner, 4 April 2024.
"The Scottish Biometrics Commissioner Dr Brian Plastow has today welcomed a research report published by the Centre for Emerging Technology and Security (CETaS). The report examines ‘The Future of Biometric Technology for Policing and Law Enforcement: Informing UK Regulation’.
In welcoming the research, the Commissioner notes that it touches on many important aspects of biometrics in policing and security including the need to ensure that new and emerging technologies are scientifically valid and reliable, observe human rights, deliver public trust, and are subject to independent oversight and sound governance regimes. The Commissioner particularly welcomes the recommendation that any changes to UK biometrics regulation should consider the distinct legal frameworks of devolved administrations to apply a more consistent governance approach and reduce the risk that new measures conflict with separate biometric laws in Scotland or Northern Ireland."
Public worried by police and companies sharing biometric data
- Computer Weekly, 4 April 2024.
Brits support police use of biometrics technology but only if it's regulated: survey
- Biometric Update, 29 March 2024.
Survey shows mixed views on police use of biometrics
- UK Authority, 28 March 2024.
What Does the Public Think of Sharing Biometric Data to Tackle Crime?
- Digit News, 28 March 2024.
AI Fringe Summit Conference Report: Perspectives from the AI Fringe
- AI Fringe, 2 February 2024.
Lords Committee questions legality of Live Facial Recognition Technology
- UK Parliament, 27 January 2024. (CETaS / Turing evidence cited extensively in House of Lords Justice and Home Affairs Committee letter to Home Secretary).
Parliament Debate: Advanced Artificial Intelligence. Volume 832 - debated on Monday 24 July 2023
- UK Parliament, 24 July 2023.
Lord Anderson of Ipswich, 4.41pm:
"IPCO’s Technology Advisory Panel—a body recommended in my bulk powers review of 2016 and ably led by the computer scientist Professor Dame Muffy Calder—is there to guide the senior judicial commissioners who, quite rightly, have the final say on the issue of warrants. The CETaS research report published in May, Privacy Intrusion and National Security in the Age of AI, sets out the factors that could determine the intrusiveness of automated analytic methods. Over the coming years, the focus on how bulk data is acquired and retained may further evolve, under the influence of bulk analytics and AI, towards a focus on how it is used. Perhaps the Information Commissioner’s Office, which already oversees the NCA’s use of bulk datasets, will have a role."
Agenda: A new way to balance the risks of AI
- Herald Scotland, 3 July 2023.
Independent review of the Investigatory Powers Act 2016
- Gov.uk, 23 June 2023.
Alan Turing Institute publishes framework for automated analytics in security and law enforcement
- UK Authority, 31 May 2023.
Cybersecurity and Digital Privacy
Wind farms cyberattack problems
- My Broadband, 8 September 2024.
How cyberattacks on offshore wind farms could create huge problems
- The Conversation, 5 September 2024.
UK offshore wind arsenal susceptible to cyberattacks, warns think tank
- HVAC, 24 June 2024.
UK offshore wind farms vulnerable to cyberattacks
- Energy Live News, 21 June 2024.
"Anna Knack, Lead Researcher for CETaS and report author, said: “New regulation, innovative technical solutions and international collaboration across sectors will be crucial to making these systems more resilient in the future and ensuring the nation can safeguard its access to an important source of renewable energy.”
Dr Alexander Babuta, Director of CETaS commented: “The UK’s offshore wind production is set to significantly increase over the coming years. However, the more it becomes integrated into our energy supplies the greater the potential for serious disruption if it were to come under a cyberattack. Incorporating AI into these systems is one way that cybersecurity could be improved. However, to make offshore wind more resilient we need to consider the robustness of the entire system, such as rapid power recovery, as well as eliminating cybersecurity threats.""
Could AI Help Protect Offshore Wind Farms From Cyber-attacks?
- Digit News, 21 June 2024.
AI, memory safety are real threats to IoT security
- Embedded.com, 7 November 2023.
"Another keynote at the IoTSF conference took the concept of AI in IoT security further, talking about autonomous cyber defence; Anna Knack, a researcher at the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), outlined the research in this area. She provided some of the areas covered in their report, “Autonomous Cyber Defence – A roadmap from lab to ops,” produced jointly by Georgetown University’s Center for Security and Emerging Technology (CSET) and The Alan Turing Institute’s CETaS.
It looks at current state-of-the-art in autonomous cyber defence and its future potential, identifies barriers to progress and recommends specific action that can be taken to overcome those barriers. The findings and discussion are of relevance to cybersecurity practitioners, policymakers and researchers involved in developing autonomous cyber defence capabilities."
AI must have better security, says top cyber official
- BBC, 18 July 2023.
The Information Environment
AI disinformation: lessons from the UK’s election
- ASPI, 16 August 2024.
NATO must recognize the potential of open-source intelligence
- Atlantic Council, 13 August 2024.
UK elections and AI misinformation (in Arabic)
- BBC World Service, 4 July 2024.
Were Fears Of Disinformation During The Election Exaggerated?
- PoliticsHome, 22 June 2024.
Technology: legal gaps expose UK election to disinformation threat
- International Bar Association, 17 June 2024.
Understanding and mitigating the threat of AI-enabled information operations to UK election
- UK Parliament, 20 March 2024.
Is AI really a threat to democracy?
- Dazed, 29 February 2024.
The Global Security Ecosystem
South Korea’s President calls semiconductors a field of “all-out war”; announces $19bn support package for industry
- DCD News, 23 May 2024.
"Speaking in January when the plan was first unveiled, President Yoon said the government had already attracted initial investments of 622 trillion won ($471 billion) to support the development – a figure that industry analyst Dylan Patel described as a “nothing burger” given the funding was to be spread over a 23-year period and actually amounted to a reduction in annual support for the sector.
In April, a report from the Centre for Emerging Technology and Security recommended that the UK should seek to explore an “ambitious bilateral approach” with South Korea in order to shore up its semiconductor supply chain."
UK Semiconductor Institute launched
- Tech Monitor, 20 May 2024.
"The government’s creation of the UK Semiconductor Institute was a key recommendation of a joint report on the health of the country’s chip industry published last month by the Centre for Emerging Technology and Security (CETaS) and The Alan Turing Institute. That report also strongly recommended that the UK double down on its relationship with South Korea to better pair its strengths in chip design with the latter’s manufacturing capabilities. Even so, it said that the UK was highly unlikely to be able to produce the world’s most advanced semiconductors, the manufacture of which is largely monopolised by TSMC and Samsung in East Asia."
Report highlights shortcomings in UK chip plans
- Computer Weekly, 10 April 2024.
UK Semiconductor Strategy: Gaps and Opportunities
- Digit News, 10 April 2024.