Cover image source: Republic of Korea Flickr, AI Seoul Summit May 21, 2024. Cheong Wa Dae, Jongno-gu, Seoul, Office of the President. Official Photographer: Kang Min Seok.

This publication is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use, provided the original authors and source are credited.

Introduction 

The AI Seoul Summit took place on 21-22 May 2024 in South Korea, resulting in no fewer than five new agreements, pledges or statements. These build on the first Global AI Safety Summit at Bletchley Park in November 2023. 

Over the last year, the global discussion on AI safety has progressed from an initial desire to get countries speaking the same language on AI to having newly established, purpose-built AI safety institutes setting the safety agenda and working closely with developers on model evaluation. In a field where governments are said to move too slowly compared to industry, the agility demonstrated by the governments of the UK and South Korea should be commended. 

However, there is plenty of work still to be done. Many of the agreements to date rely on goodwill from a wide range of actors (particularly AI developers) rather than being enforceable through legislation. Divergences between innovation and safety may also become more pronounced unless concrete measures are taken to balance the two. This remains a tenuous policy position for governments, considering the outsized influence of a handful of companies and tech leaders on the AI sector’s trajectory. In this vein, it is notable that the next AI Summit in France (planned for early 2025) is now being labelled the ‘AI Action Summit’. 

Reflecting on the commitments made at the Seoul Summit 

1. Seoul Ministerial Statement for advancing AI safety, innovation and inclusivity

Much of the framing of the Seoul Summit focused on the trifecta of safety, innovation and inclusivity. The safety lens continued the spirit of Bletchley both in terms of the malicious AI risks being prioritised (such as capabilities to assist the development and use of chemical or biological weapons) and via a commitment to develop shared risk thresholds for severe AI risks. This was complemented by a mechanism to agree when model capabilities could pose severe risks without appropriate mitigations (see Frontier AI Safety Commitments below). 

The incorporation of innovation and inclusivity into the Ministerial Statement perhaps indicates that safety will no longer be the sole focus of subsequent summits. Without equitable access (inclusivity), there is a greater likelihood of a backlash against innovation itself and increased indifference to safety. Inclusivity is not only an important goal in itself, but also an effective means of creating a virtuous cycle between innovation and safety. Yet care must be taken to ensure that there is substance behind terms like innovation and inclusivity: merely paying lip service to them risks diluting the laser focus on safety which made the Bletchley Summit so well received.

The inclusivity challenge facing summit organisers is underscored by the fact that the only signatories to the Seoul Ministerial Statement not represented in the Bletchley Declaration are Mexico and New Zealand, while the total number of signatories fell from 28 to 27. On innovation, there is plenty of ‘recognition’ and ‘encouragement’ of certain actions, but still not enough specificity about the decisions that need to be made on skills, infrastructure and compute to ensure governments can lead the discussion on innovation, as they have already started to do on safety. Korea’s Minister of Science and ICT, Lee Jong-Ho, gave an indication of what this should look like. His focus on “low power AI chips to help mitigate the global negative impacts on energy and the environment caused by the spread of AI” offered a tangible area of innovation where Korea is well placed to lead – more of the same will be needed from other ministers and leaders in the coming months. 

2. Frontier AI Safety Commitments

Sixteen AI tech companies signed up to ‘develop AI safely’. These included many of the companies present at the Bletchley Summit, as well as China’s Zhipu.ai and the UAE’s Technology Innovation Institute. Ahead of the France AI Action Summit, these companies will be expected to publish frameworks outlining when severe risks, unless adequately mitigated, would be “deemed intolerable” and what they would do to ensure thresholds are not surpassed. They also committed to “not develop or deploy a system at all” if mitigations cannot keep risk below those thresholds. 

There is still concern about whether these voluntary commitments will be enough to substantially impact the business decisions of AI giants. Indeed, one of the main outcomes from the Bletchley Summit was a commitment from leading US tech companies to submit their models to the UK AI Safety Institute prior to their release, but doubts have since been raised about which companies have actually complied. Anthropic co-founder Jack Clark said in April that “pre-deployment testing is a nice idea but very difficult to implement”. 

For firms investing billions of dollars, securing competitive advantage often outweighs ethical considerations. It is unclear what consequences or sanctions the 16 Seoul Summit signatories would face for failing to publish adequate frameworks, and even less clear that the 27 countries that signed the Seoul Ministerial Statement would be able to agree on what those sanctions should be. To address these inherent trade-offs, a more specific roadmap should be developed at subsequent summits.

3. Seoul Statement of Intent towards International Cooperation on AI Safety Science

One key outcome of the AI Seoul Summit has been the establishment of the first international network of AI safety institutes to boost cooperation, forge a common understanding of AI safety and align work on research, standards and testing. This is an important step in ensuring that the proliferation of AI safety institutes globally (in 10 countries plus the EU) does not result in several incompatible approaches. From the UK’s perspective, this announcement complements the partnership announced in April with the US on the science of AI safety.

There will now be increased pressure on the emerging safety institutes to demonstrate that they have ‘mastered the nascent science of frontier AI testing and evaluation’. The development of state capacity in AI evaluation has been a bright spot over the last year, allowing the likes of the UK AI Safety Institute (AISI) to establish their own information and evidence bases, and build their own jailbreaking attacks, for example. This is a prerequisite to lessening dependence on industry for the ‘ground truth’ in AI and being much closer to the coalface of AI system development and risk mitigation. 

4. Seoul AI Business Pledge

The 14 companies that signed the Seoul AI Business Pledge (overlapping partially, though not entirely, with the 16 signatories of the Frontier AI Safety Commitments) struck a more optimistic tone on the role of AI in revolutionising productivity and creating new added value. Nonetheless, there was a recognition of the need to promote measures to reduce mis/disinformation generated by AI and to address the environmental challenges posed by AI. 

The Seoul AI Business Pledge arguably bore more of a footprint from Korean industry than other industry-focused agreements. This was also reflected in comments made during the summit by NAVER founder Lee Hae-jin, who warned against the danger of a “small number of AI’s” dominating, leaving “our understanding of past history and culture” as “just what those AI’s tell us.” This partly underpins Korean companies’ strategy of carving out a niche in the international AI market by creating AI systems that cater to the cultures and values of different regions.

Other notable developments include:

  • A new £8.5m grant programme, administered by the UK AISI, welcoming proposals in the emerging field of ‘systemic AI safety’.
  • The (interim) publication of the International Scientific Report on the Safety of Advanced AI, chaired by Yoshua Bengio.
  • The Seoul Declaration for Safe, Innovative and Inclusive AI (mirroring the Seoul Ministerial Statement for world leaders).

The Way Forward: Reformulating Priorities ahead of France 2025

Shift to Actionable Outcomes

The AI Seoul Summit has kickstarted the process of reconciling the different objectives of AI safety, innovation and inclusivity, but shifting to actionable outcomes in the coming months will be vital. This requires clarifying the scope of the various agreements reached so far, and making clear the expectations for the next Summit in France. Approaches we might wish to see in this regard include national supervision systems to ensure AI companies are balancing safety requirements with shareholder priorities, international standards and certification for AI services and products, and societal-level initiatives to encourage the consumption of safe and responsible AI products.

Numerous AI safety vehicles have been established over the last year, instigating broad discussions on topics like risk assessment, threshold setting, risk identification, information sharing and AI literacy. Prioritising effectively entails careful consideration of factors like current technological capabilities, ease of implementation, time, cost, engagement from industry and leadership from government. 

Naturally, countries will weigh these factors differently. But establishing a clear global roadmap to safe AI depends on these emerging global networks presenting a united front. That unity will also increase the chances of striking a long-term balance in which safety is not deprioritised in relation to innovation. 

As well as aligning the different safety institutes, aligning industry players in different countries is also crucial. The Frontier AI Safety Commitments in Seoul highlighted this by securing the endorsement of not just US-based companies like Google, Microsoft, and Meta but also global entities such as G42 (UAE), Mistral AI (France), Zhipu.ai (China), and NAVER (South Korea). There is an opportunity for the UK and South Korea, using the leverage they have built over the last two summits, to lay the foundations for meaningful global private sector participation.

Geopolitical Manoeuvring

The inclusion of a Chinese company in the above list is important, but must also be balanced against the fact that China did not ultimately sign the Seoul Ministerial Statement. Engagement with China requires sustained effort and care – it is taking place in the broader context of fierce great power tech competition. AI dialogue with China cannot be restricted to twice-yearly AI summits: it should be incorporated across diplomatic missions. For example, in May 2024, South Korea hosted the Korea-Japan-China trilateral summit in Seoul after a four-and-a-half-year hiatus. At the summit, the three countries agreed to “emphasise the importance of interaction in the field of AI.” Given the consensus reached at the trilateral summit, South Korea is well positioned to play a leading role in AI governance discussions with China and to encourage its active participation in international fora.

Leveraging the UK-South Korea Partnership 

The AI safety efforts discussed in this article are a global endeavour, but will nonetheless require a small group of countries to maintain momentum. The partnership between the UK and South Korea has been exemplary in this regard – the two worked together to ensure the outcomes of November’s Bletchley Summit were consistent with the agenda in Seoul, avoiding any impression of strategic confusion. Global endeavours like this require the countries at the vanguard to adopt a mature position, avoiding the exclusive pursuit of their own interests. 

This has been helped both by long-standing commitments and new developments in the bilateral relationship between the two countries, epitomised by the Downing Street Accord signed in November 2023. The co-hosting of the AI Seoul Summit was the culmination of extensive work across time zones over the last year: this must be a springboard for the next phase of cooperation between the UK and South Korea.

Crucially, this cooperation must go beyond the narrow domain of AI safety. The authors have previously published research on the opportunities for the UK and South Korea to deepen collaboration across the semiconductor supply chain. The UK’s strengths in core IP and chip design complement South Korea’s strengths in manufacturing. Moreover, innovative AI-semiconductor partnerships in Korea between companies like NAVER and Samsung chart a possible path for collaboration between UK AI companies and Korean chip makers.

Zooming out further, the UK and South Korea can do more to address the fragmentation of the AI value chain caused by US–China competition (shown in Figure 1 below). Fragmentation carries costs, and in today’s landscape, states and companies are under mounting pressure to forge strategic alliances and pick sides. Particularly for parts of the world that cannot afford to pick one side, there is likely to be a yearning for a more inclusive and competitive global AI ecosystem. Although this is a highly ambitious goal, it is plausible that the UK and South Korea, as countries with rich histories in innovation and R&D, could be at the forefront of this open model of AI-semiconductor cooperation, creating positive externalities beyond their own borders.

Figure 1. Fragmentation in US–China AI value chain

Source: Cho et al. (2021). Modified by Hyunjin Lee. 

Conclusion

Momentum is everything in global governance initiatives, and the AI Seoul Summit was integral to sustaining it for AI safety. By folding in separate goals like innovation and inclusivity, the Summit perhaps gave a clue as to where the discourse is headed next. As some safety institutes around the world prepare to be placed on statutory footings, we may expect them to take on more responsibility in addressing safety risks at source, opening up space for other dialogues on the global stage. It will be important for both the UK and South Korea to ensure that the thread from Bletchley to Seoul to France remains visible, giving participants confidence that these meetings still fulfil a unique role in the AI landscape. 

The views expressed in this article are those of the authors, and do not necessarily represent the views of The Alan Turing Institute or any other organisation.

Citation information

Ardi Janjeva, Seungjoo Lee and Hyunjin Lee, "AI Seoul Summit Stocktake: Reflections and Projections," CETaS Expert Analysis (June 2024).