A new agreement between 10 countries plus the European Union, reached at the AI Seoul Summit, has committed nations to work together to launch an international network to accelerate the advancement of the science of AI safety.
The “Seoul Statement of Intent toward International Cooperation on AI Safety Science” will bring together the publicly backed institutions modelled on the UK’s AI Safety Institute that have been created since the UK launched the world’s first such body at the inaugural AI Safety Summit, including those in the US, Japan and Singapore.
The network will build “complementarity and interoperability” between members’ technical work and approaches to AI safety, promoting the safe, secure and trustworthy development of AI.
This will include sharing information about models, their limitations, capabilities and risks, as well as monitoring specific “AI harms and safety incidents” where they occur and sharing resources to advance global understanding of the science around AI safety.
The statement was agreed at the leaders’ session of the AI Seoul Summit, which brought together world leaders and leading AI companies to discuss AI safety, innovation and inclusivity.
As part of the talks, leaders signed up to the wider Seoul Declaration which cements the importance of enhanced international cooperation to develop AI that is “human-centric, trustworthy and responsible”, so that it can be used to solve the world’s biggest challenges, protect human rights, and bridge global digital divides.
They recognised the importance of a risk-based approach to governing AI in order to maximise its benefits and address the broad range of risks it poses, ensuring the safe, secure, and trustworthy design, development, deployment, and use of AI.