The Global AI Safety Summit held at the UK’s Bletchley Park last year sparked considerable interest, with representatives of more than 25 governments signing a joint declaration committing to collaborative AI oversight. Expectations are more modest, however, for the follow-up summit scheduled for May 21-22, which the UK and South Korea will co-host virtually. Significant players such as DeepMind and Mozilla have chosen not to attend, and top tech regulators from the European Union have also confirmed their absence.
The United States Department of State has confirmed it will participate but has not said who will represent it. The Canadian, Brazilian and Dutch governments have announced they will not take part. There is speculation that the French government will postpone the larger annual Safety Summit until 2025, but this remains unverified.
The challenges around AI have also become more complex. Because the technology is still at a nascent stage, it is relatively easy right now to advocate for protection against globally consequential events; finding concrete solutions to issues such as deepfakes, environmental damage, and copyright requires considerably more effort. Although laws and regulations designed to govern AI, such as the EU AI Act, are beginning to appear, many key issues remain unresolved.
According to Francine Bennett, interim director of the Ada Lovelace Institute, the policy discourse around AI has broadened, encompassing issues such as market concentration and environmental impacts. Navigating AI safety’s wider scope requires extensive and largely subjective discussion, which may not be best served by a virtual format.
Geopolitical tensions also pose a challenge, especially between Western powers and China. Although the US and China have been in private discussions about AI security, public events like the World Economic Forum have seen tense interactions and even a walkout by the US delegation during a Chinese talk.
The upcoming virtual safety summit may offer a chance to reflect on the progress made so far, but significant practical action on the key issues is still outstanding. The notable absences from May’s summit underscore the need for more substantial international discourse on AI safety.