PauseAI, an activist group focused on AI safety, organized global demonstrations to advocate for a halt in the development of AI models more powerful than GPT-4. The group rallied supporters in 14 cities worldwide, including New York, London, Sydney, and São Paulo, with the intent of drawing attention to what it perceives as the significant risks posed by AI.
By organizing the protests, PauseAI aims to persuade influential attendees of the upcoming AI Safety Summit to acknowledge and address the dangers inherent in artificial intelligence. The group believes these individuals hold the power needed to confront this escalating issue.
The summit in question, the AI Seoul Summit, is a follow-up to the UK AI Safety Summit held in November and is scheduled for the 21st and 22nd of May. Despite initial enthusiasm, interest in international cooperation on AI safety appears to be dwindling: multiple participants have already withdrawn from the summit.
According to PauseAI, expecting nations or companies to voluntarily sacrifice their competitive advantage by pausing AI development is unrealistic unless others pause alongside them. Hence the group's call for a collective pause. The activists hope the summit's primary outcome will be the establishment of an international AI safety agency, a concept that even OpenAI CEO Sam Altman has endorsed.
These protests coincide with the launch of OpenAI's GPT-4o, an enhanced successor to GPT-4. While AI enthusiasts were fascinated by GPT-4o's cutting-edge capabilities, PauseAI protestors voiced concern over the rapid pace of AI model development.
Whether these protests will move global leaders to take meaningful action on AI safety remains an open question. The group nonetheless has faith in the power of public advocacy and asserts that, with the necessary resources, the perils posed by artificial general intelligence (AGI) can be averted.
Drawing comparisons to earlier protest movements over GMO foods, nuclear weapons, and climate change, PauseAI hopes its efforts prove similarly fruitful. However, its challenge is compounded by the growing allure of tangible AI benefits in everyday life, which distracts from legitimate safety concerns.
The question remains whether PauseAI will succeed in its mission and gain public acknowledgement, whether its concerns will be dismissed as unfounded, or whether it will ultimately be vindicated in its warnings about the dangers of AI development.
The central message of the group's protest is a plea for the responsible development and deployment of AI technologies to prevent potential harm to humanity. PauseAI firmly advocates a universal pause on the training of new AI models until adequate safety measures are in place.