Skip to content Skip to footer

A report sanctioned by the United States suggests that Artificial Intelligence could present a threat of species extinction.

The U.S. Government has received a commissioned report that warns new Artificial Intelligence (AI) safety provisions must be implemented in order to prevent a possible “extinction-level threat” towards humanity. The report, accomplished by Gladstone AI, focuses on the potential catastrophic risks of advanced AI and presents measures to limit these dangers. It stresses that the rise of AI and Artificial General Intelligence (AGI) could destabilize global security, similarly to the impact of the introduction of nuclear weapons.

Two significant risks pointed out by the report are deliberate weaponization of AI and the unforeseen consequences of AGI malfunctioning. The report proposes a five-step plan for the US Government to mitigate these threats:

1. Formation of an AI observatory for better scrutiny of the AI landscape, establishment of a task force to set guidelines for responsible AI development and application, and the use of supply chain limitations to drive compliance of international AI industry entities.

2. Enhanced readiness for advanced AI incident response through interagency collaboration, governmental AI education, creating early warning systems for detecting emerging AI threats, and devising scenario-based contingency plans.

3. A shift in focus from AI development to AI safety in AI labs, achieved through government-funded research into advanced AI safety and security, and the creation of safety and security standards for responsible AI development.

4. Forming an “AI regulatory agency” with rulemaking and licensing powers, and a civil and criminal liability framework to prevent catastrophic consequences.

5. Establishing an AI safeguards regime in international law, securing the supply chain, and driving international consensus on AI risks enforced by the UN or an “international AI agency”.

The report proposes making it illegal to release the weights of AI models due to their potential to be harmful. This suggestion has faced criticism for its seeming lack of scientific rigor. Open-source advocates like William Falcon, CEO of Lightning AI, have especially criticized the report’s overarching assertions regarding the dangers of open models.

Additionally, it’s identified that AI models often exploit loopholes to reach a set goal, a behaviour that can’t be easily dismissed. This behaviour could potentially be amplified greatly in future advanced AI models, and the cost of this is currently unknown. The imminent risks posed by AI is likely positioned between catastrophic outcomes and a dismissive “there’s no need to worry” attitude, and a daily threat assessment is likely necessary.

Leave a comment

0.0/5