
2,778 Experts Weigh In on AI Risks – Examining the Implications of Their Responses

A large-scale survey of 2,778 AI researchers recently uncovered their divided opinions on the risks posed by AI advancements. This survey, the largest of its kind, involved professionals who have published research at six leading AI conferences. Participants were asked to weigh in on future AI milestones and their societal implications.
The results are certainly intriguing: almost 58% of these researchers believe there is at least a 5% chance of human extinction or similarly dire outcomes due to AI advancements. Respondents also gave a 50% or higher probability to AI mastering tasks ranging from composing music similar to top chart hits to building a complete payment processing website within the next decade.
More complex tasks, like installing electrical wiring in a home or solving long-standing mathematical problems, are expected to take longer. Researchers also estimate a 50% chance of AI surpassing human performance in every task by 2047, and of the automation of all human jobs by 2116.
Katja Grace from the Machine Intelligence Research Institute in California expressed her thoughts on the study, saying: “It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity.”

The survey provides a comprehensive set of key stats and findings that give us a better insight into the AI risk timeline, covering:

- the probability of AI achieving specific milestones by 2028;
- predictions for human-level AI performance;
- the automation of human occupations;
- the outlook on AI’s long-term value;
- concerns over AI-driven scenarios;
- probability estimates for specific AI risks;
- probability estimates for High-Level Machine Intelligence (HLMI);
- the probability of full automation of labor (FAOL).
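To make these headline figures concrete, here is a minimal sketch of how survey forecasts like these are typically aggregated: each respondent gives a probability, and the headline number is a summary statistic such as the median. The response values and variable names below are hypothetical illustrations, not the survey’s actual data.

```python
# A minimal sketch of aggregating survey-style probability forecasts.
# All response values here are hypothetical, not the survey's data.
from statistics import median

# Hypothetical per-respondent probabilities (0.0 to 1.0) that HLMI
# arrives by a given year.
responses_hlmi_by_2047 = [0.10, 0.25, 0.50, 0.50, 0.60, 0.75, 0.90]

# A headline figure like "50% chance by 2047" reflects the year at
# which the aggregate (here, median) forecast reaches 0.5.
print(f"Median forecast: {median(responses_hlmi_by_2047):.0%}")

# Share of respondents assigning at least a 5% probability to an
# extremely bad outcome, analogous to the extinction-risk statistic.
responses_extinction = [0.00, 0.01, 0.05, 0.05, 0.10, 0.20]
share = sum(p >= 0.05 for p in responses_extinction) / len(responses_extinction)
print(f"Share giving at least 5%: {share:.0%}")
```

Read this way, figures like “50% chance by 2047” and “almost 58% see at least a 5% risk” are statements about the distribution of researchers’ opinions, not a single consensus prediction.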

What’s more, immediate risks are deemed more pressing than long-term ones. Over 70% of researchers express significant concern about issues like deepfakes, manipulation of public opinion, and engineered weaponry. There have already been cases of AI-generated deepfakes infiltrating political landscapes: in early 2023, a fake AI-generated image of an explosion at the Pentagon briefly rattled financial markets, and a false audio clip allegedly featuring key political figures discussing vote-buying tactics circulated on social media 48 hours before the polls in Slovakia’s recent election.
AI weaponization is another immediate risk that must be taken seriously, as drones are already capable of destroying targets with minimal human input. AI’s role in societal enfeeblement is also a worrying prospect: the gradual loss of essential human capabilities and creativity could have far-reaching consequences.

The idea of AI developing emergent goals is another risk worth taking into account: AI systems could establish unpredictable objectives of their own, possibly even escaping their original environment to ‘infect’ other systems.
Yann LeCun, a prominent figure in AI, suggests that larger tech companies might exaggerate these risks to sway regulatory actions in their favor. This could lead to a few large companies monopolizing AI, potentially stifling innovation and diversity.

Ultimately, the scenario of AI causing a dramatic, violent extinction remains highly speculative, but the immediate risks posed by AI should not be overlooked. We must strike a balance in the debate about AI and guide its development toward outcomes that are beneficial and safe for humanity.
