
According to a study by Georgetown University, only 2% of AI research focuses on safety.

Despite the growing interest in AI safety, a recent study by Georgetown University’s Emerging Technology Observatory reveals that only a small fraction of the field’s research focuses on this area. After analyzing over 260 million scholarly publications, the researchers found that just 2% of AI-related papers published between 2017 and 2022 directly addressed AI safety, ethics, robustness, or governance.

AI safety publications did grow by 315% over this period, from around 1,800 to over 7,000 per year, but that increase was still outpaced by the far larger expansion of AI capabilities research. The report indicates that US-based institutions lead in AI safety research, while China lags behind. On the whole, AI safety research has to address both pressing near-term risks and long-term ones.
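As a rough illustration of how that growth figure is derived, the short Python sketch below computes percentage growth from the approximate annual counts quoted above; since those counts are rounded, the result only approximates the 315% the study reports.

```python
def percent_growth(start: float, end: float) -> float:
    """Percentage increase from a starting value to an ending value."""
    return (end - start) / start * 100

# Approximate annual counts of AI safety publications quoted in the article.
safety_papers_2017 = 1_800
safety_papers_2022 = 7_000

growth = percent_growth(safety_papers_2017, safety_papers_2022)
print(f"AI safety publications grew by roughly {growth:.0f}% between 2017 and 2022")
# With these rounded endpoints the figure lands near 290%; the study's
# unrounded counts are what produce the 315% it reports.
```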

Despite warnings about the potential risks of AGI (Artificial General Intelligence) developing without adequate safeguards, many AI researchers see AI safety as overhyped. Some believe the hype is manufactured to help Big Tech companies push for regulation that would squeeze out open-source and grassroots competitors.

With deepfakes, bias, and privacy violations already here, AI safety needs to address risks that exist in the present, not just those looming in the future. This point underscores the need for more in-depth AI safety research that corrects the imbalance between near-term and long-term concerns.

The study also points out that the leading contributors to safety research are universities such as Carnegie Mellon, MIT, and Stanford, along with companies like Google, yet no single organization contributed more than 2% of all safety-related publications. That spread underscores the need for broader and more collaborative efforts.

Additionally, the commercialization of AI has led corporations to guard their advances, and extensive internal safety research is never shared publicly. The industry’s ethos of ‘moving fast and breaking things’ may need to be questioned in light of safety concerns.

Closing the AI safety gap means balancing openness in AI development against companies’ desire to keep their advances proprietary. Given the risks associated with AI, stronger global and corporate cooperation may be required, much as the International Atomic Energy Agency (IAEA) was formed in response to the dangers of nuclear technology. The question remains whether AI will need a calamity of its own to trigger that level of cooperation. The study concludes by urging all stakeholders, from universities and governments to tech companies and research funders, to invest more effort and resources in AI safety.
