Generative Artificial Intelligence (Gen AI) is driving significant advances in sectors such as science, the economy, and education. At the same time, it raises serious concerns stemming from its ability to generate convincing content from simple inputs. These developments have prompted in-depth socio-technical studies aimed at understanding the technology's implications and assessing its risks and opportunities.
A debate around Gen AI's openness is underway, with regulatory developments such as the EU AI Act and the US Executive Order underscoring the need to understand these risks and opportunities, while also raising questions about governance and systemic risk. Although closed-source models still outperform open ones, the gap between the two is closing, fueling further debate on how to structure open releases in ways that mitigate risk.
Researchers from renowned institutions such as the University of Oxford and the University of California, Berkeley are advocating for responsible open-source Gen AI. They draw an analogy to the success of open source in traditional software, emphasizing benefits such as research empowerment and technical alignment while also addressing both existential and non-existential risks associated with the technology.
The research categorizes Gen AI models at different stages of development and proposes a new taxonomy of openness. Based on the accessibility of their components, models are classified as fully closed, semi-open, or fully open. In addition, a point-based system is used to evaluate licenses, distinguishing very restrictive licenses from restriction-free ones. The study applies this framework to 45 high-impact Large Language Models (LLMs), revealing a mix of closed and open-source components. The researchers call for responsible open-source development so that the benefits can be capitalized on and the risks effectively mitigated.
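To make the scheme concrete, the minimal Python sketch below illustrates one way a component-level, point-based openness taxonomy could be operationalized. All component names, license categories, and point values here are illustrative assumptions for exposition, not the paper's actual rubric or scores.

```python
from dataclasses import dataclass

# Hypothetical license scores: 0 = fully closed, up to 3 = restriction-free.
# These categories and values are illustrative, not the study's own rubric.
LICENSE_SCORES = {
    "proprietary": 0,
    "restrictive": 1,
    "permissive-with-use-limits": 2,
    "restriction-free": 3,
}

@dataclass
class Component:
    name: str      # e.g. "weights", "training code", "training data"
    license: str   # key into LICENSE_SCORES

def classify_model(components: list[Component]) -> str:
    """Classify a model as fully closed, semi-open, or fully open
    based on the aggregate accessibility of its components."""
    scores = [LICENSE_SCORES[c.license] for c in components]
    if all(s == 0 for s in scores):
        return "fully closed"
    if all(s == LICENSE_SCORES["restriction-free"] for s in scores):
        return "fully open"
    return "semi-open"

# Example: openly licensed weights, but closed training data.
model = [
    Component("weights", "restriction-free"),
    Component("training code", "permissive-with-use-limits"),
    Component("training data", "proprietary"),
]
print(classify_model(model))  # -> "semi-open"
```

In this toy scheme, a model counts as fully open only if every component carries a restriction-free license; any mix of accessible and withheld components lands it in the semi-open middle ground, mirroring the balance of closed and open components the study observes across the 45 LLMs.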
The research also discusses the existential risks associated with Artificial General Intelligence (AGI). Existential risk, referring to AGI's potential to cause human extinction or an irreversible global catastrophe, is a prominent topic within the AI research community. Theorized risks include automated warfare, bioterrorism, rogue AI agents, and cyber warfare. The study therefore closely examines how open-sourcing AI could affect AGI's existential risk under different development scenarios.
The study's socio-technical approach also contrasts the effects of standalone open-source Gen AI models with those of closed ones across several key areas. The authors conduct a contrastive analysis and then examine the risks relative to one another. This analysis moves beyond purely technical aspects to include socio-economic factors such as research, innovation, security, and equity, among others.
In conclusion, the study provides a comprehensive and nuanced view of Gen AI and its implications, including safety risks and the potential for misuse. The authors propose a robust taxonomy of openness while shedding light on near-, mid-, and long-term risks and opportunities, and they offer recommendations to help developers balance these considerations.