In AI developments this week, OpenAI released GPT-4o mini, a cost-effective, high-performing version of its flagship GPT-4o model. More developers are expected to opt for the mini version thanks to its affordable token rates and impressive performance on the MMLU benchmark. Elsewhere, Meta launched its highly anticipated Llama 3.1 405B model along with upgraded 8B and 70B versions. Despite concerns about China gaining access to Meta's most potent model, co-founder and CEO Mark Zuckerberg asserted that open source AI is the future, a claim that remains controversial.
Additionally, prominent Big Tech companies co-founded the Coalition for Secure AI (CoSAI) to push for AI safety. However, OpenAI's commitment to AI safety has come under question following allegations of lax safety checks for GPT-4o. The company could face challenges meeting the US Senate's safety demands arising from the whistleblower claims.
Innovative concepts about integrating fear into AI have surfaced, suggesting that this emotion could improve AI safety by making systems more cautious. However, the idea of an AI fearing humans raises its own concerns. Studies indicate that fear could help build more adaptable, resilient, and natural AI systems.
Meanwhile, a new study calls attention to how easily OpenAI's alignment guardrails can be bypassed, fueling skepticism about the company's claims of ensuring AI safety. On another note, Google's latest hybrid AI model can predict the weather using significantly less computing power than conventional forecasting methods. Separately, an AI model can now design proteins on demand faster than nature typically allows.
Among other AI news, OpenAI and Nvidia are advancing their products further, while it has been suggested that humans need to teach AI to avoid nuclear war as a precaution. Furthermore, Kling, a Chinese AI video generator, launched internationally with free credits. The battle between open and closed models in the AI industry is predicted to intensify amid concerns over AI safety. As the tech world grows increasingly vulnerable to large-scale mishaps, professionals are urged to weigh in on these AI discussions.