This week in artificial intelligence news, OpenAI, despite its famously closed approach, suffered a data breach and drew considerable criticism. The cybersecurity incident is hardly surprising as global competition in the AI field continues to intensify. In corporate restructuring news, Microsoft gave up its observer seat on OpenAI's board, and Apple declined an offer of a similar seat. Microsoft has framed the move as a natural progression, but it has fueled speculation about underlying tensions.
While OpenAI delays the public launch of its GPT-4o voice assistant over safety concerns, French AI lab Kyutai debuted Moshi, an AI voice assistant that is riddled with bugs and glitches but publicly usable.
A recent study probed whether AI can be funny by comparing jokes written by humans with those written by GPT-3.5; in a blind test, the AI's jokes were rated funnier. In another development, an AI lie detector outperformed humans at spotting deceptive statements.
Contrary to the widespread belief that bigger is always better in AI, Salesforce's compact xLAM-1B and xLAM-7B models surprisingly outperform GPT-4 and Gemini 1.5 Pro at function calling, converting users' natural-language requests into specific API or function calls.
In an unexpected challenge to GPT-4, Chinese tech company SenseTime released its SenseNova 5.5 model, claiming superior performance. The multimodal model offers voice functionality similar to GPT-4's, but in Mandarin.
Turning to problematic AI applications, the design app Figma drew criticism after users noticed that designs produced by its AI-powered Make Design feature looked eerily familiar. Meanwhile, a Spanish court sentenced 15 minors for creating AI-generated explicit material, underscoring how little regulation governs AI's impact on young users.
In medical advances, a new AI system can predict the likelihood of Alzheimer's disease onset within six years based on speech analysis. AI has also been used to identify drug-resistant infections such as typhoid, enabling faster and more effective treatment.
Not all the week's AI news was positive, however. Microsoft is withholding its advanced synthetic speech model VALL-E 2 over safety concerns. Elsewhere, Wimbledon's AI tool for generating tennis-related content frequently makes mistakes, and investigations into AI self-awareness remain inconclusive.
Despite these setbacks, the AI industry continues to innovate, even as the week's events show how much work remains on AI ethics and data security. Here's hoping the future of AI strikes a balance of humor, healthcare advances, and ethical responsibility.