This week’s summary includes the latest news in artificial intelligence (AI) with discussions on OpenAI’s voice cloner, a troubling NYC chatbot, and a predictive AI model.
OpenAI announced this week that its AI voice cloner is so convincing that releasing it could pose a risk, and the tool may not be made public. Meanwhile, in New York City, a city-endorsed chatbot has drawn criticism for giving residents misleading legal advice. In company news, Amazon invested an additional $2.75 billion in Anthropic, the company behind the Claude AI models. AI agents are also an emerging trend: Apple researchers developed ReALM, an AI model that resolves references to on-screen content more effectively than GPT-4, while DeepMind created Genie, a foundation model that generates playable 2D game environments from images or text prompts.
The US government has also recognized the need to further integrate AI into its operations. Aiming to hire 100 AI professionals, the White House outlined new rules to ensure federal agencies use AI without endangering public safety or rights. At the same time, US and UK officials signed a bilateral agreement on AI safety, with both countries planning to share research in several areas.
In addition, AI relationships appear to be on the rise, with AI girlfriends surging in popularity in certain countries. Meanwhile, AI's energy footprint continues to command attention: Delta Electronics, for instance, launched a new energy-efficient AI device at NVIDIA's GTC event. The University of California, Berkeley also unveiled an AI forecasting system that surpasses human forecasters in accuracy.
Other stories of interest include Microsoft and OpenAI's plans to build the $100B Stargate supercomputer, as well as Google's intention to run Gemini on its Pixel 8 phones. NYC is also set to trial AI gun detection in the city's subway system, and OpenAI announced plans to let people use ChatGPT without registering, albeit with a catch. Lastly, Elon Musk estimates a 20% likelihood that AI destroys humanity, but argues AI development should continue anyway.