
MAI#24 – Battle of neuro-chips, duplicates, and Swift fans retaliate

This week in AI news, Taylor Swift’s fans pushed back against AI-generated explicit content. Artificial intelligence has made it easier to create compromising imagery not only of celebrities but of everyday people, and the emergence of tools like InstantID has raised concerns about just how easily such content can be produced, prompting a widespread public and industry reaction.

In political news, AI’s ability to clone voices has become unsettlingly accurate. A Baltimore principal, for instance, was targeted with an AI-generated audio clip that put offensive remarks in his voice. Meanwhile OpenAI, while espousing transparency, declines to discuss its financial statements, model training data, or conflict-of-interest policy, fueling skepticism about its operations.

OpenAI’s push to develop its own AI chips came to light through its dealings with Samsung and other chip manufacturers. Separately, a report on North Korea’s burgeoning AI industry suggested that such operations in the secretive nation are more extensive than previously thought.

AI has also shown promise in medical applications: GPT-4 agreed with doctors on stroke treatment decisions, and Musk’s firm Neuralink, known for its work on brain-machine interfaces, completed its first human brain implant.

In terms of AI safety, studies have examined whether language models could assist in creating bioweapons. Amid growing calls for safety measures and regulation, RAND Corp. and OpenAI each conducted studies of their own, with differing results. The main danger emphasized was giving AI agents unsupervised internet access.

In the realm of EU AI legislation, the upcoming AI Act Summit 2024 aims to hash out the proposed regulations. Several civil rights groups are calling on the EU to probe the relationship between Microsoft and OpenAI, given Microsoft’s sizable investment in the latter. That concern is amplified by Microsoft’s significant revenue growth, driven largely by its AI developments.

Italy’s data protection authority has raised concerns about how OpenAI’s ChatGPT handles personal data. Additionally, the U.S. National Science Foundation launched an AI research resource pilot program.

Finally, a former OpenAI board member criticized the amount of power concentrated in such board structures. Meta aims to close the gap with GPT-4 through its free Code Llama AI programming tool, and OpenAI partnered with Common Sense Media to protect teenagers from AI misuse.
