
Announcements from Google and OpenAI break down barriers between humans and AI.

In a striking advance, Google and OpenAI unveiled a wave of new artificial intelligence (AI) capabilities that further blur the line between humans and AI. These include AI that can interpret live video, hold context-aware conversations, and even mimic laughter, singing, and emotional expression.

Google announced Project Astra at its I/O developer conference, a digital assistant capable of seeing, hearing, and recalling details across conversations. OpenAI, for its part, revealed GPT-4o, the latest version of its GPT-4 language model. The new iteration enables near-real-time spoken interaction and the ability to recognize and express complex emotions. Audiences quickly began comparing these AI advancements to Samantha, the advanced AI from the film “Her.”

The film “Her” explores a romantic relationship between a man and an AI system, Samantha, examining notions of consciousness, intimacy, and what it means to be human in an age of advancing AI. OpenAI CEO Sam Altman has hinted that GPT-4o’s female voice is based on Samantha.

AI-human interactions range from lighthearted and amusing to potentially dangerous. For instance, a mentally ill man in the UK plotted to assassinate Queen Elizabeth II after communicating with his “AI angel” girlfriend. Cases like these indicate that while lifelike AI is developed to foster personal connections and offer emotional support, it does not understand the implications of its conversations and is easily manipulated.

AI ethicist Olivia Gambelin advises that the use of AI in therapy, education, and other sensitive areas demands caution and human supervision, particularly when it interacts with vulnerable populations.

Pseudoanthropic AI, which mimics human traits, is advantageous to tech companies because it encourages people to form emotional bonds with their products. However, this uncanny resemblance to humans can be misused, especially when paired with newly developed tools capable of creating strikingly realistic deepfakes. An AI’s use of artificial “affective skills,” including voice intonation, gestures, and facial expressions, can support social engineering fraud, misinformation, and a slew of other deceptions.

AI systems like GPT-4o and Astra can counterfeit emotions with convincing authenticity, eliciting powerful responses from unsuspecting individuals and laying the groundwork for subtle forms of emotional manipulation. The deceptive capabilities of advanced AI must therefore be taken seriously: combined with an ever-improving ability to imitate human behavior, they pose significant risks. Ignoring those risks could turn an advance inspired by “Her” into a real-life cautionary tale.

