Recent research from Anthropic has shown that its latest AI language model, Claude 3 Opus, can generate persuasive arguments comparable to those written by humans. The study, led by Esin Durmus, examined the relationship between model scale and persuasiveness across successive generations of Anthropic's language models. The research focused on 28 complex, emerging topics, such as online content moderation and ethical guidelines for space exploration, where people are less likely to hold firm or long-standing views.
The researchers compared the persuasive capabilities of different Anthropic models, including Claude 1, Claude 2, and Claude 3, against arguments written by humans. Claude 3 Opus, Anthropic's most capable model, produced arguments that were statistically on par with human-written ones in terms of persuasiveness.
The team also found a clear upward trend: each newer generation of models was more persuasive than the last, in both the smaller, compact models and the larger, frontier models. The study used four distinct prompts to elicit AI-generated arguments, covering a broader range of persuasive writing styles and techniques.
However, the Anthropic team acknowledges some limitations, noting that while the results are noteworthy, the experiments were conducted in a laboratory setting and may not translate directly to real-world persuasion. Even so, the model's persuasiveness is striking, and this is not the first study to demonstrate such a capability.
In a related study from March 2024, researchers from EPFL in Switzerland and the Bruno Kessler Institute in Italy found that when GPT-4 had access to personal information about its opponent in a debate, it was 81.7% more likely to persuade that opponent than a human debater was. The researchers concluded that personalized large language model (LLM) based microtargeting strongly outperformed both standard LLM arguments and human-based microtargeting, with GPT-4 exploiting personal information far more effectively than humans did.
However impressive these persuasive capabilities are, they raise legitimate societal concerns about the potential misuse of highly persuasive LLMs, particularly for coercion and social engineering. As Anthropic notes, it is critical to evaluate and measure these risks accurately in order to develop robust safeguards.
Further concerns arise from how the growing persuasiveness of AI language models could be combined with advanced voice cloning technology such as OpenAI's Voice Engine, which needs only a 15-second audio sample to create a realistic copy of a voice. Such a copy could be used for sophisticated fraud or social engineering scams, and the prospect of deepfake scams multiplying is alarming when voice cloning is paired with AI's potent persuasive techniques. These developments underline the need for responsible use of the technology and appropriate safeguards.