A team from the University of Würzburg and the Max Planck Institute for Human Development has trained an AI model developed by Google, known as BERT, to detect lies, offering an alternative to conventional techniques such as the polygraph. Given how unreliable human judgements of honesty tend to be (typically around 50% accuracy, little better than chance), an effective artificial lie detector could have a profound effect on human interaction.
Critically, this presents an interesting challenge to the so-called ‘truth-default theory’, which posits that most people almost always assume that the information they are being told is true. The existence of a reliable means of detecting dishonesty might thus challenge this foundational aspect of our interactions with others.
BERT was trained in the art of lie detection through a process involving 986 participants, who were asked to outline their weekend plans and then support their account with convincing detail. To generate false statements, each participant was then required to adopt the plans of another participant and argue convincingly that these were in fact their own plans for the forthcoming weekend. The model was trained on 80% of the resulting 1,536 statements and then put to the test on the remaining 20%.
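In machine-learning terms this is a standard binary text-classification setup: fine-tune a pre-trained BERT model on statements labelled as truthful or deceptive, holding out 20% of the data for testing. The sketch below illustrates that setup using the Hugging Face transformers library; the "bert-base-uncased" checkpoint, the English placeholder statements, and the hyperparameters are assumptions for illustration only, not details reported for the study itself.

```python
# Minimal sketch: fine-tune BERT as a binary truthful/deceptive classifier
# with an 80/20 train/test split. Placeholder data stands in for the study's
# 1,536 statements; model choice and hyperparameters are illustrative guesses.
import numpy as np
import torch
from torch.utils.data import Dataset, random_split
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)


class StatementDataset(Dataset):
    """Pairs each weekend-plan statement with a label: 1 = truthful, 0 = deceptive."""

    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=256)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


def compute_metrics(eval_pred):
    # Report plain accuracy on the held-out 20%, the metric quoted in the article.
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}


# Hypothetical placeholder statements; the real corpus is not reproduced here.
texts = [
    "On Saturday I am hiking in the hills with two friends from university.",
    "This weekend I will visit my grandmother and help her repaint the kitchen.",
]
labels = [1, 0]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

dataset = StatementDataset(texts, labels, tokenizer)
n_train = int(0.8 * len(dataset))  # 80% of statements used for training
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lie-detector",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=train_set,
    eval_dataset=test_set,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # accuracy on the held-out 20%
```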
The results proved highly promising: the AI model identified false accounts with an accuracy of 66.86%, considerably surpassing the human judges, who made accurate judgements on only 46.47% of occasions.
However, despite this success, the researchers found that there may be resistance to widespread adoption of the technology: only a third of participants opted to use the AI model when given the choice.
The team surmise that the prospect of an accurate and accessible lie detection mechanism could prove to be a significant societal disruptor that paradoxically might also exacerbate distrust and division between already divided groups.
On the other side of the coin, a reliable AI lie detector could also bring immense benefits, from better-informed business dealings to a new weapon in the fight against insurance fraud.
However, the existence of such a tool also raises a number of important ethical concerns. For instance, should it be used to screen asylum seekers for dishonesty, and could we trust the results?
The team believes that models more advanced than BERT are likely to emerge, making human deceit increasingly easy to detect. Accordingly, they conclude that the time to lay the groundwork for a policy framework governing the use of this technology in society is now.