Researchers at MIT and the MIT-IBM Watson AI Lab have developed a system that teaches users when to trust an AI model and when to ignore it, and it has already led to an accuracy improvement of roughly 5 percent on an image prediction task. The researchers designed a customized onboarding process (the phase in which a user is familiarized with a new tool) that shows how the AI works in practice and automatically derives rules for correct usage from previous interactions, leading to a better collaborative experience. Importantly, the system is fully automated and can be applied to the many scenarios that require humans and AI to collaborate.

Hussein Mozannar, a lead researcher on the project, emphasized that most AI tools are handed to users without any training support, a problem that needs to be tackled from both methodological and behavioral perspectives. Notably, the researchers found that merely telling users when to trust the AI, without any training, led to worse performance. They expect this kind of onboarding to become a crucial part of training for medical professionals who use AI to aid treatment decisions.

The system builds its onboarding process from data collected on a specific task, which makes it applicable to a wide variety of tasks and scalable across situations. Unlike existing onboarding methods, which rely on training materials hand-crafted for particular use cases, this system learns from data and can evolve over time. It works by collecting records of how the human and the AI each perform on a task, such as detecting a traffic light in a blurry image. It then identifies instances where the human collaborated with the AI incorrectly, distills these into rules, and highlights those rules during onboarding to improve future collaboration.
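In spirit, this rule-learning step can be thought of as partitioning past examples into regions of the task space and comparing human and AI accuracy within each region. The Python sketch below illustrates that idea on synthetic data; all names and numbers are hypothetical, and plain k-means clustering stands in for whatever method the researchers actually use to learn and describe regions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical log of past interactions on a task (e.g., judging
# traffic lights in blurry images). For each example we record an
# embedding of the input, whether the AI's prediction was correct,
# and whether the human's unaided prediction was correct.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 16))   # stand-in feature vectors
ai_correct = rng.random(500) < 0.80       # placeholder outcomes
human_correct = rng.random(500) < 0.60

# Group similar examples into candidate regions of the task space.
# (Simplification: k-means here, not the paper's actual method.)
n_regions = 10
labels = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(embeddings)

# Turn each region into a rule: defer to the AI only where its
# observed accuracy beats the human's, and flag such regions for
# emphasis during onboarding.
for r in range(n_regions):
    mask = labels == r
    ai_acc, human_acc = ai_correct[mask].mean(), human_correct[mask].mean()
    advice = "trust the AI" if ai_acc > human_acc else "rely on your own judgment"
    print(f"Region {r}: AI {ai_acc:.0%} vs. human {human_acc:.0%} -> {advice}")
```

A real implementation would also need human-readable descriptions of each region, so that users can recognize the corresponding situations when they encounter them in practice.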

The system was tested on two tasks: detecting traffic lights in blurry images and answering multiple-choice questions from many domains, such as biology, philosophy, and computer science. Participants were divided into five groups that received training and information in different ways. Notably, only the researchers' onboarding procedure, without any accompanying recommendations, significantly improved users' accuracy on the traffic-light task.

The researchers plan to conduct larger studies to assess the short- and long-term effects of onboarding, and they hope to leverage unlabeled data in the onboarding process. Dan Weld, a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, praised the innovative method for identifying situations where the AI is trustworthy and noted its potential for improving human-AI team interactions. This work was funded, in part, by the MIT-IBM Watson AI Lab.
