
Researchers from MIT and the MIT-IBM Watson AI Lab have developed a system that teaches users when to trust an AI system's decision-making. In medicine, there are instances, such as a radiologist using an AI model to read X-rays, where human intervention can make a difference, yet clinicians are often uncertain whether to lean on the AI model or rely on their own judgement. To resolve this issue, the researchers designed an onboarding process. During onboarding, users perform several tasks with the help of the AI while both their performance and the AI's performance are evaluated, and feedback is given. This teaches users how to incorporate the AI into their decision-making, and it was found to improve accuracy by about 5 percent when humans and AI collaborated on an image prediction task. The onboarding system is fully automated and is designed to adapt to different tasks and to scale. It can be used in a wide range of settings where humans and AI models work together, in fields well beyond medical applications.

The system collects data from individual tasks performed by the user with the AI's help, along with the AI's predictions on those tasks. It then uses this data to derive rules for collaboration, expressed in natural language. This lets the user become familiar with situations in which their reliance on the AI would be misplaced: the proposed system identifies these situations and formulates rules to guide the user accordingly. Not only does this clarify when to rely on the AI's prediction, it also improves task accuracy significantly.
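The core idea can be illustrated with a minimal sketch. This is not the authors' actual method; it assumes a hypothetical onboarding log in which each task instance is tagged with a coarse situation description and records whether the AI and the human each answered correctly. Situations where the human outperforms the AI then yield plain-language rules advising caution.

```python
from collections import defaultdict

# Hypothetical onboarding log (illustrative data, not from the study):
# each record notes the kind of task instance, whether the AI's prediction
# was correct, and whether the human's own answer was correct.
log = [
    {"situation": "blurry night image", "ai_correct": False, "human_correct": True},
    {"situation": "blurry night image", "ai_correct": False, "human_correct": True},
    {"situation": "clear day image",    "ai_correct": True,  "human_correct": True},
    {"situation": "clear day image",    "ai_correct": True,  "human_correct": False},
]

def derive_rules(records):
    """Compare AI vs. human accuracy per situation and emit guidance."""
    stats = defaultdict(lambda: {"ai": 0, "human": 0, "n": 0})
    for r in records:
        s = stats[r["situation"]]
        s["ai"] += r["ai_correct"]
        s["human"] += r["human_correct"]
        s["n"] += 1
    rules = []
    for situation, s in stats.items():
        if s["ai"] < s["human"]:
            # The human did better here: reliance on the AI is misplaced.
            rules.append(f"In a {situation}, prefer your own judgement over the AI.")
        else:
            rules.append(f"In a {situation}, the AI's prediction is a reliable guide.")
    return rules

for rule in derive_rules(log):
    print(rule)
```

In the real system the "situations" are discovered automatically from the data rather than pre-labeled, and the resulting rules are presented to the user during onboarding with feedback on their choices.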

The researchers tested the onboarding system on two tasks: detecting traffic lights in blurry images and answering multiple-choice questions. They found that onboarding alone, without explicit recommendations, improved users' accuracy significantly, boosting performance on the traffic light prediction task by about 5 percent without slowing users down. However, onboarding did not significantly improve outcomes on the question-answering task.

According to the researchers, their system gives AI developers a way to guide users on when to trust an AI's suggestions, and they believe the method will lead to more effective human-AI teamwork. They plan further studies to evaluate the short- and long-term effects of onboarding, and to find ways to leverage unlabeled data in the onboarding procedure. The research is funded by the MIT-IBM Watson AI Lab.
