
Researchers from MIT and the MIT-IBM Watson AI Lab have developed an automated system that trains users on when to collaborate with an AI assistant. In medical fields such as radiology, the system could guide a practitioner on when to trust an AI model’s diagnostic advice. The researchers report that their onboarding procedure improved accuracy by about 5 percent on an image prediction task. They evaluated the system on two tasks: detecting traffic lights in blurry images and answering multiple-choice questions drawn from a range of subjects, such as biology and philosophy.

The researchers trained an AI model to identify instances where a human, such as a radiologist, trusted the AI model’s advice when that advice was incorrect. From these instances, the system learns a set of rules for when the user should trust the AI and describes those rules in natural language, helping the user better understand when the AI’s advice is reliable. The onboarding process evolves over time as the AI model’s capabilities and the user’s understanding of the model grow, ultimately leading to more accurate predictions. While onboarding was highly effective for the traffic light detection task, it showed limited effectiveness in the question-answering domain.
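
To make the general idea concrete, here is a minimal sketch of one way such rules could be learned: fit a small, interpretable model over logged human-AI interactions and read its decision regions back as guidance. The paper does not specify this implementation; all feature names, the synthetic data, and the use of a decision tree are illustrative assumptions.

```python
# Illustrative sketch only, not the authors' actual system. Assumes a
# hypothetical log of past cases with simple input features and a label
# for whether trusting the AI would have been correct on each case.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
blurriness = rng.uniform(0, 1, n)      # hypothetical feature: image degradation
ai_confidence = rng.uniform(0, 1, n)   # hypothetical feature: AI's reported confidence
# Assumed pattern for illustration: the AI tends to fail on very blurry,
# low-confidence inputs.
ai_is_correct = ((ai_confidence > 0.4) | (blurriness < 0.6)).astype(int)

X = np.column_stack([blurriness, ai_confidence])
feature_names = ["blurriness", "ai_confidence"]

# A shallow tree keeps each leaf interpretable as a candidate rule
# about when to trust the AI.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, ai_is_correct)

# Print the learned regions as human-readable rules; a full system would
# translate these into fluent natural-language onboarding guidance.
print(export_text(tree, feature_names=feature_names))
```

Reading the printed rules, a user would see guidance of the form "when the image is very blurry and the AI's confidence is low, do not rely on its prediction," which is the kind of natural-language lesson the onboarding procedure is described as producing.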

Hussein Mozannar, a graduate student and lead author of the study, highlighted the need for training and tutorials while using AI tools and asserted that their team’s approach aims to address this gap. Senior author David Sontag explains that such onboarding processes could significantly alter training methods for medical professionals and could reshape medical education and clinical trials. The research is funded by the MIT-IBM Watson AI Lab and will be presented at the Conference on Neural Information Processing Systems.
