
Researchers from MIT and the MIT-IBM Watson AI Lab have developed a system that teaches users of artificial intelligence (AI) technology when they should and shouldn't trust its outputs. This could be particularly beneficial in the medical field, where errors can have serious repercussions.

The team created an automated system that teaches a radiologist how best to collaborate with an AI assistant when interpreting patient X-rays. By learning from data gathered as the pair works on specific tasks, the system identifies situations where the radiologist is likely to trust the AI's findings incorrectly. It then articulates natural-language rules for collaboration in those situations.
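
The region-finding step lends itself to a simple illustration. The sketch below is a hedged approximation of the idea, not the team's actual method: it clusters logged examples by their embeddings and flags clusters where the user's reliance on the AI was usually misplaced. The function name, the clustering choice, and the threshold are all assumptions.

```python
# A minimal sketch (not the researchers' implementation): flag clusters of
# task examples where logged user decisions show misplaced reliance on the
# AI (trusting it when it was wrong, or overriding it when it was right).
# Embeddings, cluster count, and threshold are placeholder choices.

import numpy as np
from sklearn.cluster import KMeans

def find_misplaced_trust_regions(embeddings, ai_correct, user_followed_ai,
                                 n_clusters=10, error_threshold=0.5):
    """Return cluster IDs where the user's reliance on the AI was
    usually the wrong call in the logged data."""
    # Reliance is "wrong" when the user followed an incorrect AI answer
    # or rejected a correct one.
    reliance_error = (user_followed_ai != ai_correct).astype(float)

    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = kmeans.fit_predict(embeddings)

    flagged = []
    for c in range(n_clusters):
        mask = labels == c
        if mask.any() and reliance_error[mask].mean() > error_threshold:
            flagged.append(c)  # this region becomes a candidate rule
    return flagged
```

Each flagged cluster represents the kind of situation the system would then describe in a natural-language rule and drill during onboarding.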

During the onboarding process, the radiologist then practices these rules through training exercises with the AI assistant. The researchers say the system can be applied to diverse tasks; the results it produced, both positive and negative, are described below.

Because both an AI model's capabilities and the user's perception of those capabilities evolve over time, the team designed the training process to adapt and scale accordingly. This ability to evolve continuously with the user and the task at hand sets it apart from existing onboarding methods.

The system was put to the test with users undertaking two tasks: detecting traffic lights in blurry images and answering multiple-choice questions from a variety of fields. The process improved users' accuracy on the traffic light task by about 5%, without causing the task to take longer. Accuracy did not improve on the question-answering task, however, likely because the AI model already supplied a helpful explanation with each answer.

The researchers also found that simply telling users when to trust the AI, without the accompanying onboarding, caused them to perform worse and take more time. In response, the team plans to refine the system by conducting larger studies. They also hope to make use of unlabeled data and to find ways to reduce the number of identified regions without omitting crucial examples.

The onboarding system developed by these researchers could meaningfully improve how humans and AI assistants collaborate. By refining the onboarding process, the team hopes to make AI-assisted tasks more accurate and to help users place trust in AI where it is actually warranted.
