
Researchers at MIT and the MIT-IBM Watson AI Lab have developed an AI system designed to teach users when to trust an AI's decision-making – for instance, a radiologist deciding whether a patient's X-ray shows signs of pneumonia. The training system identifies scenarios where the human should not trust the AI model, automatically learns collaboration rules, and expresses those rules in natural language.

The novel onboarding process yields an improvement of roughly 5 percent in accuracy when humans and AI work together on image prediction tasks. In contrast, simply telling users when to trust the AI, without the accompanying training, led to worse performance.

The researchers' system is designed to adapt to different tasks, so it could be applied broadly wherever humans and AI collaborate, such as social media content moderation, programming, and writing.

Existing onboarding processes typically consist of training material produced by human experts for specific use cases, which limits scalability. The new onboarding method instead learns automatically from data and evolves over time: it builds a dataset containing many instances of a task, making the approach more versatile and applicable across a wider range of applications.

The system collects data from both humans and AI performing a specific task. This data is embedded in a latent space, where similar points sit closer together. Algorithms then identify regions where human-AI collaboration breaks down – situations where the human wrongly trusts the AI or wrongly disregards it. Once these problem areas are identified, a second algorithm generates natural language rules that address them.
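The paper's actual algorithms are not reproduced here, but a minimal sketch of the idea might look like the following Python snippet, which assumes precomputed latent embeddings, per-example correctness labels for the human and the AI, and k-means clustering as a simple stand-in for the researchers' region-discovery step; all data and names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins: latent embeddings of task instances (in practice
# these would come from a pretrained encoder), plus per-example records
# of whether the AI and the human answered correctly.
embeddings = rng.normal(size=(1000, 32))
ai_correct = rng.integers(0, 2, size=1000)
human_correct = rng.integers(0, 2, size=1000)

# Step 1: partition the latent space so that similar examples share a
# region (k-means is an illustrative stand-in for the paper's
# region-discovery algorithm).
n_regions = 10
regions = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(embeddings)

# Step 2: flag regions where collaboration tends to break down and
# attach a recommendation for each one.
for r in range(n_regions):
    mask = regions == r
    ai_acc = ai_correct[mask].mean()
    human_acc = human_correct[mask].mean()
    advice = "rely on the AI" if ai_acc > human_acc else "do not defer to the AI"
    print(f"region {r}: AI {ai_acc:.2f} vs human {human_acc:.2f} -> {advice}")
```

In the researchers' system, a second algorithm would describe each flagged region in natural language rather than just emitting a recommendation; the print statement above only marks where such a rule would be produced.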

The researchers tested the system with users on two tasks – detecting traffic lights in blurry images and answering multiple-choice questions on topics like biology and philosophy. Their custom onboarding process led to a performance boost of about 5 percent on the image prediction task. Onboarding was less beneficial for the question-answering task, however, likely because the AI model already provided an explanation with each answer.

The researchers aim to develop the onboarding process further by studying its short- and long-term effects at larger scales. They plan to use unlabeled data in the onboarding process to improve its effectiveness, and to find ways to reduce the number of regions without omitting key examples.

This research emphasizes the need for effective onboarding processes to improve human-AI collaboration. Such onboarding helps users understand when to rely on an AI's suggestions and when not to, ultimately increasing the value of AI across applications. The study was led by Hussein Mozannar of MIT in collaboration with a team of researchers from IBM and MIT.
