
Researchers at MIT and the MIT-IBM Watson AI Lab have developed a system that trains users on when to trust an AI model’s advice. The automated system builds an onboarding process from data on a specific task performed jointly by a human and an AI model. It then uses this data to generate training exercises that help the user learn where the AI tends to be accurate and where it is more likely to be wrong.
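The article does not describe the researchers' implementation, but the core idea can be illustrated with a minimal sketch: group the human-AI task examples into regions and flag regions where the AI's advice was often wrong, then draw practice exercises from those regions. The feature representation, the use of k-means clustering, the number of regions, and the accuracy threshold below are all illustrative assumptions, not the researchers' method.

```python
# Minimal sketch (not the researchers' implementation): find regions of the
# task space where the AI tends to be wrong, then sample onboarding exercises
# from those regions so the user can practice deciding when not to defer.
import numpy as np
from sklearn.cluster import KMeans


def find_low_trust_regions(features, ai_correct, n_regions=10, accuracy_threshold=0.7):
    """Cluster task examples and flag clusters where the AI's accuracy is low.

    features   : (n_examples, n_features) array describing each task instance
    ai_correct : (n_examples,) boolean array, True where the AI's advice was right
    returns    : per-example cluster labels and the set of low-accuracy cluster ids
    """
    labels = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(features)
    low_trust = {
        c for c in range(n_regions)
        if ai_correct[labels == c].mean() < accuracy_threshold
    }
    return labels, low_trust


def sample_onboarding_exercises(features, ai_correct, labels, low_trust, per_region=5, seed=0):
    """Pick a few examples from each low-accuracy region to use as training exercises."""
    rng = np.random.default_rng(seed)
    exercises = []
    for c in low_trust:
        idx = np.flatnonzero(labels == c)
        chosen = rng.choice(idx, size=min(per_region, idx.size), replace=False)
        exercises.extend(chosen.tolist())
    return exercises


if __name__ == "__main__":
    # Synthetic stand-in data: 500 examples, 8 features, AI correct ~80% of the time.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 8))
    correct = rng.random(500) < 0.8
    labels, low_trust = find_low_trust_regions(X, correct)
    print("Low-trust regions:", low_trust)
    print("Exercise examples:", sample_onboarding_exercises(X, correct, labels, low_trust))
```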

The researchers’ tests showed that users’ accuracy improved by around 5% when they collaborated with the model on an image prediction task. They also found that those trained to know when to trust the AI model performed better than those who were simply told when to trust it without any training.

The system is designed to scale, making it a potential tool for many situations in which humans collaborate with AI, including moderating social media content, writing, and programming. It also offers considerable potential in healthcare, where medical professionals who make treatment decisions with AI assistance could benefit substantially from such training.

The system improves on existing methods, which often rely on training materials created by human experts for specific use cases and are therefore hard to scale. Because the system depends on data, however, the onboarding stage is less effective when data are scarce.

In the future, the researchers aim to conduct larger studies on the short- and long-term effects of onboarding, and to find ways to make effective use of unlabeled data in the process. They also hope to reduce the number of regions identified in the data without excluding important examples.
