Researchers at the Massachusetts Institute of Technology and the MIT-IBM Watson AI Lab have developed an onboarding system that teaches humans when and how to collaborate with artificial intelligence (AI). The fully automated system learns to customize the onboarding process to the task at hand, making it usable across a variety of scenarios where AI and humans work together.
Artificial intelligence models, especially those trained to recognize patterns in images, can often outperform the human eye. Deciding when to trust an AI's prediction, and when not to, remains a challenge, however. The new onboarding system finds situations where a user might wrongly trust an AI's judgment and explains, in natural language, rules for collaborating with the AI. Onboarding then consists of training exercises based on these rules, giving users feedback on both their own and the AI's performance.
The researchers found that the onboarding method produced about a 5 percent increase in accuracy when humans and AI collaborated on an image prediction task. By contrast, simply telling the user when to trust the AI, without any training, led to worse performance. One of the system's advantages is that it learns automatically from data, so it can adapt to different tasks.
Onboarding methods are typically designed by human experts for specific use cases, which makes them hard to scale. This research instead proposes an onboarding method that learns from data. The system starts by collecting data on the human and the AI performing a task. It then uses an algorithm to find regions of that data where the human collaborates with the AI incorrectly, and it describes those regions in natural language. Training exercises are then built from these rules.
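The data-driven loop described above can be sketched in a few lines. Everything in this toy example is an illustrative assumption rather than the researchers' implementation: instances are grouped into regions by a hypothetical discrete `tag` feature (the actual system discovers regions from the data itself), and the thresholds and rule wording are made up for the sketch.

```python
from collections import defaultdict

def learn_onboarding_rules(examples, threshold=0.7):
    """Toy sketch: group task instances into regions (here, by a
    discrete tag), then emit a natural-language rule for any region
    where the humans' trust in the AI tends to be miscalibrated."""
    regions = defaultdict(list)
    for ex in examples:
        regions[ex["tag"]].append(ex)

    rules = []
    for tag, exs in regions.items():
        ai_acc = sum(ex["ai_correct"] for ex in exs) / len(exs)
        trust_rate = sum(ex["human_trusted_ai"] for ex in exs) / len(exs)
        if ai_acc < (1 - threshold) and trust_rate > 0.5:
            # Humans over-trust the AI here: warn them off.
            rules.append(f"In images tagged '{tag}', do NOT rely on the AI.")
        elif ai_acc > threshold and trust_rate < 0.5:
            # Humans under-trust a reliable AI here: encourage reliance.
            rules.append(f"In images tagged '{tag}', the AI is usually right.")
    return rules

# Hypothetical logged data: for each instance, whether the AI was
# correct and whether the human chose to follow its prediction.
examples = [
    {"tag": "dim lighting", "ai_correct": False, "human_trusted_ai": True},
    {"tag": "dim lighting", "ai_correct": False, "human_trusted_ai": True},
    {"tag": "dim lighting", "ai_correct": False, "human_trusted_ai": True},
    {"tag": "close-up",     "ai_correct": True,  "human_trusted_ai": False},
    {"tag": "close-up",     "ai_correct": True,  "human_trusted_ai": False},
]
print(learn_onboarding_rules(examples))
```

In the full system, each rule would seed a training exercise: the user practices on instances from that region, decides whether to follow the AI, and then sees both the correct answer and the rule as feedback.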
In the researchers' tests, the system boosted users' performance on particular tasks without slowing them down. Providing instructions without onboarding, however, led to lower performance and to users taking more time to make predictions.
The researchers aim to conduct larger studies in the future to evaluate the short- and long-term effects of onboarding. They also plan to find methods to effectively reduce the number of regions without omitting important examples. The ultimate goal is to help humans understand when it’s safe to rely on the AI’s suggestions to foster better human-AI interactions. This work was funded in part by the MIT-IBM Watson AI Lab.