Researchers at MIT and the MIT-IBM Watson AI Lab have developed an onboarding process that efficiently combines human and AI effort. The system teaches a user when to rely on an AI assistant and when to override it. In particular, it can identify situations in which a user trusts the AI model's advice even though the model is wrong. The system learns rules for how a user should work with the AI tool automatically and expresses those rules in natural language.

The results show that merely telling users when to trust the AI, without formal training, leads to inferior performance. The researchers' method improves accuracy on human-AI collaboration tasks by approximately 5 percent. It is designed to adapt to different tasks and can be scaled up across disciplines.

This onboarding method could be especially valuable in medicine, where doctors could use it to make treatment decisions more efficiently and accurately. It could also reshape existing approaches to medical education and clinical trials. The researchers note that AI tools' capabilities will keep evolving, so the onboarding procedure must evolve alongside them.

To overcome the shortcomings of existing onboarding methods, the researchers' new method learns from data automatically. It draws on a dataset containing many instances of a given task, such as identifying a traffic light in a blurry image. The system first finds instances where the user's trust was misplaced, either following the AI's prediction when it was wrong or overriding the AI when it was right. A second algorithm then describes each such region as a rule in natural language.
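The idea above can be illustrated with a minimal sketch: flag past instances where the user's trust was misplaced, group them by a shared feature, and phrase each group as a plain-language onboarding rule. All names, the data layout, and the grouping heuristic here are illustrative assumptions, not the researchers' actual algorithms.

```python
# Illustrative sketch only -- the field names and the grouping heuristic
# are assumptions, not the paper's method.
from collections import defaultdict

def find_misplaced_trust(instances):
    """Return instances where the user followed the AI and it was wrong,
    or overrode the AI when it was right."""
    return [x for x in instances
            if (x["user_followed_ai"] and not x["ai_correct"])
            or (not x["user_followed_ai"] and x["ai_correct"])]

def rules_from_regions(mistakes, feature):
    """Group mistakes by a shared feature value and phrase each group
    as a natural-language onboarding rule."""
    regions = defaultdict(list)
    for x in mistakes:
        regions[x[feature]].append(x)
    return [f"When {feature} is '{value}', double-check the AI's advice "
            f"({len(group)} prior mistake(s))."
            for value, group in regions.items()]

# Toy data inspired by the article's blurry traffic-light task.
data = [
    {"condition": "night", "user_followed_ai": True,  "ai_correct": False},
    {"condition": "night", "user_followed_ai": True,  "ai_correct": False},
    {"condition": "day",   "user_followed_ai": True,  "ai_correct": True},
    {"condition": "fog",   "user_followed_ai": False, "ai_correct": True},
]

mistakes = find_misplaced_trust(data)
for rule in rules_from_regions(mistakes, "condition"):
    print(rule)
```

A real system would discover regions in a learned representation of the inputs rather than grouping on a hand-picked feature, but the pipeline shape (detect misplaced trust, then summarize each region as a rule) is the same.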

The onboarding procedure was tested on two tasks: detecting traffic lights in blurry images and answering multiple-choice questions. It increased users' accuracy by about 5 percent without slowing them down. In the future, the researchers aim to run larger studies evaluating the short- and long-term effects of onboarding. They also want to leverage unlabeled data for the onboarding process and to reduce the number of regions without omitting critical examples.

Earlier onboarding methods were crafted by human experts for specific use cases, making them hard to scale. Some techniques relied on the AI model explaining its decision-making, which was often found to be unhelpful. And because the models' capabilities evolve continually, any training method has to be updated over time to keep pace.
