Researchers at MIT and the MIT-IBM Watson AI Lab have developed an onboarding process that trains users of artificial intelligence (AI) tools to understand and use them more effectively. By providing a personalised training programme, the system helps a user discern when to collaborate with the AI, improving accuracy by about 5%.
During onboarding, the system derives its own rules for when to rely on the AI, states them in natural language, and uses them as the basis for training exercises for the user. The user receives feedback on both their own performance and the AI's.
This approach proved more effective than simply telling the user when to trust the AI, which resulted in lower performance. In addition, the framework is fully automated: it learns from data on user and AI performance, making it adaptable to different tasks and scalable to larger applications.
Hussein Mozannar, a graduate student in the Social and Engineering Systems doctoral program within the Institute for Data, Systems, and Society (IDSS) and lead author, highlighted that AI is often deployed without any training or tutorials, which is counterproductive given the tools' vast capabilities. The researchers suggest that onboarding could become part of professional training in various fields, such as making treatment decisions with the help of AI in medicine.
The onboarding method, which evolves and adapts over time just as the AI does, collects data on how both the human and the AI perform on a specific task, then uses natural language to communicate the relevant rules to the user. After an initial test, the user repeats exercises in their weak areas, with the aim of making more accurate decisions about when to rely on the AI in the future.
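The loop described above can be sketched in code. This is a hypothetical illustration only: the feature-based grouping, the function names, and the rule templates are assumptions for the sake of the example, not the researchers' actual algorithm, which discovers regions from data automatically.

```python
# Illustrative sketch of an onboarding loop: compare human and AI accuracy
# per region of the input space, then phrase each region as a rule.
# All names and the region heuristic here are hypothetical.
from collections import defaultdict

def find_regions(examples):
    """Group past examples by a coarse feature and count, per region,
    how often the AI and the human were each correct."""
    regions = defaultdict(lambda: {"ai": 0, "human": 0, "n": 0})
    for ex in examples:
        r = regions[ex["feature"]]
        r["ai"] += ex["ai_pred"] == ex["truth"]
        r["human"] += ex["human_pred"] == ex["truth"]
        r["n"] += 1
    return regions

def rules_from_regions(regions):
    """Turn each region into a natural-language rule on when to rely on the AI."""
    rules = []
    for feature, stats in sorted(regions.items()):
        if stats["ai"] > stats["human"]:
            rules.append(f"In '{feature}' cases, the AI tends to be right: lean on it.")
        else:
            rules.append(f"In '{feature}' cases, the AI is often wrong: trust your own judgement.")
    return rules

# Toy history for a traffic-light detection task (made-up data).
history = [
    {"feature": "night", "ai_pred": 1, "human_pred": 0, "truth": 1},
    {"feature": "night", "ai_pred": 1, "human_pred": 0, "truth": 1},
    {"feature": "day",   "ai_pred": 0, "human_pred": 1, "truth": 1},
    {"feature": "day",   "ai_pred": 1, "human_pred": 1, "truth": 1},
]

for rule in rules_from_regions(find_regions(history)):
    print(rule)
```

In a full system, the user would then be quizzed on examples drawn from each region and retrained on the regions where their choices disagreed with the rules.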
The researchers tested the system with users on two tasks: detecting traffic lights in images and answering multiple-choice questions. Users were split into five groups, each receiving a different combination of onboarding and AI recommendations. On the traffic-light task, significant improvement in performance appeared only when the researchers' onboarding process was used without recommendations. The question-answering task showed no comparable improvement, presumably because the AI already supplied an explanation with each answer.
Interestingly, providing recommendations without onboarding made users perform worse, and they also took longer to make decisions. Onboarding's benefit, meanwhile, was limited by the amount of available training data. Going forward, the researchers plan larger studies of onboarding's effects, and aim to develop methods that reduce the number of regions needed for training without leaving out important examples.