Researchers from MIT and the University of Washington have created a model that can accurately predict and assess human and machine behaviour to support more effective AI-human collaboration. The model computes the behavioural constraints of an individual or machine by evaluating data on previous actions, and the resulting “inference budget” can then be utilised to predict the agent’s future behaviour. In a new paper, the researchers showed the model could be applied in real-world scenarios, such as determining a person’s future movements from previous navigation history or predicting upcoming moves in a chess game. Their approach matched or exceeded the performance of other techniques.
With this research, the scientists hope to teach AI systems about human behavioural patterns so that they respond more effectively to their human counterparts. Understanding human behaviour can enable AI assistants to better infer goals and anticipate potential human errors, allowing the system to provide more effective assistance or even compensate for human weaknesses. Notably, the “inference budget” is an interpretable metric: it captures the intuition that harder problems demand more planning and that stronger players plan further ahead, explained Jacob, an MIT graduate student and one of the lead authors.
The researchers modelled their system on observations of chess players, where stronger players usually spend more time planning their moves. The model processes an individual’s decision-making by running a problem-solving algorithm for a set period, then uses the inferred budget to predict that individual’s future behaviour in similar situations.
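The idea of inferring a planning budget from observed choices can be sketched with a toy example. This is an illustrative assumption, not the authors’ actual model: here the agent is assumed to plan via depth-limited search on a small line-world (the world, rewards, and function names are all invented for illustration), and the “budget” is the search depth whose greedy policy best matches the observed moves.

```python
# Toy "inference budget" sketch (hypothetical, not the paper's model):
# an agent plans with depth-limited search; we infer its depth from
# observed (state, action) pairs, then reuse that depth to predict.

N = 6                        # states 0..5 on a line
REWARD = {0: 1.0, 5: 10.0}   # small lure at state 0, big payoff at state 5
STEP_COST = -1.0

def step(s, a):
    """Move left (a = -1) or right (a = +1), staying in bounds."""
    return max(0, min(N - 1, s + a))

def plan_value(s, depth):
    """Best total reward reachable from s within `depth` moves."""
    if depth == 0 or s == N - 1:      # budget exhausted, or terminal state
        return 0.0
    return max(STEP_COST + REWARD.get(step(s, a), 0.0)
               + plan_value(step(s, a), depth - 1)
               for a in (-1, 1))

def choose(s, depth):
    """Greedy action under a planning budget of `depth` (ties go left)."""
    return max((-1, 1),
               key=lambda a: (STEP_COST + REWARD.get(step(s, a), 0.0)
                              + plan_value(step(s, a), depth - 1), -a))

def infer_budget(observations, max_depth=8):
    """Depth whose greedy policy matches the most observed pairs."""
    return max(range(1, max_depth + 1),
               key=lambda d: sum(choose(s, d) == a for s, a in observations))
```

A short-sighted agent loiters near the small reward, while only a deep planner “sees” the distant payoff, so the observed trajectory reveals the budget, which then drives prediction:

```python
moves = [(0, 1), (1, 1), (2, 1), (3, 1)]   # agent marched right from 0 to 3
d = infer_budget(moves)                     # deep budget needed to justify this
next_move = choose(4, d)                    # predicted next action from state 4
```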
A large part of the model’s efficiency stems from its reuse of an existing problem-solving algorithm, which gives it access to the full set of decisions that algorithm considers without any additional computation. The team applied this approach to three different tasks: inferring navigation objectives from previous routes, understanding someone’s communicative intent from verbal cues, and predicting subsequent moves in human chess matches. In each experiment, their model matched or surpassed a widely used alternative.
In the future, the researchers hope to extend this approach to other fields, including reinforcement learning in robotics. The ultimate objective is to enhance the functionality of AI collaborators. This research received sponsorship from the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.