Researchers at MIT and the University of Washington have devised a method for modelling the behaviour of human or machine agents whose problem-solving abilities are limited by unknown computational constraints. The technique infers an agent's “inference budget” from a handful of previous actions and uses it to predict future behaviour. Lead author Athul Paul Jacob believes the work could help AI systems better understand and respond to human behaviour; an AI assistant, for example, could intervene when a human is about to make an error.
The researchers’ paper demonstrates the technique by inferring chess players’ future moves and navigation goals from prior actions. Notably, while the inefficiencies of human decision-making often go unaccounted for in agent models, the researchers’ approach captures how individuals behave suboptimally in distinct, individual ways. Their framework uses an agent’s previous actions to estimate its planning depth, or inference budget, and predicts future actions accordingly. A key advantage is that it can assess the complete series of decisions produced by a problem-solving algorithm, rather than only the final answer.
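To make the idea concrete, here is a minimal sketch of how such budget inference might work, assuming a generic anytime planner `run_search(state, budget)` that returns the action chosen after a given number of search iterations. The function names, the epsilon-noise likelihood, and the discrete budget grid are illustrative assumptions, not the authors’ implementation.

```python
import math
from collections import defaultdict

def infer_budget_posterior(observed, run_search, max_budget, eps=0.05):
    """Posterior over an agent's inference budget, given (state, action) pairs.

    Assumes a uniform prior over budgets 1..max_budget and an epsilon-noisy
    agent that usually plays the action its budget-limited search would choose.
    """
    budgets = range(1, max_budget + 1)
    log_post = {b: 0.0 for b in budgets}  # uniform prior in log space
    for state, action in observed:
        for b in budgets:
            # Likelihood of the observed action under a budget of b iterations.
            p = (1 - eps) if run_search(state, b) == action else eps
            log_post[b] += math.log(p)
    m = max(log_post.values())  # subtract the max for numerical stability
    z = sum(math.exp(v - m) for v in log_post.values())
    return {b: math.exp(log_post[b] - m) / z for b in budgets}

def predict_action(state, posterior, run_search):
    """Predict the next action by marginalising over the inferred budget."""
    scores = defaultdict(float)
    for b, p in posterior.items():
        scores[run_search(state, b)] += p
    return max(scores, key=scores.get)

# Toy planner: deeper search gets closer to the "true best" value of 7.
toy_search = lambda state, budget: min(state + budget, 7)
post = infer_budget_posterior([(0, 3), (1, 4)], toy_search, max_budget=6)
print(predict_action(2, post, toy_search))  # -> 5, consistent with a budget of 3
```

Because an anytime planner emits a full series of intermediate decisions, a single run to the maximum budget can in principle score every candidate budget at once; the sketch recomputes the search per budget only for clarity.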
This interpretable technique successfully predicted behaviour across both domains, performing as well as or better than a well-known alternative modelling method, and its inferred budgets convincingly reflected measures of player skill and task difficulty. With support from the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation, the researchers plan to extend the approach to other domains, such as reinforcement learning in robotics, in pursuit of their ultimate goal of developing better AI collaborators.