Researchers from MIT and the University of Washington have created a model that accounts for the computational constraints of an agent, whether human or machine, enabling more accurate predictions of the agent’s actions.
Humans, despite their sophisticated decision-making abilities, are often irrational and behave suboptimally because of computational constraints: people cannot spend unlimited time contemplating the best solution to a problem. For AI systems to work successfully with human collaborators, these constraints need to be built into models of human behaviour. But modelling such suboptimal behaviour is difficult, because it cannot always be captured accurately by simply adding noise to the model.
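To make that contrast concrete, a common way of “adding noise” is to assume the agent chooses actions with probability given by a softmax over their estimated values. The sketch below is an illustrative Boltzmann-style noise model of this kind, not the specific baseline from the paper; the function name and the example values are hypothetical.

```python
import numpy as np

def noisy_rational_policy(action_values, temperature=1.0):
    """Boltzmann-style noise model: the agent picks better actions more
    often, but errors are spread indiscriminately across all decisions
    rather than reflecting where planning was cut short."""
    logits = np.asarray(action_values, dtype=float) / temperature
    logits -= logits.max()          # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Example: three candidate moves with values 1.0, 0.9 and 0.1.
# Raising the temperature makes the agent look "noisier" everywhere,
# even on easy decisions a computationally constrained human would rarely get wrong.
print(noisy_rational_policy([1.0, 0.9, 0.1], temperature=0.5))
```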
Instead, the researchers created a model that could deduce an agent’s computational constraints based on their previous actions. This agent’s “inference budget” can then be used to predict their future actions.
In a new paper, the researchers demonstrated the practicality of this method by using it to infer navigational goals from previous routes and to predict future moves in chess games. The technique matched or outperformed another popular method for modelling this type of decision-making.
Understanding human behaviour and then inferring goals from it holds potential for developing more proficient AI assistants, according to Athul Paul Jacob, a graduate student in electrical engineering and computer science at MIT and the lead author of the paper on this method. An AI system built upon this model could intervene when a human is about to make a mistake, or adapt to the weaknesses of the human with whom it is working.
The model was inspired by prior studies of chess players, which observed that players spend less time thinking before making simple moves. Building on this, the researchers constructed a framework that deduces an agent’s depth of planning from their previous actions and uses it to model their decision-making process. The framework runs a problem-solving algorithm on the same problem for a set amount of time, then aligns the agent’s decisions with the algorithm’s intermediate decisions to identify the point at which the agent stopped planning. From this, the agent’s inference budget can be determined and used to predict how they will respond to similar problems in the future.
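A rough sketch of how such an inference budget might be estimated and then reused, under the simplifying assumption that an anytime planner exposes the action it would recommend after each planning step. The function names, the vote-counting budget estimate, and the toy data below are illustrative assumptions, not the paper’s exact formulation.

```python
from collections import Counter

def infer_budget(observed, rollouts, max_steps):
    """Estimate an agent's inference budget from past decisions.

    observed -- actions the agent actually took, one per past problem
    rollouts -- for each problem, the action an anytime planner would
                recommend after 1, 2, ..., max_steps planning steps
    Returns the planning depth that most often reproduces the agent's
    choices (a crude stand-in for the paper's budget inference).
    """
    scores = Counter()
    for agent_action, per_step_actions in zip(observed, rollouts):
        for step in range(max_steps):
            if per_step_actions[step] == agent_action:
                scores[step + 1] += 1
    return max(range(1, max_steps + 1), key=lambda k: scores[k])

def predict_action(problem_rollout, budget):
    """Predict the agent's next move: whatever the planner recommends
    after running for the agent's inferred budget."""
    return problem_rollout[budget - 1]

# Toy usage: a planner that refines its recommendation over 4 steps.
rollouts = [["a", "a", "b", "b"], ["x", "y", "y", "y"], ["m", "m", "m", "n"]]
observed = ["a", "y", "m"]
budget = infer_budget(observed, rollouts, max_steps=4)
print(budget, predict_action(["p", "q", "q", "r"], budget))
```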
Jacob and his team tested their model in three scenarios – inferring navigational goals from previous routes, guessing a person’s communicative intent from verbal cues, and predicting the next move in chess games. In each case, the method matched or surpassed a popular alternative. The researchers hope to use the model to represent the planning process in other domains, such as reinforcement learning, to build more efficient AI systems.
This research was supported by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity programme and the National Science Foundation.