Researchers at MIT and the University of Washington have developed a model that predicts the behavior of an agent (either human or machine) by accounting for unknown computational constraints that might hamper its problem-solving abilities. The model captures these constraints as an agent’s “inference budget,” which it can infer from just a few prior actions and then use to predict the agent’s future behavior.
The researchers demonstrated their model’s efficacy by forecasting navigation goals using previous routes and predicting future moves in chess games, outperforming another popular method. Their work could be instrumental in teaching AI systems how to anticipate and respond to human behavior, thus making AI more effective and useful.
The team’s research starts from the premise that humans often make suboptimal decisions because their computational resources are limited: they cannot deliberate on a problem for extended periods. Traditional computational models of human behavior account for this by adding “noise” to the decision-making process, but such models often fail to capture the nuances of human suboptimality.
Drawing inspiration from prior studies of chess players, the team observed that the time spent on a decision varies with an individual’s skill and the complexity of the position. Building on this observation, they created a framework that infers an agent’s planning depth from its past actions and models the decision-making process accordingly.
The method runs a problem-solving algorithm for a fixed amount of time, recording each decision the algorithm makes along the way, and compares those decisions with an agent’s decisions on the same problem. By matching the two, the model identifies the stage at which the agent stopped planning, which determines the agent’s inference budget.
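To make the idea concrete, here is a minimal sketch in Python of how such budget inference might work. The toy environment, the depth-limited planner, and the function names (`plan_to_depth`, `infer_budget`) are illustrative assumptions, not the researchers’ actual implementation: the sketch scores each candidate planning depth by how often a planner truncated at that depth reproduces the agent’s observed decisions, then reuses the best-matching depth to predict the next action.

```python
# A minimal sketch of inference-budget estimation. The environment,
# planner, and all names here are hypothetical stand-ins, not the
# authors' actual system.

from collections import Counter

# Toy deterministic task: states are integers, actions step left or
# right, and the agent is rewarded for reaching state GOAL.
ACTIONS = (-1, +1)
GOAL = 4

def plan_to_depth(state, depth):
    """Return the action a depth-limited lookahead picks from `state`.

    Depth 0 models an agent that stopped planning immediately and acts
    arbitrarily (here: always -1); larger depths approximate fuller search.
    """
    if depth == 0:
        return ACTIONS[0]
    best_action, best_value = ACTIONS[0], float("-inf")
    for a in ACTIONS:
        value = _lookahead(state + a, depth - 1)
        if value > best_value:
            best_action, best_value = a, value
    return best_action

def _lookahead(state, depth):
    """Value of `state` under a lookahead truncated after `depth` steps."""
    if state == GOAL:
        return 0.0                    # goal reached, no remaining cost
    if depth == 0:
        return -abs(GOAL - state)     # heuristic value at the frontier
    return -1 + max(_lookahead(state + a, depth - 1) for a in ACTIONS)

def infer_budget(observed, max_depth=6):
    """Pick the planning depth whose decisions best match the agent's.

    `observed` is a list of (state, action) pairs from prior behavior.
    This match-counting is a simplified maximum-likelihood stand-in for
    the paper's probabilistic formulation, but the core idea of
    comparing truncated planner runs against agent decisions is the same.
    """
    scores = Counter()
    for depth in range(max_depth + 1):
        scores[depth] = sum(plan_to_depth(s, depth) == a for s, a in observed)
    return max(scores, key=scores.get)

# Usage: infer a budget from a few past moves, then predict the next one.
history = [(0, +1), (1, +1), (2, +1)]   # agent heading toward the goal
budget = infer_budget(history)
print("inferred budget:", budget)
print("predicted next action from state 3:", plan_to_depth(3, budget))
```

Run on the three observed moves, this sketch infers a shallow budget and predicts the agent will keep moving toward the goal; the real system would of course use a richer planner and likelihood model.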
The team tested the approach on three tasks: inferring navigation goals from previous routes, guessing communicative intent from verbal cues, and predicting the next moves in human-human chess games. In each case, the technique matched or outperformed other popular methods, and the inferred inference budgets correlated well with measures of player skill in chess and with task difficulty.
In the future, the researchers plan to refine this approach for other domains, including reinforcement learning. Their ultimate goal is to develop more efficient AI collaborators. Funding for this research was partly provided by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.