Researchers from MIT and the University of Washington have developed a model of human behavior that accounts for computational constraints, which limit the problem-solving abilities of both humans and AI systems. By observing an agent's previous actions, the model infers the agent's "inference budget": an estimate of the computational constraints on its problem solving. Using that budget, the model can then forecast the agent's future behavior. This could help AI systems predict human collaborators' actions and improve the way they respond to and collaborate with humans.
The study was led by Athul Paul Jacob, a graduate student in Electrical Engineering and Computer Science (EECS) at MIT. He said the research could lead to AI systems that assist humans more effectively by understanding their behavior and anticipating their goals or mistakes. Jacob worked alongside Abhishek Gupta, an assistant professor at the University of Washington, and Jacob Andreas, an associate professor in EECS and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Previous models of human behavior typically add noise to account for errors or suboptimal decisions. However, these models fail to capture an important fact: humans do not always make the same mistakes or err in the same way.
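A common version of this noise-based approach is the Boltzmann (softmax) "noisy-rational" model, in which an agent picks each action with probability proportional to the exponential of its value. The sketch below is a minimal illustration of that general idea, not the specific models critiqued in the study; the function name and the rationality parameter `beta` are illustrative choices.

```python
import math

def boltzmann_probs(action_values, beta=1.0):
    """Boltzmann (softmax) noisy-rational model of choice.

    Each action is chosen with probability proportional to
    exp(beta * value). A higher beta means a more rational agent;
    beta = 0 means the agent chooses uniformly at random. Note the
    noise level is a single fixed knob: it cannot express *where*
    in a problem an agent tends to go wrong.
    """
    weights = [math.exp(beta * v) for v in action_values]
    total = sum(weights)
    return [w / total for w in weights]
```

For example, with `beta=0` an agent facing two actions is equally likely to pick either, no matter their values, while a large `beta` concentrates almost all probability on the higher-value action.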
The research team instead developed a framework that models an agent's decision-making process by inferring its depth of planning from previous actions. The method runs a problem-solving algorithm for a set amount of time, then compares the decisions the algorithm makes along the way with the agent's actual decisions to identify the point at which the agent stopped planning. From this, the model can estimate the agent's inference budget and predict its future actions.
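The idea can be sketched as follows: for each candidate budget, truncate the planner at that budget, and keep the budget whose recommendations best match the agent's observed actions. This is a simplified stand-in for the paper's method, and the toy planner, state encoding, and function names are all hypothetical.

```python
def infer_budget(observed_actions, states, plan_at_budget, max_budget):
    """Estimate an agent's inference budget (a simplified sketch,
    not the paper's algorithm).

    For each candidate budget k, run the planner truncated at k steps
    on every observed state, and count how often its recommendation
    matches the agent's actual action. Return the best-matching budget.
    """
    best_budget, best_matches = 0, -1
    for k in range(1, max_budget + 1):
        matches = sum(
            1
            for state, action in zip(states, observed_actions)
            if plan_at_budget(state, k) == action
        )
        if matches > best_matches:
            best_budget, best_matches = k, matches
    return best_budget

def toy_planner(state, budget):
    """Toy anytime planner for illustration only. A state is
    (distance_to_big_reward, small_reward, big_reward): with lookahead
    `budget`, the planner "sees" the big reward only if it lies within
    `budget` steps; otherwise it settles for the small reward."""
    distance, small, big = state
    if distance <= budget and big > small:
        return "go_for_big"
    return "take_small"
```

Under these toy assumptions, an agent who only plans three steps ahead will pursue a distant reward when it is at most three steps away and settle otherwise; feeding those observed choices to `infer_budget` recovers a budget of 3, which can then be used to predict the agent's choices on new problems.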
Jacob stated that the inference budget is easy to interpret: it reflects the intuition that harder problems require longer planning, and stronger agents plan for longer. The research team tested their model in three scenarios: inferring navigation goals from previous routes, guessing communicative intent from verbal cues, and predicting subsequent moves in chess matches. Their method matched or outperformed a widely used alternative in each experiment. The researchers intend to extend this approach to model the planning process in other settings, such as reinforcement learning. This study was supported in part by the MIT Schwarzman College of Computing's Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.