
Researchers at MIT and the University of Washington have developed a model for understanding and predicting human behavior that accounts for the computational constraints limiting decision-making in both humans and machines. A defining feature of the model is its ability to infer an agent's computational constraints, or "inference budget," from just a few previous actions, which can then be used to predict the agent's future behavior.

The model can infer someone's intentions by analyzing their prior behavior and can predict subsequent actions in settings such as chess games. The ultimate goal of the research is to improve how AI systems interact with humans by understanding their behavior and anticipating their actions. Lead researcher Athul Paul Jacob noted that the ability to intervene when a human is about to make a mistake, or to adapt to a human's weaknesses, could make AI collaborators more effective.

Models that predict human behavior have been under development for years, but their accuracy suffers because they struggle to account for the inconsistent ways in which humans behave suboptimally. To overcome this, Jacob and his team drew inspiration from studies of chess players. They observed that the time spent planning moves varied with the difficulty of the position and the strength of the player, and deduced that planning depth, or thinking time, could serve as a good proxy for human behavior.

The method involves running a problem-solving algorithm for a set amount of time or a set number of steps and comparing its intermediate decisions with an agent's actual decisions. This comparison is used to determine the agent's inference budget, that is, how much planning the agent is likely to devote to a problem. The model can then use this inference budget to predict how the agent would act when facing a similar problem.
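As a rough illustration of the idea (not the researchers' actual implementation), the sketch below runs value iteration for a limited number of sweeps on a toy chain-world problem, scores how well each candidate budget explains an agent's observed state-action pairs, and picks the best-fitting budget by maximum likelihood before reusing it for prediction. The environment, the softmax policy, the candidate budgets, and the observed trajectory are all hypothetical, chosen only to make the inference loop concrete.

```python
import numpy as np

# Toy chain world: states 0..N-1, goal at the right end.
# Actions: 0 = move left, 1 = move right. Reward 1.0 at the goal, small step cost.
N_STATES, GOAL, GAMMA = 8, 7, 0.95
MOVES = (-1, +1)

def next_state(s, a):
    return min(max(s + MOVES[a], 0), N_STATES - 1)

def q_values(num_sweeps):
    """Run value iteration for a fixed number of sweeps (the candidate budget)."""
    V = np.zeros(N_STATES)
    for _ in range(num_sweeps + 1):  # final pass produces Q from the truncated values
        Q = np.zeros((N_STATES, 2))
        for s in range(N_STATES):
            for a in range(2):
                s2 = next_state(s, a)
                r = 1.0 if s2 == GOAL else -0.01
                Q[s, a] = r + GAMMA * V[s2]
        V = Q.max(axis=1)
    return Q

def softmax_policy(Q, temp=0.1):
    """Turn the (possibly truncated) Q-values into action probabilities."""
    z = Q / temp
    z -= z.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def infer_budget(trajectory, budgets=range(0, 11)):
    """Pick the planning budget that best explains the observed (state, action) pairs."""
    best_budget, best_ll = None, -np.inf
    for k in budgets:
        pi = softmax_policy(q_values(k))
        ll = sum(np.log(pi[s, a] + 1e-12) for s, a in trajectory)
        if ll > best_ll:
            best_budget, best_ll = k, ll
    return best_budget

# Hypothetical observed behavior: the agent wanders near the start,
# which is better explained by a small planning budget than by full planning.
observed = [(0, 1), (1, 1), (2, 0), (3, 1)]
budget = infer_budget(observed)
predicted = np.argmax(softmax_policy(q_values(budget))[4])
print(f"Inferred budget: {budget} sweeps; predicted action in state 4: {predicted}")
```

In this toy version, a small inferred budget yields value estimates that have not yet propagated back from the goal, so the fitted policy reproduces the agent's hesitant moves; a large budget would instead predict consistently goal-directed behavior.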

The researchers tested the model in three settings: inferring navigation goals from previously traveled routes, predicting subsequent moves in human chess matches, and deducing a person's communicative intent from their verbal cues. In each case, their method matched or outperformed existing approaches, and it also effectively gauged player skill and task difficulty.

The model's most striking feature is its interpretability, says Jacob. It naturally captures patterns such as stronger players planning for longer and harder problems requiring more planning. The researchers aim to apply the approach in other areas, such as reinforcement learning in robotics, and to continue developing more effective AI collaborators. The work was funded, in part, by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.
