
Researchers at MIT and the University of Washington have developed a model to estimate the computational limitations or “inference budget” of an individual or AI agent, with the ultimate objective of enhancing the collaboration between humans and AI. The project, spearheaded by graduate student Athul Paul Jacob, proposes that this model can greatly improve the way we predict an agent’s future actions, thereby significantly improving the efficacy of AI systems.

Unlike previous models that interpret suboptimal decision-making behavior as simply randomness or noise, the researchers constructed a model that acknowledges how humans are inclined to behave suboptimally, but in inconsistent ways. One of the sources of inspiration for the study was the observation of chess players making decisions under time constraints, working more quickly when faced with elementary moves while allocating more time for harder challenges.

Building on this observation, they developed a framework that infers an agent's depth of planning from its prior actions and uses that knowledge to model its decision-making process. Essentially, the researchers let an algorithm analyze a problem for a specified amount of time and record the decision it would make at each step.

By comparing the algorithm's decisions with the agent's decisions, the model identifies the point at which the agent stopped planning. From this, it can determine the agent's "inference budget," that is, how long the agent plans for a given problem, and use that estimate to predict the agent's behavior in similar situations.
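The comparison step described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: `planner_decision` stands in for a depth-limited planning algorithm (e.g. a bounded search in chess), and the budget is estimated as the planning depth whose decisions best match the agent's observed actions.

```python
def planner_decision(state, depth):
    """Stand-in for a depth-limited planner.

    Toy rule: with more planning depth, the planner gets closer to the
    ideal action (here simply the state value itself, capped by depth).
    """
    return min(state, depth)


def estimate_inference_budget(observed_actions, states, max_depth=10):
    """Return the planning depth that best explains the observed actions."""
    best_depth, best_matches = 0, -1
    for depth in range(1, max_depth + 1):
        # Count how often the depth-limited planner agrees with the agent.
        matches = sum(
            planner_decision(s, depth) == a
            for s, a in zip(states, observed_actions)
        )
        if matches > best_matches:
            best_depth, best_matches = depth, matches
    return best_depth
```

For example, if an agent's actions were generated with an effective planning depth of 4, the estimator recovers that depth by finding where the planner's decisions and the agent's decisions agree most often, and that depth can then be reused to predict the agent on new problems.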

The model is also efficient: it gives the researchers access to the full set of decisions made by the problem-solving algorithm without requiring any additional computation. And because it applies to any problem that can be solved with a certain class of algorithms, it can be used in a wide range of settings.

The model was tested on three diverse tasks: inferring navigation goals, deriving communicative intent from verbal cues, and predicting subsequent moves in chess games played by humans. In each case, the model performed as well as or better than popular alternatives, and its estimates aligned reliably with measures of player skill and task difficulty.

The team is hopeful these results can be adapted for processes like reinforcement learning in robotics, with the long-term goal being to enhance AI efficacy in collaboration settings. The research was partially funded by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.
