
Researchers at MIT and the University of Washington have developed a model that can infer an agent's computational constraints from a few samples of its past actions. The findings could enhance the ability of AI systems to collaborate more effectively with humans. The scientists found that human decision-making often deviates from the optimal, largely due to unknown computational constraints. To address this, the team developed a method that can account for these constraints in any agent, human or machine.

Their proposed model uses these constraints to calculate an agent's "inference budget": the extent to which the agent will likely plan ahead on a given problem. The model can then use this inference budget to predict the agent's subsequent actions. The study demonstrated this capability by inferring navigation goals from earlier routes and predicting subsequent moves in chess matches. The technique performed as well as or better than prevailing methods of modelling decision-making under similar circumstances.

These insights could help researchers better understand human behaviour, which in turn could improve how effectively AI systems respond to their human counterparts. Knowledge of an individual's decision-making quirks could guide an AI assistant in adapting to that person's weaknesses and proactively offering improved ways of tackling tasks.

The researchers' method involves running an algorithm for a pre-set time to solve the problem being studied. The decisions made by the algorithm are then compared to those of an agent solving the same problem. This comparison helps determine where the agent stopped planning, thereby establishing the agent's inference budget. The budget can then be used to predict how the agent would act when faced with a similar problem in the future.
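The comparison step above can be sketched in a toy setting. The following is a minimal illustration, not the authors' actual algorithm: it assumes a hypothetical `planner(state, budget)` function that returns the action a planner would take if it stopped after `budget` planning steps, and estimates the agent's budget as the planning depth whose decisions best match the observed actions.

```python
def estimate_inference_budget(observed, planner, max_budget):
    """Estimate an agent's inference budget: the planning depth whose
    decisions best agree with the agent's observed (state, action) pairs."""
    best_budget, best_matches = 0, -1
    for budget in range(1, max_budget + 1):
        # Actions the planner would choose if it stopped after `budget` steps.
        planned = [planner(state, budget) for state, _ in observed]
        matches = sum(p == a for p, (_, a) in zip(planned, observed))
        if matches > best_matches:
            best_budget, best_matches = budget, matches
    return best_budget

# Toy domain (illustrative only): states are positions on a line, the goal
# is position 10, and a depth-k planner moves toward the goal (+1) only if
# the goal is within k steps of "lookahead"; otherwise it does nothing (0).
def toy_planner(state, budget):
    return +1 if 10 - state <= budget else 0

# Simulated agent whose effective lookahead is 3 steps.
observed = [(s, +1 if 10 - s <= 3 else 0) for s in range(10)]
budget = estimate_inference_budget(observed, toy_planner, max_budget=8)
print(budget)  # recovers the agent's lookahead of 3
```

Here the fit is exact, so the recovered budget matches the agent's true lookahead; with noisy real behaviour, the match count would instead identify the depth that best explains the observations.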

The model's efficiency lies in the fact that the full set of the algorithm's decisions is accessible at no extra cost. The researchers tested the method on three different tasks: inferring navigation goals, guessing communicative intent, and predicting chess moves.

In the future, the scientists are looking to apply this inference budget method to other areas such as reinforcement learning, striving towards the ultimate goal of building better AI assistants. This work was supported by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.
