Artificial Intelligence (AI) that can work effectively with humans requires a robust model of human behaviour. However, humans often behave irrationally or suboptimally, because their decision-making is constrained by limited time and computation.

Researchers at MIT and the University of Washington have developed a model for predicting an agent’s behaviour (whether human or machine) by accounting for the computational constraints that limit its problem-solving, which they refer to as the agent’s “inference budget”.

By observing an agent’s past actions, the model can infer its computational constraints and then predict its future behaviour, which is especially useful for AI systems that interact with humans.

In a new study, the researchers showed that their method could infer a person’s navigation goals from prior routes and predict players’ subsequent moves in chess matches, matching or outperforming other decision-making models.

The key is understanding a human’s behaviour and inferring their goals from it, which is crucial for creating a genuinely helpful AI. An effective AI could even anticipate when a human is about to make a mistake and offer a better solution.

The research team drew inspiration from previous studies of chess players, who took less time for simple moves and more time for challenging ones. Noting that the depth of planning shapes behaviour, they built a model that infers an agent’s depth of planning from its previous actions.

This ‘inference budget’ proves a strong predictor of human behaviour on difficult tasks and for skilled players. To estimate it, a planning algorithm is run on a problem for a fixed number of steps, recording the decision it would make at each step; that data set is then compared with an agent’s decisions on the same problem. The model aligns the agent’s decisions with the algorithm’s and identifies the step at which the agent stopped planning, which yields the agent’s inference budget.
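As a rough illustration of this alignment step, the Python sketch below estimates a budget as the planning depth whose decisions agree most often with an agent’s observed choices. Everything here is invented for the example: `planner_decision` is a toy stand-in for a real anytime planner, and the problems, noise level and budget are made up rather than taken from the paper.

```python
import random

def planner_decision(problem, depth, n_actions=5):
    """Toy stand-in for an anytime planning algorithm: the action it
    would choose after `depth` planning steps. Deterministic per
    (problem, depth), and it stops changing its mind past depth 10,
    mimicking a search that has converged. Illustrative only."""
    seed = problem * 1000 + min(depth, 10)
    return random.Random(seed).randrange(n_actions)

def infer_budget(observed, max_depth=15):
    """Estimate the inference budget: the planning depth whose
    decisions agree most often with the agent's observed choices.
    `observed` maps problem id -> action the agent actually took."""
    def agreement(depth):
        return sum(planner_decision(p, depth) == a
                   for p, a in observed.items())
    return max(range(1, max_depth + 1), key=agreement)

# Simulate an agent that always stops planning after 6 steps but
# occasionally acts at random (unmodelled mistakes).
rng = random.Random(0)
true_budget = 6
observed = {}
for p in range(200):
    action = planner_decision(p, true_budget)
    if rng.random() < 0.2:
        action = rng.randrange(5)
    observed[p] = action

print(infer_budget(observed))  # typically recovers a depth near 6
```

In the paper’s setting the planner would be a real search algorithm for the task at hand (for chess, an actual game-tree search), but the principle is the same: pick the stopping point that best explains the observed decisions.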

The method is efficient: a single run of the planning algorithm yields its decision at every step, so the model can draw on that extensive data without additional computation, and it is versatile enough to apply to any problem solvable by a suitable algorithm. During testing, it also proved adept at picking up complex behaviours its creators had not anticipated.
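To make the efficiency point concrete with the same toy code: each problem’s full depth-indexed decision trace can be computed once, after which every candidate budget is scored against it with no further planning work. Again, the function and names below are illustrative, not the paper’s implementation.

```python
def infer_budget_from_traces(observed, max_depth=15):
    """Same estimate as infer_budget above, built on per-problem
    decision traces. A real anytime planner produces the whole trace
    in a single run; here we simply index into the toy planner."""
    traces = {p: [planner_decision(p, d) for d in range(1, max_depth + 1)]
              for p in observed}
    # Score every candidate budget against the precomputed traces.
    scores = [sum(traces[p][d] == a for p, a in observed.items())
              for d in range(max_depth)]
    return scores.index(max(scores)) + 1  # depths are 1-indexed
```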

The model performed well in three tasks: inferring navigational goals, guessing communicative intent, and predicting moves in chess matches, matching or surpassing a widely used alternative method in each case.

The team is extending similar strategies to other areas, like reinforcement learning (a common trial-and-error approach in robotics). Their long-term goal is to advance this work towards designing more effective AI collaborators.

This work was partially funded by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity programme and the National Science Foundation.
