Researchers at MIT and the University of Washington have devised a model to predict the behaviour of AI systems and humans. The model factors in the unknown computational constraints that may hinder an agent's problem-solving abilities. By analysing just a few instances of an agent's previous actions, the model can predict its future behaviour. The findings could help AI systems understand and respond better to human behaviour.

The team found that humans often behave suboptimally when making decisions because of computational constraints. "If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it," said Athul Paul Jacob, an electrical engineering and computer science graduate student and lead author of the study.

The researchers' model inferred a chess player's or an AI's planning depth from prior actions in order to predict future decision-making patterns. The study found planning depth to be an excellent proxy for behaviour, one that can be used to anticipate an agent's future actions on similar problems; a toy sketch of the idea follows.
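To make the idea concrete, here is a minimal, hypothetical sketch of depth inference, not the researchers' actual model: a toy agent walks along a line of states holding a small nearby reward and a large distant one, we score each candidate lookahead depth by how many observed moves a depth-limited planner reproduces, and then reuse the best-fitting depth to predict the next move. All states, rewards, the discount factor, and the agreement score are invented for illustration.

```python
# Toy sketch of inferring an agent's planning depth from observed moves.
# Everything here (states, rewards, discount, scoring) is made up for
# illustration; it is not the published model.

N_STATES = 11                  # states 0..10 on a line
REWARDS = {4: 1.0, 9: 10.0}    # small nearby prize, big distant prize
GAMMA = 0.9                    # discount factor
ACTIONS = (-1, +1)             # step left or step right

def clamp(s):
    # Keep the agent on the line.
    return min(max(s, 0), N_STATES - 1)

def reward(state):
    return REWARDS.get(state, 0.0)

def plan_value(state, depth):
    # Depth-limited lookahead: value of `state` with `depth` steps remaining.
    # A shallow planner never "sees" the big reward at state 9.
    if depth == 0:
        return reward(state)
    return reward(state) + GAMMA * max(
        plan_value(clamp(state + a), depth - 1) for a in ACTIONS
    )

def best_action(state, depth):
    # The move a depth-`depth` planner makes from `state` (ties go left).
    return max(ACTIONS, key=lambda a: plan_value(clamp(state + a), depth - 1))

def infer_depth(observations, max_depth=8):
    # Score each depth by how many observed (state, action) pairs a planner
    # of that depth reproduces; ties resolve to the shallowest depth.
    def agreement(d):
        return sum(best_action(s, d) == a for s, a in observations)
    return max(range(1, max_depth + 1), key=agreement)

# A few observed moves from a shallow agent: it circles the small reward
# at state 4 instead of heading right toward the big reward at state 9.
observed = [(5, -1), (4, -1), (3, +1)]
depth = infer_depth(observed)
print(f"inferred planning depth: {depth}")
print(f"predicted next move from state 5: {best_action(5, depth):+d}")
```

In this toy, any planner with depth four or more heads right toward the distant reward, so the shallow observed moves pin the agent down to a small depth, and that inferred depth then predicts it will keep drifting left. Ties resolving to the shallowest consistent depth amounts to preferring the simplest planner that explains the data; the published model is, of course, considerably more sophisticated than this sketch.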

The team used their model to anticipate navigation goals from previous routes and to predict future moves in chess matches. Its results matched or outperformed another widely used method for modelling this type of decision-making. In future work, the researchers plan to apply the approach to other domains, including reinforcement learning, which is commonly used in robotics. The overall goal of their research is to develop more effective AI collaborators. The study was supported by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.
