To build an Artificial Intelligence (AI) system that can work effectively with humans, it’s critical to have an accurate model of human behavior. However, humans often behave suboptimally when making decisions, and this irrational behavior is difficult to imitate. It stems from computational constraints: a person cannot spend decades computing the ideal solution to a single problem.

Researchers at MIT and the University of Washington have devised a method for modeling the behavior of an agent, whether human or machine, that accounts for the unknown computational constraints that may hinder its problem-solving. The model can automatically infer an agent’s computational limitations from just a few traces of its previous actions. The result is a so-called “inference budget,” a measure that can be used to predict the agent’s future behavior.

The researchers showed how their method can be used to predict someone’s navigation goals from prior routes and to forecast players’ upcoming moves in chess matches. Their technique matches or surpasses a popular alternative for modeling this kind of decision-making, demonstrating its potential effectiveness.

The paper’s lead author, Athul Paul Jacob, a graduate student in MIT’s robotics laboratory, believes that understanding human behavior, and being able to infer a person’s goals from it, would make AI systems far more useful. He argues that if an AI can anticipate from past behavior that a human is about to make a mistake, it could intervene and suggest a more effective approach.

A key feature of the method is that it gives the researchers access to the full set of decisions the problem-solving algorithm makes as it runs, at no additional computational cost. By comparing an agent’s choices against the decisions the algorithm would make at each step, the model can infer where the agent stopped planning. The approach can be applied to many problems that can be solved with this class of algorithms.
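To make the idea concrete, here is a minimal toy sketch (not the authors' implementation): in a simple take-away game, a depth-limited search plays the role of the planning algorithm, its intermediate decisions at every depth come for free from the same search, and the inferred "budget" is the depth whose moves best match an agent's observed play.

```python
def minimax(n, depth):
    """Value and best move for the player facing `n` stones, searching
    `depth` plies in a take-1-2-or-3 game where taking the last stone wins.
    Value is from the mover's perspective: +1 win, -1 loss, 0 unknown
    (the search budget ran out before the outcome was resolved)."""
    if n == 0:
        return -1, None              # opponent just took the last stone
    if depth == 0:
        return 0, None               # out of budget: outcome unknown
    best_val, best_move = -2, None
    for move in (1, 2, 3):
        if move <= n:
            val = -minimax(n - move, depth - 1)[0]
            if val > best_val:
                best_val, best_move = val, move
    return best_val, best_move

def infer_budget(traces, max_depth=10):
    """Return the smallest search depth whose policy best reproduces the
    observed (state, move) pairs -- a stand-in for an inference budget."""
    def agreement(depth):
        return sum(minimax(n, depth)[1] == move for n, move in traces)
    return max(range(1, max_depth + 1), key=agreement)

# A simulated bounded agent that always plans exactly 3 plies ahead:
traces = [(n, minimax(n, 3)[1]) for n in range(1, 20)]
print(infer_budget(traces))  # recovers a budget of 3
```

Once the budget is recovered, running the same depth-limited search at that budget predicts the agent's future (possibly mistaken) moves, rather than assuming perfect play.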

In tests, the team’s approach matched or outperformed a popular alternative across three modeling tasks: inferring communication goals from verbal cues, predicting navigation goals from previous routes, and forecasting future moves in human-versus-human chess matches.

In future research, the team aims to apply this method to modeling planning processes in other domains, including reinforcement learning, which is commonly used in robotics. They also aspire to keep developing the approach toward the broader goal of improving human-AI collaboration. This project received financial support from the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.
