
To build AI systems that can collaborate effectively with humans, a comprehensive model of human behavior is essential. However, humans often make suboptimal decisions. Attributing this irrationality to computational limitations, researchers from the Massachusetts Institute of Technology (MIT) and the University of Washington have presented a technique to model the behavior of a human or machine agent even when that agent is hampered by unknown computational constraints.

The model can infer an agent's computational limitations by observing just a few of its previous actions, yielding what the research team calls the agent's 'inference budget'. This knowledge can then be used to predict the agent's future behavior.

In a research paper on the subject, the scientists demonstrate their method by using it to infer a person's navigational goals from their past routes and to predict players' subsequent moves in chess games. Their proposed approach matches or outperforms another popular decision-making model.

Athul Paul Jacob, a member of the research team, emphasized that a better understanding of human behavior, and the ability to infer goals from it, could significantly improve the usefulness of an AI assistant.

In creating their model, Jacob and his colleagues took inspiration from a previous study of chess players, which observed that players spent less time planning simple moves than complex ones. From the depth of an agent's planning, the team realized it could infer information about the agent's decision-making process.

The new model works by running a problem-solving algorithm for a set amount of time and analyzing the decisions the algorithm makes at each step. By comparing these with the decisions the agent made in a similar problem-solving situation, the model can judge the point at which the agent stopped planning. This lets the model estimate the agent's 'inference budget', the amount of planning the agent devotes to any problem, and thereby predict how the agent will behave in future problem-solving scenarios.
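To make this concrete, here is a minimal Python sketch of the idea, assuming a hypothetical planner_decision(state, budget) function that returns the move a depth-limited solver would choose after a given number of planning steps. The paper's actual method performs probabilistic inference over budgets; this maximum-agreement version, with an assumed budget cap, is only illustrative.

```python
# Simplified sketch of inference-budget estimation by agreement, assuming a
# hypothetical planner_decision(state, budget) that returns the action a
# depth-limited solver would pick after `budget` planning steps. The budget
# cap and all names here are illustrative, not from the paper.

MAX_BUDGET = 20  # assumed cap on planning depth


def infer_budget(observations, planner_decision):
    """Estimate the planning budget that best explains an agent's actions.

    observations: list of (state, action) pairs the agent was seen taking.
    """
    scores = {b: 0 for b in range(1, MAX_BUDGET + 1)}
    for budget in scores:
        for state, action in observations:
            # Count how often the depth-limited planner agrees with the agent.
            if planner_decision(state, budget) == action:
                scores[budget] += 1
    # The budget with the most agreement approximates the point at which
    # the agent stopped planning, i.e. its "inference budget".
    return max(scores, key=scores.get)


def predict_action(state, budget, planner_decision):
    """Predict the agent's next move by planning only up to its budget."""
    return planner_decision(state, budget)
```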

Jacob and his team validated their approach on three different modeling tasks involving navigation, communication, and chess. In each experiment, their method matched or outperformed popular alternatives. Their behavior model also corresponded well with measures of player expertise and task difficulty.

The researchers hope to apply their technique to model planning methods in other domains, including reinforcement learning, and they aim to continue building on this work to create more effective AI collaborators. The project was funded, in part, by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.
