Scientists from the Massachusetts Institute of Technology (MIT) and the University of Washington have developed a method to automatically infer the computational limitations of an AI or human agent by observing its prior actions. The resulting estimate of the agent’s “inference budget” can then be used to predict its future behavior. Built into future AI systems, the technique could help them work more effectively with humans by decoding their behavior, and thus their objectives.

The research, led by MIT graduate student Athul Paul Jacob, was inspired by earlier studies of chess players. Researchers observed that players spend less time thinking before simple moves, and that stronger players spend more time planning before difficult ones. Jacob explains that the time an agent spends analyzing a problem is a strong indicator of how it will behave.

The model works by running a problem-solving algorithm for a fixed amount of computation. During a chess match, for example, the algorithm might be allowed to run for a predetermined number of steps, recording the decision it would make at each point. The model then compares these decisions with the behavior of an agent tackling the same problem, identifying the point at which the agent stopped planning. By determining the agent’s inference budget – how long it plans for a given problem – the model can predict how that agent would react in a comparable situation.
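The idea can be illustrated with a small sketch. This is not the researchers’ actual model; it is a hypothetical toy example in which the “planner” is a truncated Bellman-Ford search over a made-up graph (all node names and costs are invented for illustration). A small planning budget only discovers the short expensive route, while a larger budget finds the cheaper multi-step route, so the agent’s observed moves reveal how deep it planned:

```python
from math import inf

# Hypothetical toy graph (not from the paper): edge costs toward goal "G".
# The direct-ish route S -> A -> G costs 11; the longer route
# S -> B -> C -> D -> G costs only 4 but needs deeper planning to find.
EDGES = {
    "S": {"A": 1, "B": 1},
    "A": {"G": 10},
    "B": {"C": 1},
    "C": {"D": 1},
    "D": {"G": 1},
    "G": {},
}

def distances_to_goal(k, goal="G"):
    """Truncated planning: cheapest cost to the goal using at most k edges."""
    d = {v: (0 if v == goal else inf) for v in EDGES}
    for _ in range(k):  # each pass extends known paths by one edge
        d = {v: min([c + d[u] for u, c in EDGES[v].items()] + [d[v]])
             for v in EDGES}
    return d

def best_move(state, k):
    """Greedy action under a k-step budget (ties broken alphabetically)."""
    d = distances_to_goal(k)
    return min(EDGES[state], key=lambda u: (EDGES[state][u] + d[u], u))

def infer_budget(observations, k_max=10):
    """Smallest budget k whose policy reproduces every observed (state, move)."""
    for k in range(1, k_max + 1):
        if all(best_move(s, k) == m for s, m in observations):
            return k
    return None  # no budget in range explains the behavior
```

With a budget of 1 or 2 the agent at "S" heads toward "A" (the cheap route is still invisible); with a budget of 3 or more it heads toward "B". So observing which move an agent makes lets `infer_budget` recover the smallest consistent budget, and `best_move(new_state, inferred_k)` then predicts its behavior elsewhere, mirroring the comparison step described above.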

The method is efficient: running the algorithm yields its full set of decisions at every step with no additional computation. The framework can also be applied to any problem solvable by a particular class of algorithms.

The researchers tested their model in three scenarios: inferring navigation goals from previous routes, guessing someone’s communicative intent from their verbal cues, and predicting subsequent moves in human-human chess matches. Their technique matched, or in some cases outperformed, a popular alternative model in all experiments.

In future work, the researchers aim to apply the method to model planning in other areas, such as reinforcement learning, which is often used in robotics. The work aligns with their overarching goal of developing more effective AI collaborators. This research received support from the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.
