
Researchers at MIT and the University of Washington have developed a model for understanding the behavior of humans and machines in decision-making scenarios, even when that behavior is suboptimal due to computational constraints. The model is built around an agent's "inference budget": a measure of planning capacity that is estimated from observations of the agent's previous actions and then used to predict its future behavior.

This model could help AI systems learn more about human behavior, allowing them to assist their human counterparts more effectively. If an AI system can predict a person's actions, it could anticipate their mistakes, offer alternative suggestions, or adapt to their weaknesses.

The model's effectiveness was demonstrated by inferring navigation goals from previous routes and predicting subsequent moves in chess matches. In these tests, the technique matched or outperformed another popular modeling method.

This approach could change the way AI interacts with humans, making collaboration much more productive. An AI system working in synergy with humans could better infer their goals and likely actions, making it a far more useful tool.

The key to modeling this behavior was understanding the depth of planning an agent uses when solving a problem. This depth was inferred from the agent's past actions and then used to predict how the agent would respond to similar problems in the future.
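The general idea can be illustrated with a toy sketch. This is not the researchers' actual method, just a minimal illustration under invented assumptions: an agent on a number line moves toward a goal only when the goal falls within its planning horizon, and otherwise takes a default action. Given observed (position, action) pairs, we recover the depth whose policy best explains the data. The functions `depth_limited_action` and `infer_depth` are hypothetical names for this sketch.

```python
def depth_limited_action(pos, goal, depth):
    """Policy of a planner that looks `depth` steps ahead:
    it moves toward the goal if the goal is within the horizon,
    otherwise it falls back to a default action (+1)."""
    if abs(goal - pos) <= depth:
        return 1 if goal > pos else -1
    return 1  # default heuristic when the goal is out of sight

def infer_depth(observations, goal, max_depth=5):
    """Pick the planning depth whose policy best matches the
    observed (position, action) pairs."""
    best_depth, best_matches = 0, -1
    for d in range(max_depth + 1):
        matches = sum(
            depth_limited_action(pos, goal, d) == act
            for pos, act in observations
        )
        if matches > best_matches:
            best_depth, best_matches = d, matches
    return best_depth

# Simulate an agent whose true planning depth is 2, then recover it.
goal, true_depth = 0, 2
observations = [(p, depth_limited_action(p, goal, true_depth))
                for p in [5, 4, 3, 2, 1, -1, -2]]
print(infer_depth(observations, goal))  # → 2
```

In this toy setting the inferred depth plays the role of the inference budget: a single number, fit to past behavior, that predicts how the agent will act on new instances of the same problem.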

The development of this model was inspired in part by studies of chess players, which observed that players spend less time planning before simple moves, while stronger players plan more extensively in complex games. An agent's planning depth thus serves as a proxy for its computational constraints and a window into how humans actually behave.

The model was tested in several scenarios: guessing a person's intent from verbal cues, predicting subsequent moves in chess games, and inferring navigation goals from previous routes. Its predictions matched or surpassed those of a popular alternative method, and the inferred budgets correlated well with player skill in chess and with task difficulty.

The researchers plan to extend the model to other areas, such as reinforcement learning, which is commonly used in robotics, working toward the broader goal of making AI systems more effective collaborators.

This work was supported in part by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation. The research is set to be presented at the International Conference on Learning Representations.
