Researchers from MIT and the University of Washington have developed a computational model to predict human behavior while taking into account the suboptimal decisions humans often make due to computational constraints. The researchers believe such a model could help AI systems anticipate and counterbalance human-derived errors, enhancing the efficacy of AI-human collaboration.
Suboptimal decision-making is characteristic of humans because of limits on how long they can spend contemplating a problem – a factor often overlooked in computational models that simulate human behavior. To address this, the research team developed a method to infer an entity’s computational constraints from its previous actions, summarizing them in a quantity they dubbed the “inference budget”. The inferred budget is then used to forecast the entity’s future behavior.
The researchers demonstrated the effectiveness of their approach by inferring navigation goals from previously taken routes and by predicting chess players’ subsequent moves from their past games. Their technique matched or surpassed a widely used alternative method for modeling this type of decision-making.
The lead author of the paper, Athul Paul Jacob, said that understanding human behavior and inferring subsequent actions could make AI assistants more helpful. If an AI system using their novel model knew a human was about to err, based on their previous behavior, it could intervene to offer a better solution or adjust to its human collaborator’s weaknesses.
To construct their model, the researchers drew inspiration from studies of chess players, who spend less time contemplating straightforward moves and more time strategizing over difficult ones – a tendency especially pronounced in stronger players. This informed their approach of determining an entity’s depth of planning from its previous actions and modeling its decision-making process accordingly.
The team’s method runs a problem-solving algorithm for a set period on the task at hand. The algorithm’s sequence of decisions is then compared with those of a human solving the same problem, which approximates how long the human spent considering their options. This lets the model establish the human’s inference budget, which can then be used to predict how they will act on a similar task in the future.
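The loop described above can be sketched in a few lines of Python. Everything here – the toy game trees, the pessimistic fallback heuristic, and the depth-limited planner – is an illustrative assumption rather than the paper's actual models; the sketch only captures the shape of the idea: replay a planner at several "budgets" (search depths), keep the budget that best reproduces the observed moves, and use it for prediction.

```python
# Hypothetical sketch of inference-budget estimation: which search depth
# best explains an agent's observed choices in small game trees?
# Trees are nested dicts mapping actions to subtrees; leaves are payoffs.

def value(node, depth, heuristic):
    """Depth-limited value of a game-tree node."""
    if not isinstance(node, dict):
        return node                      # leaf: exact payoff
    if depth == 0:
        return heuristic(node)           # budget exhausted: shallow estimate
    return max(value(child, depth - 1, heuristic) for child in node.values())

def best_move(node, depth, heuristic):
    """Move an agent with `depth` steps of lookahead would choose."""
    return max(node, key=lambda a: value(node[a], depth - 1, heuristic))

def pessimistic(node):
    """Toy heuristic for truncated search: assume the worst reachable payoff."""
    return min(v if not isinstance(v, dict) else pessimistic(v)
               for v in node.values())

def infer_budget(observations, max_depth=4):
    """Pick the depth that best explains observed (tree, chosen_move) pairs."""
    scores = {d: sum(best_move(tree, d, pessimistic) == move
                     for tree, move in observations)
              for d in range(1, max_depth + 1)}
    return max(scores, key=lambda d: (scores[d], -d))  # prefer shallower ties

# Move "a" looks risky up close (worst case 0) but pays off with deeper search.
tree = {"a": {"a": 0, "b": 10}, "b": {"a": 3, "b": 2}}
print(best_move(tree, 1, pessimistic))   # shallow planner picks "b"
print(best_move(tree, 2, pessimistic))   # deeper planner finds "a"
print(infer_budget([(tree, "a")]))       # choosing "a" implies budget 2
```

Once a budget is inferred from past behavior, predicting the agent's next move on a new problem is just `best_move(new_tree, inferred_budget, pessimistic)` – the same planner, capped at the agent's estimated depth.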
The new model can be applied broadly and efficiently, since it reuses the full set of decisions the problem-solving algorithm already produces without additional computation. Jacob also highlighted the interpretability of the inference budget, expressing his surprise at how well their algorithm pinpointed typical human behaviors.
Further testing showed their method’s effectiveness in three modeling tasks: inferring navigation goals, predicting communicative intent, and forecasting successive chess moves.
With future research, the team hopes to apply their methodology to other domains, such as reinforcement learning in robotics. Their ultimate aim is to create more effective AI-human collaboration. The project was supported in part by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.