A team at MIT, along with the Broad Institute of MIT and Harvard, and Massachusetts General Hospital, has developed an artificial intelligence (AI) tool that can help navigate the uncertainty in medical image analysis. The tool, named Tyche, provides multiple possible interpretations of a medical image rather than the single answer typically provided by AI models.
AI systems have become integral tools in biomedicine, especially for identifying and annotating key structures in medical images such as organs or cells. However, these systems typically produce a single answer, missing the complexity and ambiguity that may exist in an image. Human experts studying the same image might disagree about certain elements of it, and choosing one interpretation over the others can exclude vital information.
Tyche addresses this problem by producing multiple plausible interpretations of a given image, each highlighting different areas. Users can then choose the interpretation best suited to their needs. Furthermore, Tyche can perform new segmentation tasks without retraining, which significantly reduces the time, resources, and data usually required to apply AI tools; a sketch of this in-context workflow follows below. The tool can be used for tasks ranging from identifying lung lesions to flagging anomalies in brain MRIs.
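Because a new task is specified entirely by example image-mask pairs supplied at inference time, no gradient updates or retraining are needed. The PyTorch sketch below illustrates this call pattern; the function name, the model interface, and the tensor shapes are assumptions made for illustration, not Tyche's actual API.

```python
import torch

def segment_in_context(model, target_image, context_images, context_masks,
                       num_candidates=5):
    """Hypothetical in-context segmentation interface (illustrative only).

    target_image:   (1, 1, H, W) image to segment
    context_images: (N, 1, H, W) example images defining the task
    context_masks:  (N, 1, H, W) expert annotations for those examples
    returns:        (num_candidates, 1, H, W) candidate segmentations
    """
    with torch.no_grad():  # inference only; the pretrained weights stay frozen
        candidates = model(target_image, context_images, context_masks,
                           k=num_candidates)
    return candidates
```

Keeping the weights frozen is what lets one pretrained model serve many annotation tasks: swapping in a different context set changes the task without any retraining.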
A vital aspect of Tyche is its ability to account for ambiguity, which has been largely understudied in the use of AI models for medical imaging. For instance, if an AI model fails to identify something that three out of five experts agree exists in an image, crucial diagnostic information could be missed.
Tyche was created by modifying a neural network architecture, which uses layers of interconnected nodes, or neurons, to process data. To handle uncertainty, Tyche produces multiple predictions from a single medical image input and a contextual set of examples. As the data move from layer to layer, the candidate predictions interact at each stage, which keeps them diverse. Furthermore, Tyche was designed so that training maximizes the quality of its best prediction.
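One common way to realize this "reward the best prediction" idea is a winner-takes-all (best-of-k) loss: score every candidate against the expert annotation, then backpropagate only through the best one. The PyTorch sketch below shows this under the assumption of binary segmentation masks; it illustrates the general technique, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def best_candidate_loss(candidates, target):
    """Winner-takes-all loss over k candidate segmentations (illustrative).

    candidates: (k, 1, H, W) logits, one candidate segmentation per row
    target:     (1, H, W) binary ground-truth mask (float tensor)
    """
    k = candidates.shape[0]
    # Per-candidate pixel-wise loss, averaged over the image.
    losses = torch.stack([
        F.binary_cross_entropy_with_logits(candidates[i], target)
        for i in range(k)
    ])
    # Only the lowest-loss candidate drives the gradient update.
    return losses.min()
```

Because only the best-scoring candidate receives gradient, the remaining candidates are free to specialize on alternative plausible interpretations, which helps the set of predictions stay diverse.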
When tested, Tyche’s predictions displayed a range that reflected the diversity of human annotators. Its best predictions outperformed those of the baseline models, and Tyche completed these tasks faster than most existing models. Notably, the tool could also outperform more complex models trained on large, specialized datasets.
The team plans to make the model more flexible by incorporating a more varied context set, such as text or multiple types of images. They are also exploring methods to improve Tyche’s less accurate predictions and to enhance the system so it can recommend the best segmentation candidates. The work will be presented at the upcoming IEEE Conference on Computer Vision and Pattern Recognition (CVPR).