A research team from MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital has developed an artificial intelligence (AI) tool, named Tyche, that presents multiple plausible interpretations of a medical image rather than a single answer. The tool addresses a basic source of ambiguity in medical image interpretation: different experts can view the same image and segment it in different ways.
Lead author Marianne Rakic, an MIT computer science PhD candidate, explained that Tyche can handle new image segmentation tasks without being retrained. Retraining a model for a new task typically demands machine learning expertise and large amounts of labeled data; because Tyche skips this step, it is easier for clinicians and biomedical researchers to use.
The system also offers the potential for widespread applications. For instance, it could identify lesions in lung X-rays or detect anomalies in brain MRI scans. The team believes that Tyche’s multiple segmentation approach could illuminate critical information overlooked by other AI tools.
Tyche operates by modifying a conventional neural network architecture. A user begins by feeding the system several examples of a segmentation task, which illustrate how similar images can be interpreted in different ways. The researchers found that just 16 example images, referred to as a “context set”, suffice for Tyche to make reliable predictions.
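In these terms, a context set is simply a small batch of annotated image–mask pairs. The sketch below shows one way such a set might be assembled; the function name, array shapes, and 8x8 toy images are illustrative assumptions, not Tyche's actual interface.

```python
import numpy as np

def build_context_set(images, masks, size=16):
    """Stack `size` example (image, mask) pairs into two array batches.

    images, masks: lists of equally shaped 2-D arrays (a grayscale scan
    and its binary segmentation mask). Returns two arrays of shape
    (size, H, W) that an in-context model could consume as its context.
    """
    if len(images) < size or len(masks) < size:
        raise ValueError(f"need at least {size} annotated examples")
    return np.stack(images[:size]), np.stack(masks[:size])

# Toy usage: sixteen random 8x8 "scans" with thresholded "lesion" masks.
rng = np.random.default_rng(0)
imgs = [rng.random((8, 8)) for _ in range(16)]
msks = [(img > 0.5).astype(np.uint8) for img in imgs]
ctx_imgs, ctx_msks = build_context_set(imgs, msks)
print(ctx_imgs.shape, ctx_msks.shape)  # (16, 8, 8) (16, 8, 8)
```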
To capture uncertainty, the team adapted Tyche’s neural network to produce multiple candidate segmentations from a single medical image input and the relevant context set. At each layer of the network, the candidate segmentations interact with one another and with the context-set examples, which keeps the set of predictions diverse.
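A toy illustration of that idea, several noise-seeded candidates that repeatedly mix in information from one another and from the context set, might look like the following. This is a stand-in for intuition only; the real Tyche architecture is a modified segmentation network, not this simple iteration, and every name and shape here is assumed.

```python
import numpy as np

def predict_candidates(image, context, k=4, layers=3, seed=0):
    """Toy multi-candidate predictor (illustrative only).

    Each of the k candidates starts from the input image plus its own
    random noise. At every "layer", each candidate mixes in the mean of
    all candidates and a pooled summary of the context set, a crude
    stand-in for the layer-by-layer interaction the researchers
    describe. Returns k soft masks with values in (0, 1).
    """
    rng = np.random.default_rng(seed)
    ctx_summary = context.mean(axis=0)          # pooled view of the context set
    cands = [image + 0.1 * rng.standard_normal(image.shape)
             for _ in range(k)]
    for _ in range(layers):
        mean = np.mean(cands, axis=0)           # candidates "see" each other
        cands = [np.tanh(c + 0.5 * mean + 0.5 * ctx_summary) for c in cands]
    return [1.0 / (1.0 + np.exp(-c)) for c in cands]  # sigmoid -> soft masks

# Toy usage: 16 identical context masks, one blank 8x8 input "scan".
context = np.stack([np.eye(8)] * 16)
masks = predict_candidates(np.zeros((8, 8)), context, k=4)
```

Because each candidate carries its own noise while sharing the same input and context, the four returned masks are similar but not identical, mirroring the varied-but-plausible outputs described above.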
Tyche was trained to maximize the quality of its best prediction, while still presenting all of the candidate segmentations to the user even when one outperforms the others. The team also developed a version of Tyche that can work with an existing, already trained model for medical image segmentation.
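Optimizing the quality of the best prediction corresponds to a best-of-k (sometimes called winner-takes-all) objective: only the candidate closest to the ground-truth annotation incurs loss, leaving the other candidates free to cover alternative interpretations. A minimal sketch, assuming a soft Dice loss per candidate (the paper's exact loss formulation may differ):

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted soft mask and a binary target,
    both given as flat lists of per-pixel values. 0 is a perfect match."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def best_candidate_loss(candidates, target):
    """Best-of-k loss: only the candidate closest to the annotation is
    penalized, so the remaining candidates are free to represent other
    plausible segmentations of the same image."""
    return min(dice_loss(c, target) for c in candidates)

# Toy usage: one candidate near the annotation, one quite different.
target = [1, 1, 0, 0]
cands = [[0.9, 0.8, 0.1, 0.0],   # close to the annotation
         [0.1, 0.2, 0.9, 0.8]]   # an alternative interpretation
loss = best_candidate_loss(cands, target)
```

Only the first candidate's (small) Dice loss flows into `loss`; the second candidate is not pushed toward the annotation, which is what preserves diversity among the outputs.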
When Tyche was tested on datasets of annotated medical images, it produced predictions that reflected the diversity of human annotators’ interpretations with improved speed and performance compared to other models. According to the researchers, Tyche even outperformed more complex models trained on large, specialized datasets.
In the future, the team plans to improve Tyche further by using a more flexible context set, investigating techniques to upgrade its lowest quality predictions, and enhancing the system’s capability of recommending the best segmentation candidates. This research received partial funding from the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer.