Medical imaging is a critical tool in diagnosing and monitoring disease. However, interpreting these images is not always straightforward, and clinicians can reasonably disagree about what they show. To address this issue, researchers at MIT, in collaboration with the Broad Institute of MIT and Harvard and Massachusetts General Hospital (MGH), have developed an artificial intelligence (AI) tool, named Tyche, to better handle the varying interpretations a medical image can support.
In a research paper, the team demonstrates that Tyche, named for the Greek divinity of chance, can offer several plausible segmentations of a single medical image. The user can then select the interpretation best suited to their needs. Unlike many other models, Tyche does not require retraining to tackle a new segmentation task, reducing its complexity and making it more practical for clinicians and researchers.
In medical imaging, segmentation means assigning each pixel, the basic unit of a digital image, a label marking a specific structure such as an organ or a cell. This has historically been done by clinicians and can be a highly subjective process, leading to discrepancies between annotators. AI tools can streamline the process by identifying areas that may indicate disease or an anomaly. However, existing models typically provide a single interpretation, which fails to acknowledge the ambiguity inherent in some medical images.
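To make the idea of per-pixel labeling concrete, here is a toy illustration (not Tyche itself, and far simpler than any clinical method): a synthetic grayscale "scan" is segmented by thresholding, so every pixel receives a label of 1 (structure) or 0 (background).

```python
import numpy as np

def segment_by_threshold(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return a binary label mask the same shape as `image`.

    Segmentation assigns a label to every pixel; here the rule is
    simply "brighter than the threshold counts as structure."
    """
    return (image > threshold).astype(np.uint8)

# A 4x4 synthetic "scan" with one bright 2x2 region in the corner.
scan = np.zeros((4, 4))
scan[:2, :2] = 0.9

mask = segment_by_threshold(scan)
print(mask.sum())  # 4 pixels labeled as structure
```

The subjectivity the article describes enters exactly here: two clinicians may, in effect, draw the boundary with different "thresholds," producing different but equally plausible masks.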
Tyche, on the other hand, offers multiple interpretations, highlighting different aspects of the image. The number of interpretations can be customized by the user, allowing for a more personalized approach to medical imaging. Furthermore, Tyche can be used ‘out of the box’ for various tasks such as identifying lesions in X-rays or pinpointing anomalies in MRI scans.
This approach to medical image segmentation could call attention to important information that other AI models overlook, enhancing the diagnostic process and supporting biomedical research.
Having identified the limitations of existing models, the team designed Tyche to address two key shortcomings: the inability to handle ambiguity and the need for retraining for each new segmentation task. The result is a model that accommodates uncertainty while remaining accessible to clinicians and researchers.
Tyche employs a modified neural network, a system that processes data in layers and is loosely inspired by the human brain. Given a small set of example images and their segmentations, Tyche identifies the task at hand and adjusts its predictions accordingly, without retraining. This enables the model to offer an array of interpretations while keeping each candidate segmentation distinct from the others.
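The workflow above can be sketched with a deliberately simplified stand-in for the model. The interface is hypothetical (the function name, the threshold heuristic, and the noise-based diversity are all illustrative assumptions, not Tyche's actual mechanism): a "context set" of (image, mask) example pairs defines the task, and the model returns k distinct candidate segmentations for a new image.

```python
import numpy as np

def predict_candidates(context, image, k=3, rng_seed=0):
    """Hypothetical sketch of in-context segmentation.

    `context` is a list of (image, mask) example pairs that define
    the task. A decision threshold is estimated from the examples
    (midpoint between mean foreground and mean background intensity),
    and k candidates are produced by perturbing that threshold.
    The real model learns both steps end to end.
    """
    fg = np.concatenate([img[m == 1] for img, m in context])
    bg = np.concatenate([img[m == 0] for img, m in context])
    base = (fg.mean() + bg.mean()) / 2
    rng = np.random.default_rng(rng_seed)
    # Each candidate uses a slightly different threshold, yielding
    # multiple plausible segmentations of the same image.
    return [(image > base + rng.normal(0.0, 0.05)).astype(np.uint8)
            for _ in range(k)]

# One example pair is enough to define this toy task.
ex_img = np.array([[0.9, 0.1], [0.1, 0.9]])
ex_mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)

candidates = predict_candidates([(ex_img, ex_mask)], ex_img, k=3)
print(len(candidates))  # 3 candidate segmentations
```

The `k` parameter mirrors the article's point that the user can choose how many interpretations to receive, then pick the one that best fits their purpose.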
In tests, Tyche captured the diversity found in human annotations and produced predictions that were both faster and of higher quality than those of baseline models. Planned improvements include recommending the best segmentation candidates, further refining prediction quality, and making the context set more flexible.
This research was funded by the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer.