Biomedicine often requires annotating the pixels in a medical image that identify critical structures such as organs or cells, a process known as segmentation. Here, artificial intelligence (AI) models can assist clinicians by highlighting pixels that may indicate disease or anomalies. However, decision-making in medical image segmentation is frequently complex, and different experts may produce different, equally plausible segmentations of the same image. Recognizing this, a new AI tool known as Tyche has been developed by Marianne Rakic, an MIT computer science PhD candidate, alongside colleagues from MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital.
Tyche is designed to capture the inherent uncertainty in medical image segmentation, offering multiple plausible segmentations for each image, each highlighting slightly different areas. Users can choose how many candidates Tyche produces and select the one best suited to their needs. Unlike other models, Tyche does not require retraining for new segmentation tasks; retraining is typically data-intensive, demanding many annotated examples and considerable machine-learning expertise. Because Tyche skips this step, it can be used “out of the box” for various tasks such as identifying lesions in lung X-rays or pinpointing anomalies in brain MRIs.
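To make that in-context call pattern concrete, the sketch below shows roughly how such a model could be invoked: one new image plus a small context set of annotated examples goes in, and several candidate segmentations come out. The class name, interface, and shapes here are illustrative assumptions, not the released Tyche API.

```python
import torch

class TycheLikeModel(torch.nn.Module):
    """Stub standing in for an in-context segmentation model (assumed interface)."""

    def __init__(self, num_candidates: int = 4):
        super().__init__()
        self.head = torch.nn.Conv2d(1, num_candidates, kernel_size=3, padding=1)

    def forward(self, target_image, context_images, context_masks):
        # A real model would attend to the context set to infer the task;
        # this stub only illustrates the shapes, plus injected noise so
        # that repeated calls can yield diverse candidates.
        noise = torch.randn_like(target_image)
        logits = self.head(target_image + 0.1 * noise)
        return torch.sigmoid(logits)  # (1, K, H, W) candidate masks

model = TycheLikeModel(num_candidates=4)
target = torch.rand(1, 1, 128, 128)                      # new image to segment
ctx_images = torch.rand(16, 1, 128, 128)                 # 16 example images
ctx_masks = (torch.rand(16, 1, 128, 128) > 0.5).float()  # their annotations
candidates = model(target, ctx_images, ctx_masks)        # 4 plausible segmentations
```

No retraining happens here: swapping in a different context set redefines the task at inference time.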
The development of Tyche involved modifying a basic neural network architecture. The user first provides a few examples illustrating the segmentation task the model needs to learn; a context set of just 16 images is enough for accurate predictions. The researchers modified the network so it can offer several predictions based on a single medical image input and the context set. Altering the network’s layers lets the candidate segmentations produced at each stage interact with one another and with the examples in the context set, ensuring that the candidates differ slightly while still solving the task. The network is also trained to maximize the quality of its best prediction.
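One way to train toward the quality of the best prediction is to compute a loss for every candidate and backpropagate only through the best one, so the network is rewarded whenever at least one candidate matches the annotation well. The sketch below uses a soft Dice loss per candidate; treat it as an assumed formulation for illustration rather than the paper’s exact objective.

```python
import torch

def best_candidate_loss(pred_probs: torch.Tensor, target: torch.Tensor,
                        eps: float = 1e-6) -> torch.Tensor:
    """Loss on the best of K candidate segmentations (assumed soft-Dice variant).

    pred_probs: (K, H, W) per-candidate foreground probabilities.
    target:     (H, W) binary ground-truth mask from one annotator.
    """
    target = target.unsqueeze(0)  # (1, H, W), broadcasts over the K candidates
    intersection = (pred_probs * target).sum(dim=(1, 2))
    denom = pred_probs.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    dice = (2 * intersection + eps) / (denom + eps)  # (K,) per-candidate scores
    return (1 - dice).min()  # gradient flows only through the best candidate

# Toy check: four random candidates against one random mask.
preds = torch.rand(4, 128, 128, requires_grad=True)
mask = (torch.rand(128, 128) > 0.5).float()
best_candidate_loss(preds, mask).backward()
```

Because only the minimum loss is kept, the other candidates remain free to cover alternative plausible segmentations, which helps preserve diversity across the model’s outputs.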
In tests on datasets of annotated medical images, Tyche’s predictions successfully captured the diversity of human annotators while running faster than most models, and its best predictions were superior to those of the baseline models, making Tyche a robust tool for the complexities and uncertainties of medical image segmentation. The research team plans to enhance Tyche’s capabilities and improve its worst predictions, envisioning a more flexible context set that might include text or multiple types of images, and intends to further develop the system so it can recommend the best segmentation candidates. This research was funded, in part, by the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer.