
Biomedical image segmentation involves marking the pixels that belong to important structures in a medical image, such as cells or organs, a step that is crucial for disease diagnosis and treatment. Most artificial intelligence (AI) models produce a single answer when making these annotations, but the task is not always so clear-cut.

In a recent paper, Marianne Rakic, an MIT computer science PhD candidate, and her colleagues from MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital propose an AI tool called Tyche to overcome this challenge. By offering multiple plausible segmentations, each highlighting different aspects of a medical image, Tyche lets the user select the annotation that best suits their analysis.

What sets Tyche apart is that it can handle new segmentation tasks without being retrained, which saves resources and makes it accessible to clinicians and biomedical researchers. They could use it to identify anomalies in a brain MRI or lesions in a lung X-ray to improve diagnosis, or to support biomedical research by drawing attention to critical data that might otherwise be overlooked.

This solution also addresses ambiguity in AI models, an underexplored area. The researchers explain that they want to capture uncertainty, which can carry vital information. Tyche also improves on existing systems in both speed and prediction quality.

At its core is a neural network that is given a group of roughly 16 example images, called a "context set." This allows Tyche to take on new segmentation tasks without retraining, and the tool can also adapt to accommodate a larger number of examples.
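
As a rough illustration of this in-context setup, the sketch below shows how a segmentation model might accept a context set of 16 image-and-mask examples alongside a new target image and produce a mask without any retraining. This is a minimal, hypothetical example, not the authors' implementation; the class name, layers, and call signature are all assumed for illustration.

```python
# A minimal sketch (not the authors' code) of in-context segmentation:
# the "context set" of example image/label pairs is passed alongside the
# new target image, so no retraining is needed for a new task.
import torch
import torch.nn as nn

class TycheLikeModel(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # Toy encoder for the target image.
        self.encoder = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        # Context images are encoded together with their masks.
        self.context_encoder = nn.Conv2d(2, channels, kernel_size=3, padding=1)
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, target, context_images, context_masks):
        # target: (B, 1, H, W); context_*: (B, K, 1, H, W) with K ~ 16 examples.
        b, k, _, h, w = context_images.shape
        ctx = torch.cat([context_images, context_masks], dim=2)     # (B, K, 2, H, W)
        ctx_feat = self.context_encoder(ctx.view(b * k, 2, h, w))   # (B*K, C, H, W)
        ctx_feat = ctx_feat.view(b, k, -1, h, w).mean(dim=1)        # pool over the K examples
        tgt_feat = self.encoder(target)
        return torch.sigmoid(self.head(tgt_feat + ctx_feat))        # one candidate mask

# Usage: a 16-image context set plus one new image defines a *new* task at test time.
model = TycheLikeModel()
target = torch.randn(1, 1, 64, 64)
ctx_imgs = torch.randn(1, 16, 1, 64, 64)
ctx_masks = torch.rand(1, 16, 1, 64, 64)
mask = model(target, ctx_imgs, ctx_masks)  # (1, 1, 64, 64) probability map
```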

The researchers modified the network so that, given a single medical image and a context set, it produces several different candidate segmentations. They did this by adjusting the network's pathways so that candidate segmentations created at different stages can communicate with one another and with the examples in the context set, as sketched below.
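
The sketch below illustrates one way such communication between candidates could be wired up: each candidate's features attend to the other candidates and to the context-set features, which lets the candidates coordinate and cover different plausible answers. The layer name, dimensions, and the attention-based mechanism are assumptions made for illustration, not the paper's exact architecture.

```python
# Hypothetical interaction layer: several candidate segmentations "talk" to
# one another and to the context examples via attention.
import torch
import torch.nn as nn

class CandidateInteraction(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, cand_feats, ctx_feats):
        # cand_feats: (B, M, D) one feature vector per candidate segmentation.
        # ctx_feats:  (B, K, D) features of the context-set examples.
        keys = torch.cat([cand_feats, ctx_feats], dim=1)
        # Each candidate attends to all candidates and all context examples.
        updated, _ = self.attn(query=cand_feats, key=keys, value=keys)
        return cand_feats + updated  # residual update of each candidate

layer = CandidateInteraction()
cands = torch.randn(2, 4, 32)    # batch of 2, 4 candidate segmentations, 32-dim features
ctx = torch.randn(2, 16, 32)     # 16 context examples
print(layer(cands, ctx).shape)   # torch.Size([2, 4, 32])
```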

Looking ahead, the researchers plan to develop a more flexible context set, improve Tyche's worst predictions, and enhance the system's ability to recommend the best segmentation candidates. Their work, presented at the IEEE Conference on Computer Vision and Pattern Recognition, was funded in part by the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer.
