
In biomedicine, segmentation identifies and highlights essential structures in medical images, such as organs or cells. Artificial intelligence (AI) models have shown promise in aiding clinicians by flagging pixels that may indicate disease or anomalies, but these models are far from perfect: different annotators often segment the same image differently, underscoring the inherent ambiguity of the task.

Marianne Rakic, a computer science PhD candidate at MIT, and her team have developed Tyche, an AI tool that addresses this problem by offering multiple segmentations. Named after the Greek goddess of chance, Tyche produces several plausible segmentations, each highlighting slightly different areas of a medical image. The underlying idea is that presenting multiple options, rather than a single answer, supports better decision-making.

A key advantage of Tyche is that it does not require retraining for each new segmentation task, which makes it significantly easier for clinicians and biomedical researchers to use. It can be applied immediately to a variety of tasks, such as identifying lesions in lung X-rays or pinpointing anomalies in brain MRIs. As a result, Tyche could improve disease diagnosis and advance biomedical research by highlighting critical information that other AI tools may overlook.

To handle ambiguity, the team modified a simple neural network architecture. When Tyche is given a small context set of examples that demonstrate the segmentation task, it can generate multiple predictions from a single medical image input. By adjusting the network's layers, the researchers ensured that each candidate segmentation differs slightly from the others while still fulfilling the task.
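
To make the idea concrete, here is a minimal, hypothetical sketch of such a design in PyTorch. It conditions on a small context set of image-and-mask pairs and injects fresh noise for each candidate, so repeated passes yield different plausible segmentations. All class and parameter names are our own illustration, not Tyche's published architecture.

```python
import torch
import torch.nn as nn

class InContextStochasticSegmenter(nn.Module):
    """Illustrative sketch (not Tyche itself): condition on a context set
    of (image, mask) pairs and draw a fresh noise map per candidate so
    each forward pass produces a different plausible segmentation."""

    def __init__(self, channels: int = 32, noise_dim: int = 8):
        super().__init__()
        self.noise_dim = noise_dim
        # Encodes the target image concatenated with an injected noise map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1 + noise_dim, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Encodes each context (image, mask) pair: 2 input channels.
        self.context_encoder = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(2 * channels, 1, 1)

    def forward(self, image, context_images, context_masks, num_candidates=4):
        # image: (B, 1, H, W); context tensors: (B, K, 1, H, W)
        b, k, _, h, w = context_images.shape
        pairs = torch.cat([context_images, context_masks], dim=2)
        ctx = self.context_encoder(pairs.flatten(0, 1)).view(b, k, -1, h, w)
        ctx = ctx.mean(dim=1)  # pool over the K context examples
        candidates = []
        for _ in range(num_candidates):
            # Fresh noise per candidate is what makes the predictions differ.
            z = torch.randn(b, self.noise_dim, h, w, device=image.device)
            feat = self.encoder(torch.cat([image, z], dim=1))
            candidates.append(torch.sigmoid(self.head(torch.cat([feat, ctx], dim=1))))
        return torch.stack(candidates, dim=1)  # (B, num_candidates, 1, H, W)
```

A call with num_candidates=4 returns four stacked candidate masks per input image, ready for a clinician or a downstream ranking step to compare.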

Tyche was tested on datasets of annotated medical images. Its predictions captured the diversity of human annotators, its best predictions outperformed those of baseline models, and it ran faster than most of them. Tyche even outperformed more complex models that were trained on large, specialized datasets.
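
A common way to score such stochastic models, consistent with the "best predictions" comparison above, is best-of-K Dice: each candidate is scored against a reference annotation and only the highest overlap counts. Below is a minimal sketch under that assumption; the function names are our own.

```python
import torch

def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice overlap between a binary prediction and a reference mask."""
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

def best_of_k_dice(candidates: torch.Tensor, target: torch.Tensor,
                   threshold: float = 0.5) -> torch.Tensor:
    """Score each of K candidate masks against one annotator's mask and
    keep the best, crediting a model when any plausible output matches."""
    scores = [dice((c > threshold).float(), target) for c in candidates]
    return torch.stack(scores).max()
```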

Future research will explore more flexible context sets, including text or multiple types of images. The team also intends to improve Tyche's least effective predictions and to refine the system so that it can recommend the best segmentation candidates.

This research was sponsored in part by the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer. The findings will be presented at the IEEE Conference on Computer Vision and Pattern Recognition.
