In biomedicine, segmentation is the process of marking significant structures in a medical image, such as cells or organs, which can aid in detecting and treating the diseases visible in those images. Despite this promise, current artificial intelligence (AI) systems used for medical image segmentation offer only a single segmentation result. That approach is not ideal, because medical images often contain grey areas where experts disagree about whether a region is an anomaly or where an organ's boundaries end.
Marianne Rakic, a PhD candidate at MIT, leads a team of researchers from MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital that has developed an AI tool called Tyche. Unlike existing models, Tyche provides multiple plausible segmentations of a medical image: instead of highlighting one fixed area, it marks a different area in each segmentation, and users can specify how many outputs they want. This makes Tyche a particularly powerful tool for offering insight into areas of uncertainty in medical images.
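As a rough illustration of how multiple candidate segmentations can translate into an uncertainty signal, the sketch below treats the model as a black box that returns a user-chosen number of masks and computes a pixelwise disagreement map. The `segment` function here is a hypothetical stand-in, not Tyche's actual interface.

```python
import numpy as np

def segment(image: np.ndarray, num_candidates: int) -> list[np.ndarray]:
    """Hypothetical stand-in for a stochastic segmenter like Tyche:
    returns `num_candidates` binary masks for `image` (random here)."""
    rng = np.random.default_rng()
    return [(rng.random(image.shape) > 0.5).astype(np.uint8)
            for _ in range(num_candidates)]

image = np.random.rand(256, 256)          # e.g. one slice of a brain MRI
masks = segment(image, num_candidates=5)  # the user picks how many outputs to inspect

# Pixels where the candidate masks disagree highlight ambiguous regions.
stacked = np.stack(masks)                 # shape (5, 256, 256)
mean_mask = stacked.mean(axis=0)          # fraction of candidates marking each pixel
uncertainty = mean_mask * (1 - mean_mask) # largest where candidates split evenly
print("Most ambiguous pixel score:", uncertainty.max())
```

Regions with high disagreement are exactly the grey areas where a clinician may want to take a closer look.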
Tyche presents a significant advantage over traditional AI systems because it does not need to be retrained to adapt to a new segmentation task. Retraining an AI model demands substantial amounts of data and machine-learning expertise. Tyche therefore offers clinicians and researchers a more user-friendly option that can be applied immediately to diverse tasks, such as identifying anomalies in a lung X-ray or a brain MRI.
By generating multiple segmentations, Tyche may highlight important details that other AI tools would overlook, potentially improving diagnosis or aiding thorough biomedical research. The system could also help in situations where an expert consensus alone does not capture everything relevant in an image. For instance, if only some experts flag a possible nodule, a model trained to produce a single consensus segmentation might miss it entirely; by presenting several plausible segmentations, Tyche keeps such candidates visible for attention.
A main challenge in developing Tyche was addressing two issues the researchers identified through their collaborations with the Broad Institute and MGH: current models cannot capture uncertainty, and they must be retrained even for slightly different segmentation tasks. The researchers built Tyche by altering a conventional neural network architecture so that it produces multiple predictions from a single medical image together with a small set of example images, without retraining for different tasks.
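The sketch below shows the general idea in PyTorch: the network conditions on a small context set of example images and masks, and injects random noise alongside the target image so that repeated forward passes yield different plausible segmentations. The layer sizes and fusion scheme are illustrative assumptions, not the published Tyche architecture.

```python
import torch
import torch.nn as nn

class InContextStochasticSegmenter(nn.Module):
    """Toy illustration: condition on a context set and inject noise so
    repeated forward passes yield different plausible masks. Details are
    assumptions for illustration, not the published Tyche design."""

    def __init__(self, channels: int = 16):
        super().__init__()
        # Encode the target image concatenated with a per-sample noise channel.
        self.target_enc = nn.Conv2d(2, channels, kernel_size=3, padding=1)
        # Encode each (context image, context mask) pair.
        self.context_enc = nn.Conv2d(2, channels, kernel_size=3, padding=1)
        self.head = nn.Conv2d(2 * channels, 1, kernel_size=1)

    def forward(self, image, ctx_images, ctx_masks):
        # image: (B, 1, H, W); ctx_images, ctx_masks: (B, S, 1, H, W)
        noise = torch.randn_like(image)                       # source of diversity
        t = torch.relu(self.target_enc(torch.cat([image, noise], dim=1)))
        B, S, _, H, W = ctx_images.shape
        ctx = torch.cat([ctx_images, ctx_masks], dim=2).reshape(B * S, 2, H, W)
        c = torch.relu(self.context_enc(ctx)).reshape(B, S, -1, H, W).mean(dim=1)
        return torch.sigmoid(self.head(torch.cat([t, c], dim=1)))  # (B, 1, H, W)

model = InContextStochasticSegmenter()
img = torch.rand(1, 1, 64, 64)                                # target image
ctx_i = torch.rand(1, 8, 1, 64, 64)                           # example images
ctx_m = torch.rand(1, 8, 1, 64, 64)                           # example masks
# Each call draws fresh noise, so the candidate masks differ from one another.
candidates = [model(img, ctx_i, ctx_m) for _ in range(5)]
```

Because the task is defined entirely by the context set passed at inference time, switching to a new segmentation task means supplying new example images and masks rather than retraining the network.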
Early results appear promising: in a preliminary test on a dataset of annotated medical images, Tyche's predictions captured the diversity of human annotators, and it was both more accurate and faster than current models. The researchers are optimistic that Tyche will prove its worth, and in the near future they plan to explore more flexible context sets, such as text or multiple types of images, as well as methods to improve Tyche's least accurate predictions.