Segmentation, a practice in biomedicine in which the pixels belonging to an important structure in a medical image are annotated, can be aided by artificial intelligence (AI) models. These models, however, typically return only a single answer, while medical image segmentation often admits a range of valid interpretations: multiple human experts may annotate the same image differently, creating uncertainty about which reading is correct.
Recognizing this problem, Marianne Rakic, an MIT computer science PhD candidate, alongside other researchers from the Broad Institute of MIT and Harvard and from Massachusetts General Hospital, developed Tyche, an AI tool capable of presenting multiple plausible segmentations of a medical image.
Named after the Greek divinity of chance, Tyche is designed to highlight different areas of a medical image and offer multiple interpretations. The system lets users choose how many candidate outputs Tyche presents and select the most suitable one.
One of the key advantages of Tyche is its ability to take on new segmentation tasks without retraining, a data-intensive process that requires many examples and considerable machine-learning expertise. As such, Tyche is a more user-friendly option for medical professionals, able to operate “out of the box” for tasks such as identifying lesions in a lung X-ray or pinpointing anomalies in a brain MRI.
Rakic and her team note that the ambiguity inherent in many medical images has been understudied and is poorly handled by existing AI models. Tyche's capacity to flag potential anomalies and uncertainties that other models might miss could enhance clinical diagnoses and aid biomedical research.
Tyche was created by modifying a simple neural network architecture. The user first provides a few annotated examples, which allows the model to learn the task and to recognize that it may be ambiguous. Only 16 example images, referred to as a “context set”, are required for the model to generate credible predictions, though more examples can be provided if desired.
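To make the in-context setup concrete, the sketch below shows one way a small context set of annotated examples might be packaged alongside a new image. The array shapes, the `build_context_set` helper, and the grayscale assumption are illustrative choices, not details taken from the Tyche work.

```python
# Minimal sketch (illustrative only): packaging a 16-example "context set"
# of image/annotation pairs so an in-context model can infer the task from
# examples rather than from retraining. Shapes and names are assumptions.
import numpy as np

def build_context_set(images, masks, size=16):
    """Stack `size` annotated examples into context arrays.

    images: list of 2-D grayscale scans, each shaped (H, W)
    masks:  list of binary annotations, each shaped (H, W)
    """
    if len(images) < size or len(masks) < size:
        raise ValueError(f"need at least {size} annotated examples")
    ctx_images = np.stack(images[:size]).astype(np.float32)  # (size, H, W)
    ctx_masks = np.stack(masks[:size]).astype(np.float32)    # (size, H, W)
    return ctx_images, ctx_masks
```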
The neural network was altered to output multiple predictions from a single input image and the context set, while keeping the candidate predictions varied yet relevant to the task at hand.
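One common way to train a network that emits several plausible segmentations is a “best candidate” objective: generate K candidates per input and keep only the loss of the candidate closest to the reference annotation, which leaves the remaining candidates free to cover alternative interpretations. The sketch below illustrates that idea with a hypothetical `model` callable and a Dice-style loss; it is an assumption-laden outline and may not match the authors' exact training objective.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def best_candidate_loss(model, image, ctx_images, ctx_masks, target, k=4):
    """Score K candidate segmentations and keep the loss of the best one.

    `model(image, ctx_images, ctx_masks, num_candidates)` is a hypothetical
    callable returning an array of shape (k, H, W) of probability maps.
    In a real training loop this minimum would be backpropagated, so only
    the best-matching candidate is pulled toward the reference annotation.
    """
    candidates = model(image, ctx_images, ctx_masks, num_candidates=k)
    losses = [dice_loss(c, target) for c in candidates]
    return min(losses)
```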
When tested on datasets of annotated medical images, Tyche was shown to capture the diversity of human annotators, making better and faster predictions than most baseline models and outperforming more complex AI systems trained on large, specialised datasets. Furthermore, Rakic and her team developed a variant of Tyche that can be paired with a pre-existing model for medical image segmentation, enabling that model to output multiple candidates.
Looking ahead, the researchers plan to use a more flexible context set, potentially including text or multiple types of images. They also intend to improve Tyche's least accurate predictions and refine the system so it can recommend the best segmentation candidates.