
Exploring the Functionality of ‘Black Box’ Drug Discovery Models

A team of scientists at the University of Bonn, working under the direction of Prof. Dr. Jürgen Bajorath, has made significant strides in Artificial Intelligence (AI) research for pharmaceuticals. Their study, recently published in Nature Machine Intelligence, sheds light on the often mysterious inner workings of the AI models used in drug discovery.
The research challenges previous assumptions about the predictive capabilities of AI in pharmaceuticals: the team found that these models primarily rely on recalling existing data rather than learning new chemical interactions.
The AI-assisted drug discovery field experienced a flurry of accomplishments in 2023, from MIT-developed models that analyzed millions of compounds for potential therapeutic effects, to AI-discovered drugs showing promise in slowing aging, to AI-generated proteins with strong binding affinity.
To demystify the process, the team focused on Graph Neural Networks (GNNs), a class of machine learning models commonly used in drug discovery. Prof. Bajorath pointed out that GNNs are like a “black box we can’t glimpse into,” and the research team set out to open that box.
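
For readers unfamiliar with GNNs, the sketch below illustrates the general idea: a molecule is represented as a graph of atoms (nodes) and bonds (edges), message-passing layers let each atom gather information from its neighbours, and a readout step produces a single prediction. It is a generic illustration written with the PyTorch Geometric library; the layer types, dimensions, and prediction target are assumptions made for this sketch and do not correspond to any of the six architectures examined in the study.

```python
# Minimal, generic sketch of a GNN for molecular property prediction
# (illustrative only; not one of the architectures analyzed in the study).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class CompoundGNN(torch.nn.Module):
    def __init__(self, num_node_features: int, hidden_dim: int = 64):
        super().__init__()
        # Message-passing layers: each atom updates its state from its bonded neighbours.
        self.conv1 = GCNConv(num_node_features, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        # Readout head: map the pooled molecule representation to a single score.
        self.out = torch.nn.Linear(hidden_dim, 1)

    def forward(self, x, edge_index, batch):
        # x: per-atom features, edge_index: bond list, batch: molecule membership
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # one vector per molecule
        return self.out(x)              # e.g. a predicted activity value
```

Because the output emerges from many stacked, learned transformations, it is difficult to trace which structural features drove a particular prediction, which is exactly what makes such models feel like a black box.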
After analyzing six different GNN architectures, the team reached a striking conclusion. Study author Andrea Mastropietro, a PhD candidate at Sapienza University in Rome, explains that the GNNs are “very dependent on the data they are trained with” and that the models often “remember” rather than “learn” new interactions.
This phenomenon somewhat resembles the “Clever Hans effect,” named after a horse that appeared to solve arithmetic problems by reading subtle cues from its handler rather than actually understanding mathematics. The researchers concluded that the GNNs’ ability to learn chemical interactions is often overestimated.

The findings suggest that simpler methods might often be equally effective, although some GNNs did show potential for learning additional interactions, indicating that improved training techniques could enhance their performance. In the hope of creating “Explainable AI”, that is, AI models with transparent decision-making processes, Prof. Bajorath and his team are also working on methods to clarify how these models arrive at their predictions.
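
One way to make the “simpler methods” observation concrete is a control experiment: compare a complex model against a nearest-neighbour baseline that simply looks up the most similar training compounds by fingerprint similarity. The sketch below, using RDKit and scikit-learn, is a hypothetical illustration of such a baseline; it is not the evaluation protocol from the paper, and the molecules and activity values are placeholders.

```python
# Hypothetical control experiment: if a nearest-neighbour lookup on molecular
# fingerprints matches a complex model's accuracy, the complex model may be
# recalling training data rather than learning new chemistry.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error


def fingerprint(smiles: str, n_bits: int = 2048) -> np.ndarray:
    """Encode a molecule as a Morgan (circular) fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr.astype(bool)


# Placeholder data: SMILES strings with made-up activity values.
train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]
train_activity = [1.2, 2.8, 3.5]
test_smiles = ["CCN", "c1ccccc1N"]
test_activity = [1.0, 2.5]

X_train = np.array([fingerprint(s) for s in train_smiles])
X_test = np.array([fingerprint(s) for s in test_smiles])

# Baseline: predict the activity of the single most similar training compound
# (Jaccard distance on bit vectors corresponds to Tanimoto similarity).
baseline = KNeighborsRegressor(n_neighbors=1, metric="jaccard")
baseline.fit(X_train, train_activity)
print("baseline MAE:", mean_absolute_error(test_activity, baseline.predict(X_test)))
```

If a lookup-style baseline like this reaches roughly the same error as a trained GNN, that is a hint that the network’s apparent chemical understanding largely rests on similarity to compounds it has already seen.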

The study is an important step toward understanding the inner workings of AI models used in pharmaceutical research, and it should pave the way for further developments in AI-assisted drug discovery.
