Published Date: 9/10/2025
"Previous AI models can predict Gleason scores, but they often provide no comprehensible explanation, which limits their clinical acceptance," explains Titus Brinker from the DKFZ (German Cancer Research Center). The newly developed system dispenses with retrospective explanations and is instead grounded directly in pathological descriptions. To this end, 1,015 tissue samples were annotated with detailed explanations by international experts.
The study, which involved 54 pathologists from ten countries, produced one of the most comprehensive collections of explanation-based tissue annotations. Building on it, the Heidelberg team introduces “GleasonXAI,” an AI that offers interpretable decisions—similar to those a pathologist would provide.
By using so-called “soft labels,” which reflect the uncertainties between individual pathologist assessments, the AI was able to achieve reproducible results despite high variability. In a direct comparison with conventional models, GleasonXAI achieved equivalent or better accuracy—while also offering increased transparency.
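The soft-label idea described above can be illustrated with a minimal sketch. The function names, the toy annotator masks, and the three-class setup here are illustrative assumptions, not the study's actual pipeline: each annotator's segmentation mask is one-hot encoded and averaged per pixel, so pixels where experts disagree become mixed probability distributions rather than forced hard votes, and the model is trained against those distributions with a soft cross-entropy loss.

```python
import numpy as np

def soft_labels(annotations: np.ndarray, num_classes: int) -> np.ndarray:
    """Average per-annotator class maps into a per-pixel distribution.

    annotations: (num_annotators, H, W) array of integer class indices.
    Returns: (H, W, num_classes) soft-label distribution per pixel.
    """
    # One-hot encode each annotator's mask, then average across annotators.
    one_hot = np.eye(num_classes)[annotations]  # (A, H, W, C)
    return one_hot.mean(axis=0)                 # (H, W, C)

def soft_cross_entropy(pred_probs: np.ndarray,
                       targets: np.ndarray,
                       eps: float = 1e-9) -> float:
    """Cross-entropy between predicted probabilities and soft targets."""
    return float(-(targets * np.log(pred_probs + eps)).sum(axis=-1).mean())

# Three hypothetical annotators label a 2x2 patch with 3 tissue classes;
# they disagree on the bottom-right pixel.
ann = np.array([
    [[0, 1], [2, 2]],
    [[0, 1], [2, 1]],
    [[0, 1], [2, 2]],
])
targets = soft_labels(ann, num_classes=3)
# The disputed pixel becomes a mixed distribution ([0, 1/3, 2/3])
# instead of a majority-vote hard label, preserving the uncertainty.
```

Training against such distributions penalizes a model less for predicting a plausible minority opinion on ambiguous tissue, which is one way to obtain stable results despite high inter-rater variability.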
AI speaks the language of pathologists
Pathologists from Germany, the US, Canada, Switzerland, and other countries participated in the study. The experts contributed a median of 15 years of clinical experience to the project. In addition to developing the model, the team is also publishing the largest freely available dataset to date with explanatory annotations for Gleason patterns in order to further advance research on explainable AI.
"For the first time, we have developed an AI system that recognizes the characteristic tissue features of Gleason patterns and explains them in a way similar to a pathologist," says Gesa Mittmann, co-author of the study. "This should increase trust in and acceptance of AI in everyday clinical practice."
Potential for clinical practice
The results show that explainable AI can be implemented in a practical manner without compromising performance. This could accelerate its use in routine pathology—which is highly relevant, especially in times of rising cancer rates and declining specialist capacities.
In addition, the model supports training: "The explainable segmentations can help young pathologists in particular to understand typical patterns and reach reliable diagnoses more quickly," emphasizes Brinker.
Q: What is the Gleason grading system?
A: The Gleason grading system is a method used to determine the aggressiveness of prostate cancer based on the appearance of cancer tissue under a microscope.
Q: What is the main advantage of GleasonXAI over previous AI models?
A: GleasonXAI provides transparent and interpretable decisions, similar to those made by pathologists, which enhances clinical acceptance and trust in AI.
Q: How does GleasonXAI handle uncertainties in pathologist assessments?
A: GleasonXAI uses "soft labels" to reflect the uncertainties between individual pathologist assessments, ensuring reproducible results despite variability.
Q: What is the significance of the dataset published by the Heidelberg team?
A: The dataset is the largest freely available collection of explanation-based tissue annotations for Gleason patterns, which helps advance research on explainable AI.
Q: How can GleasonXAI assist in training young pathologists?
A: The explainable segmentations provided by GleasonXAI can help young pathologists understand typical patterns and make reliable diagnoses more quickly, enhancing their training and performance.