
Bridging the Interpretability Gap in Medical Machine Learning

An interpretable model is far more useful as a second pair of eyes.

By Areeba Abid

Diagnostic machine learning algorithms are already outperforming physicians in a range of specialties, including ophthalmology, radiology, and dermatology. We’ve seen these algorithms surpass humans in their ability to classify retinal fundus images [1], chest X-rays [2], and melanomas [3]. So why do we rarely see doctors using these models in the day-to-day practice of medicine?

ML algorithms often surpass human performance in diagnostics, but are rarely put into practice. (Image by author.)

Often, the missing piece is interpretability, or the ability of a model to explain why it has given an output. “Black box” models, or models that simply provide a prediction with no explanation, are likely to face challenges in building user trust, even if their performance is shown to exceed that of humans. People have difficulty trusting what they don’t understand, and rightly so in machine learning, given that models are often not learning what their developers intended. This is especially important to address in medicine, where stakes are much higher and lives are literally on the line. We may continue to see breakthroughs in medical ML accuracy, but if the clinical decisions and diagnoses from these models can’t be interpreted, it’s likely they’ll keep collecting dust.

Why are interpretable models better?

From the perspectives of clinicians and patients, a model’s output is not very meaningful or accountable if it can’t be explained. An algorithm that classifies pneumonia, but can’t say why a patient has this diagnosis, is less likely to be trusted and valued than a model that can share some insight regarding its “reasoning”.

Black box models provide no reasoning to explain their outputs. Transparent models allow a peek into the model’s understanding. (Image by author.)

An interpretable model is far more useful as a second pair of eyes. Imagine being a radiologist and seeing that an ML model has predicted a patient has pneumonia. Looking at the X-ray, you might notice edema in the lower right lung, leading you to suspect pneumonia yourself. You would be much more likely to trust the model if you could see that it was focusing on the same areas of the image, indicating that its “thought process” likely lined up with your own. And even if you disagreed with the model’s prediction, you could still extract value from its interpretation before disregarding the output.

Increasingly, interpretability looks like the missing building block for advances in health AI. For high-stakes decisions like predicting whether a patient has a disease or how long they will live, interpretability is critical for evaluating a model’s fidelity and building user trust. Beyond that, medicine is a conservative field: physicians are unlikely to welcome technology they don’t understand intruding into their practice.

What’s the solution?

If interpretability is so crucial for applying machine learning models in the real world, why are interpretable models not the norm? Interpretability is a growing area of research, but not all machine learning architectures lend themselves to it, and it can come at the cost of performance.

Interpretability used to be hard to implement too, but new tools and libraries are making this much easier. Gradio is a Python library that automatically generates a model-agnostic interpretation for any input, along with an interface that can quickly be shared with domain experts to assess model fidelity. Using Gradio, the pneumonia model could be sent over to radiologists beforehand to evaluate how well the model picks up on the same pathological patterns that human doctors appreciate.

Step 1: Create an interface that can be shared with domain experts who can assess the model’s fidelity. (Image by author.)
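
As a rough sketch of what Step 1 could look like, the snippet below wraps a hypothetical pneumonia classifier in a Gradio interface and generates a shareable link. The classify_xray function and its dummy probabilities are placeholders for a real model; only the Gradio wiring is meant literally.

```python
# Minimal sketch: wrap a (hypothetical) pneumonia classifier in a shareable
# Gradio interface. The model call is a placeholder.
import gradio as gr
import numpy as np

def classify_xray(image: np.ndarray) -> dict:
    """Return class probabilities for a single chest X-ray image."""
    # Replace with a real model call, e.g. probs = pneumonia_model(image)
    return {"pneumonia": 0.85, "normal": 0.15}  # dummy values for illustration

demo = gr.Interface(
    fn=classify_xray,
    inputs=gr.Image(type="numpy", label="Chest X-ray"),
    outputs=gr.Label(num_top_classes=2, label="Prediction"),
    title="Pneumonia classifier (demo)",
)

# share=True creates a temporary public link that can be sent to radiologists
# for review, with no deployment work on their end.
demo.launch(share=True)
```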

The tool also lowers the technical barrier to interpreting ML models, allowing users to write their own interpretation function or simply use the built-in interpretation method for a quick analysis.

Step 2: Automatically generate an interpretation for any model output. Above, red sections of the image have more significance to the model’s classification. (Image by author.)
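
Under the same assumptions, the sketch below turns on interpretation for that interface. In early Gradio releases this was a single interpretation argument on Interface, which accepted "default" for the built-in method or a custom function; newer releases may expose this differently.

```python
# Minimal sketch: ask Gradio for a built-in, model-agnostic interpretation of
# each prediction. Assumes a Gradio version that supports the `interpretation`
# argument; `classify_xray` is the hypothetical function from the sketch above.
import gradio as gr

demo = gr.Interface(
    fn=classify_xray,
    inputs=gr.Image(type="numpy", label="Chest X-ray"),
    outputs=gr.Label(num_top_classes=2, label="Prediction"),
    # "default" highlights the regions of the input that most affect the
    # prediction; a custom callable can be passed here instead.
    interpretation="default",
)

demo.launch(share=True)
```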

ML researchers need interpretability to unlock their algorithms’ potential in medicine. To learn how to implement it for your own models, check out more examples here.

References

[1] Gulshan V, Peng L, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. (2016) Journal of the American Medical Association.

[2] Rajpurkar P, O’Connell C, Schechter A, et al. CheXaid: deep learning assistance for physician diagnosis of tuberculosis using chest x-rays in patients with HIV. (2020) npj Digital Medicine.

[3] Liu Y, Jain A, Eng C, et al. A deep learning system for differential diagnosis of skin diseases. (2020) Nature Medicine.