Agree to Disagree: When Deep Learning Models With Identical Architectures Produce Distinct Explanations
Watson, M.; Awwad Shiekh Hasan, B.; Al Moubayed, N.
Authors
Matthew Watson matthew.s.watson@durham.ac.uk
Postdoctoral Research Associate
Dr Bashar Awwad Shiekh Hasan bashar.awwad-shiekh-hasan@durham.ac.uk
Academic Visitor
Dr Noura Al Moubayed noura.al-moubayed@durham.ac.uk
Associate Professor
Abstract
Deep learning of neural networks has progressively become more prominent in healthcare, with models reaching, or even surpassing, expert accuracy levels. However, these success stories are tainted by concerning reports on the lack of model transparency and bias against some medical conditions or patient sub-groups. Explainability methods are considered the gateway to alleviating many of these concerns. In this study we demonstrate that the generated explanations are sensitive to changes in model training that are orthogonal to the classification task and model structure. This raises further questions about trust in deep learning models for healthcare: mainly, whether the models capture underlying causal links in the data or merely rely on spurious correlations that are made visible via explanation methods. We demonstrate that the output of explainability methods on deep neural networks can vary significantly with changes to hyper-parameters, such as the random seed or how the training set is shuffled. We introduce a measure of explanation consistency, which we use to highlight the identified problems on the MIMIC-CXR dataset. We find that explanations of identical models trained with different setups have low consistency: ≈ 33% on average. In contrast, kernel methods are robust against such orthogonal changes, with explanation consistency at 94%. We conclude that current trends in model explanation are not sufficient to mitigate the risks of deploying models in real-life healthcare applications.
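The idea of an explanation-consistency measure can be illustrated with a small, self-contained sketch. This is not the authors' implementation: the `SmallCNN` model, the vanilla input-gradient saliency, and the top-k Jaccard score below are illustrative assumptions, and the two models are untrained toys that differ only in their random seed rather than networks trained on MIMIC-CXR.

```python
# Minimal sketch of an explanation-consistency check between two models
# with identical architectures but different random seeds. All names and
# the scoring choice (top-k Jaccard overlap) are illustrative assumptions,
# not the paper's exact metric.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(8 * 4 * 4, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def saliency(model, x):
    # Absolute input-gradient ("vanilla saliency") attribution for the
    # predicted class of a single image.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, logits[0].argmax()].backward()
    return x.grad.abs().squeeze()

def top_k_consistency(attr_a, attr_b, k=100):
    # Jaccard overlap between the k most-attributed pixels of the two
    # explanations: 1.0 means identical top-k regions, 0.0 none shared.
    top_a = set(attr_a.flatten().topk(k).indices.tolist())
    top_b = set(attr_b.flatten().topk(k).indices.tolist())
    return len(top_a & top_b) / len(top_a | top_b)

# Two "identical" models whose only difference is the initialisation seed,
# a stand-in for the paper's orthogonal training changes (random seed,
# data-shuffling order).
torch.manual_seed(0); model_a = SmallCNN()
torch.manual_seed(1); model_b = SmallCNN()

x = torch.randn(1, 1, 28, 28)  # stand-in for a chest X-ray
score = top_k_consistency(saliency(model_a, x), saliency(model_b, x))
print(f"explanation consistency: {score:.2%}")
```

In the study itself the comparison is between identically structured networks trained to convergence under different seeds or data-shuffling orders; the sketch above only demonstrates the mechanics of scoring agreement between two attribution maps.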
Citation
Watson, M., Awwad Shiekh Hasan, B., & Al Moubayed, N. (2022). Agree to Disagree: When Deep Learning Models With Identical Architectures Produce Distinct Explanations. Proc. Winter Conference on Applications of Computer Vision. IEEE. https://doi.org/10.1109/wacv51458.2022.00159
| Presentation Conference Type | Conference Paper (Published) |
| --- | --- |
| Conference Name | Proc. Winter Conference on Applications of Computer Vision |
| Start Date | Jan 3, 2022 |
| End Date | Jan 8, 2022 |
| Acceptance Date | Oct 4, 2021 |
| Online Publication Date | Feb 15, 2022 |
| Publication Date | 2022 |
| Deposit Date | Oct 27, 2021 |
| Publicly Available Date | Jan 9, 2022 |
| Publisher | Institute of Electrical and Electronics Engineers |
| DOI | https://doi.org/10.1109/wacv51458.2022.00159 |
| Public URL | https://durham-repository.worktribe.com/output/1138798 |
Files
Accepted Conference Proceeding (PDF, 1.6 MB)
Copyright Statement
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
You might also like
Is Unimodal Bias Always Bad for Visual Question Answering? A Medical Domain Study with Dynamic Attention
(2022)
Presentation / Conference Contribution
Towards Graph Representation Learning Based Surgical Workflow Anticipation
(2022)
Presentation / Conference Contribution
Efficient Uncertainty Quantification for Multilabel Text Classification
(2022)
Presentation / Conference Contribution
Contrastive Learning with Heterogeneous Graph Attention Networks on Short Text Classification
(2022)
Presentation / Conference Contribution
INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations
(2022)
Presentation / Conference Contribution