Explanations are generated to accompany a model decision, indicating the features of the input data that were most relevant to that decision. Explanations are important not only for understanding the decisions of deep neural networks, which despite their huge success across multiple domains operate largely as abstract black boxes, but also for other model classes such as gradient-boosted decision trees. In this work, we propose methods, using both Bayesian and non-Bayesian approaches, to augment explanations with uncertainty scores. We believe that uncertainty-augmented saliency maps can help better calibrate the trust between a human analyst and a machine learning model.
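As an illustration of the general idea (not the authors' specific implementation), one Bayesian-style route to uncertainty-augmented saliency maps is to sample gradient-based saliency maps under Monte Carlo dropout and report their mean as the explanation and their standard deviation as a per-feature uncertainty score. The sketch below assumes a PyTorch classifier containing dropout layers; the function name and sampling count are hypothetical choices for demonstration.

```python
# Minimal sketch: uncertainty-augmented saliency via Monte Carlo dropout.
# Assumes `model` is a PyTorch classifier with dropout layers and `x` is a
# single input of shape (1, C, H, W).
import torch

def mc_dropout_saliency(model, x, target_class, n_samples=30):
    """Return (mean_saliency, uncertainty), each of shape (1, H, W)."""
    model.train()  # keep dropout active so each pass is a stochastic sample
    saliencies = []
    for _ in range(n_samples):
        x_in = x.clone().detach().requires_grad_(True)
        logits = model(x_in)
        logits[0, target_class].backward()
        # Gradient-based saliency: absolute input gradient, max over channels
        saliencies.append(x_in.grad.detach().abs().max(dim=1).values)
    stack = torch.stack(saliencies)      # (n_samples, 1, H, W)
    mean_saliency = stack.mean(dim=0)    # explanation shown to the analyst
    uncertainty = stack.std(dim=0)       # per-pixel uncertainty score
    return mean_saliency, uncertainty
```

Regions where the uncertainty map is large relative to the mean saliency flag attributions that the analyst should treat with caution.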