Proceedings Paper

Role of influence functions in model interpretability (Conference Presentation)
Author(s): Supriyo Chakraborty; Jorge Ortiz; Simon Julier

Paper Abstract

Deep Neural Networks (DNNs) have achieved near-human, and in some cases superhuman, accuracy in tasks such as machine translation, image classification, and speech processing. However, despite their enormous success, these models are often used as black boxes, with very little visibility into their inner workings. This opacity often hinders the adoption of these models for mission-critical and human-machine hybrid networks. In this paper, we will explore the role of influence functions in opening up these black-box models and in providing interpretability of their output. Influence functions are used to characterize the impact of training data on the model parameters. We will use these functions to analytically understand how the parameters are adjusted during the model training phase to embed the information contained in the training dataset. In other words, influence functions allow us to capture the change in the model parameters due to the training data. We will then use these parameters to provide interpretability of the model output for test data points.
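As background for the abstract's description, the standard formulation of influence functions for machine learning models (Koh and Liang, 2017) is sketched below; the abstract does not state which variant the paper adopts, so the notation here is illustrative rather than the authors' own. For a model with parameters \theta fit by empirical risk minimization over training points z_1, \dots, z_n with loss L,

\hat{\theta} = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} L(z_i, \theta),

infinitesimally upweighting a training point z perturbs the learned parameters by

\mathcal{I}_{\mathrm{up,params}}(z) = \left. \frac{d \hat{\theta}_{\epsilon, z}}{d \epsilon} \right|_{\epsilon = 0} = - H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}),

where H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta}) is the empirical Hessian. Applying the chain rule then gives the influence of z on the loss at a test point z_{\mathrm{test}},

\mathcal{I}_{\mathrm{up,loss}}(z, z_{\mathrm{test}}) = - \nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}),

which is the quantity typically used to attribute a model's prediction on z_{\mathrm{test}} back to individual training examples.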

Paper Details

Date Published: 14 May 2018
Proc. SPIE 10635, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX, 1063505 (14 May 2018); doi: 10.1117/12.2306009
Author Affiliations:
Supriyo Chakraborty, IBM Thomas J. Watson Research Ctr. (United States)
Jorge Ortiz, IBM Thomas J. Watson Research Ctr. (United States)
Simon Julier, Univ. College London (United Kingdom)


Published in SPIE Proceedings Vol. 10635:
Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX
Michael A. Kolodny; Dietrich M. Wiegmann; Tien Pham, Editor(s)

© SPIE.