
Proceedings Paper

An approach to explainable deep learning using fuzzy inference
Author(s): David Bonanno; Kristen Nock; Leslie Smith; Paul Elmore; Fred Petry

Paper Abstract

Deep learning has proven to be an effective method for making highly accurate predictions from complex data sources. Convolutional neural networks continue to dominate image classification problems, and recurrent neural networks have proven their utility in caption generation and language translation. While these approaches are powerful, they offer no explanation of how their outputs are generated. Without an understanding of how deep learning arrives at a solution, there is no guarantee that these networks will transition from controlled laboratory environments to fieldable systems. This paper presents an approach for incorporating rule-based methodology into neural networks by embedding fuzzy inference systems into deep learning networks.
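The full paper is not included on this page, so as an illustration only, here is a minimal sketch of the general technique the abstract names: a fuzzy inference system expressed as a layer-like computation that could sit inside a network. This uses a standard zero-order Takagi-Sugeno formulation; the rule centers, widths, and consequents are hypothetical placeholders, not the authors' configuration, and a learnable version would treat them as trainable parameters.

```python
import math

def gaussian_membership(x, center, width):
    """Degree to which crisp input x belongs to the fuzzy set
    centered at `center` with spread `width` (Gaussian MF)."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

class FuzzyInferenceLayer:
    """Zero-order Takagi-Sugeno fuzzy inference for one input variable:
    each rule pairs a Gaussian membership function with a constant
    consequent, and the output is the firing-strength-weighted average
    of the consequents (defuzzification). The rule firing strengths are
    inspectable, which is the source of the explainability claim."""

    def __init__(self, centers, widths, consequents):
        self.rules = list(zip(centers, widths, consequents))

    def forward(self, x):
        # Firing strength of each rule for this input
        strengths = [gaussian_membership(x, c, w) for c, w, _ in self.rules]
        # Normalized weighted sum of rule consequents
        total = sum(strengths)
        return sum(s * y for s, (_, _, y) in zip(strengths, self.rules)) / total

# Hypothetical three-rule system: "low", "medium", "high" fuzzy sets
layer = FuzzyInferenceLayer(centers=[-1.0, 0.0, 1.0],
                            widths=[1.0, 1.0, 1.0],
                            consequents=[0.0, 0.5, 1.0])
```

Because every operation is smooth, such a layer is differentiable and could in principle be trained end to end alongside conventional network layers, which is the kind of embedding the abstract describes.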

Paper Details

Date Published: 3 May 2017
PDF: 5 pages
Proc. SPIE 10207, Next-Generation Analyst V, 102070D (3 May 2017); doi: 10.1117/12.2268001
Author Affiliations
David Bonanno, U.S. Naval Research Lab. (United States)
Kristen Nock, U.S. Naval Research Lab. (United States)
Leslie Smith, U.S. Naval Research Lab. (United States)
Paul Elmore, U.S. Naval Research Lab. (United States)
Fred Petry, U.S. Naval Research Lab. (United States)

Published in SPIE Proceedings Vol. 10207:
Next-Generation Analyst V
Timothy P. Hanratty; James Llinas, Editor(s)

© SPIE.