
Proceedings Paper

Representational learning for sonar ATR
Author(s): Jason C. Isaacs

Paper Abstract

Learned representations have been shown to give promising results on a multitude of novel learning tasks, even when those tasks are unknown at the time the model is trained. Notable examples include topic models, deep belief networks, deep Boltzmann machines, and local discriminative Gaussians, all inspired by human learning. This self-directed learning of new concepts via rich generative models has emerged as a promising area of machine learning research. Despite recent progress, existing computational models remain far from being able to represent, identify, and learn the wide variety of possible patterns and structure in real-world data. An important open question is the use of unsupervised representations for novel underwater target recognition applications. This work discusses and demonstrates the use of latent Dirichlet allocation and autoencoders for learning unsupervised representations of objects in sonar imagery. The objective is to make these representations more abstract and more invariant to noise in the training distribution, and thereby to improve recognition performance.
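The abstract names autoencoders as one of the two representation learners applied to sonar imagery. As a rough illustration only, not the paper's implementation, the sketch below trains a small tied-weight denoising autoencoder in NumPy on synthetic stand-ins for flattened sonar patches; all choices (16x16 patches, 8 latent factors, 32 hidden units, the learning rate and noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for sonar image patches: each 16x16 patch is
# flattened to a 256-d vector generated from a few latent factors, so
# there is low-dimensional structure for the autoencoder to discover.
# (Real inputs would be image snippets cropped around detected objects.)
latent = rng.random((200, 8))
mixing = rng.normal(0.0, 1.0, (8, 256))
X = 1.0 / (1.0 + np.exp(-(latent @ mixing)))   # patch values in (0, 1)

n_hidden = 32                                  # size of the learned code
lr = 0.5                                       # illustrative learning rate
W = rng.normal(0.0, 0.1, (256, n_hidden))      # tied encoder/decoder weights
b_h = np.zeros(n_hidden)                       # hidden (code) bias
b_o = np.zeros(256)                            # output (reconstruction) bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

errors = []
for step in range(300):
    # Denoising objective: corrupt the input but reconstruct the clean
    # patch, encouraging codes that are invariant to input noise -- the
    # kind of noise robustness the abstract motivates.
    Xn = X + 0.1 * rng.standard_normal(X.shape)
    code = sigmoid(Xn @ W + b_h)               # encoder
    recon = sigmoid(code @ W.T + b_o)          # decoder (tied weights)

    # Backprop of squared error through both weight-sharing paths.
    d_out = (recon - X) * recon * (1.0 - recon)
    d_hid = (d_out @ W) * code * (1.0 - code)
    W -= lr * (Xn.T @ d_hid + d_out.T @ code) / len(X)
    b_h -= lr * d_hid.mean(axis=0)
    b_o -= lr * d_out.mean(axis=0)

    # Track clean-input reconstruction error over training.
    clean_code = sigmoid(X @ W + b_h)
    clean_recon = sigmoid(clean_code @ W.T + b_o)
    errors.append(np.mean((clean_recon - X) ** 2))
```

After training, `clean_code` is the 32-d unsupervised representation of each patch; in a recognition pipeline it would replace raw pixels as the classifier's input. The tied weights and input corruption are standard denoising-autoencoder choices, not details taken from the paper.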

Paper Details

Date Published: 9 June 2014
PDF: 9 pages
Proc. SPIE 9072, Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XIX, 907203 (9 June 2014); doi: 10.1117/12.2053057
Jason C. Isaacs, Naval Surface Warfare Ctr. Panama City Div. (United States)

Published in SPIE Proceedings Vol. 9072:
Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XIX
Steven S. Bishop; Jason C. Isaacs, Editor(s)

© SPIE.