
Optical Engineering

Multimodal pattern recognition by modular neural network
Author(s): Shulin Yang; Kuo-Chu Chang

Paper Abstract

Multilayer perceptrons (MLPs) have been widely applied to pattern recognition. When the data have a multimodal distribution, however, a standard MLP is hard to train, and a valid neural network classifier is difficult to obtain. We propose a two-phase learning modular (TLM) neural network architecture to tackle this problem. The basic idea is to transform the multimodal distribution into a known, more learnable distribution and then use a standard MLP to classify the new data. The transformation is accomplished by decomposing the input feature space into several subspaces and training several MLPs with the samples in each subset. We verified this idea with a two-class classification example, applied the TLM to inverse synthetic aperture radar (ISAR) automatic target recognition (ATR), and compared its performance with that of the MLP. Experiments show that the MLP is difficult to train; its performance depends strongly on the number of training samples as well as on the architecture parameters. The TLM, on the other hand, is much easier to train, yields better performance, and its performance is more robust.
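The two-phase idea in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the XOR-like multimodal data, the split on the first coordinate, the subspace-indicator features, and all network sizes are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_multimodal(n_per_mode=100):
    # Two-class data where each class occupies two well-separated modes
    # (an XOR-like layout), so the class-conditional densities are multimodal.
    centers = {0: [(0.0, 0.0), (4.0, 4.0)], 1: [(0.0, 4.0), (4.0, 0.0)]}
    X, y = [], []
    for label, cs in centers.items():
        for c in cs:
            X.append(rng.normal(c, 0.5, size=(n_per_mode, 2)))
            y += [label] * n_per_mode
    return np.vstack(X), np.array(y, dtype=float)

def train_mlp(X, y, hidden=8, lr=1.0, epochs=4000):
    # One-hidden-layer sigmoid MLP trained by batch gradient descent on
    # binary cross-entropy; returns a prediction function.
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)
        p = sig(h @ W2 + b2).ravel()
        g = ((p - y) / n)[:, None]            # dL/dz at the output unit
        gh = (g @ W2.T) * h * (1.0 - h)       # backprop through hidden layer
        W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
    return lambda Z: sig(sig(Z @ W1 + b1) @ W2 + b2).ravel()

X, y = make_multimodal()

# Phase 1: decompose the input space into subspaces (here, an illustrative
# split on the first coordinate) and train one expert MLP per subspace.
left = X[:, 0] < 2.0
experts = [train_mlp(X[left], y[left]), train_mlp(X[~left], y[~left])]

def transform(Z):
    # Transformed representation: every expert's output on every sample,
    # plus a subspace indicator (an assumption of this sketch).
    in_left = (Z[:, 0] < 2.0).astype(float)
    return np.column_stack([experts[0](Z), experts[1](Z), in_left, 1.0 - in_left])

# Phase 2: a standard MLP classifies the transformed, more learnable data.
final = train_mlp(transform(X), y)

acc = np.mean((final(transform(X)) > 0.5) == (y > 0.5))
print(f"TLM training accuracy: {acc:.3f}")
```

Each expert only has to separate the two modes inside its own subspace (a far simpler, unimodal-per-class problem), and the second-phase MLP works on the experts' outputs rather than the raw multimodal inputs.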

Paper Details

Date Published: 1 February 1998
PDF: 10 pages
Opt. Eng. 37(2), DOI: 10.1117/1.602016
Published in: Optical Engineering Volume 37, Issue 2
Author Affiliations:
Shulin Yang, George Mason Univ. (United States)
Kuo-Chu Chang, George Mason Univ. (United States)


© SPIE.