
Proceedings Paper

Less interclass disturbance learning for unsupervised neural computing
Author(s): Lurng-Kuo Liu; Panos A. Ligomenides

Paper Abstract

A number of training algorithms for neural networks are based on the 'competitive' learning method, which can be regarded as an adaptive process for tuning a neural network to specific features of the input; the network's responses then tend to become localized. A shortcoming of this model, however, is that some neural units can remain inactive: since a unit learns only when it wins the competition, some units may always be outperformed by others and therefore never learn. This paper presents a new unsupervised learning algorithm, less-interclass-disturbance (LID) learning, which addresses this limitation of the simple competitive neural network. The main idea of the method is to reinforce the competing neurons in such a way as to prevent the weights from 'fooling around.' A new compound similarity metric is introduced in this algorithm to reduce the interclass disturbance during the training process. The behavior of the algorithm was investigated through computer simulations, which show that LID learning is quite effective.
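The dead-unit problem the abstract describes can be seen in a minimal sketch of plain winner-take-all competitive learning (this is the baseline method the paper improves on, not the LID algorithm itself; the initialization, learning rate, and data here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_train(X, n_units, lr=0.1, epochs=20):
    """Simple winner-take-all competitive learning.

    Only the winning unit's weight vector moves toward each input,
    so a unit that never wins never learns -- a 'dead' unit.
    """
    W = rng.normal(size=(n_units, X.shape[1]))
    W[-1] += 10.0  # deliberately place one unit far from the data
    wins = np.zeros(n_units, dtype=int)
    for _ in range(epochs):
        for x in X:
            # Winner = unit whose weight vector is closest to the input.
            j = np.argmin(np.linalg.norm(W - x, axis=1))
            W[j] += lr * (x - W[j])  # only the winner is updated
            wins[j] += 1
    return W, wins

X = rng.normal(size=(100, 2))      # inputs clustered near the origin
W, wins = competitive_train(X, n_units=4)
print(wins)  # the far-away unit records zero wins and never moves
```

Because the outlying unit starts far from every input, it loses every competition and its weights are never adjusted; LID-style schemes aim to keep such units participating during training.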

Paper Details

Date Published: 1 November 1991
PDF: 12 pages
Proc. SPIE 1606, Visual Communications and Image Processing '91: Image Processing, (1 November 1991); doi: 10.1117/12.50316
Author Affiliations:
Lurng-Kuo Liu, Univ. of Maryland (United States)
Panos A. Ligomenides, Univ. of Maryland (United States)

Published in SPIE Proceedings Vol. 1606:
Visual Communications and Image Processing '91: Image Processing
Kou-Hu Tzou; Toshio Koga, Editor(s)

© SPIE.