
Proceedings Paper

Some new competitive learning schemes
Author(s): James C. Bezdek; Nikhil R. Pal; Richard J. Hathaway; Nicolaos B. Karayiannis

Paper Abstract

First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically for a certain scaling of the input data. We demonstrate the problem using the IRIS data. Then, we show that GLVQ can behave incorrectly because its learning rates are reciprocally dependent on the sum of squares of distances from an input vector to the node weight vectors. Finally, we propose a new family of models -- the GLVQ-F family -- that remedies the problem. We derive algorithms for competitive learning using the GLVQ-F model, and prove that they are invariant to all positive scalings of the data. The learning rule for GLVQ-F updates all nodes using a learning rate function which is inversely proportional to their distance from the input data point. We illustrate the failure of GLVQ and success of GLVQ-F with the ubiquitous IRIS data.

Paper Details

Date Published: 6 April 1995
PDF: 12 pages
Proc. SPIE 2492, Applications and Science of Artificial Neural Networks, (6 April 1995); doi: 10.1117/12.205158
Author Affiliations:
James C. Bezdek, Univ. of West Florida (United States)
Nikhil R. Pal, Indian Statistical Institute (India)
Richard J. Hathaway, Georgia Southern Univ. (United States)
Nicolaos B. Karayiannis, Univ. of Houston (United States)

Published in SPIE Proceedings Vol. 2492:
Applications and Science of Artificial Neural Networks
Editors: Steven K. Rogers; Dennis W. Ruck

© SPIE.