Proceedings Paper

Some new competitive learning schemes
Author(s): James C. Bezdek; Nikhil R. Pal; Richard J. Hathaway; Nicolaos B. Karayiannis

Paper Abstract

First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically for a certain scaling of the input data. We demonstrate the problem using the IRIS data. Then, we show that GLVQ can behave incorrectly because its learning rates are reciprocally dependent on the sum of squares of distances from an input vector to the node weight vectors. Finally, we propose a new family of models -- the GLVQ-F family -- that remedies the problem. We derive algorithms for competitive learning using the GLVQ-F model, and prove that they are invariant to all positive scalings of the data. The learning rule for GLVQ-F updates all nodes using a learning rate function which is inversely proportional to their distance from the input data point. We illustrate the failure of GLVQ and success of GLVQ-F with the ubiquitous IRIS data.
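
The update rule sketched in the abstract lends itself to a short illustration. Below is a minimal NumPy sketch of a GLVQ-F-style step, assuming an FCM-style weighting with a fuzzifier m; the function name, parameter values, and this exact weighting are illustrative assumptions, not the update rule derived in the paper.

```python
import numpy as np

def glvqf_style_update(x, prototypes, lr=0.1, m=2.0, eps=1e-12):
    """One competitive-learning step in the spirit of GLVQ-F.

    Every prototype moves toward the input x, weighted by an
    FCM-style membership that decreases with the prototype's
    distance from x. Illustrative sketch only: not the paper's
    derived formula.
    """
    d2 = np.sum((prototypes - x) ** 2, axis=1) + eps  # squared distances (eps guards division)
    # Weights inversely related to distance, normalized to sum to 1.
    # They depend only on distance *ratios*, so scaling all the data
    # by c > 0 leaves them unchanged -- the invariance property the
    # abstract says plain GLVQ lacks.
    w = d2 ** (-1.0 / (m - 1.0))
    w /= w.sum()
    prototypes += lr * w[:, None] * (x - prototypes)  # update all nodes, not just the winner
    return prototypes

# Tiny usage example: four prototypes in a 4-D feature space
# (e.g. the IRIS data used in the paper), with random inputs.
rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 4))
protos = glvqf_style_update(rng.normal(size=4), protos)
```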

Paper Details

Date Published: 6 April 1995
PDF: 12 pages
Proc. SPIE 2492, Applications and Science of Artificial Neural Networks, (6 April 1995); doi: 10.1117/12.205158
Author Affiliations:
James C. Bezdek, Univ. of West Florida (United States)
Nikhil R. Pal, Indian Statistical Institute (India)
Richard J. Hathaway, Georgia Southern Univ. (United States)
Nicolaos B. Karayiannis, Univ. of Houston (United States)


Published in SPIE Proceedings Vol. 2492:
Applications and Science of Artificial Neural Networks
Steven K. Rogers; Dennis W. Ruck, Editors
