
Proceedings Paper

Fuzzy algorithms for learning vector quantization: generalizations and extensions
Authors: Nicolaos B. Karayiannis; Pin-I Pai

Paper Abstract

This paper presents a general methodology for the development of fuzzy algorithms for learning vector quantization (FALVQ). These algorithms can be used to train feature maps to perform pattern clustering through an unsupervised learning process. The development of FALVQ algorithms is based on the minimization of a fuzzy objective function, formed as the weighted sum of the squared Euclidean distances between an input vector, which represents a feature vector, and the weight vectors of the map, which represent the prototypes. This formulation leads to the development of genuinely competitive algorithms, which allow all prototypes to compete for matching each input. The FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms are developed by selecting admissible generalized membership functions with different properties. The efficiency of the proposed algorithms is illustrated by their use in codebook design required for image compression based on vector quantization.
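The core idea described above can be sketched in a few lines: a fuzzy objective formed as a membership-weighted sum of squared Euclidean distances, and a competitive update in which every prototype moves toward each input. This is a minimal illustration, not the paper's method: the specific admissible generalized membership functions defining FALVQ 1, 2, and 3 are not reproduced here, so a hypothetical inverse-distance membership is used as a stand-in, and the update rule is a simplified membership-weighted step rather than the paper's derived gradient rule.

```python
import numpy as np

def fuzzy_memberships(x, prototypes, eps=1e-12):
    # Hypothetical inverse-distance membership (stand-in for the paper's
    # admissible generalized membership functions); values sum to 1, so
    # every prototype receives a nonzero share of the competition.
    d = np.sum((prototypes - x) ** 2, axis=1)  # squared Euclidean distances
    inv = 1.0 / (d + eps)
    return inv / inv.sum()

def fuzzy_objective(x, prototypes):
    # Weighted sum of squared Euclidean distances between the input
    # vector and all weight vectors (prototypes) of the map.
    d = np.sum((prototypes - x) ** 2, axis=1)
    u = fuzzy_memberships(x, prototypes)
    return float(np.sum(u * d))

def competitive_update(x, prototypes, lr=0.1):
    # Genuinely competitive step: all prototypes compete for the input,
    # each moving toward x in proportion to its membership.
    u = fuzzy_memberships(x, prototypes)
    return prototypes + lr * u[:, None] * (x - prototypes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=3)               # one feature vector
    w = rng.normal(size=(4, 3))          # four prototype (weight) vectors
    before = fuzzy_objective(x, w)
    after = fuzzy_objective(x, competitive_update(x, w))
    print(before, after)                 # the update reduces the objective
```

In codebook design for vector quantization, such an update would be applied over many passes through the training feature vectors, with the learning rate decayed over time; the resulting prototypes serve as the codebook entries.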

Paper Details

Date Published: 6 April 1995
PDF: 12 pages
Proc. SPIE 2492, Applications and Science of Artificial Neural Networks, (6 April 1995); doi: 10.1117/12.205133
Author Affiliations:
Nicolaos B. Karayiannis, Univ. of Houston (United States)
Pin-I Pai, Univ. of Houston (United States)

Published in SPIE Proceedings Vol. 2492:
Applications and Science of Artificial Neural Networks
Editors: Steven K. Rogers; Dennis W. Ruck

© SPIE.