
Proceedings Paper

Threshold competitive learning for vector quantization
Author(s): Ahmed S. EL-Behery; Samia A. Mashali; Ahmed M. Darwish

Paper Abstract

Image data compression is essential for a number of applications that involve transmission and storage. One technique that has recently been investigated extensively is vector quantization (VQ). One class of neural network (NN) structures, namely competitive learning networks, appears to be particularly suited for VQ. One main feature that characterizes NN training algorithms is that the VQ codewords are obtained in an adaptive manner. In this paper, a new competitive learning (CL) algorithm called Threshold Competitive Learning (TCL) is introduced. The algorithm uses a threshold to determine the codewords to be updated after the presentation of each input vector. The threshold can be made variable as training proceeds, and more than one threshold can be used. The new algorithm can easily be combined with other NN training algorithms such as Frequency-Sensitive Competitive Learning (FSCL) or the Kohonen Self-Organizing Feature Maps (KSFM). The new algorithm is shown to be efficient and yields results comparable to those of the well-known LBG algorithm.
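
The abstract does not spell out TCL's update rule, so the following Python is only one plausible reading of the mechanism it describes, not the paper's actual algorithm. It assumes the standard competitive-learning move w <- w + lr * (x - w), applied to every codeword whose distance to the input falls below the current threshold, with both the learning rate and the threshold decayed over training (the abstract says the threshold "can be made variable"). All names and parameters here (tcl_train, lr0, threshold0, decay) are illustrative, not from the paper.

    import numpy as np

    def tcl_train(data, n_codewords, epochs=10, lr0=0.1,
                  threshold0=1.0, decay=0.9, seed=0):
        """Threshold Competitive Learning (illustrative sketch).

        For each input vector, every codeword within the current
        threshold distance is moved toward it; if none qualifies,
        only the nearest codeword (the winner) is updated. Both the
        learning rate and the threshold shrink each epoch, so late
        updates become increasingly local.
        """
        rng = np.random.default_rng(seed)
        # Initialize the codebook from randomly chosen training vectors.
        codebook = data[rng.choice(len(data), n_codewords,
                                   replace=False)].copy()
        for epoch in range(epochs):
            lr = lr0 * decay ** epoch          # assumed decay schedule
            threshold = threshold0 * decay ** epoch
            for x in rng.permutation(data):
                dists = np.linalg.norm(codebook - x, axis=1)
                hits = np.flatnonzero(dists < threshold)
                if hits.size == 0:
                    hits = np.array([np.argmin(dists)])
                # Standard competitive-learning update on all hits.
                codebook[hits] += lr * (x - codebook[hits])
        return codebook

    def quantize(data, codebook):
        """Map each vector to the index of its nearest codeword."""
        d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        return d.argmin(axis=1)

In a VQ image-coding setting, tcl_train would be run over the training blocks and quantize would produce the index map to be transmitted; the resulting distortion could then be compared against an LBG-trained codebook of the same size, as the paper does.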

Paper Details

Date Published: 26 March 1993
PDF: 9 pages
Proc. SPIE 1819, Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II, (26 March 1993); doi: 10.1117/12.142200
Author Affiliations:
Ahmed S. EL-Behery, Electronics Research Institute (Egypt)
Samia A. Mashali, Electronics Research Institute (Egypt)
Ahmed M. Darwish, Cairo Univ. (Egypt)


Published in SPIE Proceedings Vol. 1819:
Digital Image Processing and Visual Communications Technologies in the Earth and Atmospheric Sciences II
Mark J. Carlotto, Editor

© SPIE.