
Proceedings Paper

Neural network processing to minimize quantization losses
Author(s): Yu-Jhih Wu; Paul M. Chau

Paper Abstract

A general neural network co-processor has been investigated and designed to adaptively adjust the quantization thresholds of a data quantizer, yielding a quantizer with minimum quantization loss. For a given probability density function and number of quantization levels, the neural network learns the near-optimal uniform quantization step-size that minimizes the loss introduced by the quantizer. With this neural network co-processor approach, consistent and substantial performance improvements have been verified over both AWGN and Rayleigh fading communication channels with a convolutional encoder and maximum-likelihood decoder. This general co-processor approach can be applied to any digital signal processing system that incurs quantization loss, such as digital communication, image data compression, or adaptive signal processing.
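The underlying objective can be illustrated without the neural co-processor itself: for a fixed source pdf and number of levels, search for the uniform step-size that minimizes mean-squared quantization loss. The sketch below is an illustrative assumption, not the paper's method; the function names, the Gaussian source, and the brute-force search over candidate step-sizes are all choices made here for clarity.

```python
import numpy as np

def uniform_quantize(x, step, levels):
    # Mid-rise uniform quantizer: `levels` cells of width `step`,
    # clipped symmetrically around zero.
    half = levels // 2
    idx = np.clip(np.floor(x / step), -half, half - 1)
    return (idx + 0.5) * step

def mse_loss(samples, step, levels):
    # Mean-squared quantization error for this step-size.
    return np.mean((samples - uniform_quantize(samples, step, levels)) ** 2)

def find_best_step(samples, levels, candidates):
    # Pick the candidate step-size with minimum empirical loss.
    losses = [mse_loss(samples, s, levels) for s in candidates]
    return candidates[int(np.argmin(losses))]

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 100_000)   # unit-Gaussian source (illustrative)
candidates = np.linspace(0.05, 2.0, 200)  # too small = overload, too large = granular error
best = find_best_step(samples, levels=8, candidates=candidates)
```

The search makes the trade-off the paper's network learns adaptively: a step-size that is too small clips the tails of the pdf (overload distortion), while one that is too large wastes resolution near the mean (granular distortion); the minimum-loss step-size balances the two.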

Paper Details

Date Published: 1 November 1993
PDF: 10 pages
Proc. SPIE 2027, Advanced Signal Processing Algorithms, Architectures, and Implementations IV, (1 November 1993); doi: 10.1117/12.160462
Author Affiliations:
Yu-Jhih Wu, Univ. of California/San Diego (United States)
Paul M. Chau, Univ. of California/San Diego (United States)

Published in SPIE Proceedings Vol. 2027:
Advanced Signal Processing Algorithms, Architectures, and Implementations IV
Franklin T. Luk, Editor(s)

© SPIE. Terms of Use