
Proceedings Paper

Lossless coding using predictors and VLCs optimized for each image
Author(s): Ichiro Matsuda; Noriyuki Shirai; Susumu Itoh

Paper Abstract

This paper proposes an efficient lossless coding scheme for still images. The scheme uses an adaptive prediction technique in which a set of linear predictors is designed for a given image and an appropriate predictor is selected from the set block by block. The resulting prediction errors are encoded using context-adaptive variable-length codes (VLCs). Context modeling, i.e. adaptive selection of VLCs, is carried out pel by pel, and the VLC assigned to each context is designed based on a probability distribution model of the prediction errors. To improve coding efficiency, a generalized Gaussian function is used as the model for each context. Moreover, not only the predictors but also the parameters of the probability distribution models are iteratively optimized for each image so that the coding rate of the prediction errors is minimized. Experimental results show that the proposed scheme attains coding performance comparable to that of the state-of-the-art TMW scheme with much lower complexity in the decoding process.
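The two core ideas in the abstract, block-by-block selection of a linear predictor and a generalized Gaussian model of the prediction errors, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the candidate predictor weights, the two-neighbour (left/above) causal context, and the absolute-error selection cost are all hypothetical simplifications.

```python
import math

def gg_pdf(x, alpha, beta):
    """Generalized Gaussian density with scale alpha and shape beta.
    beta = 2 gives a Gaussian, beta = 1 a Laplacian; intermediate shapes
    let the model fit the heavy-tailed statistics of prediction errors."""
    coef = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coef * math.exp(-((abs(x) / alpha) ** beta))

def predict(neighbours, weights):
    """Linear prediction from causal neighbours (hypothetical 2-tap setup)."""
    return sum(w * p for w, p in zip(weights, neighbours))

def best_predictor(block, neighbours, predictors):
    """Pick, for one block, the predictor index minimizing the total
    absolute prediction error -- a stand-in for the per-block selection
    the paper describes (its actual cost is the coding rate)."""
    def cost(weights):
        return sum(abs(x - predict(nb, weights))
                   for x, nb in zip(block, neighbours))
    return min(range(len(predictors)), key=lambda k: cost(predictors[k]))

# Toy block: pixel values with (left, above) causal neighbours.
block = [5, 5, 5]
neighbours = [(5, 3), (5, 8), (5, 2)]
# Candidate predictors: copy-left, copy-above, average of the two.
predictors = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
print(best_predictor(block, neighbours, predictors))  # copy-left wins: 0
print(round(gg_pdf(0.0, 1.0, 2.0), 4))  # 1/sqrt(pi) ~= 0.5642
```

In the paper's scheme the predictor set and the per-context (alpha, beta) parameters are not fixed as above but iteratively re-optimized for each image until the coding rate of the errors stops decreasing.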

Paper Details

Date Published: 23 June 2003
PDF: 8 pages
Proc. SPIE 5150, Visual Communications and Image Processing 2003, (23 June 2003); doi: 10.1117/12.502843
Author Affiliations:
Ichiro Matsuda, Science Univ. of Tokyo (Japan)
Noriyuki Shirai, Science Univ. of Tokyo (Japan)
Susumu Itoh, Science Univ. of Tokyo (Japan)

Published in SPIE Proceedings Vol. 5150:
Visual Communications and Image Processing 2003
Touradj Ebrahimi; Thomas Sikora, Editors
