
Proceedings Paper

Comparison of the coding efficiency of perceptual models
Author(s): Robert J. Safranek

Paper Abstract

Over the past several years there have been many attempts to incorporate perceptual masking models into image compression systems. Unfortunately, there is little or no information on how these models perform in comparison to each other. The purpose of this paper is to examine how two different perceptual models perform when utilized in the same coding system. The models investigated are the Johnston-Safranek and Watson models; both develop a contrast masking threshold for DCT-based coders. The coder used for comparison is the Baseline Sequential mode of JPEG. Each model was implemented and used to generate image-dependent masking thresholds for each 8x8 pixel block in the image. These thresholds were used to zero out perceptually irrelevant coefficients, while the remaining coefficients were quantized using a perceptually optimal quantization matrix. Both objective and subjective performance data were gathered. Bit-rate savings versus standard JPEG were computed, and a subjective comparison of images encoded with both models and with nonperceptual JPEG was run. The perceptually based coders gave greater compression with no loss in subjective image quality.
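For readers unfamiliar with this style of coder, the sketch below illustrates the block-level step the abstract describes: a per-coefficient masking threshold is used to zero DCT coefficients the perceptual model deems invisible, and the surviving coefficients are then quantized as in JPEG. This is a minimal illustration only; the function names, the use of the standard JPEG luminance table in place of the paper's perceptually optimal matrix, and the threshold input (neither of the two compared models is reproduced here) are assumptions, not the paper's implementation.

import numpy as np
from scipy.fftpack import dct

# Standard JPEG luminance quantization table (Annex K of the JPEG standard),
# used here only as a stand-in for the paper's perceptually optimal matrix.
JPEG_LUMA_Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=np.float64)

def dct2(block):
    # 2-D type-II DCT with orthonormal scaling, applied row- and column-wise.
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def quantize_block(pixels, masking_threshold, q_table=JPEG_LUMA_Q):
    # pixels: one 8x8 block of 0-255 luminance samples.
    # masking_threshold: 8x8 array of per-coefficient visibility thresholds
    # produced by a contrast-masking model (hypothetical placeholder here).
    coeffs = dct2(pixels.astype(np.float64) - 128.0)      # level shift, then forward DCT
    coeffs = np.where(np.abs(coeffs) >= masking_threshold, coeffs, 0.0)  # zero invisible coefficients
    return np.round(coeffs / q_table).astype(np.int32)    # JPEG-style uniform quantization

Zeroing sub-threshold coefficients leaves longer runs of zeros for the entropy coder, which is where the bit-rate savings over standard JPEG come from.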

Paper Details

Date Published: 20 April 1995
PDF: 9 pages
Proc. SPIE 2411, Human Vision, Visual Processing, and Digital Display VI, (20 April 1995); doi: 10.1117/12.207562
Author Affiliations:
Robert J. Safranek, AT&T Bell Labs. (United States)


Published in SPIE Proceedings Vol. 2411:
Human Vision, Visual Processing, and Digital Display VI
Bernice E. Rogowitz; Jan P. Allebach, Editor(s)
