
Electronic Imaging & Signal Processing

Method automatically assesses image quality

A novel full-reference objective image-quality metric based on wavelet leader pyramids not only produces better results than existing state-of-the-art metrics but is also computationally faster.
6 December 2011, SPIE Newsroom. DOI: 10.1117/2.1201111.003923

Digital images are subject to distortions during acquisition, management, communication, and processing, so image quality is an important performance index for assessing imaging systems. Subjective image quality assessment (IQA) performed directly by human observers gives the most accurate quality scores.1 However, such subjective evaluation is not only expensive and cumbersome, but it also cannot be incorporated into the computational loops of an automatic image-processing system. For these reasons, automatic, objective IQA methods that agree with human evaluation are required.

Early, widely used IQA methods computed visual quality from pixel-level distortion measures such as the mean square error or the peak signal-to-noise ratio (PSNR).1 However, in many cases, pixel-based methods match people's quality judgments poorly.1 In recent years, great effort has gone into developing new IQA metrics that account for properties of the human visual system (HVS). Two notable algorithms are the structural similarity index (SSIM)2 and the visual information fidelity index (VIF).3 However, SSIM is less successful at assessing blurred images and images corrupted by white noise, and VIF's computational cost is very high.
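For orientation, the pixel-based baseline is only a few lines of code. The following NumPy sketch of PSNR is our own illustration (not code from the paper) and shows why such measures are computationally attractive despite correlating poorly with perception:

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio in dB (higher means less pixel distortion)."""
    ref = np.asarray(reference, dtype=np.float64)
    dist = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((ref - dist) ** 2)  # mean square error over all pixels
    if mse == 0.0:
        return float("inf")           # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```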

Human eyes are sensitive to sharp edges and contours when recognizing objects and understanding scenes.4–6 In addition, human visual sensitivity varies with spatial frequency across scales. Therefore, we suspected we might achieve more accurate assessment results if we could quantify the weighted distortion of edges and contours with a multi-scale approach. We propose a wavelet leader pyramids-based visual information fidelity (WALE-VIF) method for image quality assessment. To the best of our knowledge, the concept of wavelet leaders7 has not previously been used for IQA.

We used 2D wavelet leader pyramids to robustly extract multi-scale information about edges. Wavelet leaders are derived from the high-pass wavelet coefficients7 (see Figure 1). Essentially, the wavelet leader Lj at scale j and a given position is the largest absolute value among all wavelet coefficients in the spatial neighborhood of that position at scale j and in the corresponding neighborhoods at all finer scales.7 Using wavelet leaders rather than raw wavelet coefficients significantly reduces the computational cost of the subsequent procedures. Another merit of wavelet leaders is that information about edges and contours is well preserved while small wavelet coefficients are effectively removed. Thus, we obtain a more robust and accurate estimate of the quality of distorted images. All the scales of wavelet leaders, together with the low-pass sub-band of wavelet coefficients, constitute a multi-scale quality feature (QF) of the image.
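The construction can be sketched with standard tools. The Python snippet below is an illustrative sketch using the PyWavelets and SciPy libraries; the choice of wavelet, the 3x3 neighborhood, and the pooling scheme are our assumptions, not the authors' implementation:

```python
import numpy as np
import pywt
from scipy.ndimage import maximum_filter

def wavelet_leader_pyramid(image, wavelet="db2", levels=4):
    """Sketch of a 2D wavelet leader pyramid (illustrative assumptions).

    For each scale, the leader is the largest absolute detail coefficient
    over the three orientations, over a 3x3 spatial neighborhood, and over
    the corresponding neighborhoods at all finer scales.
    """
    coeffs = pywt.wavedec2(np.asarray(image, dtype=np.float64), wavelet, level=levels)
    lowpass, details = coeffs[0], coeffs[1:]  # detail subbands, coarsest first
    leaders = []
    finer = None  # running leader map propagated up from finer scales
    for cH, cV, cD in reversed(details):      # walk from finest to coarsest
        # Largest |coefficient| over the three orientations at this scale.
        mag = np.maximum(np.abs(cH), np.maximum(np.abs(cV), np.abs(cD)))
        # Maximum over a 3x3 spatial neighborhood.
        local = maximum_filter(mag, size=3)
        if finer is not None:
            # Fold in finer-scale leaders: max-pool by 2 and align shapes.
            pooled = maximum_filter(finer, size=2)[::2, ::2]
            h = min(local.shape[0], pooled.shape[0])
            w = min(local.shape[1], pooled.shape[1])
            local[:h, :w] = np.maximum(local[:h, :w], pooled[:h, :w])
        leaders.append(local)
        finer = local
    leaders.reverse()  # return coarsest first, matching the subband order
    return lowpass, leaders
```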


Figure 1. Definition of a wavelet leader. The wavelet leader Lj (yellow block) is defined as the largest absolute value of the wavelet coefficients in the cyan volume. The red, green, and blue sectors in each block denote wavelet coefficients with the same scale and position but different orientations. Only the wavelet coefficients over which the maximum is taken are shown in this figure.

To evaluate the quality of a distorted image, we computed the amount of shared information (the quality similarity) between the QFs of the distorted and original images. We used the VIF metric to calculate the quality similarity at each individual scale. We then assigned scale-dependent weights to the single-scale quality similarities and computed the overall quality score as their weighted sum. Conceptually, this weighting is consistent with the contrast sensitivity function (CSF) of the HVS.8
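To make the combination step concrete, here is a minimal sketch that builds on the wavelet_leader_pyramid sketch above. The scale_similarity stand-in is a simple normalized correlation rather than the VIF measure the method actually uses, and the CSF-inspired weights are invented for illustration:

```python
import numpy as np

# Illustrative scale weights (coarsest first); the method derives its weights
# from the HVS contrast sensitivity function, not from these values.
EXAMPLE_WEIGHTS = np.array([0.15, 0.20, 0.30, 0.35])

def scale_similarity(ref_leaders, dist_leaders):
    """Stand-in per-scale similarity (normalized correlation); the method
    computes the VIF measure between the leader maps here instead."""
    r = ref_leaders.ravel() - ref_leaders.mean()
    d = dist_leaders.ravel() - dist_leaders.mean()
    denom = np.linalg.norm(r) * np.linalg.norm(d) + 1e-12
    return float(np.dot(r, d) / denom)

def overall_quality(ref_image, dist_image, weights=EXAMPLE_WEIGHTS):
    """Weighted sum of per-scale similarities between leader pyramids."""
    _, ref_leaders = wavelet_leader_pyramid(ref_image, levels=len(weights))
    _, dist_leaders = wavelet_leader_pyramid(dist_image, levels=len(weights))
    sims = [scale_similarity(r, d) for r, d in zip(ref_leaders, dist_leaders)]
    w = np.asarray(weights, dtype=np.float64)
    return float(np.dot(w / w.sum(), sims))
```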

We tested our approach on the Laboratory for Image and Video Engineering (LIVE) image quality assessment database (release 2) made available by the University of Texas.9 The database also contains the subjective evaluation result, i.e., the degradation mean opinion score (DMOS), for each image. We compared the performance of our proposed method with state-of-the-art IQA methods including SSIM, VIF, and PSNR. The performance of an IQA method is indicated by several popular statistical measures: the correlation coefficient (CC), the Spearman rank order correlation coefficient (SROCC), the root mean square error (RMSE), and the mean absolute error (MAE). The CC and SROCC metrics indicate the consistency between the IQA measures and DMOS; larger values mean the corresponding IQA algorithm is more accurate (perfect match = 1). Conversely, the MAE and RMSE metrics measure the statistical distance to the subjective scores, so smaller values mean better performance (perfect match = 0). The comparison shows that our method outperforms the other state-of-the-art IQA methods (see Table 1). Our method is more consistent with human judgment than the others (see Figure 2 for the fitted curves of DMOS versus the scores predicted by each IQA method). In addition, our method is four times faster than VIF (see Table 2 for the average time each measure requires to predict the quality of an image).
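The four agreement statistics are straightforward to compute with SciPy; the sketch below is our own illustration (note that published IQA comparisons typically fit a nonlinear logistic mapping between predicted scores and DMOS before computing CC and RMSE):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate_iqa(predicted, dmos):
    """Agreement between predicted quality scores and subjective DMOS values."""
    predicted = np.asarray(predicted, dtype=np.float64)
    dmos = np.asarray(dmos, dtype=np.float64)
    cc, _ = pearsonr(predicted, dmos)         # linear correlation (1 is perfect)
    srocc, _ = spearmanr(predicted, dmos)     # rank-order correlation (1 is perfect)
    err = predicted - dmos
    rmse = float(np.sqrt(np.mean(err ** 2)))  # root mean square error (0 is perfect)
    mae = float(np.mean(np.abs(err)))         # mean absolute error (0 is perfect)
    return {"CC": cc, "SROCC": srocc, "RMSE": rmse, "MAE": mae}
```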


Figure 2. Scatter plots of the four objective quality measures versus DMOS for images in the LIVE dataset: (a) PSNR, (b) SSIM, (c) VIF, and (d) WALE-VIF.
Table 1. Performance comparison of image quality assessment models using the University of Texas Laboratory for Image and Video Engineering (LIVE) dataset. For the correlation coefficient (CC) and the Spearman rank order correlation coefficient (SROCC), 1 is a perfect match; for the root mean square error (RMSE) and the mean absolute error (MAE), 0 is a perfect match. PSNR: Peak signal-to-noise ratio. SSIM: Structural similarity index. VIF: Visual information fidelity index. WALE-VIF: Wavelet leader pyramids-based visual information fidelity.

Model      CC      SROCC   RMSE    MAE
PSNR       0.924   0.937   0.996   0.372
SSIM       0.958   0.973   0.969   0.054
VIF        0.974   0.981   0.095   0.526
WALE-VIF   0.979   0.983   6.356   4.549

Table 2. Average time required to calculate the quality score of one image by each IQA method on the LIVE dataset.
Model Time (s)
PSNR 0.1031
SSIM 0.8642
VIF 10.5652
WALE-VIF 2.4189

In summary, we propose the WALE-VIF algorithm for image quality assessment. We introduce 2D wavelet leaders to IQA to extract multi-scale information about edges. The quality similarity at each individual scale is evaluated by computing the VIF of the wavelet leaders. Following the HVS CSF, we determine the weights of the different scales and obtain the final quality score from all the single-scale similarities by a weighted sum. As a result, our algorithm consistently outperforms state-of-the-art IQA methods in accuracy, robustness, and computational efficiency. As a next step, we will look for a more efficient model for fitting the distribution of the single-scale QF, and thus obtain more accurate single-scale quality similarities.

This work was supported by the 973 Program (2010CB731401, 2010CB731406) and the National Natural Science Foundation of China (60932006, 60828001, 61001146, 61071155).


Xiaolin Chen
Shanghai Jiao Tong University
Shanghai, China

Xiaolin Chen received a BE degree in electronic engineering from the University of Electronic Science and Technology of China, Chengdu, China. She is currently pursuing a PhD at Shanghai Jiao Tong University, Shanghai, China. Her research interests include image processing, computer vision, and pattern recognition.


References:
1. Z. Wang, A. C. Bovik, Modern Image Quality Assessment, Synthesis Lectures on Image, Video, and Multimedia Processing, Morgan & Claypool, 2006.
2. Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process. 13, pp. 600-612, 2004.
3. H. R. Sheikh, A. C. Bovik, Image information and visual quality, IEEE Trans. Image Process. 15, pp. 430-444, 2006.
4. J. H. Elder, S. W. Zucker, Evidence for boundary-specific grouping in human vision, Vision Res. 38, pp. 143-152, 1998.
5. F. Bergholm, Edge focusing, IEEE Trans. Pattern Anal. Mach. Intell. 9, pp. 726-741, 1987.
6. J. H. Elder, S. W. Zucker, Local scale control for edge detection and blur estimation, IEEE Trans. Pattern Anal. Mach. Intell. 20, pp. 699-716, 1998.
7. S. Jaffard, B. Lashermes, P. Abry, Wavelet leaders in multifractal analysis, Wavelet Analysis and Applications, pp. 201-246, Birkhäuser, 2007.
8. F. W. Campbell, J. G. Robson, Application of Fourier analysis to the visibility of gratings, J. Physiol. 197, pp. 551-566, 1968.
9. H. R. Sheikh, Z. Wang, L. Cormack, A. C. Bovik, LIVE Image Quality Assessment Database Release 2, http://live.ece.utexas.edu/research/quality/