Digital images are subject to distortions during acquisition, storage, communication, and processing, so image quality is an important performance index for assessing imaging systems. Subjective image quality assessment (IQA) performed directly by humans gives the most accurate quality scores.1 However, such subjective evaluation methods are not only expensive and cumbersome, but they also cannot be incorporated into the computational loops of an automatic image-processing system. Consequently, automatic and objective IQA methods on par with human evaluation are required.
Early on, most widely used IQA methods computed visual quality from measures of pixel distortion, such as the mean square error or the peak signal-to-noise ratio (PSNR).1 However, in many cases, pixel-based methods poorly match people's quality judgments.1 In recent years, great efforts have been made to develop new IQA metrics that consider properties of the human visual system (HVS). Two notable algorithms are the structural similarity index (SSIM)2 and the visual information fidelity index (VIF).3 However, the SSIM method is less successful at assessing the quality of blurred and white-noise-distorted images, and the computational complexity of the VIF method is very high.
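As a concrete illustration of such a pixel-based measure, a minimal PSNR computation might look like the following (a sketch of our own; the function name and the 8-bit peak value of 255 are illustrative choices, not from the original):

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between a reference image
    and a distorted image of the same shape."""
    ref = np.asarray(ref, dtype=float)
    dist = np.asarray(dist, dtype=float)
    mse = np.mean((ref - dist) ** 2)   # mean square error
    if mse == 0:
        return float('inf')            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because PSNR depends only on the pixel-wise error, two distortions with the same MSE receive the same score even when their perceptual impact differs, which is precisely the weakness noted above.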
Human eyes are sensitive to sharp edges and contours when recognizing objects and understanding scenes.4–6 In addition, human visual sensitivity varies with spatial frequency in a multi-scale presentation. Therefore, we suspected we might achieve more accurate assessment results if we could quantify the weighted distortion of edges and contours with a multi-scale approach. We propose a wavelet leader pyramids-based visual information fidelity (WALE-VIF) method for image quality assessment. To the best of our knowledge, the concept of wavelet leaders7 has not been used for IQA before.
We used 2D wavelet leader pyramids to robustly extract the multi-scale information of edges. Wavelet leaders are derived from the high-pass wavelet coefficients7 (see Figure 1). Essentially, the wavelet leader Lj at scale j and a given position is the largest absolute value among all wavelet coefficients in the spatial neighborhood of that position, both at scale j and at finer scales.7 Using wavelet leaders rather than raw wavelet coefficients significantly reduces the computational cost of our subsequent procedures. Another merit of wavelet leaders is that information on edges and contours is well preserved while small wavelet coefficients are effectively discarded. Thus, we obtained a more robust and accurate estimate of the quality of distorted images. All the scales of wavelet leaders, together with the low-pass sub-band of wavelet coefficients, constitute a multi-scale quality feature (QF) of the image.
Figure 1. Definition of the wavelet leader. The wavelet leader Lj (yellow block) is defined as the largest absolute value of the wavelet coefficients in the cyan volume. The red, green, and blue sectors in each block represent wavelet coefficients with the same scale and position but different orientations. Only the wavelet coefficients over which the maximum is taken are marked in this figure.
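The leader computation described above can be sketched as follows. This is our own illustrative implementation, not the authors' code: we assume the three orientation sub-bands at each scale have already been combined into a single magnitude map (e.g., by a pointwise maximum), use a 3x3 spatial neighborhood, and propagate finer-scale maxima upward with 2x2 max-pooling.

```python
import numpy as np

def max_pool2(a):
    """2x2 max-pooling: reduce each 2x2 block to its maximum."""
    h, w = a.shape
    return a[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def neighborhood_max(a):
    """Maximum over each 3x3 spatial neighborhood (edge-padded)."""
    p = np.pad(a, 1, mode='edge')
    return np.max([p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def wavelet_leaders(detail_mags):
    """detail_mags: list of |coefficient| maps, finest scale first,
    each half the size of the previous and already combined over the
    three orientations. Returns one leader map per scale."""
    leaders = []
    carry = None  # running maximum propagated up from finer scales
    for mag in detail_mags:
        combined = mag if carry is None else np.maximum(mag, carry)
        leaders.append(neighborhood_max(combined))
        carry = max_pool2(combined)  # pass finer-scale maxima upward
    return leaders
```

A single large coefficient at a fine scale thus dominates the leaders of every coarser scale covering its position, which is why leaders preserve edge information while suppressing small coefficients.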
To evaluate the quality of a distorted image, we computed the amount of shared information (or the quality similarity) between the QFs of the distorted and original images. We used the VIF metric to calculate the quality similarity at each individual scale. Then, we assigned scale-dependent weights to the single-scale quality similarities and computed the overall quality score as their weighted sum. Conceptually, this method is coherent with the HVS contrast sensitivity function (CSF).8
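The weighted-combination step might be sketched as below. The paper does not specify its exact weighting scheme, so we assume the classic Mannos-Sakrison CSF approximation as the weighting function; `csf_weight`, `overall_quality`, and the per-scale peak-frequency inputs are illustrative assumptions of ours.

```python
import numpy as np

def csf_weight(freq_cpd):
    """Mannos-Sakrison CSF approximation, A(f) = 2.6(0.0192 + 0.114f)
    exp(-(0.114f)^1.1), with f in cycles per degree. An illustrative
    choice; the paper's actual weighting may differ."""
    f = np.asarray(freq_cpd, dtype=float)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def overall_quality(single_scale_scores, peak_freqs_cpd):
    """Weighted sum of per-scale VIF similarities; weights are the
    CSF values at each scale's peak frequency, normalized to sum 1."""
    w = csf_weight(peak_freqs_cpd)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(single_scale_scores, dtype=float)))
```

Normalizing the weights keeps the overall score on the same scale as the single-scale similarities, so equal per-scale scores yield that same value overall.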
We tested our approach on the Laboratory for Image and Video Engineering (LIVE) image quality assessment database (release 2) made available by the University of Texas at Austin.9 The database also contains the subjective evaluation result—i.e., the degradation mean opinion score (DMOS)—for each image. We compared the performance of our proposed method with other state-of-the-art IQA methods, including SSIM, VIF, and PSNR. The performance of an IQA method is indicated by several popular statistical metrics, including the correlation coefficient (CC), Spearman rank order correlation coefficient (SROCC), root mean square error (RMSE), and mean absolute error (MAE). The CC and SROCC metrics indicate the consistency between the IQA measures and DMOS, and larger values mean the corresponding IQA algorithm has better accuracy (perfect match=1). In contrast, the MAE and RMSE metrics indicate the statistical distances to the subjective scores, so smaller values mean better performance (perfect match=0). Comparison of performance shows that our method outperforms the other state-of-the-art IQA methods (see Table 1). Our method is more consistent with human judgment than the others (see Figure 2 for the fitted curves of DMOS versus the scores predicted by the IQA methods). In addition, our method is four times faster than VIF (see Table 2 for the average time required to predict the quality of an image with each of these measures).
Figure 2. Scatter plots of the four objective quality measures versus DMOS for images in the LIVE dataset. (a) PSNR, (b) SSIM, (c) VIF, and (d) WALE-VIF.
Table 1. Performance comparison of image quality assessment models on the University of Texas Laboratory for Image and Video Engineering (LIVE) dataset. For correlation coefficient (CC) and Spearman rank order correlation coefficient (SROCC), 1 is a perfect match. For root mean square error (RMSE) and mean absolute error (MAE), 0 is a perfect match. PSNR: Peak signal-to-noise ratio. SSIM: Structural similarity index. VIF: Visual information fidelity index. WALE-VIF: Wavelet leader pyramids-based visual information fidelity.
Table 2. Average time required to calculate the quality score of one image by each IQA method on the LIVE dataset.
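The four evaluation metrics used above can be computed as below (our own sketch; note that published IQA evaluations typically apply a nonlinear regression to the predicted scores before computing CC and RMSE, which we omit here for brevity, and the simple ranking ignores ties):

```python
import numpy as np

def ranks(x):
    """1-based ranks of the entries of x (ties not averaged)."""
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    return r

def iqa_metrics(pred, dmos):
    """CC, SROCC, RMSE, and MAE between predicted scores and DMOS."""
    pred = np.asarray(pred, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    cc = np.corrcoef(pred, dmos)[0, 1]            # Pearson correlation
    srocc = np.corrcoef(ranks(pred), ranks(dmos))[0, 1]  # rank correlation
    rmse = np.sqrt(np.mean((pred - dmos) ** 2))
    mae = np.mean(np.abs(pred - dmos))
    return {'CC': cc, 'SROCC': srocc, 'RMSE': rmse, 'MAE': mae}
```

SROCC depends only on the rank ordering of the scores, which is why it is reported alongside CC: it measures prediction monotonicity independently of any regression fit.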
In summary, we propose a WALE-VIF algorithm for image quality assessment. We introduce 2D wavelet leaders into IQA to extract multi-scale information on edges. The quality similarity at each individual scale is evaluated by comparing the VIF of the wavelet leaders. Following the HVS CSF, we determine the weights of the different scales and obtain the final quality score from all the single-scale similarities as a weighted sum. As a result, our algorithm outperforms state-of-the-art IQA methods in accuracy, robustness, and computational efficiency. As a next step, we will seek a more efficient model for fitting the distribution of the single-scale QF and thus obtain more accurate single-scale quality similarities.
This work was supported by the 973 Program (2010CB731401, 2010CB731406) and NSFC (60932006, 60828001, 61001146, 61071155).
Shanghai Jiao Tong University
Xiaolin Chen received a BE degree in electronic engineering from the University of Electronic Science and Technology of China, Chengdu, China. She is currently pursuing a PhD at Shanghai Jiao Tong University, Shanghai, China. Her research interests include image processing, computer vision, and pattern recognition.
1. Z. Wang, A. C. Bovik, Modern Image Quality Assessment, Synthesis Lectures on Image, Video, and Multimedia Processing, Morgan & Claypool, 2006.
2. Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Proc. 13, pp. 600-612, 2004.
3. H. R. Sheikh, A. C. Bovik, Image information and visual quality, IEEE Trans. Image Proc. 15, pp. 430-444, 2006.
4. J. H. Elder, S.W. Zucker, Evidence for boundary-specific grouping in human vision, Vision Res. 38, pp. 143-152, 1998.
5. F. Bergholm, Edge focusing, IEEE Trans. Pattern Anal. Mach. Intell. 9, pp. 726-741, 1987.
6. J. H. Elder, S.W. Zucker, Local Scale Control for Edge Detection and Blur Estimation, IEEE Trans. Pattern Anal. Mach. Intell. 20, pp. 699-716, 1998.
7. S. Jaffard, B. Lashermes, P. Abry, Wavelet leaders in multifractal analysis, Wavelet Analysis and Applications, ch. 3: Fractal and multifractal theory, wavelet algorithm, wavelet in numerical analysis, pp. 201-246, Birkhäuser, 2007.
8. F. W. Campbell, J. G. Robson, Application of Fourier analysis to the visibility of gratings, J. Physiol. 197, pp. 551-566, 1968.
9. H. R. Sheikh, Z. Wang, L. Cormack, A. C. Bovik, LIVE Image Quality Assessment Database Release 2, http://live.ece.utexas.edu/research/quality