Proceedings Volume 5668

Image Quality and System Performance II


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 17 January 2005
Contents: 6 Sessions, 33 Papers, 0 Presentations
Conference: Electronic Imaging 2005
Volume Number: 5668

Table of Contents


  • Preference and Psychophysics
  • Poster Session
  • Preference and Psychophysics
  • Perceptual Image Quality
  • Standards
  • Perceptual Image Quality
  • Measurements and Modeling I
  • Measurements and Modeling II
  • Standards
  • Poster Session
Preference and Psychophysics
Image quality evaluation: the data mining approach
Chengwu Cui
It is difficult, if not impossible, to derive a model that adequately describes the entire visual, cognitive, and preference decision process of image quality evaluation and to replace it with objective alternatives. Even if some parts of the process can be modeled based on current knowledge of the visual system, there is often a lack of sufficient data to support the modeling process. On the other hand, image quality evaluation is constantly required by those working on imaging devices and software. Measurements and surveys are regularly conducted to test a new processing algorithm or methodology, and large-scale subjective measurements or surveys are often conducted before a product is released. Here we propose to combine the two processes and apply data mining techniques to achieve both goals: routine subjective testing and modeling. Specifically, we propose using relational databases to log and store regular evaluation processes. When combined with web applications, the relational database approach allows one to maximize the efficiency of designing, conducting, analyzing, and reporting test data. The collection of large amounts of data makes it possible to apply data mining techniques to discover knowledge and patterns in the data. We report one such system for printing quality evaluation and some approaches to data mining, including data visualization, observer mining, text comment mining, test case mining, and model mining. We also present preliminary results based on some of these techniques.
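The relational-database logging the abstract describes could be sketched as follows. This is a minimal, hypothetical schema (the table and column names are illustrative, not the authors' design), showing how routine subjective ratings can be stored so that later "mining" reduces to SQL queries:

```python
import sqlite3

# Hypothetical schema for logging subjective print-quality evaluations,
# in the spirit of the relational-database approach described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE observer  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE test_case (id INTEGER PRIMARY KEY, printer TEXT, attribute TEXT);
CREATE TABLE rating (
    observer_id  INTEGER REFERENCES observer(id),
    test_case_id INTEGER REFERENCES test_case(id),
    score   REAL,   -- e.g. a 1-5 quality rating
    comment TEXT    -- free-text comment, available for later text mining
);
""")
conn.execute("INSERT INTO observer VALUES (1, 'obs_a')")
conn.execute("INSERT INTO test_case VALUES (1, 'printer_x', 'banding')")
conn.execute("INSERT INTO rating VALUES (1, 1, 3.5, 'faint bands visible')")

# A simple "mining" query: mean score per printer and attribute.
row = conn.execute("""
    SELECT t.printer, t.attribute, AVG(r.score)
    FROM rating r JOIN test_case t ON r.test_case_id = t.id
    GROUP BY t.printer, t.attribute
""").fetchone()
print(row)  # ('printer_x', 'banding', 3.5)
```

Once ratings accumulate across many observers and test cases, the same tables support observer mining (per-observer bias queries) and test-case mining (per-stimulus aggregates) without any new data collection.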
Poster Session
Multimedia quality evaluation across different modalities
Truong Cong Thang, Yong Man Ro
In heterogeneous computing environments, multimedia content may be adapted drastically in terms of quality as well as modality. There is a strong need to measure content quality in this context, i.e., with both content scaling and modality conversion. However, quality is often considered only within a single modality (e.g., video or audio); there has been very little work on quality evaluation when the content's modality is converted (e.g., a video converted to a sequence of "important" images or to explanatory text). This paper aims at evaluating content quality across different scaling levels and modalities. We contend that quality consists of two key aspects: perceptual quality and semantic quality. The former refers to the user's satisfaction in perceiving the content, regardless of what information the content contains; the latter, which is crucial in modality conversion, refers to the amount of conveyed information, regardless of how the content is presented. We design a procedure to measure these two qualities through subjective tests, and then present computational methods that can replace the time-consuming subjective method. The experiments provide insight into the dependence of the qualities on modalities and content scaling levels.
Preference and Psychophysics
Multidimensional scaling with nonrepeating random paths
Nathan Moroney, Ingeborg Tastl
Triadic comparisons can be used to create similarity or dissimilarity matrices for multidimensional scaling. The number of judgments to be performed grows rapidly with the number of input items or stimuli. Balanced incomplete block designs have been used to specify the number of times a given pair of items occurs, in order to reduce the overall number of judgments. This paper proposes using a set of non-repeating random paths to sample a given set of stimuli. This sampling scheme can efficiently distribute the total number of dissimilarity judgments among a large number of observers. The paper applies non-repeating random path sampling, a special form of random walk sampling, to a web-based color name similarity experiment. The results are then compared to a parallel laboratory study using a complete set of triads and a smaller number of observers.
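One plausible reading of a "non-repeating random path" is a random permutation of the stimuli, where each adjacent pair along the path becomes one dissimilarity judgment; each observer walks a different path, so judgments spread across pairs. This is a sketch of that idea under that assumption, not the authors' exact design:

```python
import random

def random_path_pairs(n_items, seed=None):
    """One non-repeating random path over n_items stimuli: a random
    permutation in which each adjacent pair along the path becomes one
    dissimilarity judgment (an assumed reading of the sampling scheme)."""
    rng = random.Random(seed)
    path = list(range(n_items))
    rng.shuffle(path)
    return list(zip(path, path[1:]))

# Each observer walks one path and so contributes n_items - 1 judgments;
# within a path, no item repeats, hence no pair repeats either.
pairs = random_path_pairs(10, seed=1)
assert len(pairs) == 9
assert all(a != b for a, b in pairs)
assert len({tuple(sorted(p)) for p in pairs}) == 9
```

With many observers each drawing an independent path, pair counts even out over the full set of stimulus pairs without the rigid bookkeeping of a balanced incomplete block design.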
Presence and preferable viewing conditions when using an ultrahigh-definition large-screen display
Kenichiro Masaoka, Masaki Emoto, Masayuki Sugawara, et al.
We are investigating psychological factors to obtain guidelines for the design of TVs aimed at future high-presence broadcasting. In this study, we performed subjective assessment tests to examine the psychological effects of different combinations of viewing conditions, obtained by varying the viewing distance, screen size, and picture resolution (between 1000 and 4000 scan lines). The evaluation images were presented in the form of two-minute programs comprising a sequence of 10 still images, and the test subjects were asked to complete a questionnaire consisting of 20 items relating to psychological effects such as "presence", "adverse effects", and "preferability". The test subjects reported a stronger feeling of presence for 1000-line images when viewed at a distance of around 1.5H (less than the 3H recommended as the standard viewing distance for subjective evaluation of HDTV image quality), and a stronger feeling of presence for 4000-line images than for 1000-line images. Adverse effects such as "difficulty of viewing" did not differ significantly with resolution, but were rated lower as the viewing distance increased and tended to saturate at viewing distances above 2H. The viewing conditions were rated as more preferable as the screen size increased, showing that it is possible to broadcast comfortable high-presence pictures using high-resolution large-screen displays.
A psychophysical study on the influence factors of color preference in photographic color reproduction
Reproducing more pleasing colors is an effective way to improve the image quality of color imaging devices. A psychophysical experiment was conducted to investigate preferred colors for three main categories: human skin, blue sky, and green grass. A new experimental technique, the cube-selection method, was developed to adjust lightness, chroma, and hue to determine observers' preferences in CIELAB color space. It is a fast and accurate multi-dimensional adjustment technique superior to the conventional method of adjustment. Several potential influence factors on image color preference, including image content, capturing illuminant, object background, and cultural difference, were studied by comparing observers' preferences. Applicable conclusions were drawn from the analysis of the experimental results that help to better understand the influence of these factors: capturing illuminant and image content had significant influence on the reproduction preference for human skin and grass colors, respectively. The results point the way for further research on the influence factors of color preference in photographic color reproduction.
Softcopy banding visibility assessment
Banding is a printer artifact perceived as one-dimensional luminance variations across the print-out, caused by vibrations of different printer components. In the printing industry, banding is considered one of the worst defects dominating overall perceived image quality. Understanding the visibility of banding will help in developing strategies to reduce the artifact. We developed a softcopy environment to conduct various experiments investigating the visibility of banding. This environment includes a methodology to duplicate the print on the monitor and a banding extraction technique, which enables us to freely adjust the magnitude of banding of any printer. We validated the accuracy of this methodology by conducting a banding matching experiment, and then used the platform for banding visibility assessment experiments. In a banding discrimination experiment, the results showed that for the printers investigated, a reduction of 6.5% in banding magnitude is just noticeable to an average observer. We also found the detection thresholds of banding in grayscale images for three laser electrophotographic printers; the detection threshold of the best printer was about 50% of its original banding, so there is still plenty of room to reduce the visibility of the artifact. Finally, a cross-platform experiment allowed us to compare the banding visibility of different printers quantitatively. This methodology can form the basis for a metric of banding visibility.
Perceptual Image Quality
Exploring s-CIELAB as a scanner metric for print uniformity
The s-CIELAB color difference metric combines the standard CIELAB metric for perceived color difference with spatial contrast sensitivity filtering. When studying the performance of digital image processing algorithms, maps of spatial color difference between 'before' and 'after' images are a measure of perceived image difference. A general image quality metric can be obtained by modeling the perceived difference from an ideal image. This paper explores the s-CIELAB concept for evaluating the quality of digital prints. Prints present the challenge that the 'ideal print', which should serve as the reference when calculating the delta E* error map, is unknown and thus must be estimated from the scanned print. A reasonable estimate of what the ideal print 'should have been' is possible at least for images of known content, such as flat fields or continuous wedges, where the error map can be calculated against a global or local mean. While such maps showing the perceived error at each pixel are extremely useful when analyzing print defects, it is desirable to statistically reduce them to a more manageable dataset. Examples of digital print uniformity are given, and the effect of specific print defects on the s-CIELAB delta E* metric is discussed.
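For known flat-field content, the "error map against a global mean" step can be sketched as below. This minimal illustration uses plain CIE 1976 Delta E*ab on a toy list of Lab pixels; the spatial contrast-sensitivity filtering that makes the metric s-CIELAB would precede this step and is omitted here:

```python
import math

def deltaE_ab(lab1, lab2):
    """CIE 1976 Delta E*ab: Euclidean distance in L*a*b*."""
    return math.dist(lab1, lab2)

def flat_field_error_map(lab_pixels):
    """For a scanned flat field, estimate the 'ideal print' as the global
    mean color and return the per-pixel Delta E*ab against it (the simple
    reference estimate described in the abstract)."""
    n = len(lab_pixels)
    mean = tuple(sum(p[i] for p in lab_pixels) / n for i in range(3))
    return [deltaE_ab(p, mean) for p in lab_pixels]

# Toy 4-pixel flat field with one slightly darker pixel:
# mean L* = 49, so the error map is [1, 1, 1, 3].
pixels = [(50.0, 0.0, 0.0)] * 3 + [(46.0, 0.0, 0.0)]
errs = flat_field_error_map(pixels)
assert errs == [1.0, 1.0, 1.0, 3.0]
```

Statistics of this map (mean, percentiles, maxima) are then the "more manageable dataset" the abstract mentions.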
A perceptual quality metric for color-interpolated images
An objective image quality metric can be used to compare the output of different image processing algorithms, but objective measures are not always well correlated with subjective image quality assessment. The latter requires human observers, so objective methods able to emulate the Human Visual System (HVS) better than classical measures are preferred. In this paper, a full-reference objective metric based on perceptual criteria and oriented to demosaiced images is proposed. The basic idea is to model the main artifacts produced by the interpolation process, taking into account the HVS's sensitivity to the typical aliasing and zipper defects. The proposed technique has been compared to the DE94 CIELAB metric. Furthermore, two subjective tests have been performed, one on the color aliasing artifact and one on the zipper effect. The experimental results highlight that the quality scores obtained by the proposed measures follow a trend similar to the DE94 CIELAB metric, and the subjective tests are in accordance with these results. This technique is useful for evaluating the quality of the interpolation techniques implemented in the image processing pipelines of different digital still cameras.
The effect of opponent noise on image quality
A psychophysical experiment was performed examining the effect of luminance and chromatic noise on perceived image quality. The noise was generated in a recently developed isoluminant opponent space. Five spatial-frequency octave bands centered at 2, 4, 8, 16, and 32 cycles per degree (cpd) of visual angle were generated for each of the luminance, red-green, and blue-yellow channels, with two levels of contrast examined at each band. Overall there were 30 noise-degraded images plus one "original" image. Four different image scenes were used in a paired-comparison experiment in which observers were asked to select the image that appeared to be of higher quality. The paired-comparison data were used to generate interval scales of image quality using Thurstone's Law of Comparative Judgment. These interval scales provide insight into the effect of noise on perceived image quality. Averaged across the scenes, the original noise-free image was determined to be of highest quality. While this result is not surprising on its own, examining several of the individual scenes shows that adding low-contrast blue-yellow isoluminant noise does not statistically decrease image quality and can result in a slight increase in quality.
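The conversion from paired-comparison counts to an interval scale can be sketched with Thurstone's Case V solution (proportions preferred mapped through the inverse normal CDF and averaged). This is a textbook simplification assuming complete data and no unanimous cells, not the authors' exact analysis:

```python
from statistics import NormalDist

def thurstone_case_v(win_matrix):
    """Thurstone Case V interval scale from a paired-comparison win
    matrix, where win_matrix[i][j] = number of times stimulus i was
    preferred over j. Assumes complete data and no unanimous cells
    (p = 0 or 1 would give infinite z-scores)."""
    n = len(win_matrix)
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                total = win_matrix[i][j] + win_matrix[j][i]
                p = win_matrix[i][j] / total
                z[i][j] = NormalDist().inv_cdf(p)
    # Scale value of each stimulus = mean z over its comparisons.
    return [sum(row) / n for row in z]

wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]   # 10 trials per pair
scale = thurstone_case_v(wins)
assert scale[0] > scale[1] > scale[2]   # item 0 scales highest
```

The resulting values are on an interval scale, so only differences between stimuli are meaningful, which matches how the abstract interprets its quality scales.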
Perceptual color noise formulation
Image noise is one of the important image quality metrics for evaluation and optimization in color reproduction systems. While noise evaluation in psychometric lightness L* works well for black-and-white images, it is insufficient for color images. A perceptual color noise evaluation equation was derived to extend noise evaluation to CIELAB color space, incorporating chromatic noise components. Psychophysical experiments were designed to evaluate color noise subjectively. In the first step, Weber's Law and Fechner's Law were used to generate a standard ruler with perceptually equal interval steps, which served to anchor a numerical image quality rating scale for the subjective rating experiment in the second step. An objective noise evaluation equation incorporating the noise sensitivity functions modeled from the experimental data was found to correlate better with subjective evaluation. A more robust noise evaluation equation will be derived in the future, based on these psychophysical experiment techniques and data.
An investigation of perceived sharpness and sharpness metrics
Sharpness is an important attribute that contributes to the overall impression of image quality. As digital photography becomes more and more popular, digital photo enhancement has become a topic of great interest. In this paper, we investigate two issues related to digital photo sharpness: 1) how do we quantitatively measure the sharpness of a digital image? 2) what is the preferred sharpness of a digital image, and what is the relation between preferred sharpness and the sharpness detection threshold? Both issues are of practical use to the digital photography market. First, we present the design and properties of three sharpness metrics to answer the first question. Next, we describe psychophysical experiments to investigate the second question. It is found that 1) the sharpness metrics Digital Sharpness Scale (DSS) and Average Edge Transition Slope (AETS) are highly correlated with perceived sharpness; 2) both DSS and AETS predict sharpness equality with acceptable error; 3) the sharpness detection threshold is relatively consistent across subjects and across image contents, compared with the sharpness preference; 4) the average level of preferred sharpness is consistently higher than the detection threshold across image contents and across subjects, implying that observers in general prefer a sharpened image to the original; and 5) the preferred level of sharpness depends strongly on image content.
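The abstract names an Average Edge Transition Slope (AETS) metric but not its formula; one plausible sketch of an edge-slope style sharpness measure is the root-mean-square pixel-to-pixel slope across an edge profile, so that abrupt transitions score higher than gradual ones. This is an illustrative guess at the family of metric, not the paper's definition:

```python
import math

def edge_transition_slope(profile):
    """A hypothetical edge-slope sharpness measure in the spirit of an
    AETS-style metric (the exact formula is not given in the abstract):
    root-mean-square pixel-to-pixel slope across an edge profile.
    Sharper (steeper) edges yield larger values."""
    slopes = [b - a for a, b in zip(profile, profile[1:])]
    return math.sqrt(sum(s * s for s in slopes) / len(slopes))

sharp  = [0, 0, 10, 100, 190, 200, 200]   # abrupt edge
blurry = [0, 20, 60, 100, 140, 180, 200]  # gradual edge
assert edge_transition_slope(sharp) > edge_transition_slope(blurry)
```

Note that a plain mean of absolute slopes would not work here: for any monotone edge it sums to the edge height regardless of steepness, which is why the squared-slope form is used in this sketch.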
Perception-based line quality measurement
Wencheng Wu, Edul N. Dalal
It is well known that many sub-attributes of line quality contribute to the perception of overall line quality, but their relative importance is not clear, nor has a method been available for combining them into one representative number for overall line quality. To address these issues, we have designed and conducted a series of psychophysical experiments exploring the shape of the human visual transfer functions (VTFs) relevant to the perception of three selected sub-attributes: lumpiness, waviness, and raggedness. We found that human sensitivity to these sub-attributes can be represented by VTFs of the same shape but with relative perception weighting factors of 6:4:3, respectively. Based on this, we have proposed an approach to assess overall line quality. In our method, we first pre-process the acquired line image and extract profiles relevant to line quality measurement. A set of corresponding VTFs is then applied to these profiles to calculate the various sub-attributes. Finally, overall line quality is determined by a weighted combination of these individual sub-attributes. These preference weights (1:1:3 for lumpiness, waviness, and raggedness, respectively) are different from the perception weights mentioned earlier. Our preliminary results show that this measurement correlates well with human perception of overall line quality, for the sub-attributes studied.
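The final pooling step can be sketched as below, using the 1:1:3 preference weights quoted in the abstract. The linear (weighted-average) form of the combination is an assumption for illustration; the paper may pool the sub-attributes differently:

```python
def overall_line_quality(lumpiness, waviness, raggedness,
                         weights=(1, 1, 3)):
    """Weighted combination of line-quality sub-attributes using the
    1:1:3 preference weights reported in the abstract. The linear
    pooling form is an assumption, not the paper's stated rule."""
    w1, w2, w3 = weights
    return (w1 * lumpiness + w2 * waviness + w3 * raggedness) / (w1 + w2 + w3)

# Raggedness carries triple weight, so a ragged line is penalized most
# even when all three degradation scores are comparable.
q = overall_line_quality(lumpiness=0.2, waviness=0.2, raggedness=0.6)
assert abs(q - 0.44) < 1e-9
```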
Standards
Tone-transfer (OECF) characteristics and spatial frequency response measurements for digital cameras and scanners
Measurement of the spatial frequency response (SFR) of digital still cameras by slanted-edge analysis has been established for several years. The method, described in standard ISO 12233, has also been applied to image acquisition subsystems such as document and print scanners. With the frequent application of the method and use of supporting software, questions often arise about the form of the input test image data. The tone-transfer characteristics of the system under test can influence the results, as can signal quantization and clipping. For this reason, the original standard called for a transformation of the input data prior to the slanted-edge analysis. The transformation is based on the measured opto-electronic conversion function (OECF) and can convert the image data to a reference-exposure signal space. This is often helpful when comparing different devices, if the intent is to do so in terms of the performance of optics, detector, and primary signal processing. We describe the use of the OECF and its inverse to derive the signal transformation in question. The influence of typical characteristics will be shown in several examples. It was found that, for test target data of modest contrast, the resulting SFR measurements were only moderately sensitive to the use of the inverse OECF transformation.
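The linearization step described above, mapping pixel code values back to a reference-exposure space via the inverse of a measured OECF, can be sketched as a table lookup with linear interpolation. The OECF table below is illustrative, not a measured characteristic of any device:

```python
from bisect import bisect_left

def inverse_oecf(code_values, oecf_codes, oecf_exposures):
    """Map pixel code values back to relative-exposure space by linearly
    interpolating the inverse of a measured OECF, as the ISO 12233
    procedure transforms edge data before slanted-edge analysis.
    Assumes oecf_codes is sorted ascending."""
    out = []
    for v in code_values:
        i = bisect_left(oecf_codes, v)
        if i == 0:
            out.append(oecf_exposures[0])
            continue
        c0, c1 = oecf_codes[i - 1], oecf_codes[i]
        e0, e1 = oecf_exposures[i - 1], oecf_exposures[i]
        t = (v - c0) / (c1 - c0)
        out.append(e0 + t * (e1 - e0))
    return out

# Illustrative gamma-like OECF: code value vs. relative exposure.
codes     = [0, 64, 128, 192, 255]
exposures = [0.0, 0.05, 0.18, 0.45, 1.0]
lin = inverse_oecf([128, 192], codes, exposures)
assert all(abs(a - b) < 1e-9 for a, b in zip(lin, [0.18, 0.45]))
```

Running the slanted-edge analysis on linearized data of this kind is what makes SFR results comparable across devices with different tone curves, which is the point the abstract makes.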
Extension of the ISO 12233 SFR measurement technique to provide MTF bounds for critical imaging arrays
Robin B. Jenkin, Ralph E. Jacobson, Mark A. Richardson, et al.
The ISO 12233 SFR measurement technique provides a quick and reliable method to evaluate the SFR and, with slight modification, the MTF of digital imaging systems. This work demonstrates that the MTF provided by the technique approaches the theoretical result for a single imaging element once all other factors, such as image-forming optics, have been accounted for. Digital imaging is increasingly accepted as a solution for critical systems as issues regarding security and provenance are resolved; the evidential need for recording fingerprints is just one example. Jenkin et al. previously showed that a lower bound on array MTF can be derived taking into account the sampling pitch, aperture, and phase differences between the signal and imaging elements. This is a pessimistic estimate of array performance and, as such, holds advantages for the design of critical systems. This work shows a simple extension of the ISO 12233 SFR measurement technique to provide an estimate of the lower bound and compares it to known exposures of a simple Monte Carlo model and a commercial digital camera.
ISO 12232 revision: determination of chrominance noise weights for noise-based ISO calculation
Sean C. Kelly V.D.M., Brian W. Keelan
Three ISO speeds for digital cameras, yielding the minimum, typical, and maximum exposures recommended for use, are defined in International Standard 12232, which is under revision. The typical and minimum acceptable exposures are based upon signal-to-noise criteria, described in ISO 12232, in which visual (perceptually relevant) noise is computed as a weighted sum of variances from a luminance (Y) and two chrominance (R-Y, B-Y) channels. The weights of the two chrominance variances, C1 (R-Y) and C2 (B-Y), are in need of reevaluation because of: (1) changes in the linearization procedures being introduced in the revision of ISO 12232; (2) the limited nature of the original experiment used to determine C1 and C2; and (3) suspicion that the initial C1 and C2 values were too high, overemphasizing the contribution of chrominance noise to perception. This paper describes the image simulations, psychophysical experiment, and analyses conducted to determine new values for the chrominance weights to be used in the revised standard. The values obtained, C1 = 0.279 (standard error = SE = 0.025) and C2 = 0.088 (SE = 0.017), are approximately one-half as large as those in the original version of ISO 12232. Systematic variation of the weights with the color of noise-sensitive uniform areas in the scenes is observed, but the effect is small and does not have a practical impact on the standard.
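The weighted visual-noise sum described in the abstract can be written directly, using the revised C1 and C2 values it reports. Only the formula's form (variance sum with chrominance weights) and the weights themselves come from the abstract; the sample variances are invented for illustration:

```python
def visual_noise(var_y, var_ry, var_by, c1=0.279, c2=0.088):
    """Visual noise as the weighted sum of channel variances described
    in the abstract: luminance (Y) variance plus chrominance (R-Y, B-Y)
    variances weighted by the newly determined C1 and C2."""
    return var_y + c1 * var_ry + c2 * var_by

# Illustrative variances; with the revised weights, chrominance noise
# contributes roughly half of what the original weights would give.
vn = visual_noise(1.0, 0.5, 0.5)
assert abs(vn - (1.0 + 0.279 * 0.5 + 0.088 * 0.5)) < 1e-12
```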
Update on the INCITS W1.1 standard for evaluating the color rendition of printing systems
The color rendition ad hoc team of INCITS W1.1 is working to address issues related to color and tone reproduction for printed output and its perceptual impact on color image quality. The scope of the work includes accuracy of specified colors with an emphasis on memory colors, color gamut, and the effective use of tone levels, including issues related to contouring. The team has identified three sub-attributes of color rendition: 1) color quantization, defined as the ability to merge colors where needed; 2) color scale, defined as the ability to distinguish color where needed; and 3) color fidelity, defined as a balance of colorimetric accuracy, in cases where a reference exists, and pleasing overall color appearance. Visual definitions and descriptions of how these sub-attributes are perceived have been developed. The team is presently working to define measurement methods for the sub-attributes, with the focus in 2004 being on color fidelity. This presentation will review the definitions and appearance of the proposed sub-attributes and the progress toward developing test targets and associated measurement methods to quantify the color quantization sub-attribute. The remainder of the discussion will focus on the recent progress made in developing measurement methods for the color fidelity sub-attribute.
Gloss uniformity measurement update for ISO/IEC 19751
Yee S. Ng, Chengwu Cui, Chunghui Kuo, et al.
To address the standardization issues of perceptually based image quality for printing systems, ISO/IEC JTC1/SC28, the standardization committee for office equipment, chartered the W1.1 project with the responsibility of drafting a proposal for an international standard for the evaluation of printed image quality. A working draft, ISO/WD 19751-1, Office Equipment - Appearance-based image quality standards for printers - Part 1: Overview, Procedure and Common Methods (2004), provides the overview of this multi-part appearance-based image quality standard. One of the tasks of the multi-part ISO 19751 standard is to address appearance-based gloss and gloss uniformity (in ISO 19751-2). This paper summarizes the current status and technical progress since the last two updates. In particular, we discuss our attempt to include the 75-degree gloss (G75) objective measurement in differential gloss and within-page gloss uniformity. The result of a round-robin experiment involving objective measurement of differential gloss using the G60 and G75 gloss measurement geometries is described. The results of two perception-based round-robin experiments, relating to the effect of haze on the perception of gloss and to gloss artifacts (gloss streaks/bands, gloss graininess/mottle), are discussed.
Perceptual Image Quality
Assessment of full color image quality with singular value decomposition
Aleksandr Shnayderman, Ahmet M. Eskicioglu
In subjective evaluation of distorted images, human observers usually consider the type of distortion, the amount of distortion, and the distribution of error. We recently proposed an image quality measure, M-SVD, for gray-scale images that can be used as a graphical tool to predict the distortion based on these three factors, and also as a numerical tool to assess the overall visual quality of the distorted image. It performs better than two state-of-the-art metrics, Q and MSSIM, especially when computing the correlation with mean opinion score across different types of distortion. The test image was degraded using six types of distortion (JPEG, JPEG 2000, Gaussian blur, Gaussian noise, sharpening, and DC-shifting), each at five different levels of intensity. In this paper, we extend M-SVD to full color images using a color model that decouples the color and gray-scale information in an image. Our experiments show that using only the luminance component, the measure outperforms Q and MSSIM. When we also use the two chrominance layers, the performance of M-SVD becomes slightly higher, whereas the performance of Q and MSSIM is degraded. This indicates that the color components may also contribute to the performance of the proposed measure.
Measurements and Modeling I
A novel technique of image quality objective measurement by wavelet analysis throughout the spatial frequency range
An essential determinant of the value of surrogate digital images is their quality, and image quality measurement has become crucial for most image processing applications. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity; however, there is no current standard, objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement by wavelet analysis throughout the spatial frequency range. This is done by a detailed analysis of an image over a wide range of spatial frequency content, using a combination of modulation transfer function (MTF), brightness, contrast, saturation, sharpness, and noise as a more revealing metric for quality evaluation. A fast lifting wavelet algorithm is developed for computationally efficient spatial frequency analysis, so that fine image detail corresponding to high spatial frequencies, and image sharpness with regard to lower and mid-range spatial frequencies, can be examined and compared accordingly. The wavelet frequency decomposition essentially extracts edge features from the sub-band images. The technique provides a means to relate the quality of an image to its interpretation and quantification throughout the frequency range, with the noise level estimated to assist the quality analysis. Experimental results of using this method for image quality measurement exhibit good correlation with subjective visual quality assessments.
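The "fast lifting wavelet" idea can be illustrated with one level of the lifting-scheme Haar transform on a 1-D signal: split into even and odd samples, predict the odds from the evens (the high-frequency detail), then update the evens (the low-frequency approximation). This is a generic textbook sketch of the lifting scheme, not the paper's specific algorithm:

```python
def haar_lifting_step(signal):
    """One level of the lifting-scheme Haar transform on an even-length
    sequence: detail = odd - even (predict), approx = even + detail/2
    (update). Detail captures high spatial frequencies; approx is the
    half-rate low-frequency band fed to the next level."""
    evens = signal[0::2]
    odds = signal[1::2]
    detail = [o - e for o, e in zip(odds, evens)]        # predict step
    approx = [e + d / 2 for e, d in zip(evens, detail)]  # update step
    return approx, detail

approx, detail = haar_lifting_step([2, 4, 6, 8, 10, 12, 14, 16])
assert approx == [3.0, 7.0, 11.0, 15.0]   # local means (low-pass band)
assert detail == [2, 2, 2, 2]             # local differences (high-pass band)
```

Applying the step recursively to `approx` yields the multi-level sub-bands in which edge features and noise can be examined per frequency range, as the abstract describes; lifting does this in place with only additions and shifts, which is the source of its speed.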
Color image quality in projection displays: a case study
Recently the use of projection displays has increased dramatically in applications such as digital cinema, home theatre, and business and educational presentations. Even though the color image quality of these devices has improved significantly over the years, it is still common for the projected colors to differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjovik University College was tested under four different conditions: dark and light room, with and without an ICC profile. To learn more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the measurement results were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflection and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When a profile is used, the color differences between the projectors become smaller and the colors appear more correct. For one device, the average ΔE*ab color difference relative to a white reference was reduced from 22 to 11; for another, from 13 to 6. Blue colors show the largest variations among the projection displays and are therefore harder to predict.
Video quality comparison on LCD monitors
T. Jeong, J. Choe, J. Lim, et al.
In this paper, we investigate video quality on various LCD monitors. There is a large variance in video quality among LCD monitors, and due to this unavoidable variance there has been concern about the stability and repeatability of subjective and objective testing for them. We performed subjective testing using the DSCQS method and compared subjective quality ratings on five LCD monitors. The experimental results show that the correlation coefficients among the DMOS values for the different monitors are acceptably high. Thus, it may be possible to develop models for objective measurement of video quality on LCD monitors. In the paper, physical parameters such as color temperature, contrast, brightness, and response time will be presented and thorough analyses provided.
Measurements and Modeling II
Comparison and evaluation of quality criteria for hyperspectral imagery
Emmanuel Christophe, Dominique Leger, Corinne Mailhes
Hyperspectral data has attracted growing interest over the past few years. However, applications for hyperspectral data are still in their infancy, and handling its significant size presents a challenge for the user community. To enable efficient data compression without losing the potential of hyperspectral data, the notion of data quality is crucial for the development of applications. To assess data quality, quality criteria relevant to end-user applications are required. This paper proposes a method to evaluate quality criteria, the purpose being to provide criteria that correspond well to the impact of degradation on end-user applications. Several quality criteria adapted to the hyperspectral context are evaluated. Finally, five criteria are selected that give a good representation of the nature and level of the degradation affecting hyperspectral data.
Simple and effective method to quantify the optical performance of camera phones
Dong-Xue Michael Wang, Kevin Johnson
A simple method based on binary line-pair patterns to evaluate the image quality of camera phones is proposed in this paper, in which line pairs at four different spatial resolutions are located in the four quadrants of a test chart. The Modulation Transfer Function (MTF) of a camera phone can thus be tested at four spatial resolutions over the entire field of view (FOV) simultaneously. In addition, other key parameters of optical imaging quality, such as image distortion and relative illumination, can be characterized at the same time. Moreover, possible color moiré due to color aliasing from the RGB channels can be detected and evaluated using the same test chart. Wavelet transforms on the gray levels of test images, based on the Haar wavelet and the Meyer wavelet, are implemented in this paper. The test image is decomposed into the spaces Vj (j = -3, -2, -1), where j = -3 corresponds to a lower-resolution space. Using multi-resolution wavelet analysis, the 100 lp/mm region can be partially recovered in a low-resolution space, for example V-2. The results are shown. Some noisy ripples in the original image are also magnified in the lower-resolution space.
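The chart-based MTF test reduces, at each line-pair frequency, to comparing the captured modulation against the target's modulation. A minimal sketch (the profiles below are invented sample values, not measurements):

```python
def modulation(profile):
    """Michelson modulation of a line-pair intensity profile:
    (max - min) / (max + min)."""
    hi, lo = max(profile), min(profile)
    return (hi - lo) / (hi + lo)

# Dividing the captured modulation by the target modulation gives an
# MTF estimate at that line-pair frequency; repeating per quadrant
# covers the four spatial resolutions on the chart.
target   = [0, 255, 0, 255]        # ideal binary line pairs
captured = [60, 200, 60, 200]      # softened by the camera optics
mtf = modulation(captured) / modulation(target)
assert 0 < mtf < 1                 # contrast lost, as expected
```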
Image quality evaluation in the field of digital film restoration
M. Chambah, C. Saint-Jean, F. Helt
Digital film restoration is a significant hope for cinematographic archivists. Technical progress, with more powerful machines at lower cost, nowadays makes it possible to restore cinematographic archives digitally at an acceptable pace. Several digital restoration techniques have emerged during the last decade and have become more and more automated, but the evaluation of restoration remains a rarely tackled issue. After presenting the various defects that can affect cinematographic material and the field of digital film restoration, we discuss the issues of image quality evaluation in digital film restoration and suggest some reference-free objective measures.
Method for remote monitoring of transmitted video quality using very low bitrate data circuit based on the reduced reference method
Osamu Sugimoto, Ryoichi Kawada, Atsushi Koike, et al.
A method to estimate the PSNR of transmitted video based on the reduced reference method is proposed. We previously studied PSNR estimation using an image feature extraction method based on spread spectrum techniques and the Walsh-Hadamard transform (WHT); however, that method is problematic because a large bandwidth is required to transmit the image features. We therefore propose an improved method that reduces the amount of feature information. Each image feature is expressed by the parity of the quantized level of a WHT coefficient and thus requires only 1 bit, whereas the conventional method requires 8-11 bits per feature. Computer simulations show that precise picture quality evaluation is possible at one eighth of the data-circuit bitrate of the conventional method.
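A minimal sketch of the 1-bit parity feature might look as follows; the naive matrix WHT and the quantizer step are assumptions, since the abstract does not specify the quantization or the spread-spectrum embedding:

```python
import numpy as np

def wht_matrix(n):
    """Walsh-Hadamard matrix of order n (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return H

def parity_features(block, step=8.0):
    """1-bit reduced-reference feature per coefficient: the parity of the
    quantized WHT level (the quantizer step here is an assumption)."""
    n = block.shape[0]
    H = wht_matrix(n)
    coeff = H @ block @ H / n              # 2-D WHT of the pixel block
    return np.floor(coeff / step).astype(int) & 1

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
bits = parity_features(block)              # 64 bits instead of 64 x 8-11 bits
```

The receiver would compute the same parities on the decoded video and use the bit-disagreement rate to estimate the PSNR, which is the general idea behind such reduced-reference schemes.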
Array scanner as microdensitometer surrogate: a deal with the devil or… a great deal?
Inexpensive and easy-to-use linear- and area-array scanners have frequently substituted for densitometers in low-frequency (i.e., large-area) hard-copy image metrology. Increasingly, they are also being tasked with high-spatial-frequency, image-microstructure metrology, a role traditionally reserved for high-performance microdensitometers that use microscope optics, photomultiplier tubes (PMT), and log amps. Their adoption for such use is hard to resist, given their convenience: their high speed, large scan areas, auto-focus, discomfitingly low cost, and low operator skill requirements make one question whether their use for this purpose is somehow too good to be true. Judging their limitations with confidence requires a comprehensive signal and noise spatial-frequency performance evaluation with respect to the available driver options. This paper outlines and demonstrates evaluation techniques that use existing ISO metrology standards for modulation transfer function (MTF), noise, and dynamic range, with a comparison to a Photometric Data Systems (PDS) microdensitometer.
Development of multispectral scanner by using LEDs array for digital color proof
We developed a multispectral scanner using an array of LEDs with different spectral radiant distributions to measure the spectral characteristics of printing proofs with high accuracy. Five kinds of LEDs were selected from among 40 commercially available LEDs so as to minimize the color difference ΔE*94 between the measured and estimated reflectance spectra of 81 color charts, using polynomial regression and clustering methods. The reflectance spectra of 928 color charts were then measured and estimated using the scanner and the Wiener estimation method. As a result, the average color difference ΔE*94 was 1.23 when the 81 color data were used to calculate the Wiener estimation matrix.
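The Wiener estimation step can be illustrated with a least-squares estimation matrix learned from training data; the sensor sensitivities, chart spectra, and dimensions below are synthetic stand-ins, not the scanner's measured values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_wl, n_ch, n_train = 31, 5, 81           # wavelengths, LED channels, charts
S = rng.random((n_wl, n_train))           # training reflectance spectra (synthetic)
A = rng.random((n_ch, n_wl))              # channel spectral sensitivities (assumed)
R = A @ S                                 # scanner responses to the training charts

# Estimation matrix from the training statistics: W = S R^T (R R^T)^-1
W = S @ R.T @ np.linalg.inv(R @ R.T)

s_true = rng.random(n_wl)                 # an unseen test spectrum
s_hat = W @ (A @ s_true)                  # recover 31 wavelengths from 5 responses
```

In the paper's workflow, the 81-chart training set plays the role of `S` here, and the recovered spectra would then be converted to CIELAB to compute ΔE*94 against measurement.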
Application of Tatian’s method to slanted-edge MTF measurement
The ISO 12233 method for measuring the spatial frequency response (SFR) of digital still cameras and scanners is based on the analysis of slanted-edge image features. The procedure applies a form of edge-gradient analysis to an estimated edge-spread function. As with all measurements, image noise can introduce bias error and variation into the resulting camera SFR and modulation transfer function (MTF). It is often pointed out that applying a derivative filter to the estimated edge-spread function, as is done in the ISO method, amplifies this image noise. To reduce the influence of noise on the measurement, data averaging and fitting have been proposed. One method for edge-gradient analysis, reported by Tatian, avoids the discrete derivative step: the MTF is expressed as a trigonometric series whose elements are estimated from the measured edge-spread function. We describe the application of this method as an intermediate step in the ISO procedure. The method was benchmarked for both synthetic edges and captured test images. Results indicate good agreement between the two estimates for low-noise image data. For higher noise levels, however, Tatian's method is found to be sensitive to the selection (cropping) of the input data array. For the conditions tested, we found no clear advantage of this method over the current ISO procedure. For other applications, the slanted-edge analysis provides a front end to Tatian's method and others based on parametric modeling or statistical fitting.
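The noise-amplifying derivative step of the standard edge-gradient analysis, which Tatian's method is designed to avoid, can be sketched as follows; the synthetic tanh edge is an illustrative assumption, not the benchmarked test data:

```python
import numpy as np

def mtf_from_esf(esf, dx=1.0):
    """ISO-style edge-gradient analysis: differentiate the edge-spread
    function (ESF) to get the line-spread function, then take the
    magnitude of its Fourier transform, normalized to 1 at DC."""
    lsf = np.gradient(esf, dx)            # the noise-amplifying derivative step
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

x = np.linspace(-8.0, 8.0, 257)
esf = 0.5 * (1.0 + np.tanh(x))            # synthetic noise-free slanted-edge ESF
mtf = mtf_from_esf(esf, dx=x[1] - x[0])
```

On a noise-free edge this works well; add noise to `esf` and the gradient amplifies it, which is exactly the motivation for averaging, fitting, or a series-based estimator such as Tatian's.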
Analysis of self-correcting active pixel sensors
Khaled Salama, Ahmad Al-Yamani
This paper evaluates the operation of the self-correcting active pixel sensors presented in [6] using both signal-to-noise ratio and dynamic range figures. The evaluation is based on a simplified active pixel sensor (APS) model. We show that, even in the absence of stuck faults (i.e., no errors), the performance of the system suffers considerable degradation, especially at low illumination (i.e., typical indoor scenes). We use the same model to quantify the number of defective pixels under which self-correction is beneficial and to evaluate the quality of the resultant image.
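The SNR and dynamic range figures used in such evaluations can be illustrated with the standard shot-noise-limited expressions; the full-well and read-noise values below are assumptions for illustration, not parameters from [6]:

```python
import math

def aps_snr_db(signal_e, read_noise_e=10.0):
    """Shot-noise-limited APS SNR in dB for a mean signal of signal_e
    electrons (the read-noise value is an assumed figure)."""
    return 20 * math.log10(signal_e / math.sqrt(signal_e + read_noise_e**2))

def aps_dr_db(full_well_e=20000.0, noise_floor_e=10.0):
    """Dynamic range in dB: full-well capacity over the noise floor."""
    return 20 * math.log10(full_well_e / noise_floor_e)

snr_indoor = aps_snr_db(100.0)      # low illumination: SNR is poor
snr_bright = aps_snr_db(10000.0)    # bright scene: shot-noise limited
```

The steep drop in SNR at low signal levels is why any extra signal loss introduced by a self-correction circuit hurts most in typical indoor scenes.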
Standards
Cares and concerns of CIE TC8-08: spatial appearance modeling and HDR rendering
The International Commission on Illumination (CIE) is dedicated to providing discussion, information, and guidance in the science and art of light and lighting. The terms of reference of Division 8 of the CIE are "to study procedures and prepare guides and standards for the optical, visual and metrological aspects of the communication, processing, and reproduction of images, using all types of analogue and digital imaging devices, storage media and imaging media." Along those lines, Technical Committee (TC) 8-08 is tasked with developing guidelines and testing methods for using spatial or image appearance models, specifically for use with High Dynamic Range (HDR) images. The goal of TC8-08 is not to create a CIE-recommended image appearance model, but rather to design and conduct experimental techniques for evaluating these models.
Poster Session
Visualization of distortion using interference fringe patterns and the correction of chromatic aberration using a Fresnel zone plate with microdisplay
Distortion was visualized by comparing distorted and undistorted interference fringe patterns produced with a Michelson interferometer. The half-image of this symmetric interference fringe was automatically transformed into a continuous phase image by automated carrier fringe analysis combined with an FFT technique and a phase unwrapping method; the phase image of the distortion was then computed with a MATLAB program. For the correction of chromatic aberration in a lens, a circular Fresnel zone plate was designed with a MATLAB program. A zone plate image with high resolution over the zone plate's curved surface was obtained by increasing the image size (6000 by 6000 pixels) and by using a camera-down system. The designed zone plate was fabricated by photography, and the surface of the fabricated zone plate was observed with a WYKO interferometric microscope. Furthermore, the chromatic aberration of a lens combined with the fabricated zone plate was examined in practice, and the experiment showed reduced chromatic aberration.
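The zone-plate design can be sketched from the standard Fresnel zone relation r_n = sqrt(n λ f); the wavelength, focal length, and raster size below are illustrative assumptions, not the fabricated plate's parameters:

```python
import numpy as np

wavelength = 550e-9                  # design wavelength in m (assumed)
focal_length = 0.1                   # target focal length in m (assumed)

# Radius of the n-th zone edge: r_n = sqrt(n * lambda * f)
n = np.arange(1, 11)
r = np.sqrt(n * wavelength * focal_length)

# Binary zone-plate raster: transparent where floor(rho^2 / (lambda f)) is even
N, pixel = 512, 4e-6                 # raster size and pixel pitch (assumed)
c = (np.arange(N) - N / 2) * pixel
X, Y = np.meshgrid(c, c)
zone = (np.floor((X**2 + Y**2) / (wavelength * focal_length)) % 2 == 0)
```

Because the focal length of such a plate scales inversely with wavelength, pairing it with a refractive lens of opposite chromatic dispersion is what allows the combination to reduce chromatic aberration.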
Multiprojector tiled display wall calibration with a camera
Chao Li, Hai Lin, Jiaoying Shi
Recent developments in computer graphics hardware and distributed parallel rendering research have greatly improved the rendering capabilities of graphics workstations and PC clusters. However, the display resolution of monitors is still far from sufficient and has become the bottleneck of visualization. Recently, more and more people are using multiple projectors to form a large display wall providing high display resolution. One essential problem of such systems is how to calibrate the projectors so that the whole display wall is seamless. Traditional software calibration algorithms have three main problems: re-rendering in the overlapping regions, over-illumination in the overlapping regions, and lack of generality. In this paper, we introduce our multi-projector tiled display wall calibration system, which uses a digital camera and focuses on solving these three problems. For the first two, we divide the overlapping regions according to our dividing algorithms so that each projector projects only its part of the overlapping image, and we further sub-divide each part to make the mosaic seamless. For the last problem, we adopt the idea of the open-source software VNC and implement our calibration at the level of the Windows desktop, thus making the calibration process application-independent.
A human visual system model for no-reference digital video quality estimation
No-reference metrics are very useful for in-service streaming applications. In this paper a blind measure for video quality assessment is presented. The proposed approach takes into account HVS luminance masking, contrast sensitivity, and temporal masking. The video distortion level is then computed by evaluating blockiness, blurring, and motion artifacts. A global quality index is obtained using a multi-dimensional pooling algorithm (block, temporal window, frame, and sequence levels). Different video standards and several compression ratios have been used. A non-linear regression method has been derived in order to obtain high linear and rank-order correlation factors between human observer ratings and the proposed HVS-based index. Validation tests have been carried out to assess the index's performance and computational complexity. Experimental results show that high correlation factors are obtained using the HVS models.
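As an illustration of the blockiness term (the paper's actual artifact measures and HVS weighting are not given in the abstract), a minimal boundary-gradient blockiness score might look like:

```python
import numpy as np

def blockiness(frame, b=8):
    """Crude no-reference blockiness score (an illustration, not the
    paper's metric): mean luminance gradient across b-pixel block
    boundaries divided by the mean gradient inside blocks. A score
    near 1 means no visible blocking."""
    d = np.abs(np.diff(frame.astype(float), axis=1))   # horizontal gradients
    cols = np.arange(d.shape[1])
    on_edge = (cols % b) == (b - 1)                    # block-boundary columns
    return d[:, on_edge].mean() / (d[:, ~on_edge].mean() + 1e-9)

# Synthetic frames: constant 8x8 blocks score far above a smooth ramp
blocky = np.kron(np.arange(64.0).reshape(8, 8), np.ones((8, 8)))
smooth = np.tile(np.arange(64.0), (64, 1))
```

In a full metric such per-block scores would then be pooled over blocks, temporal windows, frames, and the whole sequence, as the abstract's multi-dimensional pooling describes.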