- Front Matter: Volume 8652
- Color Spaces
- Capturing Color
- Applications
- Reflectance
- Watching Colours
- Halftoning
- Printing
- Multispectral
- Display and Materials
Front Matter: Volume 8652
This PDF file contains the front matter associated with SPIE Proceedings Volume 8652, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Color Spaces
A spherical perceptual color model
The paper introduces a transformed spherical model to represent the color space. A circular cone with a spherical top
tightly circumscribing the RGB color cube is equipped with a spherical coordinate system. Every point in the color cube
is represented by three spherical coordinates, with the radius ρ measuring the distance to the origin, indicating the
brightness attribute of the color, the azimuthal angle Θ measuring the angle on the horizontal plane, indicating the hue
attribute of the color, and the polar angle θ measuring the opening of the circular cone with the vertical axis as its center,
indicating the saturation attribute of the color. Similar to the commonly used perceptual color models including the HSV
model, the spherical model specifies color by describing the color attributes recognized by human vision. The
conversions between the spherical model and the RGB color model are mathematically simpler than those of the HSV
model, and the interpretation of the model is also more intuitive. Most importantly, color changes are perceptually
smoother in the spherical color model than in existing perceptual color models.
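As a rough illustration of the coordinates the abstract describes, here is a minimal sketch of an RGB-to-spherical conversion. The alignment of the vertical axis with the achromatic diagonal and the opponent basis used for the hue angle are assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def rgb_to_spherical(rgb):
    """Convert an RGB triple to (rho, azimuth, polar) coordinates.

    Sketch of the spherical model: rho (brightness) is the distance to
    the black origin, the azimuthal angle encodes hue, and the polar
    angle measured from the gray axis of the RGB cube encodes
    saturation.  Taking the gray diagonal as the vertical axis is an
    assumption made for this illustration."""
    r, g, b = (float(c) for c in rgb)
    rho = np.sqrt(r * r + g * g + b * b)        # brightness
    # coordinate along the gray axis (1,1,1)/sqrt(3)
    gray = (r + g + b) / np.sqrt(3.0)
    chroma = np.sqrt(max(rho * rho - gray * gray, 0.0))
    polar = np.arctan2(chroma, gray)            # saturation angle
    # hue: angle of the projection in an opponent-like basis
    u = (2.0 * r - g - b) / np.sqrt(6.0)
    v = (g - b) / np.sqrt(2.0)
    azimuth = np.arctan2(v, u) % (2.0 * np.pi)  # hue angle
    return rho, azimuth, polar
```

Achromatic inputs sit on the vertical axis (polar angle zero), while pure primaries yield distinct hue angles.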
Chroma-preserved luma controlling technique using YCbCr color space
The YCbCr color space, composed of luma and chrominance components, is preferred for its ease of image processing.
However, the non-orthogonality between YCbCr components induces unwanted perceived chroma changes when luma
values are adjusted. In this study, a new method was designed to compensate for the unwanted chroma change caused by
luma adjustment. For six different YCC_hue angles, data points named ‘Original data’ were generated with uniformly
distributed luma and Cb, Cr values. Weight values were then applied to the luma values of the ‘Original data’ set,
resulting in a ‘Test data’ set, followed by calculation of a ‘new YCC_chroma’ having minimum CIECAM02 ΔC between
the original and test data. Finally, a mathematical model was developed to predict the amount of YCC_chroma needed to
compensate for the CIECAM02 chroma changes. This model was implemented in a luma-controlling algorithm that
maintains constant perceived chroma. The performance was tested numerically using data points and images. When the
CIECAM02 ΔC between ‘Original data’ and ‘Test data’ is compared before and after compensation, the result improves
by 51.69%. When the new model is applied to test images, there is a 32.03% improvement.
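For context, a small sketch of the non-orthogonality at issue, using the full-range BT.601 YCbCr matrix (that the paper uses this particular YCbCr variant is an assumption; any YCbCr flavour shows the same effect): scaling luma alone while holding Cb and Cr fixed does not scale the RGB values uniformly, which is the perceived-chroma side effect the method compensates.

```python
import numpy as np

# Full-range BT.601 RGB -> YCbCr matrix (an illustrative assumption).
M = np.array([[ 0.299,     0.587,     0.114    ],
              [-0.168736, -0.331264,  0.5      ],
              [ 0.5,      -0.418688, -0.081312]])

def scale_luma(rgb, w):
    """Scale only the luma channel by weight w while holding Cb and Cr
    fixed, then convert back to RGB.  Because the transform is not
    orthogonal, the result is not a uniform scaling of the RGB values."""
    ycc = M @ np.asarray(rgb, dtype=float)
    ycc[0] *= w
    return np.linalg.solve(M, ycc)
```

With `w = 1` the round trip is the identity; with `w < 1` a saturated colour comes back with a different hue/chroma balance than a plain RGB scaling would give.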
Analysis of a color space conversion engine implemented using dynamic partial reconfiguration
Ryan Toukatly,
Dorin Patru,
Eli Saber,
et al.
Dynamic Partial Reconfiguration allows parts of a Field Programmable Gate Array to be reconfigured, while the rest of
the system continues uninterrupted operation. A Color Space Conversion Engine is a digital image-processing pipeline,
which requires frequent reconfiguration of some, but not all of its stages. Therefore, it is a digital signal processing
system that presumably can take advantage of dynamic partial reconfiguration. This paper describes the necessary
design changes, testing, and performance analysis of a color space conversion engine implemented on a field
programmable gate array using dynamic partial reconfiguration. The analysis provides insight into the operational
scenarios in which dynamic partial reconfiguration is advantageous or not.
Capturing Color
Color reproductivity improvement with additional virtual color filters for WRGB image sensor
We have developed a high accuracy color reproduction method based on an estimated spectral reflectance of objects
using additional virtual color filters for a wide dynamic range WRGB color filter CMOS image sensor. The four virtual
color filters are created by multiplying the spectral sensitivity of the White pixel by Gaussian functions with different
central wavelengths and standard deviations, and the virtual sensor outputs of those virtual filters are estimated from the
four real output signals of the WRGB image sensor. The accuracy of color reproduction was evaluated with a Macbeth
Color Checker (MCC), and the averaged value of the color difference ΔEab of 24 colors was 1.88 with our approach.
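The construction of the virtual filters described above can be sketched directly: each virtual sensitivity is the White pixel's sensitivity windowed by a Gaussian. The centre wavelengths and standard deviations used in any call are free parameters, not the values chosen in the paper.

```python
import numpy as np

def virtual_filters(white_sensitivity, wavelengths, centers, sigmas):
    """Build virtual colour-filter sensitivities by multiplying the
    White pixel's spectral sensitivity by Gaussian windows, one per
    virtual filter.  Returns an array of shape (n_filters, n_samples)."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    centers = np.asarray(centers, dtype=float)[:, None]
    sigmas = np.asarray(sigmas, dtype=float)[:, None]
    gauss = np.exp(-0.5 * ((wavelengths[None, :] - centers) / sigmas) ** 2)
    return gauss * np.asarray(white_sensitivity, dtype=float)[None, :]
```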
Glare and shadow reduction for desktop digital camera capture systems
Thanh H. Ha,
Chyuan-Tyng Wu,
Peter Majewicz,
et al.
The quality of images of objects with significant 3D structure, captured at close range under a flash, may
be substantially degraded by glare and shadow regions. In this paper, we introduce an imaging system and
corresponding algorithm to address this situation. The imaging system captures three frames of the stationary
scene using a single camera in a fixed position, but with the illumination source in a different position for each
frame. The algorithm comprises two processes: shadow detection and image fusion. Shadow detection locates the
shadow regions; using the resulting shadow maps, image fusion produces a more complete final image. Our
experimental results show that in most cases, shadow and glare are markedly reduced.
Reducing flicker due to ambient illumination in camera captured images
The flicker artifact dealt with in this paper is the scanning distortion that arises when an image is captured by a digital
camera using a CMOS imaging sensor with an electronic rolling shutter under strong AC-powered ambient light sources.
This type of camera scans a target line-by-line within a frame, so time differences exist between the lines, and a change
of illumination during the scan corrupts the captured image. This phenomenon is called the flicker artifact. The
non-content area of the captured image is used to estimate a flicker signal that is the key to compensating for the
artifact. The average signal of the non-content area, taken along the scan direction, has local extrema where the flicker
peaks. The locations of these extrema provide very useful information for estimating the distribution of pixel intensities
that would be expected if the flicker artifact did not exist. The flicker-reduced images compensated by our approach
clearly demonstrate the reduced flicker artifact, based on visual observation.
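The core idea — estimate a per-row flicker profile from a content-free area and divide it out — can be sketched as follows. Treating the leftmost columns as the non-content area is an assumption made for this illustration; the paper locates the non-content area of the actual capture.

```python
import numpy as np

def remove_flicker(image, margin=16):
    """Estimate a per-row flicker profile from a region assumed to be
    content-free (here, the leftmost `margin` columns) and divide it
    out.  The profile is the row-wise mean of the margin, normalised
    so overall brightness is preserved."""
    image = np.asarray(image, dtype=float)
    profile = image[:, :margin].mean(axis=1)
    profile = profile / profile.mean()
    return image / profile[:, None]
```

On a synthetic capture with a sinusoidal row gain, the corrected row means are essentially flat.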
Applications
Binary image compression using conditional entropy-based dictionary design and indexing
The JBIG2 standard is widely used for binary document image compression primarily because it achieves much
higher compression ratios than conventional facsimile encoding standards, such as T.4, T.6, and T.82 (JBIG1).
A typical JBIG2 encoder works by first separating the document into connected components, or symbols. Next
it creates a dictionary by encoding a subset of symbols from the image, and finally it encodes all the remaining
symbols using the dictionary entries as a reference.
In this paper, we propose a novel method for measuring the distance between symbols based on a conditional-entropy
estimation (CEE) distance measure. The CEE distance measure is used both to index entries of the
dictionary and to construct the dictionary. The advantage of the CEE distance measure, as compared to conventional
measures of symbol similarity, is that the CEE provides a much more accurate estimate of the number of
bits required to encode a symbol. In experiments on a variety of documents, we demonstrate that the incorporation
of the CEE distance measure results in approximately a 14% reduction in the overall bitrate of the JBIG2
encoded bitstream as compared to the best conventional dissimilarity measures.
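A toy version of the distance idea can be written down directly: estimate the entropy of a symbol's pixels conditioned on the co-located pixels of a candidate dictionary entry. Using only the single co-located pixel as context is a simplification made here; the paper's measure uses a richer context.

```python
import numpy as np

def cee_distance(symbol, ref):
    """Empirical conditional entropy (bits/pixel) of `symbol` given the
    co-located pixels of `ref`, for equal-size binary bitmaps.  A low
    value means `symbol` would be cheap to encode with `ref` as the
    dictionary entry."""
    s = np.asarray(symbol).ravel().astype(int)
    r = np.asarray(ref).ravel().astype(int)
    n = s.size
    h = 0.0
    for c in (0, 1):                 # condition on the reference pixel value
        mask = r == c
        m = int(mask.sum())
        if m == 0:
            continue
        p1 = s[mask].mean()          # P(symbol pixel = 1 | ref pixel = c)
        for p in (p1, 1.0 - p1):
            if p > 0:
                h -= (m / n) * p * np.log2(p)
    return h
```

An identical pair has distance zero; an unrelated random pair costs about one bit per pixel.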
Segmentation for better rendering of mixed-content pages
We describe a segmentation-based object map correction algorithm, which can be integrated in a new imaging
pipeline for laser electrophotographic (EP) printers. This new imaging pipeline incorporates the idea of
object-oriented halftoning, which applies different halftone screens to different regions of the page, to improve the
overall print quality. In particular, smooth areas are halftoned with a low-frequency screen to provide more
stable printing; whereas detail areas are halftoned with a high-frequency screen, since this will better reproduce
the object detail. In this case, the object detail also serves to mask any print defects that arise from the use of
a high-frequency screen. These regions are defined by the initial object map, which is translated from the page
description language (PDL). However, the object-type information obtained from the PDL may be incorrect.
Some smooth areas may be labeled as raster, causing them to be halftoned with a high-frequency screen, rather
than being labeled as vector, which would result in their being rendered with a low-frequency screen. To
correct the misclassification, we propose an object map correction algorithm that combines information from the
incorrect object map with information obtained by segmentation of the continuous-tone RGB rasterized page
image. Finally, the rendered image can be halftoned by the object-oriented halftoning approach, based on the
corrected object map. Preliminary experimental results indicate the benefits of our algorithm combined with
the new imaging pipeline, in terms of correction of misclassification errors.
YACCD2: yet another color constancy database updated
In this paper we present an upgraded version of an image database (IDB) presented here in 2003 to test color constancy
and other kinds of visual and image processing algorithms. Major technological improvements have been made in the
last ten years; however, the motivations for this upgrade are not only technological. We decided to address other
features related to vision, such as dynamic range and stereo vision. Moreover, to address computer-vision problems
(e.g., illuminant or reflectance estimation), we have made available a set of data regarding the objects, backgrounds,
and illuminants used. Here we present the characteristics of the images in the IDB, the choices made, and the
acquisition setup.
An efficient flicker noise reduction method for single images
Pan Pan,
Yuan He,
Shufu Xie,
et al.
In this paper, we present a novel, efficient flicker noise reduction method for single images scanned by overhead line
sensors. The flicker noise here is perceived as horizontal bands that are not necessarily periodic. We view the flicker
pattern as noise in the row cumulative histogram along the vertical direction, and propose two novel
cumulative-histogram filtering approaches to smooth the artifact: using different Gaussian variances and padding the
image. The proposed algorithm is then used to reduce the flicker noise in our scanned color images, and its
computational complexity is analyzed. The algorithm operates on single images, does not rely on the frequency of the
alternating current, and does not require that the horizontal bands be periodic. Experimental results show the superior
performance of the proposed method in comparison to other existing methods.
Reflectance
Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma
A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant.
The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a
method using color gamuts. The former method, which is one we had previously proposed, improved on the original
method that hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method
estimates scene illuminant colors by calculating the average of all the image pixel values, its estimations are incorrect
when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis
of an opponent-color property, which is that the average color of opponent colors is achromatic, instead of using all
colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors
are unevenly distributed in local areas in the color space. The approach we propose in this paper combines our previous
method and one using high chroma and low chroma gamuts, which makes it possible to find colors that satisfy the gray
world assumption. High chroma gamuts are used for adding appropriate colors to the original image and low chroma
gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images
show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately
estimated, with a smaller average estimation error than that of the conventional method.
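The original gray-world estimate that these methods build on is a one-liner: hypothesise that the scene average is achromatic, so the per-channel means give the illuminant colour up to scale.

```python
import numpy as np

def gray_world_illuminant(image):
    """Classic gray-world illuminant estimate for an H x W x 3 image:
    the per-channel means, normalised by the largest channel.  This is
    the baseline whose failure under dominant colors motivates the
    paper's gamut-based refinements."""
    est = np.asarray(image, dtype=float).reshape(-1, 3).mean(axis=0)
    return est / est.max()
```

For a random reflectance scene lit by a reddish illuminant, the estimate recovers the illuminant's channel ratios.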
Estimation of reflectance based on properties of selective spectrum with adaptive Wiener estimation
To accurately represent the colors in a real scene, a multi-channel camera system is necessary. One of the applications of
the data acquired with the multi-channel camera system is the spectral reflectance estimation. One of the most widely
used methods to estimate the spectral reflectance is the Wiener estimation. While simple and accurate in controlled
conditions, the Wiener estimation does not perform as well with real scene data. Therefore, the adaptive Wiener
estimation has been proposed to improve the performance of the Wiener estimation. The adaptive Wiener estimation
uses a similar training set that was adaptively constructed from the standard training set according to the camera
responses. In this paper, a new way of constructing such a similar training set is proposed, using the correlation between
each spectral reflectance in the standard training set and the first approximation of the spectral reflectance obtained by
the Wiener estimation. The experimental results showed that the proposed method is more accurate than the
conventional Wiener estimation.
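The classic (non-adaptive) Wiener estimation that serves as the baseline can be sketched as follows, writing the estimate as r̂ = R_s Sᵀ (S R_s Sᵀ)⁻¹ c with system matrix S and training autocorrelation R_s. The noiseless form is an assumption made here; in practice a noise covariance term is added inside the inverse.

```python
import numpy as np

def wiener_reflectance(camera_resp, S, training_reflectances):
    """Noiseless Wiener estimation of spectral reflectance from camera
    responses c = S r.  S is (channels x wavelengths); the training set
    supplies the reflectance autocorrelation R_s."""
    T = np.asarray(training_reflectances, dtype=float)
    R_s = T.T @ T / len(T)                       # training autocorrelation
    W = R_s @ S.T @ np.linalg.inv(S @ R_s @ S.T) # Wiener matrix
    return W @ np.asarray(camera_resp, dtype=float)
```

When S is square and invertible, the noiseless Wiener matrix reduces to S⁻¹ and recovery is exact; the underdetermined case (fewer channels than wavelengths) returns the prior-weighted estimate.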
Metal-dielectric object classification by combining polarization property and surface spectral reflectance
We propose a method for automatically classifying multiple objects in a natural scene into metal or dielectric. We utilize
polarization property in order to classify the objects into metal and dielectric, and surface-spectral reflectance in order to
segment the scene image into different object surface regions. An imaging system is developed using a liquid crystal
tunable filter for capturing both polarization and spectral images simultaneously. Our classification algorithm consists of
three stages: (1) highlight detection based on a luminance threshold, (2) material classification based on the spatial
distribution of the degree of polarization in the highlight area, and (3) image segmentation based on an illuminant-invariant
representation of the spectral reflectance. The feasibility of the proposed method is examined in detail in experiments
using real-world objects.
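The polarization quantity behind stage (2) is the standard degree of linear polarization, computed from the maximum and minimum intensities observed through a rotating polarizer; the paper's classifier looks at how this quantity is spatially distributed over a detected highlight.

```python
def degree_of_polarization(i_max, i_min):
    """Degree of linear polarization from the maximum and minimum
    intensities seen through a rotating polarizer: 0 for unpolarized
    light, 1 for fully linearly polarized light."""
    return (i_max - i_min) / (i_max + i_min)
```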
Watching Colours
An experiment on the color rendering of different light sources
The color rendering index (CRI) of a light source attempts to measure how much the color appearance of objects is
preserved when they are illuminated by the given light source. This problem is of great importance for various industrial
and scientific fields, such as lighting architecture, design, ergonomics, etc. Usually a light source is specified through the
Correlated Color Temperature or CCT. However two (or more) light sources with the same CCT but different spectral
power distribution can exist. Therefore color samples viewed under two light sources with equal CCTs can appear
different. Hence the need for a method to assess the quality of a given illuminant in relation to color. Recently, CRI has
received renewed interest because of new LED-based lighting systems, which usually have a rather low color rendering
index yet good preservation of color appearance and a pleasant visual appearance (visual appeal). Various attempts to
develop a new color rendering index have been made so far, but research toward a better one continues. This article
describes an experiment performed with human observers concerning the preservation of color appearance under
several light sources, comparing the results with a range of available color rendering indices.
Color universal design: analysis of color category dependency on color vision type (4)
This report is a follow-up to SPIE-IS&T/Vol. 7528 752805-1-8, SPIE-IS&T/Vol. 7866 78660J-1-8, and SPIE-IS&T/Vol. 8292 829206-1-8.
Colors are used to communicate information in various situations, not just in design and apparel. However, visual information conveyed only by color may be perceived differently by individuals with different color vision types. Human color vision is non-uniform, and the variation in most cases is genetically linked to L-cones and M-cones. Therefore, color appearance is not the same for all color vision types. Color Universal Design is an easy-to-understand system created to convey color-coded information accurately to most people, taking color vision types into consideration. In the present research, we studied the trichromat (C-type), protan (P-type), and deutan (D-type) forms of color vision.
We report here the results of two experiments. The first was a validation of the confusion colors using a color chart in CIELAB uniform color space. We made an experimental color chart (622 color cells in total, with a color difference of 2.5 between cells) for this experiment; the subjects had P-type or D-type color vision. From the data we were able to determine “the limits with high probability of confusion” and “the limits with possible confusion” around various basing points. The direction of the former matched the theoretical confusion locus, but the range did not extend across the entire a* range. The latter formed a belt-like zone above and below the theoretical confusion locus. In this way we re-analyzed part of the theoretical confusion locus suggested by Pitt-Judd. The second was an experiment in color classification with subjects having C-type, P-type, or D-type color vision. The color caps of the 100 Hue Test were classified into seven categories for each color vision type. The common and differing points of color sensation were compared for each color vision type, and we were able to find a group of color caps that people with C-, P-, and D-type vision could all recognize as distinguishable color categories. The result could be used as the basis of a color scheme for future Color Universal Design.
Analysis of brain activity and response to colour stimuli during learning tasks: an EEG study
The research project intends to demonstrate how EEG detection through a BCI device can improve the analysis and
interpretation of colour-driven cognitive processes through the combined approach of cognitive science and information
technology methods. To this end, we first designed an experiment comparing the results of the traditional (qualitative
and quantitative) cognitive analysis approach with the EEG signal analysis of the evoked potentials. In our case, the
sensory stimulus is represented by colours, while the cognitive task consists of remembering words appearing on the
screen, with different combinations of foreground (word) and background colours.
In this work we analysed data collected from a sample of students involved in a learning process during which they
received visual stimuli based on colour variation. The stimuli concerned both the background of the text to be learned
and the colour of the characters. The experiment indicated some interesting results concerning the use of primary (RGB)
and complementary (CMY) colours.
Prototypical colors of skin, green plant, and blue sky
Colors of skin, green plant, and blue sky of digital photographic images were studied for modeling and detection of these
three important memory color regions. The color modeling of these three regions in CIELAB and CAM02-UCS was
presented, and the properties of these three color groups were investigated.
Halftoning
Direct binary search (DBS) algorithm with constraints
In this paper, we describe adding constraints to the Direct Binary Search (DBS) algorithm. An example of a useful
constraint, illustrated in this paper, is having only one dot per column and row. DBS with such constraints requires more
than two toggles during each trial operation, whereas implementations of the DBS algorithm traditionally limit
operations to either one toggle or one swap per trial. The example case in this paper produces a wrap-around pattern
with uniformly distributed ON pixels, which has a pleasing appearance with precisely one ON pixel per column and row.
The algorithm starts with an initial continuous-tone image and an initial pattern having only one ON pixel per column
and row. The autocorrelation function of the Human Visual System (HVS) model is determined, along with an initial
perceived error. Multiple-operation pixel error processing during each iteration is used to enforce the
one-ON-pixel-per-column-and-row constraint, which serves as the example in this paper. Further modification of the
DBS algorithm for other constraints is possible, based on the details given in the paper. A mathematical framework to
extend the algorithm to the more general case of Direct Multi-bit Search (DMS) is presented.
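A pattern with one ON pixel per row and column is a permutation matrix, and one constraint-preserving trial move can be sketched as exchanging the ON columns of two rows (four toggles at once). The HVS-weighted accept/reject test of full DBS is omitted here; this only illustrates the multi-toggle move the constraint requires.

```python
import numpy as np

def constrained_swap(pattern, rng):
    """One candidate DBS trial move under the one-ON-pixel-per-row-and-
    column constraint: pick two rows and exchange the columns of their
    ON pixels.  The pattern stays a permutation matrix; whether the
    move is kept would depend on the HVS-weighted perceived error."""
    n = pattern.shape[0]
    r1, r2 = rng.choice(n, size=2, replace=False)
    c1, c2 = np.argmax(pattern[r1]), np.argmax(pattern[r2])
    pattern[r1, c1], pattern[r1, c2] = 0, 1
    pattern[r2, c2], pattern[r2, c1] = 0, 1
    return pattern
```

Repeated moves keep every row sum and column sum at exactly one.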
Improved spectral vector error diffusion by dot gain compensation
Spectral Vector Error Diffusion, sVED, is an interesting approach to achieve spectral color reproduction, i.e. reproducing
the spectral reflectance of an original, creating a reproduction that will match under any illumination. For each pixel in
the spectral image, the colorant combination producing the spectrum closest to the target spectrum is selected, and the
spectral error is diffused to surrounding pixels using an error distribution filter. However, since the colorant separation
and halftoning are performed in a single step in sVED, compensation for dot gain cannot be made for each color channel
independently, as it can in a conventional workflow where colorant separation and halftoning are performed sequentially. In
this study, we modify the sVED routine to compensate for the dot gain, applying the Yule-Nielsen n-factor to modify the
target spectra, i.e. performing the computations in (1/n)-space. A global n-factor, optimal for each print resolution,
reduces the spectral reproduction errors by approximately a factor of 4, while an n-factor that is individually optimized
for each target spectrum reduces the spectral reproduction error to 7% of that for the unmodified prints. However, the
improvement when using global n-values is still not sufficient for the method to be of real practical use, and
individually optimizing the n-values for each target is not feasible in a real workflow. The results illustrate the necessity
of properly accounting for dot gain in the printing process, and show that further development is needed to make
Spectral Vector Error Diffusion a realistic alternative for spectral color reproduction.
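The Yule-Nielsen transform the compensation relies on is simply a power law on reflectance: halftone averaging behaves more linearly in (1/n)-space, and n = 1 recovers plain reflectance.

```python
import numpy as np

def to_yn_space(reflectance, n):
    """Map reflectance into Yule-Nielsen (1/n)-space; the abstract's
    dot-gain compensation applies this transform to the target spectra
    before running spectral vector error diffusion."""
    return np.power(reflectance, 1.0 / n)

def from_yn_space(value, n):
    """Inverse transform from (1/n)-space back to reflectance."""
    return np.power(value, n)
```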
Extending color primary set in spectral vector error diffusion by multilevel halftoning
Ever since its origin in the late 19th century, color reproduction technology has relied on a trichromatic approach. This
has been very successful and fundamental to the development of color reproduction devices. Trichromatic color
reproduction is sufficient to approximate the range of colors perceived by the human visual system. However,
trichromatic systems can only match colors when the viewing illumination for the reproduction matches that of the
original. Furthermore, the advancement of digital printing technology has introduced printing systems with additional
color channels. These additional color channels are used to extend the tonal range in light and dark regions and to
increase the color gamut. In an alternative approach, the additional color channels can also be used to reproduce the
spectral information of the original color. A reproduced spectral match will always correspond to the original,
independent of the lighting situation. On the other hand, spectral color reproduction also introduces more complex color
processing, through spectral color transfer functions and spectral gamut mapping algorithms. In that perspective,
spectral vector error diffusion (sVED) looks like a tempting approach, with a simple workflow in which the inverse
color transfer function and halftoning are performed simultaneously in a single operation. Essential for the sVED
method are the available color primaries, created by mixing process colors. Increasing the number, and optimizing the
spectral characteristics, of the color primaries is expected to significantly improve the color accuracy of the spectral
reproduction. In this study, sVED in combination with multilevel halftoning has been applied to a ten-channel inkjet
system. The print resolution has been reduced, and the underlying high physical resolution of the printer has been used
to mix additional primaries. With ten ink channels and halftone cells built up from 2x2 micro-dots, where each micro-dot
can be a combination of all ten inks, the number of possible ink combinations becomes huge. Therefore, this initial
study has focused on adding lighter colors to the intrinsic primary set. Results from this study show that this approach
increases the color reproduction accuracy significantly. The RMS spectral difference to the target color for multilevel
halftoning is less than 1/6 of the difference achieved by binary halftoning.
Reducing auto moiré in discrete line juxtaposed halftoning
Vahid Babaei,
Roger D. Hersch
Discrete line juxtaposed halftoning creates color halftones with discrete colorant lines of freely selectable rational
thicknesses laid out side by side. Screen elements are made of parallelogram screen tiles incorporating the discrete
colorant lines. The repetition of discrete colorant lines from one screen element to the next may create auto moiré
artifacts. By decomposing each supertile into screen element tiles having slightly different rational thicknesses, we
ensure that successive discrete colorant lines have different phases with respect to the underlying pixel grid. The
resulting repetition vector differs from one discrete line to the next discrete line of the same colorant, which strongly
reduces the original auto moiré artifacts.
Printing
Optimizing CMYK mapping for high speed digital inkjet webpress
CMYK-to-CMYK mapping that preserves the black channel is a method for addressing a limitation of standard ICC
color management, which lacks the capability to preserve the K channel when printing CMYK content. While the
method has been used successfully in digital commercial printing, limitations and areas for improvement remain. To
address these problems in generating CMYK re-rendering tables, an alternative method is developed. The K usage and
total ink usage are optimized in a color separation step. Instead of preserving the K channel globally, the method
preserves K-only gray content and maps other colors by optimizing print quality and ink usage. Experiments verify that
the method significantly improves print quality.
Estimating toner usage with laser electrophotographic printers
Accurate estimation of toner usage is an area of ongoing importance for laser electrophotographic (EP) printers. We
propose a new two-stage approach in which we first predict, on a pixel-by-pixel basis, the absorptance from printed and
scanned pages. We then form a weighted sum of these pixel values to predict the overall toner usage on the printed page.
The weights are chosen by least-squares regression against toner usage measured on a set of printed test pages. Our
two-stage predictor significantly outperforms existing methods based on a simple pixel-counting strategy, in terms of
both the accuracy and the robustness of the predictions.
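The second stage — choosing the weights by least-squares regression against measured usage — can be sketched as below. Representing each page by a small feature vector (e.g. sums of predicted pixel absorptances) is an assumption for this illustration; the paper forms the weighted sum over per-pixel predictions.

```python
import numpy as np

def fit_toner_weights(page_features, measured_usage):
    """Fit the weights of a linear toner-usage predictor by
    least-squares regression: page_features is (pages x features),
    measured_usage is the toner consumed per training page."""
    A = np.asarray(page_features, dtype=float)
    y = np.asarray(measured_usage, dtype=float)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_usage(features, w):
    """Predict usage for new pages as the weighted sum of features."""
    return np.asarray(features, dtype=float) @ w
```

On synthetic data with an exactly linear relation, the fitted weights recover the generating ones.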
Perceived acceptability of colour matching for changing substrate white point
Production and proofing substrates often differ in their white points. Substrate white points frequently differ
between reference and sample, for example between proof and print, or between a target paper colour and an
actual production paper. It is possible to generate characterization data for the printing process on the
production side to achieve an accurate colorimetric match, but in many cases it is not practical to generate this
data empirically by printing samples and measuring them [1]. This approach, moreover, does not account for any
degree of adaptation between the differing substrate white points, whereas its acceptability may depend on
accounting for the change in paper colour such that the appearance of the original is preserved when printed on
the production substrate.
The development of vector based 2.5D print methods for a painting machine
Through recent trends in the application of digitally printed decorative finishes to products, CAD, 3D additive layer
manufacturing, and research in material perception [1, 2], there is a growing interest in the accurate rendering of
materials and tangible displays. Although current advances in colour management and inkjet printing have meant that
users can take for granted high-quality colour and resolution in their printed images, digital methods for transferring a
photographic coloured image from screen to paper are constrained by pixel count, file size, colorimetric conversion
between colour spaces, and the gamut limits of input and output devices. This paper considers new approaches to
applying alternative colour palettes using a vector-based approach through the application of paint mixtures, towards
what could be described as a 2.5D printing method. The objective is not to apply an image to a textured surface, but to
make texture and colour integral to the mark that, like a brush, delineates the contours in the image. The paper describes
the difference between the way inks and paints are mixed and applied. When transcribing the fluid appearance of a
brush stroke, there is a difference between a halftone-printed mark and a painted mark. The issue of surface quality is
significant to subjective qualities when studying the appearance of ink or paint on paper. The paper provides examples
of a range of vector marks that are then transcribed into brush strokes by the painting machine.
Multispectral
Unsupervised correction of relative longitudinal aberrations for multispectral imaging using a multiresolution approach
Julie Klein
Longitudinal aberrations appear in multispectral cameras featuring a monochrome sensor with several optical
filters in front of it. Due to the slightly different optical properties of the filters, the focal lengths differ and the images
cannot be sharp in all color channels simultaneously. We seek an unsupervised correction of these aberrations, relative to
a given reference color channel. "Unsupervised" means here that no calibration of the system is needed. We use a
multiresolution approach that takes advantage of the high contrast present in the reference channel and transfers this
information to the other, more blurred channels. The results of this correction are evaluated using the sharpness of the
corrected image with respect to the original blurred image and using the color accuracy: an algorithm that corrupted the
spectral information of multispectral images would not be helpful. Moreover, using the original image and the one
corrected with the algorithm, we can calculate the point spread function of the longitudinal aberrations. We then compare
it to the point spread function obtained with another method, which is based on the capture of a noise chart and thus
requires calibration.
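The abstract does not specify the sharpness measure used for the evaluation; a minimal sketch of one common gradient-based score (all names and the blur kernel are illustrative, not taken from the paper) might look like:

```python
import numpy as np

def gradient_sharpness(channel):
    """Mean gradient magnitude of a single color channel;
    sharper channels score higher."""
    gy, gx = np.gradient(channel.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

# A blurred copy of a channel scores lower than the in-focus
# reference, which is what such an evaluation exploits.
rng = np.random.default_rng(0)
reference = rng.random((32, 32))
kernel = np.array([0.25, 0.5, 0.25])  # crude 3-tap horizontal blur
blurred = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, reference)
```

A corrected channel would be expected to move its score from that of the blurred input toward that of the reference channel.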
Acquisition of multi-spectral flash image using optimization method via weight map
Show abstract
To acquire images in low-light environments, it is usually necessary to adopt long exposure times or to resort to
flashes. Flashes, however, often induce color distortion, cause the red-eye effect and can be disturbing to the subjects. On
the other hand, long-exposure shots are susceptible to subject motion, as well as to motion blur caused by camera shake
when performed with a hand-held camera. A recently introduced technique to overcome the limitations of traditional
low-light photography is the use of the multi-spectral flash. Multi-spectral flash images are a combination of UV/IR and
visible spectrum information. The general idea is to retrieve the details from the UV/IR spectrum and the color from the
visible spectrum. Multi-spectral flash images, however, are themselves subject to color distortion and noise. In this work,
a method of computing multi-spectral flash images that reduces noise and improves color accuracy is
presented. The proposed method extends a previously published optimization method by introducing a weight map used
to discriminate the uniform regions from the detail regions. The optimization target function takes into account the
output likelihood with respect to the ambient light image, the sparsity of image gradients, and the spectral constraints for
the IR-red and UV-blue channels. The performance of the proposed method was objectively evaluated using
long-exposure shots as references.
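The abstract does not spell out how the weight map separating uniform regions from detail regions is built; one plausible sketch based on normalized local variance (purely illustrative, not the paper's exact construction) is:

```python
import numpy as np

def detail_weight_map(img, win=3, eps=1e-9):
    """Normalized local variance: close to 1 in detail regions,
    close to 0 in uniform regions (illustrative stand-in)."""
    pad = win // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    h, w = img.shape
    var = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            var[i, j] = p[i:i + win, j:j + win].var()
    return var / (var.max() + eps)
```

Such a map could blend the optimization terms spatially: the gradient-sparsity prior dominates in uniform regions (suppressing noise), while the UV/IR detail constraints dominate near edges.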
Display and Materials
Adaptive local backlight dimming algorithm based on local histogram and image characteristics
Show abstract
Liquid Crystal Displays (LCDs) with Light Emitting Diode (LED) backlights are a very popular display technology, used
for instance in television sets, monitors and mobile phones. This paper presents a new backlight dimming algorithm that
exploits the characteristics of the target image, such as the local histograms and the average pixel intensity of each
backlight segment, to reduce the power consumption of the backlight and enhance image quality. The local histogram of
the pixels within each backlight segment is calculated and, based on it, an adaptive quantile value is extracted.
A classification into three classes based on the average luminance value is performed and, depending on the image
luminance class, the extracted information on the local histogram determines the corresponding backlight value. The
proposed method has been applied to two modeled screens: one with a high-resolution direct-lit backlight, and the other
with 16 edge-lit backlight segments placed in two columns and eight rows. We have compared the proposed
algorithm against several known backlight dimming algorithms in simulations, and the results show that the proposed
algorithm provides a better trade-off between power consumption and image quality preservation than other
state-of-the-art feature-based backlight algorithms.
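As a rough sketch of the class-dependent quantile idea: the segment is classified by its average luminance, and a class-specific quantile of its pixel values sets the backlight level. All thresholds and quantile values below are illustrative guesses, not the paper's:

```python
import numpy as np

def segment_backlight(segment, dark_q=0.98, mid_q=0.92, bright_q=0.85):
    """Backlight duty cycle in [0, 1] for one segment of 8-bit luminance pixels.

    Thresholds (64, 160) and quantiles are illustrative stand-ins,
    not values from the paper.
    """
    avg = segment.mean()
    if avg < 64:          # dark class: high quantile to avoid clipping highlights
        q = dark_q
    elif avg < 160:       # mid class
        q = mid_q
    else:                 # bright class: dim slightly more aggressively
        q = bright_q
    return float(np.quantile(segment, q)) / 255.0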
Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content
Show abstract
High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and
beyond. Consequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless
communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely
accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability
of small text and graphics content are heavily compromised when color contrast is high in the chrominance channels. On
the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks
optimal
adaptation to visually significant image content. Therefore, we present the idea of detecting synthetic small text fonts
and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder
side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format
across all three color channels combined with foreground/background color vectors of a local color map promises to
overcome weaknesses in compression schemes that process luminance and chrominance channels separately. The
residual error of the predictive model can be minimized more easily since the decoder is an integral part of the encoder.
A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, discusses
current limitations with regard to high quality color rendering, and identifies remaining visual artifacts.
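The loss the authors target is easy to reproduce: 4:2:0 chroma subsampling averages one-pixel-wide chroma strokes away entirely. A minimal illustration with naive 2x2 averaging (not any specific codec's filter):

```python
import numpy as np

def chroma_420_roundtrip(cb):
    """2x2-average downsample then nearest-neighbour upsample of one chroma plane."""
    h, w = cb.shape
    small = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# One-pixel-wide colored strokes, as in small high-contrast text:
cb = np.zeros((8, 8))
cb[:, ::2] = 255.0           # alternating chroma columns
error = np.abs(chroma_420_roundtrip(cb) - cb).max()
```

The round-tripped plane collapses to a constant, i.e. the chroma contrast of the strokes is destroyed completely, which is why high-contrast small text needs a different treatment than motion video.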
Content-dependent contrast enhancement for displays based on cumulative distribution function
Show abstract
Perceived contrast is one of the most important attributes affecting image quality on displays. Three-dimensional
digital TVs need contrast enhancement techniques to compensate for the reduction in luminance levels. Mobile
displays also demand efficient contrast enhancement techniques to improve visibility in outdoor viewing
environments and to reduce power consumption. This paper presents a new content-dependent contrast enhancement
technique to improve perceived image quality on displays. It is designed for real-time implementation. Pairs of reference
cumulative distribution functions (CDFs) of luminance levels and corresponding tone mapping functions (TMFs) are
generated in an off-line procedure. At run time, an appropriate TMF is selected by comparing the CDF of the
input image with the reference CDFs. The selected TMF is then applied to the input image.
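A minimal sketch of the run-time selection step, with illustrative stand-in reference CDFs and TMFs (the paper's off-line pairs are not given in the abstract):

```python
import numpy as np

def luminance_cdf(img, bins=256):
    """Normalized cumulative histogram of an 8-bit luminance image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    cdf = np.cumsum(hist).astype(float)
    return cdf / cdf[-1]

def select_tmf(img, ref_cdfs, tmfs):
    """Pick the TMF whose reference CDF is closest (L1 distance) to the image CDF."""
    cdf = luminance_cdf(img)
    dists = [np.abs(cdf - ref).sum() for ref in ref_cdfs]
    return tmfs[int(np.argmin(dists))]

# Illustrative off-line pairs: a shadow-boosting TMF for dark content
# and an identity TMF for bright content (both are stand-ins).
levels = np.arange(256)
tmf_boost = np.clip(levels * 1.5, 0, 255).astype(np.uint8)
tmf_identity = levels.astype(np.uint8)
ref_dark = luminance_cdf(np.full((64, 64), 30))
ref_bright = luminance_cdf(np.full((64, 64), 220))
```

The selected TMF is applied as a lookup table, e.g. `out = select_tmf(img, [ref_dark, ref_bright], [tmf_boost, tmf_identity])[img]`, which keeps the per-frame cost low enough for real-time use.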