- Front Matter: Volume 7866
- Display I
- Display II
- High Dynamic Range Imaging
- Applications
- Vision
- Image Processing
- The Dark Side of Color
- Color Management and Control
- Color Correction and Color Spaces
- Printing I
- Printing II
- Halftoning I
- Halftoning II
- Interactive Paper Session
Front Matter: Volume 7866
This PDF file contains the front matter associated with SPIE
Proceedings Volume 7866, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Display I
Adaptive color visualization for dichromats using a customized hierarchical palette
We propose a user-centric methodology for displaying digital color documents that optimizes color representations
in an observer-specific and adaptive fashion. We apply our framework to situations involving viewers with
common dichromatic color vision deficiencies, who face challenges in perceiving information presented in color
images and graphics designed for color-normal individuals. For qualitative data visualization,
we present a computationally efficient solution that combines a customized observer-specific hierarchical palette
with display-time selection of the number of colors to generate renderings with colors that are easily discriminated
by the intended viewer. The palette design is accomplished via a clustering algorithm that arranges colors
in a hierarchical tree based on their perceived differences for the intended viewer. A desired number of highly
discriminable colors is readily obtained from the hierarchical palette via a simple truncation. As an illustration,
we demonstrate the application of the methodology to Ishihara-style images.
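The truncation idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: plain Euclidean distance stands in for the observer-specific perceived difference, and average-linkage agglomerative clustering is one plausible choice for building the hierarchy.

```python
import math

def perceived_diff(a, b):
    # Stand-in for an observer-specific color difference; a real system
    # would compute distances in a space adapted to the intended viewer.
    return math.dist(a, b)

def hierarchical_palette(colors):
    """Average-linkage agglomerative clustering; returns the merge history."""
    clusters = [[c] for c in colors]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(perceived_diff(a, b)
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merges.append((clusters[i], clusters[j]))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges

def truncate(merges, k):
    """Undo the most recent merges until k clusters remain, then take one
    representative color per cluster."""
    a, b = merges[-1]
    clusters = [a + b]          # root of the tree: everything merged
    for a, b in reversed(merges):
        target = a + b
        if target in clusters:
            clusters.remove(target)
            clusters.extend([a, b])
        if len(clusters) >= k:
            break
    return [c[0] for c in clusters]
```

Because clusters merged last are the most mutually distant, undoing the last merges yields k groups whose representatives remain well separated for the intended viewer.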
R/G/B color crosstalk characterization and calibration for LCD displays
LCD displays exhibit significant color crosstalk between their red, green, and blue channels, more or less depending on
the type of LCD technology. If this problem is not addressed properly, it leads to (a) significant color errors in the
images rendered on LCD displays and (b) a significant gray-tracking problem. The traditional method for addressing this
problem has been to use a 3x3 color correction matrix in the display processing pipe. Experimental data clearly show
that this linear model for color correction is not sufficient to address the color crosstalk problem in LCD displays. Herein, it
is proposed to use higher-order polynomials for color correction in the display processing pipe. This paper presents
detailed experimental results and a comparative analysis of polynomial models of different orders for color
correction.
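A polynomial correction of the kind argued for above can be sketched as follows. This is a generic second-order illustration; the paper's actual term set, polynomial order, and fitted coefficients are not specified here.

```python
def poly_terms(r, g, b):
    """Second-order polynomial expansion of an RGB triple."""
    return [r, g, b, r * r, g * g, b * b, r * g, r * b, g * b, 1.0]

def correct(rgb, coeffs):
    """Per-channel polynomial color correction: each output channel is a
    weighted sum of the polynomial terms of the input channels."""
    t = poly_terms(*rgb)
    return tuple(sum(c * x for c, x in zip(row, t)) for row in coeffs)

# The traditional 3x3 matrix is the special case where only the three
# linear coefficients of each row are nonzero:
linear_identity = [
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
]
```

The cross terms (r·g, r·b, g·b) are what let the model capture crosstalk that a linear matrix cannot; the coefficients would be fitted by least squares to measured display data.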
Color gamut boundary optimization of wide gamut display devices
High-end monitors based on LCD technology increasingly address wide color gamut implementations featuring precise
color calibration within a variety of different color spaces such as extended sRGB or AdobeRGB. Combining a Look-Up-Table
method with linear interpolation in RGB component space using 3x3 matrix multiplication provides an optimized
means of tone curve adjustment as well as independent adjustment of the device primaries. The proposed calibration
method completes within several seconds, compared to traditional color calibration procedures easily taking several
minutes. In addition, the user can be given subjective control over color gamut boundary settings based on dynamic
gamut boundary visualization. The proposed component architecture not only provides independent control over 8 color
vertices but also enables adjustment in increments of 10^-4 of the full amplitude range. User-defined color patches can be
adjusted manually while simultaneously tracking color gamut boundaries and visualizing gamut boundary violations in
real time. All this provides a convenient approach to fine-tuning tone curves and matching particular constraints with
regard to user preferences, for example specific ambient lighting conditions, across different devices such as monitors and printers.
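The LUT-plus-matrix pipeline can be sketched as below. The sample tone curves, sampling density, and matrix values are placeholders for illustration, not the product's calibration data.

```python
def apply_lut(value, lut):
    """Piecewise-linear interpolation into a 1D tone-curve LUT.
    value is in [0, 1]; lut holds output samples at equal spacing."""
    x = value * (len(lut) - 1)
    i = min(int(x), len(lut) - 2)
    frac = x - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

def calibrate(rgb, luts, matrix):
    """Per-channel tone curves followed by a 3x3 primary-adjustment matrix."""
    r, g, b = (apply_lut(v, lut) for v, lut in zip(rgb, luts))
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in matrix)
```

Separating the two stages is what allows tone curves and primaries to be adjusted independently: editing a LUT reshapes one channel's response, while editing the matrix moves the primaries (and hence the gamut vertices) without touching the curves.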
Display II
Color correction for projected image on colored-screen based on a camera
Projectors have recently become one of the most common display devices, not only for presentations in offices and
classrooms but also for entertainment at home and in theaters. Mobile projectors extend these applications to meetings in
the field and presentations in arbitrary locations. Accordingly, projection onto a white screen is not always guaranteed,
causing color distortion. Several algorithms have been suggested to correct the projected color on lightly colored screens,
but these are limited by their reliance on measurement equipment that cannot always be carried along, and they lack
accuracy because the transform matrix is obtained from a small number of patches. In this paper, a color correction
method using a general still camera as convenient measurement equipment is proposed to match colors between white
and colored screens. A patch containing 9 ramps per channel is first projected onto the white and lightly colored screens
and captured by the camera. Next, digital values are obtained from the captured images for each ramp patch on both
screens, resulting in different values for the same patch. We then check which ramp patch on the colored screen has the
same digital value as on the white screen, repeating this procedure for all ramp patches. The difference between
corresponding ramp patches reveals the amount of color shift. A color correction matrix is then obtained by regression
on the matched values. Unlike previous methods, the use of a general still camera allows measurement regardless of
place. In addition, the two captured images of the ramp patches on the white and colored screens provide the color shift
for 9 steps per channel, enabling accurate construction of the transform matrix. The nonlinearity of the camera
characteristics is also accounted for by the regression used to construct the transform matrix. In the experimental results,
the proposed method gives better color correction than previous methods in both objective and subjective evaluation.
Modeling LCD displays with local backlight dimming for image quality assessment
Traditionally, algorithm-based (objective) image and video quality assessment methods operate on the numerical
representation of the signal and do not take the characteristics of the actual output device into account. This is a
reasonable approach when quality assessment is needed to evaluate signal distortion related directly to
digital signal processing, such as compression. However, the physical characteristics of the display device also have a
significant impact on the overall perception. In order to facilitate image quality assessment on modern liquid crystal
displays (LCD) using light emitting diode (LED) backlight with local dimming, we present the essential considerations
and guidelines for modeling the characteristics of displays with high dynamic range (HDR) and locally adjustable
backlight segments. The image representation generated by the model can be assessed using traditional
objective metrics, and the proposed approach is therefore useful for assessing the performance of different backlight
dimming algorithms in terms of resulting quality and power consumption in a simulated environment. We have
implemented the proposed model in C++ and compared the visual results produced by the model against the respective images displayed on a real display with locally controlled backlight units.
Content dependent selection of image enhancement parameters for mobile displays
Yoon-Gyoo Lee, Yoo-Jin Kang, Han-Eol Kim, et al.
Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital
multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent
image quality enhancement method covering sharpness, colorfulness, and noise reduction is presented to
improve perceived image quality on mobile displays. Human visual experiments were performed to analyze viewers'
preferences. The relationship between the objective measures and the optimal values of the image control parameters is
modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the
image control parameters are then determined from the calculated measures and the predetermined lookup tables.
Experimental results indicate that dynamic selection of image control parameters yields better image quality.
Saliency driven Black Point Compensation
We present a novel framework for automatically determining whether or not to apply black point compensation
(BPC) in image reproduction. Visually salient objects have a larger influence on determining image quality
than the number of dark pixels in an image, and thus should drive the use of BPC. We propose a simple and
efficient algorithmic implementation to determine when to apply BPC based on low-level saliency estimation.
We evaluate our algorithm with a psychophysical experiment on an image data set printed with or without BPC
on a Canon printer. We find that our algorithm is correctly able to predict the observers' preferences in all cases
when the saliency maps are unambiguous and accurate.
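A minimal sketch of such a decision rule follows, assuming a precomputed saliency map and luminance image; the thresholds are placeholders, and the paper's actual low-level saliency estimator and criterion are not reproduced here.

```python
def should_apply_bpc(luminance, saliency, dark_level=0.1, sal_thresh=0.5):
    """Decide whether to apply black point compensation (BPC): apply it
    when the visually salient pixels are predominantly dark, rather than
    when the whole image merely contains many dark pixels.
    luminance, saliency: equally sized 2D lists with values in [0, 1]."""
    salient_lum = [l for lrow, srow in zip(luminance, saliency)
                   for l, s in zip(lrow, srow) if s >= sal_thresh]
    if not salient_lum:
        return False
    dark = sum(1 for l in salient_lum if l <= dark_level)
    return dark / len(salient_lum) > 0.5
```

The design choice mirrors the abstract's claim: the same count of dark pixels triggers BPC only when those pixels fall inside the salient region that drives perceived quality.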
DIN 6164 for gamut mapping?
Perceived color is an empirical phenomenon and, to date, is only approximately understood in complex situations.
In general, color spaces or color order systems, as a mathematical characterization of such empirical observations,
address specific applications such that they may not be adequate in other contexts. In this work, we investigate
four device-independent color spaces (color order systems) with regard to their suitability for a specific gamut
mapping concept called "unsharp mapping".
High Dynamic Range Imaging
Estimation of low dynamic range images from single Bayer image using exposure look-up table for high dynamic range image
High dynamic range (HDR) imaging is a technique for representing a wider range of luminance, from the lightest to the
darkest areas of an image, than normal digital imaging techniques. It merges multiple images, called
LDR (low dynamic range) or SDR (standard dynamic range) images, captured with different exposure
steps so that together they cover the entire dynamic range of a real scene. Early techniques required a series of
acquisitions of LDR images at different exposure steps; however, multiple acquisitions introduce ghost
artifacts in the HDR image when objects move. Recent research has tried to reduce the number of LDR images by
choosing optimal exposure steps to eliminate the ghost artifacts. Nevertheless, these methods still require three or more
acquisitions, and ghosting artifacts remain. In this paper, we propose HDR imaging from a single Bayer image
with arbitrary exposures and without additional acquisitions. The method first generates new LDR images
corresponding to user-chosen average luminances, based on exposure LUTs (look-up tables). Since the
LUTs contain the relationship between uniform gray patches and their average luminances across the camera's exposure
steps, new exposure steps for any average luminance can easily be estimated by applying the average luminance
of the camera-output image and the corresponding exposure step to the LUTs. The target LDR images are then generated
from the current input image using the new exposure steps. Additionally, we compensate the color of saturated areas by
considering the different sensitivities of the RGB channels using neighboring pixels in the Bayer image. The resulting
HDR images are merged by a general method using both captured and estimated images for comparison. An observer
preference test shows that HDR images from the proposed method provide an appearance similar to results obtained
from captured images.
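The exposure look-up step reduces to a simple interpolation problem, sketched below. The LUT values are invented for illustration and stand for measured average luminances of uniform gray patches across a camera's exposure steps.

```python
def exposure_for_luminance(lut, target):
    """lut: sorted list of (exposure_step, average_luminance) pairs measured
    from uniform gray patches. Linearly interpolate the exposure step that
    would produce the target average luminance."""
    for (e0, l0), (e1, l1) in zip(lut, lut[1:]):
        if l0 <= target <= l1:
            t = (target - l0) / (l1 - l0)
            return e0 + t * (e1 - e0)
    raise ValueError("target luminance outside calibrated range")
```

Once the exposure step is known, a synthetic LDR frame at that exposure can be derived from the single captured Bayer image rather than acquired separately.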
Flicker reduction in tone mapped high dynamic range video
In order to display a high dynamic range (HDR) video on a regular low dynamic range (LDR) screen, it needs
to be tone mapped. A great number of tone mapping (TM) operators exist, most of them designed to tone
map one image at a time. Using them on each frame of an HDR video individually leads to flicker in the
resulting sequence. In our work, we analyze three tone mapping operators with respect to flicker. We propose
a criterion for the automatic detection of image flicker that analyzes the log-average pixel brightness of the tone
mapped frames. Flicker is detected if the difference between the averages of two consecutive frames is larger
than a threshold derived from Stevens' power law. Fine-tuning of the threshold is done in a subjective study.
Additionally, we propose a generic method to reduce flicker as a post-processing step, applicable to all tone
mapping operators. We begin by tone mapping a frame with the chosen operator. If the flicker detection reports
a visible variation in the frame's brightness, its brightness is adjusted. As a result, the brightness variation is
smoothed over several frames, becoming less disturbing.
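The detect-and-adjust loop can be sketched as follows. The threshold here is a placeholder constant; the paper derives it from Stevens' power law and fine-tunes it in a subjective study.

```python
import math

def log_avg(frame, eps=1e-4):
    """Log-average (geometric mean) luminance of a tone mapped frame."""
    vals = [v for row in frame for v in row]
    return math.exp(sum(math.log(eps + v) for v in vals) / len(vals))

def reduce_flicker(frames, threshold=0.05):
    """Post-process tone mapped frames: if a frame's log-average brightness
    deviates from its predecessor's by more than the threshold, scale the
    frame back so the variation is spread over several frames."""
    out = [frames[0]]
    prev = log_avg(frames[0])
    for f in frames[1:]:
        cur = log_avg(f)
        if abs(cur - prev) > threshold:
            target = prev + (threshold if cur > prev else -threshold)
            gain = target / cur
            f = [[v * gain for v in row] for row in f]
            cur = log_avg(f)  # recompute after adjustment
        out.append(f)
        prev = cur
    return out
```

Because the correction only clamps the per-frame change rather than freezing brightness, a genuine scene brightening still comes through, just gradually.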
Applications
Applying AR technology with a projector-camera system in a history museum
Kimiyoshi Miyata, Rina Shiroishi, Yuka Inoue
In this research, an AR (augmented reality) system using a projector-camera setup is proposed for a history museum
to provide a user-friendly interface and a pseudo hands-on exhibition. The proposed system is a desktop application
designed around old Japanese coins, to enhance visitors' interest and motivation to investigate them. The old coins are
too small for their features to be recognized easily, and both sides of each coin carry fine surface structures, so it is
valuable to show visitors the reverse side and an enlarged image of the coins to enhance their interest and
motivation. The image of the reverse side of a coin is displayed, using AR technology, when the user flips the
corresponding AR marker. The information augmenting the coins is projected with a data projector and placed
next to the coins. The proposed system contributes to developing an exhibition method that combines
real artifacts with AR technology, and it demonstrated the flexibility and capability to offer background information
relating to the old Japanese coins. However, improved accuracy of marker detection and tracking, and a visitor
evaluation survey, are still required to improve the effectiveness of the system.
Memory preservation made prestigious but easy
Preserving memories through storytelling, using either photo books for multiple images or high-quality products
such as one or a few images printed on canvas or mounted on acrylic to create wall decorations, is
gradually becoming more popular than classical 4x6 prints and classical silver halide posters. Digital printing via
electrophotography and inkjet is increasingly replacing classical silver halide technology as the dominant production
technology for these kinds of products. Maintaining consistent and comparable output quality is more
challenging with these technologies than with silver halide paper, for both prints and posters.
This paper describes a unique approach that combines desktop-based software, used to initiate a compelling project, with
online capabilities used to finalize and optimize that project in an online environment through a community
process. A comparison of consumer behavior between online and desktop-based solutions for generating photo books
will be presented.
A method to estimate the UV content of illumination sources
Where an illumination source includes flux in the UV region, an estimate of the total flux in the UV can be derived from
the fluorescent efficacy of a reference material and the fluorescent emission from the material. A correlate of fluorescent
efficiency was derived using a triangular weighting function in the fluorescent emission region and the flux in the
fluorescent excitation region. This was found to be constant at different flux intensities, and was used to estimate the UV
content of a test illumination. The method gave good results but requires further verification using other materials and
illumination intensities.
Vision
Knowledge exchange in the CREATE project - Colour Research for European Advanced Technology Employment
The presentation will review a four-year European funded project CREATE (Colour Research for European Advanced
Technology Employment), which was established in 2006. The group came together to promote and exchange research
and knowledge through a series of conferences and training courses to researchers working in Europe who were in the
early stages of their career. The long-term objective was to address a broad range of themes in colour and to develop
with artists, designers, technologists and scientists a cross disciplinary approach to improving colour communication and
education and to provide a forum for dialogue between different fields. Now at the end of the funding programme, this
paper will highlight some of the key milestones of the project. Moreover, having completed a supplementary workshop
event in October 2010, researchers considered new themes for the future.
Is it turquoise + fuchsia = purple or is it turquoise + fuchsia = blue?
The first step in communicating color is to name it. The second step is color semiotics. The third step is
introducing structure in the set of colors. In color education at all levels, this structure often takes the form of
formulæ, like red + green = yellow, or turquoise + red = black. In recent times, Johannes Itten's color theory
and its associated color wheel have been very influential, mostly through its impact on Bauhaus, although a
number of color order systems and circles have been introduced over the centuries.
Students get confused when they are trying to formulate the color name arithmetic using the structure of
color order systems and concepts like complementary colors and opponent colors. Suddenly turquoise + fuchsia
= purple instead of blue; purple and violet become blurred, and finally the student's head explodes under the
epistemological pressures of Itten, Albers, Goethe, Runge, Newton, da Vinci, and all the other monsters of color
structure.
In this contribution we propose a systematic presentation of structure in color, from color theories to color
naming. We start from the concept of color perception introduced by da Vinci and work ourselves through color
measurement, color formation, and color naming, to develop the basis for a robust system based on table lookup
and interpolation.
One source of confusion is that color naming has been quite loose in color theory, where for example red can be
used interchangeably with fuchsia, and blue with turquoise. Furthermore, common color terms are intermingled
with technical colorant terms, for example cyan and aqua or fuchsia and magenta. We present the evolution
of a few color terms, some of which have experienced a radical transition over the centuries, and describe an
experiment showing the robustness of crowd-sourcing for color naming.
Human vision based color edge detection
Edge detection is of great importance to image processing in various digital imaging applications such as digital
television and cameras. Extracting more accurate edge properties is therefore in strong demand for achieving
better image understanding. In vector gradient edge detection, the absolute difference of RGB values between a center
pixel and its neighborhood is usually used, although such a device-dependent color space does not account well for
human visual characteristics. The goal of this study is to test a variety of color difference equations and propose the
most effective model for the purpose of color edge detection. Three synthetic images generated using
perceptibility thresholds of the human visual system were used to objectively evaluate the 5 color difference equations
studied in this paper. A set of 6 complex color images was also used to test the 5 color difference equations
psychophysically. The equations are ΔRGB, ΔE*ab, ΔECMC, CIEDE2000 (ΔE00), and CIECAM02-UCS ΔE
(ΔECAM-UCS). No significant performance variations were observed between these 5 color difference
equations for the purpose of edge detection. However, ΔE00 and ΔECAM-UCS showed slightly higher mean opinion scores
(MOS) in detected edge information.
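A color-difference-based edge detector of this kind can be sketched with the simplest formula, CIE76 (plain Euclidean distance in CIELAB), standing in for the five equations compared in the paper; the threshold of 2.3 is a commonly cited just-noticeable-difference value, used here as an illustrative default.

```python
def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

def edge_map(lab_image, threshold=2.3):
    """Mark a pixel as an edge if the color difference to its right or
    bottom neighbour exceeds a perceptibility threshold."""
    h, w = len(lab_image), len(lab_image[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y, x + 1), (y + 1, x)):
                if ny < h and nx < w:
                    if delta_e76(lab_image[y][x], lab_image[ny][nx]) > threshold:
                        edges[y][x] = True
    return edges
```

Swapping `delta_e76` for ΔECMC, ΔE00, or ΔECAM-UCS changes only the distance function, which is exactly the comparison the study performs.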
Color universal design: analysis of color category dependency on color vision type (2)
The present study investigates the tendency of individuals to categorize colors. Humans recognize colors by categorizing
them using specific color names, such as red, blue, and yellow. When an individual with a certain type of color vision
observes an object, they categorize its color using a particular color name and assume that other people will perceive the
color in an identical manner. However, there are some variations in human color vision as a result of differences in
photoreceptors in the eye, including red and green confusion. Thus, another person with a different type of color vision
may categorize a color using a completely different name. To address this issue, we attempted to determine the
differences in the ranges of color that people with different types of color vision observe. This is an important step
towards achieving Color Universal Design, a visual communication method that is viewer-friendly irrespective of color
vision type. Herein, we report on a systematic comparison among individuals with trichromat (C-type), protan (P-type)
and deutan (D-type) color vision. This paper is a follow-up to SPIE-IS & T / Vol. 7528 752805-1.
Image Processing
Object classification by color normalization or calibration?
Model-based approaches to object recognition rely on shape and contours, while appearance-based approaches use information provided by the object's intensity or color. Color histograms are commonly used as object characteristics to solve this task. The RGB color values formed by a camera depend heavily on the image
formation process, especially the illumination involved. Mainly for this reason, color normalization algorithms are applied to estimate the impact of the position and color of the illumination and to eliminate, or at least minimize, their influence on the image appearance. Given information about the image acquisition settings, another
kind of normalization is applicable: color calibration. We compare several color normalization procedures to a
colorimetric calibration method proposed by Raymond L. Lee, Jr. By estimating the spectral reflectance of
object surfaces, one obtains a colorimetrically correct image representation. The impact of color normalization on
the recognition rates is explored and contrasted with the calibration approach. Additionally, our experiments
test several histogram distance measures for histogram-based object recognition. We vary the number of bins, the order of two processing steps, and the dimensionality of the color histograms to determine the most suitable parameter settings for object recognition.
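Histogram-based matching with one common distance measure, histogram intersection, can be sketched as follows; the bin count and quantization are illustrative choices, not the study's parameter settings.

```python
def color_histogram(pixels, bins=4):
    """Quantize RGB pixels (channel values in [0, 256)) into a 3D color
    histogram, flattened to a normalized 1D list."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    n = len(pixels)
    return [h / n for h in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms,
    0.0 for completely disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Recognition then amounts to comparing a query histogram against stored model histograms and picking the highest intersection; normalization or calibration is applied to the pixels before histogramming.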
Contrast preserving color fusion
Jan Kamenicky, Barbara Zitova
We address the problem of edge-preserving image fusion for visualization and printing of two intensity images
from different modalities. The key point is that, instead of degrading the contained information by producing an
intensity-only output, we compute a color image in a way that gives better control over edge preservation. The
most common approach in these situations is alpha blending, which in some cases leads to worse visibility of
edges present in one of the input images. The proposed method solves this issue by preserving intensity
changes in the input images equally, independently of the other modality. The main idea of the method is based on
perceptual color difference. A 2D rectangular color mapping scheme is created in such a way that the color differences
perceived by the human eye are nearly the same at all points. This mapping scheme is then applied to generate the
output color. The proposed method can help distinguish even slight differences in either input modality without risk of
losing details. Modifications of the proposed method can be derived for special cases where certain intensity intervals are
more important than others.
Evaluating the smoothness of color transformations
Multi-dimensional look-up tables (LUTs) are widely employed for color transformations due to their high accuracy and
general applicability. Building a LUT model generally involves the color measurement of a large number of samples. The
precision and uncertainty of these color measurements are carried into the LUTs and affect the
smoothness of the color transformation. This, in turn, strongly influences the quality of the reproduced color images. To
achieve high-quality color image reproduction, the color transformation must be relatively smooth. In this study,
we have investigated the characteristics that color measurement imparts to LUT transformations and their effects on
the quality of reproduced images. We propose an algorithm to quantitatively evaluate the smoothness of 3D-LUT-based color
transformations, based on analyzing the 3D LUT transformation from RGB to CIELAB and the
second derivative of the differences between adjacent points along vertical and horizontal ramps of the LUT entries. The
performance of the proposed algorithm was compared with those proposed in two recent studies on smoothness, and the
proposed method reached better performance.
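The second-derivative idea can be sketched on a single ramp of LUT outputs; a full implementation would apply this along every vertical and horizontal ramp of the 3D LUT, per CIELAB channel, and aggregate the results.

```python
def second_differences(ramp):
    """Second-order differences along a ramp of LUT outputs (e.g. the
    CIELAB values a LUT produces for a ramp of RGB entries). A smooth
    transformation has small second differences; measurement noise baked
    into the LUT shows up as large ones."""
    return [ramp[i - 1] - 2 * ramp[i] + ramp[i + 1]
            for i in range(1, len(ramp) - 1)]

def smoothness_score(ramp):
    """Mean absolute second difference: lower is smoother."""
    d2 = second_differences(ramp)
    return sum(abs(v) for v in d2) / len(d2)
```

A perfectly linear ramp scores zero regardless of slope, so the metric penalizes wiggle (non-smoothness) rather than the overall gradient of the transformation.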
High capacity image barcodes using color separability
Two-dimensional barcodes are widely used for encoding data in printed documents. In a number of applications,
the visual appearance of the barcode constitutes a fundamental restriction. In this paper, we propose high
capacity color image barcodes that encode data in an image while preserving its basic appearance. Our method
aims at high embedding rates and sacrifices image fidelity in favor of embedding robustness in regions where
these two goals conflict with each other. The method operates by utilizing cyan, magenta, and yellow printing
channels with elongated dots whose orientations are modulated in order to encode the data. At the receiver, by
using the complementary sensor channels to estimate the colorant channels, data is extracted in each individual
colorant channel. In order to recover from errors introduced in the channel, error correction coding is employed.
Our simulation and experimental results indicate that the proposed method can achieve high encoding rates
while preserving the appearance of the base image.
The Dark Side of Color
HDR imaging and color constancy: two sides of the same coin?
At first, we think that High Dynamic Range (HDR) imaging is a technique for improved recordings of scene radiances.
Many of us think that human color constancy is a variation of a camera's automatic white balance algorithm. However,
on closer inspection, glare limits the range of light we can detect in cameras and on retinas. All scene regions below
middle gray are influenced, more or less, by the glare from the bright scene segments. Instead of accurate radiance
reproduction, HDR imaging works well because it preserves the details in the scene's spatial contrast. Similarly, on
closer inspection, human color constancy depends on spatial comparisons that synthesize appearances from all the scene
segments. Can spatial image processing play a similar principal role in both HDR imaging and color constancy?
ICC profiles: are we better off without them?
Before ICC profiles, a device-independent document would encode all color in a device independent CIE space like
CIELAB. When the document was to be printed, the press person would measure a target and create a color transformation
from the CIE coordinates to device coordinates. For office and consumer color printers, the color transformation
for a standard paper would be hardwired in the printer driver or the printer firmware.
This procedure had two disadvantages: the color transformations required deep expertise to produce and were hard to
manage (the latter making them hard to share), and the image data was transformed twice (from input device to colorimetric
coordinates and then to output device coordinates), introducing discretization errors twice. The first problem was solved by the ICC
profile standard, and the second by storing the original device-dependent coordinates in the document,
together with an input ICC profile, so the color management system could first collapse the two profiles and then perform
a single color transformation.
Unfortunately, there is a wide variety in the quality of ICC profiles. Even worse, the real nightmare is that quite
frequently the incorrect ICC profiles are embedded in documents or the color management systems apply the wrong
profiles.
For consumer and office printers, the solution is to forgo ICC profiles and reduce everything to the single sRGB color
space, so only the printer profile is required. However, the sRGB quality is insufficient for print solution providers. How
can a modern print workflow solve the ICC profile nightmare?
Color Management and Control
Soft proofing of printed colours on substrates with optical brightening agents
Fluorescent whitening agents in paper, combined with differences in relative ultraviolet (UV) levels between
instrument illuminants and real-world viewing illuminants, can be a significant source of error in characterising
a printing process, and hence in the ability to accurately reproduce coloured images in print and proof. The
appearance and measurement of fluorescent substrates depend strongly on the amount of UV in the source
illuminating the sample, which varies among the available viewing booths.
The appearance of colours printed on substrates with optical brightening agents has been studied with the help of a
colour matching experiment in which observers matched a colour patch displayed on an LCD monitor, by
adjusting its L*a*b* values, to a colour patch printed on paper and viewed under varying amounts of UV
content in the viewing booth illumination. A customised viewing booth was built for this purpose, and
substrates with varying amounts of optical brighteners were considered for the study.
A model based on CIECAM02 and a scaling technique has been developed to predict the perceived colour
match on an LCD display of colours printed on substrates with optical brighteners and viewed in the viewing
booth under varying amounts of UV in the illumination. According to the obtained results, the
appearance of the colours printed on substrates containing optical brighteners varied with the amount
of UV in the viewing illumination. The developed model gave good prediction of the XYZ tristimulus
values of the perceived match on the LCD display from the XYZ tristimulus values of the printed colours on
the substrate, with acceptable ΔEab. This shows that CIECAM02 can be effectively used for soft proofing.
Ghostscript color management
Michael J. Vrhel, Raymond Johnston
This document introduces an updated color architecture that has been designed for Ghostscript. Ghostscript
is a well known open source document rendering and conversion engine. Prior to this update, the handling of
color in Ghostscript was based primarily upon PostScript color management. The new design results in a flexible
ICC-based architecture that works well in Ghostscript's multi-threaded rendering environment.
Color control of a lighting system using RGBW LEDs
A lighting system is proposed to render objects under a variety of colored illumination. The proposed system is constructed with an LED unit, white diffusion filters, dimmers, and a personal computer as a controller. The LED unit is composed of four kinds of color LED lamps: 12 red (R), 14 green (G), 12 blue (B), and 10 white (W). The LED lamps have a linear input-output relationship and a larger color gamut than Adobe RGB. Since the lighting system has an independent white light source, white illumination can be produced using the white light source together with a mixture of the RGB primary sources. Therefore, to determine an illumination color we have to solve a mapping problem from the 3D color space to the 4D space of RGBW digital values. This paper proposes an effective algorithm for determining the digital control signals of the RGBW lights, so that colored light is generated with arbitrary (x, y) chromaticity and luminance value Y. The performance of the proposed method is examined in an experiment, where the accuracy of the colored light is evaluated with regard to the CIE color difference.
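The 3D-to-4D mapping the abstract describes can be sketched as follows. This is an illustrative Python sketch, not the paper's algorithm: the 3×4 primary matrix is a made-up stand-in for measured LED tristimulus values, and the strategy of maximizing the white channel before solving for RGB is one plausible way to resolve the extra degree of freedom.

```python
import numpy as np

# Hypothetical 3x4 matrix of XYZ tristimulus values of the four LED
# primaries at full drive (columns: R, G, B, W). Real values would be
# measured with a spectroradiometer.
P = np.array([
    [0.45, 0.18, 0.14, 0.95],   # X
    [0.22, 0.60, 0.08, 1.00],   # Y
    [0.02, 0.07, 0.70, 1.05],   # Z
])

def xyY_to_XYZ(x, y, Y):
    """Convert chromaticity (x, y) and luminance Y to XYZ."""
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return np.array([X, Y, Z])

def rgbw_from_xyY(x, y, Y):
    """Map a 3D target color to 4D RGBW drive values.

    The 3x4 system is underdetermined; here we use as much of the
    white channel as possible and then solve the remaining 3x3
    system for R, G, B -- one simple resolution of the 3D-to-4D
    mapping problem the abstract poses.
    """
    target = xyY_to_XYZ(x, y, Y)
    Prgb, w_col = P[:, :3], P[:, 3]
    # Scan white levels from high to low; keep the first feasible mix.
    for w in np.linspace(1.0, 0.0, 101):
        rgb = np.linalg.solve(Prgb, target - w * w_col)
        if np.all(rgb >= 0.0) and np.all(rgb <= 1.0):
            return np.append(rgb, w)
    return None

drive = rgbw_from_xyY(0.35, 0.36, 0.8)
```

Because the RGBW solution is exact wherever it is feasible, the reproduced XYZ equals the target; the white-maximizing choice is only one of many valid mixes.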
Color Correction and Color Spaces
Brightness contrast under high surround luminance levels: psychophysical data vs CIECAM02
The purpose of this study is twofold. First, we investigated perceived brightness contrast under surround luminance levels varied from dark to over-bright conditions by collecting psychophysical data using magnitude estimation. The results show that perceived brightness contrast increases as the surround changes from dark to average and decreases from average to over-bright. Second, the experimental results thus obtained were compared with brightness contrast estimates from CIECAM02 and MobileCAM, and we refined the surround factor c and the brightness correlate Q of CIECAM02. The refined predictions match the perceived brightness contrast well; the Pearson correlation between the refined CIECAM02 prediction and the visual results was 0.95.
LabRGB: optimization of bit allocation
Fumio Nakaya
A spectral distribution can be written as a linear combination of eigenvectors, and the eigenvector method gives the least estimation error, but the eigenvectors depend on the sample selection of the population and the encoded values have no physical meaning. The recently reported LabPQR [1] conveys physical values but is still dependent on the sample selection of the population. Thus, LabRGB [2][3][4] was proposed in 2007 to provide spectral encoding/decoding methods that are free from the sample selection of the population. LabRGB consists of six unique trigonometric base functions and physically meaningful encoding values. LabRGB was applied to real multispectral images and showed almost equal performance to the traditional orthogonal eigenvector method in spectral estimation, and even better performance in colorimetric estimation. In this paper, the bit allocation to the weighting factors was examined in terms of the spectral and colorimetric distances of nearest neighbors. The optimum way of minimizing the number of unusable combinations of weighting factors was obtained by using the correlation of the weighting factors. The optimum way of minimizing the spectral and colorimetric distances of nearest neighbors was also obtained by using a nonlinear mapping method. The two methods thus obtained give a good clue for explicitly defining the number of bits for the respective scores in future applications and standardization.
Spatio-temporal colour correction of strongly degraded movies
The archives of motion pictures represent an important part of precious cultural heritage. Unfortunately, these
cinematography collections are vulnerable to different distortions such as colour fading which is beyond the
capability of the photochemical restoration process. Spatial colour algorithms such as Retinex and ACE provide a helpful tool for restoring strongly degraded colour films, but there are some challenges associated with these algorithms.
We present an automatic colour correction technique for digital colour restoration of strongly degraded movie
material. The method is based upon the existing STRESS algorithm. In order to cope with the problem of highly
correlated colour channels, we implemented a preprocessing step in which saturation enhancement is performed
in a PCA space. Spatial colour algorithms tend to emphasize all details in the images, including dust and
scratches. Surprisingly, we found that the presence of these defects does not affect the behaviour of the colour
correction algorithm. Although the STRESS algorithm is already in itself more efficient than traditional spatial
colour algorithms, it is still computationally expensive. To speed it up further, we went beyond the spatial
domain of the frames and extended the algorithm to the temporal domain. This way, we were able to achieve
an 80 percent reduction of the computational time compared to processing every single frame individually. We
performed two user experiments and found that the visual quality of the resulting frames was significantly better
than with existing methods. Thus, our method outperforms the existing ones in terms of both visual quality
and computational efficiency.
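The PCA preprocessing step the abstract mentions — decorrelating the highly correlated colour channels before boosting saturation — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the boost factor is an assumed value.

```python
import numpy as np

def pca_saturation_boost(img, factor=1.5):
    """Decorrelate RGB channels with PCA and scale the two minor
    components, which mostly carry chromatic variation, by `factor`.

    `img` is an (H, W, 3) float array in [0, 1]. The boost factor is
    an illustrative choice, not a value from the paper.
    """
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    # Eigenvectors of the channel covariance; the first component
    # (largest eigenvalue) approximates luminance in faded material.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    eigvecs = eigvecs[:, order]
    scores = centered @ eigvecs
    scores[:, 1:] *= factor          # boost the chroma-like components
    boosted = scores @ eigvecs.T + mean
    return np.clip(boosted, 0.0, 1.0).reshape(h, w, 3)
```

With `factor=1.0` the transform is an exact round trip, which makes the decorrelate/boost/reconstruct structure easy to verify.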
Color correction optimization with hue regularization
Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the
original scene. When no reference is available, observers can extract the apparent objects in an image and compare them
with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results
indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the
appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived
image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues
and higher color saturation.
The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color
space to a target color space, usually through a color correction matrix which in its most basic form is optimized through
linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error.
Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably.
In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue
regularization and present some experimental results.
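The baseline fit the abstract starts from — a colour correction matrix obtained by linear regression minimizing Euclidean colour error — can be sketched with a simple regularization term. This is a hedged stand-in for the paper's hue regularization, not its actual formulation: here rows for memory colours are simply re-weighted in the least-squares system.

```python
import numpy as np

def color_matrix(src, dst, mem_src=None, mem_dst=None, lam=10.0):
    """Fit a 3x3 color correction matrix M minimizing
    ||src @ M.T - dst||^2 + lam * ||mem_src @ M.T - mem_dst||^2.

    The second term is a simple stand-in for hue regularization:
    patches of memory colors (skin, grass, sky) are stacked into the
    system with weight sqrt(lam) so their reproduction is prioritized
    over the plain Euclidean fit. `lam` is an illustrative setting.
    """
    A, B = np.asarray(src, float), np.asarray(dst, float)
    if mem_src is not None:
        w = np.sqrt(lam)
        A = np.vstack([A, w * np.asarray(mem_src, float)])
        B = np.vstack([B, w * np.asarray(mem_dst, float)])
    # Solve the stacked least-squares problem column by column.
    M_T, *_ = np.linalg.lstsq(A, B, rcond=None)
    return M_T.T
```

When the training data are exactly consistent with one matrix, the regularized and unregularized fits coincide; the weighting only matters when errors must be traded off, which is exactly the situation the abstract describes.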
Printing I
Spectral model of an electro-photographic printing system
At EI 2007 in San Jose, California, detailed physical models for monochrome and color electro-photographic printers were presented. These models were based on computer simulations of toner-dot formation for a variety of halftone structures. The optical interactions between the toner dots and the paper substrate were incorporated by means of an optical scattering function, which allowed for the calculation of optical dot gain (and physical dot gain) as a function of the halftone structure. The color model used simple red-green-blue channels to measure the effect of the absorption and scattering properties of the cyan, magenta, yellow, and black toners on the final halftone image. The new spectral model uses the full absorption and scattering spectra of the image toners in calculating the final color image in terms of CIE XYZ values for well-defined color and gray patches. The new spectral model will be used to show the impact of halftone structure and toner-layer order on conventional dot-on-dot, rotated-dot, and error-diffusion color halftone systems, and how to minimize the impact of image-toner scattering. The model has been expanded to use the Neugebauer equations to approximate the amounts of cyan, magenta, and yellow toners required to give a "good" neutral in the rotated-dot halftone; fine tuning is achieved by adjusting the development threshold level for each layer to hold a good neutral over the full tonal range. In addition to this fine tuning, cyan, yellow, and magenta offsets are used to find an optimum use of the halftone dither patterns. Once a "good" neutral is obtained, the impact on dot gain, color reproduction, and optimum layer order can be studied, with an emphasis on how the full spectral model differs from the simpler three-channel model. The model is used to explore the different approaches required in dot-on-dot, rotated-dot, and error-diffusion halftones to achieve good results.
Optimized selection of image tiles for ink spreading calibration
The Yule-Nielsen modified spectral Neugebauer model (YNSN) enables predicting reflectance spectra from ink
surface coverages of halftones. In order to provide an improved prediction accuracy, this model is enhanced with
an ink spreading model accounting for ink spreading in all superposition conditions (IS-YNSN). As any other
spectral reflection prediction model, the IS-YNSN model is conceived to predict the reflection spectra of uniform
patches. Instead of uniform patches, we investigate whether tiles located within color images can be accurately predicted
and how they can be used to facilitate the calibration of the ink spreading model. In the present contribution, we
first detail an algorithm to automatically select image tiles as uniform as possible from color images by relying on
the CMY or CMYK pixel values of these color tiles. We show that if these image tiles are uniform enough, they
can be accurately predicted by the IS-YNSN model. The selection algorithm incorporates additional constraints
and is verified on 6 different color images. We finally demonstrate that the ink spreading model can be calibrated
with as few as 5 to 10 image tiles.
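The prediction core the abstract builds on — the Yule-Nielsen modified spectral Neugebauer (YNSN) model with Demichel coverages — can be sketched as follows. The formula is the standard one; the flat primary spectra are illustrative placeholders, not measurements.

```python
import numpy as np

def demichel_cmy(c, m, y):
    """Demichel equations: fractional areas of the 8 Neugebauer
    primaries (white, c, m, y, cm, cy, my, cmy) for independently
    screened CMY surface coverages."""
    return np.array([
        (1-c)*(1-m)*(1-y), c*(1-m)*(1-y), (1-c)*m*(1-y), (1-c)*(1-m)*y,
        c*m*(1-y), c*(1-m)*y, (1-c)*m*y, c*m*y,
    ])

def ynsn_predict(c, m, y, primaries, n=2.0):
    """Yule-Nielsen modified spectral Neugebauer prediction:
    R(lambda) = (sum_i a_i * R_i(lambda)^(1/n))^n,
    where `primaries` is an (8, L) array of Neugebauer-primary
    reflectance spectra and n is the Yule-Nielsen factor."""
    a = demichel_cmy(c, m, y)
    return (a @ primaries ** (1.0 / n)) ** n

# Toy single-band "spectra" for the 8 primaries (illustrative only).
R = np.array([[0.90], [0.35], [0.40], [0.60],
              [0.20], [0.30], [0.25], [0.08]])
```

Ink-spreading calibration, as in the paper, would replace the nominal coverages c, m, y with effective coverages fitted per superposition condition before evaluating this formula.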
Preferred skin color enhancement for photographic color reproduction
Skin tones are the most important colors in the memory color category, and reproducing skin colors pleasingly is an important factor in photographic color reproduction. Moving skin colors toward their preferred skin color center improves the color preference of the skin color reproduction. Several methods to morph skin colors toward a smaller preferred skin color region have been reported in the past. In this paper, a new approach is proposed to further improve the results of skin color enhancement. An ellipsoid skin color model is applied to compute skin color probabilities for skin color detection and to determine a weight for skin color adjustment. Preferred skin color centers determined through psychophysical experiments were applied for color adjustment; separate centers for dark, medium, and light skin colors are used to adjust skin colors differently. Skin colors are morphed toward their preferred color centers. A special processing step is applied to avoid contrast loss in highlights. A 3-D interpolation method is applied to fix a potential contouring problem and to improve color processing efficiency. A psychophysical experiment validates that the proposed preferred skin color enhancement effectively identifies skin colors, improves the skin color preference, and does not objectionably affect preferred skin colors in the original images.
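The ellipsoid-weighted morph the abstract describes can be sketched as follows. All numeric values (skin center, covariance, preferred center, strength) are invented placeholders; the paper's values come from psychophysical experiments and differ per skin-tone group.

```python
import numpy as np

# Illustrative skin-color model in CIELAB: detection center,
# covariance, and a preferred center (all made-up values).
SKIN_CENTER = np.array([65.0, 18.0, 18.0])
SKIN_COV = np.diag([120.0, 40.0, 40.0])
PREFERRED_CENTER = np.array([67.0, 16.0, 20.0])

def skin_weight(lab):
    """Ellipsoid membership weight: 1 at the center, falling to 0 at
    the ellipsoid boundary (Mahalanobis distance 3)."""
    d = lab - SKIN_CENTER
    d2 = d @ np.linalg.inv(SKIN_COV) @ d
    return float(np.clip(1.0 - np.sqrt(d2) / 3.0, 0.0, 1.0))

def enhance_skin(lab, strength=0.5):
    """Morph a color toward the preferred skin center, weighted by its
    skin probability, so non-skin colors are left untouched."""
    w = skin_weight(lab)
    return lab + strength * w * (PREFERRED_CENTER - lab)
```

Colors far outside the ellipsoid receive weight 0 and pass through unchanged, which is what keeps the adjustment from disturbing non-skin content.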
Kubelka-Munk theory for efficient spectral printer modeling
In the context of spectral color image reproduction by multi-channel inkjet printing a key challenge is to accurately
model the colorimetric and spectral behavior of the printer. A common approach for this modeling is to assume that the
resulting spectral reflectance of a certain ink combination can be modeled as a convex combination of the so-called
Neugebauer Primaries (NPs); this is known as the Neugebauer Model. Several extensions of this model exist, such as the
Yule-Nielsen Modified Spectral Neugebauer (YNSN) model. However, as the number of primaries increases, the
number of NPs increases exponentially; this poses a practical problem for multi-channel spectral reproduction.
In this work, the well known Kubelka-Munk theory is used to estimate the spectral reflectances of the Neugebauer
Primaries instead of printing and measuring them, and subsequently we use these estimated NPs as the basis of our
printer modeling. We have evaluated this approach experimentally on several different paper types and on the HP
Deskjet 1220C CMYK inkjet printer and the Xerox Phaser 7760 CMYK laser printer, using both the conventional
spectral Neugebauer model and the YNSN model. We have also investigated a hybrid model with mixed NPs, half
measured and half estimated.
Using this approach, we achieve not only cheaper and less time-consuming model establishment but also, somewhat unexpectedly, improved model precision over the models using real measurements of the NPs.
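The Kubelka-Munk machinery the abstract relies on can be sketched as follows. The R∞ formula is the standard one; treating the K and S spectra of an overprint as weighted sums of the component inks' coefficients is the usual additivity assumption, shown here as a minimal sketch rather than the authors' full procedure.

```python
import numpy as np

def km_reflectance(K, S):
    """Kubelka-Munk reflectance of an opaque (infinitely thick) layer:
    R_inf = 1 + K/S - sqrt((K/S)^2 + 2*K/S),
    with K the absorption and S the scattering coefficient."""
    q = np.asarray(K, float) / np.asarray(S, float)
    return 1.0 + q - np.sqrt(q * q + 2.0 * q)

def km_mix(Ks, Ss, weights):
    """Additivity assumption for ink overprints: estimate a Neugebauer
    primary's K and S spectra as weighted sums of the component inks',
    so its reflectance can be estimated without printing it."""
    w = np.asarray(weights, float)
    return km_reflectance(w @ np.asarray(Ks), w @ np.asarray(Ss))
```

The inverse relation K/S = (1 - R)^2 / (2R) is what lets the per-ink coefficients be recovered from a few measured solid patches in the first place.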
Printing II
A simple color prediction model based on multiple dot gain curves
Most color prediction models use a single dot gain curve; a few assume that dot gain changes under ink superposition, but they still use a single dot gain curve for each ink to compensate the effective ink coverage. Considering the fact that optical dot gain is the effect of light scattering in paper, it is reasonable that light of different wavelengths might produce different optical dot gain for each ink. In this study, for each primary ink we utilized three different curves, obtained from CIE X, Y, and Z, which approximately stand for three specific wavelength bands, to calculate color coordinates. In addition, we noticed that dot gain curves obtained from print samples with a single ink printed on paper do not work well for prints where one ink is printed on top of another, or others. Therefore, dot gain curves for the different ink superposition situations are optimized by matching the calculated tristimulus values of training patches to their measured counterparts. For each ink, dividing the dot gain into several dot gain actions corresponding to the different ink superposition situations, we obtain the final dot gain as a group of multiple curves that takes into account all possible 'dot gain actions' with certain probability coefficients.
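The per-channel idea in the abstract — a separate effective-coverage curve for each of X, Y, and Z fed into a Murray-Davies style prediction — can be sketched as follows. The quadratic gain curves and the paper/solid values are illustrative assumptions, not the paper's fitted data.

```python
import numpy as np

def murray_davies(a_eff, t_paper, t_solid):
    """Murray-Davies prediction for one tristimulus channel:
    T = (1 - a_eff) * T_paper + a_eff * T_solid."""
    return (1.0 - a_eff) * t_paper + a_eff * t_solid

def predict_xyz(a_nominal, gain_curves, xyz_paper, xyz_solid):
    """Predict XYZ of a single-ink halftone using a separate effective
    coverage per channel, as the abstract proposes. `gain_curves` maps
    nominal to effective coverage for X, Y, and Z (plain callables
    here; in practice, curves fitted to training patches)."""
    return np.array([
        murray_davies(g(a_nominal), p, s)
        for g, p, s in zip(gain_curves, xyz_paper, xyz_solid)
    ])

# Illustrative dot gain curves: effective coverage a + k*a*(1-a),
# with a different gain amplitude k per tristimulus channel.
curves = [lambda a, k=k: a + k * a * (1.0 - a) for k in (0.20, 0.25, 0.15)]
xyz = predict_xyz(0.5, curves, xyz_paper=(85.0, 90.0, 95.0),
                  xyz_solid=(20.0, 15.0, 10.0))
```

At 0% and 100% nominal coverage the curves pin the prediction to the measured paper and solid values, so the three channel curves only reshape the mid-tones.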
Subsampled optimal noise management method for a robust separation based calibration of color printing systems
For many color printing systems, printer calibration is often utilized to return the printer to a known state to ensure consistent color output. In particular, the key visual response of color balance often depends on the return to the calibration state. Input color signal noise, generated by the printing system's natural variation when printing the calibration target, affects the accuracy and robustness of the calibration outcome. Noise management techniques for handling input color signal noise prior to system calibration are often absent or rely on ad hoc analysis, and are usually not based on a well-developed printer response extracted from the measured signal using advanced noise management methods. This paper describes Part II of an overall method for developing a robust noise management system for printer calibration. In Part I, an 8-bit full-resolution calibration target was described, and an iterative filtering noise management metric and method were defined and developed. In this Part II, the development of a low-resolution calibration target and a corresponding noise-free representation of the printer system state, as defined by quantitative metrics relative to the printer response derived from the high-resolution signal in Part I, is defined and developed. The subsampled calibration target combined with the proposed noise management method can increase productivity and reduce operator error in print shop workflows with minimal loss of accuracy.
Investigating the wavelength dependency of dot gain in color print
By separating the optical dot gain from the physical dot gain, it is possible to study the different behaviors of color inks on different papers. In this study we investigate the dependency of dot gain on wavelength in color print. Microscopic images have been used to separate optical and physical dot gain from each other. The optical behavior of the primary color inks in their different absorbing wavelength bands has been studied. It is illustrated that light scattering in the paper is wavelength independent, and therefore the Point Spread Function, which describes the probability of light scattering in the paper, does not change over the visible wavelengths (380 nm - 700 nm). We show that it is possible to separate two printed color inks at one specific wavelength, owing to the filtering behavior of the color inks. Given that light scattering in the paper is wavelength independent, it was possible to analyze the dot gain of each color separately.
Fast approach for toner saving
Reducing toner consumption is an important task in modern printing devices and has a significant positive ecological impact. Existing toner saving approaches have two main drawbacks: the appearance of the hardcopy in toner saving mode is worse than in normal mode, and processing the whole rendered page bitmap requires significant computational cost.
We propose to add small holes of various shapes and sizes at random places inside the character bitmaps stored in the font cache. This random perforation scheme is based on the processing pipeline in the RIP of the standard printer languages PostScript and PCL. Processing text characters only, and moreover processing each character for a given font and size only once, is an extremely fast procedure. The approach does not deteriorate halftoned bitmaps or business graphics, and provides toner savings of up to 15-20% for typical office documents. The rate of toner saving is adjustable.
The alteration of the resulting characters' appearance is almost indistinguishable from solid black text due to the random placement of small holes inside the character regions. The suggested method automatically skips small fonts to preserve their quality. The readability of text processed by the proposed method remains good, and OCR programs also process such scanned hardcopy successfully.
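The random perforation scheme can be sketched as follows. All parameter values (target coverage, hole size, minimum font size) are illustrative assumptions, not figures from the paper, and a solid rectangle stands in for a cached glyph bitmap.

```python
import numpy as np

def perforate_glyph(glyph, font_size, coverage=0.18, hole=2,
                    min_font_size=8, seed=None):
    """Punch small square holes at random places inside a binary glyph
    bitmap, aiming to clear roughly `coverage` of its black pixels.
    Small fonts are returned untouched to preserve readability.
    Parameter values are illustrative, not from the paper."""
    if font_size < min_font_size:
        return glyph
    out = glyph.copy()
    rng = np.random.default_rng(seed)
    target = int(glyph.sum() * coverage)
    h, w = glyph.shape
    removed = 0
    while removed < target:
        r = rng.integers(0, h - hole + 1)
        c = rng.integers(0, w - hole + 1)
        patch = out[r:r + hole, c:c + hole]
        removed += int(patch.sum())   # overlapping holes add nothing
        patch[...] = 0
    return out

glyph = np.ones((24, 16), dtype=np.uint8)   # stand-in for a glyph bitmap
saved = 1.0 - perforate_glyph(glyph, 12, seed=0).sum() / glyph.sum()
```

Because each cached glyph is perforated once and reused, the per-page cost is essentially zero, which is the speed argument the abstract makes.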
A virtual printer and reference printing conditions
In a late binding workflow, data is commonly prepared in an output-referred state based on a reference intermediate RGB
colour encoding. Such encodings may have a larger gamut than the target printing condition, and so there is some
ambiguity over how to preview the data before it has been converted to the target printing condition.
Here we propose an additional intermediate encoding, referred to as a 'virtual printer' which bridges the gap between
three-component reference RGB or PCS encodings, and reference CMYK printing conditions.
The virtual printer has a large colour gamut which represents a superset of most available print gamuts. It is defined here
in terms of the reflectance and colorimetric coordinates of the virtual colorants, and associated colour mixing model.
When used in a colour reproduction workflow, documents can be initially rendered to the printer-like gamut of the
virtual printer, and channel preferences (such as black generation) can be defined. Re-rendering to a reference printing
condition and associated colour gamut is deferred, thus supporting re-purposing of the document.
Halftoning I
Cost function analysis for stochastic clustered-dot halftoning based on direct binary search
Most electrophotographic printers use periodic, clustered-dot screening for rendering smooth and stable prints. However, periodic, clustered-dot screening suffers from periodic moiré resulting from interference between the component periodic screens superposed for color printing. An approach called CLU-DBS has been proposed for stochastic, clustered-dot halftoning and screen design based on direct binary search. This method deviates from conventional DBS in its use of different filters in different phases of the algorithm. In this paper, we derive a closed-form expression for the cost metric that is minimized in CLU-DBS. The closed-form expression provides clearer insight into the relationship between the input parameters and processes and the output texture, thus enabling us to generate better-quality texture. One of the limitations of the CLU-DBS algorithm proposed earlier is the inversion of the distribution of clusters and voids in the final halftone with respect to the initial halftone. In this paper, we also present a technique for avoiding the inversion by negating the sign of the error term in the newly derived cost metric that is responsible for clustering. This not only simplifies the CLU-DBS screen design process, but also significantly reduces the number of iterations required for optimization.
Stochastic clustered-dot screen design for improved smoothness
Printers employing electrophotographic technology typically use clustered-dot screening to avoid potential artifacts caused by unstable dot rendering. Periodic clustered-dot screens are quite smooth, but suffer from periodic moiré artifacts due to interference with other color channels. Stochastic, clustered-dot screens provide an alternative solution. In this paper, we introduce a new approach for stochastic, clustered-dot screen design based on Direct Binary Search (DBS). The method differs from conventional DBS in its use of a modified cost metric, derived in an earlier work, that employs different filters in the initialization and update phases of DBS. The objective of the chosen approach is to design a screen for improved print smoothness by generating a homogeneous distribution of compact, uniformly sized clusters. The results include a halftone of a screened folded ramp, compared against a screen designed with a previous method.
Design of color screen tile vector sets
For electrophotographic printers, periodic clustered screens are preferable due to their homogeneous halftone
texture and their robustness to dot gain. In traditional periodic clustered-dot color halftoning, each color plane
is independently rendered with a different screen at a different angle. However, depending on the screen angle
and screen frequency, the final halftone may have strong visible moiré due to the interaction of the periodic
structures, associated with the different color planes.
This paper addresses issues on finding optimal color screen sets which produce the minimal visible moiré and
homogeneous halftone texture. To achieve these goals, we propose new features including halftone microtexture
spectrum analysis, common periodicity, and twist factor. The halftone microtexture spectrum is shown to predict
the visible moiré more accurately than the conventional moiré-free conditions. Common periodicity and twist
factor are used to determine whether the halftone texture is homogeneous. Our results demonstrate significant
improvements to clustered-dot screens in minimizing visible moiré and having smooth halftone texture.
UV fluorescence encoded image using two halftoning strategies
A method is provided for embedding a UV fluorescent watermark in a color halftone image printed on paper. The
described method implements two different strategies to halftone a watermark region and a background region. One
strategy uses dot-on-dot halftoning to maximize the usage of black ink and minimize ink dispersion, while the other
strategy uses successive-filling halftoning to maximize ink dispersion. An accurate color look-up-table (LUT) is built to
directly transform the colorant values for one halftoning strategy to the colorant values for the other strategy. With the
color transformation applied on one region, the binary outputs in both watermark and background regions halftoned with
different strategies exhibit a similar color appearance under normal lighting conditions. However, under UV illumination, due to the fluorescent effect caused by the different paper coverages in the two regions, the embedded watermark becomes clearly visible.
Moiré-free color halftoning using hexagonal geometry and spot functions
A halftone configuration is presented that utilizes three or four rotated hexagonal screens, or more precisely, screens with
hexagonally tiled clusters, for moiré-free color printing. Halftone designers consider many options to deliver a screen
with desirable characteristics, and often must settle for less than desirable results. The present method presents a new
option with several beneficial properties compared to conventional square-cell-based screens. Hexagonal screens can
appear to have smoother texture. Due to differences in packing geometry and touch point geometry, hexagons have the
potential to possess different tone reproduction characteristics, which may be favorable for some marking processes. A
fourth screen (e.g., yellow) can be included moiré-free, thereby avoiding problems associated with stochastic solutions
for yellow. We also present a corresponding parametrically controlled hexagonal halftone spot function that allows for
optimization of dot touch points and provides compact growth. The optimized touch points can prevent a tone
reproduction bump, while the compact growth throughout the gray range ensures maximum stability. Examples are
provided.
Halftoning II
A hybrid adaptive thresholding method for text with halftone pattern in scanned document images
Songyang Yu, Wei Ming
In this paper, a hybrid adaptive thresholding method for scanned document images containing text with halftone patterns is presented. The method is based on the topological features and gray-level statistics of text with halftone patterns. Global histogram-based thresholding methods often miss some halftone text after binarization, especially text close to the background gray level. The proposed method first divides the document image into non-overlapping windows and extracts text characters as connected components in each window. The Euler number of each text character is then calculated and used as a topological feature to identify halftone text characters. After all the halftone text characters are identified, the document image is segmented into a halftone text region and a non-halftone text region, and each region is binarized using its own pixel-value statistics. Compared to global histogram-based thresholding methods, the proposed method produces better binarization results on scanned document images containing both halftone and non-halftone text.
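The Euler number used as the topological feature above can be computed directly from a binary character bitmap with Gray's bit-quad counts; halftoned characters are riddled with tiny holes, so their Euler number is strongly negative, which is what makes it a usable discriminator. A minimal sketch:

```python
import numpy as np

def euler_number(img):
    """Euler number (objects minus holes, 4-connectivity) of a binary
    image via Gray's bit-quad counts: E = (Q1 - Q3 + 2*QD) / 4."""
    p = np.pad(np.asarray(img, dtype=np.uint8), 1)
    # The four pixels of every 2x2 window of the padded image.
    a, b = p[:-1, :-1], p[:-1, 1:]
    c, d = p[1:, :-1], p[1:, 1:]
    s = a + b + c + d
    q1 = int(np.sum(s == 1))                 # exactly one foreground pixel
    q3 = int(np.sum(s == 3))                 # exactly three
    qd = int(np.sum((s == 2) & (a == d)))    # diagonal pairs only
    return (q1 - q3 + 2 * qd) // 4
```

A solid character yields E = 1, a character with one hole (like 'o') yields 0, and a halftone-textured character yields a large negative value.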
Window-based spectral analysis of color halftone screens
Improper design of color halftone screens may create visually objectionable moiré patterns in the final prints due to the interaction between the halftone screens of the color primaries. The prediction of such interactions from the screens' bitmaps helps to identify and avoid problematic patterns, reducing the time required to design effective color halftone screens. In this paper, we detect the moiré patterns by examining the spatial frequency spectra of the superimposed screens. We study different windowing techniques, including Hann, Hamming, and Blackman, to better estimate the moiré strength, frequency, and orientation. The window-based spectral estimation has the advantage of reducing the effect of spectral leakage associated with the non-windowed discrete signals. Two methods are used to verify the detected moiré from the bitmaps. First, we analyze scans of the printed halftones, using the same technique that we applied to the bitmaps. Second, we independently inspect the printed halftones visually. Our experiments show promising results by detecting the moiré patterns from both the bitmap images as well as the scans of the actual prints verified by visual inspection.
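The windowed spectral estimate the abstract relies on can be sketched as follows. This is an illustrative Python sketch, not the paper's pipeline: the synthetic test pattern and patch size are assumptions, and only the Hann window from the three studied is shown.

```python
import numpy as np

def dominant_frequency(patch):
    """Estimate the strongest spatial frequency in a halftone patch,
    using a 2D Hann window to suppress spectral leakage."""
    patch = np.asarray(patch, float)
    patch = patch - patch.mean()            # remove the DC term
    h, w = patch.shape
    win = np.outer(np.hanning(h), np.hanning(w))
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch * win)))
    spec[h // 2, w // 2] = 0.0              # ignore any residual DC
    r, c = np.unravel_index(np.argmax(spec), spec.shape)
    # Convert the peak position to cycles per pixel.
    return (c - w // 2) / w, (r - h // 2) / h

# A synthetic superposition pattern beating at 4 cycles / 64 pixels.
_, xx = np.mgrid[0:64, 0:64]
pattern = 0.5 + 0.5 * np.cos(2 * np.pi * 4 * xx / 64)
fx, fy = dominant_frequency(pattern)
```

A low-frequency peak in the spectrum of two superimposed screen bitmaps, found the same way, is the moiré signature the paper looks for.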
Descreening of color halftone images in the frequency domain
Scanning a halftone image introduces halftone artifacts, known as moiré patterns, which significantly degrade the image quality. Printers that use amplitude modulation (AM) screening for halftone printing position dots in a periodic pattern. Therefore, frequencies related to halftoning are easily identifiable in the frequency domain. This paper proposes a method for descreening scanned color halftone images using a custom band-reject filter designed to isolate and remove only the frequencies related to halftoning while leaving image edges sharp, without image segmentation or edge detection. To enable hardware acceleration, the image is processed in small overlapped windows. The windows are filtered individually in the frequency domain, then pieced back together in a manner that does not show blocking artifacts.
Analog image backup with steganographic halftones
Hardcopy (analog) backup of photographs is an important addition to digital storage. It offers a means to visually enjoy
the "storage format" decoupled from a digital storage media which can have a shorter archival life than hardcopy, along
with shorter lifetime of hardware support. The paper describes a means to eliminate the need to include unsightly text
that is part of earlier solutions by embedding all required metadata in a small steganographic halftone with the print. The
solution works with any image scanner, which we can safely assume will be available far into the future, when readers of
today's digital storage media will be long gone. Examples of the resulting archival compositions and metadata-embedded halftones are included.
Interactive Paper Session
Spectral reflection and transmission prediction model of halftone image on fluorescent supports
Fluorescent brighteners in paper absorb invisible UV (ultraviolet) light and emit visible blue light, which increases the apparent whiteness of the paper. In this paper, we use an enhanced Clapper-Yule model to establish a new reflectance prediction model for halftone images. The reflection behavior of a halftone image on fluorescent supports is generalized by dividing the light reflected by the fluorescent support into two parts: the primary streams, which originate from the incident light, and the fluorescent streams, which are created by absorption of the UV light. First, the spectral reflectance of the bare fluorescent support and of an ink layer on the fluorescent support are analyzed. Second, the reflectance and transmittance of the ink layer on the fluorescent support are studied. Then the physical dot gain that results from the real extension of an ink dot (i.e., ink spreading) is studied. Finally, we establish a reflection and transmission model for a halftone image on paper with fluorescent additives. To verify the accuracy of the model, we performed a data simulation in Matlab and generated two reflectance curves (the reflectance of a halftone image on paper with and without fluorescent additives). From the results, we conclude that the new model predicts the reflectance of halftone images on fluorescent supports with good accuracy.
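The non-fluorescent base model the abstract extends is the classic Clapper-Yule halftone reflectance, which can be sketched as follows. The fluorescent emission term of the paper's enhanced model is not reproduced here; the surface-reflectance defaults are conventional illustrative values.

```python
def clapper_yule(a, t_ink, rg, rs=0.04, ri=0.6, K=0.0):
    """Classic Clapper-Yule reflectance of a halftone with dot
    coverage `a` and ink transmittance `t_ink` on a substrate with
    body reflectance `rg`; rs and ri are the external and internal
    surface reflectances, K the specular fraction reaching the
    detector. This is the non-fluorescent base model that the
    abstract's enhanced model builds on."""
    t1 = (1.0 - a) + a * t_ink        # transmittance, single pass
    t2 = (1.0 - a) + a * t_ink ** 2   # transmittance, double pass
    return K * rs + ((1.0 - rs) * (1.0 - ri) * rg * t1 ** 2
                     / (1.0 - ri * rg * t2))
```

The denominator sums the multiple internal reflections between ink layer and paper bulk; the fluorescent streams of the paper's model would add an emission term driven by the UV content of the illuminant.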
The transmission of light affects the color reproduction of plastic prints
By analyzing the different paths that incident light traverses in a print, this paper studies the effect
that the transmission of light produces on the color reproduction of plastic printing. The article also
analyzes the three color attributes and the color density of the object, so as to make an accurate
prediction of the color reproduction of prints in which the ink is printed directly on the back of the
plastic base. In this research, the incident light on a plastic print is divided into two parts: the
diffuse light reflected within the ink, and the multi-layer internal reflection of the light that passes
through the ink layer onto the plastic substrate.
In this paper, we use Kubelka-Munk theory to analyze the transmission of the incident light at the surface
of the print, and Clapper-Yule theory to analyze the incident light that passes through the ink to the
plastic film surface. When the incident light passes through the ink to the film surface, it produces a
series of mutually parallel reflected and refracted beams; summing their complex amplitudes yields the
total reflected amplitude, and a similar procedure yields the total reflected and refracted light
intensities. Combining the total reflected intensity through the plastic substrate with the overall
reflectivity of the plastic print surface from Kubelka-Munk theory, the color density can be expressed in
terms of the light-transmission parameters of the plastic substrate as D∞ = f(δ, d, i1). From this
equation, we find that the optical phase retardation δ, the thickness of the plastic d, and the angle of
incidence on the plastic surface i1 all affect the color reproduction of a plastic print.
Reflectance model of plastic substrate halftone image based on Markov chain
Show abstract
The research on color prediction models is one of the most important tasks in print
reproduction. Treating a regular quadriface as the composition of two bifaces, we
obtain the global transfer matrix of the quadriface from the single-step transition
probability matrix of a Markov chain. According to the optical character of a
transparent plastic substrate, using the Markov-chain theory of stochastic processes,
and taking into account the total internal reflection that occurs when light
propagates from a denser medium (ink and plastic substrate) into an optically thinner
medium (air), we modify the mathematical model of reflectivity and obtain a
mathematical reflectivity model for single-color (homochromous) prints on a plastic substrate.
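A toy absorbing Markov chain can illustrate how a closed-form reflectivity falls out of a single-step transition matrix. In the sketch below, the two transient states (light travelling down or up in the layer) and all numeric probabilities are illustrative choices, not the paper's quadriface model; the global transfer behavior comes from the fundamental matrix of the chain.

```python
import numpy as np

# Toy single-step transition model for a light "particle" in an ink layer on a
# plastic substrate. Transient states: travelling down / travelling up.
# Absorbing states: escaped upward (counts toward reflectivity) / absorbed.
t_pass = 0.9   # probability of surviving one pass through the layer (illustrative)
r_sub = 0.8    # reflectance at the substrate side (illustrative)
r_int = 0.6    # internal (total) reflection at the layer/air interface (illustrative)

Q = np.array([[0.0,            t_pass * r_sub],   # down -> up (bounce off substrate)
              [t_pass * r_int, 0.0           ]])  # up -> down (total internal reflection)
R = np.array([[0.0,                   1 - t_pass * r_sub],  # down -> escaped / absorbed
              [t_pass * (1 - r_int),  1 - t_pass        ]]) # up   -> escaped / absorbed

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits per state
B = N @ R                          # absorption probabilities from each start state
reflectivity = B[0, 0]             # start travelling down: chance of escaping upward
```

Summing the geometric series by hand gives reflectivity = t²·r_sub·(1 − r_int) / (1 − t²·r_sub·r_int), the same kind of closed form the matrix inversion produces in one step.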
Color image segmentation on region growing and multi-scale clustering
Zong-pu Jia,
Wei-xing Wang,
Jun-ding Sun,
et al.
Show abstract
This paper presents a color image segmentation method that combines region growing and color clustering algorithms.
The method considers both color and location information in a transformed color space. After multi-scale clustering
(MSC), it performs a spatial processing step: region growing. MSC handles the over-segmentation problem better than
equal-distance clustering, and compared with previous methods that depend on MSC alone, the region-growing step
improves noise suppression. The method inherits the idea of clustering first and then carrying out spatial processing;
both the clustering algorithm and the spatial processing algorithm are improved, so the combined method obtains
more satisfactory results.
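A minimal sketch of the cluster-then-grow pipeline, with plain k-means standing in for MSC (the paper's multi-scale clustering is not reproduced here) and 4-connected region growing on the resulting cluster-label map:

```python
import numpy as np
from collections import deque

def kmeans(pixels, k, iters=10):
    """Plain k-means with deterministic farthest-point initialisation
    (a stand-in for the paper's multi-scale clustering)."""
    centers = [pixels[0]]
    for _ in range(1, k):
        d = np.linalg.norm(pixels[:, None] - np.array(centers)[None], axis=2).min(1)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers, float)
    for _ in range(iters):
        labels = np.linalg.norm(pixels[:, None] - centers[None], axis=2).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(0)
    return labels

def region_grow(label_img):
    """Grow 4-connected regions of equal cluster label (BFS);
    returns the region map and the number of regions found."""
    h, w = label_img.shape
    regions = -np.ones((h, w), dtype=int)
    rid = 0
    for y in range(h):
        for x in range(w):
            if regions[y, x] >= 0:
                continue
            queue = deque([(y, x)])
            regions[y, x] = rid
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w and regions[ny, nx] < 0
                            and label_img[ny, nx] == label_img[cy, cx]):
                        regions[ny, nx] = rid
                        queue.append((ny, nx))
            rid += 1
    return regions, rid
```

Running `kmeans` on the flattened pixels and then `region_grow` on the reshaped label map reproduces the cluster-first, spatial-processing-second order described in the abstract.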
Regression based characterization of color measurement instruments in printing applications
Show abstract
In the context of print quality and process control, colorimetric parameters and tolerance values are clearly
defined, and calibration procedures are well established for color measurement instruments in printing workflows.
Still, measuring the same color wedge with more than one color measurement instrument can produce clearly different
results due to the random and systematic errors of the instruments. In situations where one instrument gives values
that are just inside the given tolerances while another produces values that exceed them, the question arises whether
the print or proof should be approved or rejected with regard to the standard parameters. The aim of this paper was
to determine an appropriate model to characterize color measurement instruments
for printing applications in order to improve the colorimetric performance and hence the inter-instrument agreement. The
method proposed is derived from color image acquisition device characterization methods which have been applied by
performing polynomial regression with a least square technique. Six commercial color measurement instruments were
used for measuring color patches of a control color wedge on three different types of paper substrates. The
characterization functions were derived using least square polynomial regression, based on the training set of 14 BCRA
tiles colorimetric reference values and the corresponding colorimetric measurements obtained by the measurement
instruments. The derived functions were then used to correct the colorimetric values of test sets of 46 measurements of
the color control wedge patches. The corrected measurement results obtained from the applied regression model were
then used as the reference against which the corrected measurements from the other instruments were compared, in
order to find the polynomial that yields the smallest color difference. The obtained results demonstrate that the
proposed regression method works remarkably well with a range of different color measurement instruments used on
three types of substrates. Finally, when the training set is extended from 14 to 38 samples, the results clearly
indicate that the model is robust.
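The least-squares polynomial regression step can be sketched as follows. The ten-term second-order expansion in CIELAB below is an illustrative choice, since the paper's exact feature set is not reproduced here:

```python
import numpy as np

def poly_features(lab):
    """Second-order polynomial expansion of CIELAB values (10 terms; illustrative)."""
    L, a, b = lab.T
    return np.column_stack([np.ones_like(L), L, a, b,
                            L*a, L*b, a*b, L**2, a**2, b**2])

def fit_characterization(measured, reference):
    """Least-squares regression mapping instrument readings to reference values,
    e.g. a training set of BCRA tile measurements against their reference data."""
    X = poly_features(measured)
    coef, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return coef

def correct(measured, coef):
    """Apply the derived characterization function to new measurements."""
    return poly_features(measured) @ coef
```

Fitting on a small training set (14 samples in the paper) and then applying `correct` to an independent test set mirrors the train/test split described in the abstract.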
Printing anaglyph maps optimized for display
Show abstract
Although anaglyphs have the big advantage that they can be presented on traditional single-channel media such as print,
film, and displays, the target medium must be decided when the pair of views is combined into a single image, so that
retinal rivalry and stereo crosstalk are minimized. Most anaglyph maps and map tools are optimized for display and
assume red-cyan filtered glasses for viewing. Because of the large difference between a display gamut and a printer
gamut, the red and cyan colors used to separate the left and right views change considerably when they are mapped from
a display color space to a printer color space for printing, which results in serious retinal rivalry. We developed a
solution that uses a special gamut mapping method to preserve the relative relationship of cyanish and reddish colors
when mapping from display to printer. In addition, a color characterization that balances neutral colors for specific
red/cyan glasses is applied to further improve the color appearance.
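One plausible way to preserve the reddish/cyanish relationship during gamut mapping is radial chroma compression at constant hue in CIELAB, which keeps the sign of a* intact. The paper's actual gamut mapping method is not specified here; the sketch below, including the hypothetical `printer_chroma_limit` callable, is an illustrative assumption.

```python
import math

def map_to_printer_gamut(lab, printer_chroma_limit):
    """Radially compress chroma toward the neutral axis at constant hue angle.

    Keeping the hue angle fixed preserves the sign of a*, so reddish colors stay
    reddish and cyanish colors stay cyanish after mapping into the printer gamut.
    printer_chroma_limit(L, h) is a hypothetical callable returning the printer
    gamut boundary chroma for a given lightness L and hue angle h.
    """
    L, a, b = lab
    c = math.hypot(a, b)
    h = math.atan2(b, a)
    limit = printer_chroma_limit(L, h)
    if c <= limit:
        return (L, a, b)          # already inside the printer gamut
    s = limit / c
    return (L, a * s, b * s)      # same hue, reduced chroma
```

Colors already inside the printer gamut pass through unchanged, so only the out-of-gamut reds and cyans are compressed.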
A restoration method for distorted image scanned from a bound book
Show abstract
When a bound document such as a book is scanned or copied with a flat-bed scanner, the scanned image exhibits two
kinds of defects: geometric and photometric distortion. The root cause of both defects is the imperfect contact
between the book being scanned and the scanner glass plate. The gap between the book center and the glass plate causes
the optical path from the surface of the book to the imaging unit (CCD/CIS) to differ from the optimal condition.
In this paper, we propose a method for restoring bound-document scan images without any additional information or
sensors. We correct the scanned images based on estimates of a boundary feature and a background profile. The boundary
feature is obtained by computing and analyzing the minimum bounding rectangle, which encloses the entire foreground
content at minimum size; the extracted feature is used to correct geometric distortion: de-skew, de-warping, and page
separation. The background profile is estimated from the gradient map and is used to correct photometric distortion,
i.e., the exposure problem. Experimental results show the effectiveness of the proposed method.
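The photometric half of the correction can be sketched as flat-fielding against an estimated background profile. The paper derives its profile from a gradient map; the per-column percentile estimator and the `target_white` value below are simplifying assumptions for illustration.

```python
import numpy as np

def estimate_background_profile(gray):
    """Per-column paper-white estimate; a high percentile skips dark text pixels.
    (Illustrative stand-in for the paper's gradient-map-based profile.)"""
    return np.percentile(gray, 90, axis=0)

def correct_exposure(gray, target_white=250.0):
    """Flat-field the scan so the estimated background reaches target_white,
    brightening the shadowed columns near the book's spine."""
    profile = estimate_background_profile(gray)
    gain = target_white / np.clip(profile, 1.0, None)
    return np.clip(gray * gain[None, :], 0.0, 255.0)
```

On a synthetic page with a darkened band near the spine, the per-column gain restores the background to a uniform white while scaling the foreground by the same factor.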