Proceedings Volume 5293

Color Imaging IX: Processing, Hardcopy, and Applications

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 18 December 2003
Contents: 13 Sessions, 51 Papers, 0 Presentations
Conference: Electronic Imaging 2004
Volume Number: 5293

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Color and Applications
  • Poster Session
  • Color and Applications
  • Spectral Imaging
  • Color Reproduction
  • Printing
  • Applications I
  • Applications II
  • Algorithms
  • Characterization and Color Management
  • Error Diffusion
  • Implementation: Models, Architectures, and Algorithms
  • Texture Appearance and Hybrid/Adaptive Methods
  • Screen Design
  • Poster Session
Color and Applications
Color categories are diverse in thought as well as language: evidence from New Guinea and Africa
Following recent findings of cultural and linguistic relativity in other fields of categorization (e.g. shape, number, space), we report a series of cross-cultural studies of color categorization in adults and young children that address the particular question of whether, and to what extent, color categories are learned and free to vary, or innate and universal. Adult speakers of different languages were found to show different patterns of discrimination and memory for the same set of colors, and their cognitive representations of color categories appeared to be isomorphic with their linguistic categories. Longitudinal studies of two groups of children, in Africa (children from the semi-nomadic Himba tribe in Namibia) and the UK, examined the extended process of both lexical and non-lexical color category acquisition. Gradual category acquisition was observed in both groups, rather than all-or-nothing performance, and even with intensive adult input (for the English children) color category acquisition appeared to be universally slow and effortful.
Poster Session
Generation of realistic scene using illuminant estimation and mixed chromatic adaptation
Jae-Chul Kim, Sang-Gi Hong, Dong-Ho Kim, et al.
An algorithm for combining a real image with a virtual model is proposed to increase the realism of synthesized images. Current approaches to synthesizing a real image with a virtual model rely on a surface reflection model and various geometric techniques. In these methods, the characteristics of the various illuminants in the real image are not sufficiently considered. In addition, although chromatic adaptation plays a vital role in accommodating different illuminants across the two viewing conditions, it is not taken into account in the existing methods. Thus, it is hard to obtain high-quality synthesized images. In this paper, we propose a two-phase image synthesis algorithm. First, the surface reflectance of the maximum highlight region (MHR) is estimated using the three eigenvectors obtained from principal component analysis (PCA) applied to the surface reflectances of 1269 Munsell samples. The combined spectral value of the MHR, i.e., the product of the surface reflectance and the spectral power distribution (SPD) of an illuminant, is then estimated using the three eigenvectors obtained from PCA applied to the products of the surface reflectances of the 1269 Munsell samples and the SPDs of four CIE standard illuminants (A, C, D50, D65). By dividing the average combined spectral values of the MHR by the average surface reflectances of the MHR, we can estimate the illuminant of a real image. Second, mixed chromatic adaptation (S-LMS) using the estimated and an external illuminant is applied to the virtual-model image. For evaluating the proposed algorithm, experiments with synthetic and real scenes were performed. The results show that the proposed method is effective in synthesizing real and virtual scenes under various illuminants.
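A minimal numpy sketch of the illuminant-estimation step described above, under the stated assumption that the PCA-based reconstructions of the MHR reflectances and combined spectra are already available; all arrays below are placeholders, not data from the paper.

```python
import numpy as np

# Placeholder MHR data sampled at the same wavelengths (assumed already
# reconstructed from the PCA eigenvector fits described in the abstract).
wavelengths = np.arange(400, 701, 10)                                   # nm
mhr_reflectance = np.random.uniform(0.6, 0.9, (50, len(wavelengths)))   # hypothetical MHR reflectances
illuminant_true = 100.0 * np.exp(-((wavelengths - 560) / 150.0) ** 2)   # hypothetical illuminant SPD
mhr_combined = mhr_reflectance * illuminant_true                        # combined spectra = reflectance * SPD

# Estimate the illuminant by dividing the averaged combined spectrum of the
# maximum highlight region by its averaged surface reflectance.
est_illuminant = mhr_combined.mean(axis=0) / mhr_reflectance.mean(axis=0)
print(np.allclose(est_illuminant, illuminant_true))
```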
Color and Applications
Maximum color separation in illuminant estimation
Xiaoyun Jiang, Noboru Ohta
The phenomenon has long been noticed that colors are more vivid under white illuminants. A new illuminant estimation method, maximum color separation, is based on the assumption that the image gamut reaches its maximum under white illuminants. It was also found that when the image gamut reaches its maximum through the diagonal transformation in (r, g) space, the centroid of the gamut is located at (1/3, 1/3), which gives the method a strong connection to the most widely used gray world method. In this paper, this result is proven to hold when the representation of the gamut is extended from a triangle to a polygon. In addition, the basic assumption is modified for practical use. One modification is to perform maximum color separation at each lightness level, and another is the adjustment of the reference illuminant. The method is shown to be an effective illuminant estimation method through testing on real images.
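A small sketch, assuming a convex-hull representation of the image gamut in (r, g) chromaticity space, of how the gamut centroid mentioned above could be computed and compared against (1/3, 1/3); the diagonal-transform search that maximizes the gamut is not shown.

```python
import numpy as np
from scipy.spatial import ConvexHull

def rg_chromaticity(rgb):
    """Project RGB pixels to (r, g) chromaticity: r = R/(R+G+B), g = G/(R+G+B)."""
    s = rgb.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    return rgb[:, :2] / s

def gamut_centroid(rg):
    """Area centroid of the convex hull of the chromaticity points (shoelace formula)."""
    hull = ConvexHull(rg)
    pts = rg[hull.vertices]
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return np.array([cx, cy])

# Hypothetical usage: diagonal (channel-gain) transforms could be searched so that
# the hull area is maximised; under the stated assumption the centroid should then
# lie near the white point (1/3, 1/3).
rgb = np.random.rand(1000, 3)
print(gamut_centroid(rg_chromaticity(rgb)))
```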
Spectral Imaging
Selection of filters for multispectral acquisition using the filter vectors analysis method
One of the most important components in a multispectral acquisition system is the set of optical filters that allow acquisition in different bands of the visible light spectrum. Typically, either a rather large set of traditional optical filters or a tunable filter capable of many different configurations is employed. In both cases, minimising the actual number of filters used while keeping the error sufficiently low is important to reduce operational costs, acquisition time, and data storage space. In this paper, we introduce the Filter Vectors Analysis Method for choosing an optimal subset of filters or filter configurations among those available for a specific multispectral acquisition system. This method is based on a statistical analysis of the data resulting from an acquisition of a representative target, and tries to identify those filters that yield the most information in the given environmental conditions. We have compared our method with a simple method (ESF, for 'evenly spaced filters') that chooses filters so that their transmittance peak wavelengths are as evenly spaced as possible within the considered spectrum. The results of our experiments suggest that the Filter Vectors Analysis Method cannot bring substantial improvements over the ESF method, but also indicate that the ideas behind our method deserve further investigation.
A measurement-based spectral generator for colors
Haiying Xu, Qiqi Wang, Yinlong Sun
This paper proposes a new approach to constructing spectra for colors based on measured spectra. Reflectances of a set of 1,400 color samples are measured using a spectroradiometer. Given any color, its spectrum is generated in terms of a tightly enclosing tetrahedron formed by measured color points. A method is also proposed to enlarge the span region of the base spectra for spectral generation. Using our approach, the derived spectra correspond closely to reality, and the derivation works for almost any color.
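A hedged sketch of the enclosing-tetrahedron idea: the target color's barycentric coordinates with respect to the four measured vertices are used to blend the measured spectra. Finding the enclosing tetrahedron and the authors' span-enlargement step are assumed done elsewhere, and all data below are placeholders.

```python
import numpy as np

def barycentric_weights(target_xyz, vertex_xyz):
    """Barycentric coordinates of target_xyz (3,) w.r.t. a tetrahedron whose four
    vertices are the rows of vertex_xyz (4, 3). Weights sum to 1 and are all
    non-negative when the target lies inside the tetrahedron."""
    A = np.vstack([vertex_xyz.T, np.ones(4)])   # 4x4 system: sum(w_i * v_i) = target, sum(w_i) = 1
    b = np.append(target_xyz, 1.0)
    return np.linalg.solve(A, b)

def generate_spectrum(target_xyz, vertex_xyz, vertex_spectra):
    """Blend the measured spectra at the tetrahedron vertices with the same
    barycentric weights (a sketch of the interpolation idea, not the authors' exact method)."""
    w = barycentric_weights(target_xyz, vertex_xyz)
    return w @ vertex_spectra                    # (4,) @ (4, n_wavelengths)

# Hypothetical usage with placeholder data
vertex_xyz = np.array([[0.2, 0.2, 0.2], [0.8, 0.3, 0.2], [0.3, 0.8, 0.2], [0.3, 0.3, 0.8]])
vertex_spectra = np.random.rand(4, 31)           # measured reflectances, 400-700 nm in 10 nm steps
target = np.array([0.4, 0.4, 0.35])
print(generate_spectrum(target, vertex_xyz, vertex_spectra).shape)
```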
A subspace matching color filter design methodology for a multispectral imaging system
Attaching filters to an imaging system modifies the way it captures a color stimulus. In this paper, we develop a methodology for designing filters that improve the accuracy of the measurements for families of reflective surfaces. We derive the necessary and sufficient conditions that the sensor space of the system must obey in order to measure the spectral reflectance of the surfaces accurately. These conditions are applied in our filter design method. We also compare our results to those of Wolski's conventional filter design method, which attempts to make the system work colorimetrically. We show that our method produces filters that capture the spectral reflectance better given the same number of measurements.
Color Reproduction
Did Jan van Eyck build the first photocopier in 1432?
Recently it has been claimed that some early Renaissance painters used concave mirrors to project real inverted images onto their supports (paper, canvas, oak panel, ...) which they then traced or painted over, and that this was an important source of the increase in realism in European painting around 1420. Key exhibits adduced as evidence in support of this bold theory are a pair of portraits by Jan van Eyck of Cardinal Niccolò Albergati - a silverpoint study of 1431 and a larger oil of 1432. The contours in these two works bear a striking resemblance in form (after being appropriately scaled) and show at least one distinctive "relative shift" - evidence that has led proponents of the projection theory to claim that the oil was copied by means of an epidiascope or primitive opaque projector, the shift being attributed to an accidental "bump" during the copying process. We find several difficulties with this optical explanation: there are at least two relative shifts (one horizontal and one vertical), the latter being somewhat unlikely given the putative projection equipment and setup; these shifts are in the ratio of distances of nearly 1:2, a ratio that has no natural role in the projection explanation; any accidental "bump" would surely have been noticed by van Eyck and, if so desired, corrected by him; recent analysis shows physical evidence (tiny pinpricks, presumably from a mechanical compass) consistent with mechanical transfer that has no role in the optical explanation; and several other points. The fidelity of the copy as well as the direction and relative magnitudes of these shifts are, however, consistent with the use of a familiar grid construction and with mechanical transfer using drawing compass and ruler or Reductionszirkel. Further, there are prominent vertical Bruchkanten (fold or fracture) lines on the grounded paper in the silverpoint study whose orientation and separation have no natural role in an optical theory, but have a plausible role in other explanations. Our rebuttal to the projection theory for these works is supported by consideration of the lack of documentary evidence from both artists and scientists, of surviving optical devices, and of the artistic goals and established painting praxis in the early Renaissance.
Document image enhancement algorithm for digital color copier
Jonghyon Yi, Sunghyun Lim
A typical document image to be copied generally consists of various types of elements, for example, text, halftone images, line drawings, and continuous tone images printed on a background. To enhance the copy quality of such a compound document image, one or more image enhancement processes can be applied. However, it is not suitable to apply an enhancement method that is appropriate for one particular kind of document element to the entire document image. Instead, it is preferable to discriminate each document element and to apply adequate image enhancement methods to the respective document elements. The proposed document image enhancement method is comprised of a segmentation phase and an enhancement phase. In the segmentation phase, it classifies each pixel of the input document image into text, continuous tone image, halftone image, or background by using a state transition machine, pixel-based features, and run-based features. In the enhancement phase, it applies different enhancement methods to the respective document elements. In this way, dark gray text is converted into black text, and the edges of text and continuous tone images are emphasized, while halftone image regions are prevented from producing erroneous enhancement artifacts.
Workflow modeling in the graphic arts and printing industry
Over the last few years, a lot of effort has been spent on the standardization of the workflow in the graphic arts and printing industry. The main reasons for this standardization are two-fold: first, the need to represent all aspects of products, processes, and resources in a uniform, digital framework and, second, the need to have different systems communicate with each other without having to implement dedicated drivers or protocols. For many years, a number of organizations in the IT sector have been developing models and languages on the topic of workflow modeling. In addition to the more formal methods (such as, e.g., extended finite state machines, Petri nets, Markov chains, etc.) introduced a number of decades ago, more pragmatic methods have been proposed quite recently. We think here in particular of the activities of the Workflow Management Coalition, which resulted in an XML-based Process Definition Language. Although one might be tempted to use the already established standards in the graphic environment, one should be well aware of the complexity and uniqueness of the graphic arts workflow. In this paper, we will show that it is quite hard, though not impossible, to model the graphic arts workflow using the already established workflow systems. After a brief summary of the graphic arts workflow requirements, we will show why the traditional models are less suitable to use. It will turn out that one of the main reasons for the incompatibility is that the graphic arts workflow is primarily resource driven; this means that the activation of processes depends on the status of different incoming resources. The fact that processes can start running with only partial availability of the input resources is a further complication that calls for additional knowledge at the process level. In the second part of this paper, we will discuss in more detail the different software components that are available in any graphic enterprise. In the last part, we will discuss how these components can communicate efficiently in a standardized way within the JDF framework.
Printing consumers' digital files
Reiner Fageth, Wulf Schmidt-Sacht
Digital photography is becoming increasingly prevalent. The general public wants to preserve their memories using media other than digital files. Printing images is a popular alternative, but home printing is both time consuming and costly. This paper mainly addresses wholesale finishing using original photo paper for prints from 3.5" to 8", and other printing technologies for additional products. The goal in this industry is to print a high volume with optimal photo quality at a low price.
Evaluation of raster image compression in the context of large-format document processing
Cedric Sibade, Stephane Barizien, Mohamed Akil, et al.
We investigate the task of wide format still image manipulation and compression, within the framework of a document printing and copying data path. A typical document processing chain can benefit from the use of data compression, especially when it manages wide format color documents. In order to develop a new approach to using data compression for wide format printing systems, we present in this article the benchmarking process for compression applied to large documents. Standard algorithms from the imaging and document processing industry have been chosen for the compression of wide format color raster images. A database of image files has been created and classified for this purpose. The goal is to evaluate the performance in terms of data-flow reduction, along with the quality losses in the case of lossy compression. For a precise evaluation of the performance of these compression algorithms, we include time measurements of the compression and decompression processes alone. A comparison of the memory footprint of each compression and decompression algorithm also helps to assess their resource consumption.
Printing
Reproduction of colored images on substrates with varying chromaticity
Phil J. Green, Boris Oicherman
Printing images on coloured substrates is an interesting challenge, since observers are partially adapted to the colour of the substrate. Existing colour management methods are not appropriate for coloured papers, since they ignore this partial adaptation and also experience problems in compressing the coordinates of the original to the gamut that can be reproduced on the substrate. In the first two experiments, it was found that the degree of adaptation to a background colour was not dependent on the lightness or chroma of the background, and could thus be modelled with a constant degree of adaptation. In the second experiment, different methods of determining the coordinates to print on coloured substrates were compared. A method in which the coordinates for printing on white were adjusted by a partial adaptation model using a degree of adaptation of 0.66 gave better results than the other methods tested. The method works best on papers of low colour strength, since as the colour strength of the substrate increases, the available colour gamut is reduced and the performance of the different methods becomes more similar.
Color reproduction on inkjet printers and paper colorimetric properties
Jesus Fernandez-Reche, Joan Uroz, Jose A. Diaz, et al.
The goal of this work is to study the relationship between the colorimetric characteristics that identify a kind of paper and those that allow us to evaluate its color reproduction capabilities on inkjet printers. A set of 29 different commercial papers from several companies has been tested. The category of these papers ranged from photo quality to prepress proof and ordinary office papers, with matte, semi-matte, or glossy finishes. For each sample, we measured the reflectance, intrinsic reflectance, opacity, CIE whiteness index, and tint. All these measurements followed the procedures established in the international standards for paper and board. We then printed, on three different sheets of each paper, the color chart proposed in the international standard for color printer characterization, ANSI IT8.7/3. After calculating the CIELAB coordinates using the D50 standard illuminant, we studied the dynamic range, color gamut, and rendering linearity. The results show that the colorimetric properties and reproduction capabilities of the 29 commercial papers let us cluster them in accordance with their behavior. However, we found no systematic correlation between color reproduction and specific colorimetric properties of the types of paper: we should search for other physical (not just colorimetric) properties (for instance, gloss or ink absorption capacity).
Six-color separation for improving graininess in a middle tone region
Chang-Hwan Son, Yun-Tae Kim, Cheol-Hee Lee, et al.
This paper proposes an improved six-color separation method that reduces the graininess in middle tone regions based on the standard deviation of lightness and chrominance in S-CIELAB space. Graininess is regarded as the visual perception of the fluctuation of the lightness of light cyan and cyan, or light magenta and magenta. In conventional methods, granularity is extremely heuristic and inaccurate due to the use of a visual examination score. Accordingly, this paper proposes an objective method for calculating granularity for six-color separation. First, the lightness, redness-greenness, and yellowness-blueness in S-CIELAB space are calculated, reflecting the spatial-color sensitivity of the human eye, and the sum of the three standard deviations is normalized. Finally, after assigning the proposed granularity to a lookup table, the objective granularity is applied to six-color separation, thereby reducing the graininess in middle tone regions.
How scalable are gamut mapping algorithms?
The ability of gamut mapping algorithms to handle a wide range of relative gamut volumes was evaluated. Five gamut mapping algorithms were tested on reproduction media ranging from glossy, coated paper to newsprint. Original media were photographic transparency and print, and CRT. The psychophysical results indicate that the performance of gamut mapping algorithms is not greatly dependent on gamut volume of either original or reproduction media. Those algorithms which apply a linear scaling of lightness between original and reproduction are more consistent in their performance across different image types and reproduction media. The methods which performed best tend to be those that give more emphasis to preserving lightness over chroma.
Dot-for-dot proofing: how to zoom in to the dots without losing the big picture
In proofing, the accurate reproduction of prints is pursued. One of the major properties which determine the appearance of prints, is the halftoning. We refer to digital proofing methods as dot for dot proofing when they try to reproduce this property. We identify three basic requirements for a good dot for dot proof: colorimetric match, halftone match and print colorant match. They can be met simultaneously because they relate to different scales of resolution. The best starting point for a dot for dot method is the final rasterised separation data. Then, the proofing workflow maximally shares its processing components with the printing workflow, which helps minimising differences. Since print and proof generally differ in resolution and colorants, colour and resolution conversion are basic components of dot for dot methods. We propose a general flow in which an intermediate colour space is used. The fundamental issue is to accurately handle the colour information together with the halftone information. Normally, an image either represents the colour accurately by giving contone values, or the halftoning by giving high resolution binary data. Therefore, the choice of image representation in a processing flow becomes critical. Solutions can be found in hybrid representations containing information about both, or in dual representations.
Applications I
A modular procedure for automatic red eye correction in digital photos
The paper describes an algorithm for the automatic removal of "redeye" from digital photos. First, an adaptive color cast removal algorithm is applied to correct the color photo. This phase not only facilitates the subsequent processing steps, but also improves the overall appearance of the output image. A skin detector, based mainly on analysis of the chromatic distribution of the image, creates a probability map of skin-like regions. A multi-resolution neural network approach is then exploited to create an analogous probability map of candidate faces. These two distributions are then combined to identify the most probable facial regions in the image. Redeye is searched for within these regions, seeking areas with high "redness" and applying geometric constraints to limit the number of false hits. The redeye removal algorithm is then applied automatically to the red eyes identified. Candidate areas are suitably smoothed to avoid unnatural transitions between the corrected and original parts of the eyes. Experimental results from applying this procedure to a set of over 300 images are presented.
CMOS CFA database under varying illumination for benchmarking of face detection algorithms
Sara Bocchio, Fabrizio Beverina, Alberto Rosti, et al.
In this paper we present a database containing human face images for benchmarking face detection algorithms. Face detection is one of the most critical steps for applications such as recognition, identification, and surveillance. We developed the database systematically, choosing a set of twenty subjects of different gender and performing several acquisitions for each of them. All faces have different poses and expressions and various characteristics of haircut, beard, and accessories. Complex backgrounds and noise conditions reflect the variability of typical image capture in practical office applications. We also performed multi-face acquisition experiments. All the subjects are acquired under different illuminants, such as incandescent and halogen lamps, to reproduce realistic indoor environments. The database is a color one, because most face location algorithms are based on skin location, which depends on color identification. For every picture the database contains the full color image in bitmap format and the Color Filter Array (CFA) image with the classic Bayer pattern. We also use this database as a test for our Complementary Metal Oxide Semiconductor (CMOS) sensor, to introduce low-cost devices for digital color image acquisition and processing.
Underwater color constancy: enhancement of automatic live fish recognition
Majed Chambah, Dahbia Semani, Arnaud Renouf, et al.
We present in this paper some advances in color restoration of underwater images, especially with regard to the strong and non-uniform color cast which is typical of underwater images. The proposed color correction method is based on the ACE model, an unsupervised color equalization algorithm. ACE is a perceptual approach inspired by some adaptation mechanisms of the human visual system, in particular lightness constancy and color constancy. A perceptual approach presents many advantages: it is unsupervised and robust, and it has local filtering properties, which lead to more effective results. The restored images give better results when displayed or processed (fish segmentation and feature extraction). The presented preliminary results are satisfying and promising.
Applications II
Evaluation of color differences in nearly neutral Munsell chips by a 3CCD color camera
Edison Valencia, Maria S. Millan, Montse Corbalan
A method to evaluate the discrimination capability of a camera for measuring small color differences in the very pale color region is proposed. The measurements obtained by the camera are compared with those obtained by a reference instrument (spectrophotometer). Such comparisons indicate the reliability of the camera for this colorimetric purpose. The CIEDE2000 formula of the CIELAB system has been used for the estimation of color differences. The method is applied to an acquisition system composed of a commercial 3CCD color camera capturing under standard D65 illumination. The results for the very pale color set of Munsell matte chips show that the color differences obtained by the given camera are quite close to those measured by the spectrophotometer over the hue circle.
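As a hedged illustration of the comparison step, the sketch below computes CIEDE2000 differences between camera-derived and spectrophotometer CIELAB values using scikit-image; the chip values are placeholders, not data from the paper.

```python
import numpy as np
from skimage.color import deltaE_ciede2000

# Placeholder CIELAB measurements for a few near-neutral chips:
# one row per chip, columns are L*, a*, b*.
lab_spectro = np.array([[85.0, 1.2, 3.4], [82.5, -0.8, 2.1], [80.1, 0.5, -1.0]])
lab_camera  = np.array([[84.2, 1.5, 3.9], [82.9, -0.4, 1.8], [79.5, 0.9, -0.6]])

# CIEDE2000 colour differences between the camera and the reference instrument.
dE00 = deltaE_ciede2000(lab_spectro, lab_camera)
print(dE00, dE00.mean())
```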
Scanner show-through reduction using reflective optics
Document scanners are used to convert paper documents to digital format for document distribution or archiving. Scanners are also used in copiers and fax machines to convert documents to electrical signals in analog or digital format. Most document scanners use a white backing to avoid a black border or black hole in scanned images. One problem with a white backing is that show-through from the back side is visible for duplex-printed (two-sided) documents. This paper describes an optical method to eliminate show-through without reverting to a black border or black hole. The scanner cover is made into a saw-tooth shaped mirror surface. The surface is oriented so that it reflects the light from the scanner lamp to the scanner lens. When the scanner cover itself is scanned, as in the case of a hole in the paper, it reflects light (specular reflection) from the scanner lamp directly to the scanner lens. Because the scanner lamp is much brighter than the light reflected from the document, only a small portion of the reflected light is needed to produce the same output as scanning a piece of white paper. Radiometric calculation shows that this new approach can reduce the overall reflection from the scanner cover to 8% when scanning a document, and yet appear white when no document is between the cover and the scan bar. The show-through is greatly reduced due to this reduced overall reflection from the scanner cover.
Single-spectral image obtaining and processing of argon-helium mixture arc
Chenming Xu, Hongming Gao, Guangjun Zhang, et al.
In this work, the spectral distribution of an argon-helium mixture arc was acquired using an advanced spectral analysis system. Based on this spectral distribution, the HeI 667.8 nm line was selected to obtain a single-spectral image of helium. A narrow band-pass filter was placed in front of the CCD. The single-spectral image of helium was processed by Abel inversion, and the spatial distribution of the plasma emission coefficient was thus reconstructed. The conclusion can be drawn that helium is concentrated in the center of the argon-helium mixture arc.
Algorithms
Color-to-grayscale conversion to maintain discriminability
Monochrome devices that receive color imagery must perform a conversion from color to grayscale. The most common approach is to calculate the luminance signal from the three color signals. The problem with this approach is that the distinction between two colors of similar luminance (but different hue) is lost. This can be a significant problem when rendering colors within graphical objects such as pie charts and bar charts, which are often chosen for maximum discriminability. This paper proposes a method of converting color business graphics to grayscale in a manner that preserves discriminability. Colors are first sorted according to their original lightness values. They are then spaced equally in gray, or spaced according to their 3-D color difference from colors adjacent to them along the lightness dimension. This is most useful when maximum differentiability is desired in images containing a small number of colors, such as pie charts and bar graphs. Subjective experiments indicate that the proposed algorithms outperform standard color-to-grayscale conversions.
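A minimal sketch of the equal-spacing variant described above, assuming a small palette already expressed in CIELAB; the palette values are placeholders.

```python
import numpy as np

def discriminable_grays(lab_palette):
    """Sketch of the equal-spacing variant: sort the palette colours by their
    original L* values and map them to equally spaced gray levels (0..100),
    preserving the original lightness order."""
    lab_palette = np.asarray(lab_palette, dtype=float)
    order = np.argsort(lab_palette[:, 0])              # sort by L*
    grays = np.empty(len(lab_palette))
    grays[order] = np.linspace(0.0, 100.0, len(lab_palette))
    return grays                                        # one gray L* per input colour

# Hypothetical pie-chart palette (L*, a*, b*): similar lightness, different hue.
palette = [[55, 60, 30], [52, -45, 40], [57, 10, -55], [54, 0, 0]]
print(discriminable_grays(palette))
```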
Proposal for a new method to speed up local color correction algorithms
Carlo Gatta, Samuele Vacchi, Daniele Marini, et al.
There is a class of non-linear filtering algorithms for digital color enhancement characterized by a data-driven local effect and high computational cost. In this paper we propose a new method, called LLL for Local Linear LUT, to speed up these filters without losing their local effect. Usually LUT-based methods are global, while our approach uses the principles of LUT transformation in a local way. The main idea of the proposed method is to apply the algorithm to a small sub-sampled version of the original image and to employ a modified look-up table technique to maintain the local filtering effect of the original algorithm. In this way three functions, one for each chromatic channel, are created for each pixel of the original image. These functions filter the original full-size image in a very short time. We have tested LLL with two of these filters, a Brownian Retinex implementation (BR) and ACE (Automatic Color Equalization), whose computational cost is very high. The proposed method increases the speed of color filtering algorithms by reducing the number of pixels involved in the computation through sub-sampling of the original image. Results, comparisons, and conclusions are presented.
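A very simplified sketch of the speed-up idea, under the assumption that a per-pixel, per-channel gain is an acceptable stand-in for the per-pixel functions described above; the stand-in filter, image, and sub-sampling factor are all placeholders, not the authors' formulation.

```python
import numpy as np

def lll_speedup(image, expensive_filter, factor=4):
    """Simplified Local-Linear-LUT-style speed-up: run the expensive local filter
    only on a sub-sampled image, derive a per-pixel, per-channel gain between the
    input and the filtered output, upsample the gains (nearest neighbour), and
    apply them to the full-resolution image."""
    small = image[::factor, ::factor, :].astype(float)
    filtered_small = expensive_filter(small)
    eps = 1e-6
    gain = (filtered_small + eps) / (small + eps)
    gain_full = np.repeat(np.repeat(gain, factor, axis=0), factor, axis=1)
    gain_full = gain_full[:image.shape[0], :image.shape[1], :]
    return np.clip(image * gain_full, 0, 255)

# Hypothetical usage with a stand-in "expensive" local filter
def toy_filter(img):
    return 255.0 * (img / 255.0) ** 0.7                  # placeholder enhancement

image = np.random.randint(0, 256, (256, 256, 3)).astype(float)
print(lll_speedup(image, toy_filter).shape)
```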
Estimation of a color reflection model using range image data
A method is proposed for estimating the various parameters of a reflection model using both the image data and the range data of an object surface. A unified measuring system, combining a laser range finder and a multi-band camera system, is constructed for acquiring the 3D shape data and the spectral reflectance data of the object surface. First, the diffuse reflection component and the specular reflection component at every pixel are obtained using the images observed under two illumination directions and the surface normal vectors calculated from the range data. The spectral reflectance is then estimated from the diffuse reflection component. Next, the extracted specular reflection component is fitted to the specular function of the Torrance-Sparrow model. The performance of the proposed method is examined in detail in an experiment using a painted object. We show the estimation results for (1) spectral reflectance, (2) surface roughness, and (3) diffuse and specular intensities. The overall feasibility of the proposed method is confirmed through computer graphics images created using the estimated parameters.
Reflectance function estimation from tristimulus values
Information about the spectral reflectance of a color surface is useful in many applications. Assuming that reflectance functions can be adequately approximated by a linear combination of a small number of basis functions, and exploiting genetic algorithms, we address here the problem of synthesizing a spectral reflectance function given the standard CIE 1931 tristimulus values. Different sets of basis functions have been tested, and different data sets have been used for benchmarking.
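The linear-basis part of the problem can be illustrated with a small sketch: with exactly three basis functions, the weights that reproduce a given XYZ follow from a 3x3 linear system. The genetic-algorithm search over basis sets described above is not shown, and all spectral data here are synthetic placeholders.

```python
import numpy as np

# Hypothetical data: colour-matching functions, an illuminant SPD, and three
# reflectance basis functions, all sampled at the same wavelengths.
n = 31                                        # e.g. 400-700 nm in 10 nm steps
cmf = np.abs(np.random.rand(n, 3))            # placeholder colour-matching functions
illum = np.ones(n)                            # placeholder equal-energy illuminant
basis = np.abs(np.random.rand(n, 3))          # placeholder basis functions

# Linear model: reflectance = basis @ w, and XYZ = cmf.T @ diag(illum) @ reflectance.
M = cmf.T @ (illum[:, None] * basis)          # 3x3 matrix mapping weights to XYZ
target_xyz = np.array([40.0, 45.0, 30.0])
w = np.linalg.solve(M, target_xyz)            # weights of the three basis functions
reflectance = basis @ w                       # synthesized spectral reflectance
print(np.allclose(cmf.T @ (illum * reflectance), target_xyz))
```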
Characterization and Color Management
An interactive perception-based model for characterization of display devices
Attila Neumann, Alessandro Artusi, Georg Zotti, et al.
This paper describes a simple to use, yet accurate way to obtain the Tone Reproduction Curve (TRC) of display devices without the use of a measurement device. Human vision is used to compare a series of dithered color patches against interactively changeable homogeneously colored display areas. Results comparing this method with spectrophotometer measurements are given for three monitors.
Combine 1D and 3D color calibration methods for ensuring consistent color reproduction
Printer color calibration is a crucial step for ensuring consistent color reproduction. In this paper, we present a color calibration system that can not only ensure color consistency for the same printer at different times, but also ensure color consistency across different printers of the same model. We analyze the most significant sources of color variation in an inkjet printing system and show that some factors produce only luminance (optical density) variation, while other factors produce both luminance and chrominance (hue and saturation) variations. Two corresponding color calibration methods are proposed to compensate for these variations: one is based on 1D linearization and is used to compensate for the luminance variation; the other is based on a 3D search in an existing color conversion table and is specifically designed to compensate for the chrominance variation.
Two-dimensional transforms for device color calibration
Color device calibration is traditionally performed using one-dimensional per-channel tone-response corrections (TRCs). While one-dimensional TRCs are attractive in view of their low implementation complexity and efficient real-time processing of color images, their use severely restricts the degree of control that can be exercised along various device axes. A typical example is that 1-D TRCs in a printer can be used either to ensure gray balance along the C = M = Y axis or to provide a linear response in ΔE units along each of the individual C, M, and Y axes, but not both. This paper proposes a novel two-dimensional calibration architecture that enables significantly more control over the device color gamut with a modest increase in implementation cost. Results show significant improvement in calibration accuracy and stability when compared to traditional 1-D calibration.
A Fast linking approach for CMYK to CMYK conversion preserving black separation in ICC color management system
In the linking step of the standard ICC color management workflow for CMYK to CMYK conversion, a CMM takes an AToBn tag (n = 0, 1, or 2) from a source ICC profile to convert a color from the source color space to the PCS (profile connection space), and then takes a BToAn tag from the destination ICC profile to convert the color from the PCS to the destination color space. This approach may give satisfactory results perceptually or colorimetrically. However, it does not preserve the K channel in CMYK to CMYK conversion, which is often required in the graphic arts market. The problem is that the structure of a BToAn tag is designed to convert colors from the PCS to a device color space, ignoring the K values from the source color space. Different approaches have been developed to control K in CMYK to CMYK printing, yet none of them fits well into the "Profile - PCS - Profile" model of the ICC color management system. A traditional approach is to transform the source CMYK to the destination CMYK using 1-D TRC curves and GCR/UCR tables. This method is so simple that it cannot accurately transform colors perceptually or colorimetrically. Another method is to build a 4-D CMYK to CMYK closed-loop lookup table (LUT) (or a DeviceLink ICC profile) for the color transformation. However, this approach does not fit into open color management workflows, since it ties the source and destination color spaces together in the color characterization step. A specialized CMM may preserve K for a limited number of colors by mapping those CMYK colors to some carefully chosen PCS colors in both the AToBn tag and the BToAn tag. A more complete solution is to move to smart linking, in which gamut mapping is performed during real-time linking in a CMM. This method seems to solve all the problems of CMYK to CMYK conversion. However, it introduces new problems: 1) gamut mapping at real-time linking is often unacceptably slow; 2) the gamut mapping may not be optimized or may be unreliable; 3) manual adjustment for building high-quality maps does not fit into the smart CMM workflow. A new approach is described in this paper to solve these problems. Instead of using a BToAn tag from the destination profile for the color transformation, a new tag is created to map colors in the PCS (L*a*b* or XYZ) with different K values to different CMY values. A set of 3-D LUTs for different K values is created for the conversion from the PCS to CMY, and 1-D LUTs are created for the conversion from luminance to K and to guide a CMM in performing the interpolation from KPCS (K plus PCS) to CMYK. The gamut mapping is performed in the step that creates the profile, thus avoiding real-time gamut mapping in the CMM. With this approach, the black channel is preserved, the "Profile - PCS - Profile" approach remains valid, and gamut mapping is not performed during linking in the CMM. Therefore, gamut mapping can be manually adjusted for high-quality color mapping, the linking is almost as easy and fast as standard linking, and the black channel is preserved.
Process control and color management implementation
ICC color management technology can be adopted in a number of workflows. Unfortunately, not all color-management systems, as implemented, achieve equal success in the real world. There are a number of factors that limit color management performance. This paper reviews factors such as end users' expectations and color management limitations. However, the objective of this paper is to discuss device limitations and how to use statistics and process control methodology to assess and reduce these variations. Test targets, color measurement, and press sheet sampling are utilized to assess the spatial uniformity as well as the temporal consistency of hard copy output devices. By comparing the temporal variation of the process against specifications, the process capability indices Cp and Cpk are used to analyze run-to-run color repeatability. To enhance color management performance, a demerit system based on amplitude responses was used to determine the best press sheet for the printer profiling application. Color management performance ultimately depends on our ability to minimize and control spatial, temporal, and run-to-run variations.
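For reference, the standard process capability indices mentioned above can be computed as in the sketch below; the sample data and specification limits are made up.

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Standard process capability indices: Cp compares the specification width
    to the process spread (6 sigma); Cpk additionally penalises an off-centre mean."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

# Hypothetical run-to-run colour differences (e.g. delta E to a reference print)
# with made-up specification limits.
samples = np.random.normal(loc=2.0, scale=0.4, size=50)
print(process_capability(samples, lsl=0.0, usl=4.0))
```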
Gray tracking correction for TFT-LCDs
The consistency of the white point across input gray levels is referred to in the world of display technology as gray tracking. Gray tracking is an issue for Twisted Nematic (TN) TFT-LCD screens, which typically show a bluish color shift of the white point when the gray level decreases. This paper discusses some causes of this color shift in TN displays and proposes a method for correcting it by using the existing video card look-up tables (VLUT) in the graphic controller driving the panel. The correction process performs the gamma correction and the gray tracking correction using a single set of three 1D look-up tables. The gray tracking correction uses the luminance and chrominance information of the R, G, B channels. For a target gamma and white point, and for each VLUT entry corresponding to a certain luminance and chrominance (the target color), the method computes the output RGB values of the VLUT such that the resulting gray has the minimum color difference to the target color. The method proves to be effective in removing the color cast on TN TFT-LCD screens. The method differs from a previously published paper by the same author in that it minimizes the loss of luminance due to the tuning of the chrominance.
CRT calibration techniques for better accuracy including low-luminance colors
We present a new CRT characterization technique that improves the accuracy of the characterization. This is achieved by optimizing the linear transformation matrix of the two-stage model in the uniform CIE L*a*b* space. We also introduce an approach to improve the characterization performance for low-luminance colors. These methods are used to calibrate two CRT monitors, and better accuracies are obtained compared to existing methods, especially for low-luminance colors. We present a systematic way to adjust the white point of the monitor using hardware settings. This allows us to adjust the monitor white accurately without losing any digital counts, as would be the case if a software approach were used. We propose a novel search algorithm to achieve very high accuracy calibration for experiments where a limited number of colors has to be displayed. We apply this search algorithm to a monochrome image display application and verify the performance of our method.
Error Diffusion
Some funny things about error diffusion
Error diffusion has been a topic of study for many years, and a large number of improvements have been made to the original form as proposed by Floyd and Steinberg. This paper will show some lesser-known behaviors and modifications of the algorithm, along with some surprising or perplexing results.
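For readers who want to reproduce the baseline behaviour the paper's modifications start from, a minimal sketch of classic Floyd-Steinberg error diffusion is given below (grayscale, fixed threshold, standard 7/16, 3/16, 5/16, 1/16 weights).

```python
import numpy as np

def floyd_steinberg(gray):
    """Minimal Floyd-Steinberg error diffusion for a grayscale image in [0, 255].
    The quantization error at each pixel is distributed to four causal neighbours
    with the classic 7/16, 3/16, 5/16, 1/16 weights."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

ramp = np.tile(np.linspace(0, 255, 128), (64, 1))   # a gray ramp exposes the classic textures
halftone = floyd_steinberg(ramp)
```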
A channel-dependent color error diffusion method based on distance constraint
Ki-Min Kang, Eul-Hwan Lee
It is well known that a homogeneous dot distribution is one of the important features affecting image quality for the B/W error diffusion method. However, when the B/W error diffusion method is applied independently to each color channel, homogeneity between channels does not appear in the color binary image. Non-homogeneity in the color binary image often generates overlap of dots between channels. In particular, the overlap of cyan dots and magenta dots is noticeable to human eyes in color highlights. In order to prevent the overlap of cyan and magenta dots, this paper modulates the threshold value so that the distance between dots in the cyan channel and the magenta channel equals the principal distance. As a result, cyan dots and magenta dots are homogeneously distributed and the overlap of cyan and magenta dots can be prevented. The threshold value is increased or decreased according to the difference between the principal distance and the minimum distance. In color highlights, the principal distance is adjusted to satisfy both the homogeneity between channels and the homogeneity within each channel. For the calculation of the minimum distance, this paper describes the 2D-MPOA (two-dimensional minor pixel offset array), which can calculate the minimum distance efficiently.
Channel-dependent error diffusion algorithm for dot-off-dot printing
The simplest way of halftoning color images using error diffusion is to apply a scalar error diffusion technique to each color channel independently. When processed independently for the C, M, Y, and K channels, cyan and magenta dots are often printed at the same pixel location. Such overlaps between cyan and magenta appear as color noise in highlight areas. Thus, it is desirable to minimize dot-on-dot printing of cyan and magenta, especially in highlight areas. To further improve image quality, the combined dot distribution of cyan and magenta should be homogeneous, and the dot distribution of each individual color channel should also be even. In this paper, tone-dependent error diffusion kernels and a serpentine processing direction are employed for a homogeneous dot distribution in each individual color channel. A decision rule based on the updated values of cyan and magenta is applied to achieve dot-off-dot printing. A channel-dependent threshold modulation is proposed to improve the combined distribution of cyan and magenta. A criterion to measure the homogeneity of dot distributions is also proposed.
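A hedged sketch of the dot-off-dot decision-rule idea only: when the updated cyan and magenta values both call for a dot at the same pixel, the channel with the larger updated value fires and the other defers, its diffused error recovering the dot nearby. The tone-dependent kernels, serpentine scanning, and threshold modulation from the paper are not included; this is plain Floyd-Steinberg with placeholder weights.

```python
import numpy as np

def dot_off_dot_cm(cyan, magenta, threshold=128.0):
    """Simplified dot-off-dot decision rule for cyan/magenta error diffusion:
    when both updated channel values exceed the threshold at the same pixel,
    only the channel with the larger updated value prints a dot."""
    c, m = cyan.astype(float).copy(), magenta.astype(float).copy()
    h, w = c.shape
    out_c, out_m = np.zeros((h, w)), np.zeros((h, w))
    weights = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]
    for y in range(h):
        for x in range(w):
            want_c, want_m = c[y, x] >= threshold, m[y, x] >= threshold
            if want_c and want_m:                 # conflict: keep only the stronger dot
                if c[y, x] >= m[y, x]:
                    want_m = False
                else:
                    want_c = False
            out_c[y, x], out_m[y, x] = 255.0 * want_c, 255.0 * want_m
            err_c, err_m = c[y, x] - out_c[y, x], m[y, x] - out_m[y, x]
            for dy, dx, wt in weights:
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    c[yy, xx] += err_c * wt
                    m[yy, xx] += err_m * wt
    return out_c, out_m
```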
Fast multilevel vector error diffusion based on adaptive primary color selection
Tae-Yong Park, Yang-Ho Cho, Myong-Young Lee, et al.
This paper proposes a multi-level vector error diffusion method based on adaptive primary color selection for fast and accurate color reproduction. Conventional bi-level vector error diffusion uses eight primary colors (R, G, B, C, M, Y, W, K). Multi-level vector error diffusion, however, uses more primary colors (this paper uses 64 primary colors) depending on the printing device, thereby significantly increasing the time complexity due to the additional computation. Moreover, the output image can also include color artifacts in which a primary color becomes noticeable, under the influence of large quantization errors and the increased number of primary colors. Accordingly, to reduce these problems, we propose a quantization process that selects candidate primaries among the 64 primary colors using the lightness difference. First, we classify the 64 primary colors into 60 chromatic colors and 4 achromatic colors, and then exclude from the set of 60 chromatic primaries those with a large lightness difference from the input color. Using both the 4 achromatic primary colors and the candidate primary colors, we calculate the vector norm to select the output color. This paper also determines an optimal threshold experimentally to remove smear artifacts resulting from the diffusion of large quantization errors. As a result, this paper achieves fast multi-level vector error diffusion by avoiding additional computation and produces visually pleasing halftone patterns by excluding noticeable primary colors.
Input-level-dependent approach to color error diffusion
Conventional grayscale error diffusion halftoning produces worms and other objectionable artifacts. Tone dependent error diffusion (Li and Allebach) reduces these artifacts by controlling the diffusion of quantization errors based on the input graylevel. Li and Allebach optimize error filter weights and thresholds for each (input) graylevel based on a human visual system model. This paper extends tone dependent error diffusion to color. In color error diffusion, what color to render becomes a major concern in addition to finding optimal dot patterns. We present a visually optimum design approach for input level (tone) dependent error filters (for each color plane). The resulting halftones reduce traditional error diffusion artifacts and achieve greater accuracy in color rendition.
Boundary stitching algorithm for parallel implementation of error diffusion
Zhen He, Tichiun Chang, Jan P. Allebach, et al.
Error diffusion [1-3] is a popular halftoning algorithm extensively used in digital printing. It renders different tone levels by adaptively modulating local dot density. Moreover, because of its random dot placement nature, error diffusion is free of moiré artifacts when rendering an image with strong periodic components. This makes it very attractive for rendering scanned images, which often have strong embedded periodic screen frequencies. However, one potential drawback of error diffusion for high-speed printing applications is its computational load. Unlike screening algorithms [4, 5], which only require one threshold operation per pixel, error diffusion must also compute and diffuse the filtered pixel errors to the neighboring pixels. In practice, it may be desirable to implement error diffusion in parallel to speed up the computation. One scenario is shown in Figure 1. The input image is first equally split into four stripes. Each image stripe is then fed to a DSP chip programmed to run error diffusion. Each DSP chip runs error diffusion independently without synchronization or communication between processors. The halftone outputs from the four DSP chips are finally merged to form the whole halftone image. While this can speed up the algorithm by a factor of four, one potential problem with this parallel implementation is that dot clusters or holes can be very visible along the stripe boundaries in the merged halftone image. This is because the pixel error cannot be diffused across the stripe boundary, so the "blue noise" characteristics of the halftone texture are destroyed near the stripe boundaries. These artifacts are most visible in midtone areas, and somewhat less visible in the shadow areas. In highlight areas the dots are sparse, so these boundary artifacts are much less visible.
Implementation: Models, Architectures, and Algorithms
Spectral prediction and dot surface estimation models for halftone prints
Roger D. Hersch, Fabien Collaud, Frederique Crete, et al.
We propose a new spectral prediction model as well as new approaches for modeling the ink spreading which occurs when printing ink layer superpositions. The spectral prediction model enhances the classical Clapper-Yule model by taking into account the fact that proportionally more incident light through a given colorant surface is reflected back onto the same colorant surface than onto other colorant surfaces. This is expressed by a weighted mean between a component specifying the part of the incident light which exits through the same colorant as the colorant from which it enters (Saunderson-corrected Neugebauer component) and a component specifying the part of the incident light whose emerging light components exit from all colorants, with a probability of exiting from a given colorant equal to that colorant's surface coverage (Clapper-Yule component). We also propose two models for taking into account ink spreading, a phenomenon which occurs when printing an ink halftone in superposition with one or several solid inks. Besides the physical dot gain present within a single-ink halftone print, we consider in the first model the ink spreading which occurs when an ink halftone is printed on top of one or two solid inks. In the second, more advanced model, we generalize this concept to ink halftones printed on top of or below solid inks. For both ink spreading models, we formulate systems of equations which allow the computation of effective ink coverages as a combination of the individual ink coverages occurring in the different superposition cases. The new spectral prediction model combined with advanced ink spreading yields excellent spectral predictions for clustered-dot color halftone prints, both in the case of offset (75 to 150 lpi) and in the case of thermal transfer printers (50 to 75 lpi).
Design of high-performance coprocessor for color error diffusion
In this paper, we present the architecture of a color halftoning coprocessor. The design is based on a software/hardware co-design approach in which the flexibility and adaptability of a programmable processor and the high performance and low power of ASIC design are utilized. We employ the concurrency and locality concepts of computer architecture to address the computationally intensive and data-intensive nature of the color halftoning algorithm. Both instruction parallelism and data parallelism are exploited to speed up performance. In addition, fine-grain and middle-grain instruction-level parallelism (ILP) is utilized to accelerate the computation in the color error diffusion halftoning process.
Halftoning processing on a JPEG-compressed image
Cedric Sibade, Stephane Barizien, Mohamed Akil, et al.
Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g., a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it allows the image to be de-noised and its contours to be enhanced.
Visual cryptography via halftoning
Gonzalo R. Arce, Zhi Zhou, Giovanni Di Crescenzo
Visual cryptography encodes a secret binary image SI into n shares of random binary patterns. The secret image can be visually decoded by superimposing a qualified subset of shares, but no secret information can be obtained from the superposition of a forbidden subset. Such a scheme is mathematically secure; however, the binary patterns of the n shares have no visual meaning, raising the suspicion of data encryption. In order to achieve a higher level of security, halftone visual cryptography was proposed to encode a secret binary image into n halftone shares (images) carrying significant visual information. The method is further extended in this paper. Based on blue-noise dithering principles, a global optimization method is proposed to improve the overall visual quality of all n halftone shares. Thus, adversaries are less likely to suspect the presence of hidden cryptographic information.
Texture Appearance and Hybrid/Adaptive Methods
Resolution-dependence of perceived contrast of textures
Spurred by technological improvements, displays (as well as printers) are increasingly available in a wide range of resolutions. Increased resolution improves perceptual quality in at least two different ways: reducing the perceived contrast of undesirable artifacts (such as halftoning or dithering textures), and increasing the perceived contrast of desirable image features (particularly when rendering text and high precision graphics). Much of the past literature addresses questions of how to optimize one or both of these for a given resolution, but there is little guidance on tradeoffs when the resolution itself is variable. In this paper, we present an analytic framework for quantifying how the perceived visual contrast of textures changes with resolution, and a simple, tractable model that accurately predicts visual contrast of grayscale-rendered text at different resolutions. These contrast metrics provide a solid basis for evaluating the effectiveness of grid-fitting and similar techniques for perceptually tuned grayscale font rendering, and can also be a useful tool for evaluating engineering tradeoffs such as choosing an optimum resolution relative to cost, speed, or bandwidth constraints.
Generating stochastic dispersed and periodic clustered textures using a composite hybrid screen
In electrophotographic printing, a periodic clustered-dot halftone pattern is preferred for a smooth and stable result. In addition, the screen frequency should be high enough to minimize the visibility of the halftone textures and to ensure good detail rendition. However, at these frequencies, the halftone cell may contain too few pixels to provide a sufficient number of distinct gray levels. This will result in contouring and posterization. The traditional solution is to grow the clusters asynchronously within a repeating block of clusters known as a supercell. The growth of each individual cluster is governed by a microscreen. The order in which the clusters grow within the supercell is determined by a macroscreen. Typically, the macroscreen is a recursive pattern due to Bayer. In highlights and shadows, this ordering results in visible artifacts. Replacing the Bayer screen by a stochastic macroscreen eliminates these artifacts, but results in new artifacts. In this paper, we propose a new composite screen architecture that employs multiple microscreens and multiple macroscreens in the highlights and shadows. These screens are jointly designed by using the direct binary search (DBS) algorithm.
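As background for the screening step that any such threshold array feeds, the sketch below shows basic ordered-dither screening with a tiled threshold array; the 4x4 clustered-dot array is a made-up example, and the paper's supercell, microscreen/macroscreen ordering, and DBS design process are not reproduced.

```python
import numpy as np

def screen(image, threshold_array):
    """Basic ordered-dither screening: tile the threshold array over the image
    and turn a pixel on wherever its value exceeds the local threshold. The
    design of the threshold array (e.g. a supercell of clustered dots grown by
    micro/macroscreens) determines the halftone texture."""
    h, w = image.shape
    th, tw = threshold_array.shape
    reps = (int(np.ceil(h / th)), int(np.ceil(w / tw)))
    tiled = np.tile(threshold_array, reps)[:h, :w]
    return (image > tiled).astype(np.uint8)

# Hypothetical 4x4 clustered-dot growth order, scaled to threshold levels 0-255.
cluster4 = (np.array([[12,  4,  5, 13],
                      [11,  0,  1,  6],
                      [10,  3,  2,  7],
                      [15,  9,  8, 14]]) + 0.5) * (255.0 / 16)
gray_ramp = np.tile(np.linspace(0, 255, 256), (64, 1))
binary = screen(gray_ramp, cluster4)
```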
An adaptive halftone algorithm for composite documents
Jincheng Huang, Anoop K. Bhattacharjya
A composite document such as a scanned magazine page usually contains a variety of content, such as text and halftoned images. Different kinds of content on a page have different spatial and color characteristics that are best rendered by different halftone techniques. Applying multiple halftone techniques directly to a composite document can lead to disturbing boundary artifacts caused by switching between halftone algorithms. In this paper, we present an adaptive halftone algorithm for rendering composite documents, such as scanned magazine pages, on color laser printers. The method uses a combination of error diffusion and clustered-dot screening to generate an edge-preserving halftone that also has the desirable properties of low noise and minimal halftoning artifacts in smooth image regions. In the presented method, narrow boundaries around image edge regions are treated as content-transition regions, in which clustered-dot screening and error diffusion operate simultaneously. The halftone output within a transition region is controlled by the distance of the pixel to the boundary of an identified image edge. In our simulations, the proposed algorithm produces visually superior halftones.
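A minimal sketch of one possible hybrid along these lines is given below. It blends the error-diffusion threshold with a clustered-dot screen threshold using a weight derived from the distance to detected edges; the edge detector, band width, and blending rule are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np
from scipy import ndimage

def hybrid_halftone(img, screen, band=4, edge_thresh=0.1):
    """img in [0,1]; screen holds clustered-dot thresholds in [0,1]."""
    g = ndimage.gaussian_gradient_magnitude(img, sigma=1.0)
    edges = g > edge_thresh
    dist = ndimage.distance_transform_edt(~edges)
    w = np.clip(1.0 - dist / band, 0.0, 1.0)      # 1 near edges, 0 in smooth areas

    h, wd = img.shape
    sh, sw = screen.shape
    out = np.zeros_like(img)
    work = img.copy()
    for y in range(h):
        for x in range(wd):
            # Blend the error-diffusion threshold (0.5) with the screen threshold,
            # so transition regions move gradually between the two behaviors.
            t = w[y, x] * 0.5 + (1.0 - w[y, x]) * screen[y % sh, x % sw]
            out[y, x] = 1.0 if work[y, x] >= t else 0.0
            err = work[y, x] - out[y, x]
            # Floyd-Steinberg error distribution
            if x + 1 < wd:               work[y, x+1]   += err * 7/16
            if y + 1 < h:
                if x > 0:                work[y+1, x-1] += err * 3/16
                work[y+1, x] += err * 5/16
                if x + 1 < wd:           work[y+1, x+1] += err * 1/16
    return out
```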
Screen Design
The 30-year evolution of digital halftoning from the viewpoint of a participant
The 30-year history of the development of digital halftone technology within one company (Xerox) is followed from the viewpoint of the author’s involvement and participation. The history has an emphasis on the evolution in complexity from very simple threshold arrays through multi-center dots, high-addressability writing, non-orthogonal screens and other methods for avoidance of color moire. The paper is not meant to address all forms of digital halftoning, but concentrates on the requirements of laser-scanned xerography and clustered dots. Graphic examples of various halftone dot-growth sequences are provided. Key advances and lessons in the development of halftoning are summarized.
G/M dither or color dither from monochrome dither matrices
In this paper we propose a simple method to obtain a Cartesian color dither-screen from a given monochrome dither-screen. The monochrome dot placement pattern (e.g. cluster or scatter), as well as its frequency domain features are maintained, while optimizing for color quality. Color quality is measured against the Minimal Brightness Variation Criterion.
Stochastic screens robust to misregistration in multipass printing
A new technique for the design of stochastic screens is proposed that produces screens robust against misregistration in multipass printing. Conventional stochastic screens are designed through an optimization process that minimizes low-frequency structure in halftone images under the assumption that pixel placement is accurate. In inkjet printing, however, a page is often printed in multiple passes to allow for better drying of inks and to minimize the appearance of a head signature. Any potential misregistration between the passes is typically not accounted for in the conventional stochastic screen design process. Misregistration between the passes can therefore cause significantly increased graininess (low-frequency structure) in printed images produced with stochastic screens, even though the corresponding electronic bitmaps are free from low-frequency structure. In this paper, we propose modifications to the stochastic screen design process that take two-pass printing into account and produce halftones that are robust to inter-pass misregistration errors. This allows relaxed tolerance and alignment requirements in manufacturing, which translates to lower cost. The proposed technique works by modifying the screen design process to ensure that a majority of the minority pixels are concentrated in a single pass, which provides improved robustness to misregistration between the passes. Experimental results demonstrate that the proposed design technique performs significantly better than conventional stochastic screens in the presence of misregistration errors.
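A hedged sketch of how such robustness might be evaluated is shown below: the halftone is split into two pass bitmaps, one pass is shifted to emulate misregistration, and the low-frequency structure of the recombined print is measured. The pass assignment (even/odd rows) and the graininess proxy are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy import ndimage

def misregistration_graininess(halftone, shift=(1, 0), sigma=2.0):
    """halftone: binary array; shift: (dy, dx) misregistration of pass 2 in pixels."""
    rows = np.arange(halftone.shape[0])[:, None]
    pass1 = np.where(rows % 2 == 0, halftone, 0)   # assume even rows print in pass 1
    pass2 = np.where(rows % 2 == 1, halftone, 0)   # and odd rows in pass 2
    pass2 = np.roll(pass2, shift, axis=(0, 1))     # apply the misregistration
    printed = np.maximum(pass1, pass2)
    lowpass = ndimage.gaussian_filter(printed.astype(float), sigma)
    return lowpass.std()                           # proxy for low-frequency structure
```

Comparing this statistic at zero shift and at small nonzero shifts indicates how much graininess a given screen design would gain from inter-pass alignment errors.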
AM-FM screen design using donut filters
In this paper we introduce a class of linear filters called 'donut filters' for the design of halftone screens that enable robust printing with stochastic clustered dots. The donut filter approach is a simple, yet efficient method to produce pleasing stochastic clustered-dot halftone patterns (a.k.a. AM-FM halftones) suitable for systems with poor isolated dot reproduction and/or significant dot gain. The radial profile of a donut filter resembles the radial cross section of a donut shape, with low impulse response at the center that rises to a peak and drops off rapidly as the pixel distance from the center increases. A simple extension for the joint design of any number of colorant screens is given. This extension makes use of several optimal linear filters that may be treated as a single donut multi-filter having matrix-valued coefficients. A key contribution of this paper is the design of the parametric donut filters to be used at each graylevel. We show that, given a desired spatial pair-correlation profile (a.k.a. spatial halftone statistics), optimum donut filters may be generated such that the donut filter based screen design produces patterns possessing the desired profile in the maximum-likelihood sense. In fact, 'optimal green-noise' halftone screens having the spatial statistics described by Lau, Arce and Gallagher may be produced as a special case of our design. We will also demonstrate donut filter designs that do not use an 'optimum green-noise' target profile in the design and yet produce excellent stochastic clustered-dot halftone screens.
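As a rough illustration of the kernel shape being described (not the paper's parametric family or its per-graylevel optimization), the following sketch builds a radially symmetric kernel with a suppressed center, a peak at a preferred cluster spacing, and rapid decay beyond it.

```python
import numpy as np

def donut_filter(size=15, r0=3.0, sigma=1.2):
    """Illustrative donut-shaped kernel: near-zero at the center, ridge at radius r0."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    r = np.hypot(x, y)
    kernel = np.exp(-((r - r0) ** 2) / (2 * sigma ** 2))   # ridge at radius r0
    kernel[size // 2, size // 2] = 0.0                      # suppress the center
    return kernel / kernel.sum()

# Used in place of the usual Gaussian in a void-and-cluster style screen design
# loop, such a kernel encourages minority pixels to aggregate into clusters spaced
# roughly r0 apart, yielding stochastic clustered-dot (green-noise-like) textures.
```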
Poster Session
A new halftoning technique to eliminate ambiguous pixels for stable printing
Shinji Sasahara, Tetsuo Asano
All printing processes basically involve some instabilities of an analog nature. To overcome such instabilities, in this paper we have developed a unique halftoning technique for stable printing. Based on the characteristics of xerography, a combination of a Gaussian filter and a sigmoid nonlinear function is used to calculate the probability of toner transfer in print. Using this nonlinear printer model, we can obtain a simulated printed image for xerography. Halftone images are formed under this model so that the perceptual error with respect to the original gray-scale image is small. To achieve this, ambiguous pixels, i.e., those whose transfer probabilities fall within a mid band of our nonlinear printer model, are eliminated as much as possible by an iterative improvement method. As a result, we can obtain an ideal halftone screen without anisotropy for a xerographic printer.
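A minimal sketch of the kind of printer model the abstract describes is given below, assuming a Gaussian blur followed by a sigmoid that maps blurred values to toner-transfer probabilities; the parameter values and the width of the 'ambiguous' band are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def transfer_probability(halftone, sigma=1.0, gain=10.0, offset=0.5):
    """Blur the binary halftone, then map it through a sigmoid nonlinearity."""
    blurred = ndimage.gaussian_filter(halftone.astype(float), sigma)
    return 1.0 / (1.0 + np.exp(-gain * (blurred - offset)))

def ambiguous_pixels(prob, lo=0.3, hi=0.7):
    """Pixels whose transfer probability falls in the unstable mid band."""
    return (prob > lo) & (prob < hi)

# An iterative improvement step would toggle or swap halftone pixels so that the
# number of ambiguous pixels shrinks while the simulated print stays perceptually
# close to the original gray-scale image.
```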