Proceedings Volume 5008

Color Imaging VIII: Processing, Hardcopy, and Applications


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 13 January 2003
Contents: 15 Sessions, 56 Papers, 0 Presentations
Conference: Electronic Imaging 2003
Volume Number: 5008

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Color Perception and Appearance
  • Image Enhancement
  • Applications I
  • Applications II
  • Capturing
  • Spectral Imaging
  • Printing
  • Posters
  • Color Control on Displays
  • Standards and Color Management
  • Color Reproduction
  • Color and Moire
  • New Architectures and Halftoning
  • Compression and Data
  • Implementation Issues
  • Posters
Color Perception and Appearance
Subjective perception of natural scenes: the role of color
The subjective perception of colors has been extensively studied, with a focus on single colors or on combinations of a few colors. Much less has been done, however, to understand the subjective perception of colors in other contexts, where color is not an isolated feature. This is the task the Kansei community in Japan has set itself, exploring subjective experiences of perception, and of color in particular, given its clear influence on human emotion. The motivation is to create computational models of user visual perception, so that computers can be endowed with the ability to personalize the visual aspects of their computational tasks according to their user. Such a capability is hypothesized to be very important in fields such as printing, information search, design support, and advertisement. In this paper, we present our experimental results in the study of color as a contextual feature of images, rather than in isolation. The experiments aim at understanding the mechanisms behind the personal perception of colors in complex images, and at understanding the formation of color categories when labeling experiences related to color perception.
Cross-media tonal mapping model obtained from psychometric experiments
Stefan Livens, Ans Anthonis, Marc F. Mahy, et al.
We study cross-media image reproduction by constructing a tonal range mapping model. We aim at making reproductions that optimally represent the overall appearance of the originals despite a reduction in dynamic range. We propose a general mapping that works for all input and output white and black points. It is described by a two parameter functional model. The parameters are chosen so that one primarily corresponds to black point variations and the other to white point variations. We set up psychometric experiments to estimate optimal parameter values. Paired comparison was employed because of its ease of use and accurate results. To keep the number of observations down, a small pilot experiment marks out a narrower range of values first. Furthermore, the two parameter optimisation is split into a sequential single parameter optimisation. The experiments are repeated for different white and black points. The model is completed by interpolating between the experimental points and determining the correlation between the parameters. A separate verification experiment proves the validity of the model within the experimental accuracy. A comparison of our model with the CIECAM97s colour appearance model clarifies the fundamental difference between them. The tonal mapping model aims at the best overall reproduction of images. It produces more pleasing images by giving them higher overall contrast, whereas CIECAM faithfully models the appearance.
YACCD: yet another color constancy database
Different image databases have been developed so far to test color constancy algorithms. Each of them differs in its image characteristics, according to the features to be tested. In this paper we present a new image database created at the University of Milano. Since a database cannot contain all possible types of images, limiting the number of images requires making some choices, and these choices should be as neutral as possible. The first image detail that we have addressed is the background: which background is most suitable for a color constancy test database? This choice can be affected by the goal of the color correction algorithms. In developing this database we tried to accommodate a large number of possible approaches, considering color constancy in a broad sense. Images under standard illuminants are presented together with images under particular non-standard light sources; in particular, we collected two groups of lamps, one with weak and one with strong color casts. Another interesting feature is the presence of shadows, which allows the local effects of color correction algorithms to be tested. The proposed database can be used to test algorithms that recover the corresponding colors under standard reference illuminants or, alternatively, assuming a visual appearance approach, to test algorithms for their capability to minimize color variations across the different illuminants, thereby performing perceptual color constancy. This second approach is used to present preliminary tests. The database will be made available on the web.
Unconstrained web-based color naming experiment
This paper describes an ongoing web-based approach to collecting color names or color categories. Previous studies have tended to require a large number of observations from a small number of observers. These studies have also tended to limit responses to one-word, or monolexical, replies. Many studies have also focused on response time or levels of intra-observer agreement in order to identify focal colors. This web-based study uses a distributed design to collect a small number of names from a large number of observers. The responses are neither limited to nor restricted from being monolexical. The focal color analysis is then based on statistical analysis of the monolexically named colors. This paper presents the methodology and infrastructure, as well as considerations for data analysis. Finally, preliminary results of the experiment are considered. The data from over 700 participants yield CIELAB hues and lightnesses for the basic colors that agree with previous investigations about as well as those investigations agree with each other.
Modifying CIECAM97s surround parameters for complex images in graphic arts viewing conditions
Reproducing a transparency original on a hard copy print is a cross-media reproduction task, and for the two to match when viewed simultaneously the different viewing conditions for the two media must be taken into account. When CIECAM97s is used to predict the appearance of the transparency using the cut-sheet surround parameters, the impact of the viewing condition is over-predicted for this simultaneous viewing task. New values for the c and Nc surround parameters were derived for simultaneous viewing, where both print and transparency conditions are defined by ISO 3664. In a second phase, a background luminance factor Yb was calculated from the luminance of the image, border and background fields, assuming that the effect of the background decays exponentially with distance. New values for c and Nc were also calculated which assume that the transparency surround when viewed simultaneously with a print is a weighted combination of the surround conditions for the two media. When evaluated in a psychophysical experiment, reproductions made according to the optimised surround parameters found in phase 1, and the modified surround and background parameters found in phase 2, were judged to give a better match in the appearance of the two media than the CIECAM97s parameters.
Image Enhancement
Improvement of color quality with modified linear multiscale retinex
Tatsumi Watanabe, Yasuhiro Kuwahara, Akio Kojima, et al.
With the growing popularity of digital still cameras (DSCs), higher image quality is required. One issue is automatically improving image quality in shadow areas, which is degraded by the narrow dynamic range of CCD devices. Conventionally, gamma transformation, histogram equalization, and similar techniques have been used for this improvement, but they are not always sufficient. Recently, methods based on the retinex theory proposed by Land, which takes the characteristics of human vision into account, have attracted attention. Such algorithms render shadow areas clearly and effectively by using spatial information from surrounding pixels. Typical methods are single-scale retinex (SSR) and multi-scale retinex (MSR). These methods, however, do not always work well in practice for color correction of printed images with different RGB density distributions. To address these issues of MSR, we propose the modified linear multi-scale retinex (ML-MSR) method. The modified method consists of (a) linear computation processing and (b) synthesis of the original image with the image obtained by the linear MSR. Through simulations on images printed from DSCs, we show that ML-MSR improves visibility in shadow areas while preserving both color balance and saturation, compared with conventional methods such as histogram equalization and the MSR proposed by Jobson. In general, the processing time of MSR increases markedly with the size of the Gaussian averaging filter used to compute the weighted average. We also describe a faster implementation of the ML-MSR algorithm, obtained by thinning out the surrounding pixels and simplifying the averaging.
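For orientation, a minimal sketch of standard multi-scale retinex (as formulated by Jobson et al.) is shown below; the authors' ML-MSR modifies this with a linear computation and synthesis step that is not reproduced here, and the scales and weights used are illustrative only.

```python
# Minimal sketch of standard multi-scale retinex (MSR), not the authors' ML-MSR.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(channel, sigmas=(15, 80, 250), weights=None, eps=1.0):
    """channel: 2-D float array of one color plane (e.g. values in 0..255)."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(channel, dtype=np.float64)
    for w, s in zip(weights, sigmas):
        surround = gaussian_filter(channel, sigma=s)   # weighted local average
        out += w * (np.log(channel + eps) - np.log(surround + eps))
    return out  # usually rescaled to the display range afterwards
```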
Restoring color document images with show-through effects by multiscale analysis
Hirobumi Nishida, Takeshi Suzuki
This paper describes a new approach to restoring scanned color document images where the backside image shows through the paper sheet. A new framework is presented for correcting show-through components using digital image processing techniques. First, the foreground components on the front side are separated from the background and backside components through locally adaptive binarization for each color component and edge magnitude thresholding. Background colors are estimated locally through color thresholding to generate a restored image, and then corrected adaptively through multi-scale analysis along with comparison of edge distributions between the original and the restored image. The proposed method does not require specific input devices or the backside to be input; it is able to correct unneeded image components through analysis of the front side image alone. Experimental results are given to verify effectiveness of the proposed method.
Color image enhancement technique using gamut mapping based on color space division
This paper proposes a gamut mapping algorithm based on color space division for cross-media color reproduction. As each color device has a limited range of producible colors, the reproduced colors on a destination device differ from those of the original device. In order to reduce the color difference between devices, the proposed method divides the whole gamut into parabolic regions based on lightness intervals set by the just noticeable difference (JND) and on the boundary of the original gamut. By dividing the gamut into parabolic regions and mapping each region piecewise, the method not only accounts for gamut characteristics but also provides mapping uniformity. The human visual system is more sensitive to lightness variations, and by using the lightness JND the method restricts lightness mapping variations to levels that are imperceptible. As a result, the proposed algorithm is able to reproduce high-quality color images using low-cost color devices.
Tunable cast remover for digital photographs
The paper describes an adaptive and tunable color cast removal algorithm. This multi-step algorithm first quantifies the strength of the cast by applying a color cast detector, which classifies the input images as having no cast, evident cast, ambiguous cast (images with low cast, or for which whether or not the cast exists is a subjective opinion), or intrinsic cast (images presenting a cast that is probably due to a predominant color we want to preserve, such as in underwater images). The cast remover, a modified version of the white balance algorithm, is then applied in the two cases of evident or ambiguous casts. The method we propose has been tuned and tested, with positive results, on a large data set of images downloaded from personal web-pages, or acquired by various digital cameras.
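The remover described above is a modified white balance algorithm whose specific modifications are not given in this abstract; the sketch below, assuming a simple gray-world correction on images in the [0, 1] range, only illustrates the kind of baseline operation being adapted.

```python
# Minimal gray-world white balance sketch (baseline cast removal, not the
# paper's tuned remover).
import numpy as np

def gray_world_balance(rgb):
    """rgb: H x W x 3 float array in [0, 1]. Scales channels so their means coincide."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means          # per-channel gains
    return np.clip(rgb * gain, 0.0, 1.0)
```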
Applications I
Geometric invariance in describing color features
We present a projective geometry framework for color invariants using the Extended Dichromatic Reflection Model, in which more realistic and complicated illuminations are considered. Many assumptions which have been used by other methods are relaxed in our framework. Specifically some of the proposed invariants do not require any additional assumption except the ones assumed by the Extended Dichromatic Reflection Model. By putting the color invariance into the projective geometry framework, we can generate different types of invariants and clarify the assumptions under which they are valid. Experiments are presented that illustrate the results derived within our framework.
Using [Delta]E metrics for measuring color difference in hard copy pictorial images
Current color difference metrics such as ΔE*ab, ΔE*94, and ΔE00 were developed using uniformly colored patches. The quantification of color variation in pictorial images is far more complex and generally requires the use of sophisticated color appearance models such as CIECAM97s and CIECAM02. In a recent study of printer color variation, the question was raised as to whether, in certain well-bounded situations, ΔE metrics could be used as a measure of color difference in pictorial, hard-copy images. A psychophysical scaling experiment was designed and conducted to examine this possibility. In the experiment, observers rated test prints of three scenes relative to anchor prints for apparent color difference. The correlation between observer scaling values of color difference for pictorial images and ΔE*ab, ΔE*94, and ΔE00 was examined. It was found that, for the color shifts that were introduced into the test prints under constant media and viewing conditions, the ΔE metrics were effective measures of color variation in pictorial image samples. It was also found, however, that the efficacy of these metrics depended strongly on how the metrics were calculated. The procedure of using colors representative of the important colors in the prints being measured produced significantly better results than other methods of calculating the ΔE metrics.
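For reference, ΔE*ab, the simplest of the metrics studied, is the Euclidean distance between two samples in CIELAB; ΔE*94 and ΔE00 add weighting functions for lightness, chroma, and hue differences.

```latex
\Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}
```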
Improving and streamlining the workflow in the graphic arts and printing industry
In order to survive in the economy of today, an ever-increasing productivity is required from all the partners participating in a specific business process. This is no different for the printing industry. One of the ways to remain profitable is, on one hand, to reduce costs by automation and aiming for large-scale projects and, on the other hand, to specialize and become an expert in the area in which one is active. One of the ways to realize these goals is by streamlining the communication between the different partners and focusing on the core business. If we look at the graphic arts and printing industry, we can identify different important players that eventually help in the realization of printed material. For the printing company (as is the case for any other company), the most important player is the customer. This role can be adopted by many different players including publishers, companies, non-commercial institutions, private persons etc. Sometimes, the customer will be the content provider as well but this is not always the case. Often, the content is provided by other organizations such as design and prepress agencies, advertising companies etc. In most printing organizations, the customer has one contact person often referred to as the CSR (Customer Service Representative). Other people involved at the printing organization include the sales representatives, prepress operators, printing operators, postpress operators, planners, the logistics department, the financial department etc. In the first part of this article, we propose a solution that will improve the communication between all the different actors in the graphic arts and printing industry considerably and will optimize and streamline the overall workflow as well. This solution consists of an environment in which the customer can communicate with the CSR to ask for a quote based on a specific product intent; the CSR will then (after the approval from the customer's side) organize the work and brief his technical managers to realize the product. Furthermore, the system will allow managers to brief the actors and follow up on the progress. At all times, the CSRs, as well as the customers, will be able to look at the overall status of a specific product. If required, the customers can approve the content over the web; the system will also support local and remote proofing. In the second part of this article, we will focus on the technical environment that can be used to create such a system. To this end, we propose the use of a multi-tier server architecture based on Sun’s J2EE platform. Since our system performs a communicating role by nature, it will have to interface in a smart way with a lot of external systems such as prepress systems, MIS systems, mail servers etc. In order to allow a robust communication between the server and its subsystems that avoids a failure of the overall system if one of the components goes down, we have chosen a non-blocking, asynchronous communication method based on queuing systems. In order to support an easy integration with other systems in the graphic industry, we will also describe how our communication server supports the JDF standard, a new standard in the graphic industry established by the CIP4 committee.
Applications II
The practical application of an artist's color model as an alternative to CMYK
This paper presents an alternative view of colour, from the artist's perspective. It highlights problems that are current in inkjet and wide-format printing, and shows how other print processes, such as (silk)screen printing, can offer answers for developing inkjet technology in areas such as colour saturation, surface quality, translucency and opacity. The paper introduces the Centre for Fine Print Research (CFPR), gives a context for the work undertaken at the Centre, and presents the International Digital Miniature Print as an example of research dissemination. The paper provides a historical context for colour and colour printing, and introduces the notion that white, and varying translucencies of white, could offer an alternative to, or an enhancement of, current CMYK+ colour sets.
Perceptual approach for unsupervised digital color restoration of cinematographic archives
The cinematographic archives represent an important part of our collective memory. We present in this paper some advances in automating the color fading restoration process, especially with regard to the automatic color correction technique. The proposed color correction method is based on the ACE model, an unsupervised color equalization algorithm based on a perceptual approach and inspired by some adaptation mechanisms of the human visual system, in particular lightness constancy and color constancy. A perceptual approach has some advantages, mainly its robustness and its local filtering properties, which lead to more effective results. The resulting technique is not just an application of ACE to movie images, but an extension of ACE principles to meet the requirements of the digital film restoration field. The presented preliminary results are satisfying and promising.
Image segmentation of stained glass
Alfredo Giani, Lindsay William MacDonald, Caroline Machy, et al.
Several approaches have been applied to a digital image of a stained glass window in order to segment the image to match the window's physical structure of separate pieces of glass joined by strips of lead. A three-stage neural network with optimal thresholding strategy gave satisfactory results when followed by a tuned set of Gabor filters.
Capturing
Hierarchical approach to the optimal design of camera spectral sensitivities for colorimetric and spectral performance
Shuxue Quan, Noboru Ohta, Roy S. Berns, et al.
The optimal design of spectral sensitivity functions for digital color imaging devices has been studied extensively. This paper analyzes the important requirements for designing sensor sensitivity functions. A hierarchical approach to the optimal design of camera spectral sensitivity functions is proposed, incorporating spectral fitting, colorimetric performance, and noise. The approach is based directly on the filter fabrication parameters to avoid approximation deviation. A six-channel camera is designed via this approach, with the first three channels aimed at colorimetric performance and all six channels at spectral performance.
Estimation of spectral distribution of scene illumination from a single image with chromatic illuminant
Yun-Tae Kim, Yang-Ho Cho, Cheol-Hee Lee, et al.
The current paper proposes an illuminant estimation algorithm that estimates the spectral power distribution of an incident light source using its chromaticity determined based on the perceived illumination and highlight method. The proposed algorithm is composed of three steps. First, the illuminant chromaticity of the global incident light is estimated using a hybrid method that combines the perceived illumination and highlight region. Second, the surface spectral reflectance is then recovered from the image after decoupling the global incident illuminant for each channel. The surface spectral reflectance calculation is limited to the maximum achromatic region (MAR), which is the most achromatic and brightest region in the image, and estimated using the principal component analysis (PCA) method along with a set of given Munsell samples. Third, the closest colors are selected from a spectral database composed of reflected-lights generated by the given Munsell samples and a set of illuminants. Finally, the illuminant of the image is calculated using the average spectral distributions of the reflected-lights selected for the MAR region and its average surface reflectance. Simulations were performed using artificial color-biased images and the results confirmed the accuracy of the estimates produced by the proposed method for various illuminants.
Fast linear method of illumination classification
We present a simple method for estimating the scene illuminant for images obtained by a Digital Still Camera (DSC). The proposed method utilizes basis vectors obtained from known memory color reflectance to identify the memory color objects in the image. Once the memory color pixels are identified, we use the ratios of the red/green and blue/green to determine the most likely illuminant in the image. The critical part of the method is to estimate the smallest set of basis vectors that closely represent the memory color reflectances. Basis vectors obtained from both Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are used. We will show that only two ICA basis vectors are needed to get an acceptable estimate.
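A hedged sketch of the final classification step follows: once memory-color pixels have been identified (a step not shown), their mean red/green and blue/green ratios are compared with stored ratios for candidate illuminants. The illuminant names and ratio values below are placeholders, not data from the paper.

```python
# Sketch of ratio-based illuminant classification from memory-color pixels.
import numpy as np

CANDIDATE_ILLUMINANTS = {          # hypothetical (r/g, b/g) ratios per illuminant
    "D65":          (1.00, 1.05),
    "Incandescent": (1.45, 0.60),
    "Fluorescent":  (1.10, 0.90),
}

def classify_illuminant(memory_pixels):
    """memory_pixels: N x 3 array of linear RGB values for identified memory-color pixels."""
    r, g, b = memory_pixels.mean(axis=0)
    ratios = np.array([r / g, b / g])
    return min(CANDIDATE_ILLUMINANTS,
               key=lambda k: np.linalg.norm(ratios - np.array(CANDIDATE_ILLUMINANTS[k])))
```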
New constraint on spectral reflectance and its application in illuminant detection
For a long time, the accepted constraints on surface spectral reflectances have been that they lie in the range 0 to 1 and are smooth and of low frequency. In practice these constraints prove too loose, typically for illuminant estimation with spectral recovery. Linear models and PCA decomposition make it possible to reconstruct spectral reflectances effectively with a small number of parameters. Building on this, a new constraint on surface spectral reflectance is proposed to better limit and describe their characteristics. It is defined as a two-dimensional histogram of the coefficients of real-world spectral reflectances. The variables in the two dimensions are ratios of the PCA parameters, which describe the "saturation" property of reflectances. Gamuts and histograms behave differently when applied to illuminant estimation: a histogram is preferred to a gamut when the color space is composed of relative values. On this basis, the original color-by-correlation method is modified to perform better, especially on real images. The proposed constraint is applied to illuminant detection with spectral recovery: the recovered surface reflectances are examined against the constraint, and the scene illuminant is detected through likelihood comparison. Tests on both synthetic and real images show that the proposed method compares favorably with others.
Characterization of a digital camera as an absolute tristimulus colorimeter
An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSCs) which allows them to be used as tele-colorimeters with CIE-XYZ color output, in cd/m2. The spectral characterization consists of calculating the color-matching functions from the previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute tristimulus values CIE-XYZ (in cd/m2) under variable and unknown spectroradiometric conditions. Thus, at the first stage, a gray balance is applied to the RGB digital data to convert them into RGB relative colorimetric values. At the second stage, an algorithm of luminance adaptation vs. lens aperture is inserted into the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the DSC color analysis accuracy indexes, both in a raw state and with the corrections from a linear model of color correction, have been evaluated using the Pointer'86 color reproduction index with the unrelated Hunt'91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.
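As a rough illustration of the colorimetric stage, the sketch below fits a 3x3 matrix from gray-balanced camera RGB to measured chart XYZ by least squares; the luminance-versus-aperture adaptation and the Pointer/Hunt evaluation described in the abstract are omitted.

```python
# Minimal least-squares sketch of a camera colorimetric profile (RGB -> XYZ).
import numpy as np

def fit_rgb_to_xyz(rgb, xyz):
    """rgb, xyz: N x 3 arrays of corresponding camera and measured chart values."""
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)   # 3x3 matrix, xyz ≈ rgb @ M
    return M

# usage: xyz_est = new_rgb @ fit_rgb_to_xyz(rgb_patches, xyz_patches)
```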
Spectral Imaging
Comparative study on sensor spectral sensitivity estimation
Shuxue Quan, Noboru Ohta, Xiaoyun Jiang
The spectral characterization of digital imaging devices overcomes the drawbacks of conventional colorimetric characterization by determining the sensor spectral sensitivity functions. Direct measurement of the sensitivities requires expensive instruments and takes a long time. A “quick and easy” yet sufficiently accurate estimation of those functions is desired in some circumstances. The estimation is realized by imaging a selected set of reflectance samples. In this paper, the main available approaches to sensor sensitivity estimation are reviewed, followed by a description of the proposed iterative multiscale basis functions method. The performance of the new method is compared with some of the available approaches. The implementation of the new method is relatively simple, and the results show that it is superior by offering more degrees of freedom and yielding nonnegative, smooth, and close approximations under either noiseless or noisy conditions.
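The basic inverse problem behind all such estimation methods can be sketched as a non-negative least-squares fit, as below; the paper's iterative multiscale basis-function method adds smoothness and other constraints that this sketch does not attempt to reproduce.

```python
# Non-negative least-squares sketch of sensitivity estimation from imaged samples.
import numpy as np
from scipy.optimize import nnls

def estimate_sensitivity(reflectances, illuminant, responses):
    """
    reflectances: N x L matrix of sample reflectance spectra (L wavelengths)
    illuminant:   length-L spectral power distribution of the taking light
    responses:    length-N camera responses for one channel
    """
    A = reflectances * illuminant          # row i is L(lambda) * R_i(lambda)
    sensitivity, _ = nnls(A, responses)    # non-negative spectral sensitivity
    return sensitivity
```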
Spectral estimation of human skin color using the Kubelka-Munk theory
The present paper describes a method for modeling human skin coloring and estimating the surface-spectral reflectance by using the Kubelka-Munk theory. First, human skin is modeled as two layers of turbid materials. Second, we formulate the reflectance estimation problem as the Kubelka-Munk equations with six unknown parameters. These parameters are the regular reflectance at the skin surface and the five weights for the spectral absorption of different pigments: melanin, carotene, oxy-hemoglobin, deoxy-hemoglobin, and bilirubin. Moreover, the optical coefficients of spectral absorption and scattering for the two skin layers and the thickness values of these layers are used in the solution. Finally, experiments estimate the skin surface-spectral reflectance on several body parts, such as the cheeks of the face, the palm, the back of the hand, the inside of the arm, and the outside of the arm. It is confirmed that the proposed method is reliable in all cases.
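For orientation, the standard Kubelka-Munk relation for an optically thick layer links reflectance to the ratio of the absorption coefficient K and scattering coefficient S; the paper's two-layer, six-parameter formulation builds on relations of this kind.

```latex
R_{\infty} = 1 + \frac{K}{S} - \sqrt{\left(\frac{K}{S}\right)^{2} + 2\,\frac{K}{S}}
```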
Spectrum-based color reproduction system for motion picture
Kenro Ohsawa, Hiroyuki Fukuda, Yasuhiro Komiya, et al.
The architecture of a color reproduction system based on spectral information is addressed. This system aims to realize accurate color reproduction under arbitrary illuminants and color matching functions, and anticipates the ultimate reproduction of the spectral reflectance and spectra of objects. The system can be seen as an expanded version of the one proposed by the International Color Consortium (ICC). One of its features is the use of spectral space as the profile connection space, as an alternative to colorimetric space, while retaining compatibility with the native color management functions of the current system. The proposed system is further expanded to construct a spectrum-based color reproduction system for motion pictures. A six-band HDTV camera and a six-primary projection display system, realized in our laboratory as an experimental spectrum-based color reproduction system for motion pictures, are briefly introduced.
One-parameter subgroups and the chromaticity properties of time-changing illumination spectra
Understanding the properties of time-varying illumination spectra is of importance in all applications where dynamical color changes due to changes in illumination characteristics have to be analyzed or synthesized. Examples are (dynamical) color constancy and the creation of realistic animations. In this article we show how group theoretical methods can be used to describe sequences of time-changing illumination spectra with only a few parameters. From the description we can also derive a differential equation that describes the illumination changes. We illustrate the method with investigations of black-body radiation and measured sequences of daylight spectra.
Hyperspectral imaging: the colorimetric high ground
Color is the human sensory perception triggered by a portion of the electromagnetic spectrum commonly called light. Mechanisms for capturing and reproducing these perceptions can trace their origins to four events. First, Newton’s deduction that “white light” was a mixture of rays able to induce the sensation of color in a human. Some 140 years later, Young offered a physiological explanation for color perception, photosensitive receptors in the eye, which came to be known as the trichromatic theory. About 55 years after that, Maxwell applied Young’s theory to photography, demonstrating the three-primary process that even now underpins commercial methods of capturing and reproducing color. Finally, in 1931, an international scientific standards organization, the International Commission on Illumination (CIE), offered a precise, reproducible system for measuring and specifying color. However, the CIE 1931 system was never integrated into the generally accepted procedure for reproducing color. The goal of this paper is to demonstrate, via discussion of technical issues and disclosure of a practical image capture device, that the CIE 1931 method and related improvements, collectively described as hyperspectral imaging, can be integrated into the general process of color reproduction. The author maintains hyperspectral imaging is the path to virtually all future color reproduction techniques.
Printing
3D color separation maximizing the printer gamut
Besides having CMY colorants, most color printers include at least one extra colorant, black (K), to increase the density of shadow colors and to reduce the amount of colorant required for printing them. In recent years, CMYKcm, CMYKcmk (cyan, magenta, yellow, black, light-cyan, light-magenta, and light-black), and CMYKOG (O and G stand for orange and green) or CMYKOV (V stands for violet) ink sets have been used in printers to reduce graininess or to extend the printer color gamut. No matter how many colorants are used, a printer is often configured as a three-channel printer to simplify the color mapping process. The traditional GCR/UCR approach has been widely applied for CMY-to-CMYK color separation. However, this approach is not flexible for controlling K usage locally; it does not guarantee reasonable gamut usage; and it does not work very well for more than the CMYK colorants. In order to solve the problems of traditional GCR approaches, a color separation method based on 3-D interpolation was developed. In this process, we first determine the color conversion for some important node points, which include primary colors, neutral colors, and other color ramps on the gamut surface. Then different interpolation approaches are applied to fill the entire 3-D lookup table. This approach solves the problem in traditional GCR that many high-chroma shadow colors may be lost in the color separation step. It controls K usage globally as well as locally, controls the ink limit over the entire gamut, and also works for color separation with more than the four CMYK colorants. Because it performs automatically without human interaction, it can be applied to general printer color calibration as well as ICC profile recreation and smart CMM implementation.
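Once the node points of the separation table have been filled in, applying it reduces to interpolation in a 3-D lookup table. A minimal sketch using trilinear interpolation is shown below; building the node values, which is the substance of the paper, is assumed to have been done already.

```python
# Sketch of applying a populated 3-D separation LUT with trilinear interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def make_separation(lut, grid_points):
    """
    lut:         G x G x G x K table mapping input node points to K colorant values.
    grid_points: 1-D array of node positions along each axis, e.g. linspace(0, 1, G).
    """
    axes = (grid_points, grid_points, grid_points)
    interps = [RegularGridInterpolator(axes, lut[..., k]) for k in range(lut.shape[-1])]

    def separate(cmy):                      # cmy: N x 3 input colors in [0, 1]
        return np.stack([f(cmy) for f in interps], axis=-1)

    return separate
```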
Using genetic algorithms for spectral-based printer characterization
Silvia Zuffi, Raimondo Schettini, Giancarlo Mauri
In recent years, many methods have been proposed for the spectral-based characterization of inkjet printers. To our knowledge, the majority of these are based on a physical description of the printing process, employing different strategies to deal with mechanical dot gain and the physical interaction among inks. But our experience tells us that, as printing is a physical process involving a large number of effects and unpredictable interactions, it is not unusual to be unable to fit a mathematical model to a given printer. The question becomes, therefore, whether it is feasible, and to what degree, to employ an analytical printer model even if it appears to be incapable of describing the behavior of a given device. A key objective of our work is to obtain a procedure that can spectrally characterize any printer, regardless of the paper and the printer driver used. We in fact treat the printers as RGB devices, and incorporate the printer driver operations, even if they are unknown to us, into the analytical model. We report here our experimentation on the use of genetic algorithms to tune a spectral printer model based on the Yule-Nielsen modified Neugebauer equation. In our experiments we considered three different inkjet printers and used different kinds of paper and printer drivers. For each device the printer model was tuned, using a genetic algorithm, on a data set of some 150 measured reflectance spectra. The test set was composed of 777 samples, uniformly distributed in the RGB color space.
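The Yule-Nielsen modified Neugebauer model that the genetic algorithm tunes predicts the printed reflectance from the reflectances R_i(λ) of the Neugebauer primaries, their fractional dot areas w_i, and the Yule-Nielsen factor n:

```latex
R(\lambda) = \left( \sum_{i} w_{i}\, R_{i}(\lambda)^{1/n} \right)^{n}
```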
Posters
Gamut boundary determination for a color printer using the face triangulation method
Tasks such as assessing the capabilities of a device or performing gamut mapping require the accurate estimation of device gamut boundaries in colorimetric space. We propose here a new method (which we call the Face Triangulation Method) to estimate the physical gamut boundary of a colour printer. The method has been tested on a real device, using two different paper media, and a common type of Gamut Boundary Descriptor. Given the small number of sample colours used for the estimate, and the high number of test colours employed, we conclude that the results achieved are rather good. We also indicate how to extend the Face Triangulation Method so that the boundary estimation can be further improved either locally or globally.
Color Control on Displays
Color quality management in advanced flat panel display engines
Fritz Lebowsky, Charles F. Neugebauer, David M. Marnatti
During recent years color reproduction systems for consumer needs have experienced various difficulties. In particular, flat panels and printers could not reach a satisfactory color match. The RGB image stored on an Internet server of a retailer did not show the desired colors on a consumer display device or printer device. STMicroelectronics addresses this important color reproduction issue inside their advanced display engines using novel algorithms targeted for low cost consumer flat panels. Using a new and genuine RGB color space transformation, which combines a gamma correction Look-Up-Table, tetrahedrization, and linear interpolation, we satisfy market demands.
User-preferred color temperature conversion for video on TV or PC
The term "color temperature" represents the color of a light source or the white point of an image display device such as a TV or PC monitor. By controlling the color temperature, we can convert the reference white color of images. This is equivalent to an illuminant change, which alters all colors in the scene. In this paper, our goal is to find an appropriate method of converting the color temperature in order to reproduce the user-preferred color temperature on video display devices. It is essential that the relative difference in color temperature between successive image frames be well preserved and that the appearance of the images seem natural after the user-preferred color temperature is applied. In order to satisfy these conditions, we propose an adaptive color temperature conversion method that estimates the color temperature of an input image and determines the output color temperature in accordance with the estimated value.
Absolute and relative colorimetric evaluation for precise color on screen
Franz H. Herbert, Jo S. Kirkenaer, Jack A. Ladson
This paper deals with assessing and controlling the variables required to present accurate and precise color on screen. The objective is to generate an accurate, precise soft copy of an object color, with little difference between the two in color and appearance. This opens new vistas in product design and quality control. We obtained duplicate sets of 23 colors, including two neutral chips, distributed and widely spaced at different color centers throughout color space. We used these sets to evaluate color and appearance at different locations remote from one another. We obtained CIE L* a* b* values for the color representations displayed on the screen under multiple illuminants, and compared those colorimetric values to the corresponding object color sample values, with a Pearson correlation coefficient greater than 0.95 for all illuminants. Multiple personnel in different locations performed psychometric evaluations of the color and appearance presented by the display against the perceived color and appearance of the object under multiple illuminants. We quantitatively assessed and ranked the quality of the perceived color matches. We judged the precise color on screen to be accurate using our rating system and applying business statistics to evaluate and quantify the results. The evaluation of the data validates that we achieved excellent colorimetric (measured) accuracy and quantifiable perceptual agreement of the soft-copy color with the color and appearance of the objects.
Standards and Color Management
Standardization: colorful or dull?
After mentioning the necessity of standardization in general, this paper explains how human factors (ergonomics) standardization by ISO and the deployment of information technology were linked. Visual display standardization is the main topic; the present as well as the future situation in this field is treated, mainly from an ISO viewpoint. Some observations are made about the necessary and interesting co-operation between physicists and psychologists, of different nationalities, who may be employed by either private enterprise or governmental institutions, in determining visual display requirements. The display standard that is to succeed the present ISO standards in this area (ISO 9241-3, -7, -8 and ISO 13406-1, -2) will have a scope that is not restricted to office tasks. This means a large extension of the contexts for which display requirements have to be investigated and specified, especially if mobile use of displays, under outdoor lighting conditions, is included. The new standard will be structured in such a way that it is more accessible than the present ones for different categories of standards users. The subject of color in the new standard is elaborated here. A number of questions are asked as to which requirements on color rendering should be made, taking new research results into account, and how far the new standard should go in making recommendations to the display user.
Color management with a hammer: the B-spline fitter
Ian E. Bell, Bonny H. P. Liu
To paraphrase Abraham Maslow: If the only tool you have is a hammer, every problem looks like a nail. We have a B-spline fitter customized for 3D color data, and many problems in color management can be solved with this tool. Whereas color devices were once modeled with extensive measurement, look-up tables and trilinear interpolation, recent improvements in hardware have made B-spline models an affordable alternative. Such device characterizations require fewer color measurements than piecewise linear models, and have uses beyond simple interpolation. A B-spline fitter, for example, can act as a filter to remove noise from measurements, leaving a model with guaranteed smoothness. Inversion of the device model can then be carried out consistently and efficiently, as the spline model is well behaved and its derivatives easily computed. Spline-based algorithms also exist for gamut mapping, the composition of maps, and the extrapolation of a gamut. Trilinear interpolation---a degree-one spline---can still be used after nonlinear spline smoothing for high-speed evaluation with robust convergence. Using data from several color devices, this paper examines the use of B-splines as a generic tool for modeling devices and mapping one gamut to another, and concludes with applications to high-dimensional and spectral data.
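A one-dimensional illustration of the noise-filtering role of spline fitting is sketched below using SciPy's smoothing spline; the paper's fitter works on 3-D color data, but the idea of replacing noisy measurements with a guaranteed-smooth model is the same. The data and smoothing factor are illustrative.

```python
# 1-D illustration of spline smoothing of noisy device measurements.
import numpy as np
from scipy.interpolate import UnivariateSpline

digital = np.linspace(0.0, 1.0, 33)                       # device input levels
measured = digital**2.2 + np.random.normal(0, 0.01, 33)   # noisy tone measurements
tone_curve = UnivariateSpline(digital, measured, k=3, s=33 * 0.01**2)
smooth_L = tone_curve(np.linspace(0.0, 1.0, 256))         # smooth, well-behaved model
```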
Derivation of efficient color-space conversion formulae for n-dimensional interpolation
Gordon W. Braudaway
Conversions from one color-space to another are frequently used in image processing. Many conversion methods have been described mathematically, a few of which have been elevated to be international standards. However, for some conversions, purely mathematically based methods have been found wanting, and the method of measurement and interpolation has been used to produce more accurate results. Nowhere is this method more prevalently used than in dealing with colors resulting from combinations of inks or toners applied to paper. Although ink or toner colors of Cyan, Magenta, Yellow and Black are most commonly used, the use of a larger number of inks or toners that expand the color gamut is becoming more important for high fidelity color printing. The subject of this paper is the derivation of true n-dimensional linear interpolation formulae that are much more efficient than "tri-linear" or "quad-linear" and that can be used to convert from one n-dimensional color-space to another of equal, fewer or greater dimensions. The mathematical principle to be used for deriving the formulae is called axiom based induction. An interesting application of these formulae might be the conversion of a seven-dimensional color-space to a four dimensional color-space that would allow a seven-color master image to be "re-purposed" for printing by a less costly four-color method. Another application might be the use of a seven-dimensional interpolation, applied iteratively to produce a "corner turn," that could allow the direct mapping of three-dimensional color into seven-dimensional ink densities. An example of interpolation errors resulting from round-trip conversion of three to four and back to three dimensions will be given.
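As a baseline for comparison, standard n-linear interpolation inside one cell of an n-dimensional lookup table evaluates all 2^n cell corners, as sketched below; the formulae derived in the paper are more efficient than this reference construction.

```python
# Reference n-linear interpolation over one cell of an n-D lookup table.
import itertools
import numpy as np

def n_linear(corner_values, frac):
    """
    corner_values: dict mapping corner tuples in {0,1}^n to output vectors.
    frac:          length-n fractional position inside the cell, each in [0, 1].
    """
    n = len(frac)
    result = 0.0
    for corner in itertools.product((0, 1), repeat=n):
        weight = np.prod([f if c else 1.0 - f for c, f in zip(corner, frac)])
        result = result + weight * np.asarray(corner_values[corner], dtype=float)
    return result
```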
Neutral gray adjustment in printer ICC profiles
Neutral gray balance is very important in printing black-and-white images and images that have gray or near-gray content. Accurate gray balance is often difficult to achieve with ICC profiling software packages, especially commercial packages designed for users with little knowledge of color science. Furthermore, neutral gray may be judged differently by different individuals. In order to solve these problems, we developed different mechanisms to improve and modify the neutral gray balance in printer ICC profiles. In this paper, a method to modify the transformation of the neutral gray balance for printer ICC profiles is presented. A colorimetric table, derived either from the AToB1 tag and the white point tag or from a colorimetric data set measured from a near-gray target, is applied to recreate the neutral gray points in the BToAi tags. The neutral gray point in a* and b* is determined by color appearance modeling or by a user for personal preference. Different a* and b* values for different lightnesses can be supplied to compensate for inaccuracy in the AToB1 tag or for color shift of the printer. To achieve highly accurate neutral gray balance, a target that surrounds the neutral gray axis from white to black is printed for gray balance calibration. Black adaptation is also taken into account to compensate for the chrominance difference between the white point and the black point.
Color Reproduction
Retaining color fidelity in multiple-generation reproduction using digital watermarks
In most existing color reproduction systems, color correction is performed in an open-loop fashion. For multiple-generation color copying, color fidelity cannot be guaranteed, as the errors introduced in color correction may accumulate. In this paper, we propose a method of solving the error accumulation problem by embedding color information as an invisible digital watermark in hardcopies. When the hardcopy is scanned, the embedded information can be retrieved to provide real-time calibration. As the method is closed-loop in nature, it may reduce error accumulation and improve color fidelity, particularly when copies go through multiple generations of reproduction.
Determining visually achromatic colors on substrates with varying chromaticity
Phil J. Green, L. Otahalova
Visually neutral colours on three different graphic arts substrates were chosen in a psychophysical experiment. Observers selected the most neutral patch from 49 patches of similar lightness, where the substrate was allowed to provide an adapting field. The results showed that the colours seen as visually neutral tend to have CIELAB a*, b* coordinates of 0 at the black point and to approach the a*, b* coordinates of the substrate at higher lightnesses. The adaptation to the substrate could be considered complete for a white substrate, but incomplete in non-white substrates. The results were modelled by transforming the achromatic axis in CIELAB by two methods - one by normalizing to the XYZ of the paper white, and the other by the chromatic adaptation transform CMCCAT2000. The best results were obtained when the adopted white point was taken as the substrate XYZ, with the chroma scaled by a factor of 0.6. Chroma scaling had more impact on the results for normalization than for CMCCAT2000.
Technology of duotone color transformations in a color-managed workflow
Duotone refers to an image with various shades of a hue mapped in a vector or wedge through a color space. The colorant, the gradient curve, and the number of colorants used define the slice through the color space. The image is printed with two or more analogue colorants. The colorants may be custom formulated or selected from a named color system. Typically two colorants are placed on a substrate by a halftone procedure, and the visual result, the mixture of the two colorants, is a third color. A gamut map of the colorants requires an accurate model of the third color that results from halftoning and printing the two inks. Color management procedures convert this gamut model to a vector through a monitor RGB color space and then to CMYK for proofing. This paper describes such a color management procedure.
Color and Moire
Variations on error diffusion: retrospectives and future trends
Grayscale error diffusion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. Since error diffusion is 2-D sigma-delta modulation (Anastassiou, 1989), Kite et al. linearize error diffusion by replacing the thresholding quantizer with a scalar gain plus additive noise. Sharpening is proportional to the scalar gain. Kite et al. derive the sharpness control parameter value in threshold modulation (Eschbach and Knox, 1991) to compensate for linear distortion. These unsharpened halftones are particularly useful in perceptually weighted SNR measures. False textures at mid-gray (Fan and Eschbach, 1994) are due to limit cycles, which can be broken up by using a deterministic bit flipping quantizer (Damera-Venkata and Evans, 2001). We review other variations on grayscale error diffusion to reduce false textures in shadow and highlight regions, including green noise halftoning (Levien, 1993) and tone-dependent error diffusion (Li and Allebach, 2002). We then discuss color error diffusion in several forms: color plane separable (Kolpatzik and Bouman, 1992); vector quantization (Shaked et al., 1996); green noise extensions (Lau et al., 2000); and matrix-valued error filters (Damera-Venkata and Evans, 2001). We conclude with open research problems.
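As a common reference point for the variations discussed above, a minimal grayscale Floyd-Steinberg error-diffusion loop is sketched below; threshold modulation, deterministic bit-flipping quantizers, tone-dependent weights, and the color extensions all modify pieces of this loop.

```python
# Minimal grayscale Floyd-Steinberg error diffusion (baseline, no variations).
import numpy as np

def floyd_steinberg(img):
    """img: H x W array in [0, 1]; returns a binary halftone."""
    x = img.astype(np.float64).copy()
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            old = x[i, j]
            new = 1.0 if old >= 0.5 else 0.0          # thresholding quantizer
            err = old - new
            x[i, j] = new
            if j + 1 < w:               x[i, j + 1]     += err * 7 / 16
            if i + 1 < h and j > 0:     x[i + 1, j - 1] += err * 3 / 16
            if i + 1 < h:               x[i + 1, j]     += err * 5 / 16
            if i + 1 < h and j + 1 < w: x[i + 1, j + 1] += err * 1 / 16
    return x
```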
The relevance of 19th century continuous-tone photomechanical printing techniques to digitally generated imagery
Stephen Hoskins, Paul Thirkell
Collotype and Woodburytype are late 19th- and early 20th-century continuous-tone methods of reproducing photography in print that do not have an underlying dot structure. The aesthetic and tactile qualities produced by these methods at their best have never been surpassed. Woodburytype is the only photomechanical print process using a printing matrix and ink that is capable of rendering true continuous tone; it also has the characteristic of rendering a photographic image by mapping a three-dimensional surface topography. Collotype’s absence of an underlying dot structure enables an image to be printed in as many colours as desired without creating any form of interference structure. Research at the Centre for Fine Print Research, UWE Bristol, aims to recreate these processes for artists and photographers and assess their potential to create a digitally generated image printed in full colour and continuous tone that will not fade or deteriorate. Through this research the Centre seeks to provide a context in which the development of current four-colour CMYK printing may be viewed as an expedient rather than a logical route for the development of colour printing within the framework of digitally generated hard-copy paper output.
Nonorthogonal screen and its application in moire-free halftoning
Shen-ge Wang, Zhigang Fan, Zhenhuan Wen
In color reproduction, the most troublesome moire pattern is the second-order moire, or three-color moire, usually produced by mixing the cyan, magenta and black halftone outputs. A classical 3-color zero-moire solution is to use three identical clustered halftone screens at different rotations: 15, 45 and 75°, respectively. However, for most digital printing devices, the size and shape of halftone screens are constrained by the "digital grid", which defines the locations of printed dots; therefore, an exact 15 or 75° rotation of a clustered screen is impossible. Although there are many alternative approaches for moire-free color halftoning, most of them only provide approximate solutions and/or tend to generate additional artifacts in the halftone output. The difficulty of achieving moire-free color halftoning is greatly relieved by using non-orthogonal halftone screens, i.e., screens in general parallelogram shapes. In this paper, a general condition for 3-color zero-moire solutions is derived. A procedure using integer equations to search for moire-free solutions for different applications is also described.
Novel color palettization scheme for preserving important colors
Jiebo Luo, Kevin E. Spaulding, Qing Yu
Color palettization is the process that converts an input color image having a large set of possible colors to an output color image having a reduced set of palette colors. For example, a typical 24-bit input image has possibly millions of colors, whereas a typical color palette has only 256 colors. It is desirable to determine the set of palette colors based on the distribution of colors in the input image. Furthermore, it is also desirable to preserve important colors such as human skin tones in the palettized image. We propose a novel scheme to accomplish these goals by supplementing the distribution of input colors with a distribution of selected important colors. In particular, skin color supplementation is achieved by appending to the input image skin-tone patches generated from statistical sampling of the skin color probability density function. A major advantage of this scheme is that explicit skin detection, which can be error-prone and time-consuming, is avoided. In addition, this scheme can be used with any color palettization algorithm. Subjective evaluation has shown the efficacy of this scheme.
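A hedged sketch of the supplementation idea follows: synthetic skin-tone samples drawn from an assumed Gaussian skin-color model are appended to the image pixels before a generic palettization step (k-means here, standing in for whichever palettization algorithm is actually used). The skin model parameters are illustrative, not taken from the paper.

```python
# Sketch: supplement image pixels with sampled skin tones before palettization.
import numpy as np
from sklearn.cluster import KMeans

def palettize_with_skin(pixels, n_colors=256, n_skin=5000, seed=0):
    """pixels: N x 3 array of image colors (e.g. RGB in 0..255)."""
    rng = np.random.default_rng(seed)
    skin_mean = np.array([200.0, 150.0, 130.0])      # assumed skin RGB mean
    skin_cov = np.diag([400.0, 300.0, 300.0])        # assumed covariance
    skin_samples = rng.multivariate_normal(skin_mean, skin_cov, size=n_skin)
    augmented = np.vstack([pixels, skin_samples])    # bias the color distribution
    palette = KMeans(n_clusters=n_colors, n_init=4).fit(augmented).cluster_centers_
    return palette
```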
New Architectures and Halftoning
Model-based digital image halftoning using iterative reduced-complexity grid message-passing algorithm
Phunsak Thiennviboon, Antonio Ortega, Keith M. Chugg
An iterative grid message-passing algorithm for model-based digital image halftoning is introduced. Based on the standard message-passing algorithm on the grid graphical model, the algorithm is designed to suboptimally solve general two-dimensional (2D) digital least metric (DLM) problems and is found to be very successful (i.e., nearly optimal) for 2D data detection in page-oriented optical-memory (POM) systems. In contrast to many 2D (iterative) optimization techniques, this grid algorithm attempts to achieve a globally optimal solution via a local-metric computation and message-passing scheme. Using a reduced-complexity technique, the simplified grid algorithm is proposed for the halftoning problem and is shown to provide similar image quality as compared to the best halftoning algorithms in the literature. Since the grid algorithm does not exploit the properties of a specific metric, it is directly applicable to other digital image processing tasks (e.g., optimal near-lossless coding, entropy-constrained halftoning, or image/video dependent quantization).
Halftoning on the wavelet domain
Wavelet representations of images are increasingly important as more image processing functions are shown to be advantageously executed in the wavelet domain. Images may be inverse halftoned, compressed, denoised, and enhanced in the wavelet domain. In conjunction with other wavelet processing, it would be efficient to halftone directly from the wavelet domain. In this paper we demonstrate how to perform error diffusion in the wavelet domain. The wavelet coefficients are modified by a normalization factor and re-arranged. Then, traditional feed-forward raster scan error diffusion is performed and quality halftones are shown to result. Error diffusing in the wavelet domain is noted to be non-causal with respect to the pixels, and thus the method is not reproducible by feed-forward raster scan error diffusion of pixels. It is shown that the wavelet halftones preserve the average value of the input for constant patches. The resulting halftones may appear smoother in smooth regions and sharper at edges than the corresponding pixel-domain halftones. Disadvantages may include a greater susceptibility to moire and false contouring. Error diffusion is a two-dimensional sigma-delta modulation, and the ideas presented may also be useful for one-dimensional sigma-delta modulation applications.
Halftoning over a hexagonal grid
Pierre-Marc Jodoin, Victor Ostromoukhov
In this contribution, we present an optimal halftoning algorithm that uniformly distributes pixels over a hexagonal grid. This method is based on a slightly modified error-diffusion approach presented at SIGGRAPH 2001. Our algorithm's parameters are optimized using a simplex downhill search method together with a blue-noise-based cost function. We also present the mathematical basis needed to perform spectral and spatial calculations on a hexagonal grid. The proposed algorithm can be used in a wide variety of printing and visualization tasks. We introduce an application where our error-diffusion technique can be directly used to produce clustered screen cells.
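As a rough illustration of working on a hexagonal lattice, the sketch below runs error diffusion over an "odd-r" offset grid (odd rows shifted right by half a cell) with equal weights to the three causal hex neighbors; the equal weights are an arbitrary placeholder and not the simplex-optimized, blue-noise-driven parameters of the paper.

```python
# Sketch: binary error diffusion on a hexagonal lattice stored in odd-r
# offset coordinates. Weights are placeholders, not the optimized ones.
import numpy as np

def hex_error_diffusion(gray):
    """gray: HxW float array in [0, 1], interpreted as samples on a hex grid."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        shift = 1 if y % 2 else 0          # odd rows sit half a cell to the right
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Causal hex neighbors: east, plus the two cells touching from below.
            targets = [(y, x + 1), (y + 1, x - 1 + shift), (y + 1, x + shift)]
            targets = [(r, c) for r, c in targets if 0 <= r < h and 0 <= c < w]
            for r, c in targets:
                img[r, c] += err / len(targets)
    return out
```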
Bit-depth extension using spatiotemporal microdither based on models of the equivalent input noise of the visual system
Continuous tone, or “contone”, imagery usually has 24 bits/pixel as a minimum, with eight bits each for the three primaries in typical displays. However, lower-cost displays constrain this number because of various system limitations. Conversely, high-quality displays seek to achieve 9-10 bits/pixel/color, though system bottlenecks may limit them to 8. The two main artifacts of reduced bit-depth are contouring and loss of amplitude detail; these can be prevented by dithering the image prior to the bit-depth loss. Early work in this area includes Roberts’ noise modulation technique, Mitsa’s blue noise mask, Tyler’s technique of bit-stealing, and Mulligan’s use of the visual system’s spatiotemporal properties for spatiotemporal dithering. However, most halftoning/dithering work has been directed at displays at the lower end of bits/pixel (e.g., 1 bit, as in halftoning) and higher ppi. Like Tyler, we approach the problem from the higher end of bits/pixel/color, say 6-8, and use available high-frequency color content to generate even higher luminance amplitude resolution. Bit-depth extension with a high starting bit-depth (and often lower spatial resolution) changes the game substantially from the halftoning experience. For example, complex algorithms like error diffusion and annealing are not needed, just the simple addition of noise. Instead of a spatial dither, it is better to use an amplitude dither, termed microdither by Pappas. We have looked at methods of generating the highest invisible opponent-color spatiotemporal noise and other patterns, and have used Ahumada’s concept of equivalent input noise to guide our work. This paper reports on techniques and observations made in achieving contone quality on ~100 ppi, 6 bits/pixel/color LCD displays with no visible dither patterns, noise, contours, or loss of amplitude detail at viewing distances as close as the near-focus limit (~120 mm). These include the interaction of display nonlinearities and their role in generating low-spatial-frequency flicker from mutually high-pass spatial and temporal noise, as well as the temporal response symmetries.
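A minimal sketch of the amplitude-dither step, assuming plain uniform noise and a straight 8-to-6-bit truncation (the paper instead shapes the dither spatially, temporally, and chromatically using equivalent-input-noise models of the visual system):

```python
# Sketch: add small temporally varying noise before quantizing 8-bit values to
# 6 bits, trading visible contouring for noise the eye averages away. The
# uniform noise is a placeholder for the spectrally shaped dither in the paper.
import numpy as np

def microdither_to_6_bits(frame8, t, seed=0):
    """frame8: HxWx3 uint8 frame; t: frame index, so the dither changes over time."""
    rng = np.random.default_rng(seed + t)
    step = 4                                    # one 6-bit step spans four 8-bit codes
    noise = rng.uniform(-0.5, 0.5, frame8.shape) * step
    quantized = np.clip(np.round((frame8.astype(np.float64) + noise) / step), 0, 63)
    return (quantized * step).astype(np.uint8)  # re-expanded 8-bit codes for display
```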
Improved vector error diffusion for reduction of smear artifact in the boundary regions
This paper proposes a vector error diffusion method for reducing the smear artifact in boundary regions. This artifact results mainly from a large accumulation of quantization errors; in particular, color bands a few pixels wide appear as smears along edges. Accordingly, to reduce this artifact, the proposed halftoning process excludes large accumulated quantization errors by comparing the vector norms and vector angles between the error-corrected vector and the eight primary color patches. When the vector norm of the error-corrected vector is larger than those of the eight primary color patches, the quantization error vector is excluded from the error distribution process. The quantization error is also excluded when the angle between the eight primary color patches and the error-corrected vector is large. As a result, the proposed method generates a visually pleasing halftone pattern by taking all three color separations into account in a device-independent color space, and it reduces the smear artifact in boundary regions.
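A rough sketch of the exclusion rule, written in RGB with the eight cube corners as primaries and an arbitrary angle threshold (the paper works in a device-independent space with measured primaries, and its exact norm/angle tests may differ):

```python
# Sketch: vector error diffusion to eight primaries; the quantization error is
# dropped, not diffused, when the error-corrected vector grows longer than any
# primary or points too far away from the chosen primary. Thresholds are
# placeholders.
import numpy as np

PRIMARIES = np.array([[r, g, b] for r in (0, 1) for g in (0, 1) for b in (0, 1)], float)
FS = [((0, 1), 7 / 16), ((1, -1), 3 / 16), ((1, 0), 5 / 16), ((1, 1), 1 / 16)]

def vector_error_diffusion(rgb, angle_limit_deg=30.0):
    """rgb: HxWx3 float array in [0, 1]; returns an index map into PRIMARIES."""
    img = rgb.astype(np.float64).copy()
    h, w, _ = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    max_primary_norm = np.linalg.norm(PRIMARIES, axis=1).max()
    for y in range(h):
        for x in range(w):
            v = img[y, x]
            k = int(np.argmin(np.linalg.norm(PRIMARIES - v, axis=1)))
            out[y, x] = k
            err = v - PRIMARIES[k]
            # Exclusion tests: keep large accumulated errors from smearing
            # across object boundaries.
            norm_v = np.linalg.norm(v)
            norm_p = np.linalg.norm(PRIMARIES[k])
            too_long = norm_v > max_primary_norm
            too_far = False
            if norm_v > 1e-9 and norm_p > 1e-9:
                cosang = np.clip((v @ PRIMARIES[k]) / (norm_v * norm_p), -1.0, 1.0)
                too_far = np.degrees(np.arccos(cosang)) > angle_limit_deg
            if too_long or too_far:
                continue                      # discard the error instead of diffusing it
            for (dy, dx), wt in FS:
                r, c = y + dy, x + dx
                if 0 <= r < h and 0 <= c < w:
                    img[r, c] += err * wt
    return out
```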
Compression and Data
Compressible halftoning
We present a technique for converting continuous gray-scale images to halftone (black and white) images that lend themselves to lossless data compression with a compression factor of three or better. Our method uses novel halftone mask structures consisting of non-repeated threshold values. We have versions of both dispersed-dot and clustered-dot masks, which produce acceptable images for a variety of printers. Using the mask as a sort key allows us to reversibly rearrange the image pixels and partition them into groups with highly skewed distributions, allowing Huffman coding techniques to be applied. This gives compression ratios in the range of 3:1 to 10:1.
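A minimal sketch of the sort-key idea, using a white-noise permutation mask and zlib as stand-ins for the paper's designed dispersed/clustered-dot masks and Huffman coder: because encoder and decoder can rebuild the same mask, the reordering is reversible.

```python
# Sketch: halftone with a non-repeating threshold mask, then reorder the bits
# by the mask values so bits from similar thresholds sit together and compress
# well. The random mask and zlib are placeholders for the paper's masks and
# Huffman coder.
import numpy as np
import zlib

def compressible_halftone(gray, seed=0):
    """gray: HxW uint8 image; returns (halftone bits, compressed reordered stream)."""
    rng = np.random.default_rng(seed)
    n = gray.size
    mask = (rng.permutation(n) * 256.0 / n).reshape(gray.shape)  # non-repeated thresholds
    bits = (gray.astype(np.float64) > mask).astype(np.uint8)

    order = np.argsort(mask, axis=None)           # sort key shared with the decoder
    reordered = bits.ravel()[order]
    return bits, zlib.compress(np.packbits(reordered).tobytes(), 9)

def restore_halftone(compressed, seed, shape):
    """Rebuild the mask from the shared seed and undo the reordering."""
    rng = np.random.default_rng(seed)
    n = shape[0] * shape[1]
    mask = (rng.permutation(n) * 256.0 / n).reshape(shape)
    order = np.argsort(mask, axis=None)
    reordered = np.unpackbits(np.frombuffer(zlib.decompress(compressed), np.uint8))[:n]
    bits = np.empty(n, np.uint8)
    bits[order] = reordered
    return bits.reshape(shape)
```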
Compressible error diffusion
This paper presents a combined halftoning and compression approach, which achieves high quality halftones and compression ratios comparable to those of JBIG with much simpler encoding and decoding schemes. In the proposed algorithm, we halftone the input continuous-tone image with constraints in a controlled manner, so that we can decrease the entropy of the resulting halftones while maintaining a high level of visual image quality.
Image barcodes
The visually significant two-dimensional barcode (VSB), developed by Shaked et al., is a method for designing an information-carrying two-dimensional barcode that has the appearance of a given graphical entity, such as a company logo. The encoding and decoding of information using the VSB rely on a base image with very few gray levels (typically only two), which generally requires the image histogram to be bimodal. For continuous-tone images, such as digital photographs of individuals, the representation of tone or "shades of gray" is not only important for obtaining a pleasing rendition of the face; in most cases the VSB renders these images unrecognizable because it cannot represent true gray-tone variations. This paper extends the concept of the VSB to an image barcode (IBC). We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images such as those acquired with a digital camera. The encoding-decoding process is modeled as robust data transmission through a noisy print-scan channel that is explicitly modeled. The IBC supports a high information capacity that differentiates it from common hardcopy watermarks. The reason for the improved image quality over the VSB is a joint encoding/halftoning strategy based on a modified version of block error diffusion. Encoder stability, image quality versus information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.
Image rendering for digital fax
Guotong Feng, Michael G. Fuchs, Charles A. Bouman
Conventional halftoning methods such as error diffusion and ordered dithering are poorly suited to the compression of halftone images using the baseline fax compression schemes CCITT G3 and G4. This paper proposes an efficient and flexible solution for binary representation of mixed-content documents using CCITT G3/G4 compression. The solution includes two variations, which we refer to as FastFax and ReadableFax. FastFax performs edge detection and text detection by applying locally adaptive binary thresholding and combines the two detection results. The FastFax algorithm produces an accurate representation of binary mixed document content with high compressibility under CCITT G3/G4 compression. ReadableFax is based on FastFax and applies clustered-dot screening to background and halftone regions to enhance graphic content. Both methods provide accurate representation of image content while allowing substantial compressibility, and they provide a tradeoff between representation quality and bit rate.
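A minimal sketch of the kind of locally adaptive binary thresholding FastFax builds on, with a placeholder window size and offset; the paper's edge/text detection, their combination, and ReadableFax's clustered-dot screening are not reproduced here.

```python
# Sketch: compare each pixel to its local mean minus a small offset so that
# text strokes survive uneven backgrounds. Window and offset are placeholders.
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(gray, window=15, offset=10):
    """gray: HxW uint8 image; returns a binary map (1 = foreground/black)."""
    local_mean = uniform_filter(gray.astype(np.float64), size=window)
    return (gray.astype(np.float64) < local_mean - offset).astype(np.uint8)
```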
Implementation Issues
Incorporating memory constraints in the design of color error diffusion halftoning systems
We analyze color error diffusion with memory constraints. Color error diffusion requires the storage of error terms in an error buffer. We explore memory reduction by representing the error buffer in the YIQ space and allocating a finite number of bits to each channel. The error buffer is represented in a packed-bit format, which constrains it to a desired bit-width. However, such a constraint degrades performance; the degradation is observed as an increase in perceived color quantization noise. We derive an optimal solution for the error filter coefficients that minimizes the visual effect of the memory constraint. Our formulation allows the filter coefficients to be matrix-valued, allowing cross-channel diffusion of color errors. The optimal filter depends on the color characteristics of the device, the viewing distance, and the specific bit allocations used for the error buffer and the rendering frame buffer.
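A minimal sketch of the buffer constraint itself, assuming a fixed per-channel bit allocation and a symmetric error range (the paper's contribution, optimizing matrix-valued error-filter coefficients against this constraint, is not shown):

```python
# Sketch: rotate an RGB error vector into YIQ, quantize each channel to an
# allocated number of bits for storage in the error buffer, and dequantize on
# read-back. Bit allocation and error range are placeholders.
import numpy as np

RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def quantize_error(err_rgb, bits=(5, 3, 3), rng=1.0):
    """err_rgb: length-3 RGB error; returns per-channel integer codes."""
    yiq = RGB2YIQ @ np.asarray(err_rgb, float)
    codes = []
    for v, b in zip(yiq, bits):
        levels = 2 ** b
        codes.append(int(np.clip(np.round((v / rng + 0.5) * (levels - 1)), 0, levels - 1)))
    return codes

def dequantize_error(codes, bits=(5, 3, 3), rng=1.0):
    """Inverse mapping from stored codes back to an approximate RGB error."""
    yiq = np.array([(q / (2 ** b - 1) - 0.5) * rng for q, b in zip(codes, bits)])
    return YIQ2RGB @ yiq
```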
Memory efficient error diffusion
Li and Allebach recently proposed parameter-trainable tone-dependent error diffusion (TDED), which yields outstanding halftone quality among error-diffusion-based algorithms. In TDED, the tone-dependent weights and thresholds, as well as a halftone bitmap for threshold modulation, are implemented as look-up tables (LUTs) that consume on-chip memory. In addition, the diffused errors must be buffered in on-chip memory and, in most cases, transferred to off-chip memory. However, off-chip memory access considerably degrades system performance. In this paper, we propose two approaches to improve memory efficiency. First, we use deterministic bit flipping to replace threshold modulation and linearize the weights and thresholds of TDED. This reduces the memory requirement to a few constants, rather than full LUTs, and generates halftones whose quality is nearly indistinguishable from that of standard TDED. Second, we propose a block-based processing strategy that significantly reduces off-chip memory access. We devise a novel scan path that enables our algorithm to process any input image block by block without producing block-boundary artifacts. Special filters are designed and optimized for the block diagonals so that the resulting halftone quality is comparable to that of standard TDED.
Practical issues in color inkjet halftoning
Even Toned Screening has evolved from a research project into a practical module for halftoning on color inkjet printers, with many commercial and free-software users worldwide. Feedback from these users has motivated tuning and other modifications to make the algorithm more practical. This paper discusses both the core algorithms and the practical issues involved in driving real printers for real users. The specific issues include non-square aspect ratios, interaction between dither microstructure and weaving patterns, multilevel dot generation, processing speed, and interactions between microstructures in overlapping planes.
Posters
Semi-automated segmentation of microbes in color images
Chandankumar K. Reddy, Feng-I Liu, Frank B. Dazzo
The goal of this work is to develop a system that can semi-automate the detection of multicolored foreground objects in digitized color images that also contain complex and very noisy backgrounds. Although this is a general problem of color image segmentation, our application is in microbiology, where various colored stains are used to reveal information about microbes without cultivation. Instead of providing a simple threshold, the proposed system offers an interactive environment in which the user chooses multiple sample points to define the range of color pixels comprising the foreground microbes of interest. The system then uses the color and spatial distances to these target points to segment the microbes from the confusing background of pixels whose RGB values lie outside the newly defined range, and finally finds each cell's boundary using region growing and mathematical morphology. Other image processing methods are also applied to enhance the resulting image, which contains the colored microbes against a noise-free background. The prototype performs with 98% accuracy on a test set compared to ground-truth data. The system described here will have many applications in image processing and analysis where one needs to segment typical pixel regions of similar but non-identical colors.
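A minimal sketch of the sample-point idea, using a plain RGB distance threshold and a small morphological clean-up as placeholders; the paper's combined color/spatial distance and its region-growing boundary extraction are not reproduced.

```python
# Sketch: keep every pixel within a chosen RGB distance of any user-clicked
# sample color, then clean the mask with opening/closing. Threshold and
# structuring element are placeholders.
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def segment_by_samples(rgb, sample_points, tol=30.0):
    """rgb: HxWx3 uint8 image; sample_points: list of (row, col) user clicks."""
    img = rgb.astype(np.float64)
    samples = np.array([img[r, c] for r, c in sample_points])            # K x 3
    dist = np.linalg.norm(img[:, :, None, :] - samples[None, None, :, :], axis=-1)
    mask = dist.min(axis=2) <= tol
    mask = binary_opening(mask, structure=np.ones((3, 3)))               # remove speckle
    mask = binary_closing(mask, structure=np.ones((3, 3)))               # fill small holes
    return mask
```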
Neutralizing paintings with a projector
A painting needs illumination to be visible. If the illumination is provided by an LCD data projector, different regions of the painting can be illuminated separately. Modern projectors have large color gamuts and can provide a wide range of illumination effects. One possible effect is to project a captured digital image of the painting onto the painting itself; the resulting superposition of like colors intensifies the contrast and saturation of the image. The opposite effect is to project the complement of the image onto the painting to "neutralize" it. When this is carefully done, with correct registration, the painting fades into a nearly uniform gray. Although the idea is simple, in practice it is not trivial to accurately determine the complementary color for each part of the painting, even when it is captured with a calibrated digital camera. This research examines the problems of accurately capturing the image, combining the projector gamut with typical paint reflectances, and determining the available range of complementary projector colors and the final lightness of the neutral image. The work was initially inspired by a student's fine-art project in which computer animation was superimposed on a painting, bringing it to life.
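A minimal sketch of the complement under a deliberately crude model (a single display gamma, perfect registration, and no gamut or paint-reflectance modelling, which are exactly the issues the paper investigates):

```python
# Sketch: compute the frame to project so that the projected light plus the
# painting's captured appearance approaches a uniform gray. The gamma value
# and the linear superposition assumption are placeholders.
import numpy as np

def neutralizing_pattern(captured_rgb, gamma=2.2):
    """captured_rgb: HxWx3 uint8 capture of the painting; returns the 8-bit frame to project."""
    linear = (captured_rgb.astype(np.float64) / 255.0) ** gamma   # approximate linear light
    complement = np.clip(1.0 - linear, 0.0, 1.0)                  # complete each pixel toward white
    return np.round(255.0 * complement ** (1.0 / gamma)).astype(np.uint8)
```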