Proceedings Volume 3300

Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts III


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 2 January 1998
Contents: 9 Sessions, 46 Papers, 0 Presentations
Conference: Photonics West '98 Electronic Imaging 1998
Volume Number: 3300

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Electronic Publishing I
  • Electronic Publishing II
  • Color Systems
  • Color Reproduction
  • Color Calibration and Measurement
  • Color Reproduction
  • Imaging Systems
  • Imaging for Peripherals
  • Color Calibration and Measurement
  • Halftoning
  • Poster Session
Electronic Publishing I
Future of printing: changes and challenges, technologies and markets
Helmut Kipphan
Digitization within the graphic arts industry is described, explaining how it is improving and changing print production strategies and which new kinds of print production systems have been developed or can be expected. The relationship between printed and electronic media is analyzed and a positioning for the next century is given. The state of the art of conventional printing technologies, especially those using direct imaging techniques, and their position within the digital workflow are briefly described. Multicolor non-impact printing systems are explained on the basis of general design criteria and linked to existing and newly announced equipment. The use of high-tech components for building successful systems with high reliability, high quality, and low production costs is illustrated with examples. Digital printing systems open many opportunities in print production: distributed printing, personalization, and print and book on demand are explained as examples. The overview of the various printing technologies and their positioning with respect to quality and productivity leads to a scenario in which printed media retain an important position, even in the distant future.
Future of electronic printing
Since the mid-1970s, electronic printing has been available to businesses. Its growth has been truly amazing. Original predictions of a need for no more than 500 laser printers have now given way to millions of printers of all types, from home and personal office to the print shop. This paper looks at what the future of electronic printing might hold. Any prognostication of the future is risky; the intent here is to examine what has happened and what is happening in the marketplace and thus gain some view of the future. Will the printer vendors of today be the printer vendors of tomorrow? What will be the market-leading characteristics of future electronic printers? Can we measure value to the first order? We briefly explore these and other questions in order to form some idea of where the business is going.
Electronic Publishing II
Historical documents reveal their secrets: the digiterati look past the darkness
Robert H. Johnston, Roger L. Easton Jr., Keith T. Knox
Advances in digital imaging technology have presented new opportunities for scholars to better study ancient texts through image clarification and enhancement. Textual material exists over a period of at least 5000 years, written on a great variety of materials including leather, clay, stone, papyrus, copper and, of course, paper. Much of the knowledge we have of the past emerges through this textual material. The written material provides us with the knowledge and traditions of our past. The histories of religion, technology, science, medicine, astronomy, cartography and commerce are ensconced in the ancient writings left by the civilizations and traditions of the past. Not only is knowledge transmitted from this ancient past, but records and events of the past remain available for use in current times. Records of astronomical occurrences, migratory patterns, climatic incidents, warfare, visions, predictions, pestilence, legends, myths, histories, heroes and despots all play a role in understanding current and future schemes and strategies. Our knowledge base builds on the past, and the more we learn about the past, the better we can understand and deal with the present and plan for the future.
Walls are falling: electronic publishing and the cybrary
Giuliana A. Lavendel
Electronic publishing rightfully belongs with that elusive puzzle academics and government officials call 'The Digital Library,' which focuses almost exclusively on the storage and retrieval aspects of information management. But electronic publishing is also part of the new 'knowledge ecology' of how information is created, distilled, disseminated, and utilized. It provides a staple which will remain tied to printers and paper for the foreseeable future; this is because human factors intervene to limit how much can be learned and retained from what we hear or see on a screen, no matter how richly pixel-endowed. This presentation by a longtime information manager/librarian in a Fortune 100 corporation touches on the dilemmas faced by her profession, by publishers, and by all participants in the information market, which now represents 60% of the GDP in the United States.
Special color design for homepages to gain attention of the user
Werner K. Sobotka
The paper deals with the colors used in homepage design and their sensitivity for the human eye. Different homepages were investigated and their color reproduction process analyzed to determine which types of colors are useful in homepage design and which colors fail to attract the viewer's attention. Some investigations were also carried out to minimize the time required for different colors to appear on the TV monitor. Another important aspect of a good Internet page is the size of a colored picture. At the end of the paper, different factors are pointed out which need to be considered for good homepage design, especially for color images.
Structure and navigation for electronic publishing
John Tillinghast, Giordano B. Beretta
The sudden explosion of the World Wide Web as a new publication medium has given a dramatic boost to the electronic publishing industry, which previously was a limited market centered around CD-ROMs and on-line databases. While the phenomenon has parallels to the advent of the tabloid press in the middle of the last century, the electronic nature of the medium brings with it the typical characteristics of fourth-wave media, namely an acceleration in propagation speed and in the volume of information. Consequently, e-publications are even flatter than print media; Shakespeare's Romeo and Juliet shares the same computer screen with a home-made plagiarized copy of Deep Throat. The most touted tool for locating useful information on the World Wide Web is the search engine. However, due to the medium's flatness, the sought information is drowned in a sea of useless information. A better solution is to build tools that allow authors to structure information so that it can easily be navigated. We experimented with the use of ontologies as a tool to formulate structures for information about a specific topic, so that related concepts are placed in adjacent locations and can easily be navigated using simple and ergonomic user models. We describe our effort in building a World Wide Web based photo album that is shared among a small network of people.
Digital Distribution of Advertising for Publications (DDAP): a graphic arts prototype of electronic intermedia publishing (EIP)
Patrice M. Dunn
The Digital Distribution of Advertising for Publications (DDAP) is a graphic arts industry prototype of Electronic Intermedia Publishing (EIP). EIP is a strategic, multi-industrial concept that seeks to enable the capture and input of volumes of data (i.e., both raster and object-oriented data -- as well as the latter's antecedent, vector data -- color data and black-and-white data) from a multiplicity of devices; then flowing, controlling, manipulating, modifying, storing, retrieving, transmitting, and shipping that data through an industrial process for output to a multiplicity of output devices (e.g., ink on paper, toner on paper, bits and bytes on CD-ROM, the Internet, multimedia, HDTV, etc.). As the technical requirements of the print medium are among the most rigorous in the intermedia milieu, the DDAP prototype addresses some of the most challenging issues faced in Electronic Intermedia Publishing.
Graphic arts color standards update: 1998
Graphic arts related color standards activities are being pursued by ISO Technical Committee 130 (TC130), Graphic technology, and ANSI Committee CGATS (Committee for Graphic Arts Technologies Standards). The content and application of the existing graphic arts color related standards are summarized. These are focused primarily on issues of metrology, color characterization targets, and reference images. The ongoing work has grown out of the need to understand the applicability of color management in the graphic arts workflow and the associated meaning of CMYK data. This has resulted in a growing understanding that, through definition and characterization of the output printing process, CMYK data can be effectively 'device independent,' although possibly not gamut independent. The ongoing work relating to the definition and characterization of various output printing processes is also summarized.
Color Systems
Color conversion using neural networks
Neural network methods are described for color coordinate conversion between color systems. We present solutions for two problems: (1) conversion between two color-specification systems and (2) conversion between a color-specification system and a device coordinate system. First, we discuss the color-notation conversion between the Munsell and CIE color systems. The conversion algorithms are developed for both directions, Munsell-to-L*a*b* and L*a*b*-to-Munsell. Second, we discuss a neural network method for color reproduction on a printer. The color reproduction problem on a printer using more than four inks is treated as the problem of controlling an unknown system. Practical algorithms are presented for realizing the mapping from the L*a*b* space to the CMYK space. Moreover, the method is applied to color control using CMYK plus light cyan and light magenta.
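As a rough illustration of the second problem (not the authors' implementation), a small feed-forward network can be trained to approximate a printer's L*a*b*-to-CMYK mapping from measured patch pairs; the patch data below are random placeholders, and MLPRegressor is simply one convenient off-the-shelf network.

    # Minimal sketch: learn an inverse printer model Lab -> CMYK from patch pairs.
    # The training arrays are hypothetical placeholders, not measured data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    lab_patches = np.random.rand(500, 3) * [100, 200, 200] - [0, 100, 100]  # CIELAB of printed patches
    cmyk_patches = np.random.rand(500, 4)                                   # CMYK values that produced them

    net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(lab_patches, cmyk_patches)                    # train the Lab -> CMYK mapping

    target_lab = np.array([[50.0, 10.0, -20.0]])
    cmyk = np.clip(net.predict(target_lab), 0.0, 1.0)     # CMYK estimate for the target color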
Insight into the solutions of the Neugebauer equations
Marc F. Mahy
Until now, the inversion of the Neugebauer model has been performed using iterative methods. In this way, at most one colorant combination is obtained to render a given color. In this publication, however, it is demonstrated that there are in general multiple solutions of the Neugebauer equations for three colorants. Several interesting characteristics of the printing process are related to the occurrence of multiple solutions. These characteristics are first presented for the Neugebauer equations for two colorants and two color values, because this model is mathematically quite simple. Subsequently, the results are extended to the Neugebauer equations for three colorants and three color values. All three characteristics are related to the so-called natural boundaries. Natural boundaries are extra surfaces in colorant space that for some color reproduction devices should be taken into account in color gamut calculations. They are especially important if there are multiple colorant combinations inside the colorant domain that render a color. In colorant space, the colorant combinations with which a given color can be obtained are laid out symmetrically with respect to the natural boundaries. If these boundaries are transformed to color space, they divide the color space into several regions in such a way that the colors in each region can be obtained with a constant number of colorant combinations.
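For reference, the forward two-colorant Neugebauer model discussed above can be sketched as follows (the primary values are illustrative only). Inverting it means solving this nonlinear system for the colorant pair given a target color, which is where multiple solutions can arise.

    # Two-colorant Neugebauer model: the predicted color is the Demichel-weighted
    # sum of the four Neugebauer primaries (paper, ink1, ink2, overprint).
    import numpy as np

    def neugebauer_2ink(c1, c2, primaries):
        """c1, c2: fractional area coverages; primaries: measured values (e.g. XYZ)."""
        w_paper = (1 - c1) * (1 - c2)
        w_1 = c1 * (1 - c2)
        w_2 = (1 - c1) * c2
        w_12 = c1 * c2
        return (w_paper * primaries["paper"] + w_1 * primaries["ink1"]
                + w_2 * primaries["ink2"] + w_12 * primaries["overprint"])

    primaries = {"paper": np.array([95.0, 100.0, 108.0]),      # illustrative XYZ values
                 "ink1": np.array([20.0, 30.0, 70.0]),
                 "ink2": np.array([60.0, 40.0, 20.0]),
                 "overprint": np.array([15.0, 12.0, 18.0])}
    print(neugebauer_2ink(0.4, 0.7, primaries))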
ColorSync 2.5: building a color platform
Steve Swen
Since ColorSync 1.0 was introduced in 1993, color management systems (CMS) have become a standard building block of modern operating systems. However, because a CMS is an enabling technology, without other components in the system taking advantage of its capabilities the end users may not experience the best results from this technology. This paper examines the approach that ColorSync takes to build a complete color platform, which includes the following components: a color management system with an open framework, a color-savvy imaging model, easy ways to navigate and select color, device setup and calibration, scriptable interfaces, and a tightly integrated system. The paper points out the improvements over the previously released version.
Client/server approach to image capturing
Chris Tuijn, Earle Stokes
The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and still scenes), mid-range CCD scanners for desktop publishing and prepress applications, and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we abstract away the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions are less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction between the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network), and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different shapes the scanning applications (or modules) can take. In the last section, we briefly summarize the presented material and point out trends for future development.
Color Reproduction
Transforming an analytically defined color space to match psychophysically gained color distances
Ingeborg Tastl, Guenther R. Raidl
In this paper a new method for defining a transformation between a source color space and a more perceptually uniform color space is presented. The main idea is to use a three-dimensional free-form deformation to deform the source color space in such a way that the new distances between chosen color samples match psychophysically estimated data as closely as possible. This deformation of space is controlled via a set of control points placed on a three-dimensional grid. The essential task of finding suitable control point coordinates in the destination color space has been solved with an evolution strategy.
Finding constant hue surfaces in color space
Fritz Ebner, Mark D. Fairchild
A colorimetrically calibrated CRT display was used to measure constant perceptual hue surfaces in color space. Three hundred six points over fifteen equally spaced hue angles (every 24 degrees) in CIELAB color space were sampled. An average of 20 lightness-chroma combinations per reference hue plane was sampled. Thirty observers performed the matching task three times each. Intra-observer variation was used to weight mean observer hue matches for each of 306 colors. Analysis of perceived hue uniformity was performed in CIELAB and CIECAM97s color spaces. Other constant hue experimental results are analyzed and compared to data obtained here.
Further development of the analytical color gamut representation
This paper describes the analytical representation of the color gamut surfaces of arbitrary print processes. Such a method was published earlier, but has now been improved and tested against a number of different print processes. Moreover, an algorithm to determine the model parameters that reflect the characteristics of a considered print process is described.
Image-dependent gamut mapping using a variable anchor point
Shin Dong Kim, Cheol-Hee Lee, Kyeong-Man Kim, et al.
Currently many devices reproduce electronic images in a variety of ways. However, the colors that are reproduced are different from the original color due to the differences in the gamut between devices. In this paper, an image dependent gamut mapping method is proposed. This method clips the chroma while compensating for the change of lightness caused by the lightness scaling required for a reproduction gamut. In this paper, the anchor point, which is the color center point in the mapping, is set at a lower point than the conventional mapping method. As a result, this reduces the difference between the original image on the monitor and the results on the reproduction device. Our mapping algorithm is applied to the gamut mapping between the monitor and printer. Consequently, the printer output image is highly consistent with the corresponding monitor image.
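A minimal sketch of anchor-point chroma clipping in CIELAB, under the assumption of a toy gamut test (this is not the authors' exact algorithm): out-of-gamut colors are moved along the line toward an anchor on the lightness axis, and lowering the anchor lightness, as the paper proposes, changes the direction along which colors are pulled back to the reproduction gamut.

    # Sketch: clip an out-of-gamut Lab color toward an anchor point on the L* axis.
    import numpy as np

    def clip_toward_anchor(lab, anchor_L, in_gamut, steps=100):
        """Move lab along the line toward (anchor_L, 0, 0) until in_gamut() holds."""
        anchor = np.array([anchor_L, 0.0, 0.0])
        for t in np.linspace(0.0, 1.0, steps):
            candidate = (1 - t) * np.asarray(lab, float) + t * anchor
            if in_gamut(candidate):
                return candidate
        return anchor

    # toy gamut test: chroma must stay below a lightness-dependent limit
    in_gamut = lambda lab: np.hypot(lab[1], lab[2]) <= 60.0 * (1 - abs(lab[0] - 50) / 50)
    print(clip_toward_anchor([70.0, 80.0, 10.0], anchor_L=40.0, in_gamut=in_gamut))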
White point adaptation revisited
Randall G. Guay
Limitations in the International Color Consortium (ICC) method for adapting from one white point to another are illustrated and discussed. A new method, called 'inferred spectra,' is proposed which gives a better prediction of color viewed under various illuminants.
Color Calibration and Measurement
Calculation of best CMY basis functions
Werner Praefcke
We consider the Neugebauer model for printing of colors in a context where the print output is to be viewed under a set of different illuminants. In order to yield the minimal color error we present an iterative scheme to calculate the spectral characteristic of appropriate CMY basis functions. It is shown that an improvement of color quality regarding varying illuminants can be achieved by only slight modifications of standard Cromalin basis dyes.
Color Reproduction
Color appearance matching in hard-copy and soft-copy images in different office environments
Yoshinobu Shiraiwa, Yumiko Hidaka, Toshiyuki Mizuno, et al.
This paper describes a method for matching the color appearances of hard copies and soft copies in ordinary office environments where lighting conditions usually vary. To develop this method, we studied ways of easily calculating the colorimetric values of hard copies under an arbitrary illuminant. In this report, we converted the colorimetric values of hard copies taken under a reference illuminant to the colorimetric values taken under a viewing illuminant by using a 3-by-3 matrix. The conversion matrix CR under an arbitrary illuminant is calculated using the following equation: CR = IH × CRh + (1 − IH) × CRl. Here, CRh represents a conversion matrix for a high color-rendering illuminant, and CRl represents a conversion matrix for a low color-rendering illuminant. IH is the coefficient that represents the mixture ratio of high and low color-rendering illuminants. We calculated the colorimetric values of a hard copy under an arbitrary illuminant by using the above method, performed a color adaptation correction, and displayed the soft copy on a monitor. The color appearance of the soft copy matched that of the hard copy well under various illuminants.
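A minimal numerical sketch of the interpolation above; the matrices and the mixture coefficient IH are placeholders, not the paper's measured values.

    # Sketch: blend the conversion matrices for high and low color-rendering
    # illuminants and apply the result to a hardcopy's reference-illuminant XYZ.
    import numpy as np

    CR_h = np.eye(3) * 1.02          # hypothetical matrix for a high color-rendering lamp
    CR_l = np.eye(3) * 0.95          # hypothetical matrix for a low color-rendering lamp
    IH = 0.7                         # estimated high/low mixture ratio of the lighting

    CR = IH * CR_h + (1.0 - IH) * CR_l            # CR = IH*CRh + (1-IH)*CRl
    xyz_reference = np.array([41.2, 35.8, 18.1])  # hardcopy XYZ under the reference illuminant
    xyz_viewing = CR @ xyz_reference              # estimated XYZ under the viewing illuminant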
Sharp transformations for color appearance
Paul M. Hubel, Graham D. Finlayson
In this paper we describe the benefit of using a sharp transformation in the context of a color appearance model. The proposed scheme is shown to perform better than other models for the limited set of conditions tested. The testing method is similar to that described by Braun and Fairchild involving paired comparisons between prints under different illumination conditions and images calculated by the models for rendering on a CRT. Our testing shows that using a model that employs spectral sharpening for illuminant color compensation achieves better results than previous methods.
Multiresolution color correction
Raja Balasubramanian, Ricardo L. de Queiroz, Zhigang Fan
In this paper, a color correction system is embedded into a multiresolution representation with the goal of reducing the complexity of 3D look-up table transformations. A framework is assumed wherein the image undergoes a multiresolution decomposition, e.g. discrete wavelet transform, for the purpose of image compression or other processing. After the image is reconstructed from its multiresolution representation, color correction is usually required for rendering to a specific device. The color correction process is divided into two phases: a complex multidimensional transform (Phase 1), and a series of essentially 1-D transforms (Phase 2). Phase 1 correction is then moved within the multiresolution reconstruction process in such a way that a small subset of the image samples undergoes the multidimensional correction. Phase 2 correction is then applied to all image samples after the image is reconstructed to its full resolution. The recently proposed spatial CIELAB model is used to evaluate the algorithm. The computational cost incurred by the color correction is considerably reduced, with little loss in image quality.
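A rough sketch of the idea, not the authors' implementation: the expensive multidimensional correction (phase 1) is applied only to the small low-resolution approximation inside the reconstruction, and the cheap per-channel correction (phase 2) is applied once the image is back at full resolution. Here a naive 2x pixel replication stands in for the true wavelet synthesis, and phase1_3d/phase2_1d are hypothetical placeholders for a 3-D LUT and per-channel curves.

    import numpy as np

    def phase1_3d(img):                      # placeholder multidimensional (cross-channel) transform
        return img @ np.array([[0.90, 0.10, 0.00],
                               [0.05, 0.90, 0.05],
                               [0.00, 0.10, 0.90]])

    def phase2_1d(img):                      # placeholder per-channel 1-D correction
        return np.clip(img, 0, 1) ** (1 / 2.2)

    def reconstruct_with_color_correction(approx, detail):
        approx = phase1_3d(approx)           # phase 1 touches only the small approximation band
        full = np.repeat(np.repeat(approx, 2, axis=0), 2, axis=1) + detail
        return phase2_1d(full)               # phase 2 runs on every full-resolution sample

    rng = np.random.default_rng(0)
    approx = rng.random((4, 4, 3))           # coarse band of a toy 2-level decomposition
    detail = rng.random((8, 8, 3)) * 0.05    # stand-in for the reconstructed detail bands
    out = reconstruct_with_color_correction(approx, detail)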
Adaptive color rendering for images containing flesh-tone content
Minghui Xia, Eli S. Saber, A. Murat Tekalp, et al.
A novel color rendering algorithm for printing images with flesh-tone content is addressed in this paper. Instead of rendering the whole image with one color rendering dictionary (CRD), two CRDs, a global CRD and a flesh-tone CRD, are employed to correct the image adaptively. First, the optimal CRD for flesh tone is designed by densely sampling the flesh-tone region of color space and selecting special Neugebauer primaries. Then, a given image is segmented into flesh-tone and non-flesh-tone regions. The flesh-tone region is corrected by utilizing the flesh-tone CRD while the remainder of the image is corrected by employing the global CRD. The resulting cyan, magenta, yellow, and black separation can be sent directly to the printer or can be compressed and stored for later use. The quality of the flesh-tone content in an image is improved by using the optimally designed flesh-tone CRD while the rest of the image is corrected by the global CRD.
Automatic favorite-color control for reference color
Eung-Joo Lee, Ki-Ho Hyun, Yeong-Ho Ha
The color control of reproduced images has been a critical problem in TV systems. The viewer can adjust the color controls at the receiver for optimal color reproduction, but frequent color adjustment is the most common problem experienced by the viewer. In this paper, we propose an automatic favorite-color control system which reproduces the favorite colors for the viewer on demand. The system consists of a phase detector that detects the favorite colors in real time from the color burst signal and the color signal, and comparators that discriminate among the types of favorite color. The proposed system reproduces flesh tone, blue, and green. In the proposed algorithm, the variation range of the phase detector output voltage is minimized with respect to changes in favorite-color saturation, and the color signal phase is readjusted from the color burst signal. Thus, the favorite colors are easily distinguished from other colors without overlapping correction ranges, and the system provides reference colors to the viewer.
Imaging Systems
Obtaining and reproduction of accurate color images based on human perception
Yoichi Miyake, Yasuaki Yokoyama
We have developed a high-definition color imaging system for digital artworks. The system consists of a multiband camera, a personal computer, and a projection-type monitor. The multiband camera is composed of five band filters and a single-chip CCD camera with 2048 by 2048 pixels. On the basis of principal component analysis and the Wiener estimation method, the reflectance spectra of each pixel of the object were estimated from the five band images taken by the CCD camera. Images of paintings are reproduced on both the projection-type monitor and a CRT with consideration of color appearance. In this paper, the method to estimate the reflectance spectra of paintings from five band images is described, and the color reproduction of paintings based on color appearance models is also discussed.
Characterization of a color-sensitive photodetector implemented in a BiCMOS technology
The operation and the colorimetric characterization of a buried triple p-n junction (BTJ) tristimulus detector are presented. A method defining a linear transformation between the detector color space and the C.I.E. standard is proposed. With a least-squares fit to the third order, a mean color difference of 2.15 CIELAB units between the detector response and the C.I.E. specification is predicted. The temperature effects on the detector and on the accuracy of the linear transformation are studied between minus 60 degrees Celsius and 60 degrees Celsius. The color shifts in the detector specifications due to temperature variations are smaller than 0.5.
Color filter arrays based on mutually exclusive blue noise patterns
The ordered color filter arrays (CFAs) used in single-sensor color digital still cameras introduce distracting color artifacts. These artifacts are due to the phase-shifted, aliased signals introduced by the sparse sampling of the CFAs. This work reports the results of an investigation into the possibility of using random patterns as a CFA for single-sensor digital still cameras. From a single blue-noise mask pattern, three mutually exclusive, random CFAs are constructed representing the red, green, and blue color filters. An edge-adaptive method consisting of missing-pixel edge detection and boundary-sensitive interpolation is employed to reconstruct the entire image. Experiments have shown that the random CFA alleviates the problem of the low-frequency color banding associated with ordered arrays. This method also has the advantage of better preserving color-free, sharp neutral edges, and results in less deviation from neutral on high-frequency, monochrome information.
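A minimal sketch of one way such mutually exclusive filter sets could be derived (an assumption, not the authors' construction): the ranks of a single mask are split into three equal classes, each class becoming the pixel locations of one color filter. A white-noise permutation stands in here for a real blue-noise mask.

    import numpy as np

    rng = np.random.default_rng(1)
    mask = rng.permutation(64 * 64).reshape(64, 64)   # placeholder ranks; a real design would
                                                      # use an actual blue-noise mask
    third = mask.size // 3
    cfa = np.empty(mask.shape, dtype="<U1")
    cfa[mask < third] = "R"
    cfa[(mask >= third) & (mask < 2 * third)] = "G"
    cfa[mask >= 2 * third] = "B"                      # each pixel carries exactly one filter

    # the three filter sets are mutually exclusive and jointly cover every pixel
    assert set(np.unique(cfa)) == {"R", "G", "B"}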
Effect of user controls on CRT monitor characteristics
Tatsuya Deguchi, Naoya Katoh
CRT monitors are widely used for desktop publishing (DTP) and to view images on the Internet. The color images on the computer graphic display can be printed out or displayed on other monitors through the Internet. Here, color matching between the original image on the monitor and the printed image or the image displayed on other monitors is very important. The current color management systems (CMSs) are useful for the color matching. These CMSs utilize device profiles, such as ICC profiles, in which color characteristic information is stored. These profiles are generated by device characterization. Thus, an accurate characterization of the monitor is essential for better color matching. According to the ICC specification, monitor characteristics can be described by the chromaticity and the tone reproduction curves (TRCs) of the red, green and blue channels. Although the monitor profiles on the current CMSs are based on the assumption that the monitors are maintained at the default adjustment and are viewed in a dark room, in fact the user often adjusts the settings of contrast/brightness and usually views the monitor under ambient light. In this paper, we investigated the effect of user controls on CRT monitor characteristics. We reconsidered the two characteristics of the CRT monitor: the effect of user controls on the TRCs and interaction among the channels. We compared several models of the TRC and various matrices to transform linearized RGB to CIE 1931 XYZ with different settings of the user controls. Based on these experimental results, we propose a method for more accurate monitor characterization.
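A minimal sketch of the ICC-style monitor model referred to above: a per-channel tone reproduction curve linearizes the DAC values, and a 3-by-3 matrix maps linear RGB to CIE 1931 XYZ. The gain, offset, gamma, and matrix values below are illustrative stand-ins, not measured data.

    import numpy as np

    def trc(v, gain=1.0, offset=0.0, gamma=2.2):
        """Gain-offset-gamma tone reproduction curve for one channel, v in [0, 1]."""
        return np.clip(gain * v + offset, 0.0, None) ** gamma

    M_rgb_to_xyz = np.array([[0.4124, 0.3576, 0.1805],   # sRGB-like primaries as a stand-in
                             [0.2126, 0.7152, 0.0722],
                             [0.0193, 0.1192, 0.9505]])

    def monitor_to_xyz(rgb_dac):
        linear = np.array([trc(c) for c in rgb_dac])     # per-channel linearization
        return M_rgb_to_xyz @ linear                     # 3x3 matrix to XYZ

    print(monitor_to_xyz([0.5, 0.5, 0.5]))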
Imaging for Peripherals
Pixel bit-depth increase by bit replication
Robert A. Ulichney, Shiufun Cheung
In many applications, such as inverse dithering, it is necessary to increase the pixel bit-depth of images by expanding q-bit integer values to m-bit integer values (m greater than q). This paper describes a simple and efficient method that uses bit replication, instead of conventional multiplication, to achieve this expansion. First, we show that the optimal number of repetitions is given by ceiling (m/q) and that the method is equivalent to multiplication by the ideal gain when m/q is an integer. We then demonstrate that, in the case where m/q is not an integer, truncating the fraction bits to the right of the decimal point will lead to zero average error. The paper also includes two suggestions for implementing the bit-replication process, both of which have a vast complexity advantage over a multiplier. Two examples are given at the end to illustrate the bit-replication process in action.
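A minimal sketch of the bit-replication expansion described above: the q-bit pattern is repeated ceiling(m/q) times and the result is truncated to the m most significant bits, approximating multiplication by the ideal gain (2^m − 1)/(2^q − 1).

    import math

    def bit_replicate(value, q, m):
        reps = math.ceil(m / q)                 # number of repetitions
        wide = 0
        for _ in range(reps):
            wide = (wide << q) | value          # concatenate copies of the q-bit pattern
        return wide >> (reps * q - m)           # keep the m most significant bits

    # expand a 4-bit value to 8 bits (m/q integer -> matches the ideal gain exactly)
    assert bit_replicate(0xA, 4, 8) == 0xAA
    # expand a 3-bit value to 8 bits (m/q not an integer)
    print(bit_replicate(0b101, 3, 8))           # -> 182, i.e. 0b10110110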
Complexity reduction on two-dimensional convolutions for image processing
Luca Chiarabini, Jonathan Yen
Presented here is a method for reducing the computational complexity of two-dimensional linear convolutions used in image processing like binary image scaling. This method is a hybrid of convolving at run-time and convolving by table lookup. The convolution step in image processing usually calculates a weighted average of an area of the input image by calculating the entry-by-entry multiplication of the input pixels with a weight table. This method partitions the calculations in the convolution step and stores pre-calculated partial results in lookup tables. When the convolution step takes place, a binary indexing is used to retrieve the partial results and the final result is obtained by summing up the partial results. A line cache and a double buffering scheme are designed to reduce memory access in table lookup. Space and time complexities are analyzed and compared to the conventional two-dimensional linear convolutions. We demonstrate that an order of magnitude reduction in the computational cost can be achieved. Examples, test images and performance data are provided.
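A small sketch of the general table-lookup idea under stated assumptions (not necessarily the authors' exact partitioning): for a binary image, each window row's contribution to the weighted sum depends only on its K-bit pattern, so it can be precomputed once and the convolution reduces to one table read and one addition per window row.

    import numpy as np

    def build_row_tables(kernel):
        """Precompute, for each kernel row, the weighted sum of every K-bit pattern."""
        K = kernel.shape[1]
        tables = np.zeros((kernel.shape[0], 1 << K))
        for r in range(kernel.shape[0]):
            for pattern in range(1 << K):
                bits = [(pattern >> (K - 1 - c)) & 1 for c in range(K)]
                tables[r, pattern] = np.dot(bits, kernel[r])
        return tables

    def convolve_binary(img, kernel):
        K = kernel.shape[0]
        tables = build_row_tables(kernel)
        out = np.zeros((img.shape[0] - K + 1, img.shape[1] - K + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                total = 0.0
                for r in range(K):
                    pattern = 0
                    for c in range(K):                 # pack the window row into a table index
                        pattern = (pattern << 1) | int(img[y + r, x + c])
                    total += tables[r, pattern]        # one lookup per window row
                out[y, x] = total
        return out

    img = (np.random.default_rng(0).random((16, 16)) > 0.5).astype(np.uint8)
    kernel = np.ones((3, 3)) / 9.0
    print(convolve_binary(img, kernel)[0, 0])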
Degree of quantization and spatial addressability trade-offs in the perceived quality of color images
Alexander A. Vaysman, Mark D. Fairchild
An investigation of the tradeoffs between the number of quantization levels and the spatial addressability of printed color images was performed. Error diffusion in CMYK color space was used to quantize the images. Quantized images were printed on a single color printer simulating different spatial addressabilities. A psychophysical experiment was conducted to evaluate the perceived image quality (IQ) of the prints. Conclusions on the tradeoffs were drawn from the subsequent statistical analysis. It was determined that the tradeoffs were scene dependent, with pictorial scenes able to sustain a greater reduction in addressability than graphics without a decrease in perceived IQ. The results of the experiment demonstrated that printing pictorial scenes at 5 bits per color (bpc) per pixel and 100 dots per inch (dpi), and graphics at 3 bpc per pixel and 300 dpi, was sufficient to match the perceived IQ of the best possible combination for the given system (8 bpc at 300 dpi) at normal viewing distance. If a single bpc-dpi combination were to be named as the optimum, it would have to be 3 bpc at 300 dpi.
Fast autocorrelation-based context template selection scheme for lossless compression of halftone images
Koen N.A. Denecker, Steven Van Assche, Ignace L. Lemahieu
Recently, new applications such as printing on demand and personalized printing have arisen where lossless halftone image compression can be useful for increasing transmission speed and lowering storage costs. State-of-the-art lossless bilevel image compression schemes like JBIG only achieve moderate compression ratios due to the periodic dot structure of classical halftones. In this paper, we present two improvements on the context modeling scheme. Firstly, we adapt the context template to the periodic structure of the halftone image. This is a non-trivial problem for which we propose a fast close-to-optimal context selection scheme based on the calculation and sorting of the autocorrelation function on a part of the image. Secondly, increasing the order of the model produces an additional gain in compression. We have experimented with classical halftones of different resolutions and sizes, screened under different angles, and produced by either a digital halftoning algorithm or a digital scanner. The additional coding time for context selection is negligible, while the decoding time remains the same. The global improvement with respect to JBIG (which has features like one adaptive pixel, histogram rescaling and multiresolution decomposition) is about 30% to 65%, depending on the image.
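A rough sketch of the selection idea (assumptions, not the paper's exact procedure): the autocorrelation of a sample region of the halftone is computed via the FFT, candidate offsets are sorted by correlation strength, and the strongest causal offsets form the context template.

    import numpy as np

    def select_context(sample, n_pixels=10, max_offset=16):
        f = np.fft.fft2(sample - sample.mean())
        acorr = np.real(np.fft.ifft2(f * np.conj(f)))        # circular autocorrelation
        candidates = []
        for dy in range(0, max_offset):
            for dx in range(-max_offset, max_offset):
                if dy == 0 and dx <= 0:
                    continue                                  # keep only causal offsets
                value = acorr[dy % sample.shape[0], dx % sample.shape[1]]
                candidates.append((value, (dy, dx)))
        candidates.sort(reverse=True, key=lambda t: t[0])     # strongest correlations first
        return [off for _, off in candidates[:n_pixels]]

    halftone = (np.random.default_rng(0).random((128, 128)) > 0.7).astype(float)
    print(select_context(halftone))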
Color Calibration and Measurement
Calibrating spectrophotometers using neural networks
Hsiao-Pei Lee, Guoping Qiu, Ming Ronnier Luo
This paper describes a neural network based method to improve inter-instrument agreement. For each instrument, a three-layer feed-forward neural network was trained using standard reference materials with known reflectance values. The BCRA-NPL tiles were measured by each instrument. The neural network models were derived to bring the measured data into agreement with those measured by CERAM (the standard). Twelve BCRA-NPL tiles were used for training and 32 glossy paint samples selected from the OSA Uniform Color Scales were used to test the method. Experimental results for two different spectrophotometers are presented which show good improvement in inter-instrument agreement for both the training and testing samples.
CGATS data evaluation protocol for printing process characterization
David Q. McDowell, Lawrence C. Steele
Color management in the graphic arts industry requires control of the printing conditions and knowledge of the intended output color characteristics. As the industry continues its accelerated move toward complete digital work flow, the need for reference printing characterization data is a requirement to enable color management of digital data. The Committee for Graphic Arts Technologies Standards, CGATS, has been supporting the industry by developing a protocol including measurement, data analysis, and data reporting format that results in characterization data for specific reference printing processes. This paper provides a summary of the protocol that has evolved from the continued standards development within the CGATS SC4 Process Control Subcommittee.
Minitargets: a new dimension in print quality control
Hansjoerg Kuenzli, Freddy Deppner, Karl Heuberger, et al.
Running a printing press requires frequent measurements in order to achieve and maintain an acceptable level of print quality. Ideally, on-line measurements should be made within an image and automatically used to adjust the inking system to achieve and maintain the desired target values. However, there are still several obstacles to this approach. As an intermediate solution, we have developed a system for quality control in newspaper printing, consisting of a mini-target, an optical device, and a three-chip CCD camera. The measurements are made off-line. Individual prints are analyzed, from which the behavior of the printing press can be studied. The mini-target was reduced to approximately 1/9 of a square inch, which is more than an order of magnitude smaller than the control strips commonly used. The mini-target contains areas of solid tone, halftone, and elements for register control. Due to its small dimensions it is accepted in newspaper printing. The CCD camera is a commercially available three-chip 8-bit camera. Using the camera and image analysis software which has been developed, we are able to measure registration, solid-tone density, dot gain, and colorimetric values. A measurement is performed by capturing the image of the test target in one shot, followed by image analysis. The reproducibility of the system is high. Misregistration can be determined in the range of plus or minus 200 micrometers with an accuracy of about 20 micrometers.
Application of a 3-CCD color camera for colorimetric and densitometric measurements
David Brydges, Freddy Deppner, Hansjoerg Kuenzli, et al.
Video cameras have been used in the graphic arts industry primarily for quality inspection applications where one is interested only in the macro, or large-scale, appearance defects of the print, i.e., acceptable/not acceptable. CCD video cameras also have the potential for use in on-press color-type measurements. The advantages of such measurements are numerous, most notably the ability to accurately determine what has been measured. However, despite these advantages, current CCD cameras are not designed to measure colors directly. One of the major drawbacks to the use of standard 3-CCD cameras for such measurements is that the spectral response of the cameras differs from standard densitometric or colorimetric responses. Additionally, the dynamic range of the CCD camera is not suitable to accurately measure the densities attainable in high-quality sheet-fed printing. This paper discusses techniques which have been used, and results obtained, in an attempt to acquire both densitometric and colorimetric measurements from a standard 8-bit 3-CCD camera for use in newspaper printing.
Halftoning
Microcluster line screens and frequency analyses
Microcluster halftoning is a hybrid approach between clustered-dot and dispersed-dot ordered dithers. The concept and design principles of microcluster dots have been published elsewhere. This paper reports the frequency analyses of microcluster line screens. First, the Fourier transform used for the frequency-domain analysis is briefly reviewed. Several line screens are proposed, ranging from 8 to 144 levels. At each level, three dot growth patterns are provided: the conventional, interlace, and microcluster line screens. Frequency analyses of these screens and the corresponding dispersed-dot screens are presented and compared. Results indicate that the line screens can be made with high frequency and high tone levels. Generally, they behave like a dispersed dot in the highlight region, a line screen in the midtone region, and an inverted dispersed dot in the shadow region.
Improving void-and-cluster for better halftone uniformity
Hakan Ancin, Anoop K. Bhattacharjya, Joseph Shou-Pyng Shu
Dithering quality of the void-and-cluster algorithm suffers due to the fixed filter width and the absence of a well-defined criterion for selecting among equally likely candidates during the computation of the locations of the tightest clusters and largest voids. Various researchers have addressed the issue of fixed filter width by adaptively changing the width with experimentally determined values. This paper addresses both aforementioned issues by using a Voronoi tessellation and two criteria to select among equally likely candidates. The algorithm uses the vertices of the Voronoi tessellation and the areas of the Voronoi regions to determine the locations of the largest voids and the tightest clusters. During void and cluster operations there may be multiple equally likely candidates for the locations of the largest voids and tightest clusters. The selection among equally likely candidates is important when the number of candidates is larger than the number of dots for a given quantization level, or if there are candidates within the local neighborhood of one of the candidate points, or if a candidate's Voronoi region shares one or more vertices with another candidate's Voronoi region. Use of these methods leads to more uniform dot patterns for light and dark tones. The improved algorithm is compared with other dithering methods.
Estimation of error diffusion kernel using genetic algorithm
Seung-Ho Park, Ki-Min Kang, Choon-Woo Kim
The error diffusion technique has been one of the most popular digital image halftoning methods. The quality of a binary image resulting from error diffusion is affected by three key factors: the values of the error diffusion kernel, the locations of the neighboring pixels for error propagation, and the quantization scheme. Among these factors, this paper focuses on the estimation of the values of the error diffusion kernel. In previous efforts to modify the original Floyd-Steinberg algorithm, the values of the error diffusion kernel have been determined by trial and error or by utilizing optimization techniques such as least mean square estimation and neural network methods. This paper presents a new estimation method for the values of the error diffusion kernel based on the genetic algorithm. Compared to conventional optimization techniques, the genetic algorithm based approach lifts restrictions on the complexity of the error criterion used for optimization. In this paper, two types of error criteria are defined to improve image quality. They represent a measure of the reproduction of average brightness and the extent of undesirable artifacts appearing in the binary image at specific gray levels. The values of the error diffusion kernel are estimated by simultaneously minimizing the defined error criteria using the genetic algorithm. In the experiments, three types of error diffusion kernels are examined. The experimental results indicate that binary images obtained with the estimated error diffusion kernels exhibit fewer artifacts.
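A compact illustrative sketch of the general approach; the fitness below uses only the average-brightness criterion on flat patches, and the GA parameters are simplified placeholders rather than those of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    POSITIONS = [(0, 1), (1, -1), (1, 0), (1, 1)]        # right, below-left, below, below-right

    def halftone_flat(gray, weights, size=32):
        """Error-diffuse a flat patch of the given gray level with a candidate kernel."""
        img = np.full((size, size), gray, dtype=float)
        out = np.zeros_like(img)
        for y in range(size):
            for x in range(size):
                out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
                err = img[y, x] - out[y, x]
                for (dy, dx), w in zip(POSITIONS, weights):
                    if 0 <= y + dy < size and 0 <= x + dx < size:
                        img[y + dy, x + dx] += err * w
        return out

    def fitness(weights, grays=(0.1, 0.25, 0.5, 0.75, 0.9)):
        w = weights / (weights.sum() + 1e-12)            # normalized kernel weights
        return -sum(abs(halftone_flat(g, w).mean() - g) for g in grays)

    population = rng.random((10, 4))                     # each chromosome: 4 kernel weights
    for generation in range(10):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[::-1][:5]]          # truncation selection
        children = []
        for _ in range(5):
            a, b = parents[rng.integers(5, size=2)]
            child = np.where(rng.random(4) < 0.5, a, b)              # uniform crossover
            child = np.clip(child + rng.normal(0, 0.05, 4), 0.0, 1.0)  # mutation
            children.append(child)
        population = np.vstack([parents, children])

    best = population[0] / population[0].sum()
    print("estimated error diffusion weights:", best)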
Error diffusion algorithm with output position constraints for homogenous highlight and shadow dot distribution
In conventional error diffusion, a non-homogeneous arrangement of dots ('worm' artifacts) may occur in the binary image for highlight and shadow gray regions. In order to eliminate the 'worm' artifacts while preserving the advantages of error diffusion for rendering the middle tones, the proposed method introduces spatial constraints that lead to a more uniform arrangement of dots. In the proposed method, for a current pixel with a gray level in the shadow or highlight region, a dot is placed only if a minimum distance between the current position and the neighboring dots is satisfied. If the distance constraint is not satisfied, the placement of the current dot is postponed and the quantizer error is diffused to the unprocessed pixels. For an input shadow or highlight gray level at the current location, the distance constraint consists of checking whether another dot is encountered along a roadmap associated with the current location, for a length dependent on the gray level to be represented. The roadmap is defined such that it enlarges from the current location as the gray level to be represented comes closer to the extreme limits of the gray level range. The minimum distance from the current dot (black or white) to its neighbors therefore increases as the gray level to be represented approaches the extreme limits of the gray level range (that is, closer to 0% or 100%). The algorithm represents a practical implementation of the output feedback control method proposed by Levien. In terms of quality, the proposed method offers results equivalent to the threshold modulation using an imprint function proposed by Eschbach.
Color FM screen design using DBS algorithm
When halftoning a color image for a bi-level color printer, one has to obtain the halftones of the cyan, magenta, and yellow planes if the printer is a three-color device. For a four-color printer, one also has to obtain the halftone of the black plane. Suppose a source color image is represented by the red, green, and blue components. The simple way of halftoning a color image using a dither matrix is to halftone each color plane independently using the same matrix. This will result in halftone dots of different colors overlapping each other, thus increasing the graininess of the image. Simple schemes such as shifting the matrices have been proposed in the past, but they usually reduce the dot overlap at the cost of increasing other artifacts, such as fuzziness. We propose an algorithm to jointly design a set of dither matrices such that the overall graininess is minimized. We use the direct binary search (DBS) algorithm to design a dither matrix for each of the primary colors of a printer: cyan, magenta, yellow, and black. A color fluctuation function is defined for the halftone patterns of a set of constant-tone color patches in a uniform color space such as CIE L*a*b*. The color fluctuation function is then minimized on a level-by-level basis using swap operations. Efficient evaluation of the color fluctuation function allows the optimization to converge at a reasonable speed. We show that we are able to achieve halftone image quality comparable to that of the direct binary search (DBS) algorithm at a significantly lower computational cost. Because the dither matrices are pre-computed, efficient implementation in either hardware or software is possible.
Improved digital multitoning with overmodulation scheme
Qing Yu, Kevin J. Parker, Kevin E. Spaulding, et al.
Multilevel halftoning (multitoning) is an extension of bitonal halftoning, in which the appearance of intermediate tones is created by the spatial modulation of more than two tones, i.e., black, white, and one or more shades of gray. In this paper, a conventional multitoning approach and a specific approach, both using stochastic screen dithering, are investigated. Typically, a human visual model is employed to measure the perceived halftone error for both algorithms. We compare the performance of each algorithm at gray levels near the intermediate printer output levels. Based on this study, an over-modulation algorithm is proposed. This algorithm requires little additional computation and the halftone output is mean-preserving with respect to the input. We will show that, with this simple over-modulation scheme, we will be able to manipulate the dot patterns around the intermediate output levels to achieve desired halftone patterns. Investigation on optimal output level selection and inkjet printing simulation for this new scheme will also be reported.
Poster Session
Quality issues in blue noise halftoning
Qing Yu, Kevin J. Parker
The blue noise mask (BNM) is a halftone screen that produces unstructured, visually pleasing dot patterns. The BNM combines the blue-noise characteristics of error diffusion and the simplicity of ordered dither. A BNM is constructed by designing a set of interdependent binary patterns for individual gray levels. In this paper, we investigate quality issues in blue-noise binary pattern design and mask generation as well as in application to color reproduction. Using a global filtering technique and a local 'force' process for rearranging black and white pixels, we are able to generate a series of binary patterns, all representing a certain gray level, ranging from a white-noise pattern to a highly structured pattern. The quality of these individual patterns is studied in terms of low-frequency structure and graininess. Typically, the low-frequency structure (LF) is identified with a measurement of the energy around dc in the spatial frequency domain, while the graininess is quantified by a measurement of the average minimum distance (AMD) between minority dots as well as the kurtosis of the local kurtosis distribution (KLK) for minority pixels of the binary pattern. A set of partial BNMs is generated by using the different patterns as unique starting 'seeds.' In this way, we are able to study the quality of binary patterns over a range of gray levels. We observe that the optimality of a binary pattern for mask generation is related to its own quality metric values as well as to the smoothness of the transition of those quality metric values over neighboring levels. Several schemes have been developed to apply blue-noise halftoning to color reproduction. Different schemes generate halftone patterns with different textures. In a previous paper, a human visual system (HVS) model was used to study color halftone quality in terms of luminance and chrominance error in CIELAB color space. In this paper, a new series of psycho-visual experiments addresses the 'preferred' color rendering among four different blue-noise halftoning schemes. The experimental results are interpreted with respect to the proposed halftone quality metrics.
Aperiodic microscreen design using DBS and training
With the advent of high-resolution (1200+ dpi) desktop printers, the use of conventional 128 by 128 screens can produce a distinctive periodicity in the printed images. A new method for the design of multiple 32 by 32 screens using direct binary search and training is proposed. The screens are seamless with each other, and a small number of these screens are randomly tiled over the entire support of the continuous-tone image. These are then used to threshold the image to create the halftone image. Due to the random tiling of the screens, the resulting halftones do not have any periodicity in them. The resulting screens also have lower memory requirements than 128 by 128 screens. Experimental results also show that the exact order of the screens is not crucial to the quality of the final halftone. Therefore, no additional information about the ordering of the multiple screens needs to be stored.
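A minimal sketch of the random-tiling step described above (the screens here are random placeholders; in the paper they are designed jointly with DBS so that their edges match seamlessly).

    import numpy as np

    rng = np.random.default_rng(0)
    S = 32
    screens = [rng.permutation(S * S).reshape(S, S) / (S * S) for _ in range(4)]  # placeholder screens

    def halftone_with_random_tiling(image):
        H, W = image.shape
        out = np.zeros((H, W), dtype=np.uint8)
        for ty in range(0, H, S):
            for tx in range(0, W, S):
                screen = screens[rng.integers(len(screens))]    # pick a screen at random per tile
                block = image[ty:ty + S, tx:tx + S]
                thresholds = screen[:block.shape[0], :block.shape[1]]
                out[ty:ty + S, tx:tx + S] = (block > thresholds).astype(np.uint8)
        return out

    gray_ramp = np.tile(np.linspace(0, 1, 256), (64, 1))
    halftone = halftone_with_random_tiling(gray_ramp)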
Digital copying of medium-frequency halftones
Modern digital copiers offer distinct advantages over conventional analog copiers in their ability to perform added functionality. One disadvantage of digital copiers, however, is their susceptibility to moire. This is caused by interactions of the input data, the scanning resolution, and the subsequent binarization for printing. Medium-frequency halftones often cause the most severe problems. This paper describes a method to reduce the moire amplitude for medium-frequency halftones.
Compressing images for the Internet
The World Wide Web has rapidly become the hot new mass communications medium. Content creators are using similar design and layout styles as in printed magazines, i.e., with many color images and graphics. The information is transmitted over plain telephone lines, where the speed/price trade-off is much more severe than in the case of printed media. The standard design approach is to use palettized color and to limit as much as possible the number of colors used, so that the images can be encoded with a small number of bits per pixel using the Graphics Interchange Format (GIF) file format. The World Wide Web standards contemplate a second data encoding method (JPEG) that allows color fidelity but usually performs poorly on text, which is a critical element of information communicated on this medium. We analyze the spatial compression of color images and describe a methodology for using the JPEG method in a way that allows a compact representation while preserving full color fidelity.
Professional portrait studio for amateur digital photography
We describe how to build a professional portable portrait studio that can be used with any consumer camera. The studio allows effortless off-line chroma-key insertion of backgrounds. Digital consumer cameras are designed for delivering acceptable images in typical outdoor or small-room situations. The cameras fail when tungsten filament lamps are used. The built-in flash tube is too weak to fill the background in a studio setting and cannot be used to trigger professional electronic-flash lamps, because the camera's firmware computes the exposure under the assumption that only the built-in flash tube supplies light to the scene when it is activated. A further problem is the position of the lamps. In the case of digital cameras the position is much more delicate than for silver halide film cameras, because the sensor's dynamic range is very small and unwanted shadows are easily created. We present two different lamp set-ups for different size rooms.