On the behavior of spatial models of color
Author(s):
A. Rizzi;
J. J. McCann
There is a growing family of algorithms that treat/modify/enhance color information in its visual context, also known as
Spatial Color methods (e.g. Retinex or ACE). They produce results that, due to a changing spatial configuration, can
have a non-unique relationship with the physical input. In the authors' opinion, judging their performance is a challenging task and is still an open problem. Two main variables affect the final result of these algorithms: their parameters and the visual characteristics of the image they process. The term visual characteristics refers not only to the image's digital pixel values (e.g. calibration of pixel values, the measured dynamic range of the scene, the measured dynamic range of the digital image), but also to the spatial distribution of these digital pixel values in the image. This paper does not deal with tuning parameters; rather, it discusses the visual configurations in which Spatial Color methods show interesting or critical behavior. A survey of the most significant Spatial Color configurations will be presented and discussed. These configurations include phenomena such as color constancy and contrast. The discussion will present strengths
and weaknesses of different algorithms, hopefully allowing a deeper understanding of their behavior and stimulating
discussions about finding a common judging ground.
Effect of time spacing on the perceived color
Author(s):
Sylvain Roch;
Jon Y. Hardeberg;
Peter Nussbaum
One of the latest developments for pre-press applications is the concept of soft proofing, which aims to provide an accurate
preview on a monitor of how the final document will appear once it is printed. At the core of this concept is the problem
of identifying, for any printed color, the most similar color the monitor can display. This problem is made difficult by
such factors as varying viewing conditions, color gamut limitations, or the less studied time spacing. Color matching
experiments are usually done by examining samples viewed simultaneously. However, in soft proofing applications, the
proof and the print are not always viewed together. This paper attempts to shed more light on the difference between
simultaneous and time-spaced color matching, in order to contribute to improving the accuracy of soft proofs. A color
matching experiment was set up in which observers were asked to match a color patch displayed on an LCD monitor, by adjusting its RGB values, to a color patch printed on paper. In the first part of the experiment
the two colors were viewed simultaneously. In the second part, the observers were asked to produce the match according
to a previously memorized color. According to the results, the color appearance attributes lightness and chroma were the most difficult for the observers to remember, generating large differences from the simultaneous match, whereas hue was the attribute that varied the least. This indicates that for soft proofing, getting the hues right is of primary importance.
Segmentation for MRC compression
Author(s):
Eri Haneda;
Jonghyon Yi;
Charles A. Bouman
Mixed Raster Content (MRC) is a standard for efficient document compression which can dramatically improve
the compression/quality tradeoff as compared to traditional lossy image compression algorithms. The key to
MRC's performance is the separation of the document into foreground and background layers, represented as
a binary mask. Typically, the foreground layer contains text colors, the background layer contains images and
graphics, and the binary mask layer represents fine detail of text fonts.
The resulting quality and compression ratio of an MRC document encoder are highly dependent on the segmentation
algorithm used to compute the binary mask. In this paper, we propose a novel segmentation method based
on the MRC standards (ITU-T T.44). The algorithm consists of two components: Cost Optimized Segmentation
(COS) and Connected Component Classification (CCC). The COS algorithm is a blockwise segmentation algorithm
formulated in a global cost optimization framework, while CCC is based on feature vector classification of
connected components. In the experimental results, we show that the new algorithm achieves the same text detection accuracy but with fewer false detections of non-text features, as compared to state-of-the-art commercial
MRC products. This results in high quality MRC encoded documents with fewer non-text artifacts, and lower
bit rate.
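To picture the mask computation (a deliberately simplified blockwise split, not the authors' COS/CCC algorithm; the block size and the contrast test are arbitrary choices):

```python
import numpy as np

def blockwise_mask(gray, block=16):
    """Minimal sketch of an MRC foreground/background split: each block is
    thresholded between its two dominant gray levels, yielding a binary mask."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = gray[y:y + block, x:x + block].astype(float)
            t = b.mean()                       # one-step two-means threshold
            lo, hi = b[b <= t], b[b > t]
            # Only split blocks with real contrast; uniform blocks stay
            # entirely in the background layer (the 32-level gap is arbitrary).
            if lo.size and hi.size and hi.mean() - lo.mean() > 32:
                mask[y:y + block, x:x + block] = (b <= t).astype(np.uint8)
    return mask  # 1 = foreground (dark text), 0 = background
```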
A new approach to JBIG2 binary image compression
Author(s):
Maribel Figuera;
Jonghyon Yi;
Charles A. Bouman
The JBIG2 binary image encoder dramatically improves compression ratios over previous encoders. The effectiveness
of JBIG2 is largely due to its use of pattern matching techniques and symbol dictionaries for the
representation of text. While dictionary design is critical to achieving high compression ratios, little research
has been done in the optimization of dictionaries across stripes and pages.
In this paper we propose a novel dynamic dictionary design that substantially improves JBIG2 compression
ratios, particularly for multi-page documents. This dynamic dictionary updating scheme uses caching algorithms
to more efficiently manage the symbol dictionary memory. Results show that the new dynamic symbol caching
technique outperforms the best previous dictionary construction schemes by between 13% and 46% for lossy
compression when encoding multi-page documents. In addition, we propose a fast and low-complexity pattern
matching algorithm that is robust to substitution errors and achieves high compression ratios.
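To picture the caching idea (an illustrative least-recently-used policy; not necessarily the authors' exact scheme, and the class and method names are hypothetical):

```python
from collections import OrderedDict

class SymbolDictionaryCache:
    """Sketch of dynamic symbol caching for a JBIG2 symbol dictionary.

    Frequently matched symbol bitmaps stay in the dictionary across stripes
    and pages; when the memory budget is exceeded, the least recently
    matched symbol is evicted.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.symbols = OrderedDict()              # symbol_id -> bitmap

    def lookup(self, symbol_id):
        if symbol_id in self.symbols:
            self.symbols.move_to_end(symbol_id)   # mark as recently used
            return self.symbols[symbol_id]
        return None

    def insert(self, symbol_id, bitmap):
        self.symbols[symbol_id] = bitmap
        self.symbols.move_to_end(symbol_id)
        if len(self.symbols) > self.capacity:
            self.symbols.popitem(last=False)      # evict least recently used
```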
Removal of artifacts from JPEG compressed document images
Author(s):
Basak Oztan;
Amal Malik;
Zhigang Fan;
Reiner Eschbach
We present a segmentation-based post-processing method to remove compression artifacts from JPEG compressed
document images. JPEG compressed images typically exhibit ringing and blocking artifacts, which can be
objectionable to the viewer above certain compression levels. The ringing is more dominant around textual
regions while the blocking is more visible in natural images. Despite extensive research, reducing these artifacts
in an effective manner still remains challenging. Document images are often segmented for various reasons. As a
result, the segmentation information in many instances is available without requiring additional computation. We
have developed a low computational cost method to reduce ringing and blocking artifacts for segmented document
images. The method assumes the textual parts and pictorial regions in the document have been separated from
each other by an automatic segmentation technique. It performs simple image processing techniques to clean
out ringing and blocking artifacts from these regions.
Digital images for eternity: color microfilm as archival medium
Author(s):
C. Normand;
R. Gschwind;
P. Fornaro
In the archiving and museum communities, the long-term preservation of artworks has traditionally been guaranteed by
making duplicates of the original. For photographic reproductions, digital imaging devices have now become standard,
providing better quality control and lower costs than film photography. However, due to the very short life cycle of
digital data, losses are unavoidable without repetitive data migrations to new file formats and storage media. We present
a solution for the long-term archiving of digital images on color microfilm (Ilfochrome® Micrographic). This extremely
stable and high-resolution medium, combined with the use of a novel laser film recorder is particularly well suited for
this task. Due to intrinsic limitations of the film, colorimetric reproductions of the originals are not always achievable.
The microfilm must first be considered as an information carrier and not primarily as an imaging medium. Color
transformations taking into account the film characteristics and possible degradations of the medium due to aging are
investigated. An approach making use of readily available color management tools is presented which ensures the recovery of the original colors after re-digitization. An extension of this project considering the direct recording of
digital information as color bit-code on the film is also introduced.
Colour appearance change of a large size display under various illumination conditions
Author(s):
Seo Young Choi;
Ming Ronnier Luo;
Michael R. Pointer
This paper describes an investigation into the effect of a wide range of surround conditions on the colour appearance of
test colours on a 42" plasma display panel. Experiments were conducted using surrounds including dark, indoor and
outdoor conditions. Additionally the stimulus size was changed by controlling the viewing distance. The viewing
conditions studied were two bright, two average, two dim and two dark surrounds. Each of the test colours was assessed
by 10 observers using a magnitude estimation method. These surrounds were divided into two categories. In the first
category, the surround had no effect on the displayed colours, but observers could still sense the different brightness
levels of the surround. In the second category, the surround introduced flare into the displayed colours. For the first category, little difference in visual lightness was found between the bright and dark surrounds, or between the dim and dark surrounds, contrary to the expectation that perceived lightness contrast increases as the surround becomes brighter. The lightness
dependency of colourfulness, however, was found to change. For the second category, the visual colour appearance of
the surround conditions was plotted against measured data, CIELAB L*, C* values, to try to understand the surround
effect. As the surround became brighter, the perceived dynamic range of visual lightness decreased, and the perceived
colourfulness increased, more obviously in high chroma colours. In the investigation of the change of stimulus size
under different surround conditions, visual colour appearance was not affected by stimulus sizes of 2° and 0.6° in the dark surround. However, a difference was found for very dark colours with a dim surround. Finally, all of the visual colour appearance data were used to test the performance of the colour appearance model CIECAM02. A minor modification was made to improve the colourfulness predictor, especially for the black background.
An investigation of the effect of image size on the color appearance of softcopy reproductions using a contrast matching technique
Author(s):
Mahdi Nezamabadi;
Ethan D. Montag;
Roy S. Berns
Many art objects have a size much larger than their softcopy reproductions. In order to develop a multiscale model that
accounts for the effect of image size on image appearance, a digital projector and LCD display were colorimetrically
characterized and used in a contrast matching experiment. At three different sizes and three levels of contrast and
luminance, a total of 63 images of noise patterns were rendered for both displays using three cosine log filters. Fourteen
observers adjusted mean luminance level and contrast of images on the projector screen to match the images displayed
on the LCD. The contrasts of the low frequency images on the screen were boosted while their mean luminance values
were decreased relative to the smaller LCD images. Conversely, the contrast of projected high frequency images was reduced relative to the same, smaller images on the LCD. The effect was more pronounced in the matching of projected images to the smaller images on the LCD display. Compared to the mean luminance level of the LCD images, a reduction
of the mean luminance level of the adjusted images was observed for low frequency noise patterns. This decrease was
more pronounced for smaller images with lower contrast and high mean luminance level.
Modeling for hue shift effect of human visual system on high luminance display
Author(s):
Tae-Hyoung Lee;
Myong-Young Lee;
Kee-Hyon Park;
Yeong-Ho Ha
This paper proposes a color correction method based on modeling the hue shift phenomenon of the human visual system (HVS). Observers tend to perceive color stimuli of identical chromaticity but different intensity as having different hues, which is referred to as the hue shift effect. Although the effect can be explained by the Bezold-Brücke (B-B) effect, the B-B model cannot be applied directly to high luminance displays, because most displays have a broad-band spectral distribution and the results vary with the type of display.
and a normal luminance display were first modeled by a color matching experiment with color samples along the hue
angle of the LCH color space. Based on the results, the hue shift was then modeled piecewise and was finally applied to
the inverse characterization of the display to compensate the original input image. A psychophysical evaluation with several test images confirmed that the proposed modeling method is effective for color correction on high luminance displays.
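A minimal sketch of how such a piecewise model could be applied (assuming measured hue-shift samples at known hue angles, sorted ascending; the helper and its inputs are hypothetical, not the authors' exact model):

```python
import numpy as np

def correct_hue(h_deg, sample_hues, measured_shift_deg):
    """Piecewise-linear hue-shift correction along the LCH hue circle.

    `sample_hues` are hue angles (degrees, ascending in [0, 360)) of the
    matching-experiment samples; `measured_shift_deg` are the hue shifts
    observed between the high- and normal-luminance displays at those angles.
    The interpolated shift is subtracted before inverse characterization.
    """
    # Wrap the sample grid so interpolation is continuous across 360 degrees.
    hs = np.concatenate([sample_hues, [sample_hues[0] + 360.0]])
    ss = np.concatenate([measured_shift_deg, [measured_shift_deg[0]]])
    shift = np.interp(h_deg % 360.0, hs, ss)
    return (h_deg - shift) % 360.0
```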
Optimizing color edge contrast in presence of noise and amplitude quantization errors
Author(s):
Fritz Lebowsky;
Yong Huang
Large-scale, direct view TV screens, in particular those based on liquid crystal technology, are beginning to use
structures with more than three subpixels to further reduce the visibility of defective subpixels, increase spatial
resolution, and perhaps even implement a multi-primary display with up to six different primaries. This newly available
"subpixel" resolution enables us to improve color edge contrast thereby allowing for shorter viewing distances through a
reduction of perceived blur. However, not only noise, but also amplitude quantization can lead to undesirable, additional
visual artifacts along contours. Using our recently introduced method of contour phase synthesis in combination with
non-linear color channel processing, we propose a simple method that maximizes color edge contrast while keeping visual artifacts and noise below the threshold of visual perception. To demonstrate the advantages
of our method, we compare it with some classical contrast enhancement techniques such as cubic spline interpolation
and color transient improvement.
Correlating 2D NTSC gamut ratio to its 3D gamut volume
Author(s):
Pei-Li Sun
The present study provides functions to correlate gamut size across different color spaces including 2D planes - (x,y)
and (u',v') - and 3D spaces - CIELAB, CIECAM02 JCh and QMh. All gamut sizes must be converted to NTSC gamut ratios before the functions are used. As viewing conditions significantly influence the 3D gamut ratio, predicting the 3D gamut ratio with high precision is not easy. However, the mean values, median values, standard deviations, and confidence intervals can be predicted accurately. In terms of viewing parameters, we can successfully model them individually under the IEC reference condition; the resulting functions are a good reference for deriving more versatile functions to predict gamut ratios under complex viewing conditions.
Device calibration method for optical light modulator
Author(s):
Yousun Bang;
Aron Baik;
Dusik Park;
Injae Yeo;
Jaeho Han
Due to subtle misalignment of optical components in the fabrication process, images projected by an optical light modulator exhibit severe line artifacts along the direction of the optical scan. In this paper, we propose a novel methodology to calibrate the modulator and generate a compensated image for the misaligned optical modulator in order to eliminate the line artifacts. A camera system is employed to construct a Luminance Transfer Function (LTF) that characterizes the optical modulator array. Spatial uniformity is obtained by redefining the dynamic range and compensating the characteristic curvature of the LTF for each optical modulator array element. Simulation results show a significant reduction in the visibility of the line artifacts.
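A minimal sketch of the compensation step (assuming each element's monotonic LTF has been sampled by the camera at every drive code; the data layout and names are hypothetical):

```python
import numpy as np

def build_compensation_luts(ltf, levels=256):
    """Per-element inverse LUTs that flatten line artifacts.

    `ltf` has shape (elements, levels): measured luminance of each modulator
    array element at each input code.  Each element gets an inverse LUT that
    maps a target luminance, restricted to the dynamic range common to all
    elements, back to a drive code.
    """
    ltf = np.asarray(ltf, dtype=float)
    lo = ltf[:, 0].max()                  # common range: brightest black ...
    hi = ltf[:, -1].min()                 # ... to darkest white
    targets = np.linspace(lo, hi, levels)
    codes = np.arange(levels, dtype=float)
    # Invert each monotonic LTF by swapping axes in linear interpolation.
    return np.array([np.interp(targets, curve, codes) for curve in ltf])
```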
LCD TV comparable to CRT TV for moving image quality: world's best MPRT LCD TV
Author(s):
Jae-Kyeong Yoon;
Ki-Duk Kim;
Nam-Yong Kong;
Hong-Chul Kim;
Tae-Ho You;
Soon-Shin Jung;
Gil-Won Han;
Moojong Lim;
Hyun-Ho Shin;
In-Jae Chung
An LCD with the world's best motion fidelity, based on a scanning backlight technique, was developed. An MPRT (Motion Picture Response Time) of 4.0 ms is achieved for the LCD, while a CRT has about 3.7 ms and a PDP still has about 8.1 ms. This means that the last remaining weakness of LCDs compared with PDPs and CRTs has been eliminated. Moreover, 4.0 ms MPRT is a value at which people cannot perceive motion blur in moving images under most conditions; the MBR LCD is therefore superior to a PDP by more than 50% and equal to a CRT in motion blur.
Nonlinear curve optimization in digital imaging
Author(s):
Shuxue Quan;
Xiaoyun Jiang
High dynamic range compression of natural scenes and nonlinearity compensation of output devices are required, because output devices have a limited dynamic range and color gamut, and devices such as standard RGB (sRGB) compliant displays and printers have different nonlinear responses to linear inputs. This paper describes a general framework for nonlinear curve optimization in digital imaging and its application to the determination of a gamma look-up table through curve parameterization. Three factors are considered: color appearance fidelity, feature preservation, and the suppression of noise propagation.
Reducing rainbow effect for field sequential color LCDs
Author(s):
Pei-Li Sun
The paper presents two methods to reduce the visibility of the rainbow effect on field sequential color (FSC) LCDs. This is done by changing the LED backlight state, introducing crosstalk among the R/G/B LEDs, and finally modifying the RGB signals to minimize the color difference between the processed image and the original. Our simulation results suggest that the proposed methods reduce the visibility of the rainbow effect to some extent; however, their performance is quite image-dependent. Reducing the computational cost and calibrating the LED backlight accurately remain the main challenges in implementing the methods on real FSC LCDs.
Third-time lucky: why a series of ISO display standards deserves extensive coverage at Electronic Imaging conferences
Author(s):
Floris L. van Nes
We can all use, to a greater or lesser extent, the technological marvels that surround us and that we often depend on, without, however, really understanding how they function. But designing, buying, using and maintaining those devices does require a certain level of knowledge of the basics of that functioning, if one wants to avoid problems. Displays are a worthwhile example of such marvels because of their ubiquity in modern life. A standard on displays, especially a new one, could provide this knowledge - and in any case deserves to be known by the people frequenting the EI
Conferences. Therefore, a progress report on the production of the new ISO display standard is presented here for the
third time. The standard contains: an introduction; all definitions used; the visual requirements for displays; three
different methods to measure display properties; and analysis and compliance methods to verify whether a particular
display complies with the requirements set. The standard is meant to provide knowledge for designers of displays, their
users, and those who procure displays for these users. It is set up in such a way that emerging display technologies, such as those used in SED and OLED displays, can be incorporated easily.
A novel color mapping method for preferred color reproduction
Author(s):
Kyeong Man Kim;
Hyun Soo Oh;
Sang Ho Kim;
Don Chul Choi
We propose a novel color mapping method that generates smooth color transitions and can accommodate color preference. The method consists of two stages: rough calibration and black generation. The rough calibration process generates a three-dimensional (3-D) look-up table (LUT) converting input RGB data to printable CMY values. When the 3-D LUT is created, a new color mapping intent, the target color, is used internally. The target color is predefined from a reference color book based on the color preferences of testers involved in the target definition phase. All of the input data of the 3-D LUT are mapped to the printable values of a printer based on the target color, and then simply converted to CMYK values. We evaluated the proposed algorithm against a commercial printer profiler and found that the proposed algorithm produces better print quality.
Gamut mapping method for ICC saturated intent
Author(s):
Min-Ki Cho;
Heui-Keun Choh;
Se-Eun Kim;
Yun-Tae Kim;
Yousun Bang
The same image reproduced on a display and on a color printer does not look the same. Firstly, this is due to the difference in bit depth used to represent the color of a pixel: the display uses color data of eight or more bits per pixel, but the color printer uses just one bit, so the display can reproduce smoother images than the color printer. Secondly, the display gamut is larger than the printer gamut, so the display color is brighter and more saturated than the printer color. To minimize the problems due to these differences, many halftoning and gamut mapping techniques have been developed. For gamut mapping, the color management standards organization, the ICC, recommends two gamut mapping methods, HPMINDE and SGCK. However, the ICC-recommended methods have some weak points: contouring (HPMINDE), pale reproduction of pure colors (SGCK), and too reddish reproduction of hair color (HPMINDE, SGCK). This paper introduces a gamut mapping method that can reproduce smooth gradations, pure colors with high saturation, and natural hair color. The proposed method is developed for optimal reproduction of graphic images, and it also gives good results for pictorial images.
Adaptive color artwork
Author(s):
Giordano Beretta
The words in a document are often supported, illustrated, and enriched by visuals. When color is used, some of it is used
to define the document's identity and is therefore strictly controlled in the design process. The result of this design process
is a "color specification sheet," which must be created for every background color. While in traditional publishing there
are only a few backgrounds, in variable data publishing a larger number of backgrounds can be used. We present an algorithm
that nudges the colors in a visual to be distinct from a background while preserving the visual's general color character.
Peteye detection and correction
Author(s):
Jonathan Yen;
Huitao Luo;
Daniel Tretter
Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of
other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and
correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye
removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to
localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can
cause a variety of peteye colors, depending on the animal's breed, age, or fur color. This makes peteye
correction more challenging. We have developed a semi-automatic algorithm for peteye removal that can detect peteyes
based on the cursor position provided by the user and correct them by neutralizing the colors with glare reduction and
glint retention.
Modeling an electro-photographic printer, part I: monochrome systems
Author(s):
Michael A. Kriss
This paper will outline a simplified model for the development of toner dots on a reflective support. Using
this model and the interaction of light between the reflective support and the dot's microstructure, the
physical, optical and total dot-gain will be calculated, along with the resulting tone scales, for a variety of
digital halftone patterns. The resulting tone reproduction curves and dot-gain will be compared with the
classical literature on dot-gain and tone reproduction curves, more modern approaches and experimental
data from the literature. A comparison to a well-defined experimental system will be shown.
Modeling an electro-photographic printer, part II: color systems
Author(s):
Michael A. Kriss
This paper will outline a simplified model for the development of toner dots on a reflective support in a
color electro-photographic system. A model developed for a monochrome system will be adapted to a color
imaging system where four pigments, each capable of scattering light, are used to form a digital halftone image. The combination of physical and optical dot gains, interlayer scattering, and on-dot and off-dot digital halftones will be explored, and the results demonstrated in terms of color shifts due to layer order and dot gain
due to halftone geometry.
Printer color calibration using an embedded sensor
Author(s):
Yifeng Wu;
Al Gudaitis
The purpose of printer color calibration is to ensure accurate color reproduction and to maintain color consistency. A closed loop color calibration system has the ability to perform the whole calibration automatically, without any user intervention. A color calibration process consists of several steps: print a pre-designed target; measure the printed color patches using a color measuring instrument; and run a set of algorithms to correct the color variations. In this paper, we present a closed loop color calibration system, and we show in particular how to use a low cost optical sensor to obtain accurate color measurements. Traditionally, low cost optical sensors are used only to measure voltage data or density data. The novelty of our approach is that we can also use a low cost optical sensor to measure colorimetric data. Using the colorimetric measurements, we can perform more complicated color calibration tasks for color printing systems.
Color measurements on prints containing fluorescent whitening agents
Author(s):
Mattias Andersson;
Ole Norberg
Papers with a slightly blue shade are, at least by a majority of observers, perceived as whiter than papers having a more neutral color [1]. Therefore, practically all commercially available printing papers contain bluish dyes and fluorescent whitening agents (FWA) to give the paper a whiter appearance. Furthermore, in the paper industry, the most frequently used measure of paper whiteness is the CIE whiteness. The CIE whiteness formula, in turn, also favors slightly bluish papers. Extreme examples of high CIE whiteness values can be observed in the office paper segment, where a high CIE whiteness value is an important sales argument. As an effect of the FWA, spectrophotometer measurements of optical properties such as paper whiteness are sensitive to the ultraviolet (UV) content of the light source used in the instrument. To address this, the standard spectrophotometers used in the paper industry are equipped with an adjustable filter for calibrating the UV content of the illumination. In the paper industry, spectrophotometers with d/0 measurement geometry and a light source of type C are used. The graphic arts industry, on the other hand, typically measures with spectrophotometers having 45/0 geometry and a light source of type A. Moreover, these instruments have only limited possibilities to adjust the UV content by the use of different weighting filters. The standard for color measurements in the paper industry specifies that measurements should be carried out using D65 standard illumination and the 10° standard observer. The corresponding standard for the graphic arts industry specifies D50 standard illumination and the 2° standard observer. In both cases, the standard illuminants are simulated from the original light source by spectral weighting functions. However, the activation of FWA, which impacts the measured spectral reflectance, depends on the actual UV content of the illumination used. Therefore, comparisons between measurements on substrates containing FWA from two instruments having light sources with different UV content are complicated. In this study, the effect of FWA content in paper on color reproduction has been quantified for an office-type paper. Furthermore, examples are given of how color measurement instruments give different readings when FWA is present. For the purpose of this study, and in order to ensure that only the effect of FWA was observed, a set of papers with varying additions of FWA, but otherwise identical, was produced on a small-scale experimental paper machine. The pilot papers were printed on three different printers. Two spectrophotometers, representative of the instruments used in the graphic arts industry and the paper industry respectively, were used to measure the printed papers. The results demonstrate that the use of spectral weighting functions for simulating standard illuminants works properly on non-fluorescent material. However, when FWA is present, disparities in UV content between the light source and the simulated illuminant will result in color differences. Finally, in many printing processes some of the inks used are UV-blocking, which further complicates the effect of FWA in printed material. An example is shown of how different color differences are obtained for different process ink combinations when the amount of FWA added to the paper is varied.
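For reference, the CIE whiteness referred to above is (a standard definition, not quoted from the abstract):

$$W = Y + 800\,(x_n - x) + 1700\,(y_n - y),$$

where $Y$ is the luminance factor and $(x, y)$ the chromaticity of the sample, and $(x_n, y_n)$ is the chromaticity of the perfect reflecting diffuser under the chosen illuminant and observer. The heavy weight on the blue direction is why slightly bluish papers obtain higher whiteness values.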
A reference printer and color management architecture
Author(s):
P. J. Green
A colour management strategy based on a reference printer and reference medium is described. Additional printers and
media are then defined in terms of this reference printer. This makes it possible to estimate new transforms for a given printer and media combination with no new measurements, even when no characterization data are available for that combination. A simple method of estimating the new transform was implemented and evaluated, and it was
found to give good results when the target printing condition was similar in gamut and colorant primaries to the
reference printer. This approach can be extended by applying more complex empirical models.
Efficient color printer characterization based on extended Neugebauer spectral models
Author(s):
Pau Soler;
Jordi Arnabat
In order to print accurate colors on different substrates, color profiles must be created for each specific ink-media combination. We tackle the problem of creating such color profiles from only a few color samples, in order to reduce the required measurement time. Our strategy is to use a spectral reflectance prediction model to estimate a large sampling target (e.g. IT8.7/3) from only a small subset of color patches. In particular, we focus on the so-called Yule-Nielsen modified Spectral Neugebauer model, proposing a new area coverage estimation and a prediction of the Neugebauer primaries that cannot be measured directly due to ink limiting. We review the basis of the model, interpret it from the perspective of generalized averaging, and derive expressions to decouple optical and mechanical dot gain effects. The proposed area coverage estimations are based on assumptions about the printing process, and are characterized through a few extra color samples. We tested the models with thermal ink-jet printers on a variety of media, with dye-based and pigment-based inks. The IT8.7/3 target was predicted from 44 samples, with average color accuracy below 4 ΔE and maximum error below 8 ΔE for dye-based inks, which performed better than pigment-based inks.
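For context, the model named above is commonly written as (a standard form, not quoted from the paper):

$$\hat{R}(\lambda) = \Bigl(\sum_{i} a_i\, P_i(\lambda)^{1/n}\Bigr)^{n},$$

where $P_i(\lambda)$ are the measured reflectances of the Neugebauer primaries (paper, single inks and their overprints), $a_i$ their fractional area coverages obtained from the ink coverages via the Demichel equations, and $n$ the Yule-Nielsen value accounting for optical dot gain. Setting $n = 1$ reduces to the plain Spectral Neugebauer model, and the $1/n$-exponent averaging is the "generalized averaging" viewpoint mentioned in the abstract.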
Digital camera calibration for color measurements on prints
Author(s):
Mattias Andersson
Flatbed scanners and digital cameras have become established and widely used color imaging devices. If colorimetrically
calibrated, these trichromatic devices can provide fast color measurement tools in applications such as printer calibration,
process control, objective print quality measurements and color management. However, in calibrations intended to be
used for color measurements on printed matter, the media dependency must be considered. Very good results can be
achieved when the calibration is carried out on a single media and then applied for measurements on the same media, or
at least a media of a very similar type. Significantly poorer results can be observed when the calibration is carried out for
one printer-substrate combination and then applied for measurements on targets produced with another printer-substrate
combination. Even if the problem is restricted to the color calibration of a scanner or camera for different paper media
printed on a single printer, it is still tedious work to make a separate calibration for each new paper grade to be used in
the printer. Therefore, it would be of interest to find a method where it is sufficient to characterize for only one or a few
papers within a grade segment and then be able to apply a correction based on measurable optical paper properties.
However, before being able to make any corrections, the influence of measurable paper properties on color
characterizations must be studied and modeled. Fluorescence has been mentioned [1-3] as a potential source of error in color calibrations for measurements on printed matter. In order to improve paper whiteness, producers of printing paper add bluish dye and fluorescent whitening agents (FWA) to the paper [4]. In this study, the influence of FWA in printing paper on the color calibration of a digital camera for color measurements on printed targets is discussed. To study the effect of FWA in the paper, a set of papers with varying additions of FWA, but otherwise identical, was produced on a small-scale experimental paper machine. Firstly, the impact on the color calibration when the amount of FWA in the paper varies was studied. Secondly, the situation was studied where the printed substrate has FWA content and illuminations having different amounts of ultraviolet (UV) light are used for the camera and reference spectrophotometer measurements, respectively. The results show that for some combinations of illuminations used in the calibration, very large errors are induced by the variation of FWA in the printed substrate.
User preferences in colour enhancement for unsupervised printing methods
Author(s):
Carinna Parraman;
Alessandro Rizzi
In order to obtain a good quality image in preparation for inkjet printing, the process of adjusting images can be a time-consuming and costly procedure. In this paper, we consider the use of an unsupervised colour enhancement method as
part of the automatic pre-processors for printing. Other unsupervised colour enhancement methods are utilised and
compared: Retinex, RSR, ACE, Histogram Equalisation, Auto Levels. Test images are subjected to all of the
enhancement methods, which are then printed. Users are asked to compare each of the sampled images. In all cases, the results are dependent on the image. Thus, we have selected a range of test images: photographs of scenes, reproductions of prints, paintings and drawings.
Some of the tested methods are parameter dependent. We do not intend to fine-tune each of the techniques, but rather to use an average parameter set for each one and then test whether this approach can aid the decision process of fine tuning.
Three user groups are employed: general users, expert commercial photographers and fine artists. The groups are asked to
make a blind evaluation of a range of images (the original and the colour enhanced by the different methods); these are
randomly placed. All images are printed on the same printer using the same settings. Users are asked to identify their
preferred print in relation to lightness, tonal range, colour range, quality of detail and overall subjective preference.
A user-friendly digital image processing procedure: technical implementation
Author(s):
Rodney Shaw
A huge number of consumer digital images are considered unsatisfactory for general use due to their poor image
quality. Of this number a significant fraction actually possess an inherent quality, as acquired, and with relatively
straightforward manipulation could be enhanced and thus become entirely satisfactory according to consumer
preference. In fact around fifty percent of all digital images, including those acquired with high signal-to-noise ratio
using sophisticated capture devices, are positioned at a significant distance from their optimum preferred image-quality
state, as defined by the end user. An unconventional practical methodology has been developed that uses a novel image
processing procedure, yet leads to a solution that can readily be applied in real-time by any unskilled consumer to obtain
the preferred image-quality state for each individual digital image. The underlying scientific principles are now
explored in further detail, and these are related to the practical image-quality outcomes. Examples are given of the
transformations applied to specific image types in order to yield preferred image-quality, and these transformations are
related directly to the selection process as operated by an unskilled user of the methodology.
Production planning and automated imposition
Author(s):
Chris Tuijn
The production planning in a printing organization is quite complex since there are many parameters to consider such as
the work to be done, the available devices and the available resources. Printed products such as books, magazines,
leaflets etc. all consist of sections that will be printed on a press sheet. After printing, the sheets will be folded and cut.
As a last step, the different sections belonging to the same product will be collected and bound together (glued, stapled,
stitched etc.).
In the prepress environment, one traditionally uses special imposition templates to identify how the pages should be
imposed on the press sheet. The main drawback of this approach is that one needs to remake imposition templates each
time a parameter has been changed.
A new approach to overcome this problem has been proposed by the CIP4 graphic standards organization and consists of
specifying a so-called stripping specification of the sections on the sheet. In addition to the stripping information, one
also can specify how the sections can be combined physically. This is done in the so-called assembly specification. Both
stripping and assembly allow defining unambiguously how a product can be produced.
In the first part of the paper, we will explain in detail how the stripping and assembly specification can be used to further
automate the prepress, printing and finishing processes. In the second part, we will discuss how the production planning
itself can be automated. This assumes an in-depth knowledge of the device characteristics (press and finishing
equipment) and the available resources (plates, paper, ink).
The perfect photo book: hints for the image selection process
Author(s):
Reiner Fageth;
Wulf Schmidt-Sacht
An ever increasing number of digital images is being captured. This increase has several reasons. People are afraid of not "capturing the moment", and pressing the shutter is not directly linked to costs, as it was with silver halide photography. This behaviour seems convenient but can result in a dilemma for the consumer. This paper
presents tools designed to help the consumer overcome the time consuming image selection process while turning
the chore of selecting the images for prints or placing them automatically into a photo book into a fun experience.
Local contrast enhancement
Author(s):
Marco Bressan;
Christopher R. Dance;
Hervé Poirier;
Damián Arregui
We introduce a novel algorithm for local contrast enhancement. The algorithm exploits a background image which
is estimated with an edge-preserving filter. The background image controls a gain which enhances important
details hidden in underexposed regions of the input image. Our designs for the gain, edge-preserving filter and
chrominance recovery avoid artifacts and ensure the superior image quality of our results, as extensively validated
by user evaluations. Unlike previous local contrast methods, ours is fully automatic in the sense that it can be
directly applied to any input image with no parameter adjustment. This is because we exploit a trainable decision
mechanism which classifies images as benefiting from enhancement or otherwise. Finally, a novel windowed TRC
mechanism based on monotonic regression ensures that the algorithm takes only 0.3 s to process a 10 MPix
image on a 3 GHz Pentium.
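A minimal sketch of the gain idea (a Gaussian blur stands in for the paper's edge-preserving filter, and the gain law and constants are illustrative assumptions, not the published design):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_enhance(img, sigma=25.0, max_gain=2.5):
    """Background-controlled local contrast enhancement on a luminance
    image with values in [0, 1]."""
    background = gaussian_filter(img, sigma)   # background estimate
    # Dark backgrounds get a larger gain, so details hidden in underexposed
    # regions are amplified; bright regions are left nearly unchanged.
    gain = 1.0 + (max_gain - 1.0) * (1.0 - background)
    detail = img - background
    return np.clip(background + gain * detail, 0.0, 1.0)
```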
Color recovery from gray image based on analysis of wavelet packet sub-bands
Author(s):
Kyung-Woo Ko;
Oh-Seol Kwon;
Chang-Hwan Son;
Eun-Young Kwon;
Yeong-Ho Ha
This paper proposes a colorization method that uses wavelet packet sub-bands to embed color components. The
proposed method, firstly, involves a color-to-gray process, in which an input RGB image is converted into Y, Cb, and
Cr images, and a wavelet packet transform is applied to the Y image to divide it into 16 sub-bands. The Cb and Cr images are
then embedded into two sub-bands that include minimum information on the Y image. Once the inverse wavelet packet
transform is carried out, a new gray image with texture is obtained, where the color information appears as texture
patterns that are changed according to the Cb and Cr components. Secondly, a gray-to-color process is performed. The
printed textured-gray image is scanned and divided into 16 sub-bands using a wavelet packet transform to extract the Cb
and Cr components, and an inverse wavelet packet transform is used to reconstruct the Y image. Some of the original information is lost in the color-to-gray process; nonetheless, the details of the reconstructed Y image are almost the same as those of the original Y image because the sub-bands with minimum information were used to embed the Cb and Cr
components. The RGB image is then reconstructed by combining the Y image with the Cb and Cr images. In addition,
to recover color saturations more accurately, gray patches for compensating the characteristics of printers and scanners
are used. As a result, the proposed method can improve both the boundary details and the color saturations in recovered
color images.
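If one wanted to prototype the embedding step, a minimal sketch could look like the following (assumptions: the pywt library for the wavelet packet transform, a two-level decomposition giving 16 sub-bands, and lowest sub-band energy as the "minimum information" criterion; the paper's exact criterion and the scaling of the chroma data are not specified here):

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def embed_chroma(y, cb, cr, wavelet="haar"):
    """Color-to-gray sketch: hide Cb/Cr in two low-information sub-bands."""
    wp = pywt.WaveletPacket2D(data=y, wavelet=wavelet,
                              mode="periodization", maxlevel=2)
    nodes = wp.get_level(2)                      # the 16 level-2 sub-bands
    # Pick the two sub-bands carrying the least Y information (lowest energy).
    order = np.argsort([np.abs(n.data).sum() for n in nodes])
    sub_h, sub_w = nodes[0].data.shape
    for idx, chroma in zip(order[:2], (cb, cr)):
        scale = (sub_h / chroma.shape[0], sub_w / chroma.shape[1])
        nodes[idx].data = zoom(chroma, scale)    # overwrite a sub-band
    return wp.reconstruct(update=False)          # textured gray image
```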
Model-based deduction of CMYK surface coverages from visible and infrared spectral measurements of halftone prints
Author(s):
Thomas Bugnon;
Mathieu Brichon;
Roger David Hersch
The Yule-Nielsen modified Spectral Neugebauer reflection prediction model enhanced with an ink spreading model
provides high accuracy when predicting reflectance spectra from ink surface coverages. In the present contribution, we invert the model, i.e. we deduce the surface coverages of a printed color halftone patch from its measured reflectance spectrum. This process yields good results for cyan, magenta, and yellow inks, but unstable results when
simultaneously fitting cyan, magenta, yellow, and black inks due to redundancy between these four inks: black can be
obtained by printing either the black ink or similar amounts of the cyan, magenta, and yellow inks. To overcome this
problem, we use the fact that the black pigmented ink absorbs light in the infrared domain, whereas cyan, magenta, and
yellow inks do not. Therefore, with reflection spectra measurements spanning both the visible and infrared domain, it is
possible to accurately deduce the black ink coverage. Since there is no redundancy anymore, the cyan, magenta, yellow,
and pigmented black ink coverages can be recovered with high accuracy.
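A minimal sketch of the inversion step (assuming a forward model `predict_reflectance`, e.g. the ink-spreading-enhanced Yule-Nielsen modified Spectral Neugebauer model, that returns a visible-plus-infrared spectrum for given coverages; the helper is hypothetical):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_coverages(measured, predict_reflectance):
    """Recover CMYK surface coverages by fitting a spectral prediction
    model to a measured reflectance spectrum.

    `measured` spans both the visible and infrared wavelengths; since only
    the pigmented black ink absorbs in the IR, the black coverage is pinned
    down by the IR part of the residual and the CMY/K redundancy disappears.
    """
    def residual(coverages):
        return predict_reflectance(coverages) - measured

    fit = least_squares(residual, x0=np.full(4, 0.5), bounds=(0.0, 1.0))
    return fit.x  # estimated (c, m, y, k) coverages in [0, 1]
```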
Accurate spectral response measurement system for digital color cameras
Author(s):
Gao-Wei Chang;
Zong-Mu Yeh
In imaging systems, color plays an essential role in conveying and recording visual information from the real world. To
faithfully represent colors acquired by digital cameras, this paper proposes a spectral responsivity measurement system for such devices. For estimating the spectral responsivities of digital color cameras, a filter-based optical system is
designed with proper filter selections. Since the spectral filters primarily prescribe the optical characteristics of the
system, the filter consideration is important to the optical design of the system with the presence of noise. A theoretical
basis is presented to confirm that sophisticated filter selections can make this system as insensitive to noise as possible.
Also, we propose a filter selection method based on the orthogonal-triangular (QR) decomposition with column pivoting
(QRCP). To investigate the noise effects, we assess the estimation errors between the actual and estimated spectral
responsivities at different signal-to-noise ratio (SNR) levels of an eight-bit/channel camera. The experimental results demonstrate the effectiveness of this approach: the filter-based optical system with spectral filters selected by the QRCP-based method is much less sensitive to noise than systems using filters from other selections. The measurement accuracy is found to be fairly satisfactory.
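A minimal sketch of QRCP-based selection (assuming a matrix of candidate filter transmittance spectra; `select_filters` is a hypothetical helper, not the authors' code):

```python
import numpy as np
from scipy.linalg import qr

def select_filters(filter_spectra, k):
    """Choose k spectral filters via QR decomposition with column pivoting.

    `filter_spectra` is a (wavelengths x candidates) matrix.  QRCP orders the
    columns so that the first k pivots span the candidate set as independently
    as possible, which tends to keep the estimation of the camera's spectral
    responsivities well conditioned in the presence of noise.
    """
    _, _, pivots = qr(filter_spectra, pivoting=True)
    return pivots[:k]  # indices of the k selected candidate filters
```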
Interpolation for nonlinear Retinex-type algorithms
Author(s):
D. Shaked
In this paper we propose a method to speed up Retinex-type algorithms, which consist of a computationally intensive non-linear illumination estimation module followed by a relatively simple manipulation module. The speed-up is obtained by computing the illumination on a sub-sampled image. The challenge is then to interpolate a piece-wise smooth low resolution image. We present and analyze the trade-off between two types of interpolation methods. On the one hand, regular illumination interpolation preserves the Retinex-type output quality but may result in artifacts. On the other hand, a detail-preserving interpolation removes the artifacts but may compromise the output quality.
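A minimal sketch of the sub-sampled pipeline (a Gaussian blur stands in for the expensive non-linear illumination estimator, and plain bilinear interpolation plays the role of the "regular" interpolation discussed above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def fast_retinex(img, factor=4):
    """Retinex-type processing with illumination computed at low resolution.

    `img` is a float luminance image; `factor` is the sub-sampling ratio.
    """
    small = img[::factor, ::factor]                  # sub-sample the input
    illum_small = gaussian_filter(small, sigma=8.0)  # cheap illumination estimate
    # Regular (bilinear) interpolation back to full resolution; this is the
    # variant that preserves output quality but may show halo artifacts.
    illum = zoom(illum_small, factor, order=1)[:img.shape[0], :img.shape[1]]
    return img / (illum + 1e-6)                      # simple manipulation module
```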
Omnidirectional scene illuminant estimation using a multispectral imaging system
Author(s):
Shoji Tominaga;
Tsuyoshi Fukuda
A method is developed for estimating the omnidirectional distribution of the scene illuminant spectral power, including spiky fluorescent spectra. First, we describe a measuring apparatus consisting of a mirrored ball system and an imaging system using an LCT filter (or color filters), a monochrome CCD camera, and a personal computer. Second, the measuring system is calibrated and images representing the omnidirectional light distribution are created. Third, we present an algorithm for recovering the illuminant spectral power distribution from the image data. Finally,
the feasibility of the proposed method is demonstrated in an experiment on a classroom scene with different illuminant sources such as fluorescent light, incandescent light, and daylight. The accuracy of the estimated scene illuminants is shown in the cases of the 6-channel multi-band camera, 31-channel spectral camera, and 61-channel spectral camera.
Deducing ink-transmittance spectra from reflectance and transmittance measurements of prints
Author(s):
Mathieu Hébert;
Roger D. Hersch
The color of prints is mainly determined by the light absorption of the inks deposited on top of paper. In order to predict
the reflectance spectrum of prints, we use a spectral prediction model in which each ink is characterized by its spectral
transmittance. In the present paper, we consider two classical reflectance prediction models: the Clapper-Yule model and
the Williams-Clapper model. They rely on the same description of the multiple reflection and transmission of light, but use a
different description of the attenuation of light by the inks. In the Clapper-Yule model (non-orientational ink
attenuation), the orientation of light traversing the ink is not taken into account. In the Williams-Clapper model, it is
taken into account (orientational ink attenuation). In order to determine experimentally which of these two models is the
more suitable for a given type of print, we propose a method using the reflectance and the transmittance of prints. We
introduce a bimodal model, enabling spectral reflectance and transmittance predictions. Depending on whether the direction of light in the ink is taken into account, we obtain a non-orientational bimodal model or an orientational bimodal model. Using these two models, we deduce the ink transmittance spectrum from various reflectance and transmittance measurements performed on the same print, and compare the different deduced spectra. The model best adapted to the considered print is the one whose deduced spectra best match each other.
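For context, a standard statement of the Clapper-Yule model for a single-ink halftone of fractional coverage $a$ (not quoted from the paper) is

$$R(\lambda) = r_s + \frac{(1-r_s)(1-r_i)\,\rho(\lambda)\,\bigl(1-a+a\,t(\lambda)\bigr)^{2}}{1 - r_i\,\rho(\lambda)\,\bigl(1-a+a\,t(\lambda)\bigr)^{2}},$$

where $r_s$ is the specular reflection at the air-print interface, $r_i$ the internal reflection of diffuse light at the print-air interface, $\rho(\lambda)$ the reflectance of the paper substrate and $t(\lambda)$ the ink transmittance. Every passage through the ink is attenuated by the same factor regardless of direction; this is the non-orientational description, which the Williams-Clapper model refines by making the attenuation depend on the orientation of the light.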
Halftone independent methods for color drift correction
Author(s):
Vishal Monga;
Shen-ge Wang;
Raja Bala
Color printer calibration is the process of deriving correction functions for device CMYK signals, so that
the device can be maintained with a fixed known characteristic color response. Since the colorimetric
response of the printer can be a strong function of the halftone, the calibration process must be repeated for
every halftone supported by the printer. The effort involved in the calibration process thus increases
linearly with the number of halftoning methods. In the past few years, it has become common for high-end
digital color printers to be equipped with a large number of halftones, thus making the calibration process onerous. We propose a halftone independent method for correcting color (CMY/CMYK) printer drift. Our corrections are derived by measuring a small number of halftone independent fundamental binary patterns based on the 2×2 binary printer model by Wang et al. Hence, the required measurements do not increase
as more halftoning methods are added. The key novelty in our work is in identifying an invariant halftone
correction factor (HCF) that exploits the knowledge of the relationship between the true printer response
and the 2×2 predicted response for a given halftoning scheme. We evaluate our scheme both quantitatively
and qualitatively against the printer color correction transform derived with the printer in its "default
state". Results indicate that the proposed method is very successful in calibrating a printer across a wide
variety of halftones.
Controlling the error in spectral vector error diffusion
Author(s):
Jérémie Gerhardt;
Jon Y. Hardeberg
We aim to print spectral images using spectral vector error diffusion. Vector error diffusion produces good quality halftoned images, but diffusing the error through the image during the halftoning process is very slow due to error accumulation. In spectral images each pixel is a reflectance, and the accumulation of error can completely modify the shape of the reflectance. This phenomenon is amplified when data are outside the gamut of the printer. To control the diffusion of error and to speed up spectral vector error diffusion, we preprocess the spectral image by applying spectral gamut mapping, and test the shape of the reflectances by keeping them within a range of feasible values. Our spectral gamut mapping is based on the inversion of the spectral Neugebauer printer model. After preprocessing, the spectral image to be halftoned is the closest estimate the printer can make of it with the available colorants. We apply spectral vector error diffusion to spectral images and evaluate the halftoning by simulation. We use a seven-channel printer which we assume has stable inks and no dot gain (with a large set of inks we increase the variability of reflectances the printer can produce). Our preprocessing and error control have shown promising results.
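A minimal sketch of the halftoning step itself (Floyd-Steinberg weights are assumed here; `primaries` stands for the printable spectra obtained from the available colorants):

```python
import numpy as np

def spectral_vector_error_diffusion(image, primaries):
    """Spectral vector error diffusion.

    `image` has shape (H, W, B): one B-band reflectance per pixel, assumed
    already gamut-mapped.  `primaries` has shape (P, B): printable spectra.
    Each pixel is replaced by the nearest printable spectrum and the spectral
    error is diffused to unprocessed neighbors.
    """
    img = image.astype(float).copy()
    h, w, _ = img.shape
    out = np.zeros((h, w), dtype=int)   # index of the chosen primary per pixel
    weights = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]
    for y in range(h):
        for x in range(w):
            wanted = img[y, x]
            k = int(np.argmin(((primaries - wanted) ** 2).sum(axis=1)))
            out[y, x] = k
            err = wanted - primaries[k]          # spectral (vector) error
            for dy, dx, wgt in weights:
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    img[yy, xx] += wgt * err
    return out
```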
Holladay halftoning using super resolution encoded templates
Author(s):
Jon S. McElvain;
Charles M. Hains
A new method for halftoning using high resolution pattern templates is described that expands the low level rendering capabilities of engines that support this feature. This approach, denoted super resolution encoded halftoning (SREH), is an extension of the Holladay concept, and provides a compact way to specify high resolution dot growth patterns using a lower resolution Holladay brick. Fundamentally, this new halftoning method involves using the SRE patterns as building blocks for constructing clustered dot growth assemblies. Like the traditional Holladay dot description, the SRE halftone is characterized by a size, height, and shift, all of which are specified at the lower resolution. Each low resolution pixel position in the SRE halftone brick contains a pair of lists. The first of these is a list of digital thresholds at which a transition in SRE patterns occurs for that pixel position, and the second is the corresponding list of SRE codes. For normal cluster dot growth sequences, this provides a simple and compact mechanism for specifying higher resolution halftones. Techniques for emulating traditional high resolution Holladay dots using SREH are discussed, including mechanisms for choosing substitutions for patterns that do not exist among the available SRE patterns.
The hybrid screen: improving the breed
Author(s):
Changhyung Lee;
Jan P. Allebach
The hybrid screen is a halftoning method that generates stochastic dispersed dot textures in highlights and
periodic clustered dot textures in midtones. Each tone level is sequentially designed from highlight to midtone
by applying an iterative halftoning algorithm such as direct binary search (DBS). By allowing random seeding
and swap-only DBS in a predefined core region within each microcell, we design each level while satisfying
the stacking constraint and guaranteeing a smooth transition between different levels. This paper focuses on a
number of enhancements to the original hybrid screen and their impacts on print quality. These include analytical
determination of the human visual system filter in the spatial domain for DBS, multilevel screen design either by
extending a bilevel screen or by directly generating a multilevel screen on the high resolution grid. Our results
show that the multilevel screen design method has a direct impact on hybrid screen design parameters such as
the optimal core size. We also extend the whole design process to color by jointly optimizing the color screens
using color DBS. Our results demonstrate a significant improvement in the highlights over halftones generated
by independently designed screens.
Ranked dither for robust color printing
Author(s):
Maya R. Gupta;
Jayson Bowen
A spatially-adaptive method for color printing is proposed that is robust to printer instabilities, reproduces
smooth regions with the quality of ordered dither, reproduces sharp edges significantly better than ordered
dither, and may be less susceptible to moire. The new method acts in parallel on square, non-overlapping blocks
of each color plane of the image. For blocks with low spatial activity, standard ordered dither is used, which
ensures that smooth regions are printed with acceptable quality. Blocks with high spatial activity are halftoned
with a proposed variant of dither, called ranked dither. Ranked dither uses the same ordered dither matrix as standard dither, but the ranks of the thresholds are used rather than the thresholds themselves. Ranked dither is more sensitive than ordered dither to edges and reproduces sharp edges more accurately. Experiments were done with standard ordered dither masks of sizes 130, 130, 128, and 144 for the cyan, magenta, yellow, and black planes, respectively. Both on screen and in print, the results were sharper halftones. The entire process can be
implemented in parallel and is not computationally expensive.
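A minimal sketch of the rank substitution (threshold scaling and tiling details are illustrative assumptions, not the authors' parameters):

```python
import numpy as np
from scipy.stats import rankdata

def ranked_dither_block(block, dither_mask):
    """Ranked dither on one grayscale block (values in 0..255).

    Instead of comparing pixels against the dither mask thresholds directly
    (ordered dither), each threshold is replaced by its rank, rescaled to the
    intensity range.  This makes the effective thresholds uniformly
    distributed, so local edges flip more dots and reproduce more sharply.
    """
    ranks = rankdata(dither_mask, method="ordinal").reshape(dither_mask.shape)
    thresholds = (ranks - 0.5) / ranks.size * 255.0   # uniform in (0, 255)
    # Tile the rank-threshold matrix across the block and binarize.
    reps = (int(np.ceil(block.shape[0] / ranks.shape[0])),
            int(np.ceil(block.shape[1] / ranks.shape[1])))
    th = np.tile(thresholds, reps)[:block.shape[0], :block.shape[1]]
    return (block > th).astype(np.uint8)
```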
Rank-ordered error diffusion: method and applications
Author(s):
Robert P. Loce;
Beilei Xu
We present a specialized form of error diffusion that addresses certain long-standing problems associated with operating
on images possessing halftone structure as well as other images with local high contrast. For instance, when rendering
an image to printable form via quantization reduction, image quality defects often result if that image is a scanned
halftone. Rendering such an image via conventional error diffusion typically produces fragmented dots, which can
appear grainy and be unstable in printed density. Rendering by simple thresholding or rehalftoning often produces
moiré, and descreening blurs the image. Another difficulty arises in printers that utilize a binary image path, where an
image is rasterized directly to halftone form. In that form it is very difficult to perform basic image processing
operations such as applying a digital tone reproduction curve. The image processing operator introduced in this paper,
rank-order error diffusion (ROED), has been developed to address these problems. ROED utilizes brightness ranking of
pixels within a diffusion mask to diffuse quantization error at a pixel. This approach to diffusion results in an
image-structure-adaptive quantization with many useful properties. The present paper describes the basic methodology of
ROED as well as several applications.
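A toy Python sketch of the rank-order idea follows; the causal diffusion mask, the rule of sending error to the neighbors ranked closest in brightness, and the weights are illustrative assumptions rather than the exact ROED formulation.

    import numpy as np

    def roed(img, levels=2):
        """img: gray values 0-255; returns a quantized image."""
        img = img.astype(float).copy()
        out = np.zeros_like(img)
        step = 255.0 / (levels - 1)
        mask = [(0, 1), (1, -1), (1, 0), (1, 1)]      # causal neighbors
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = round(old / step) * step
                out[y, x] = new
                err = old - new
                # rank in-bounds neighbors by closeness in brightness
                cand = sorted((abs(img[y+dy, x+dx] - old), y+dy, x+dx)
                              for dy, dx in mask
                              if 0 <= y+dy < h and 0 <= x+dx < w)
                for k, (_, ny, nx) in enumerate(cand[:2]):
                    img[ny, nx] += err * (0.75 if k == 0 else 0.25)
        return out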
AM/FM halftoning: improved cost function and training framework
Author(s):
Seong Wook Han;
Mehul Jain;
Roy Kumontoy;
Charles Bouman;
Peter Majewicz;
Jan P. Allebach
Show Abstract
FM halftoning generates good tone rendition but it is not appropriate for electrophotographic (EP) printers due
to the inherent instability of the EP process. Although AM halftoning yields stable dots, it is susceptible to moiré
and contouring artifacts. The AM/FM halftoning algorithm combines the strengths of both approaches. The
resulting halftone textures have green noise spectral
characteristics. In this paper, we present an improved training procedure for the AM/FM halftoning algorithm.
Since most of the green noise energy is concentrated in the middle frequencies, the tone dependent error diffusion
(TDED) parameters (weights and thresholds) are optimized using a new cost function with normalization to
distribute the cost evenly over all frequencies. With the new cost function, we can obtain image quality that is
very close to that of the search-based direct binary search (DBS) dispersed-dot halftoning algorithm. The cost function
for training the AM part is also modified by penalizing variation in measured tone value across the multiple
printer conditions for each combination of dot size and dot density.
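One plausible reading of the normalized cost is sketched below in Python: the HVS-weighted error spectrum is normalized within radial frequency bands so that no band dominates the total. The band construction and normalization term are assumptions standing in for the paper's exact cost function.

    import numpy as np

    def radial_bands(shape, nbands):
        """Assign each FFT bin to a radial frequency band."""
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        r = np.sqrt(fy**2 + fx**2)
        return np.minimum((r / r.max() * nbands).astype(int), nbands - 1)

    def normalized_cost(halftone, tone, hvs, nbands=16):
        """hvs: HVS magnitude response sampled on the same FFT grid."""
        spec = np.abs(np.fft.fft2(halftone.astype(float) - tone)) ** 2
        bands = radial_bands(spec.shape, nbands)
        cost = 0.0
        for b in range(nbands):
            sel = bands == b
            # each band's HVS-weighted energy, normalized by its raw energy
            cost += (hvs[sel]**2 * spec[sel]).sum() / (spec[sel].sum() + 1e-12)
        return cost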
Resolution enhancement techniques for halftoned images
Author(s):
Byong Tae Ryu;
Jong Ok Lee;
Choon-Woo Kim;
Ho Keun Lee;
Sang Ho Kim
Show Abstract
Recently, the speed and resolution of electrophotographic printer engines have improved significantly. In today's
market, it is not difficult to find low- to mid-end electrophotographic printers with spatial resolution greater than 600
dpi and/or bit depth greater than 1 bit. Printing speed is determined by the processing time at the computer, the data
transmission time between computer and printer, and the processing and printing time at the printer. When halftoning is
performed on the computer side, the halftoned data is compressed and sent to the printer. In this case, an increase in
spatial and bit-depth resolution increases the data size to be transmitted and the memory required at the printer. For a
high-speed printer, the increased transmission time may limit the throughput of the imaging chain. One possible solution
to this problem is to develop resolution enhancement techniques. In this paper, a fast and efficient spatial resolution
enhancement technique is proposed. Its objectives are to reduce the data size for transmission and to minimize image
quality deterioration. In the proposed technique, the number of black pixels in the halftoned data is binary coded for
data reduction. At the printer, a black pixel placement algorithm is applied to the binary coded data. For non-edge areas,
the screen order is used for black pixel placement. For areas identified as edges, the locations of black pixels are
selected by an edge order designed by a genetic algorithm.
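The data-reduction step lends itself to a short sketch. The Python below transmits only the black-pixel count of each block and re-places that many dots in screen order on the printer side; the edge-order branch designed by the genetic algorithm is omitted, and the block size, screen, and image dimensions (multiples of the block size) are assumptions.

    import numpy as np

    def encode_counts(halftone, block=4):
        """halftone: boolean dot map; returns per-block black-pixel counts."""
        h, w = halftone.shape
        return np.array([[int(halftone[i:i+block, j:j+block].sum())
                          for j in range(0, w, block)]
                         for i in range(0, h, block)])

    def decode_counts(counts, screen, block=4):
        """screen: block x block threshold matrix defining the fill order."""
        order = screen.ravel().argsort()          # screen order of the cell
        bh, bw = counts.shape
        out = np.zeros((bh * block, bw * block), dtype=bool)
        for bi in range(bh):
            for bj in range(bw):
                cell = np.zeros(block * block, dtype=bool)
                cell[order[:counts[bi, bj]]] = True   # turn on first n dots
                out[bi*block:(bi+1)*block,
                    bj*block:(bj+1)*block] = cell.reshape(block, block)
        return out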
Contribution to quality assessment of digital halftoning algorithms
Author(s):
Ferruccio Cittadini;
Michaël Remita;
Jacques Pervillé;
Stéphane Berche;
Mohamed Ben Chouikha;
Hans Brettel;
Georges Alquié
Show Abstract
Many new proposals are continually published in the halftoning domain. Alas, the demonstration of the merit of a
proposed method is often limited to a few tests favourable to it, together with images showing the defects of the other
halftoning methods.
The halftoning community needs to be able to compare a halftoning method with the innovations that appear in this
domain. A complete and measured evaluation of quality requires a well defined set of test images and metrics with
which to evaluate the algorithms.
This paper proposes a protocol for the quality assessment of digital halftoning algorithms that can be used to compare
one algorithm to another.
It discusses the assessment of halftoner quality, analyzes perceived image quality concepts, and defines the technical
criteria that a good halftoner must meet. A first sketch of a simple quality assessment protocol, composed of test images
and quality metrics, is proposed.
This protocol could be used to provide objective results for newly proposed halftoning algorithms.
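A metric of the kind such a protocol might include is sketched below: an HVS-weighted RMS error between the continuous-tone original and the halftone, with a Gaussian low-pass standing in for the HVS filter. Both choices are assumptions, not the protocol's definitions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hvs_weighted_rmse(original, halftone, sigma=1.5):
        """Compare what the eye would see after low-pass filtering."""
        o = gaussian_filter(original.astype(float), sigma)
        h = gaussian_filter(halftone.astype(float), sigma)
        return float(np.sqrt(np.mean((o - h) ** 2)))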
Uniform rosette for moire-free color halftoning
Author(s):
Shen-Ge Wang
Show Abstract
By selecting halftone frequencies from high-order harmonics of two common rosette fundamentals for all color
separations, a true moiré-free color halftoning can be achieved. With such screen configurations, the interference
between any two frequency components, fundamentals or high-order harmonics, of different colors will also result in a
linear combination of the two rosette fundamentals. Thus, no visible interference, or moiré, appears in the output.
The halftone outputs are two-dimensionally repeated patterns that appear as visually pleasant uniform rosettes. The
uniform-rosette configurations can be implemented by single-cell non-orthogonal halftone screens for digital halftoning.
Unlike "dot-on-dot" screening, or using one screen for all colors, uniform-rosette halftoning is robust to mis-registration
between color separations. Several designs of uniform-rosette halftone screens have been successfully applied to Xerox
iGen3 color printers for high-quality color reproduction.
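The moiré-free condition can be checked numerically: if every screen frequency is an integer combination of the two rosette fundamentals, any beat (difference) vector lands back on the rosette lattice. The fundamentals and harmonics in the Python sketch below are invented for illustration.

    import numpy as np

    def on_rosette_lattice(freq, f1, f2, tol=1e-6):
        """True if freq is an integer combination of f1 and f2."""
        coeffs = np.linalg.solve(np.column_stack([f1, f2]), freq)
        return bool(np.all(np.abs(coeffs - np.round(coeffs)) < tol))

    f1 = np.array([75.0, 25.0])        # rosette fundamentals (cycles/inch)
    f2 = np.array([-25.0, 75.0])
    cyan = 2 * f1 + f2                 # high-order harmonics chosen as
    magenta = f1 + 2 * f2              # screen frequencies per separation
    beat = cyan - magenta              # interference between separations
    assert on_rosette_lattice(beat, f1, f2)   # the beat is a rosette term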
Implementation of FGS video encoding based on H.264
Author(s):
Qiwei Lin;
Gui Feng
Show Abstract
In the H.264 video coding standard, a hybrid coding framework is adopted. It introduces some new algorithms and
modifies several aspects of the encoding scheme, which markedly improves coding efficiency. However, the H.264
standard does not support FGS (Fine Granular Scalable) encoding, so an H.264-based self-adaptive FGS encoding
scheme (H.264-FGS) is proposed in this paper. In this scheme, the base layer keeps the H.264 encoder architecture,
which consists of motion estimation, motion compensation, intra prediction, integer transformation, loop filtering,
context-based arithmetic encoding, etc. The base layer yields the base code stream of FGS. Subtracting the base-layer
reconstruction from the original image gives the residual error; after the DCT transform and variable length encoding
compress this residual, we obtain the enhancement code stream of FGS.
Compared with the original MPEG-4 FGS encoding scheme, the proposed FGS encoding scheme increases coding
efficiency by 1~3 dB while keeping all the properties that MPEG-4 FGS encoding technology provides.
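The enhancement-layer idea can be sketched in isolation: subtract the base-layer reconstruction from the original, transform the residual, and emit the coefficient bits plane by plane so the stream can be truncated at any point (the fine granularity). In the Python below, the whole-frame DCT and the plane count are simplifying assumptions, and all H.264 details are omitted.

    import numpy as np
    from scipy.fftpack import dct

    def fgs_bitplanes(original, base_recon, nplanes=6):
        """Return residual bit-planes (most significant first) plus signs."""
        residual = original.astype(float) - base_recon.astype(float)
        coeffs = dct(dct(residual, axis=0, norm='ortho'),
                     axis=1, norm='ortho')
        mag = np.abs(coeffs).astype(int)
        planes = [(mag >> p) & 1 for p in range(nplanes - 1, -1, -1)]
        return planes, np.sign(coeffs)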
A watermarking algorithm for halftone images based on a human vision system model
Author(s):
Xiaoxia Wan;
Chengwei Hu;
Jinglin Xu
Show Abstract
This paper discusses a digital watermarking algorithm that embeds a watermark in a halftone image during the
halftoning process. The algorithm is based on a human vision system (HVS) model and minimizes the visual error
between the watermarked halftone image and the original continuous-tone image using an iterative binary search
method. The algorithm can embed a large amount of information in the halftone image while keeping the watermark
invisible; on the other hand, extraction of the watermark is reliable. All experiments indicate that this algorithm can
embed more information than other algorithms and has strong robustness, so it can resist unintentional or intentional
attacks caused by printing, scanning, smudging, cropping, etc.; at the same time, the watermarked halftone image
retains the advantages of watermark invisibility and high visual quality.
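One way to picture HVS-guided embedding is the toy Python below: a bit is encoded in the parity of a block's dot count, and the single pixel flip that least increases the HVS-filtered error against the continuous-tone original is chosen. The parity scheme and the Gaussian stand-in for the HVS filter are assumptions, not the authors' algorithm.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def embed_bit(halftone, original, block, bit, sigma=1.5):
        """halftone: boolean dot map; block: tuple of slices; bit: 0 or 1."""
        if int(halftone[block].sum()) % 2 == bit:
            return halftone                   # parity already encodes the bit
        best, best_err = None, np.inf
        for idx in np.ndindex(halftone[block].shape):
            trial = halftone.copy()
            trial[block][idx] ^= True         # try a single-pixel flip
            err = np.mean((gaussian_filter(trial.astype(float), sigma) -
                           gaussian_filter(original.astype(float), sigma))**2)
            if err < best_err:
                best, best_err = trial, err
        return best                           # flip with least visible error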
Compression of color images using a hologram of gray tones
Author(s):
Alejandro Restrepo-Martínez;
Roman Castañeda
Show Abstract
A strategy to compress color images into digital holograms of gray tones was developed. The procedure codifies the
information of each channel of the RGB model in a system of fringes, producing a gray-level image denominated a
"hologram". The gray-tone intensity of the fringes carries the signal of the channel; in this manner the amplitude
information of each channel of the color image is stored. The angles of the fringes define how the information of each
channel is packaged.
The sum of the different gray fringe images is the hologram, which serves as the "object" for a digital holographic
system. The RGB channels appear as high-intensity peaks of information in the hologram's Fourier space, and each
channel can be extracted by filtering its peaks.
Parameters such as spatial frequency, visibility, direction, and quality of the fringes affect the quality of the
reconstructed image. Nevertheless, the proposed methodology achieves a 3:1 compression ratio for color images, and
the same process also makes it possible to compress several spectral bands into a single color image.
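The fringe coding can be sketched as carrier multiplexing: each RGB channel modulates a cosine fringe at its own angle, the sum forms the gray "hologram", and each channel reappears as a peak in Fourier space that a band-pass filter isolates. The carrier frequency, angles, and filter bandwidth in the Python below are illustrative assumptions.

    import numpy as np

    ANGLES = (0.0, np.pi / 3, 2 * np.pi / 3)   # one carrier angle per channel

    def make_hologram(rgb, freq=0.15):
        """rgb: h x w x 3 array in [0, 1]; returns a gray fringe image."""
        h, w = rgb.shape[:2]
        y, x = np.mgrid[0:h, 0:w]
        holo = np.zeros((h, w))
        for c, a in enumerate(ANGLES):
            carrier = np.cos(2*np.pi*freq*(x*np.cos(a) + y*np.sin(a)))
            holo += rgb[..., c] * (1 + carrier) / 2   # amplitude modulation
        return holo

    def recover_channel(holo, angle, freq=0.15, bw=0.05):
        """Band-pass the carrier peak of one channel and demodulate it."""
        fy = np.fft.fftfreq(holo.shape[0])[:, None]
        fx = np.fft.fftfreq(holo.shape[1])[None, :]
        d = np.hypot(fx - freq*np.cos(angle), fy - freq*np.sin(angle))
        return 2 * np.abs(np.fft.ifft2(np.fft.fft2(holo) * (d < bw)))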