Proceedings Volume 1460

Image Handling and Reproduction Systems Integration

Walter R. Bender, Wil Plouffe
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 1 August 1991
Contents: 3 Sessions, 14 Papers, 0 Presentations
Conference: Electronic Imaging '91 (1991)
Volume Number: 1460

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • Color Imaging
  • Image Management
  • Imaging Tools for Design
  • Image Management
Color Imaging
Integrated color management: the key to color control in electronic imaging and graphic systems
Joann M. Taylor
The continuing evolution of color in both established and new electronic imaging and graphics markets, together with the parallel growth of user expectations and the resulting demand for WYSIWYG color across devices, points directly to the need for a unified color management approach that addresses the complex, multi-level requirements of system implementors, application and hardware vendors, and users. Such a system should provide extensible, device- and computing-platform-independent access to state-of-the-art color functionality. The principal elements of such an integrated system are discussed.
Subsampled device-independent interchange color spaces
James M. Kasson, Wil Plouffe
This paper extends the authors' work on device-independent interchange color spaces. The authors add the additional requirement that a desirable interchange space exhibit good performance when two of its dimensions are encoded at lower spatial resolution than the other constituent, either by subsampling or other methods. Many important bandwidth compression algorithms use such encodings. The authors define luminance contamination, a new measure for interchange space performance under subsampling, supply a test for this property, and report results for a set of proposed device-independent color spaces. The occurrence of large, infrequently occurring errors in RGB color spaces with low luminance contamination is also reported. The authors discuss the conversion of a device-independent representation to popular device spaces by means of trilinear interpolation and find, for a given accuracy, some spaces require substantially fewer lookup table entries than do others.
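The conversion step the abstract closes with--mapping a device-independent color through a sampled lookup table by trilinear interpolation--can be sketched as follows. This is a minimal illustration, not the authors' implementation; the LUT layout and the normalization of inputs to the unit cube are assumptions.

```python
import numpy as np

def trilinear_lookup(lut, color):
    """Convert one color via a 3D lookup table using trilinear interpolation.

    lut:   array of shape (n, n, n, 3) sampling the device space on a
           regular grid over the interchange space's unit cube (assumed).
    color: interchange-space coordinates scaled to [0, 1]^3.
    """
    n = lut.shape[0]
    # Scale into grid coordinates; split into a cell index and a fraction.
    g = np.clip(np.asarray(color, dtype=float), 0.0, 1.0) * (n - 1)
    i = np.minimum(g.astype(int), n - 2)
    f = g - i
    # Blend the 8 lattice points surrounding the cell.
    out = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * lut[i[0] + dx, i[1] + dy, i[2] + dz]
    return out
```

The paper's accuracy comparison amounts to asking how small `n` can be, in each candidate space, before interpolated results drift visibly from the true device values.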
Gamut mapping computer-generated imagery
William E. Wallace, Maureen C. Stone
Existing device-independent color representations are based on standards established for identifying single colors viewed on a neutral background or quantifying small differences between similar colors. However, when reproducing images or illustrations, the relationship between colors is often more important than the fidelity of individual colors. Furthermore, clients often choose to adapt their output to individual device limitations to get the best reproduction for each medium. The process of maintaining the appearance of an image while mapping the colors to fit the gamut of the target device is called gamut mapping. This paper describes nonlinear adjustment of the image colors in lightness to control the dynamic range, and in chroma to bring overly saturated colors inside the target gamut. The authors perform calculations in the CIE L*a*b* color space. Benefits and limitations of this approach are described.
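The chroma adjustment described above can be sketched with a simple clip toward the neutral axis at constant lightness and hue. This is a stand-in for the paper's nonlinear mapping, and the gamut model (`max_chroma_for`) is an assumed input, not something the abstract specifies.

```python
import math

def compress_chroma(L, a, b, max_chroma_for):
    """Pull an out-of-gamut L*a*b* color toward the neutral axis,
    preserving lightness and hue (a minimal sketch, not the authors'
    nonlinear mapping).

    max_chroma_for: assumed function (L, hue_angle) -> largest chroma
    the target device can reproduce at that lightness and hue.
    """
    chroma = math.hypot(a, b)          # C* = sqrt(a^2 + b^2)
    hue = math.atan2(b, a)             # hue angle in radians
    limit = max_chroma_for(L, hue)
    if chroma <= limit:
        return L, a, b                 # already inside the target gamut
    # Clip chroma to the gamut boundary; L and hue are unchanged.
    return L, limit * math.cos(hue), limit * math.sin(hue)
```

A hard clip like this preserves in-gamut colors exactly but flattens detail near the boundary, which is one reason the paper uses smoother nonlinear adjustments instead.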
Summary of color definition activity in the graphic arts
This paper discusses the status of graphic arts standards activities within the United States and the international community relating to color definition. A brief description of the organization of standards groups that are involved in graphic arts activity is given. The principal focus is on the activities of Working Group (WG) 11, Color, of ANSI Standards Committee IT8 (Digital Data Exchange Standards). Additional work being carried out in ANSI Committee CGATS (Committee for Graphic Arts Technologies Standards) is also discussed.
Image Management
Organization of a system for managing the text and images that describe an art collection
Frederick C. Mintzer, John D. McFall
For more than a year, the authors have been involved in building a workstation-based system to manage the information that describes a collection of art. This information, for each work of art, includes text that describes that work of art and color images of it. An important requirement of this system is that it be able to display images of the art with 'faithful color.' In this paper, the requirements of this system are described, especially those that relate to images, and how they affect the system design. Other aspects of the system described in the paper include the system organization, the image storage hierarchy, the color calibration approach, the image architecture, and the organization of the color calibration information.
Laser disk: a practicum on effective image management
Richard F. Myers
Image enhancement using nonuniform sampling
Walter R. Bender, Charles J. Rosenberg
This paper describes the application of a lossy digital image compression scheme to synthetic image enhancement for image scaling. The lossy compression scheme employed is a non-uniform sampling and interpolation algorithm. During image analysis, more sample points are taken from complex regions than from simple regions of the image. Upon resynthesis, missing samples are interpolated. One consequence of the algorithm is that it imparts a structure to the image. The utility of this structure to several methods of subsequent enhanced image scaling by the addition of synthetic resolution is described. These techniques are well suited to image systems where scalable and extensible resolution are desirable features.
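The analysis/resynthesis cycle described above--keep more samples where the signal is complex, interpolate the rest--can be sketched in one dimension. The threshold test below is an assumed stand-in for the paper's complexity measure, and the function names are illustrative only.

```python
def nonuniform_samples(signal, threshold):
    """Pick sample indices adaptively: always keep the endpoints, and keep
    an interior point only where the change since the last kept sample
    exceeds `threshold` (an assumed stand-in for a complexity measure)."""
    kept = [0]
    for i in range(1, len(signal) - 1):
        if abs(signal[i] - signal[kept[-1]]) > threshold:
            kept.append(i)
    kept.append(len(signal) - 1)
    return kept

def linear_resynthesis(signal, kept):
    """Resynthesize the full signal by linearly interpolating the
    missing samples between consecutive kept ones."""
    out = [0.0] * len(signal)
    for lo, hi in zip(kept, kept[1:]):
        for i in range(lo, hi + 1):
            t = (i - lo) / (hi - lo) if hi != lo else 0.0
            out[i] = (1 - t) * signal[lo] + t * signal[hi]
    return out
```

Flat regions collapse to their endpoints while edges retain samples, which is the structure the paper then exploits when scaling the image.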
Applications of text-image editing
Steven C. Bagley, Gary E. Kopec
The most common approach to processing text which originates as a scanned document image is format conversion, in which procedures such as page segmentation and character recognition are used to convert the scanned text into a structured symbolic description which can be manipulated by a conventional text editor. While this approach is attractive in many respects, there are situations in which complete recognition and format conversion is either unnecessary or very difficult to achieve with sufficient accuracy. This paper presents several applications illustrating an alternative approach to scanned text processing in which document processing operations are performed on image elements extracted from the scanned document image. The central and novel insight is that many document processing operations may be implemented directly by geometrical operations on image blobs, without explicit knowledge of the symbolic character labels (that is, without automatic character recognition). The applications are implemented as part of image EMACS, an editor for binary document images, and include editing multilingual documents, reformatting text to a new column width, differential comparison of two versions of a document, and preprocessing an image prior to character recognition.
Imaging Tools for Design
Capturing multimedia design knowledge using TYRO, the constraint-based designer's apprentice
Ronald L. MacNeil
TYRO is a visual programming environment that uses a case-based reasoning approach to capturing and reusing knowledge about the design of multimedia presentations. Case-based reasoning assumes that people solve problems by remembering relevant scenarios and modifying or adapting them to the situation at hand, then storing the new approach away for future reuse. In TYRO, the designer constructs a case library by demonstrating solutions to prototypical multimedia problems and defining constraining relations between object sequences. Adaptation and augmentation of the case library take place as trial presentations reveal failure conditions. The designer constructs rule objects, which are combinations of condition objects and action objects: a condition object triggers when a failure condition is detected, and the associated action objects, or rules of thumb, fire to revise the constraint network or the sequence of objects. The resulting cases are stored and indexed for future use by the system.
Temporal adaptation of multimedia scripts
Laura Robin
Hypermedia systems structure large quantities of multimedia information into webs of associated concepts. Users can selectively explore these concepts, limiting the quantity of information viewed to suit their personal goals for using the system. But the flexibility of navigational freedom creates a problem for hypermedia designers--that of incorporating mandatorily viewable information into the web. One frequently used solution is to script information illustrating multiple levels of detail into a single non-interruptible presentation associated with a single node of the web. This technique affords the designer control over what the user sees, yet does so at the expense of the user's control over time spent with the system. This paper addresses the need for adaptivity at the presentation level in hypermedia systems. A two-component system is described for the authoring and temporal adaptation of multimedia scripts used in hypermedia applications. With the authoring component, a user (taking on the role of hypermedia designer) can design multimedia presentations using text, still images, sound, animation, and video objects. Using graphic interfaces, the user can format these objects in space and time, modify their media-specific attributes, and semantically delineate these objects as pertaining to specific levels of detail. This delineation creates hierarchical structures which can then be selectively parsed by the system to create variations on the original presentation. The user can save all formatting and chunking decisions in a script, creating a persistent record of the presentation. These scripts can then be associated with the nodes of a hypermedia system, so that when a hypermedia user invokes a node, some version of the scripted presentation is played back. The hypermedia component is responsible for determining the version content of a presentation. Based on user-imposed constraints (hypermedia users can dynamically input how much time they wish to spend with the system, as well as interest levels associated with represented concepts), the system can adjust the amount of detail shown in a given presentation, thus contracting or expanding the presentation over time.
Transparency and blur as selective cues for complex visual information
Grace Colby, Laura Scholl
Image processing techniques are applied that enable the viewer to control both the gradients of focus and transparency within an image. In order to demonstrate this concept, the authors use a geographical map whose features are organized as layers of information. This allows a user to select layers related to a particular area of interest. For example, someone interested in air transportation may choose to view airports, airport labels, and airspace in full focus. Relevant layers such as the roads and waterways are also visible but appear somewhat blurry and transparent. The user's attention is drawn to information that is clearly in focus and opaque; blurry transparent features are perceived to be in the background. Focus and transparency produce effective perceptual cues because of the human eye's ability to perceive contrast and depth. The control of focus and transparency is made accessible through a graphic interface based on a scale of importance. Rather than specifying individual focus and transparency settings, the user specifies the importance of the individual feature layers according to their needs for the task at hand. The importance settings are then translated into an appropriate combination of transparency and focus gradients for the layers within the image.
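The translation from an importance setting to rendering cues could be as simple as the linear mapping below. The specific coefficients and the linear form are assumptions for illustration; the abstract leaves the actual translation to the interface design.

```python
def layer_cues(importance):
    """Map a layer's importance in [0, 1] to (opacity, blur_radius):
    important layers render opaque and sharp, unimportant ones faint
    and blurry. The linear mapping and its constants are assumed, not
    taken from the paper."""
    opacity = 0.2 + 0.8 * importance        # never fully invisible
    blur_radius = 8.0 * (1.0 - importance)  # pixels of Gaussian blur
    return opacity, blur_radius
```

Each layer would then be blurred by `blur_radius` and composited at `opacity`, so a single importance slider drives both perceptual cues together.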
Adaptive typography for dynamic mapping environments
Didier Bardon
When typography moves across a map, it passes over areas of different colors, densities, and textures. In such a dynamic environment, the appearance of the typography must be constantly adapted to remain discernible against every new background. Adaptive typography undergoes two adaptive operations: background control and contrast control. Background control prevents the features of the map (edges, lines, abrupt changes of density) from destroying the integrity of the letterform. This is achieved by smoothing the features of the map in the area where a text label is displayed. The modified area is limited to the space covered by the characters of the label. Measures are taken to ensure that the smoothing operation does not introduce any new visual noise. Contrast control ensures that there are sufficient lightness differences between the typography and its ever-changing background. For every new situation, background color and foreground color are compared, and the foreground color lightness is adjusted according to a chosen contrast value. Criteria and methods for choosing the appropriate contrast value are presented, as well as the experiments that led to them.
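The contrast-control step can be sketched as pushing the label's lightness away from the background's until a chosen difference is reached. The 0-100 lightness scale, the function name, and the side-selection heuristic are assumptions for illustration, not the paper's method.

```python
def adjust_label_lightness(bg_L, fg_L, min_contrast):
    """Adjust a label's lightness so it differs from the background
    by at least `min_contrast` (all values on an assumed 0-100 scale).
    A sketch of the contrast-control idea, not the paper's algorithm."""
    if abs(fg_L - bg_L) >= min_contrast:
        return fg_L                      # already sufficiently distinct
    lighter = bg_L + min_contrast
    darker = bg_L - min_contrast
    # Prefer the lighter side over dark backgrounds, the darker side
    # over light ones; fall back when a side runs off the scale.
    if lighter <= 100 and (bg_L <= 50 or darker < 0):
        return lighter
    return max(darker, 0.0)
```

Run per frame as the label moves, this keeps the text legible while changing only its lightness, so hue-based label coding survives.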
Simulating watercolor by modeling diffusion, pigment, and paper fibers
David Small
This paper explores a parallel approach to the problem of predicting the actions of pigment and water when applied to paper fibers. This work was done on the Connection Machine II, whose parallel architecture allows one to cast the problem as a complex cellular automaton. One defines simple rules for the behavior of each cell based on the state of that cell and its immediate neighbors. By repeating the computation for each cell in the paper over many time steps, elaborate and realistic behaviors can be achieved. The simulation takes into account diffusion, surface tension, gravity, humidity, paper absorbency, and the molecular weight of each pigment. At each time step a processor associated with each fiber in the paper computes water and pigment gradients, surface tension and gravitational forces, and decides if there should be any movement of material. Pigment and water can be applied and removed (blotting) with masks created from type or scanned images. Use of a parallel processor simplifies the creation and testing of software, and variables can be stored and manipulated at high precision. The resulting simulation runs at approximately one-tenth real time.
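The cellular-automaton core--each cell updating from local gradients only--can be sketched for the water-diffusion term alone. This serial loop stands in for one synchronous CM-II time step and ignores pigment, surface tension, and gravity; the `rate` constant is an assumption.

```python
def diffuse_step(water, rate=0.1):
    """One synchronous cellular-automaton update: each cell exchanges a
    fraction of the local gradient with its four neighbors. A sketch of
    the diffusion term only; the rate constant is assumed."""
    rows, cols = len(water), len(water[0])
    nxt = [row[:] for row in water]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # Flow proportional to the neighbor's excess water.
                    nxt[r][c] += rate * (water[nr][nc] - water[r][c])
    return nxt
```

Because each pairwise exchange is symmetric, the total amount of water is conserved across steps, which is the property that keeps long runs of the automaton physically plausible.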
Image Management
Fast access to reduced-resolution subsamples of high-resolution images
Joel S. Isaacson
Frequently, displaying a digital image requires reducing the volume of data contained in a high-resolution image. This reduction can be performed by sub- sampling pixels from the high resolution image. Some examples of systems that need fast access to reduced resolution images are: modern digital prepress production; flight simulators; terrestrial planetary and astronomical imaging systems. On standard workstations, a lower resolution image cannot be read without essentially reading the whole high-resolution image. This paper demonstrates a method that allows fast access to lower scale resolution images. The method has the following characteristics. The proposed storage format greatly lessens the time needed to read a low-resolution image typically by an order of magnitude. The storage format supports efficient reading of multiple scale reduced resolutions. The image file size remains the same as in current formats. No penalty is imposed by using this new format for any operation that uses the image at full resolution. Additionally, an efficient method for rotating images in this format is demonstrated that is many times faster than methods currently employed. The last section gives benchmarks that demonstrate the utility of this format for reading an image at low resolution.