Proceedings Volume 0249

Advances in Image Transmission II


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 11 November 1980
Contents: 1 Session, 26 Papers, 0 Presentations
Conference: 24th Annual Technical Symposium 1980
Volume Number: 0249

Table of Contents

All Papers
Composite Model Of A 3-D Image
I. J. Dukhovich
This paper presents a composite model of a moving (3-D) image that is especially useful for sequential image processing and encoding. A non-linear predictor based on the composite model is described. The performance of this predictor is used as a measure of the validity of the model for a real image source. The minimization of the total mean square prediction error provides an inequality which determines a condition for the profitable use of the composite model and can serve as a decision device for selecting the number of subsources within the model. The paper also describes statistical properties of the prediction error and contains results of computer simulation of two non-linear predictors in the case of perfect classification between subsources.
Dynamic Simulation Of Hybrid Video Compression
Kalyan Dutta, Mitchell Millman
Recent experience with airborne video reconnaissance systems has shown that artifacts associated with specific video bandwidth compression techniques cannot be fully recognized by viewing simulations of static frames. This is particularly true when there are significant channel errors. For this reason, a facility has been established to dynamically simulate video image compression and transmission systems, including channel errors, for up to 300 consecutive or subsampled frames, representing 10 seconds to 40 minutes of continuous video. This capability is useful in optimizing compression parameters and performing human factors analysis.
Channel Error Propagation In Predictor Adaptive Differential Pulse Code Modulation (DPCM) Coders
Venkat Devarajan, K. R. Rao
New adaptive differential pulse code modulation (ADPCM) coders with adaptive prediction are proposed and compared with existing non-adaptive DPCM coders for processing composite National Television System Committee (NTSC) television signals. Comparisons are based on quantitative criteria as well as subjective evaluation of the processed still frames. The performance of the proposed predictors is shown to be independent of well-designed quantizers and better than existing predictors in such critical regions of the pictures as edges and contours. Test data consist of four color images with varying levels of activity, color, and detail. The adaptive predictors, however, are sensitive to channel errors. Propagation of transmission noise depends on the type of prediction and on the location of the noise, i.e., whether it falls in a uniform region or an active region. The transmission error propagation for different predictors is investigated. By introducing leak into the predictor output and/or predictor function, it is shown that this propagation can be significantly reduced. The combination predictors not only attenuate and/or terminate channel error propagation but also improve predictor performance under quantitative measures such as essential peak value and mean square error between the original and reconstructed images.
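The predictor leak mentioned in the abstract can be illustrated with a toy one-dimensional DPCM loop. This is only a sketch under assumed parameters (the signal, leak factor, and quantizer step below are made up, not the authors' coders): scaling the prediction by a leak factor less than one makes any channel-injected error in the reconstruction decay geometrically instead of propagating indefinitely.

```python
# Minimal 1-D DPCM loop with predictor leak (illustrative sketch).
# A leak factor a < 1 multiplies the prediction, so a channel error
# injected into the reconstruction decays by a factor of a per sample.

def dpcm_encode(samples, leak=0.9, step=4):
    recon = 0.0
    codes = []
    for s in samples:
        pred = leak * recon           # leaky previous-sample prediction
        q = round((s - pred) / step)  # uniform quantizer on the residual
        codes.append(q)
        recon = pred + q * step       # encoder tracks the decoder state
    return codes

def dpcm_decode(codes, leak=0.9, step=4):
    recon = 0.0
    out = []
    for q in codes:
        recon = leak * recon + q * step
        out.append(recon)
    return out

signal = [10, 12, 15, 14, 13, 13, 12, 11]
codes = dpcm_encode(signal)
decoded = dpcm_decode(codes)
```

Because the encoder reconstructs exactly as the decoder does, the per-sample error is bounded by half the quantizer step; with leak = 1 (no leak) a corrupted code word would offset every later sample, while here its influence shrinks by 0.9 per sample.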
The Data Compression System For The Solar Polar Mission's CXX Experiment
Bruce H. Holmes
The White Light Coronagraph and X-Ray/Extreme Ultra Violet Telescope (CXX) on the International Solar Polar Mission spacecraft will contain two charge-coupled device imaging systems. A data compression subsystem has been designed to process the image data in order to reduce the telemetry system requirements while maintaining high photometric quality. This system implements a completely information-preserving algorithm which self-adapts to images of varying information content. A compressor performance ratio of approximately 2:1 has been measured with several images. The performance of the algorithm is not severely affected by imaging sensor defects typical of charge-coupled devices, nor is it as sensitive to random, single-bit errors as other differential PCM techniques. The algorithm can be implemented modularly, thus allowing tradeoffs between a design operating completely in microprocessor software and one using varying amounts of discrete logic, depending upon requirements of power, weight, volume, and operating speed. This paper discusses the operation of the algorithm, a typical implementation, and its measured performance using representative solar images.
Line Organized Videocompression
J. De Roo, A. Oosterlinck, J. Van Daele, et al.
This paper deals with a new method for discrete digital signal generation and encoding. The generation method is based on the RO-algorithm, which solves differential equations in a binary logical way without arithmetical operations. The encoding technique uses the RO-algorithm directly to construct a discrete digital approximation of the input video signal versus time. The RO-algorithm is a "rolling-out" of line segments of variable length and variable direction along a discrete digital signal in a discrete lattice. The code of the input video signal is a string of "roll-points" (2-bit sets) for the line segments in the encoded signal. Reconstruction of the output video signal is extremely simple by a line segment generator. The compression obtained in this way is proportional to the structural information content of the input video signal. Software simulations were carried out to investigate the compression results. Images can be compressed to 0.4 to 0.8 bit per pixel with subjectively tolerable degradation. Because of the binary logical nature of the algorithm, a very simple but very fast hardware implementation is straightforward. A line segment generator can be built with only 7 MSI TTL ICs, and the point-by-point generation speed can be up to 20 megapoints per second (real-time video compression).
Geometrical Rectification Of Spin-Scan Images From Pioneer 11
R. N. Strickland, J. J. Burke
Images of Saturn received from Pioneer 11 suffer from geometrical distortions due to the curvilinear scan lines and the unequal sampling intervals in orthogonal directions, which are inherent in spin-scan imaging. In this paper we discuss geometrical image rectification by polynomial transformation based on control points. Factors that affect the accuracy of reconstruction are shown to include the spatial distribution and spatial density of control points, and the order of the polynomial distortion model. A computer implementation of the technique is described.
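The control-point approach described above can be sketched with a least-squares fit of a low-order polynomial mapping between distorted and rectified coordinates. The example below uses a first-order (affine) model and made-up control-point coordinates; it is an illustration of the general technique, not the authors' implementation.

```python
import numpy as np

# Least-squares fit of a first-order polynomial distortion model from
# control points, as in control-point image rectification.
# The control-point coordinates below are illustrative, not real data.

def fit_affine(src, dst):
    """Fit x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y by least squares."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return cx, cy

def apply_affine(cx, cy, pts):
    pts = np.asarray(pts, float)
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    return np.column_stack([A @ cx, A @ cy])

# Example: recover a known shift-and-scale from four control points.
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(1, 2), (21, 2), (1, 22), (21, 22)]   # x' = 1 + 2x, y' = 2 + 2y
cx, cy = fit_affine(src, dst)
mapped = apply_affine(cx, cy, src)
```

Higher-order models simply add more columns (x*y, x**2, ...) to the design matrix A; the abstract's point about control-point density and distribution corresponds to how well-conditioned that matrix is.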
Quality Measures In The Processing Of High-Contrast Images
V. R. Algazi, G. E. Ford
The use of image quality measures in the design of processing algorithms and equipment is a difficult task. Realistic and useful images are complex and far from the threshold conditions under which psychophysical measurements and models are obtained. For a class of processing algorithms, the image distortion is actually proportional to the image contrast. For these algorithms it appears possible to determine a worst case image and to establish fairly simple quality criteria. The basic components of such a quality measure are discussed and examples of application given.
Micro-Adaptive Picture Sequencing (MAPS) In A Display Environment
Anton Edw. La Bonte
Micro-Adaptive Picture Sequencing (MAPS) is a computationally-efficient, contrast-adaptive, variable-resolution two-dimensional spatial image coding technique. MAPS enhancements which smooth the processing rate and improve image quality in a manner compatible with direct image display are described. Rate smoothing allows the reconstructed image to be updated at the equivalent of one line per line. Image quality is enhanced in a manner which sharpens detail, preserves subtle (low contrast) extended features, improves representation in textured regions, and effectively reduces visual artifacts in the form of image 'blockiness' present with the basic MAPS decompression mode. Moreover, decompression operations maintain a strictly local character suitable for very fast implementation, potentially at video rates. Finally, in addition to visual improvement, significant quantitative (mean square) error reductions are observed with no change in MAPS compression level.
An Iterative Image Enhancement Procedure With Dynamic Range Constraints
John A. Saghri, Andrew G. Tescher
An extension to image processing of Van Cittert's iterative restoration technique for one-dimensional data is presented. A particular advantage of the algorithm given here is the ability to satisfy amplitude constraints. It is useful for presentation of high-dynamic-range imagery in a limited display environment. Preliminary pictorial experiments are promising. Implementation aspects of the technique are further discussed.
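The iteration referred to above is the classic Van Cittert scheme, which repeatedly adds the residual between the observation and the reblurred estimate; the amplitude constraint amounts to clipping each iterate to the display's dynamic range. The sketch below is one-dimensional with an assumed blur kernel and signal, purely for illustration.

```python
import numpy as np

# Van Cittert iteration with an amplitude (dynamic-range) constraint:
#   f_{k+1} = clip(f_k + (g - h * f_k), lo, hi)
# where g is the observed (blurred) signal and h a known blur kernel.
# The 1-D signal, kernel, and range below are illustrative assumptions.

def van_cittert(g, kernel, lo, hi, iters=25):
    f = np.clip(g.copy(), lo, hi)          # start from the observation
    for _ in range(iters):
        residual = g - np.convolve(f, kernel, mode="same")
        f = np.clip(f + residual, lo, hi)  # enforce the display range
    return f

kernel = np.array([0.25, 0.5, 0.25])       # mild symmetric blur
truth = np.array([0., 0., 0., 1., 1., 1., 0., 0., 0.])
g = np.convolve(truth, kernel, mode="same")
restored = van_cittert(g, kernel, lo=0.0, hi=1.0)
```

Because the true scene also lies in [lo, hi], the clipping step can never increase the error; it both stabilizes the iteration and guarantees the result is displayable without further range compression.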
Empirical Determination Of Processing Parameters For A Real Time Two-Dimensional Discrete Cosine Transform (2D-DCT) Video Bandwidth Compression System
W. Bernard Schaming, Oliver E. Bessette
An interactive process is described for fine tuning a specific algorithm for use in a real time video bandwidth compression system. Parameters for the algorithm are initially optimized using computer simulation, then implemented in hardware to observe performance. The next step is the modification of the parameters to improve the real time performance of the system. This paper describes some of the algorithm parameter choices that were made in the implementation of a two-dimensional discrete cosine transform processor for the Air Force Wright Aeronautical Laboratory. Examples are shown of alternate choices for DCT scaling strategies, filter functions, and bit assignment procedures. The impact of these parameter selections on the resulting imagery is included using both simulation and hardware outputs.
Hybrid Optical/Digital Interframe Image Data Compression Scheme
B. R. Hunt, H. Ito
Image data compression methods have been dominated by digital computations. In this paper we discuss a data compression concept which employs optical computations as part of the compression process. Simple optical processes are used to separate an image into low frequency and high frequency components. These components are then subjected to temporal compression, for multiframe imagery, by using a DPCM frame-buffer structure. Simulations of the process are shown, with reasonable performance being seen at multiple frame compression rates of 1.75 bits per pixel.
Data Compression Ratios Versus Sample Resolution
J. C. Stoffel
This paper presents an analysis of the variation in the number of bits required to represent line copy (black/white graphics) imagery after data compression as a function of the input sampling resolution. This key system parameter is shown to be a linear function of the sample resolution for a broad class of documents and algorithms. Furthermore, the rationale for this phenomenon is developed and empirical results are presented to support the theoretical concepts developed.
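The intuition behind the linear behavior can be shown with a toy run-length count (a sketch with a synthetic page, not the paper's documents or algorithms): the number of black/white transitions per scan line is set by the document's features and stays roughly fixed as resolution grows, while the number of scan lines grows linearly with resolution, so the number of runs to be coded grows linearly even though the raw pixel count grows quadratically.

```python
# Run counting for black/white line copy at two sampling resolutions
# (illustrative sketch of the linear bits-vs-resolution behavior).

def render(res):
    """Synthetic 'document': a vertical black bar on a res x res page."""
    return [[1 if res // 3 <= x < 2 * res // 3 else 0 for x in range(res)]
            for _ in range(res)]

def count_runs(image):
    """Total number of runs a run-length coder would have to encode."""
    runs = 0
    for row in image:
        runs += 1 + sum(1 for a, b in zip(row, row[1:]) if a != b)
    return runs

runs_lo = count_runs(render(64))    # 64 rows x 3 runs per row
runs_hi = count_runs(render(128))   # 128 rows x 3 runs per row
```

Doubling the resolution doubles the run count (and hence, to first order, the coded bits), while the raw pixel count quadruples from 64*64 to 128*128.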
An Image Enhancement Algorithm Based Upon The Photographic Adjacency Effect
Harold Liff, Bennett Rudomen
This paper introduces a non-linear, image dependent, two dimensional filter for enhancing electro-optical images. The filter is termed the Diffusion Model Transformation (DMT) and is based upon the EIKONIX Diffusion Model for the photographic adjacency effect. The paper first discusses the photographic adjacency effect and presents several illustrations of its image enhancement potential. The DMT algorithm is then defined and examples of its application to electro-optical infrared imagery are presented.
Block Transform Image Coding In The Presence Of Channel Errors
J. W. Modestino, D. G. Daut
The use of two-dimensional (2-D) block transform image coding using the discrete cosine transform (DCT) is considered in the presence of channel errors. This technique has proven an efficient and readily implementable source coding technique in the absence of channel errors. In the presence of channel errors, however, the performance degrades rapidly requiring some form of error-control protection if reasonable quality image reconstruction is to be achieved. Unfortunately, channel coding can be extremely wasteful of channel bandwidth if not applied judiciously. This paper describes an approach which provides a rationale for combined source-channel coding which results in improved quality image reconstruction without sacrificing transmission bandwidth. This approach is shown to result in a relatively robust design which is reasonably insensitive to channel errors and yet provides performance approaching theoretical performance limits.
Use of Moment Preserving Quantizers In Differential Pulse Code Modulation (DPCM) Image Coding
Edward J. Delp, O. Robert Mitchell
In this paper we present results of a study using a moment preserving (MP) quantizer in predictive (DPCM) coding of images. The MP quantizer is designed such that the quantizer preserves statistical moments of the input and output of the quantizer. This quantizer has previously been shown by the authors to work quite well in a PCM image compression scheme. In this paper we extend the use of MP quantizers to predictive coding. We show that the moment preserving quantizer works quite well at data rates of 1.18 bits/pixel. The quantizer scheme is block adaptive, i.e. the picture to be coded is divided into non-overlapping blocks and from block to block the quantizer parameters are changed. A theoretical analysis is presented whereby necessary and sufficient conditions are derived relating the moment preserving properties of the quantizer to the moment preserving properties of the original image data. The performance of the moment preserving quantizer in DPCM is compared to classical minimum mean square error quantizers. These methods are also compared in the presence of channel errors.
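A two-level moment-preserving quantizer of the kind discussed above picks its two output levels so that the quantized block keeps the input block's mean and variance: pixels above the block mean go to the upper level, the rest to the lower. The sketch below is a single-block illustration with made-up pixel values, not the paper's full block-adaptive DPCM scheme.

```python
import math

# Two-level moment-preserving quantizer for one block (sketch):
# threshold at the block mean; levels a and b are chosen so the output
# preserves the block's first two statistical moments (mean, variance).
# The sample block below is an illustrative assumption.

def mp_quantize(block):
    m = len(block)
    mean = sum(block) / m
    var = sum((x - mean) ** 2 for x in block) / m
    std = math.sqrt(var)
    q = sum(1 for x in block if x > mean)   # pixels above threshold
    if q in (0, m):                         # flat block: nothing to split
        return [mean] * m
    a = mean - std * math.sqrt(q / (m - q))
    b = mean + std * math.sqrt((m - q) / q)
    return [b if x > mean else a for x in block]

block = [2, 3, 7, 9, 2, 8, 3, 6]
out = mp_quantize(block)
```

Only the two levels (or equivalently the mean and standard deviation) plus a one-bit-per-pixel threshold map need to be sent per block, which is where the compression comes from.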
Data Compression For National Oceanic And Atmospheric Administration (NOAA) Weather Satellite Systems
Robert F. Rice, Alan P. Schlutsmeyer
The National Oceanic and Atmospheric Administration (NOAA) receives high quality infrared weather images from each of its two geostationary weather satellites at an average data rate of 57 kilobits/second. These images are currently distributed to field stations over 3 kilohertz analog phone lines. The resulting loss in image quality renders the images unacceptable for proposed digital image processing. This paper documents the study leading to a current effort to implement a microprocessor-based universal noiseless coder/decoder to satisfy NOAA's requirements of high quality, good coverage and timely transmission of its infrared images.
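Universal noiseless coders of the kind referred to above are commonly built from simple variable-length codes of the Golomb-Rice family applied to prediction residuals. The sketch below shows such a code, purely as an illustration of the idea, not NOAA's coder/decoder: a non-negative integer n is sent with parameter k as a unary quotient followed by k remainder bits, and the coder adapts by choosing k per data block.

```python
# Golomb-Rice coding of non-negative integers (illustrative sketch of
# the family of codes used inside universal noiseless coders).
# Encoding with parameter k: unary(n >> k), a terminating 0, then the
# k low-order bits of n.

def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "1" * q + "0"                 # unary quotient, 0-terminated
    if k:
        bits += format(r, f"0{k}b")      # fixed-width remainder
    return bits

def rice_decode(bits, k):
    i = 0
    while bits[i] == "1":                # read the unary quotient
        i += 1
    q = i
    i += 1                               # skip the terminating 0
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r

def to_nonneg(d):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * d if d >= 0 else -2 * d - 1
```

Small residuals get short codewords, so a good predictor plus a per-block choice of k yields lossless ("completely noiseless") compression that adapts to image activity.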
Image Compression With Approximate Transform Implementation
Robert A. Gonsalves, Guy R. Johnson
In this paper we examine the effects of using an imperfect transform encoder to compress an image. The forward transform is noisy to reflect imperfections in a possible hardware implementation. We find that these imperfections cause unacceptable image degradations unless they are very carefully calibrated out. Particular attention must be paid to the bias of each vector in the transform. To illustrate the concept we present a computer simulation of a hybrid encoder that uses a Hadamard transform on the rows and DPCM on the columns. In a series of images and error pictures we show the effects of different encoder and decoder strategies which use various degrees of uncertainty about the encoder.
Architecture For A Color Video Frame Processor
J. Hartung, T. G. Marshall
This paper describes a proposed color video frame processor architecture. The structure of the processor allows the storage, replay, and processing of sampled composite or component NTSC or PAL television signals. A two-port memory architecture allows concurrent real-time sequential access for storage and display and random access for processing the stored image. Processing operations can be performed by both an embedded bit-slice controller and a host computer.
Phase Coded Imaging
Robert A. Gonsalves
We present a cross-coupling of adaptive optics and image processing techniques. The optics of the taking system is adaptive so that a deterministic phase coding such as a time-varying quadratic phase can be introduced into successive images. The resulting images are processed to produce an enhanced image. The adaptation is done in open loop fashion so that most of the burden for generating a sharp image is placed on the post-processing hardware, not the optical system. A computer simulation demonstrates the concept.
Channel Error Effects And Error Protection For The Combined Symbol Matching Facsimile Coding System
Wen-hsiung Chen
The Combined Symbol Matching (CSM) facsimile coding algorithm has proven to have superior performance over existing facsimile coding algorithms. However, as with any high performance coder, the algorithm produces an asynchronous code that is susceptible to channel errors. The error effects on the source code items of CSM are not all alike, nor are they always catastrophic; the effect is highly dependent upon which code items are perturbed by the error. The error effects are demonstrated by injecting an error into each of the source code items. Based on these results, a tailored error protection scheme is derived. The tailored error protection scheme isolates the most sensitive code elements of the source codes to minimize a catastrophic loss of code word synchronization. Besides the tailored error protection scheme, a conventional forward error correction scheme and a conventional error detection scheme (ARQ scheme) are also investigated. With channel error rates of less than 10^-4, the tailored error protection scheme proves to be more than adequate using a minimum of additional code bits.
The Definition And Measurement Of Image Quality
S. J. Briggs
The adequacy of an image's quality is determined by the user's judgment of how well it does its job. Therefore, image quality from a user's standpoint is a subjective parameter which depends on the image's application. Different users have developed different subjective image quality measures and, in many cases, related these to physical measures. This paper is a general discussion of some of the ways image quality is determined and measured by different image users.
Tactical Utilization Of Digital Imagery
Robert V. Taylor
Recent advances in sensor technology present a heretofore unobtainable opportunity to collect and process imagery in a real or near-real time mode. Since this imagery is transmitted and displayed in a digital format, the imagery can be manipulated by a computer prior to display, permitting many functions to be automated. Coupled with this new technology are newly emerging concepts on how to satisfy tactical commanders' intelligence information requirements. The process to exploit digital imagery will be considerably different from what we have been used to. To optimize the design of digital imagery exploitation equipment, it is vital that its purpose and use be thoroughly understood.
Determining Image Quality From Electronic Or Digital Signal Characteristics
Christopher Allen, Richard A. Schindler
Frequency domain measures of image quality have been used successfully for some years to predict human assessment of image quality. These measures were derived from the optical power spectrum. The emphasis in this paper is on utilization of digitally-derived power spectral estimates of image quality computed before hard copy imagery is produced.
Data Requirements For A Reconnaissance Surface Station
John A. Wise
Communication of data from the acquisition sensor to decision maker does not take place unless the decision maker internalizes the data in a way which allows him to meet his perceived needs. Information and management sciences have found that this internalization is dependent not only on perceptual factors such as resolution, but to at least an equal extent upon other more general factors. Management scientists have noted that these factors include: decision style, decision situation, data available, mode of presentation, and user system dialog. This paper will attempt to identify how the designer of a reconnaissance system should consider these factors in the design of the recce surface station in order to assure communication between the sensor and the user.
Digital Flicker-Free Slow Scan Multispectral Imagery Display
John C. Kates
A technique for displaying flicker-free real-time grey scale images from multiple spectral region slow scan imagers has been developed. Under control of a microprocessor, digital data from the imager are transferred one raster line per spectral region at a time via buffer memories into a bank of video graphic random access memories (VRAMs). The VRAM bank generates standard video rate digital grey scale data continuously while being updated with the imager scan rate data. The digital VRAM grey scales may be expanded under operator control for effective display of low level imagery. The microprocessor adds a calibration grey scale at the borders of the images for reference. Following conversion to analog, the imager video is combined with synchronizing signals to provide composite video. Titling and timing information is generated by an alphanumeric VRAM controlled by the microprocessor and inserted into the composite video. The VRAMs will automatically synchronize to an external video source, and an external mixer was used to combine the multispectral video with conventional visible camera video to produce video data suitable for display or recording.
Stereoscopic Video Microscope
James F. Butterfield
The new electronic technology of three-dimensional video combined with the established science of microscopy has created a new instrument, the Stereoscopic Video Microscope. The specimen is illuminated so the stereoscopic objective lens focuses the stereo pair of images side by side on the video camera's pick-up tube. The resulting electronic signal can be enhanced, digitized, colorized, quantified, its polarity reversed, and its gray scale expanded non-linearly. The signal can be transmitted over distances and can be stored on video tape for later playback. The electronic signal is converted to a stereo pair of visual images on the video monitor's cathode-ray tube. A stereo hood is used to fuse the two images for three-dimensional viewing. The conventional optical microscope has definite limitations, many of which can be eliminated by converting the optical image to an electronic signal in the video microscope. The principal advantages of the Stereoscopic Video Microscope compared to the conventional optical microscope are: great ease of viewing; group viewing; the ability to easily record; and the capability of processing the electronic signal for video enhancement. The applications cover nearly all fields of microscopy, including microelectronics assembly, inspection, and research; biological, metallurgical, and chemical research; and other industrial and medical uses. The Stereoscopic Video Microscope is particularly useful for instructional and record-keeping purposes. The video microscope can be monoscopic or three-dimensional.