Proceedings Volume 0594

Image Coding

Thomas S. Huang, Murat Kunt
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 1 May 1986
Contents: 1 Sessions, 44 Papers, 0 Presentations
Conference: 1985 International Technical Symposium/Europe 1985
Volume Number: 0594

Table of Contents

All Papers
Adaptive Split-and-Merge for Image Analysis and Coding
R. Leonardi, M. Kunt
An approximation algorithm for two-dimensional (2-D) signals, e.g. images, is presented. This approximation is obtained by partitioning the original signal into adjacent regions, with each region being approximated in the least square sense by a 2-D analytical function. The segmentation procedure is controlled iteratively to ensure at each step the best possible quality between the original image and the segmented one. The segmentation is based on two successive steps: splitting the original picture into adjacent squares of different size, then merging them in an optimal way into the final region configuration. Some results are presented when the approximation is performed by polynomial functions.
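As a rough illustration of the region approximation step described in this abstract, the minimal Python sketch below fits a first-degree 2-D polynomial to an image block in the least-squares sense and uses the residual error as a split criterion; the block contents, polynomial degree and tolerance are illustrative assumptions, not the authors' settings.

import numpy as np

def fit_plane(block):
    """Fit f(x, y) = a + b*x + c*y to a block in the least-squares sense."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    return coeffs, (A @ coeffs).reshape(h, w)

def should_split(block, tol=5.0):
    """Split the block if the RMS approximation error exceeds a tolerance."""
    _, approx = fit_plane(block)
    return np.sqrt(np.mean((block - approx) ** 2)) > tol

ramp = np.add.outer(np.arange(8.0), np.arange(8.0))  # a smooth ramp
print(should_split(ramp))  # False: a plane approximates a ramp exactly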
The Extraction of Orientation and 2-D Velocity through Hierarchical Processing
David J. Fleet, Allan D. Jepson
This paper concerns the first functional level of visual processing, in which low-level primitives are extracted for use in higher levels of processing. We constrain the computational nature of this first level such that a rich description of local intensity structure is computed while requiring no previous or concurrent interpretation. The simultaneous use and interaction of different types of visual information extracted in this way will facilitate a variety of higher-level tasks. We outline principles for the analysis and design of mechanisms selectively sensitive to local orientation and velocity information. We also discuss tools for the construction of such mechanisms in terms of a hierarchical computational framework. The framework consists of cascades of explicit (convolution) and implicit (lateral interactions) spatiotemporal processing. The degree of orientation or velocity tuning can be altered by varying the number of layers in the cascade and the form of the processing at each layer.
The Application Of Human Visual System Models To Digital Color Image Compression
Charles F. Hall
A nonlinear mathematical model for the human visual system (HVS) was selected as a pre-processing stage for monochrome and color digital image compression. Rate distortion curves and derived power spectra were used to develop coding algorithms in the preprocessed "perceptual space." Black and white images were compressed to 0.1 bit per pel. In addition, color images were compressed to 1 bit per pel (1/3 bit per pel per color) with less than 1 percent mean square error and no visible degradations. Minor distortions are incurred with compressions down to 1/4 bit per pel (1/2 bit per pel per color). It appears that the perceptual power spectrum coding technique "puts" the noise where one cannot see it. The result is bit rates up to an order of magnitude lower than those previously obtained with comparable quality. In addition, the model leads to an image quality metric which compares to subjective evaluations of an extensive image set with a correlation of 0.92.
Motion Adaptive Downsampling Of High Definition Television Signals
Thomas Reuter
To reduce the data rate for the digital transmission of a high definition television (HDTV) signal different three-dimensional downsampling patterns were investigated with regard to the three-dimensional spectral domain that can be represented. Resulting from these investigations and from computer simulations sampling patterns allowing the transmission of spectral domains which are optimally adapted to the human visual perception are proposed. For the luminance signal a 2:1 downsampling procedure with motion adaptive pre- and post-filtering is presented. The spatial resolution for stationary areas is not reduced in comparison to the interlaced signal. For the chrominance signals a non-adaptive procedure with a reduction factor of 4 is suggested. In contrast to other procedures the spatial resolution for nonmoving parts is uniformly reduced by a factor of 2 in both the horizontal and vertical direction.
Colorimetry in HDTV: Up-To-Date Solutions For A New System
Raymond Melwig
The use of luminance-chrominance separation in all forms of picture coding is so widespread that the reasons which led to this state of affairs are often forgotten. These reasons are recalled and the deviations from the principles in present use are pointed out. The main deviations are the matrixing of gamma-corrected instead of linear primary signals, and matrixing with coefficients corresponding to a colorimetric system that has in actual fact been abandoned today.
Problems Of Fieldrate Conversions In HDTV
C. Billotet-Hoffmann, H. Sauerburger
A comparative study of field-rate conversion to 50 Hz has been carried out, starting from 60 Hz and 80 Hz HDTV signals and from a super-interlaced system (15:1) with a 600 Hz field rate. The limits of field-rate conversion without motion compensation, imposed by the sampling theorem, are investigated.
Psychovisual Bases For Television Pictures Enhancement
E. Bourguignat
The required picture quality depends on the service; therefore, for each case, the "good quality" assessment must be defined in accordance with the use made of the picture. Since three-dimensional enhancement of television pictures is technically very expensive, precise knowledge of the necessary characteristics is very useful.
An Adaptive Block-Quantizer Approach To Transform Coding Of Pictures
M. Haghiri, C. Remus
Two fixed rate transform coding techniques are presented in this paper. They use a simple non-stationary model for images. This model supposes that the transformed sub-blocks of an image signal could be generated by switching the outputs of several random vector generators. It leads to two adaptive block quantizer techniques which characterize the random generators based on a training sequence and encode the new images using the learned parameters. These techniques show some improvements with respect to the well-known Chen-Smith method. The encoder-decoder complexity is lower and the techniques are easily hardware implementable.
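As a rough illustration of the block-classification idea behind such adaptive block quantizers (in the spirit of Chen-Smith coders), the minimal Python sketch below transforms a block with an orthonormal DCT and assigns it to one of a few activity classes by its AC energy; the block size, thresholds and class count are illustrative assumptions, not the parameters of this paper.

import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(n)
    C[1:] *= np.sqrt(2 / n)
    return C

def classify_block(block, thresholds=(1e2, 1e3, 1e4)):
    """Class 0 (flat) .. 3 (busy) according to the block's AC energy."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    ac_energy = np.sum(coeffs ** 2) - coeffs[0, 0] ** 2
    return int(np.searchsorted(thresholds, ac_energy))

flat = np.full((8, 8), 128.0)
busy = np.random.default_rng(0).normal(128.0, 40.0, (8, 8))
print(classify_block(flat), classify_block(busy))  # a flat block lands in class 0, a busy one higher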
Image Coding by Vector Quantization of M-Hadamard Transform Coefficients
Bernard Hammer, Michael Schielein
Vector quantization (VQ) has been recognized to be a promising technique for low data rate image coding as a result of the development of effective codebook design algorithms and advances in digital processing. VQ in the transform domain of the so-called M-Hadamard Transform is proposed to avoid typical quantization noise of VQ, namely block contouring and the 'staircase' reconstruction of edges in the decoded image, which is highly noticeable to the human observer. This contribution reports the application of this principle in an intra/interframe coder concept for encoding the luminance component of TV sequences with fixed length codewords of 1 bit/sample. In particular, a vector predictive quantizer is used for encoding the inter-frame block differences of the image sequence. This is supported by a memoryless VQ to improve prediction in blocks with strong movement. The codebook of the VQ is based on the well-known LBG algorithm.
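The codebook design referred to above is the LBG (generalized Lloyd) algorithm; the minimal Python sketch below shows its basic alternation of nearest-neighbour assignment and centroid update under a squared-error distortion. The training data, vector dimension and codebook size are illustrative assumptions, and the splitting initialization of the full LBG procedure is omitted.

import numpy as np

def lbg(training, codebook_size, iters=20, rng=np.random.default_rng(0)):
    """Design a VQ codebook by alternating nearest-neighbour assignment
    and centroid update (squared-error distortion)."""
    codebook = training[rng.choice(len(training), codebook_size, replace=False)]
    for _ in range(iters):
        # assign each training vector to its nearest codeword
        d = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update each codeword as the centroid of its cell
        for k in range(codebook_size):
            members = training[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

vectors = np.random.default_rng(1).normal(size=(1000, 16))  # e.g. 4x4 blocks
print(lbg(vectors, codebook_size=64).shape)  # (64, 16)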
Bounds On Coder Performance Derived From The Uncertainty Principle
Roland G. Wilson
Traditionally, the only bounds on the performance of image coding systems have been obtained by assuming stationary statistical models of the image data. Such models are, however, known to be unrealistic for natural images. An alternative approach, which is based on the essential non-stationarity of images, has been developed. Based on the uncertainty principle, it can be applied to study the performance of both predictive and transform coders. The purpose of the paper is to develop this approach and show how it can lead to a better understanding of the essential constraints on coder design. After an explanation of the general principles and the theoretical development, examples will be given to show how coders with superior performance can be developed by taking account of these bounds.
An Integrated Approach To Video Standard Conversion And Pre-Processing For Videoconference Codec
L. Chiariglione, M. Guglielmo, F. Tommasi
The goal of implementing a videoconference codec at 384 kbit/s cannot be achieved while maintaining the full spatio-temporal resolution of the original images. Consequently the videoconference codec will require pre-processing and spatio-temporal subsampling operations at the transmitter side, with corresponding postfiltering and interpolation at the receiver side to reconstruct the missing information. These operations are very similar to those implied by the conversion between the American and European video standards required by the videoconference service, which is basically international. The integration of the two processing steps in a single operation would be a further advance toward a world-wide standard in video codec implementation, overcoming the dichotomy between 625/50 and 525/60 systems. The present paper describes the architecture of a codec which gives an integrated solution to the pre/post filtering and standard conversion problems.
Improving The Linear Approach To Motion Estimation Of Rigid Bodies By Means Of Nonlinear Constraints
C. Braccini, G. Gambardella, A. Grattarola, et al.
Estimating the 3-D motion from a sequence of images is one of the means to efficiently code time-varying scenes and to infer 3-D shape information for scene analysis. Linear and nonlinear approaches have been proposed in the literature for the problem of estimating the motion parameters of a rigid body from a set of corresponding points. In both approaches, errors on the input data reflect into approximations on the estimated parameters. In particular, in the case of the linear approach considered in this paper, the input errors affect the so-called matrix of the essential motion parameters in such a way that the estimated motion no longer corresponds to the motion of a rigid body. We show that the estimate of the motion can be improved by constraining the above matrix to satisfy the properties corresponding to the motion of a rigid body. This implies adding nonlinear constraints to the original set of linear equations. On the other hand, the unconstrained solution of the latter turns out to be a convenient initial guess for the iterative procedure used to solve the resulting least squares problem. Some simulation and experimental results are presented, showing how the improvements of the estimated motion depend on the image resolution, the number of corresponding points, their spatial distribution, the amount of motion and the sensor calibration.
Image Coding By Vector Quantization In A Transformed Domain
C. Labit, J. P. Marescq
Using vector quantization in a transformed domain, TV images are coded. The method exploits spatial redundancies of small 4x4 blocks of pixels: first, a DCT (or Hadamard) transform is performed on these blocks. A classification algorithm ranks them into classes based on visual and transform properties. For each class, high energy carrying coefficients are retained and, using vector quantization, a codebook is built for the remaining AC part of the transformed blocks. All codewords are referenced by an index. Each block is then coded by specifying its DC coefficient and associated index.
Multimode Predictive Coding Algorithm With Motion Compensation
Francisco J. Santillana Rivero
Interframe coding techniques exploiting the temporal redundancy of T.V. signals have been extensively used in order to reduce the bandwidth (bit rate) required to transmit these signals. In most cases interframe coding was applied to videoconference signals, where the scene (head and shoulders) has special dynamic features such as slow motion, uniform background and no camera motion. The coding efficiency of this technique falls off rapidly when the amount of motion increases, as in broadcast T.V. signals, where several moving objects and/or camera movements are present at the same time.
Coding Television Signals at 320 and 64 kbit/s
G. Kummerfeldt, F. May, W. Wolf
This paper explains a coding technique for transmitting television scenes at rates ranging from 64 to 320 kbit/s. The algorithms to be described are applied to TV input signals with reduced spatial and temporal resolution. Use is made of a hybrid coding algorithm, with predictive coding in the time domain and transform coding in the spatial domain. To improve coding efficiency an object matching technique is used for movement compensation. The residual prediction errors are coded by adaptive block quantization with a 16x16 discrete cosine transform (DCT). On the receiver side skipped fields are reconstructed by motion adaptive interpolation (MAI).
New Results In Directional Decomposition-Based Image Coding
Roberto Cusani
In the framework of directional decomposition-based image coding, a new strategy is proposed in order to improve previous results. Particular attention is devoted to the representation of the high-frequency component of the image, that is critical for both the quality of the decoded picture and the compression ratio obtained.
Low Data-Rate Coding Using Image Primitives
Elias Hanna, Don Pearson, John Robinson
We report on some initial attempts to reconstruct a grey-level image starting from a cartoon or sketch, which is treated as an image primitive. The aim is to code grey-level images efficiently by using object-related information. If the cartoon is derived by valley detection from the image of a smooth object, samples of the original image taken at the cartoon lines and at a few selected intermediate points contain implicit information about the geometry of the object. Some initial experimental results with (a) simple black-level fill-in and (b) first-order interpolation have been obtained.
Improvements Of Directional Decomposition Based Image Coding
M. Benard, M. Kunt
The directional decomposition has already been introduced as a pertinent tool for image coding. In this paper, this approach is optimized with several steps of refinements in order to improve the compression ratio (up to 120:1) and the quality of the decoded image. After the presentation of the coding strategy, results are shown for various real images.
Image Compression Based On Hierarchical Encoding
Narciso Garcia, Carlos Munoz, Alberto Sanz
Hierarchical encoding, initially developed for image decomposition, is a feasible alternative for image transmission and storage. Several independent compression strategies can be implemented and therefore applied at the same time. Lossless encoding: universal statistical compression on the hierarchical code, where a unique Huffman code, valid for every hierarchical transform, is built. Lossy encoding: improvement of the intermediate approximations, which can decrease the effective bit rate for transmission applications (interpolating schemes and non-uniform spatial out-growth help solve this problem); prediction strategies on the hierarchical code, where a three-dimensional predictor (space and hierarchy) on the code pyramid reduces the information required to build new layers; and early branch ending, where analysis of image homogeneities detects areas of similar values that can be approximated by a unique value.
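As a rough sketch of pyramid-style prediction between hierarchy layers (not the authors' coder), the Python fragment below builds a mean pyramid and encodes each finer layer as its residual against the replicated coarser layer; the image content, depth and replication predictor are assumptions for illustration.

import numpy as np

def build_pyramid(img, levels=3):
    """Return [finest ... coarsest] by 2x2 block averaging."""
    pyr = [img.astype(float)]
    for _ in range(levels):
        a = pyr[-1]
        pyr.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]))
    return pyr

def layer_residuals(pyr):
    """Predict each finer layer by replicating the coarser one; keep residuals."""
    residuals = []
    for fine, coarse in zip(pyr[:-1], pyr[1:]):
        prediction = np.kron(coarse, np.ones((2, 2)))
        residuals.append(fine - prediction)
    return residuals

img = np.random.default_rng(2).integers(0, 256, (64, 64))
print([r.shape for r in layer_residuals(build_pyramid(img))])  # [(64, 64), (32, 32), (16, 16)]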
Some Important Observations Concerning Human Visual Image Coding
Ian Overington
During some 20 years of research into thresholds of visual performance we have needed to explore in depth the developing knowledge of the physiology, neurophysiology and, to a lesser extent, anatomy of primate vision. Over the last few years, as interest in computer vision has grown, it has become clear to us that a number of aspects of image processing and coding by human vision are very simple yet powerful, but appear to have been largely overlooked or misrepresented in the classical computer vision literature. The paper discusses some important aspects of early visual processing. It then endeavours to demonstrate some of the simple yet powerful coding procedures which we believe are or may be used by human vision and which may be applied directly to computer vision.
A Differential Displacement Estimation Algorithm With Improved Stability
M. Bierling
A new differential displacement estimation algorithm for television sequences is presented. It minimizes the local mean squared displaced frame difference rather than maximizing the local cross correlation of displaced frames, as it can be shown that there is frequently no correspondence between the cross correlation peak and the actual displacement of a moving object. The algorithm is applied iteratively, i.e. in each step of iteration the resulting estimate of the displacement vector serves as an initial guess for the next step. Compared to known techniques stability is improved by introducing a more accurate two-dimensional image model. The approximation of spatial gradients as an average of spatial differences of two successive frames yields an increased accuracy of the displacement estimate.
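A minimal Python sketch of a differential displacement update of this kind is given below: it performs a least-squares (Gauss-Newton) step that reduces the mean squared displaced frame difference over a measurement window, with spatial gradients averaged over the two frames. The synthetic frames, integer-pel shifting and window are illustrative assumptions, not the exact algorithm of the paper.

import numpy as np

def shift(frame, d):
    """Integer-pel circular shift (wrap-around; sub-pel handling omitted)."""
    return np.roll(np.roll(frame, int(round(d[0])), axis=0), int(round(d[1])), axis=1)

def update_displacement(prev, curr, d, window):
    """One iteration: d_new = d - (G^T G)^-1 G^T dfd over the window."""
    y, x = window
    displaced = shift(prev, d)
    dfd = (curr - displaced)[y, x]
    # spatial gradients averaged over the two frames, as in differential methods
    gy = 0.5 * (np.gradient(curr, axis=0) + np.gradient(displaced, axis=0))[y, x]
    gx = 0.5 * (np.gradient(curr, axis=1) + np.gradient(displaced, axis=1))[y, x]
    G = np.column_stack([gy.ravel(), gx.ravel()])
    step, *_ = np.linalg.lstsq(G, dfd.ravel(), rcond=None)
    return d - step

ys, xs = np.mgrid[0:32, 0:32]
prev = np.sin(0.3 * xs) + np.cos(0.2 * ys)
curr = np.roll(prev, (1, 2), axis=(0, 1))          # true displacement (1, 2)
d = np.zeros(2)
for _ in range(5):
    d = update_displacement(prev, curr, d, (slice(8, 24), slice(8, 24)))
print(d)  # estimate close to the true displacement (1, 2)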
Time/Space Recursions For Differential Motion Estimation
P. Robert, C. Cafforio, F. Rocca
The algorithms for the measurement of image motion heavily depend on the uniformity of the motion field. Block techniques work only if the whole block undergoes the same motion, and point-recursive techniques work only if the motion changes smoothly. However, in real images, motion changes abruptly at the boundaries of the imaged objects. Therefore block techniques are unreliable, and pointwise recursion techniques have to be driven by proper tests so that the recursions are stopped and reinitialized where necessary, without causing unnecessary smear. The paper analyzes such improvements of the pel-recursive motion analysis techniques, either for coding, with a progressive scan of the image sequence, or for image interpolation.
Motion-Compensating Field Interpolation From Interlaced And Non-Interlaced Grids
B. Girod, R. Thoma
Effects caused by sampling and motion compensating interpolation of television sequences containing translatory motion in the image plane are analyzed in the frequency domain. As a velocity-adapted lowpass filter possesses a "velocity passband", a precise motion-compensation is not required. Schemes for motion-compensating interpolation from non-interlaced and interlaced grids are proposed and investigated experimentally by means of moving zoneplate patterns. A novel scheme for interpolation from an interlaced grid preserves the full vertical resolution of the picture contents over a wide range of velocities.
A Motion-Compensated Interframe CODEC
K. Iinuma, T. Koga, K. Niwa, et al.
This paper describes an adaptive intra-interframe codec with motion-compensation followed by an entropy coding for prediction error signal as well as for motion vector information. This adaptive prediction is highly efficient even for very fast motion as well as scene change where motion compensation is ineffective. Prediction error and vector information are code-converted for transmission by means of an entropy coding where contiguous zero signal is run-length coded and non-zero signal is Huffman-coded. Based upon the algorithms described in this paper a practical codec has been developed for videoconference use at sub-primary rate. According to a brief subjective evaluation, the codec provides good picture quality even at a 384 kb/s transmission bit rate.
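As an illustration of the entropy-coding step described above, the short Python sketch below groups a prediction-error signal into (zero-run, value) symbols; the subsequent Huffman coding of runs and amplitudes, and the exact symbol format used in the codec, are not reproduced here.

def run_length_code(errors):
    """Return (zero_run, value) pairs; Huffman coding of both is omitted."""
    symbols, run = [], 0
    for e in errors:
        if e == 0:
            run += 1
        else:
            symbols.append((run, e))
            run = 0
    if run:
        symbols.append((run, None))   # trailing run of zeros
    return symbols

print(run_length_code([0, 0, 0, 5, 0, -2, 0, 0, 0, 0]))
# [(3, 5), (1, -2), (4, None)]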
Coding Shape Details By Means Of Frequency Analysis Of The Boundary
A. Grattarola, S. Gaglio, L. Massone, et al.
In this paper an approach to the analysis of boundary details in planar shapes is presented. It is possible to distinguish, in the structure of an object, between a "main structure" and the overlying "details" or "textures"; for this purpose, a complementarity principle for the description of shape is proposed. It is then suggested that the two entities can be separated in terms of frequency analysis, the low frequency component being associated with the main structure and the high frequency components with details/textures. A representation for the high-frequency components is proposed, based on the "zero-crossings of details/textures", which extends Logan's theorem to the contours of shapes.
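A rough Python sketch of separating a closed contour into a low-frequency "main structure" and a high-frequency "details" component via Fourier descriptors is shown below; the test contour and cut-off are assumptions, and the paper's zero-crossing representation of the detail component is not reproduced.

import numpy as np

def split_contour(x, y, n_low=5):
    """Keep the n_low lowest harmonics as the main structure; the remainder
    is the details/textures component."""
    z = np.fft.fft(x + 1j * y)
    low = np.zeros_like(z)
    keep = list(range(n_low + 1)) + list(range(len(z) - n_low, len(z)))
    low[keep] = z[keep]
    return np.fft.ifft(low), np.fft.ifft(z - low)

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x = np.cos(t) + 0.05 * np.cos(20 * t)   # circle with a fine ripple
y = np.sin(t) + 0.05 * np.sin(20 * t)
main, detail = split_contour(x, y)
print(np.abs(detail).max())             # the ripple ends up in the detail component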
Image Representation By Means Of Two Dimensional Polynomials
Michel Kocher
In this paper, two different methods of representing an image using 2-D polynomials will be presented. In the first approach, the image is segmented into adjacent regions using a region growing algorithm. At the completion of this step, the grey level evolution of every region is approximated by a 2-D polynomial using the least square error method. In the second approach, the segmentation and approximation procedures are merged together. This is achieved by representing the image matrix by a graph where every node is a mapping of an n by n square of pixels and every edge is a measure of similarity between the two nodes it connects. The graph is then iteratively transformed by using an isotropic node merging algorithm. Lastly, the reduced graph is transformed back to a matrix representation giving the final image. The two algorithms will be compared in terms of growing homogeneity, error control and optimality, and their respective results will be presented.
Data Compaction in the Polygonal Representation of Images
Minsoo Suk, Hyun-Taek Chang
An efficient data compaction algorithm which reduces the total number of vertices in the polygonal representation of images is presented. The data compaction procedure takes a segmented and polygonally approximated image and transforms it into an ordered list of attribute-minimum cover. The new algorithm incorporates the precedence relation among fields associated with each node of the Enclosure Tree as well as the enclosure relation among constituent polygons. It is an extension of our previous work, which uses only the enclosure relation. The algorithm is useful for automated frame creation for pictorial information systems and graphics data compression.
A New Method For The Synthesis And Efficient Coding Of Natural Structured Textures
P. Volet, M. Kunt
This paper deals with the synthesis of cell-structured textures, which can be modeled by a primitive repeatedly positioned in the picture plane according to some placement rule. The only restriction introduced to this model is that the primitive must be locally invariant and the placement rule locally regular. Experimental results are presented.
Coding Textures
M. Spann, T. Dodgson, R. Wilson
A number of novel approaches to image data compression have been proposed, which attempt to compress the image by analysing it into regions of uniform gray level and texture and synthesising an approximation, which is accurate only in a statistical sense, at the receiver. This paper attempts to establish what gains may be achieved by such methods and what the problems are in finding a viable method. The perceptual properties of textures are considered and methods for analysing them and synthesising them are discussed. It is shown that although savings over conventional methods of data compression can be large, there are some difficult problems to overcome in developing a practicable system.
Temporal Propagation Of Channel Errors In Multimode Adaptive Coders
C. Labit, J. P. de la Tribonniere
The aim of this study is the temporal propagation of channel errors in coding schemes for television sequences. Many studies have been performed to decrease spatial propagation in adaptive intraframe DPCM coders. All these methods could be extended to the temporal domain. Briefly, we distinguish algorithmic robustness using efficient adaptive predictors, systematic protection with redundant information transmission, and the use of error correcting codes. This paper concerns more specifically the inherent robustness of coding algorithms. A multimode intraframe/interframe coding algorithm using motion compensation is used, and the loci where channel errors are introduced for simulation experiments are precisely defined: fixed areas, motion compensated areas and uncompensated areas. First experiments show that a high degree of adaptivity, in the sense of fast switching between prediction modes (intra/interframe), decreases the error propagation in both the temporal and spatial domains. Moreover, corrective improvements such as temporal refreshment of specific parameters, especially motion parameters, have been tested.
A new Hybrid Coding Technique for Videoconference Applications at 2 Mbit/s
Herbert Holzlwimmer, Walter Tengler, Achim V. Brandt
A new hybrid coding technique is introduced which is based on the Discrete Cosine Transform, interframe DPCM, conditional replenishment, detection of significant subareas in the transform domain, adaptive quantization, adaptive Huffman coding and postbuffer control. This coder concept is the result of a comparison of several coding methods including uniform/nonuniform quantization, constant/variable word length coding and prebuffer/post-buffer control schemes. An important feature of the presented coder is the selection of transform coefficients within each block which are grouped in subareas for adaptive entropy coding. The components of the coder are described and its excellent performance is demonstrated by means of SNR-measurements and a typical videoconference sequence.
Interframe Contour Coding
Roberto Vitiello, Giovanni Zarone
Contour coding can be applied with profit both to visual communication at very low data rates and to contour-texture techniques, as recent neurophysiological findings strongly suggest. The present paper aims at reducing the coding cost by removing the interframe redundancy. Therefore several intra-frame contour coding algorithms are first investigated, and attempts are made to single out those suitable for interframe extrapolation. Parameter encoding turned out to be the most appropriate. In fact it does not resort to a pel-by-pel description; on the contrary it resorts to a global description of the contour, which presents more moderate frame-to-frame changes. An interframe method is then presented. It consists of a preliminary verification to see whether the movement of a single contour is rigid, without any deformation, or not. In the first case the displacement is measured and transmitted; in the second case the changes experienced by the parameters characterizing the contour are estimated, coded and transmitted. The performance of this coder, tested on a typical videotelephone sequence, is shown, comments are made, and a feasible strategy for further improvements is outlined.
Method For Measuring Large Displacements Of Television Images
L. Chiariglione, L. Corgnier
A new method is proposed for measuring displacements in moving images, based on orthogonal transforms. The method is suitable for arbitrarily large displacements, and can provide a basis for alternative motion compensation coding.
Two New Approaches To Hybrid Image Coding Of Sequences
Torbjorn Kronander, Robert Forchheimer
It is well known that, when applying image coding to sequences, redundancy in the time domain should also be removed. An efficient way to do this is to use "hybrid techniques", i.e. to use predictive coding in time and adaptive transform coding in the spatial domain. By adaptive transform coding we mean threshold coders similar to the "scene adaptive coder" described by Chen and Pratt [3]. Two significant problems of these methods are: 1) the complicated schemes needed to regulate the output bit rate; 2) "subjective problems", that is, blocking effects and other problems that arise from the fact that the conventional transforms are based on a statistical framework instead of being adapted to the human visual system. We propose modifications of the conventional schemes to reduce these effects.
Coding of YUV Colour Images based upon a Two Dimensional Two Component Model
D. Allott, R. J. Clarke
A probable explanation for the discrepancies that exist between the theoretical and observed performance of single-model-based image coding methods, such as transform coding, is the poor applicability of the continuous Gauss-Markov source generally used to model images in the spatial domain, caused by the presence of abrupt discontinuities associated with edge detail. Although transform coders using adaptive methods can to a certain extent overcome this problem, the fact remains that the transform coding approach in itself is not a particularly efficient way of representing discontinuous edge detail. As an alternative approach, two-component coding methods are based upon the assumption that an image may be modelled better as the sum of two distinct sources than as a single source. One component is associated with discontinuous edge detail and the other with continuous texture, thus enabling independent coding strategies to be designed to match the quite different characteristics of each component model and hence produce better performance overall.
Two-Dimensional Color Encoding Patterns For Use In Single Chip Cameras
Karl Knop
Color video cameras using a single solid state image sensor derive color from a multiplexed signal which is obtained optically by superimposing a mosaic color filter array in the image plane of the sensor. The problem of optimizing the color filter array together with the corresponding decoding scheme is discussed and a systematic study of various color encoding schemes is given.
Display Of Color Images With A Limited Set Of Colors
Minsoo Suk, Hyun-Taek Chang
Many present-day color display systems incorporate Color Look-Up Tables (CLUTs) to compensate for small video refresh memory. The selection of color palette (representing colors selected for the CLUT) is critical for natural display of color images since the number of distinct colors displayable simultaneously is quite limited. This paper describes an efficient algorithm for optimal palette selection for a CLUT. The algorithm fully utilizes the distribution of pixels in three-dimensional color space. The algorithm is independent of the choice of color space. The results of computer simulation demonstrate that the algorithm is particularly useful for terminals with small refresh memory.
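The abstract does not detail the selection algorithm itself; as a stand-in illustration of clustering the 3-D colour distribution into a small palette, the Python sketch below uses the well-known median-cut heuristic with assumed pixel data and palette size, not the paper's method.

import numpy as np

def median_cut(pixels, n_colors=16):
    """Recursively split the pixel cloud along its widest colour axis and
    take each final box's mean colour as a palette entry."""
    boxes = [pixels.astype(float)]
    while len(boxes) < n_colors:
        i = max(range(len(boxes)), key=lambda j: np.ptp(boxes[j], axis=0).max())
        box = boxes.pop(i)
        axis = np.ptp(box, axis=0).argmax()
        box = box[box[:, axis].argsort()]
        mid = len(box) // 2
        boxes += [box[:mid], box[mid:]]
    return np.array([b.mean(axis=0) for b in boxes]).round().astype(np.uint8)

rgb = np.random.default_rng(4).integers(0, 256, (5000, 3))
print(median_cut(rgb).shape)   # (16, 3)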
Digital Coding For Quasi-Motion Colour Picture Transmission
Kalman Fazekas
This paper describes the design considerations of digital quasi-motion colour picture transmission systems and the realization of an experimental system for low cost and low bit rate transmission. Conditional replenishment is used as the interframe coding procedure and a 1D-DST as the intraframe coding. The image sequences will be transmitted through a channel of a radio paging system or through a mobile radio channel. The output bit rate is 19.2 kbit/s. An error correction unit is used in the system because of the relatively high bit error rate of the channel. The description of the coding unit is completed with results of computer simulation.
Pseudocolour Encoding In Spatial Frequency Domain By Quasi Interferometric Set Up
Frank Dubois
Pseudocolour encoding techniques are used to improve the legibility of black and white pictures, because colours are easier to discriminate than grey levels.
Self Similar Hierarchical Transforms: a bridge between Block-Transform coding and coding with a model of the Human Visual System
G. H. L. M. Heideman, H. E. P. Tattje, E. A. R. van der Linden, et al.
Hierarchical Transforms for time (or spatially) discrete signals are presented. Such Transforms include some familiar orthogonal Block-Transforms, but also non-orthogonal and non-Block transforms. Therefore the freedom in choosing basis functions is much greater. Within this family a subclass exists that closely approximates the operations that are performed by the Human Visual System.
Noise Restoration Of Compressed Image Data
James R. Sullivan
Image noise restoration and predictive image coding are combined by implementing a maximum-a-posteriori (MAP) estimator on the differential signal in a differential pulse code modulation (DPCM) image compression scheme. For a Laplacian differential-signal probability density function (pdf) and a Gaussian noise pdf, the MAP estimator is an adaptive coring operator which is linear in the uncored region with a bias toward zero and a null operator in the cored region. The bias and the width of the coring region are functions of the noise and differential-signal variance, which are estimated from local image statistics over variable-length line segments. Independent segments are isolated by using a generalized-likelihood-ratio-test (GLRT) for Laplacian signals to determine whether or not adjacent segments have statistically equivalent differential variances. Because the MAP operator is an additive bias, it can be inserted in the transmitter of a DPCM encoder without error build-up or overhead information, and since it lowers the variance of the signal to be quantized by reducing the noise it can simplify the encoder by decreasing the number of levels that are required.
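For a Laplacian differential-signal pdf with scale b and zero-mean Gaussian noise with variance sigma^2, the MAP estimate described above reduces to soft thresholding (coring) with threshold sigma^2 / b. The minimal Python sketch below shows only this operator; the adaptive estimation of the local statistics and the GLRT segmentation of the paper are omitted.

import numpy as np

def map_core(d_noisy, sigma2, b):
    """Soft-threshold the noisy differential signal toward zero."""
    t = sigma2 / b
    return np.sign(d_noisy) * np.maximum(np.abs(d_noisy) - t, 0.0)

print(map_core(np.array([-8.0, -1.0, 0.5, 3.0, 12.0]), sigma2=4.0, b=2.0))
# values inside the coring region (|d| <= 2) go to zero, the rest are biased toward zero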
Application Of Predictive Compression Methods To Synthetic Aperture Radar (SAR) Imagery
Susan A. S. Werness
Several variations of a predictive compression system have been demonstrated on 6 meter resolution synthetic aperture radar (SAR) imagery. Due to the uncorrelated nature of SAR imagery, the prediction system design problem was approached from the point of view of statistics matching and decorrelation of reconstruction errors, rather than minimization of mean square error. It is shown that a moving average (MA) predictor can work well, depending upon the quantizer used and upon the homogeneity of the data. Due to the occurrence of large data values arising from returns from cultural objects, slope overload can be a severe problem in system design. This problem is most economically solved by a thresholding type of operation in the quantizer, resulting in a dual rate system. Good results are obtainable at rates of 1.5 bits/pixel.
Image Coding Using Pseudorandom Shift Register Sequences
Steven C. Gustafson
The coding of digital images using linear feedback shift register (LFSR) sequences is considered analytically and in computer simulations. It is shown that LFSR sequences can provide efficient representations of binary and multilevel images in terms of a relatively small set of integers related (in limiting cases) to image complexity and randomness.
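As a minimal illustration of the sequences involved (not of the paper's coding scheme), the Python sketch below generates a maximal-length bit sequence from a 4-bit Fibonacci-style LFSR with taps chosen for the primitive polynomial x^4 + x^3 + 1; register length and seed are assumptions for illustration.

def lfsr_bits(state, taps, n):
    """Generate n output bits from a Fibonacci-style LFSR."""
    bits = []
    for _ in range(n):
        bits.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
    return bits

# taps at positions 3 and 2 of the 4-bit register give period 15 (maximal length)
print(lfsr_bits([1, 0, 0, 0], taps=(3, 2), n=15))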
A High Compression Coding Method For Facsimile Documents
Pascale Joly, Francoise Romeo
A facsimile data compression method is presented. Using three different modes (line recognition, pattern matching and facsimile coding), it is particularly adapted to black and white technical documents containing both graphical and textual information. The use of such a method is quite interesting in electronic document storage and retrieval systems, where considerable amounts of documents are to be digitized and stored. Compared to the CCITT Group IV two-dimensional code, the average compression ratio is about 3.9 times higher. The possible building of a common pattern library for a set of documents, instead of a library per document, makes it possible to improve the compression for text-predominant documents. The implementation of the method on a microcomputer equipped with a high-resolution raster display showed that, in spite of some slight information modifications, the quality of restitution is good, and document decoding and display are very fast.