Proceedings Volume 0432

Applications of Digital Image Processing VI


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 9 January 1984
Contents: 1 Session, 47 Papers, 0 Presentations
Conference: 27th Annual Technical Symposium 1983
Volume Number: 0432

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.

All Papers
Image Enhancement Tools For Tracing Fringe Patterns In Holographic Interferograms Acquired During Laser Fusion Experiments
Pamela C. Vavra, Garland E. Busch, Chester L. Shepard
Pulsed holographic interferometry is essentially the only direct method for determining electron density profiles in inertial fusion plasmas. Consequently, it is a very important diagnostic tool in laser fusion experimentation. The tracing of fringe patterns in the reconstructed holograms is required to determine their precise number and location for subsequent Abel inversion. This is a very labor-intensive task, for which computer assistance has long been sought. In the KMS Fusion multiframe optical probing system, a sequence of four time-resolved image frames is produced at rates equivalent to over 5 billion frames per second. The increased number of images thus generated has spurred the development of improved methods for handling data. A plan has evolved for providing scientists with interactive, adaptive image enhancement tools to assist in locating the fringes. The feasibility of applying digital techniques to aid in the analysis of holographic interferograms has been demonstrated by others. However, only limited success has been achieved in tracing highly dense fringes in the presence of noise. Traditional noise reduction methods tend to fail in the case of high-density fringes, where the spatial frequency of the noise is close to that of the pattern to be discerned. Other problems are introduced by uneven lighting conditions, competing fringe patterns (due to aberrations in optical components or other attenuators in the optical path), and bona fide discontinuities in the fringes. Newly developed digital enhancement tools apply tailorable neighborhood operators to individual pixels as directed by a cursor that may be manipulated via joystick or keyboard control. Operations may be performed on a sectional blow-up while viewing both the full image and the enlarged section. In this manner, global information can be utilized to aid in the local enhancement operations, and vice versa. This paper constitutes a progress-to-date report on work that is continuing for the U.S. Department of Energy.
Restoration For Linearly Degraded Digital Pictures By Using The Generalized Laplacian
Shozo Kondo, Moriyuki Matsuo
Two kinds of restoration methods for linearly degraded digital pictures have been proposed until now: one is a spatial frequency filtering method using two-dimensional Fourier transforms, and the other is a method using the generalized inverse of matrices. Both methods require a large amount of computing time and memory to restore a degraded picture, so it is quite difficult to construct an on-line picture restoration system. The restoration method proposed in this paper uses a Neumann series expansion of the generalized inverse of matrices. In this method, a degraded picture is restored by a matrix called a generalized Laplacian, which is constructed from the matrix representing the degradation function. If the degradation can be assumed to be shift-invariant, this method becomes quite simple and efficient, and requires only a small amount of computing time and memory.
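The core idea of restoring through a truncated Neumann series can be sketched in a few lines. The blur matrix, series length, and test signal below are illustrative assumptions for a 1-D shift-invariant blur, not the paper's generalized-Laplacian construction itself:

```python
import numpy as np

# If the degradation is g = H f and the spectral radius of (I - H) is below 1,
# then H^{-1} = sum_k (I - H)^k, so f can be approximated by a truncated series
# f ≈ sum_{k=0}^{K} (I - H)^k g.

def neumann_restore(H, g, K=200):
    I = np.eye(H.shape[0])
    term = g.copy()
    f = g.copy()
    for _ in range(K):
        term = (I - H) @ term   # next term of the Neumann series
        f += term
    return f

# Illustrative 1-D shift-invariant blur: a circulant 3-tap smoothing matrix.
n = 8
H = np.zeros((n, n))
for i in range(n):
    H[i, i] = 0.6
    H[i, (i - 1) % n] = 0.2
    H[i, (i + 1) % n] = 0.2

f_true = np.random.default_rng(0).random(n)
g = H @ f_true
f_rec = neumann_restore(H, g)
print(np.max(np.abs(f_rec - f_true)))  # tiny residual: the series converges to f_true
```

For this blur the eigenvalues of (I - H) lie in [0, 0.8], so the series converges geometrically; the method's appeal is that each step is just a filtering operation.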
Comparison Of Image Averaging Methods For Structure Determination In Molecular Biology
Olaf Kubler, Dietmar Pum
Structure determination in molecular biology with the aid of the electron microscope is made difficult by the extreme susceptibility of biological objects to ionizing radiation. Molecular structures may be assessed after averaging over many low dose images of like preparations. Signal averaging is facilitated when periodic structures such as crystalline membrane components are investigated. Fourier techniques akin to methods of X-ray crystallography or matched filter techniques may be used to obtain the average structure. The alternative methods, their reliability and their usage were investigated on computer simulations of electron microscopical images.
An Efficient Algorithm for Constrained Image Restoration with the Viterbi Algorithm
Herbert Schorb, Hans Burkhardt
Linear filters are non-optimal for image restoration problems with a constrained original signal alphabet (e.g. blurred black-and-white images). Reference 1 gives an optimal solution to this problem under the criterion of maximum a-posteriori probability with the Viterbi algorithm. The image distortion was formulated as a rather general Markov model including, for example, nonlinear and space-variant dispersions as well as the possibility of considering a restricted original data set such as alphanumeric characters. The computational complexity, however, was still very high. For a one-dimensional problem of dimension N, dynamic programming reduces the exponentially growing number of calculations of a straightforward solution to a value proportional to N. In the two-dimensional case, for images of dimension M×N, it was possible to reduce the computational complexity from O(B^(c·M·N)) to O(M·B^(c·N)), which is linear in one dimension but still exponential in the second. As a consequence, only rather small images could be restored. This paper shows how to reduce the computational complexity further to O(M·N·B^(c·n)), a value which is linear in the image dimensions and whose exponential part is of the order of the size of the two-dimensional pulse response. Thus it is possible to apply the restoration algorithm to images of realistic dimension. This result holds for non-pathological dispersions. The algorithm turns out to be asymptotically optimal; in general, however, it is suboptimal.
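A minimal one-dimensional illustration of this style of restoration, assuming a binary alphabet, a known 2-tap blur, and noiseless data (the paper's Markov model is far more general):

```python
# Viterbi MAP restoration of a binary sequence observed through a known
# 2-tap blur y[i] = h0*x[i] + h1*x[i-1]. The trellis state is the previous
# symbol; the branch metric is the squared residual. Taps and the test
# sequence are illustrative assumptions.

def viterbi_restore(y, h0, h1, alphabet=(0, 1)):
    B, N = len(alphabet), len(y)
    # Initial costs, assuming x[-1] = 0 before the sequence starts.
    cost = {s: (y[0] - h0 * alphabet[s]) ** 2 for s in range(B)}
    back = []
    for i in range(1, N):
        new_cost, choices = {}, {}
        for s in range(B):                       # s = current symbol
            best = min(range(B), key=lambda p: cost[p] +
                       (y[i] - (h0 * alphabet[s] + h1 * alphabet[p])) ** 2)
            new_cost[s] = cost[best] + (y[i] - (h0 * alphabet[s] + h1 * alphabet[best])) ** 2
            choices[s] = best
        cost, back = new_cost, back + [choices]
    s = min(cost, key=cost.get)                  # best final state
    seq = [s]
    for choices in reversed(back):               # trace back through the trellis
        s = choices[s]
        seq.append(s)
    return [alphabet[v] for v in reversed(seq)]

h0, h1 = 0.7, 0.3
x = [0, 1, 1, 0, 1, 0, 0, 1]
y = [h0 * x[i] + h1 * (x[i - 1] if i else 0) for i in range(len(x))]
print(viterbi_restore(y, h0, h1))  # recovers x exactly in the noiseless case
```

The cost of this search is linear in the sequence length but exponential in the state memory, which is exactly the trade-off the paper attacks in two dimensions.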
Synthesis And Applications Of Effective Decision Algorithms For Digital Image Processing On A Basis Of Statistical Invariant Coupling Method
Nicolai A. Nechval
Some problems of digital image processing are considered in this paper. Main attention is paid to effective statistical processing of the observations obtained. A statistical invariant coupling method, which is the basis for the synthesis of effective decision algorithms, is used for this purpose. Optimum sampling procedures are also discussed. The results are presented as components of a single algorithm for determining the boundaries of homogeneous regions in a composite picture, which provides determination of closed boundaries. The image to be analysed is first broken into equal segments, after which statistically similar segments are merged into homogeneous regions corresponding to the objects in the picture. The decision whether or not the next segment may be added to a region is made by testing statistical hypotheses. Formation of a region is terminated when no further segment can be added to it. The process of differentiation of homogeneous regions follows next. Questions of optimally forming standard observation samples for the differentiation of regions are also outlined. The algorithm in question may be used, for instance, for the classification of aerial photographs of the Earth's surface, for natural resources examination, and for surface classification in aerial surveys for laying sea routes under conditions of winter ship navigation. It can essentially solve the following problems: terrain classification, effective coding and decoding of information, selection of the most informative properties, etc. A number of new results have been obtained. The paper cites examples for illustrative purposes.
A Theory Of Pseudo-Orthogonal Bases And Its Application To Image Transmission
Makoto Sato, Hidemitsu Ogawa, Taizo Iijima
A theory of pseudo-orthogonal bases, an extended concept of orthogonal bases, is proposed. A pseudo-orthogonal basis in an N-dimensional Hilbert space consists of M (≥ N) vectors {p_m}, m = 1, …, M, satisfying the following equation for any f in the Hilbert space: f = Σ_{m=1}^{M} ⟨f, p_m⟩ p_m.
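The reproduction property f = Σ ⟨f, p_m⟩ p_m can be checked numerically on a small example. The specific vectors below, the classic three-vector tight frame in R², are an illustrative assumption and are not taken from the paper:

```python
import numpy as np

# Three vectors in R^2 (M = 3 > N = 2), scaled so that the expansion
# f = sum_m <f, p_m> p_m reproduces every f exactly even though the
# vectors are not mutually orthogonal.

angles = np.deg2rad([90, 210, 330])
P = np.sqrt(2 / 3) * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # 3 x 2

f = np.array([1.7, -0.4])
f_rec = sum(np.dot(f, p) * p for p in P)
print(np.allclose(f_rec, f))  # True: the overcomplete set acts like a basis
```

The redundancy (M > N) is what makes such expansions interesting for transmission: losing one coefficient need not destroy the signal.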
Synthetic Aperture Radar Image Bandwidth Compression
B. G. Kashef, K. K. Tam
This paper reports the result of an effort to explore the potential of utilizing existing compression techniques for synthetic aperture radar (SAR) imagery. Both adaptive and non-adaptive transform coding techniques were utilized to simulate an end-to-end system, where SAR imagery goes through block quantization, re-sampling, filtering and encoding, to achieve the desired rate reduction with the minimum possible amount of degradation in image quality. Although this investigation utilized limited amounts of SAR data, the approach is not specific to a certain case and is applicable to the compression of various SAR imagery. Using simple low-pass filtering, resampling and fast Fourier transform techniques, 2:1 or 4:1 data compression leaves all details of the original imagery intact and produces no degradation in image quality based on subjective visual examination. Using semi-adaptive bit-mapping techniques and assigning bit rates of 4, 2, and 1 bit per pixel, further compression on the order of 8:1, with little image degradation, is achievable on most portions of the scene examined. This approach has potential for even higher compression ratios if a more adaptive bit-mapping scheme is utilized.
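The low-pass-filter-and-resample stage can be sketched as follows, assuming an ideal Fourier-domain cutoff and a 2:1 factor; real SAR compression chains add quantization and encoding stages on top of this:

```python
import numpy as np

# Reduce bandwidth 2:1 per axis: keep only the central half-band of the
# 2-D spectrum, invert the transform, and subsample. The test image is
# illustrative random data.

def lowpass_resample(img, factor=2):
    F = np.fft.fftshift(np.fft.fft2(img))
    r, c = img.shape
    mask = np.zeros_like(F)
    r0, c0 = r // (2 * factor), c // (2 * factor)
    mask[r // 2 - r0:r // 2 + r0, c // 2 - c0:c // 2 + c0] = 1  # ideal low-pass
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    return filtered[::factor, ::factor]          # resample at the lower rate

img = np.random.default_rng(1).random((32, 32))
small = lowpass_resample(img)
print(small.shape)  # (16, 16)
```

Filtering before subsampling is what prevents aliasing, which is why the abstract reports no visible degradation at 2:1 and 4:1.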
Multispectral Data Compression Using Staggered Detector Arrays
Robert T. Gray, Bobby R. Hunt
A multispectral image data compression scheme has been investigated in which a scene is imaged onto a detector array whose elements vary in spectral sensitivity. The elements are staggered such that the scene is undersampled within any single spectral band, but is sufficiently sampled by the total array. Compression thus results from transmitting only one spectral component of a scene at any given array coordinate. High-resolution reconstructions are achieved by a space-variant minimum-mean-square spectral regression estimate of the missing pixels of each band from the adjacent samples of other bands. Digital simulations show that spectral regressions of mosaic array data provide reconstruction errors comparable to second-order differential pulse code modulation (DPCM). When the mosaic data is itself encoded by DPCM, the accuracy of spectral regression is superior to direct DPCM for equivalent bit rates.
Digital Filtering For The Reconstruction Of Color Images In A Line Sequential Chroma System
Tran Thong
Simulation results of the reconstruction of a line sequential chroma image are presented in this paper. It is shown that the conventional coding method yields undesirable color artifacts. By vertical digital prefiltering the color signals prior to coding, significant reduction of artifacts can be achieved. A further improvement can be achieved by a simple post-filtering during reconstruction. Additional horizontal color resolution can also be achieved at the expense of diagonal resolution by offset sampling the image. A two-dimensional filter is used to preshape the image spectrum. The generation of this 2D filter is discussed in this paper.
Motion Compensated Predictive Coding
S. Kappagantula, K. R. Rao
Interframe image coding techniques in real-time provide a very attractive scheme of reducing the bandwidth required for coding and transmitting natural video scenes. This paper proposes a modification of two existing algorithms for motion compensated interframe coding. It is shown that the modified method involves a reduced computational complexity while being compatible with the performance obtained by the previous algorithms. Implementation of the new algorithm is consequently simplified and a design for the hardware using a parallel processing approach is studied. The system is proposed for use in NTSC TV pictures for applications ranging from broadcast quality TV to video teleconferencing systems.
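The full-search block matching that such motion-compensated coders start from can be sketched as below; block size and search range are illustrative choices, and the paper's contribution is precisely to reduce this kind of computation:

```python
import numpy as np

# Exhaustive block matching: for a block in the current frame, search the
# previous frame over a small window for the displacement minimizing the
# sum of absolute differences (SAD).

def best_displacement(prev, cur, top, left, bsize=8, search=4):
    block = cur[top:top + bsize, left:left + bsize]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue                          # candidate falls outside the frame
            sad = np.abs(prev[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

rng = np.random.default_rng(2)
prev = rng.random((32, 32))
cur = np.roll(prev, shift=(2, -1), axis=(0, 1))   # frame content shifted by (2, -1)
print(best_displacement(prev, cur, 12, 12))       # (-2, 1): where the block came from
```

The inner SAD loop is also what makes parallel hardware attractive: every candidate displacement can be evaluated independently.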
Low-Frame-Rate Air-To-Ground TV Using Displacement Estimation
R. Lippmann
A computer-based matching strategy for on-line displacement estimation in low-frame-rate video aerial scenes is experimentally investigated. This strategy is required for regeneration of the continuous scene movement, based on movement-adapted frame interpolation. Efficient on-line operation is achieved by a hierarchical search algorithm with termination of the matching procedure at potential dissimilarity locations. The reference window is placed in areas of distinct scene structure, which are selected by a simple threshold test of a set of adjacent grey-level gradients. The performance is illustrated by the measured percentage of correct matching results, including the effect of scene structure selection, channel errors, and 3-bit DPCM coding. Presented results show the advantage of using structure selection and indicate good performance at channel bit error rates up to 10⁻³.
Adaptive Transform Coding Based On Chain Coding Concepts
J. A. Saghri, A. G. Tescher
The concept of chain coding, originally intended for digital processing of line drawing data, has been used in an adaptive transform image coding procedure to improve its performance. In the previously developed dual coder algorithm the run-length coding technique applied to encode consecutive zeros in the transform domain is replaced by a chain coding procedure. The algorithm is based on the assumption that the boundary of the non-zero region of the transform domain can efficiently be represented by a fixed set of line segments known as chain codes. Preliminary results indicate significant improvements over the basic dual coder algorithm. The additional implementational complexity is modest.
Orbital Mosaic IR Image Simulation And Processing
Tim J. Patterson, Richard W. Christiansen
A series of IR images from a mosaic sensor in geosynchronous orbit is simulated. This simulation includes the area around Santa Cruz, two layers of moving clouds, high and low level targets, simulated line of sight drift, and the composite optical-detector transfer function. Eight algorithms were tested on the simulation for target extraction and clutter rejection: four temporal differencing algorithms, temporal adaptive filtering, spatial adaptive filtering, spatial differencing, and a modified LMS filter. Background suppression factors up to 722 and target extraction of 25 dB are reported for the simulation developed.
Process For Producing Laser-Formed Video Fiducials And Reticles
J. B. Franck, P. N. Keller, R. A. Swing, et al.
A process for producing calibration markers directly on the photoconductive surface of video camera tubes has been developed. This process includes the use of either a pulsed Nd:YAG laser, operating at 1.06 μm with a 9.5-ns pulse width (full width at half maximum), or a continuous helium-neon laser. The Nd:YAG laser was constrained to operate in the TEM₀₀ spatial mode by intracavity aperturing. The use of this technology has produced an increase of up to 50 times in the accuracy of geometric measurement. This is accomplished by a decrease in geometric distortion and an increase in geometric scaling. The process by which these laser-formed video calibrations are made will be discussed.
Data Processing of INSAT Meteorological Imagery
Stephen M. Jermaine, Frank H. Miller
A multipurpose satellite was launched into geostationary orbit for the Government of India in 1982. This vehicle, INSAT-1A, was designed to provide various communications functions and to carry visible and infrared sensors designed to provide imagery of meteorological phenomena. The ground station necessary to process this remotely sensed environmental data was designed and built by System Development Corporation of Santa Monica, California. This paper describes the design requirements of this data processing system and introduces its hardware and software architecture. The interactive image processing techniques and data sampling and calibration algorithms that were employed are identified.
A Robust Two-Dimensional Adaptive Thresholder For Bilevel Digital Display And Electronic Printing
To R. Hsing, James C. Stoffel
In the application of bilevel digital display and electronic printing, threshold setting is used to obtain binary images from optical scanning digitizers. Such images can be obtained simply by applying a fixed threshold to a raw continuous-tone image. However, due to (1) a wide range of color backgrounds, (2) wide density variations of the printed information, and (3) the shading effect caused by imaging optics, adaptive threshold setting is clearly needed for obtaining high-quality displays and hard copies. This paper describes a robust two-dimensional adaptive thresholder for obtaining valid bilevel images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local threshold function. The local information in both the fast-scan and slow-scan directions is considered in the threshold calculation. The algorithm and its associated hardware functional block diagram are described, and experimental results are presented to illustrate the procedures. The robust algorithm has shown an adaptability of 0.15 ΔD for text information of varying density. A high-contrast 35 μm line or a low-contrast 70 μm to 500 μm line can be detected by the algorithm. Both objective evaluations and subjective pictorial results indicate promising usage of this algorithm for multicolor inputs or colored-background documents.
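A much-simplified one-dimensional sketch of a memory-type thresholder with running black and white references; the exponential update rule and the rate alpha are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

# Classify each sample against the midpoint of running black/white reference
# levels, then pull the matching reference toward the sample so the threshold
# adapts to shading along the scan.

def adaptive_threshold(scanline, alpha=0.1):
    black, white = float(np.min(scanline)), float(np.max(scanline))
    out = []
    for v in scanline:
        t = (black + white) / 2.0
        out.append(1 if v > t else 0)
        if v > t:
            white = (1 - alpha) * white + alpha * v   # track local white level
        else:
            black = (1 - alpha) * black + alpha * v   # track local black level
    return out

# Dark text strokes (low values) on a background shading from 200 down to 120.
line = np.array([200, 195, 40, 185, 180, 35, 150, 130, 30, 120])
print(adaptive_threshold(line))  # background -> 1, text strokes -> 0
```

Because the references decay toward recent samples, the threshold follows the shaded background instead of staying fixed at the global midpoint.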
Statistically Weighted Non-Local Method of Iterative Digital Image Processing
Hiram E. Hart
A direct non-local iterative deconvolution method (NLM) has been developed and successfully applied to tomographic radioisotopic tissue concentration imaging. In the underlying algorithm, PSRF are the point source response functions of the measurement system, (x,y,z) are the coordinates of the tissue element, and D(X,Y,Z) are the data elements corresponding to the measurement system indexing coordinates (X,Y,Z). High-resolution determination of radioisotopic tissue concentrations on a scale approximating the pixel spacing is routinely achieved even with relatively broad PSRFs and even though the individual data elements necessarily reflect a significant degree of radioisotopic statistical fluctuation. NLM converges to the correct high-resolution source distributions with ideal and/or randomized data over a much wider range of conditions than the Gauss-Seidel method of iterative deconvolution. Comparative results will be presented.
An Industrial Vision System Recognising Overlapping Industrial Parts Using Grey Scale Images Under A Wide Range Of Lighting Conditions
Alan H. Bond, Roger S. Brown, Chris Rowbury
We describe the design of a computer vision system, which has been implemented and tested in software. The design is aimed at eventual hardware implementation using a video-stream processor approach, and at achieving recognition times of 1 or 2 seconds. The system uses 256×256 8-bit grey scale images of industrial parts which may overlap or touch. It is based upon Nevatia-Babu line extraction, the exhaustive matching of each pair of segments of each stored model separately, and the weighing of evidence and extraction of the identity and location of each part in the scene. The project is a university-industry collaboration, part of the British Government's SERC Robotics Initiative.
On The Representation Of Moving Objects In Real-Time Computer Vision Systems
Volker Graefe
When a computer vision system is used to interpret real-world scenes containing moving objects, certain characteristics of the objects and of the computer vision system itself, that would be irrelevant in the context of sequences of static images, must be represented. Among them are motion-induced blur and distortion of the image. When the scene has to be interpreted in real time, the representation of objects should, in the interest of efficiency, be decomposed in such a way that at each stage of processing a partial representation is available that includes only those aspects of each object that are relevant at that particular stage. Such partial representations may, at the lowest level, relate to the visual appearance of an object, at an intermediate level to coordinates of its parts, and at the highest levels to its dynamic behavior and to error statistics.
Detection Of Moving Vehicles In Thermal Imagery Obtained From A Moving Platform
Arthur V. Forman Jr., Meloneze M. Moore, Ronald Patton, et al.
Automatic recognition of ground vehicles from an airborne platform can be greatly enhanced by an algorithm that accurately extracts the velocity of these vehicles when they are moving. The moving scene elements can be separated from the stationary (background) scene elements by accurately registering the stationary scene elements in successive images of the image sequence. In this paper, alternative techniques are described and demonstrated for each of the steps in a generic symbolic registration procedure. Output results are presented for a thermal closing sequence and for a side looking tracking sequence. These results show promise not only for accurate measurement of object velocity, but for accurate passive ranging as well.
Determination Of Visual Range From Landsat Data
Robert S. Dennen
A method was developed to use Landsat multispectral scanner (MSS) data to estimate the visual range on the ground, i.e., the ground visibility. For a non-reflecting target the radiance seen at the satellite is due entirely to the scattered in-welling energy at the aperture of the detector. If the target is ideally black and there are no scatterers in the atmosphere, the observed radiance is zero. Simultaneous readings on the ground and from a low-flying aircraft at the time of Landsat passage facilitated modeling a wavelength-dependent radiance-versus-altitude relationship, H = f(a₀, b, θ), relating radiance to the surface scattering coefficient a₀, a scattering decay constant b, and the geometric configuration angle θ. The surface scattering coefficient a₀ relates directly to the surface visible range. From a Landsat reading, then, the ground visibility can be estimated as Visibility = K/a₀, where a₀ is determined from the derived radiance function.
Predicting the performance of matched filter image detectors for non-stationary scenes and noise
Robert E. Stovall
The performance of matched filter scene detectors has been widely studied, and methods for predicting their performance are available under assumptions of scene and noise stationarity. In this paper correlator performance is investigated for the more common situation when stationarity cannot be assumed. Sensor noise models are derived and combined with two correlator implementations (binary and gray-scale) to yield an expression for the expected performance of a given system for a specific scene. This expression is implemented in an algorithm that is demonstrated on real-world imagery using fairly simple image processor hardware.
Automatic Detection Of Microaneurysms In Retinopathy Fluoro-Angiogram
Bruno Lay, Claude Baudoin, Jean-Claude Klein
A computerized method for detecting microaneurysms is described, based upon concepts from Mathematical Morphology. On the grey-level image of a fluoro-angiogram, magnified by a microscope, a first transformation, searching for the regional minima, gives a binary image in which vessels and microaneurysms appear. The microaneurysms are then extracted using a shape criterion which selects nearly circular particles from among particles stretched in one direction. The most complex part is to discriminate a diffusing microaneurysm (leakage of fluorescein) from a vessel. The algorithm was tested on 25 angiograms. We obtain a good correlation between the automatic counting and the manual counting performed by three reading technicians.
A Binary Tree Classifier for Ship Targets
B. A. Parvin, B. H. Yin, R. J. Hickman, et al.
This paper describes the design of a binary tree classifier for ship targets. The design methodology is general enough that it can be utilized for other classification problems. A hierarchical clustering procedure is employed to (i) discover the underlying structure of the data and (ii) construct the binary tree skeleton. The best feature subset at each nonterminal node of the tree skeleton is selected through a multivariate stepwise procedure which attempts to maximize class separability. This stepwise approach continues until the probability of error at each nonterminal node with respect to a quadratic discriminant function is minimized. The proposed tree classifier has been evaluated on 1300 samples; a classification accuracy of 85% is achieved, versus 62% for a single-stage classifier.
High Speed Implementation Of Linear Feature Extraction Algorithms
Hassan Mostafavi, Mukesh Shah
This paper describes a novel digital implementation of the Hough Transform for high speed extraction of linear features from images of complex scenes. In addition to detection, the Hough Transform estimates both the location and orientation of a linear feature. The implementation is based on the image intensity histogram generator that is available as a high speed hardware feature on some commercial digital image displays and processors. It is shown that linear feature extraction is reduced to detection of peaks in a number of one-dimensional histogram arrays. The number of histogram generation operations is equal to the number of angles into which the 0°-180° range is quantized. Using this method, the processing time for extraction of a linear feature using 16 angles is on the order of 0.5 second. Unlike regular implementations of the Hough Transform, the amount of computation is completely independent of the number of detected pixels (pixels with value 1) in the binary input image to the Hough Transform.
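The histogram view of the Hough Transform can be sketched in software, assuming pre-thresholded point coordinates and illustrative quantization parameters:

```python
import numpy as np

# For each quantized angle theta, the rho = x cos(theta) + y sin(theta)
# values of all "on" pixels form a one-dimensional histogram; a straight
# line shows up as a sharp peak in the histogram for its angle.

def hough_histograms(points, n_angles=16, n_bins=64, rho_max=64.0):
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    hists = []
    for t in thetas:
        rho = points[:, 0] * np.cos(t) + points[:, 1] * np.sin(t)
        h, _ = np.histogram(rho, bins=n_bins, range=(-rho_max, rho_max))
        hists.append(h)
    return thetas, np.array(hists)

# A vertical line x = 20: every point shares rho = 20 at theta = 0.
pts = np.array([[20, y] for y in range(40)], dtype=float)
thetas, hists = hough_histograms(pts)
best = np.unravel_index(np.argmax(hists), hists.shape)
print(np.rad2deg(thetas[best[0]]))  # 0.0: the detected line orientation
```

Since each angle needs only one histogram pass, hardware that histograms a full image in one operation makes the cost independent of how many pixels are set.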
Maximum Likelihood Estimation For Image Registration
William L. Eversole, Robert E. Nasburg
This paper describes an image registration technique which utilizes a maximum likelihood estimate of translational shifts between images. Through use of a detailed error model, insight is gained into the suitability of common registration methods such as correlation. The relationship between the optimal statistical-based registration algorithm and algorithms which have been reported in the literature is also presented. Models of the two images (search and reference) to be registered are developed, assuming the search image is a combination of the reference image, additive i.i.d. Gaussian noise, and areas often referred to as background or clutter. These clutter areas may or may not have statistics similar to the reference areas. The image models differ from past models which implicitly assume a reference image of infinite dimension. Where the reference image is of infinite dimension, it is shown that Gaussian statistics lead to a likelihood equation yielding a square difference template-matching technique.
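In the infinite-reference Gaussian case the likelihood reduces to squared-difference template matching, which can be sketched as follows with illustrative image sizes:

```python
import numpy as np

# Slide the reference over the search image and pick the shift minimizing
# the sum of squared differences (SSD), the ML estimate of translation
# under additive i.i.d. Gaussian noise.

def ssd_register(search, ref):
    rh, rw = ref.shape
    best, best_ssd = (0, 0), np.inf
    for y in range(search.shape[0] - rh + 1):
        for x in range(search.shape[1] - rw + 1):
            ssd = np.sum((search[y:y + rh, x:x + rw] - ref) ** 2)
            if ssd < best_ssd:
                best_ssd, best = ssd, (y, x)
    return best

rng = np.random.default_rng(3)
search = rng.random((24, 24))
ref = search[5:13, 9:17] + 0.01 * rng.standard_normal((8, 8))  # noisy reference patch
print(ssd_register(search, ref))  # (5, 9): the true offset
```

The paper's point is that this familiar estimator is only optimal under particular model assumptions; with finite references and clutter the likelihood takes a different form.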
SAR Image Registration By Multi-Resolution Correlation
Robert T. Frankot
An algorithm for automatically estimating the spatial misregistration between two synthetic aperture radar (SAR) images of the same scene is described. This algorithm consists of a multi-resolution hierarchical search which drives a multi-subarea cross-correlation of the logarithm of the SAR image intensities. The multi-resolution/multiple subarea correlation approach is suitable for removing very large translational uncertainties and minor rotational uncertainties in SAR image coordinates due to inertial navigation system errors. The resulting algorithm is potentially highly reliable, highly accurate, and computationally efficient. The organization of this paper is as follows. First, the general problem of image registration is introduced. Second, existing techniques for image registration are surveyed. Third, the multi-resolution/multi-subarea correlation approach to SAR image registration is discussed. Finally, experimental results for a two-resolution version of the algorithm are presented.
Multi-Resolution Splining Using A Pyramid Image Representation
Edward H. Adelson, Peter J. Burt
Two or more images may be joined at the edges in order to form an image mosaic. Mosaics are commonly used in satellite imagery [4]; a dozen or more sub-images of a planet may be joined to form a single large image of the entire planet. The same process is used in trick photography, for advertising and for special effects in film-making.
High-fidelity Image Resampling For Remote Sensing
Harold J. Reitsema, Allan J. Mord, Eric Ramberg
Investigation of the image resampling requirements of remote sensing has indicated a need for improved resampling convolution kernel design. Areas in which progress has been made include a recognition of the improved phase linearity of longer kernels and the need for similarity of the modulation transfer function (MTF) across all filters. The computational capability required for the longer kernels is achieved with a dedicated signal processor.
Reduction of Display Artifacts by Random Sampling
A. J. Ahumada Jr., D. C. Nagel, A. B. Watson, et al.
The discrete, sequential scanning (sampling) procedures used in electronic displays generate disturbing artifacts such as flicker, Moire-type patterns, and paradoxical motion. Application of the theory of random scanning procedures to displays can reduce these artifacts. The human retina provides an example of a system that uses randomness to avoid sampling artifacts. Though the retina performs a discrete spatial sampling of the stimulus, artifacts are not evident. For example, we do not see Moire patterns when we look at fine gratings. This is because the retinal receptors are positioned randomly, rather than in a regular array. Random sampling avoids conventional aliasing, but introduces noise. Constraints on the random sampling can banish most of the noise to the region above the Nyquist frequency, where it is easily removed by post-sampling low-pass filtering. For visual displays this means that the sampling artifacts can be traded for noise, and this noise can be placed in regions of space-time frequency for which the human visual system has little sensitivity. Here we report that artifacts, especially those associated with motion, are reduced by constrained random sequencing of horizontal scan lines. Three scan line sequencing procedures are compared: a sequential one, a single random sequence repeated, and a new random sequence on each update. The single random sequence flickered least and was almost as good as the new random sequence procedure at minimizing motion artifacts for the display of a vertical line moving horizontally.
A Survey Of New Techniques For Image Registration And Mapping
Bayesteh G. Kashef, Alexander A. Sawchuk
An important problem in any onboard imaging system is the rectification and registration of images generated by onboard sensors. Accurate registration is a key requirement for detecting changes (in position, brightness, texture, boundary, etc.), from one sensed image to the next, as well as classification of data for intelligence gathering and vehicle guidance. This paper discusses techniques of finding match points in pairs of images and performing geometric corrections and unwarping to compensate for systematic and random variations in the flight path, ephemeris, and sensor response. Techniques of resampling and interpolation of image data are reviewed, and the particular characteristics of sensors operating over a wide spectrum from visible through infrared and microwave are discussed. Particular attention is given to the rectification and registration of synthetic aperture radar (SAR) imagery.
Image Registration By A Statistical Method
Victor T. Tom, Gregory K. Wallace, Gregory J. Wolfe
An automatic method for accurate registration of digital image data is described. The method uses statistically significant correlations between different data sets to generate control points. Under the assumption that the data can be coarsely registered via sensor geometry models or a few manually extracted ground control points, one can use a statistical technique to "fine-tune" the registration. The technique is based on data whitening, short-space correlations, a false-alarm threshold test, and coordinate remapping. Data whitening permits subsequent analysis of processed images to be based on a Gaussian white noise assumption. A quantitative threshold can then be evaluated which is exceeded only when the corresponding subimages are correlated. This statistical approach often yields a dense uniform distribution of control points. Except for the coarse registration step, the procedure is totally automatic, alleviating the tedious task of manual control point selection. The accuracy of the technique is demonstrated by aligning digital stereo imagery and SAR to simulated Thematic Mapper imagery.
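The false-alarm threshold idea can be sketched in a few lines. This is a minimal illustration, with the paper's full data-whitening step approximated by zero-mean, unit-variance normalization of each patch; the function names are assumptions:

```python
import math

def normalize(patch):
    # Zero-mean, unit-variance normalization: a crude stand-in for the
    # paper's full data-whitening step.
    n = len(patch)
    mu = sum(patch) / n
    var = sum((x - mu) ** 2 for x in patch) / n
    sd = math.sqrt(var) or 1.0   # guard against constant patches
    return [(x - mu) / sd for x in patch]

def correlation(a, b):
    a, b = normalize(a), normalize(b)
    return sum(x * y for x, y in zip(a, b)) / len(a)

def is_control_point(a, b, n):
    # Under a Gaussian white-noise assumption, the sample correlation of
    # two *unrelated* patches is roughly N(0, 1/n), so a quantitative
    # threshold can be set analytically (3.09 sigma ~ 1e-3 one-sided
    # false-alarm rate).
    threshold = 3.09 / math.sqrt(n)
    return correlation(a, b) > threshold
```

A candidate control point is accepted only when the whitened-patch correlation exceeds a value that unrelated noise would rarely reach, which is what yields the dense, mostly automatic distribution of control points.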
A Novel Pattern Recognition Algorithm For Explosives Detection
C. K. Wong, F. L. Roder, H. K. Huang
The detection of explosives concealed in checked luggage is an important security concern of the airline industry. One approach to explosives detection is to obtain a digital radiograph of the checked luggage and utilize pattern recognition techniques to identify suspicious objects. We have developed a pattern recognition methodology which addresses one major category of explosives threat: packaged high explosives. The algorithm being developed uses an optimization technique in the frequency domain to decompose a trace across a digital radiograph of the checked luggage into two components: the background and the potential packaged explosives. The decomposition makes use of the information inherent in the size, shape, and density of the individual sticks of the packaged high explosives, as well as the contiguous configuration of the package.
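The decomposition can be illustrated with a crude stand-in for the paper's frequency-domain optimization: a moving-average background estimate plus a residual that would carry the quasi-periodic signature of packaged sticks. The function and parameter names are assumptions:

```python
def decompose_trace(trace, window=5):
    """Split a 1-D radiograph trace into a slowly varying background
    (moving average) and a residual. In the paper this split is done by
    optimization in the frequency domain; the moving average is only a
    simple illustration of the same background/foreground separation."""
    half = window // 2
    background = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        background.append(sum(trace[lo:hi]) / (hi - lo))
    residual = [t - b for t, b in zip(trace, background)]
    return background, residual
```

On a smooth trace the residual is essentially zero; localized dense objects (such as explosive sticks) stand out in the residual for further shape and spacing tests.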
Object Identification from Images of Variable Scale
Martin J. Lahart
When objects must be identified from distorted imagery, a choice must be made between feature sets that are invariant to the distortion and those that are not. Sets of invariants almost always contain less information, resulting in classification error rates that are higher under distortion-free conditions but no larger when distortion is present. The choice can be evaluated by calculating error rates as a function of the eigenvalues of the correlation matrix, the noise, the number of classes, and a distortion parameter. An example of this evaluation is given by comparing identification of ships using a subtraction correlator and moment features. The distortion parameter is scale, to which the correlator is sensitive and the moment features are invariant.
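Scale-invariant moment features of the kind contrasted here can be illustrated with normalized central moments, a textbook construction (the paper's exact feature set is not specified in the abstract):

```python
def normalized_moment(points, p, q):
    """Scale-invariant normalized central moment eta_pq of a binary
    shape given as a list of foreground pixel coordinates."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu = sum((x - cx) ** p * (y - cy) ** q for x, y in points)
    # Dividing by mu_00^((p+q)/2 + 1) cancels the effect of scale.
    return mu / n ** ((p + q) / 2 + 1)

def upscale(points, s):
    # Pixel-replication rescaling: each pixel becomes an s-by-s block.
    return [(s * x + dx, s * y + dy)
            for x, y in points for dx in range(s) for dy in range(s)]

square = [(x, y) for x in range(4) for y in range(4)]
big = upscale(square, 2)
print(normalized_moment(square, 2, 0), normalized_moment(big, 2, 0))
```

The two printed values agree to within discretization error, while a raw correlator score between the two shapes would change drastically with scale, which is exactly the trade-off the paper evaluates.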
A Model Driven System for Contextual Scene Analysis
John F. Gilmore, Andrew J. Spiessbach
Existing strategies for the identification of objects in a scene are based upon classical pattern recognition approaches. The basic concept centers around the extraction of a set of statistical features for each object detected in a scene, followed by the application of a classifier which attempts to derive the decision boundaries that separate these objects into classes. As statistical features are quite sensitive to noise, this approach has led to problems due to the inability of classifiers to identify accurate feature-set separation under less than ideal conditions. A global approach utilizing the contextual information in a scene that is currently discarded offers the most promise in overcoming the shortcomings of current object classification methods.
An Automated Method For Typewriter Print Comparison
David L. Kryger, H. John Caulfield
Aerodyne Research, Inc. (ARI) has shown that its own proprietary methods of statistical pattern recognition (SPR), operating on data readily obtained in real time by an image scanning system, are able to perform detailed pattern recognition and classification tasks. Specifically, using well defined and directly obtainable characteristics of typewriter characters, a technique is demonstrated that will allow rapid and objective association of typewritten material with the source typewriter. A generalization of this capability is also discussed.
Applications Of Model-Based Image Interpretation By Line Structure Description
Eliane Egeli
A general scheme is given for the processing of images containing patterns that can be described by line structures: first, lines and/or edges are extracted from the original gray-level picture. Then topological operations are applied to reduce the resulting line structure to an unambiguous skeleton representation. This preprocessed version is translated into an equivalent graph, retaining only the geometrical and topological information. Finally, these transformed data are analysed to yield a set of model-dependent attributes that constitute the basis for the interpretation. The practical use of this scheme is demonstrated by means of two different applications: restoration of cartographic data and automated analysis of blood samples.
A Sequential Thinning Algorithm For Image Skeletonization
Chung Chang Lee
A sequential thinning algorithm is presented which maintains connectivity within a 3-by-3 window to produce a pixel-thin skeleton from a connected, unevenly thick pattern. The algorithm can be readily adapted for either 4-neighbor or 8-neighbor connectedness to aid the shape extraction of an object in a complex scene. The algorithm includes an efficient test for connectivity, thereby minimizing processing time.
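As an illustration of a thinning scheme in this family, here is the classical Zhang-Suen algorithm, which also tests each 3-by-3 window to preserve connectivity while peeling away boundary pixels. It is a well-known stand-in, not necessarily the author's exact rule set:

```python
def thin(img):
    """Iteratively thin a binary image (list of 0/1 rows) to a
    one-pixel-wide skeleton using the Zhang-Suen 3x3 window tests,
    which preserve 8-neighbour connectivity."""
    h, w = len(img), len(img[0])

    def neighbours(r, c):
        # P2..P9, clockwise starting from the north neighbour
        return [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
                img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r in range(1, h - 1):
                for c in range(1, w - 1):
                    if not img[r][c]:
                        continue
                    n = neighbours(r, c)
                    b = sum(n)  # number of foreground neighbours
                    # a = number of 0->1 transitions around the circle
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1
                            for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue  # keep endpoints and interior points
                    if step == 0 and n[0]*n[2]*n[4] == 0 and n[2]*n[4]*n[6] == 0:
                        to_delete.append((r, c))
                    if step == 1 and n[0]*n[2]*n[6] == 0 and n[0]*n[4]*n[6] == 0:
                        to_delete.append((r, c))
            for r, c in to_delete:
                img[r][c] = 0
            changed = changed or bool(to_delete)
    return img
```

Applied to a 3-pixel-wide bar, the procedure converges to a single-pixel-wide centerline while keeping the figure connected.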
Pattern Inspection Techniques for SEM Image
Toshimitsu Hamada, Asahiro Kuni, Kazushi Yoshimura, et al.
We have developed pattern inspection techniques for integrated-circuit elements which use an SEM (Scanning Electron Microscope). In this paper we discuss the transformation of low-S/N SEM image signals into binary values, detection techniques using the SEM to detect patterns on insulating materials, and detection algorithms for defects.
Tactical Targeting By Structural And Correlation Analysis
Nitin Pandya, Yun-Kung J. Lin
In this paper, a syntactic analysis approach for target classification and a correlation analysis approach for detecting target formations (patterns) are defined and evaluated. Examples of simulation results are also given. In the syntactic approach, the structural relationship of target primitives is analyzed through a grammar set. These primitives can be interior components of a large target or individual small targets in a target grouping. A context-free grammar (CFG) is constructed to describe different targets and target formations. This approach was demonstrated on a large target (an airport) and on a target cluster (cars on a highway). In target formation detection, spatial correlation between the target and all predefined reference formations is analyzed to determine the best match. This correlation analysis approach was evaluated on a "mesh" formation of residential houses, a "line" formation of cars on the highway, a "U" formation of residential houses, and an "L" formation of planes in a plane parking area.
Pattern Recognition With Multiaperture Optics System
Richard T. Schneider, James F. Long, Martin F. Wehling
The application of multiaperture optics systems for pattern recognition is discussed. Multiaperture optics systems inherently perform some degree of optical preprocessing which reduces the amount of information offered to the system to a manageable level and so simplifies pattern recognition. The architecture of a typical multiaperture optics system is described and the performance of the system is computed. Due to the large number of lenses, a special detector array is required. Each detector terminates into a memory location which can be randomly accessed. The architecture of the multiaperture system requires special algorithms for pattern recognition. The basic idea is to treat each eyelet as a binary digit. It is shown that only one number is required to describe a given shape. Recognition is accomplished by looking up the stored recognition number.
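The eyelet-to-binary-digit idea described above can be sketched in a few lines; the stored "library" of recognition numbers below is hypothetical:

```python
def recognition_number(eyelets):
    """Treat each eyelet (on/off detector) as one binary digit and pack
    the whole array into a single integer, so that one number describes
    a given shape. Recognition is then a table lookup."""
    num = 0
    for bit in eyelets:
        num = (num << 1) | bit
    return num

# Hypothetical library of stored recognition numbers for known shapes
library = {recognition_number([1, 1, 1, 1, 0, 0, 0, 0]): "top half",
           recognition_number([1, 0, 1, 0, 1, 0, 1, 0]): "stripes"}

seen = [1, 0, 1, 0, 1, 0, 1, 0]
print(library.get(recognition_number(seen), "unknown"))
```

Because each detector output maps directly to a bit, the randomly accessible detector memory described in the abstract lets the whole shape be read out and matched in one integer comparison.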
Effects Of Image Correlation Type And Degree On Matched Filter Image Detection
Robert E. Stovall
The performance of classical two-dimensional matched-filter image detectors is analyzed in terms of both scene and system parameters. Crucial scene parameters include the bandwidth of the scene and the type of correlation in the scene data (exponential, Whittle, or Gaussian, or in between). System parameters include field of view, number of pixels, scale and rotation errors, fractional-pixel misregistration, etc. The analysis treats both analog and digital correlation detectors. The influence of signal-to-noise ratio is treated for Whittle correlation only.
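The core of a digital correlation detector of this kind is a sliding inner product between a template and the scene. A brute-force sketch follows (practical systems use FFT-based correlation; the function name is an assumption):

```python
def matched_filter_detect(scene, template):
    """Classical matched-filter detection: slide the template over the
    scene and return the location and value of the maximum correlation
    score. scene and template are 2-D lists of numbers."""
    sh, sw = len(scene), len(scene[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            # Inner product of the template with the scene patch at (r, c)
            score = sum(scene[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if best is None or score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

The scene-correlation type and the registration errors discussed in the paper enter through how sharply this score peaks at the true location relative to its sidelobes.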
An Evaluation of System Quality Metrics for Hard-Copy and Soft-Copy Displays of Digital Imagery
Robert J. Beaton, Robert W. Monty, Harry L. Snyder
This paper reports an evaluation of 16 quantitative models of system quality for hard-copy (i.e., positive film transparencies) and soft-copy (i.e., CRT-based) displays of digitally derived imagery. The results demonstrate that the effects of system noise and blur degradation upon photointerpretation and subjective scaling performance can be predicted, on an a priori basis, with better than 85% accuracy.
A Simple Analytic Expression For The Contrast Sensitivity Of The Human Eye As A Function Of Brightness
Howard Borough
A simple analytic expression for human visual contrast sensitivity as a function of spatial frequency has been developed which depends only on brightness. This approximation yields results generally within the ±20% uncertainty of most such human measurements when compared with the limited published data. The approximation is applicable for contrasts greater than 1%, spatial frequencies greater than 0.1 cycle per mrad, and brightnesses down to 10^-4 ft-L, which covers the region of most practical modeling interest.
Error Diffusion Using Random Field Models
J. C. Dalton, G. R. Arce, J. P. Allebach
Algorithms for bilevel display of continuous tone images should provide reproduction of edge detail and proper rendition of continuous tone regions with binary textures that adequately reproduce image gray scale without introducing additional structure into the image. Error diffusion algorithms have been shown to have the capability of generating visually appealing binary images, but existing schemes will sometimes generate undesirable textural artifacts. A new image binarization algorithm is developed based on a Markov random field texture model and an edge based constraint matrix. The resulting Markov random field error diffusion algorithm generates nondeterministic binary textures that overcome some of the problems associated with conventional binarization algorithms.
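For reference, the conventional error-diffusion baseline that such schemes build on can be sketched with the classical Floyd-Steinberg weights; the Markov-random-field algorithm of the paper replaces these fixed deterministic weights, which is what suppresses the textural artifacts:

```python
def error_diffusion(gray):
    """Classical Floyd-Steinberg error diffusion: binarize each pixel by
    thresholding at 0.5 and distribute the resulting quantization error
    to the unprocessed neighbours with fixed weights 7/16, 3/16, 5/16,
    1/16. Input is a 2-D list of gray levels in [0, 1]."""
    h, w = len(gray), len(gray[0])
    img = [row[:] for row in gray]       # working copy accumulates error
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            old = img[r][c]
            new = 1 if old >= 0.5 else 0
            out[r][c] = new
            err = old - new
            for dr, dc, wgt in ((0, 1, 7 / 16), (1, -1, 3 / 16),
                                (1, 0, 5 / 16), (1, 1, 1 / 16)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    img[rr][cc] += err * wgt
    return out
```

On a uniform gray region the output preserves the mean gray level, but the fixed weights produce the regular "worm" textures that the paper's nondeterministic MRF textures are designed to avoid.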
A Summary Measure Of Image Quality
Edward M. Granger
The Fourier Transform of a line spread function, the O.T.F., can be accurately approximated by the moments of the line spread function. This relation is used to show that the second moment is an excellent single parameter estimate of a system's M.T.F. The second moment is easy to compute and offers many advantages in determining the performance of asymmetric imagery. The overall performance of the system can be predicted by a single universal image quality template.
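The moment relation can be checked numerically: expanding the OTF in a Taylor series about zero frequency gives, for a centered and normalized line spread function, MTF(f) ≈ 1 − 2π²f²m₂ at low frequency, where m₂ is the second central moment. A sketch under these standard assumptions:

```python
import math

def second_moment(lsf, xs):
    """Second central moment of a sampled line spread function."""
    total = sum(lsf)
    mean = sum(x * v for x, v in zip(xs, lsf)) / total
    return sum((x - mean) ** 2 * v for x, v in zip(xs, lsf)) / total

def mtf(lsf, xs, f):
    """Exact MTF: modulus of the Fourier transform of the LSF."""
    total = sum(lsf)
    re = sum(v * math.cos(2 * math.pi * f * x) for x, v in zip(xs, lsf))
    im = sum(v * math.sin(2 * math.pi * f * x) for x, v in zip(xs, lsf))
    return math.hypot(re, im) / total

# Gaussian LSF (sigma = 0.4) sampled on a grid; at low frequency the
# single-parameter estimate 1 - 2 pi^2 f^2 m2 tracks the exact MTF.
xs = [i * 0.05 for i in range(-100, 101)]
lsf = [math.exp(-x * x / (2 * 0.4 ** 2)) for x in xs]
m2 = second_moment(lsf, xs)
f = 0.1
print(mtf(lsf, xs, f), 1 - 2 * math.pi ** 2 * f ** 2 * m2)
```

The second moment is cheap to compute from measured spread-function data, which is what makes it attractive as a single-number quality estimate.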
Real World Optical Design And The Vision Feed Back
Rong-seng Chang
A real-world closed-loop optical design system is described, comprising optical design software, a hardware optical bench with a set of stock lenses, and an image processor with image-testing equipment. Lens-spacing data are input to the design program by digital meters connected to the lens holders on the bench. The refractive indices and curvatures of the stock lenses are stored in a lens-library subroutine, each with a code number for easy entry. Under control of the optical design program, servomotors drive the lens holders to the computed positions. The updated optical system is then checked with a resolving-power projector, which projects a test pattern through the optical system onto a screen; the result is inspected by a TV camera and the image processing system. The measured square-wave transfer function values are fed back to the optical design program to suggest the direction for the next round of optimization.