Proceedings Volume 0939

Hybrid Image and Signal Processing

David P. Casasent, Andrew G. Tescher
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 18 July 1988
Contents: 1 Session, 30 Papers, 0 Presentations
Conference: 1988 Technical Symposium on Optics, Electro-Optics, and Sensors
Volume Number: 0939

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.

All Papers
Optical Matrix-Vector Laboratory Data For Finite Element Problems
B. K. Taylor, D. P. Casasent
We detail the use of an optical linear algebra processor for a finite element processing application. A linear time-varying structural mechanics finite element earthquake case study is described. The structure response under earthquake loading is considered, and the solution is obtained with the Newmark direct integration algorithm. The optical architecture for performing the required computationally burdensome banded matrix-vector operations is reviewed. The case study was solved on a laboratory version of the optical processor, and the results are presented.
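For readers unfamiliar with the Newmark scheme named above, a minimal serial sketch follows (Python/NumPy, with the common average-acceleration parameters beta = 1/4, gamma = 1/2; the constant system matrices and all variable names are illustrative assumptions, since the paper's case study is time-varying and its banded matrix-vector products are the step offloaded to the optical processor).

```python
import numpy as np

def newmark(M, C, K, f, d0, v0, dt, beta=0.25, gamma=0.5):
    """Newmark direct integration of M a + C v + K d = f(t).
    f has shape (nsteps + 1, ndof); returns the displacement history."""
    d, v = d0.astype(float), v0.astype(float)
    a = np.linalg.solve(M, f[0] - C @ v - K @ d)        # initial acceleration
    b1, b2, b3 = 1/(beta*dt*dt), 1/(beta*dt), 1/(2*beta) - 1
    c1, c2, c3 = gamma/(beta*dt), gamma/beta - 1, dt*(gamma/(2*beta) - 1)
    Keff = K + b1*M + c1*C                              # effective stiffness
    hist = [d.copy()]
    for n in range(1, len(f)):
        rhs = f[n] + M @ (b1*d + b2*v + b3*a) + C @ (c1*d + c2*v + c3*a)
        d_new = np.linalg.solve(Keff, rhs)              # banded solve each step
        a_new = b1*(d_new - d) - b2*v - b3*a
        v_new = v + dt*((1 - gamma)*a + gamma*a_new)
        d, v, a = d_new, v_new, a_new
        hist.append(d.copy())
    return np.array(hist)
```

Each time step costs one banded solve plus a handful of banded matrix-vector products, which is precisely the computationally burdensome workload the optical architecture targets.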
Digital Optical Multiple Matrix Multiplication Based On Inner-Product And Systolic-Inner-Product Architectures
Francis T. S. Yu, Taiwei Lu
Two digital optical architectures utilizing a binary number encoding technique for multiple matrix multiplication are presented. In one architecture, an inner-product method with grating masks is used, so that multiple matrix multiplication can be performed in parallel. The second architecture uses a mixture of the systolic-array and inner-product processing methods. Both architectures offer high accuracy with moderate processing speed.
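The binary-encoding idea can be illustrated with the classic digital-multiplication-by-convolution identity: writing two numbers as bit strings, their product is the carry-converted convolution of those strings, and an optical system can form all such convolutions (or the equivalent inner products) in parallel. A hedged sketch of the arithmetic only; the paper's grating-mask encoding is not detailed in the abstract.

```python
import numpy as np

def multiply_by_convolution(a, b, nbits=8):
    """Product of two non-negative integers via convolution of their bit
    strings: the convolution yields mixed-binary digits (values may exceed 1),
    which an electronic postprocessor converts by propagating carries."""
    abits = [(a >> i) & 1 for i in range(nbits)]
    bbits = [(b >> i) & 1 for i in range(nbits)]
    mixed = np.convolve(abits, bbits)                 # the optically parallel step
    return sum(int(d) << i for i, d in enumerate(mixed))
```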
Performance Analysis Of Matrix Preconditioning Algorithms On Parallel Optical Processors
Anjan Ghosh
An efficient way of using analog optical associative processors is to implement robust computational algorithms that require high throughput rate but exhibit tolerance for roundoff errors and noise. Matrix preconditioning algorithms used for preprocessing the data of linear algebraic equations have these properties. In this paper, the performance of polynomial matrix preconditioning algorithms on optical processors is analyzed. The results of the error analysis and numerical experiments show that for a given set of data the spatial errors and detector noise below a certain threshold level do not affect the accuracy of optical preconditioning. It is also shown that optical preconditioning improves the rate of convergence and the accuracy of the final solution of a linear algebra problem. Simple and efficient optical preprocessors designed with preconditioning algorithms can thus assist parallel solvers of linear algebraic equations and other engineering problems.
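As one concrete instance of polynomial preconditioning, the sketch below forms a truncated Neumann-series preconditioner P approximating A^-1 (the particular polynomial and the scaling omega are illustrative assumptions, not the paper's choice, and A is assumed symmetric positive definite so that the series converges).

```python
import numpy as np

def neumann_preconditioner(A, m=3, omega=None):
    """P = omega * sum_{k=0}^{m} (I - omega*A)^k, a polynomial approximation
    to A^{-1}; applying P to both sides of A x = b clusters the spectrum
    near 1 and speeds up a subsequent iterative solve."""
    n = A.shape[0]
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, np.inf)       # crude scaling so rho(I - omega*A) < 1
    R = np.eye(n) - omega * A
    P, term = np.eye(n), np.eye(n)
    for _ in range(m):
        term = term @ R
        P = P + term
    return omega * P
```

An iterative solver applied to (PA)x = Pb then converges in fewer iterations than on the raw system, matching the abstract's observation that optical preconditioning improves the rate of convergence.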
Solving Ill-Posed Algebra Problems Using The Bimodal Optical Computer
Mustafa A. G. Abushagur, H. John Caulfield
A set of ill-posed algebra problems is considered for solution by the bimodal optical computer (BOC). The BOC algorithm is shown to be capable of solving this class of algebra problems. Three different methods of generating the error matrix are compared in terms of the convergence of the solution. Some applications of the methods are introduced.
Optical In-Plane Distortion Invariant Pattern Recognition In Structured Noise
D. Miazza, M. Kabrisky, M. Mayo, et al.
An architecture for an optical processor for recognizing and locating objects in cluttered two-dimensional scenes without prior knowledge of the position, scale, or in-plane rotation has been developed. The system involves Fourier transforms, a computer generated hologram for coordinate transformation, a technique for optical phase extraction, optical correlators, and spatial light modulators (SLMs). Experiments have been conducted to verify parts of the design.
Nonlinear Optical Median Filtering By Time-Sequential Threshold Decomposition
James M. Hereford, William T. Rhodes
Median filtering of binary imagery can be performed by optically convolving the input image with a disk or other binary spread function and thresholding the output. This technique can be applied to median filtering of gray-scale images if the input is decomposed into a sequence of binary "slices" by a variable thresholding operation (threshold decomposition). The binary slices are median-filtered and are then added to produce the output gray-scale image. Median filtering to remove "salt-and-pepper" noise from a gray-scale image is demonstrated. The use of gray-scale (as opposed to binary) convolution kernels allows extension of the method to a more general class of nonlinear filtering operations. Comparisons with recently proposed spatial multiplexing (as opposed to time-sequential) methods are made.
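The pipeline described above maps directly to a few lines of code. A digital reference implementation follows (Python/SciPy; the disk kernel and the majority threshold are the standard choices and are assumed here):

```python
import numpy as np
from scipy.ndimage import convolve

def median_by_threshold_decomposition(img, kernel):
    """Gray-scale median filtering via threshold decomposition: each binary
    slice is convolved with the binary spread function and thresholded
    (a majority vote, i.e. a binary median), then the slices are summed."""
    area = kernel.sum()
    out = np.zeros(img.shape, dtype=np.int32)
    for t in range(1, int(img.max()) + 1):
        slice_t = (img >= t).astype(float)            # binary slice at level t
        counts = convolve(slice_t, kernel.astype(float), mode='nearest')
        out += counts > area / 2.0                    # binary median of the slice
    return out.astype(img.dtype)
```

Because the median is a stack filter, summing the filtered slices reconstructs exactly the gray-scale median; substituting a gray-scale kernel gives the more general class of nonlinear filters mentioned above.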
Robust Texture Extractors For Real-Time Pyramidal Architectures
Ivan Kadar, Erica J. Liebman
A class of operators based on linear statistical models has been developed previously. The theory of operators based on experimental design - in general, the two-way analysis of variance, and specifically the Latin squares designs specified previously - is extended here. Both global and local robust statistical mask texture-extractor operators/object detectors are developed and examined for significance with regard to parallel implementation on a pyramidal architecture. Possible real-time systolic array implementations are considered. Computer experiments using real-world noisy images, together with simulation results for a pyramidal processing architecture, verify the theory and compare the performance of the new class of mask operators with a more standard class of variance-sensing operators.
A Modified Algorithm For Scanning Tomographic Acoustic Microscopy
A. Meyyappan, G. Wade
Acoustic microscopy is an invaluable tool in non-destructive evaluation because of its ability to provide high-resolution images of microscopic structure in small objects. When such a microscope operates in the transmission mode, the micrograph produced is simply a shadowgraph of all the structures encountered by the acoustic wave passing through the object. Because of diffraction and overlapping, the resultant images are difficult to comprehend, especially in the case of objects of substantial thickness with complex structures. To overcome these problems, we have developed a scanning tomographic acoustic microscope (STAM) which is capable of producing unambiguous high-resolution tomograms. We have described in previously published work how a scanning laser acoustic microscope can be employed to realize STAM. We use an algorithm based on "back-and-forth propagation" to reconstruct tomograms of the various layers to be imaged. When these layers are physically close to one another, we see ambiguities in the reconstructions. In this paper we describe a modified algorithm which removes these ambiguities. With the new algorithm, we can resolve layers that are only two wavelengths apart.
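The reconstruction rests on numerically propagating the recorded wavefield between planes. A minimal angular-spectrum propagator, the standard building block of back-and-forth propagation, is sketched below (this is the generic operator, not the paper's modified algorithm, whose details are not in the abstract):

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum propagation of a monochromatic field sampled on an
    n x n grid with pitch dx over a distance dz; dz < 0 backpropagates.
    Evanescent components are discarded."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)              # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```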
A Hardware/Software System For Implementing Geophysical Diffraction Tomography
Alan J. Witten, Wendell C. King
Geophysical diffraction tomography is a technique for high resolution, quantitative imaging of subsurface cross-sections. The method is based upon an imaging process known as filtered backpropagation which is a generalization of the inverse straight ray tracing process referred to as backprojection. In backpropagation, an image of spatial variations in refractive index is formed by backpropagating data received along an array of detectors, by means of the reduced wave equation, from the array into the support volume of the host medium. Important steps in this imaging process include: (1) the synthesis of a coherent incident wave, (2) the determination of complex phase perturbations relative to a homogeneous background, and (3) the numerical application of a holographic lens. While there has been considerable development in the theory of backpropagation, less effort has been devoted to problems associated with its implementation. Problems of practical importance that must be considered include: (1) the development of a data acquisition system capable of resolving the necessary wave characteristics, (2) quantification of the incident field in a homogeneous host matrix, and (3) in-field, real-time signal processing. This paper describes a microprocessor-based data acquisition system specifically designed and fabricated for geophysical diffraction tomography, discusses the signal processing algorithms implemented on the system, and presents results of several field studies.
Identifying Metal Surfaces In Color Images
Glenn Healey, W. E. Blanz
Previous work [2] has examined the properties of metals which might allow them to be distinguished from dielectrics in color images. This previous work assumes ideal surfaces, i.e. surfaces which are optically smooth and uncontaminated by coatings. In this paper, we discuss the problem of material identification and review the previous work. We then examine to what extent the ideas presented in [2] extend to more realistic surfaces. We present analysis describing a color shift which might occur for the light reflected from a rough metal surface. We also give a brief discussion of the effect of surface coatings on the optical properties of materials. Some experimental results are given.
Interpolative Adaptive Vector Quantization
H. Sun, C. N. Manikopoulos
Adaptive vector quantization with interpolation has been applied to the problem of edge degradation. An activity index A has been devised and used to classify the image into active and non-active regions according to the level of local detail. The non-active blocks were encoded by sampling and decoded by interpolation. Each active block was split into four smaller blocks which were coded by vector quantization. The number of samples extracted from each non-active block equals the size of the small blocks, so each non-active block can be quantized with the same codebook. Thus, only one codebook was required, which greatly reduces the encoding and decoding computational effort. Computer simulation experiments have been carried out with an image of 256x256 pixels, 8-bit quantization, and medium detail level. The rate-distortion curves obtained show that the adaptive interpolative encoding scheme outperforms alternative non-adaptive coding methods. Moreover, the edge information in the reconstructed image is well preserved. This was achieved at coding bit rates in the range of 0.8 to 1.0 bits per pixel.
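A hedged sketch of the encoder logic (the activity index is taken here to be block variance, and the 8x8/4x4 block sizes are assumptions consistent with the description above; brute-force nearest-neighbour search stands in for the full vector quantizer):

```python
import numpy as np

def nearest_codeword(vec, codebook):
    """Index of the row of codebook closest to vec (squared error)."""
    return int(np.argmin(((codebook - vec) ** 2).sum(axis=1)))

def encode_block(block, codebook, threshold):
    """block: 8x8 array; codebook: (K, 16) array of 4x4 vectors.
    Non-active blocks are subsampled to 16 values (the size of a small
    block, so the same codebook serves); active blocks are split into
    four 4x4 sub-blocks, each vector-quantized. The decoder rebuilds a
    non-active block by interpolating its 4x4 sample field back to 8x8."""
    if block.var() < threshold:                       # assumed activity index
        samples = block[::2, ::2].reshape(-1)
        return ('nonactive', nearest_codeword(samples, codebook))
    quads = [block[i:i+4, j:j+4].reshape(-1)
             for i in (0, 4) for j in (0, 4)]
    return ('active', [nearest_codeword(q, codebook) for q in quads])
```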
Dynamically Differenced Rectangles For Reversible Image Data Compression
Gerardus S. Plattel, Kevin Bowyer
The goal of this work is to develop better reversible data compression techniques for image data. A variety of known standard techniques (Run length, Huffman, Arithmetic, Contour tracing and Rectangles) are compared with two new hybrid techniques (Dynamically differenced contours and Dynamically differenced rectangles). The comparison includes the degree of data compression and the encode/decode times for 256 by 256 eight-bit per pixel grayscale images. Results indicate that the 'Dynamically differenced rectangles' technique generally gives the greatest degree of data compression, and also has favorable encode/decode times.
Moving Image Signal Processing By Markovian Random Walk Approach
Yung-Lung Ma, Chialo Ma, Tsing-Yee Tu
A moving object recognition approach is presented in this paper. The motion of an object includes linear or nonlinear translation and rotation. For a 3-D object, the images taken by a camera are in planar form. They vary with the distance between the camera and the object and with the angle and timing at which the pictures are taken. However, the rates of change among images taken at different instants are logically related. The brightness level between any two neighbouring string cells of a machine digital scanning raster varies according to a Markovian random walk process. Thus, the direction and position of a moving object can be found from the variations of the cell random walk. The angles between the machine digital scanning raster and the edges of an object in a planar image are called pseudo-refractional angles. The variations of these angles can be used as features for object recognition. Together with a Kolmogorov complexity program, the probability function of the process can be reduced to a finite-length string array to simplify the recognition procedure. The distance between the camera and the object can be measured by radar or a supersonic signal for military or industrial applications.
Parallel Optical Pyramidal Image Processing
G. Eichmann, A. Kostrzewski, B. Ha, et al.
Pyramidal processing is a form of multiresolution image analysis in which a primary image is decomposed into a set of image copies at different resolutions. Pyramidal processing aims to extract and interpret significant features of an image appearing at different resolutions. Digital pyramidal image processing, because of the large number of convolution-type operations, is time-consuming. Optical pyramidal processors, on the other hand, because of the ease with which they perform convolution operations, are preferable in real-time image understanding applications. Two methods of optical pyramidal image generation, one based on Fourier-spectrum filtering and one on local averaging with a 2-D lens array, are presented. Preliminary experimental results for optical Gaussian, Laplacian, and other quadtree pyramidal image processing are shown. Experimental results, using commercial liquid crystal TVs, for real-time pyramid image generation are presented.
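As a digital reference point for the optical pyramid generators, the standard Gaussian pyramid (with the usual 5x5 binomial kernel; the level count is arbitrary) can be written as:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_pyramid(img, levels=4):
    """Blur with a 5x5 binomial kernel and subsample by two at each level;
    a Laplacian pyramid is then the difference between each level and an
    upsampled copy of its successor."""
    k = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        blurred = convolve(pyr[-1], k, mode='nearest')
        pyr.append(blurred[::2, ::2])
    return pyr
```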
"Optical Laboratory Realization Of Distortion Invariant Filters"
David Casasent, Ren-Chao Ye
A hybrid optical/digital correlator filter synthesis architecture for distortion-invariant pattern recognition and scene analysis is described. Distortion-invariant correlation filter synthetic discriminant function design is reviewed. Computer generated hologram filter synthesis concepts are then reviewed. Emphasis is given to the optical laboratory realization of the filters for such systems. The issues addressed include: the number of training images, selection of the shift parameter in filter design, the ability of the system to recognize rotated (non-training) images, the largest false-class correlation value (anywhere, not just at the correlation peak point), and solutions to the non-zero optical transmittance of input and filter films. Laboratory results for all major points are provided.
Hybrid Methods To Compute Image Moments
B. V. K. Vijaya Kumar
Three hybrid optical/digital methods for computing the geometric moments are reviewed. These methods have an optical processor producing the transform of the input image and a digital processor computing the various spatial derivatives of the observed optical transform intensity. Algorithms are provided to obtain the geometric moments from these various derivatives.
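The identity underlying all three methods relates the geometric moments to derivatives of the transform at the origin. Stated here for the complex Fourier transform (the paper's methods work from the measured intensity, which is what the additional algorithmic steps address):

```latex
F(u,v) = \iint f(x,y)\, e^{-j2\pi(ux+vy)}\, dx\, dy
\quad\Longrightarrow\quad
m_{pq} = \iint x^p y^q f(x,y)\, dx\, dy
       = \frac{1}{(-j2\pi)^{p+q}}
         \left.\frac{\partial^{p+q} F(u,v)}{\partial u^p\, \partial v^q}\right|_{(0,0)}
```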
Application Of Luminescent Devices To Electronically Controlled Optical Image Processors
Alastair D. McAulay
This paper discusses, for the first time, the application of luminescent rebroadcasting devices to image processing. The devices are described and shown to be capable of parallel analog addition and multiplication. Equations are provided for space-variant spatial filtering. An optical system that uses luminescent devices is proposed for implementing such spatial filters.
Hybrid Techniques For Postal Address Location
Keith Mersereau, Mark Kuhner, Daniel Grieser
Many real-world pattern recognition problems entail location of an image within a very cluttered environment. Hybrid processing can be valuable in such a situation by using the high-speed parallelism inherent in optical processing for the feature extraction step, since feature extraction is often the most time-consuming operation to perform digitally. The work reported here takes such an approach for finding addresses on cluttered mail. Two optical systems, a coherent spatial filtering system and an incoherent correlation system, are each discussed for use in combination with digital image processing techniques. Examples of mail processed and statistical results are presented.
Automatic Visual Inspection System For IC Bonding Wires
Hiroyuki Tsukahara, Masato Nakashima, Takefumi Inagaki
This paper discusses a high-contrast image capture system and a wire inspection algorithm for an automatic visual inspection system. On IC assembly lines, automated visual inspection is essential to maintaining IC productivity and reliability. An automatic inspection system must inspect in real time without overlooking defects. To overcome the difficulties of obtaining clear wire images and of inspecting flexible object shapes, we developed a high-contrast image capture system and a wire inspection algorithm. The image capture system consists of a special camera, called a MegaCamera, made by Fujitsu Ltd., and structured lighting. The MegaCamera satisfied both requirements: 10-micron-per-pixel inspection resolution and an image capture area of 20 mm square. The structured lighting optics enabled us to obtain bright wire images and to reduce background noise. The wire inspection algorithm is based on border following. The algorithm enabled us to inspect curved and straight wires and to detect defects including broken wires, wires too close together, and incorrect wiring paths. When we installed the system on an actual IC assembly line, it took 28 s to inspect a 68-wire IC. The rate of correct detection is 99.7 percent for ICs, and the rate of overlooked defects is 0 percent. These results indicate that the system can inspect in real time without overlooking defects, a task that until now has required human visual inspection.
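Border following, the core of the inspection algorithm, can be sketched as Moore-neighbour contour tracing (a generic textbook version; the production algorithm's refinements for measuring wire paths are not given in the abstract):

```python
def trace_border(img, start):
    """Trace the 8-connected border of a binary object clockwise.
    img: 2-D 0/1 array; start: a border pixel whose west neighbour is
    background. Uses a simplified stopping test (first return to start)."""
    dirs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]       # clockwise from north
    def fg(r, c):
        return 0 <= r < img.shape[0] and 0 <= c < img.shape[1] and img[r, c]
    contour, cur, d = [start], start, 2               # pretend we arrived moving east
    while True:
        for i in range(8):                            # sweep clockwise, starting
            nd = (d - 2 + i) % 8                      # two steps counter-clockwise
            r, c = cur[0] + dirs[nd][0], cur[1] + dirs[nd][1]
            if fg(r, c):
                cur, d = (r, c), nd
                break
        else:
            return contour                            # isolated pixel
        if cur == start:
            return contour
        contour.append(cur)
```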
Machine Vision: A Multi-Disciplinary Systems Engineering Problem
Donald G. Bailey
The successful application of image processing to industrial inspection and measurement requires the combining of a number of techniques from different disciplines including optics, electronics, computer hardware and software design, and mechanical engineering. The main subsystems of a typical machine vision system are the image capture subsystem, incorporating sensors and image digitization, the image processing subsystem, where the required information is extracted from the image, and the control subsystem, which uses the information obtained to control a task or activity. In recent years, the commercial availability of general purpose modules such as cameras, frame grabbers, robots, and their controllers has simplified system design considerably. The advent of very large scale integrated circuit technology is broadening the range of applications of machine vision by enabling fast hardware to be designed.
Image Processing On Hypercube Multiprocessors
Russ Miller, Susan E. Miller
This research is concerned with developing efficient algorithms and paradigms to solve geometric problems for digitized pictures on hypercube multiprocessors. At present, it appears that commercially available medium-grained hypercube multiprocessors are not well suited to low-level vision tasks, such as convolution and the Hough transform. Therefore, our research has focused on medium-level vision problems involving connectivity, proximity, and convexity. In this paper, data reduction techniques are developed for medium-level vision tasks. These techniques are used to present efficient hypercube algorithms for solving the convex hull problem. Results are given for implementing a variety of convex hull algorithms on an Intel iPSC/1 hypercube. Implementation issues and algorithm paradigms are discussed in their relationship to the running times of the algorithms on this machine.
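As a serial reference for the hull step, Andrew's monotone chain is shown below; on the hypercube, each node would hull its local partition of points and hulls would then be merged pairwise along successive cube dimensions (the merging scheme is where the implemented algorithms differ).

```python
def convex_hull(points):
    """Monotone-chain convex hull; points: list of (x, y) tuples.
    Returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and \
                  (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) \
                - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]) <= 0:
                h.pop()                               # drop clockwise turns
            h.append(p)
        return h[:-1]
    return half_hull(pts) + half_hull(pts[::-1])      # lower + upper hull
```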
LVEDGE: A Knowledge-Based Heuristic Program For Border Finding Of The Left Ventricular Cavity In Cardiac Digital X-Ray Images
Richard Fozzard, David Gustafson
Traditional edge-detection methods have involved grey-level thresholding and pixel neighborhood operations, as well as tracking algorithms. These purely mathematical approaches have a tendency to generate many extra edges not relevant to the desired edge and are often fooled by artifacts such as catheters or rib boundaries. LVEDGE utilizes several techniques from artificial intelligence to deal with these difficulties. It has a user-trainable knowledge base to constrain the search to an expected left ventricular (LV) shape. A structure likelihood matrix is created based on probabilities that pixels are on the actual edge. This matrix is then dynamically searched, incorporating both local and global information, to generate the most likely continuous single edge of the ventricle. The program has been run on digital LV images (some quite poor) and has compared favorably to human-generated edges. The generated edges can be input to standard cardiac function analysis software (ejection fraction, regional wall motion, etc.).
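The dynamic search of the likelihood matrix can be illustrated with a Viterbi-style dynamic program (an illustrative reconstruction only, not LVEDGE's actual search, which also folds in the trained shape knowledge base):

```python
import numpy as np

def best_edge_path(likelihood):
    """Find the most likely connected edge: one point per row, with column
    moves limited to +/-1 between rows. likelihood: (H, W) array of edge
    probabilities. Returns one column index per row."""
    cost = -np.log(np.clip(likelihood, 1e-9, None))   # high likelihood -> low cost
    H, W = cost.shape
    acc, back = cost.copy(), np.zeros((H, W), dtype=int)
    for r in range(1, H):
        for c in range(W):
            lo, hi = max(0, c - 1), min(W, c + 2)
            j = int(np.argmin(acc[r - 1, lo:hi])) + lo
            acc[r, c] += acc[r - 1, j]
            back[r, c] = j
    path = [int(np.argmin(acc[-1]))]
    for r in range(H - 1, 0, -1):
        path.append(back[r, path[-1]])
    return path[::-1]
```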
Interactive 3-D Image Display And Analysis
R. A. Robb, C. Barillot
A comprehensive software package, called ANALYZE, has been developed (1) which permits detailed investigation and evaluation of 3-D and 4-D biomedical images. The software is written entirely in "C" and runs on standard UNIX workstations. The software architecture permits systematic enhancements and upgrades, which has fostered development of a readily expandable package. ANALYZE can be used with 3-D imaging modalities based on x-ray computed tomography, radionuclide emission tomography, ultrasound tomography, and magnetic resonance imaging. The ANALYZE package features integrated, complementary tools for fully interactive display, manipulation, and measurement of multi-dimensional image data. It provides an effective shell for custom software prototyping and turnkey applications. This paper provides a general description of the software with illustrations of its use, and gives specific details on the interactive volume rendering display module used for 3-D display.
Flexible Mask Subtraction For Digital Angiography
Luong Van Tran, Jack Sklansky
We describe a parallel structured algorithm that suppresses motion artifacts in coronary digital subtraction angiography (DSA). DSA involves the subtraction of a "mask" image of the heart, acquired before injection of contrast medium, from a "live" image acquired after injection. Although the two images are at the same or nearly the same phase of the cardiac cycle, physical changes occurring in the interim between the acquisition of the images produce several artifacts and distortions, among them mean gray-level shift, rotation, translation, and nonisotropic scaling. These artifacts and distortions are spatially and temporally varying. In our algorithm each image is partitioned into an array of rectangular segments. Each segment is first matched to the mask; then transformations of rotation, translation, and nonisotropic scaling are carried out iteratively, converging to the best match in the least squares sense. This algorithm, we believe, is an improvement over earlier methods, because it is fully automatic, it overcomes nonisotropic scaling artifacts, it gives a good correction for gray-level variations, and it offers a matching accuracy of 0.1 pixel or better. Experimental results on both phantoms and real images are presented.
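A translation-only sketch of the per-segment matching loop (the full algorithm also iterates rotation and nonisotropic scaling and corrects gray-level shift; the segment size and search radius below are arbitrary, and image dimensions are assumed to be multiples of the segment size):

```python
import numpy as np

def registered_subtract(live, mask, seg=32, search=4):
    """For each rectangular segment of the live image, find the mask shift
    minimizing the squared difference, then subtract the matched mask tile."""
    H, W = live.shape
    out = np.zeros((H, W))
    for y in range(0, H - seg + 1, seg):
        for x in range(0, W - seg + 1, seg):
            tile = live[y:y+seg, x:x+seg].astype(float)
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):     # exhaustive least-squares match
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - seg and 0 <= xx <= W - seg:
                        ref = mask[yy:yy+seg, xx:xx+seg].astype(float)
                        err = ((tile - ref) ** 2).sum()
                        if err < best_err:
                            best, best_err = (dy, dx), err
            dy, dx = best
            out[y:y+seg, x:x+seg] = tile - mask[y+dy:y+dy+seg, x+dx:x+dx+seg]
    return out
```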
Minimum Spanning Tree Algorithm On An Image Understanding Architecture
David B. Shu, J. Greg Nash
A parallel algorithm for computing the minimum spanning tree of a weighted, undirected graph on an n x n mesh-connected array with a special "gated connection network" is presented. For a graph of n vertices, the algorithm requires O(log^2 n) time. At each step in the parallel algorithm, each node selects one of its links with the least cost as a spanning tree link. Linked nodes form connected components, so that each node eventually belongs to a group with its own identity. The connected components and their associated indices are then treated as super nodes at the next minimum link determination step. The function of the gated connection network is to allow all the nodes within a connected component to be electrically connected, regardless of where they are located in the adjacency matrix. The index or label used for a component is the local minimum of the node indices. All the connected component operations, and those for finding minimum links between them, can be performed in parallel.
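The step described, in which every node picks its cheapest incident link, linked nodes merge into labelled components, and the process repeats on the components, is in serial form Boruvka's algorithm. A sketch using the paper's minimum-index labelling convention:

```python
def boruvka_mst(n, edges):
    """Minimum spanning tree by repeated cheapest-link selection.
    edges: list of (weight, u, v); n: vertex count. O(log n) rounds."""
    parent = list(range(n))
    def find(x):                                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, ncomp = [], n
    while ncomp > 1:
        best = {}                                     # cheapest link leaving each component
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in best or w < best[r][0]:
                        best[r] = (w, u, v)
        if not best:
            break                                     # graph is disconnected
        for w, u, v in best.values():
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[max(ru, rv)] = min(ru, rv)     # component label = minimum index
                mst.append((u, v, w))
                ncomp -= 1
    return mst
```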
Analysis Of Ultrasound Image Sequences By A Data-Flow Architecture Supporting Concurrent Processing
P. Jensch, W. Ameling
In recent years we have improved the processing of ultrasound cardiac images depicting the human heart with sophisticated algorithms programmed for a self-developed Image Sequence Processing System incorporating several connected microprocessors. For higher speed-up we have extended our system with a data-flow/pipeline processing unit supporting multiple data streams. In this paper we explain the chosen data-flow architecture with pipeline facilities. Image processing of sequences of pictures (echocardiograms) is done by a mixture of filtering and texture analysis. Local intraframe computations are based on a static data-flow mode, but interframe processing follows a typed dynamic data-flow mode. Besides results of different preprocessing and feature extraction algorithms, we describe some attractive properties and some limitations of the data-flow concept in the context of image processing.
A Systolic Array Architecture For Processing Sonar Narrowband Signals
L. Mintzer
Modern sonars rely more upon visual than aural contacts. Lofargrams presenting a time history of hydrophone spectral content are a standard means of observing narrowband signals. However, the frequency signal "tracks" are often embedded in noise, sometimes rendering their detection difficult and time consuming. Image enhancement algorithms applied to the 'grams can yield improvements in the target data presented to the observer. A systolic array based on the NCR Geometric Arithmetic Parallel Processor (GAPP), a CMOS chip that contains 72 single-bit processors controlled in parallel, has been designed for evaluating image enhancement algorithms. With the processing nodes of the GAPP bearing a one-to-one correspondence with the pixels displayed on the 'gram, a very efficient SIMD architecture is realized. The low data rate of sonar displays, i.e., one line of 1000-4000 pixels per second, and the 10-MHz control clock of the GAPP make possible some 10^7 operations per pixel in real-time applications. However, this architecture cannot handle data-dependent operations efficiently. To this end a companion processor capable of efficiently executing branch operations has been designed. A simple spoke filter is simulated and applied to laboratory data with noticeable improvements in the resulting lofargram display.
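One plausible form of the spoke filter is sketched below (an assumption; the abstract does not spell out the variant used): average along short oriented segments through each pixel and keep the maximum, which reinforces line-like tracks against incoherent background noise.

```python
import numpy as np

def spoke_filter(gram, length=7):
    """Directional averaging along three 'spokes' (vertical and both
    diagonals, matching slowly drifting frequency tracks), keeping the
    maximum response per pixel. Edge wrap-around is ignored for brevity."""
    half = length // 2
    spokes = [[(k, 0) for k in range(-half, half + 1)],   # vertical
              [(k, k) for k in range(-half, half + 1)],   # diagonal
              [(k, -k) for k in range(-half, half + 1)]]  # anti-diagonal
    g = gram.astype(float)
    out = np.zeros_like(g)
    for offs in spokes:
        acc = np.zeros_like(g)
        for dy, dx in offs:
            acc += np.roll(g, (dy, dx), axis=(0, 1))
        out = np.maximum(out, acc / length)
    return out
```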
Least Squares Filtering Via Systolic Array
Ilan Ziskind
In this paper we present a triangular systolic array for computing the solution of least-squares problems such as arise in spatial processing in sonar, radar, and seismology.
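Triangular systolic arrays for least squares (in the Gentleman-Kung/McWhirter style, assumed here to be the family the paper builds on) pipeline Givens rotations, absorbing one input row per beat. A serial sketch of that recurrence:

```python
import numpy as np

def givens_least_squares(A, b):
    """Solve min ||A x - b|| by streaming rows through Givens rotations,
    the update a triangular systolic array performs in place; the final
    back-substitution happens off the array."""
    m, n = A.shape
    R, u = np.zeros((n, n)), np.zeros(n)
    for row, beta in zip(A, b):                       # rows stream through the array
        r, beta = row.astype(float), float(beta)
        for i in range(n):
            if r[i] != 0.0:
                rho = np.hypot(R[i, i], r[i])
                c, s = R[i, i] / rho, r[i] / rho      # rotation annihilating r[i]
                R[i, i:], r[i:] = c*R[i, i:] + s*r[i:], -s*R[i, i:] + c*r[i:]
                u[i], beta = c*u[i] + s*beta, -s*u[i] + c*beta
    return np.linalg.solve(R, u)                      # triangular back-substitution
```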
A Class Of Reconfiguration Schemes For Fault-Tolerant Processor Arrays
Mengly Chean, Jose A. Fortes
A class of reconfiguration schemes for fault-tolerant processor arrays is proposed and studied. According to these schemes, a processor array that is inoperative due to the presence of faulty processors is restructured by logically "spreading" the faulty processors as evenly as possible throughout the array. From the characteristics of the proposed reconfiguration schemes and the processor array structures for which they are intended, closed-form expressions for processor interconnection requirements are derived and implementation issues are discussed. Next, a special case of the proposed reconfiguration schemes is studied and simulated assuming two different array structures. For one of the array structures, simulation results show that the probability of survival achieved by the reconfiguration scheme is close to 1 for up to a number of faults that equals 50% of the total number of spare processors; for a larger number of faults the probability of survival degrades rapidly. For the other array structure, simulation results show that the probability of survival is lower than that of the first structure when the number of faults is less than 90% of the total number of spares, and higher otherwise.
Light Scattering For Testing Roughness Of Formed Surfaces
Haiming Wang
Roughness is one of the most important characteristics of optical surfaces; in the case of diamond-turned surfaces especially, the surface roughness reflects the dynamic characteristics of the diamond turning machine. Light scattering is well suited to monitoring the roughness of diamond-turned surfaces. Until now, however, almost all scattering methods have dealt only with flat surfaces. In many practical cases, e.g. machining x-ray microscope objectives or synchrotron radiation beamline mirrors, it is necessary to treat surfaces with a definite form. In this paper the author sets up a theoretical model describing light scattering from formed surfaces. By means of statistical analysis of the scattered light, the contributions of form and roughness to the angular distribution of light can be separated, and the influence of the surface form on the testing of surface roughness by light scattering can be removed.
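For context, the smooth-surface relation commonly used to infer rms roughness from scattered light (a standard result for flat surfaces, quoted here as background rather than as the paper's formed-surface model) is

```latex
\mathrm{TIS} \;\approx\; \left(\frac{4\pi\,\sigma\cos\theta_i}{\lambda}\right)^{2},
\qquad \sigma \ll \lambda ,
```

where TIS is the total integrated scatter, sigma the rms roughness, theta_i the angle of incidence, and lambda the wavelength; the paper's contribution is to separate the form contribution so that a relation of this kind remains usable on non-flat surfaces.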