Proceedings Volume 0205

Image Understanding Systems II

Carol Clark
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 21 February 1980
Contents: 1 Session, 28 Papers, 0 Presentations
Conference: 23rd Annual Technical Symposium 1979
Volume Number: 0205

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.

All Papers
Intelligent Microscopes: Recent And Near-Future Advances
Judith M. S. Prewitt
Robert Hooke conjectured about fluid circulation in plants as well as in animals in Micrographia in a passage that is equally important as a commentary on the dependence, not of technology on science, but of science on technology: It seems very probable that Nature has ... very many appropriated instruments and contrivances, whereby to bring her designs and end to pass, which 'tis not improbable but that some diligent observer, if helped with better Microscopes, may in time detect. This paper, written in the form of a scientific poem, reviews the current and near-future state-of-the-art of automated intelligent microscopes based on computer science and technology. The basic concepts of computer intelligence for cytology and histology are presented and elaborated. Limitations of commercial devices and research prototypes are examined (Dx), and remedies are suggested (Rx). The course of action proposed and being undertaken constitutes an original contribution toward advancing the state-of-the-science, in the hope of advancing the state-of-the-art of medicine. With rapid, contemporary advances in both science and technology, it may now be appropriate to modify Hooke's passage: It seems very probable that Nature has ... very many appropriated instruments and contrivances, whereby to bring her designs and end to pass, which 'tis not improbable but that some diligent observer, if helped with Intelligent Microscopes, may in time detect.
Volume Estimation: A New, Accurate, Computerized Algorithm
Larry Cook, P. N. Cook, Samuel J. Dwyer III
The volume of a polyhedron that approximates an anatomical feature or lesion is calculated by means of an algorithm derived from Gauss' theorem. Such polyhedra are useful in reconstructing surfaces of anatomical features and lesions from data obtained from CT or ultrasound scans. The volume estimation algorithm is presented. A proof of the algorithm is provided together with applications.
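The abstract does not give the formula itself, but a minimal sketch of the underlying divergence-theorem (Gauss' theorem) idea for a closed, triangulated polyhedron is shown below; the paper's actual algorithm and data structures may differ.

import numpy as np

# Each outward-oriented triangle (v0, v1, v2) contributes the signed volume of
# the tetrahedron it forms with the origin; summing over all faces of a closed
# surface gives the enclosed volume.
def polyhedron_volume(vertices, triangles):
    """vertices: (N, 3) float array; triangles: (M, 3) vertex indices, outward-oriented."""
    v0, v1, v2 = (vertices[triangles[:, k]] for k in range(3))
    signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0
    return abs(signed.sum())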
Digital Boundary Detection Techniques For The Analysis Of Gated Cardiac Scintigrams
Eric G. Hawman
Boundary detection in conventional nuclear medicine scintigrams is often difficult for several reasons. First, scintigrams generally have a low signal-to-noise ratio. Second, edge structures are poorly defined because of the low resolution of gamma ray cameras; and finally, edge contrast is usually reduced by foreground and background activity. In this paper we report on heuristic approaches we have taken to solve these problems and to develop programs for the display of cardiac wall motion and for the automatic determination of left ventricular ejection fraction. Our approach to processing cardiac scintigrams entails several steps: smoothing, edge enhancement, thresholding, thinning, and contour extraction. We discuss each of these steps in light of the goal of producing cardiac boundaries which are spatially and temporally smooth and continuous. Boundary detection results are presented for some selected clinical images.
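As a rough sketch of the processing chain listed above (smoothing, edge enhancement, thresholding, thinning, contour extraction), the following uses standard library routines; the heuristics the authors developed specifically for low-count scintigrams are not reproduced here, and the percentile threshold is an arbitrary placeholder.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from skimage.measure import find_contours
from skimage.morphology import thin

def boundary_candidates(frame, sigma=2.0, edge_percentile=90):
    smoothed = gaussian_filter(frame.astype(float), sigma)                 # suppress counting noise
    grad = np.hypot(sobel(smoothed, axis=1), sobel(smoothed, axis=0))      # edge enhancement
    edges = grad > np.percentile(grad, edge_percentile)                    # thresholding
    skeleton = thin(edges)                                                 # thinning
    return find_contours(skeleton.astype(float), 0.5)                      # contour extraction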
Image Segmentation And Texture Analysis
Charles A. Harlow, Richard W. Conners
This paper briefly reviews methods which have been employed for image segmentation and indicates how texture analysis might be utilized for this task. A method will be described for finding the unit cell size of a texture. The unit cell represents a tile which can be used to tile the plane and generate the texture. It is the fundamental building block of repetitive textures.
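The abstract does not state how the unit cell size is found; purely as a hedged illustration, one common way to estimate the repeat length of a periodic texture along one axis is the first secondary peak of the normalized autocorrelation, sketched below (the 0.2 peak threshold is an arbitrary choice, not the authors').

import numpy as np

def unit_cell_period_1d(profile):
    """Estimate the dominant repeat length (in pixels) of a 1-D texture profile."""
    x = np.asarray(profile, dtype=float)
    x -= x.mean()
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]   # autocorrelation, lags >= 0
    acf /= acf[0]
    for lag in range(1, len(acf) - 1):                   # first local maximum after lag 0
        if acf[lag - 1] < acf[lag] >= acf[lag + 1] and acf[lag] > 0.2:
            return lag
    return None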
Digital Images, Mathematical Modeling And Physiological Parameters
J. Nosil, P. McOrmond
An iterative method for the determination of the start and end of the tracer flow through a compartment was developed and used to create a digital image of the compartment of interest. This digital image was then used to select a region-of-interest corresponding to a selected compartment. Dynamic curves were corrected for background, dead time, and for possible overlapping of different physiological compartments. The shape of the corrected curves was then compared with the expected behavior of the tracer in a compartment. On the basis of these physiological considerations, mathematical models are developed. Once a mathematical model is developed, the experimental data are used as suggested by the model. We applied these general considerations to studies of the heart, lungs, kidneys, and hepatobiliary system.
Significance Of Contrast-Detail-Dose Analysis In Evaluation Of Medical Imaging Systems
S. R. Amtey, G. Cohen, F. A. DiBianca
The method of contrast-detail-dose analysis offers a practical way of obtaining the information content per unit dose to the patient on an imaging system. The method is described and its assets are discussed.
New Trends In Computer Vision Research
Eamon Barrett
The problem of understanding how human vision works and how visual perceptions are transformed into words has inspired a significant body of intriguing scientific research over the last two decades. Many of the ideas which determine the directions of contemporary research in this field stem from an inclination to apply to vision concepts which describe intricate computer-based electro-optical imaging systems. The question of how complete a theory of human vision can be constructed from such concepts has emerged as a key issue.
Derivation Of Information From Images
Larry E. Druffel
The purpose of this paper is to describe the philosophy of the DARPA Image Understanding Program and to highlight some early results.
Reusable Computer Vision Systems
Bruce L. Bullock
One of the primary motivations for developing computer vision is to provide a practical, application oriented tool vital in the solution of a wide range of important contemporary problems. Although system performance has steadily improved, most of the current systems have a limited range of operating characteristics and cannot be reused for other applications because of their ad hoc organization. This presentation will describe early results on the development of a methodology for constructing vision systems that do not suffer from these problems.
Three-Dimensional Object Recognition System: Ranging Camera And Algorithm
Frank Pipitone
A scanning triangulation-based laser range-finder is described, along with a preliminary version of an algorithm for 3-D object recognition. The ranging camera employs a novel design which should ultimately enable a range sample rate of approximately 100 kHz with an accuracy of 1 cm at ranges < 3 m. The system employs a spherical coordinate scanning geometry; a laser beam emerges vertically from a hollow shaft and is reflected by a cubic mirror rotating on a horizontal shaft, yielding a vertical (polar angle) scan. A motor rotates the entire system through a slower 360° azimuthal scan. A detection system including a slitted wheel and photomultiplier, located 1/2 meter above the mirror, repeatedly measures the angle between the vertical and the line of sight to the laser spot. This angle and the vertical scan angle are used by an LSI-11 computer to compute range. The field of view is a broad "equatorial" band of nearly 3π steradians. An algorithm is presented for recognition of objects known to the system. The surface of each object is approximated by a union of convex polyhedra, represented as a Boolean combination of linear inequalities. A shell is produced enclosing the surface but not the interior. Then sets of contiguous points from a range-picture are tested for consistency with some rotation and translation of the polyhedral model of each object. The algorithm is tolerant of occlusion and random errors.
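For illustration, a hedged sketch of the triangulation geometry described above, with the mirror at the origin and the detector a 0.5 m baseline directly above it. The angle conventions below (scan angle measured from the upward vertical at the mirror, detected angle measured from the downward vertical at the detector) are assumptions; the real system's calibration is not modeled.

import math

def range_from_angles(theta, alpha, baseline=0.5):
    """Mirror-to-spot range (m) by the law of sines.

    theta: polar scan angle of the outgoing beam, from the upward vertical at the mirror.
    alpha: angle of the detector's line of sight to the spot, from the downward vertical.
    """
    return baseline * math.sin(alpha) / math.sin(theta + alpha)

# Example: a spot 1 m out and 1 m below the mirror gives
# theta = 3*pi/4, alpha ≈ 0.5880 rad, and range_from_angles(theta, alpha) ≈ 1.414 m.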
Edge Extraction Based Image Correlation
Fred Weinhaus, Gary Latshaw
The problem of calculating the proper relative registration of a pair of digital images has been examined using edges extracted from each image. The pairs of images were of the same scene and represented various registration situations: different perspectives, different solar conditions, real vs. synthetic, and additive noise vs. no noise. Eight methods for performing the registration on edge-extracted images were tested on 19 pairs of images. The most successful method combined an edge-overlap metric with a filter to remove extraneous edges and an edge-parallel metric. Descriptions of the eight methods and the registration scenes are presented.
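As a hedged sketch of the kind of edge-overlap metric mentioned above (not the paper's exact formulation, and omitting the extraneous-edge filter and the edge-parallel term), the registration offset can be scored by counting coincident edge pixels over a search window:

import numpy as np

def edge_overlap(ref_edges, sensed_edges, dr, dc):
    """Count edge pixels that coincide after shifting the sensed map by (dr, dc)."""
    shifted = np.roll(np.roll(sensed_edges, dr, axis=0), dc, axis=1)
    return int(np.logical_and(ref_edges, shifted).sum())

def best_offset(ref_edges, sensed_edges, search=10):
    """Exhaustive search for the shift with maximum edge overlap."""
    scores = {(dr, dc): edge_overlap(ref_edges, sensed_edges, dr, dc)
              for dr in range(-search, search + 1)
              for dc in range(-search, search + 1)}
    return max(scores, key=scores.get)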
Intelligent Bandwidth Compression
D. Y. Tseng, B. L. Bullock, K. E. Olin, et al.
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.
Context-Dependent Automatic Image Screening System
Raj K. Aggarwal, Durga P. Panda
Over the past two years Honeywell has developed a context-dependent automatic image recognition system for analyzing imagery automatically and detecting tactical as well as strategic targets. The main features of the image recognition system are sequential frame processing, symbolic image segmentation, syntactic recognition, recognition of multicomponent objects, and conflict removal. In this paper we describe the various components of this context-dependent automatic image recognition system and the information flow between these components.
Layered Relaxation Network For Object Detection
David W. Webster
Relaxation techniques have been successfully applied to both edge- and region-based segmentation schemes. However, neither an edge nor a region scheme makes use of all the information available for segmentation. A technique for allowing edge and region relaxation networks to interact locally will be described. Experimental results will be presented which demonstrate that such interaction improves network performance.
Weighted Line-Finding Algorithm
K. Abdoshshah, A. Klinger
This paper introduces two new algorithms to detect a line in a digitized picture. The algorithms are compared with the Hough algorithm, and their computational advantages are shown. We discuss the potential for, and the use of, the algorithms in applications.
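For context, a minimal version of the standard Hough transform for lines, the baseline against which the new algorithms are compared, is sketched below; the weighted line-finding algorithms themselves are not reproduced here.

import numpy as np

def hough_lines(edge_map, n_theta=180, n_rho=200):
    """Accumulate votes in (rho, theta) space for lines rho = x cos(theta) + y sin(theta)."""
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = float(np.hypot(*edge_map.shape))
    rho_bins = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)        # one rho per theta
        acc[np.digitize(rho, rho_bins) - 1, np.arange(n_theta)] += 1
    return acc, thetas, rho_bins                             # peaks in acc correspond to lines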
Segmentation-Based Boundary Modeling For Natural Terrain Scenes
Charles A. McNary, Diane K. Conti, Wilfried O. Eckhardt
This paper describes a segmentation-based boundary-modeling processor for natural terrain scenes. Techniques of this type can achieve high-precision trajectory updating with image-based guidance systems.1 The boundary-modeling processor is based on region extraction and was developed as an alternative to edge-based boundary-modeling techniques. Region boundaries provide a high degree of boundary connectivity and eliminate competing edge and line structure resulting from texture gray-level gradients. Segmentation thresholds are derived with an adaptive-averaging preprocessor, which enhances the modal structure of the image gray-level histogram by replacing local-region gray-level distributions (texture) with their mean values. A contrast-edge map can be used to validate the selection of gray-level thresholds for region segmentation by locally correlating the region boundary points with the contrast-edge map of the scene. With this refinement, the segmentation-based boundary-model processor can combine the best characteristics of region segmentation and contrast-edge extraction: a high degree of region-boundary connectivity and high spatial fidelity of the extracted edge points. The derived boundaries form models of curvilinear scene-boundary features that may be accessed at several levels of approximation for fix-area acquisition and precision fix-point identification. Scene-boundary models and hierarchical line representations of the curvilinear features were generated using this segmentation-based boundary-modeling processor for a variety of natural terrain scenes. The resultant models demonstrate the effectiveness of this processor and its utility in scene pattern matching.
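A hedged sketch of the adaptive-averaging idea described above: replacing local gray-level distributions with their local means sharpens the modal structure of the histogram from which segmentation thresholds are picked. The window size and the use of a plain box average are placeholder assumptions, not the processor's actual parameters.

import numpy as np
from scipy.ndimage import uniform_filter

def averaging_preprocess(image, window=9):
    """Return the locally averaged image and its gray-level histogram."""
    smoothed = uniform_filter(np.asarray(image, dtype=float), size=window)
    hist, bin_edges = np.histogram(smoothed, bins=256)
    return smoothed, hist, bin_edges   # thresholds are then chosen between histogram modes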
New Approach To Forward-Looking Infrared (FLIR) Segmentation
Lawrence M. Rubin, Richard L. Frey
An approach to the analysis of shape extraction (segmentation) operators is presented based on maximum likelihood parameter estimation. A simple image model is presented and objective measures of segmentation quality are presented. Bounds on the noise performance of any segmentation operator are derived and these are compared with experimental results obtained on sample images.
Segmentation Based On Second-Order Statistics
Erica M. Rounds, George Sutty
The gray level or intensity distribution of an image is frequently inadequate to select a good threshold for image segmentation. Considerable improvement can be achieved by a histogram transformation based on second-order gray level statistics. This paper describes an approach using the co-occurrence matrix. Results are shown for target extraction in forward-looking infrared (FLIR) images.
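A minimal sketch of the gray-level co-occurrence matrix that underlies the approach, for a single displacement vector; the histogram transformation the paper builds on top of it is not reproduced here, and the quantization to 16 levels is an arbitrary choice.

import numpy as np

def cooccurrence(image, levels=16, dr=0, dc=1):
    """Counts of gray-level pairs for pixels separated by (dr, dc), with dr, dc >= 0."""
    img = np.asarray(image, dtype=float)
    q = np.floor(img / (img.max() + 1e-9) * levels).clip(0, levels - 1).astype(int)
    rows, cols = q.shape
    a = q[:rows - dr, :cols - dc]                     # reference pixels
    b = q[dr:, dc:]                                   # displaced neighbours
    m = np.zeros((levels, levels), dtype=int)
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m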
Image Registration Using Material Information
Firooz A. Sadjadi, Ernest L. Hall
In studying the performance of any matching technique for image registration it is important to select an appropriate reference image. In this paper a set of a priori probabilities of detection based on explicit information about the materials which surround the edges is used in a strategy for reference selection. This set of a priori probabilities of detection is obtained by finding the frequency of occurrence of material edges. The correlation results for three edges, each belonging to a different material class, are presented. The results of two criteria for optimum reference selection, namely the Neyman-Pearson and the ideal observer, are used to show that the best choice appears to be a compromise. This minimizes the total probability of error while giving a higher probability of correct acquisition for a fixed probability of false alarm.
Edge Linking Using Thresholding
David Lee Milgram
Edge detection followed by thinning produces a set of points which lie along edges in the original image. It is important to link together the pixels which lie along the same edge. The resulting associated groups can then be fitted with line segments. This paper presents a new technique for this edge point linking problem. Each edge point is linked to its appropriate neighbor on either side by considering those contours, produced by thresholding, which pass through the given edge point. For each such contour, the edge point nearest the given edge point along the contour (in, say, the clockwise direction) is recorded. The edge point occurring most often as an associate across the set of gray level thresholds is chosen as the clockwise associate. A figure of merit based on path length and straightness is used to break any ties. The counterclockwise neighbor is chosen similarly. Two points which are mutual associates denote a symmetric link. If all non-symmetric links are deleted, the resulting structure is a set of linear chains of nodes called 'symchains.' It is claimed that symchains provide a rich edge structure for further processing.
Generation Of Random Scenes With Controlled Statistics
Robert A. Gonsalves, Thomas A. Lianza, Andrew Masia
We consider the problem of generating a random field of numbers (a random, digital scene) with specified spectral and spatial statistics. In particular we specify the amplitude spectrum Ag(f) and the first-order probability distribution p(g) of the scene. When the latter is Gaussian, the problem has a simple, well-known solution. We implement this solution as an algorithm. For a general p(g) the problem is more difficult. We present an iterative technique as a possible solution. We also describe some applications of the scenes in visual and objective measurements of optical systems.
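For the Gaussian case called simple and well known above, a hedged sketch: linearly filtering white Gaussian noise so that its expected amplitude spectrum follows the prescribed Ag(f) preserves the Gaussian first-order statistics, so both constraints are met at once. The iterative technique for general p(g) is not reproduced here, and the array layout of Ag(f) is an assumption.

import numpy as np

def gaussian_scene(amplitude_spectrum, rng=None):
    """Random scene with Gaussian p(g) whose expected amplitude spectrum follows Ag(f).

    amplitude_spectrum: real 2-D array laid out like np.fft.fft2 output (with the
    symmetry needed for a real-valued scene).
    """
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(amplitude_spectrum.shape)        # Gaussian, flat spectrum
    filtered = np.fft.ifft2(np.fft.fft2(white) * amplitude_spectrum)
    return np.real(filtered)                                     # linear filtering keeps p(g) Gaussian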
Nonplanar Projections In Image Simulations
George J. Sutty
Many image processing applications (e.g., correlation guidance systems) require synthetically generated images to simulate the data obtained by electro-optical sensors. All currently available computer image generation (CIG) systems model only the perspective projection. Thus, there is a need for CIG systems that can simulate all sensors, including those that do not have a linear geometrical function (projection). In this paper, a system capable of modeling all projections is described, and three nonplanar projections (spherical, cylindrical and polar) are discussed in detail.
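A hedged sketch of how a planar perspective projection and two of the nonplanar projections map a 3-D point to image coordinates, assuming a camera frame with x right, y up, z forward; the CIG system's actual conventions and its polar projection are not reproduced here.

import math

def perspective(x, y, z, f=1.0):
    return f * x / z, f * y / z                    # planar image plane at distance f

def spherical(x, y, z):
    az = math.atan2(x, z)                          # azimuth about the vertical axis
    el = math.atan2(y, math.hypot(x, z))           # elevation above the horizontal
    return az, el                                  # angles used directly as image coordinates

def cylindrical(x, y, z, f=1.0):
    az = math.atan2(x, z)                          # angle around the cylinder (vertical) axis
    return az, f * y / math.hypot(x, z)            # height projected onto the cylinder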
Multispectral Imaging/Ranging System
David K. Lynch, Donavon D. Pretzer, Scott D. Fouse
A five-band multispectral (MIS) imaging system has been constructed and tested by the Hughes Research Laboratories. It has five boresighted sensors mounted on a scanning pedestal in a completely mobile and self-contained instrumentation trailer. The five bands are 0.8 μm passive, 1.06 μm active/ranging, 10.6 μm active, 8-12 μm passive, and 95 GHz (3.2 mm) active/ranging. The MIS operates in all weather conditions and produces registered, calibrated, and digitized 12-bit raster-scanned imagery. Designed as a sensor test bed for DARPA, the MIS is a versatile and adaptable system which has potential use for a wide range of imaging and non-imaging applications. Field tests conducted in Culver City, California, and Climax, Colorado, will be discussed, as will the applicability of the MIS to the acquisition and analysis of more diverse scenes.
Recognition Of Handprinted Characters For Automated Cartography: A Progress Report
Matthew Lybanon, Robert M. Brown, Larry K. Gronmeyer
The preparation of maps and other cartographic products involves bringing together information from a number of different sources. Many of the steps currently involve tedious manual operations. Handwritten character recognition (CR) techniques are being developed as a part of efforts to automate procedures. To be useful in such applications, strict accuracy constraints must be observed. Substitutions (misrecognitions) are particularly harmful. Rejections (inability to recognize) are less serious, although as high a recognition rate as possible within the constraints is desirable. A complete practical CR system involves more than just the recognition algorithm. Increased performance may be obtained effectively by improving preprocessing or changing a feature calculation, as well as by modifying the recognition logic. The CR system under development was discussed in "Recognition of Handprinted Characters for Automated Cartography" at this conference last year. Current performance will be reviewed and the planned approach to improve it will be presented.
Automatic Technique For Accurately Locating Planet Centers In Voyager Images
Jean J. Lorre
An important step in determining the line/sample to latitude/longitude transformation in the Voyager images of the Jovian system is to locate in each frame the position of the planet center to sub-pixel accuracy. A method of accomplishing this based upon automatic recognition of the planet limb was developed and is being used on a production basis at JPL. This paper discusses the algorithm and its limitations.
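The limb-recognition algorithm itself is in the paper; purely as a hedged illustration of the final step, once limb points are in hand a sub-pixel center estimate can come from a standard algebraic least-squares circle fit such as the one below. This is a generic technique, not necessarily the fit used at JPL, and it ignores oblateness and limb darkening.

import numpy as np

def fit_circle(xs, ys):
    """Least-squares circle through points (xs, ys): returns (xc, yc, r)."""
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)                                  # fit x^2 + y^2 + D x + E y + F = 0
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc = -D / 2.0, -E / 2.0
    return xc, yc, np.sqrt(xc**2 + yc**2 - F)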
Automatic Temporal Analysis Of Smoke/Dust Clouds
George R. Blackman, John W. Marvin
The problem of dynamically identifying and tracking clouds in noisy digital data has been successfully solved for certain classes of 1.06μm infrared images. Techniques have been developed to separate "object" from "background" and to extract quantitative measurements of the geometry of the object in each scene as well as parameters describing the trajectory and dispersion rates of the object in time. Separation of object from background is achieved through combinations of classical and novel digital smoothing operators and edge detectors with contouring techniques. A computationally efficient least-squares algorithm is then applied to the edge of the object to obtain a quadratic form from which the quantitative results are derived.
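As a hedged sketch of that final step, a least-squares fit of a quadratic form (a general conic) to the extracted cloud edge, from which size and orientation measures can be derived; the paper's particular algorithm and the quantities it extracts may differ.

import numpy as np

def fit_conic(xs, ys):
    """Least-squares coefficients (A, B, C, D, E) of A x^2 + B x y + C y^2 + D x + E y = 1."""
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    M = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    coeffs, *_ = np.linalg.lstsq(M, np.ones_like(xs), rcond=None)
    return coeffs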
Adaptive Estimation Approach To Estimating Displacement In Time Varying Images
Haroon ur Rashid, Richard A. Jones
In this paper an approach for estimating the displacement of objects in successive frames of images is introduced. The algorithm operates on data that have been produced by an adaptive estimation redundancy removal technique. This approach has the advantage over the pel-recursive and coefficient-recursive algorithms that no frame-to-frame gradient vector calculation is required. Experimental results on generated images are presented for the purpose of comparing the stability of the algorithm with that of other displacement estimation algorithms.
Low-Cost Optical Character Recognition System
Charles C. K. Cheng
Optical Character Recognition (OCR) equipment has previously been complex, massive, and very expensive, hence really practical only for large credit-card operations, insurance companies, and postal applications. However, because of the rapid advance in large scale integrated (LSI) circuit technology and the need for a low cost OCR device to improve business data entry operations, designing and engineering such a low cost OCR system is now feasible. The described system is capable of reading directly from a human-readable information source, thus eliminating manual keying of data for conversion to a computer-processable form. Its essential segments are an optical front end for data acquisition; image conversion, correlation, and recognition processing subsystems built with LSI circuits and high-speed microprocessors; feature-analysis and contextual-editing algorithms; and output and control operational considerations.