Results Concerning Image Restoration Applications Of The Mutual Information Principle
Author(s):
Joseph P. Noonan;
James R. Marcus
Results from the mutual information principle, or MIP, approach to image restoration are presented. The MIP approach to image restoration, based on minimizing the mutual information of an image degradation system model subject to known prior statistical knowledge constraints, is briefly reviewed. An implementation of the MIP image restoration equations is then discussed. The results from a number of examples show the MIP technique to be a robust image restoration approach with apparently none of the convergence difficulties associated with similar techniques.
Morphology In A Wraparound Image Algebra
Author(s):
C. R. Giardina
An algebraic environment is established for the morphological processing of wraparound binary as well as grey-valued images. The equivalence of pointwise definitions for dilation and erosion with parallel implementations of these operations is proven. Geometric fitting algorithms are also derived for these operations. Finally, a Z-transform theory is established and applied for determining both dilation and erosion, and for synthesizing morphological operations.
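As a rough illustration of morphology on a wraparound (toroidal) grid, the sketch below implements binary dilation and erosion with periodic boundary handling via circular shifts; the function names and the cross-shaped structuring element are illustrative assumptions, and the grey-valued and parallel formulations of the paper are not shown.

```python
import numpy as np

def wrap_dilate(img, se_offsets):
    """Binary dilation on a wraparound (toroidal) grid: a pixel is set if any
    structuring-element translate of the image covers it. Illustrative sketch."""
    out = np.zeros_like(img, dtype=bool)
    for dr, dc in se_offsets:
        out |= np.roll(img, shift=(dr, dc), axis=(0, 1))
    return out

def wrap_erode(img, se_offsets):
    """Binary erosion on a wraparound grid: a pixel survives only if every
    structuring-element translate of the image covers it."""
    out = np.ones_like(img, dtype=bool)
    for dr, dc in se_offsets:
        out &= np.roll(img, shift=(-dr, -dc), axis=(0, 1))
    return out

# 3x3 cross structuring element, given as (row, col) offsets from the origin
cross = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
img = np.zeros((8, 8), dtype=bool)
img[0, 0] = True                      # a single pixel at the corner
print(wrap_dilate(img, cross).sum())  # 5: the cross wraps around the image edges
```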
A Linear Programming Approach To Maximum Entropy Signal Restoration
Author(s):
Gary A. Mastin;
Dennis C. Ghiglia;
Richard J. Hanson
In future computing environments where computer resources are abundant, a linear programming (LP) approach to maximum entropy signal/image restoration could have advantages over traditional techniques. A revised simplex LP algorithm with inequality constraints is presented. Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as an LP problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. Linear inequality constraints may be used to assure a basic feasible solution. Experimental results with 512-point signals are presented. These include restorations of noisy signals. Problems with as many as 513 equations and 6144 unknowns are demonstrated. The complexity of the LP restoration approach is briefly addressed.
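The bounded-variable idea can be sketched as follows: the concave entropy term for each unknown is replaced by piecewise-linear segments with decreasing slopes, each segment becomes a bounded LP variable, and inequality constraints absorb the measurement noise. The sketch below uses SciPy's linprog rather than a revised simplex code; the function name, segment count, and tolerance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def maxent_restore_lp(A, b, n_seg=8, x_max=1.0, slack=1e-3):
    """Piecewise-linear LP approximation to maximum-entropy restoration
    (hypothetical helper): maximize H(x) = -sum_i x_i ln x_i subject to
    |A x - b| <= slack. Each x_i is split into n_seg bounded segment
    variables; because the entropy is concave, the decreasing segment
    slopes make the LP fill the segments in order, so their sum equals x_i."""
    m, n = A.shape
    delta = x_max / n_seg
    # breakpoints and values of -x ln x, then the (decreasing) segment slopes
    t = np.linspace(0.0, x_max, n_seg + 1)
    h = -np.where(t > 0, t * np.log(np.maximum(t, 1e-12)), 0.0)
    slopes = (h[1:] - h[:-1]) / delta
    # decision vector: segment variables s[i, k] flattened, x_i = sum_k s[i, k]
    c = -np.tile(slopes, n)                     # linprog minimizes, so negate
    S = np.kron(np.eye(n), np.ones(n_seg))      # maps segment variables to x
    A_seg = A @ S
    A_ub = np.vstack([A_seg, -A_seg])           # two-sided noise constraints
    b_ub = np.concatenate([b + slack, -(b - slack)])
    bounds = [(0.0, delta)] * (n * n_seg)       # Dantzig-style variable bounds
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return S @ res.x if res.success else None
```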
Pyramidal Image Processing Using Morphology
Author(s):
George Eichmann;
Chao Lu;
Jianxin Zhu;
Yao Li
Linear pyramidal image processing (LPIP) is a form of multiresolution analysis in which a primary image is decomposed into a set of different resolution copies. LPIP aims to extract and interpret significant features of an image appearing at different resolutions. Morphological filtering, a nonlinear image processing technique, has been widely studied because of its simplicity of operation and its direct relation to the shapes in an image. In this paper, the use of morphological filtering for nonlinear pyramidal image processing (NPIP) is proposed. By using a set of suitably structured masks as the "structuring elements" (SEs), a primary image is decomposed into a sequence of pyramidal image copies. For each Gaussian image pyramid, objects smaller than a predetermined threshold size are filtered out, while for each Laplacian pyramid, objects other than a predetermined size are blocked. For NPIP operation, pipelined and parallel software implementation algorithms are suggested. In order to reconstruct the original image, an inverse morphological transform with special structuring elements is considered.
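A minimal sketch of one possible morphological pyramid, assuming an opening-based filter and subsampling by two at each level (an illustration of the idea, not the authors' NPIP scheme), is:

```python
import numpy as np
from scipy import ndimage

def morph_pyramid(image, levels=4, se_size=3):
    """Illustrative nonlinear (morphological) pyramid: at each level a grey
    opening with a small structuring element removes features smaller than
    the element (the analogue of a Gaussian level), the removed detail is
    kept (the analogue of a Laplacian level), and the filtered image is
    subsampled by two. All parameter choices here are assumptions."""
    se = np.ones((se_size, se_size), dtype=bool)
    smooth, detail = [np.asarray(image, dtype=float)], []
    for _ in range(levels - 1):
        opened = ndimage.grey_opening(smooth[-1], footprint=se)
        detail.append(smooth[-1] - opened)     # small-scale structure removed by the opening
        smooth.append(opened[::2, ::2])        # coarser copy at half resolution
    return smooth, detail
```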
Maximum Likelihood Image Registration With Subpixel Accuracy
Author(s):
Michael S. Mort;
M. D. Srinath
The problem addressed in this paper is to estimate, with an error which is substantially less than the dimensions of a pixel, the unknown displacement d between two images of a common scene, given only the image data. Assuming a Gauss-Markov model for the scene, the joint probability density function of the two images is obtained and an implicit expression for the maximum likelihood estimate of the displacement is found as the maximizer of a functional J(d). The sensitivity of the algorithm to the model parameters has been determined by experiments on 32 real images. The experiments show that a mean absolute error of 1/20 of a pixel dimension is achievable for rms signal to rms noise ratios down to a value of 5.
Image Restoration Via Iterative Improvement Of The Wiener Filter
Author(s):
Charles V. Jakowatz Jr.;
Paul H. Eichel;
Dennis C. Ghiglia
The Wiener filter has often been employed as a means for solving the signal/image restoration problem. Unfortunately, the information that is required as input for the realization of this filter is, in many practical situations, not available. In particular, when only one degraded, noisy observation of the signal is presented as data, the spectral density function for the signal to be recovered will generally not be known. A typical 'fix' to this dilemma has been to assume that the ratio of noise to signal spectral densities is constant with frequency. The value of this constant that yields the best reconstruction is then determined by human trial and error. In this paper we present an alternative to this over-simplified version of the Wiener filter (the K-filter). Specifically, we demonstrate that an estimate of the signal spectral density can be made via an iterative procedure from the data. This results in reconstructions that are generally superior to any output that the simplified Wiener filter can provide. We show simulated results for the case of one-dimensional degradations of image data.
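A one-dimensional sketch of the general idea, assuming white noise of known variance and a periodogram-based re-estimate of the signal spectrum at each pass (an assumption-driven reading, not the authors' exact algorithm), is:

```python
import numpy as np

def iterative_wiener(y, h, n_var, n_iter=10):
    """Illustrative 1-D Wiener filtering with an iteratively refined signal
    spectrum estimate. y: degraded observation, h: blur kernel (zero-padded
    to the signal length by the FFT), n_var: noise variance per sample.
    Names and the re-estimation rule are assumptions for this sketch."""
    N = len(y)
    H = np.fft.fft(h, N)
    Y = np.fft.fft(y)
    Pnn = n_var * N                                   # flat noise power spectrum
    Pss = np.maximum(np.abs(Y) ** 2 - Pnn, 1e-12)     # initial signal spectrum guess
    for _ in range(n_iter):
        W = np.conj(H) * Pss / (np.abs(H) ** 2 * Pss + Pnn)
        X = W * Y                                     # current restoration (spectrum)
        Pss = np.maximum(np.abs(X) ** 2, 1e-12)       # re-estimate from the restoration
    return np.real(np.fft.ifft(X))
```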
Statistical Models Of A Priori Information For Image Processing: II. Finite Distribution Range Constraints
Author(s):
Z. Liang;
R. Jaszczak
A probabilistic formulation of statistical models of a priori source distribution information is presented with considerations of source strength correlations and finite distribution range constraints. A Bayesian analysis incorporating the a priori source distribution probabilistic information is given in treating measured data obeying Poisson statistics. A system of equations for determining the source distribution given the measured data is obtained by maximizing the a posteriori probability. An iterative approach for the solution is carried out by a Bayesian image processing algorithm derived using an expectation maximization technique. The iterative Bayesian algorithm is tested using both ideal computer-generated data and noisy experimental radioisotope phantom imaging data. Improved results are obtained with the Bayesian algorithm over those of a maximum likelihood algorithm. A quantitative measurement of the improvement is obtained by employing filtered objective criteria functions.
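For context, the maximum-likelihood EM iteration for Poisson data that serves as the baseline here can be sketched as follows; the Bayesian variant adds the a priori source-distribution terms, which are not reproduced in this illustration.

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """Minimal sketch of the maximum-likelihood EM (ML-EM) iteration for
    Poisson-distributed counts; A is an assumed system matrix mapping
    source voxels to detector bins, counts are the measured bin values."""
    x = np.ones(A.shape[1])                  # flat initial source estimate
    sens = A.sum(axis=0)                     # sensitivity of each voxel
    for _ in range(n_iter):
        proj = A @ x                         # expected counts per detector bin
        ratio = counts / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative EM update
    return x
```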
Dual-Mode Discrete Cosine Transform
Author(s):
H. S. Hou;
M. J. Vogel
This paper describes the band-splitting properties of the discrete cosine transform (DCT). The one-dimensional DCT, viewed as a filter, can split the filtered output into two halves, one corresponding to the low-frequency band of the input and the other to the high-frequency band of the input. In a two-dimensional case, the DCT behaves like a quadrature mirror filter. Based on the band-splitting properties of the DCT, a dual-mode image compression method has emerged. This new method is used to quantize and encode the image edges in the spatial domain and the uniform image portions in the DCT frequency domain. Analysis and simulation are included herein. Improvement of image quality by the use of this new method has been demonstrated through simulation.
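The band-splitting property can be illustrated with a small 1-D sketch: keeping the first or second half of the DCT coefficients and inverting at half length yields half-rate signals dominated by the low- and high-frequency content of the input, respectively. This is only an illustration of the property, not the paper's dual-mode codec.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_band_split(signal):
    """Illustrative 1-D DCT band split: the first and second halves of the
    DCT-II coefficients are inverted at half length, giving half-rate
    signals dominated by the low and high frequency bands of the input."""
    N = len(signal)
    C = dct(signal, type=2, norm="ortho")
    low = idct(C[: N // 2], type=2, norm="ortho")    # low-band, half-rate
    high = idct(C[N // 2:], type=2, norm="ortho")    # high-band, half-rate
    return low, high

n = np.arange(64)
x = np.cos(2 * np.pi * 3 * n / 64) + 0.2 * np.cos(2 * np.pi * 25 * n / 64)
low, high = dct_band_split(x)   # slow cosine dominates 'low', fast one 'high'
```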
New Directions In Video Coding And Transmission
Author(s):
T. Russell Hsing
Stimulated by the increasing availability of digital transmission and the evolution from application-specific networks to integrated-services networks, cost-effective transmission of sophisticated video services will become more widespread in the near future. Video coding technology has been evolving over the past 20 years, beginning with DPCM, entropy coding and run-length coding, followed by motion detection/compensation coding and subband coding research. Psychovisual coding and knowledge-based image coding are now the major focuses in this field, and so far only a few groups have been publishing their work in this area. The common research interest in this field is to investigate new ways of improving picture quality significantly at adequately low bit rates. Because of dynamic bandwidth allocation and resource sharing, video transmission is no longer limited to a particular transmission rate in the coming generation of packet-switched networks. The development of variable-bit-rate compression techniques that can guarantee constant image quality at the destination in packet-switched network applications will be a research thrust for integrated-services networks. In this paper, recent research efforts in the areas of psychovisual coding, knowledge-based image coding and variable-bit-rate compression will be discussed. The effectiveness and simulation results of our variable-bit-rate compression and psychovisual coding techniques will be presented. In addition, several open research topics will be briefly pointed out for each subject.
Feature Extraction Of Retinal Images Interfaced With A Rule-Based Expert System
Author(s):
Naseem Ishag;
Kevin Connell;
John Bolton
Feature vectors are automatically extracted from a library of digital retinal images after considerable image processing. The main features extracted are the location of the optic disc, the cup-to-disc ratio (obtained using Hough transform techniques together with histogram and binary enhancement algorithms), and blood vessel locations. These feature vectors are used to form a relational data base of the images. Relational operations are then used to extract pertinent information from the data base to form replies to queries from the rule-based expert system.
Lighting And Optics Expert System For Machine Vision
Author(s):
Amir Novini
Machine Vision and the field of Artificial Intelligence are both new technologies which have evolved mainly within the past decade with the growth of computers and microchips. And, although research continues, both have emerged from the experimental stage into industrial reality. Today's machine vision systems are solving thousands of manufacturing problems in various industries, and the impact of Artificial Intelligence, and more specifically, the use of "Expert Systems" in industry is also being realized. This paper will examine how the two technologies can cross paths, and how an Expert System can become an important part of an overall machine vision solution. An actual example of the development of an Expert System that helps solve machine vision lighting and optics problems will be discussed. The lighting and optics Expert System was developed to assist the end user in configuring the "Front End" of a vision system and to help solve the overall machine vision problem more effectively, since lack of attention to lighting and optics has caused many failures of this technology. Other areas of machine vision technology where Expert Systems could apply will also be discussed.
Extracting Depth By Binocular Stereo In A Robot Vision System
Author(s):
Suresh B. Marapane;
Mohan M. Trivedi
A new generation of robotic systems will operate in complex, unstructured environments utilizing sophisticated sensory mechanisms. Vision and range will be two of the most important sensory modalities such a system will utilize to sense its operating environment. Measurement of depth is critical for the success of many robotic tasks such as object recognition and location; obstacle avoidance and navigation; and object inspection. In this paper we consider the development of a binocular stereo technique for extracting depth information in a robot vision system for inspection and manipulation tasks. The ability to produce precise depth measurements over a wide range of distances and the passivity of the approach make binocular stereo techniques attractive and appropriate for range finding in a robotic environment. This paper describes work in progress towards the development of a region-based binocular stereo technique for a robot vision system designed for inspection and manipulation, and presents preliminary experiments designed to evaluate the performance of the approach. Results of these studies show promise for the region-based stereo matching approach.
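For reference, the triangulation step that converts a matched disparity into depth for a rectified stereo pair can be sketched as below; the function and the numbers are illustrative, not taken from the paper.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Toy illustration of stereo triangulation for a rectified pair:
    depth is inversely proportional to disparity. All names and values
    here are hypothetical."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# e.g. a 10-pixel disparity with an 800-pixel focal length and 0.12 m baseline
print(depth_from_disparity(10.0, 800.0, 0.12))   # 9.6 m
```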
New Adjustable Pseudo-Coloring Method For Density Image By Effect Of Dual Gratings
Author(s):
Ying Zhou;
Zhen-Pei Cheng
This paper presents a new optical method for density pseudo-coloring in which the color is adjustable. The method is based on the technique for phase-image pseudo-coloring and achieves adjustable pseudo-coloring by using the diffraction effect of dual gratings and a filtering technique in a white-light system. First, the principle of the method is established. Then the validity of the theory is verified by experiment. Finally, certain images are processed. Both the theory and the experiment show that the new method is effective.
High Resolution Digitization Of Color Images
Author(s):
John P. Cookson;
Lyndon S. Guy
This paper is a brief introduction to design and performance factors in high resolution color image digitization systems. High resolution is defined as 1024x1024x24 or 2048x2048x24 bits per image, where eight bits are used to represent each of the three (red, green and blue) color components of a pixel. To investigate design considerations in such a system, a locally integrated experimental prototype, based on relatively inexpensive and individually acquired components, was developed. The core of this system is a high resolution camera that utilizes a solid state charge coupled device (CCD) sensor array. One objective in building the prototype is to determine the feasibility of using high resolution digital imaging techniques to present educational material to medical students. Essential system elements and critical parameters are discussed; potential problem areas are mentioned.
Post-Filtering Of Transform-Coded Images
Author(s):
Kou-Hu Tzou
An effective filtering technique has been introduced to reduce the blocking effect in transform-coded images. The proposed approach employs circularly asymmetric two-dimensional (anisotropic) filters adapted to the characteristics of the blocking noise. By adaptively pointing the major axis of the anisotropic filters in the direction perpendicular to the orientation of the block boundary, the anisotropic filter can effectively reduce the blocky appearance while preserving the sharpness of the transform-coded pictures. Computer simulation of this new method shows significant improvement in image quality, subjectively as well as objectively.
Adaptive Coding For Image Sequences In Transform Domain Based On A Classification Strategy
Author(s):
G. Tu;
L. Van Eycken;
A. Oosterlinck
This paper presents an adaptive coding technique for color image sequence signals, based on a classification strategy. Through a classification process carried out in the data domain, the non-stationary image data are partitioned into classes characterized by their local spatial activity, and the coding process is then optimized for each class. The low-passed image data are coded using two processing loops: a dynamic loop in which an image block is obtained by motion estimation, and a static loop in which a best-matched vector is chosen from a vector-quantization sub-codebook. The motion-estimated data block, on the other hand, also contains the classification information. The difference data are then adaptively encoded using the classification information, and properties of the human visual system can be incorporated to increase performance.
A New Type Of Image Processing Using A Dynamic Graphic Data Structure
Author(s):
Masao Sakauchi;
Yutaka Ohsawa
Software-based image processing has not been practical for large images such as drawings or remote sensing data because of the huge computing time involved. This paper discusses a new type of software-based image processing technique, using geometrical operations on graphical primitives stored in a multi-dimensional data structure, which can overcome this problem. In the proposed method, image data are first converted to suitable graphical primitives such as contour vectors or segments representing the objects in the image. These primitives are then inserted and managed in the devised dynamic graphical data structure named the BD tree [1]. All the required processing for given images is performed by efficient graphical operations such as "range searching" or "relation checking" in this graphical data structure. The BD tree makes such graphical or geometrical operations fast and flexible. Several applications based on this method, including auto-digitizing of drawings, drawing image recognition and understanding, and color image quantization, are then presented. These successful examples reveal the effectiveness of the proposed image processing technique.
Virtual Image Processing
Author(s):
Ranjit Mulgaonkar
Many imaging applications require high speed acquisition, storage, processing and display of large images (over 10k X 10k pixels). Such applications include remote sensing, medical imaging, electron microscopy, publishing and document/photograph processing. Most existing image processors and special purpose acquisition devices have limitations on the image size that can be acquired (usually 4096 X 4096 pixels). Storage of these large images, sometimes with multiple bands, requires a large amount of storage space. The available storage devices that have large storage capacity cannot keep up with the transfer speed requirements; the ideal storage device should provide both high storage capacity and high transfer speeds. Processing these large images requires a large amount of Random Access Memory (RAM), and the image has to be broken into smaller sections (sub-images). The processed image is usually rebuilt from the sub-images, resulting in visible boundaries between them. Even though devices that display images of up to 2048 X 2048 pixels are available today, they still display only a small portion of a large image at a time; in order to display the entire image, the display window has to be roamed over the image data base. In this paper we discuss a system architecture suitable for processing such large images. This system performs acquisition, storage, display and processing of large image databases.
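One standard way to avoid the visible sub-image boundaries mentioned above is to process tiles with an overlapping halo and write back only the tile interiors; the sketch below is illustrative (tile size, overlap, and the example operator are arbitrary assumptions), not the architecture described in the paper.

```python
import numpy as np
from scipy import ndimage

def process_in_tiles(image, func, tile=512, overlap=16):
    """Illustrative sub-image (tiled) processing with overlapping borders:
    each tile is processed with a halo of valid context, and only the tile
    interior is written back, so no seams appear at tile boundaries.
    'func' is any local operator applied tile by tile."""
    out = np.empty_like(image, dtype=float)
    rows, cols = image.shape
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            r1, c1 = min(r0 + tile, rows), min(c0 + tile, cols)
            rr0, cc0 = max(r0 - overlap, 0), max(c0 - overlap, 0)
            rr1, cc1 = min(r1 + overlap, rows), min(c1 + overlap, cols)
            result = func(image[rr0:rr1, cc0:cc1])
            out[r0:r1, c0:c1] = result[r0 - rr0:r0 - rr0 + (r1 - r0),
                                       c0 - cc0:c0 - cc0 + (c1 - c0)]
    return out

# example: a 3x3 smoothing applied to a large image tile by tile
big = np.random.rand(2048, 2048)
smoothed = process_in_tiles(big, lambda t: ndimage.uniform_filter(t, size=3))
```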
CRT And Flat Panel Inspection Via PC-Based Image Processing
Author(s):
John Melson
The light output inspection process of cathode ray tubes, flat panels and other displays often requires extremely precise and time-consuming manual steps. This paper describes the PR-900 Video Photometer, a PC-based photometric, spatial and colorimetric instrument which automates this inspection process, eliminating operator error and greatly reducing measurement time. The PR-900 digitizes the displayed image as acquired in video form and converts the data into an accurate NBS-traceable measurement. The IBM PC/AT-based photometer is designed for speed and resolution. The system includes a personal computer, an image digitizer, a customized CCD-array Video Sensor Assembly (VSA), interchangeable objective lenses and proprietary VideoView software. Optionally, an automatic VSA Positioning Stage is added for Automatic Test Environment (ATE) applications. With its modular approach, the hardware and software of the video photometer are readily optimized to meet diverse measurement requirements. This versatile instrument is used to calculate a wide range of parameters, including luminance (brightness) uniformity, line width, luminance profiles, character size, spot contours, modulation transfer function (MTF), misconvergence, geometric distortion and relative chromaticity (color) coordinates.
PC-Based Image Restoration
Author(s):
R. A. Gonsalves;
J. P. Kennealy
We present an overview of PC-based image restoration as practiced in typical university and industry settings. We describe the hardware, software, details of some algorithms, and some examples.
Graphics With Special Interfaces For Disabled People
Author(s):
A. Tronconi;
M. Billi;
A. Boscaleri;
P. Graziani;
C. Susini
Some physically impaired persons are unable to use standard drawing tools (pencil, eraser, paper, etc.). An effective way to deal with this problem is to create a graphic environment with features that motor-disabled users can control. A motor-disabled person can use some commercial graphics programs by means of the special interface (working as a mouse emulator) that we developed. Special software oriented to providing mentally impaired children with some computer graphics capabilities can also be developed; such software is useful when commercially available software is unsuitable. We are developing special software that provides physically impaired children with the capability of graphically representing three-dimensional scenes. This software can be controlled by means of several different special input interfaces, including a speech recognition system.
(HSI) Color Processing On Personal Computers
Author(s):
Bernadette M. Morrissey
Color image processing can be made much less complicated and more time-efficient if color images can be specified in terms of color attributes, namely hue, saturation, and intensity (HSI). These attributes are the closest approximation to human interpretation of color, simply because they are products of human perception. Converting color images into hue, saturation, and intensity can be done by compute-intensive algorithms executed in software or on single plug-in cards, both available for personal computers. The major benefit of HSI is its compatibility with monochrome techniques currently available. HSI builds on programs made to date for black and white image processing applications. Monochrome techniques are intensity-related operations and are thus fully portable. Also, because the attributes are independent, HSI processing is just as fast and straightforward as monochrome processing, while allowing programmers a choice in color processing.
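For reference, one common RGB-to-HSI conversion (offered as an illustration; the plug-in hardware discussed here may implement a different formulation) is:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """One standard RGB-to-HSI conversion, shown for illustration only.
    rgb: float array in [0, 1] with shape (..., 3). Returns H in degrees,
    S and I in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = np.degrees(np.arccos(np.clip(num / np.maximum(den, 1e-12), -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)
    return h, s, i

print(rgb_to_hsi(np.array([1.0, 0.0, 0.0])))   # pure red: H = 0, S = 1, I = 1/3
```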
Image Data Compression In A Personal Computer Environment
Author(s):
Paul M. Farrelle;
Daniel G. Harrington;
Anil K. Jain
This paper describes an image compression engine that is valuable for compressing virtually all types of images that occur in a personal computer environment. This allows efficient handling of still-frame video images (monochrome or color) as well as documents and graphics (black-and-white or color) for archival and transmission applications. Through software control, different image sizes, bit depths, and choices among lossless compression, high-speed compression and controlled-error compression are allowed. Having integrated a diverse set of compression algorithms on a single board, the device is suitable for a multitude of picture archival and communication (PAC) applications including medical imaging, electronic publishing, prepress imaging, document processing, law enforcement and forensic imaging.
An Error Analysis For Surface Orientation From Vanishing Points
Author(s):
Richard S. Weiss;
Hiromasa Nakatani;
Edward M. Riseman
There are many cases in which perspective information can be used to derive three-dimensional spatial information about objects from their two-dimensional images. There are established algorithms for estimating the direction of lines and the orientation of surfaces based on their projections onto the image plane. Given two parallel lines on a plane, their projections onto the viewing plane intersect at a vanishing point, which provides a constraint on the orientation of the plane. Two such independent constraints define a vanishing line, and thereby determine the orientation of the plane uniquely. In order to effectively recover surface orientations via lines extracted from the image, it is necessary to put bounds on the errors while applying these constraints. Our approach involves representing line directions and surface normal vectors as points on a Gaussian sphere and computing the error bounds as regions on the sphere. Multiple constraints are combined by intersecting the corresponding regions. The starting point for computing the error bounds is an estimate of the accuracy of the lines which are extracted from the image. A mathematical analysis of the imaging geometry is used to propagate these errors to vanishing points, vanishing lines, and surface orientations. In addition, constraints based on a priori knowledge can be introduced to improve the accuracy. Some experimental results are presented to illustrate this.
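The basic geometric computations behind these constraints can be sketched as follows. This is a minimal illustration assuming a pinhole camera with known focal length and the principal point at the image origin; it leaves out the error-bound propagation on the Gaussian sphere that is the subject of the paper.

```python
import numpy as np

def vanishing_point(p1, p2, q1, q2):
    """Intersection of two image lines (each given by two points) computed
    in homogeneous coordinates: the vanishing point of the corresponding
    parallel scene lines. Illustrative sketch only."""
    l1 = np.cross([*p1, 1.0], [*p2, 1.0])    # line through p1 and p2
    l2 = np.cross([*q1, 1.0], [*q2, 1.0])
    v = np.cross(l1, l2)                     # homogeneous intersection
    return v[:2] / v[2]

def plane_normal_from_vps(v1, v2, f):
    """Two vanishing points of a plane give two 3-D directions (u, v, f);
    their cross product is, up to sign, the plane normal."""
    d1 = np.array([v1[0], v1[1], f])
    d2 = np.array([v2[0], v2[1], f])
    n = np.cross(d1, d2)
    return n / np.linalg.norm(n)
```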
Algorithm For The Determination Of Vessel Center And Diameter From X-Ray Shadowgraphs
Author(s):
Paul Fenster;
Joseph Barba;
Richard Wong
Details of a method for determining the center and radius of an opacified vessel, such as an artery, from its x-ray projection are given. These parameters are determined to a typical accuracy of better than 0.1 pixel in the absence of noise. The effects of both Gaussian noise and digitization interval are determined, as well as the effect of the pixel size to vessel diameter ratio. A method of finding x-ray system parameters is presented, and the algorithm for center and radius determination is confirmed by measurements on a DSA shadowgraph of an opacified tube.
Multiple Directed Graph Large-Class Multi-Spectral Processor
Author(s):
David Casasent;
Shiaw-Dong Liu;
Hideyuki Yoneyama
The use of a decision net (hierarchical classifier and a multiple directed graph processor) is detailed and demonstrated for an imaging spectrometer identification problem.
Extracting Distance Using Gray Level Gradient On The Defocused Image
Author(s):
Moriyuki Matsuo
This paper proposes a new method for extracting the distance between an object and a TV camera from a defocused image by using the gray-level gradient. In this work, image clearness is defined as a measure of the gray-level gradient over the image as a whole. A defocused image is not clear in the sense that its clearness has a low value compared with the focused image. First, the clearness of a defocused image is discussed from the viewpoint that image clearness is represented by changes of the gray level in the local image. Secondly, the linear and the nonlinear clearness of the image are defined as the mean of the gray-level gradient in the image. Finally, satisfactory results are obtained in extracting the distance between an object and a TV camera using clearness.
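A minimal sketch of such a clearness measure, taken simply as the mean gradient magnitude (not the paper's exact linear and nonlinear definitions), is:

```python
import numpy as np
from scipy import ndimage

def clearness(image):
    """Illustrative 'clearness' measure: the mean gray-level gradient
    magnitude over the image. A defocused image scores lower than the
    focused one."""
    gy, gx = np.gradient(image.astype(float))
    return np.mean(np.hypot(gx, gy))

# a blurred copy of an image scores lower than the sharp original
sharp = np.random.rand(128, 128)
blurred = ndimage.gaussian_filter(sharp, sigma=2.0)
print(clearness(sharp) > clearness(blurred))   # True
```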
"Rule-Based Processing For String Code Identification"
Author(s):
David Casasent;
Sung-Il Chien
A 3-D distortion-invariant multi-class object identification problem is addressed. Our new, fast and robust string-code generation technique (using optical and digital methods) makes the rule-based system quite practical and attractive. Emphasis is given to our rule-based system and to initial data results. The system achieves excellent multi-class recognition and accommodates reasonable object distortions. We achieved 80-90% correct recognition (Pc) for 10 object classes with ±30° 3-D distortions and full 360° in-plane distortions.
System Design For A Dental Image Processing System
Author(s):
Fredrick M. Cady;
John C. Stover;
William J. Senecal
An image processing system for a large clinic dental practice has been designed and tested. An analysis of spatial resolution requirements and field tests by dentists show that a system built with presently available, PC-based image processing equipment can provide diagnostic quality images without special digital image processing. By giving the dentist a tool to digitally enhance x-ray images, increased diagnostic capabilities can be achieved. Very simple image processing procedures such as linear and non-linear contrast expansion, edge enhancement, and image zooming can be shown to be very effective. In addition to providing enhanced imagery in the dentist's treatment room, the system is designed to be a fully automated dental records management system. It is envisioned that a patient's record, including x-rays and tooth charts, may be retrieved from optical disk storage as the patient enters the office. Dental procedures undertaken during the visit may be entered into the record via the imaging workstation by the dentist or the dental assistant. Patient billing and record keeping may be generated automatically.
Neural Network Approach To Stereo Matching
Author(s):
Y. T. Zhou;
R. Chellappa
A method for matching stereo images using a neural network is presented. We first fit a polynomial to find a smooth continuous intensity function in a window and estimate the first-order intensity derivatives. The combination of smoothing and differentiation results in a window operator which functions very similarly to the human eye in detecting intensity changes. Since natural stereo images are usually digitized for implementation on a digital computer, we consider the effect of spatial quantization on the estimation of the derivatives from natural images. A neural network is then employed for matching the estimated first-order derivatives under the epipolar, photometric and smoothness constraints. Owing to the dense intensity derivatives, a dense array of disparities is generated with only a few iterations. This method does not require surface interpolation. Experimental results using natural image pairs are presented to demonstrate the efficacy of our method.
Electronic Control And Display System In The Optical Processor Of SAR (Synthetic Aperture Radar)
Author(s):
Qian-Yang Yu;
Lian-qing Huang;
Chao-Yang Li
A universal optical processor for SAR, which can easily be used to process SAR data film with various parameters and obtain high-resolution images, was developed. The electronic control and display system used in the optical processor is the main topic of this paper. The system consists of tension control, phase-locked velocity control, CCD image storage and display, and laser beam intensity stabilization sub-systems, an electric focusing mechanism, decoding and recording of auxiliary information, and so on. These sub-systems have made the developed optical processor very successful. The static resolution of the visible image reaches 90 lp/mm, and the dynamic resolution is 60 lp/mm.
Compression Of High Spectral Resolution Imagery
Author(s):
Richard L. Baker;
Yi Tong Tse
NASA will acquire billions of gigabytes of data over the next decade, and often there is a problem just funneling the data down to earth. The 80-foot-long Earth Orbiting Satellite (EOS), scheduled for launch in the mid-1990s, is a prime example. EOS will include a next-generation multispectral imaging system (HIRIS) having unprecedented spatial and spectral resolution. Its high resolution, however, comes at the cost of a raw data rate which exceeds the communication channel capacity assigned to the entire EOS mission. This paper explores noisy (lossy) compression algorithms which may compress multispectral data by 30:1 or more. Algorithm performance is measured using both traditional (mse) and mission-oriented criteria (e.g., feature classification consistency). We show that vector quantization, merged with suitable preprocessing techniques, emerges as the most viable candidate.
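A toy sketch of the vector-quantization step, using a k-means codebook over per-pixel spectra (the preprocessing the abstract alludes to is omitted, and all names are illustrative), is shown below. For 8-bit bands, a 256-entry codebook stores one index byte per pixel instead of one byte per band, so the compression ratio grows roughly with the number of bands, codebook overhead aside.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def vq_compress_spectra(cube, codebook_size=256):
    """Toy vector quantization for a multispectral cube of shape
    (rows, cols, bands): each pixel's spectrum is replaced by the index
    of its nearest codebook vector. Illustrative sketch only."""
    rows, cols, bands = cube.shape
    vectors = cube.reshape(-1, bands).astype(float)
    codebook, _ = kmeans2(vectors, codebook_size, minit="points")
    indices, _ = vq(vectors, codebook)            # one codeword index per pixel
    return codebook, indices.reshape(rows, cols)

def vq_decompress(codebook, indices):
    return codebook[indices]                      # (rows, cols, bands) approximation
```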
Digital Image Velocimetry
Author(s):
Y-C Cho
A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time-dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple-exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transforms of both the single-exposure image and the superimposed image. Also, the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
Digital Image Processing Applied To Problems In Art And Archaeology
Author(s):
John F. Asmus;
Norman P. Katz
Many of the images encountered during scholarly studies in the fields of art and archaeology have deteriorated through the effects of time. The Ice-Age rock art of the now-closed caves near Lascaux is a prime example of this fate. However, faint and subtle details of these works can be exceedingly important, as some theories suggest that the designs are computers or calendars pertaining to astronomical cycles as well as seasons for hunting, gathering, and planting. Consequently, we have applied a range of standard image processing algorithms (viz., edge detection, spatial filtering, spectral differencing, and contrast enhancement) as well as specialized techniques (e.g., matched filters) to the clarification of these drawings. We also report the results of computer enhancement studies pertaining to authenticity, faint details, sitter identity, and age of portraits by da Vinci, Rembrandt, Rotari, and Titian.
Preliminary Study Of Triple Photon Coincidence Imaging Technique
Author(s):
Z. Liang
A preliminary study of the triple photon coincidence imaging technique (TPCIT) is presented for two imaging modes: angular unconstrained and angular constrained coincidence triple photons. The angular unconstrained triple photon coincidence mode reconstructs the source distribution from small source volume fields (approximately 1.0 x 1.0 x 1.0 cm³), each of which is specified by the intersection of three hollow cones generated from a triple photon coincidence event. Each hollow cone represents a source probability field of a photon undergoing an electronic collimation of the scattering-absorption process. The angular constrained triple photon coincidence mode determines two small source volume fields from each triple photon coincidence event. The two volume fields are specified by the intersections of a hollow cone generated by one photon and a solid cylinder generated by another two photons from a positron annihilation with the 180-degree constraint. A truncated spherical detector system for brain imaging studies is proposed for quantitative analysis of resolution, sensitivity, and signal-to-noise ratio. Nuclear isotopes of Hf (for the angular unconstrained TPCIT) and Cs (for the angular constrained TPCIT), among many other isotopes, are used for the quantitative calculations. A comparison of the proposed imaging system with customary SPECT and PET is reported.
An Optical Flow Field Measurement Algorithm For Flir (Forward-Looking Infrared) Imagery
Author(s):
D. N. Kato
In this paper, a technique for measuring the optical flow field from forward-looking infrared (FLIR) imagery is presented. The optical flow field has many applications including scene tracking, registration, moving target identification (MTI) and occlusion detection. However, the lower resolution of FLIR imagery, as well as other considerations such as sensor artifacts and noise, makes it more difficult to accurately measure the flow vectors than is the case with normal visible imagery. We have developed an optical flow field measurement algorithm for FLIR imagery which is a correlation matching technique based on an approach developed by Barnard and Thompson [1]. Example results for FLIR images are shown demonstrating the capabilities of the algorithm.
Processing Of IR Images From PtSi Schottky Barrier Detector Arrays
Author(s):
Jerry Silverman;
Jonathan M. Mooney
12-bit digitized IR images acquired with PtSi Schottky barrier detector arrays have been processed on Sun workstations. Two techniques for 8-bit global display are compared: the standard method of histogram equalization and a newly devised technique of histogram projection. The projection technique generally gives distinctly superior results, as demonstrated on an extensive set of indoor, outdoor, day and night scenes. The new algorithm also affords a robust and powerful local contrast enhancement technique. For scenes where the "signal targets" occupy a larger fraction of the pixels (close-ups), projection and histogram equalization have complementary strengths, and a hybrid technique, in which the first 25% of the occupied levels are equalized and the remainder are projected, yields the best features of each method.
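The two global display mappings can be sketched as follows; this is an illustrative reading of the abstract (the 12-bit range, LUT construction, and level handling are assumptions), not the authors' code.

```python
import numpy as np

def histogram_equalize(img12, out_levels=256):
    """Standard histogram equalization of a 12-bit integer image to 8 bits."""
    hist = np.bincount(img12.ravel(), minlength=4096)
    cdf = np.cumsum(hist) / img12.size
    lut = np.clip((cdf * out_levels).astype(int), 0, out_levels - 1)
    return lut[img12]

def histogram_project(img12, out_levels=256):
    """Histogram projection as described above: every *occupied* input level
    gets the same share of the output range, regardless of how many pixels
    fall in it. Illustrative sketch of the idea."""
    occupied = np.unique(img12)
    lut = np.zeros(4096, dtype=int)
    # rank of each occupied level, spread uniformly over the display range
    lut[occupied] = np.arange(occupied.size) * out_levels // max(occupied.size, 1)
    return np.clip(lut[img12], 0, out_levels - 1)
```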
Aircraft Navigation Using I.R. Image Analysis
Author(s):
R. A. Samy;
A. Lucas
Recent I.R. image analysis techniques can be used to enhance the accuracy of aircraft navigation systems. Scene matching is one of the most promising techniques that can be combined with an Inertial Navigation System to give very high accuracy autonomous navigation. Detection of "apars" segments using an optimal edge operator is used to select points in an I.R. image. Matching of the reference map with the "apars" is achieved by a hypothesis prediction and evaluation algorithm, which includes a Kalman estimator for image-to-map transformation refinement.
Sub-Band Coding Of Images Using Nonrectangular Filter Banks
Author(s):
Rashid Ansari;
A. Enis Cetin;
Sang H. Lee
A procedure for image coding using a non-rectangular subband decomposition is described in this paper. The procedure provides a hierarchical partition where at each stage a decomposition into two subbands is accomplished with no bias toward either the horizontal or the vertical high-frequency content of the signal. The filters used in the processing are low-order IIR filters with approximately linear phase, and the analysis/synthesis filter banks provide exact reconstruction in the absence of quantization and coding. The filters are implemented using a generalized notion of separable processing. A two-stage decomposition was investigated in which the subband signal with lowest frequency content was coded using the Discrete Cosine Transform (DCT) and the outer bands were coded using run-length coding and pulse code modulation (PCM). The coding scheme is demonstrated on some test images. The scheme provides a natural hierarchy of image resolution where the sampling is reduced by a factor of two at each stage. In this diamond-shaped frequency subband analysis/synthesis scheme, if the information of the outer band is lost, the reconstructed signal appears visually more acceptable than when either the high vertical or high horizontal information is ignored in a rectangular subband analysis/synthesis.
Skeletonizing The Distance Transform Parallelwise
Author(s):
Carlo Arcelli;
Gabriella Sanniti di Baja
An iterated parallel algorithm to get the labeled skeleton of a digital figure F is presented. At every iteration, the current 8-connected contour is identified and the pixels preventing contour simplicity are taken as the skeletal pixels. The set of the skeletal pixels has all the properties generally satisfied by the skeleton, except for unit width. This property is enjoyed if one iteration of a standard parallel thinning algorithm is applied. Every skeletal pixel is assigned a label indicating the first iteration of the process at which it has been recognized as a skeletal pixel. Such a label is equal to the 4-distance of the pixel from the complement of F. The obtained labeled skeleton coincides with the skeleton one could get by skeletonizing the 4-distance transform of F.
Recognition Of Hand Written Arabic Characters
Author(s):
H. Al -Yousefi;
S. S. Udpa
This paper introduces a novel statistical approach for recognizing handwritten Arabic characters. The proposed method involves, as a first step, digitization of the segmented character. The secondary characters are then isolated and identified separately, thereby reducing the recognition task to a 20-class problem. The moments of the horizontal and vertical projections of the remaining primary characters are estimated and normalized with respect to the zero-order moment. Simple measures of shape are obtained from the normalized moments and incorporated into a feature vector. Classification is accomplished using quadratic discriminant functions. Results confirming that the method shows considerable merit are presented.
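A rough sketch of the projection-moment feature computation described above, with hypothetical function and parameter names (the paper's specific shape measures are not reproduced), might look like:

```python
import numpy as np

def projection_moment_features(char_img, n_moments=4):
    """Illustrative feature extraction: central moments of the horizontal
    and vertical projections of a binary character image, normalized by
    the zero-order moment (pixel count)."""
    img = (np.asarray(char_img) > 0).astype(float)
    m00 = img.sum()
    feats = []
    for proj in (img.sum(axis=0), img.sum(axis=1)):   # vertical, horizontal projections
        coords = np.arange(proj.size)
        mean = (coords * proj).sum() / m00
        for p in range(2, n_moments + 1):
            feats.append(((coords - mean) ** p * proj).sum() / m00)   # central moments
    return np.array(feats)
```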
Aircraft Recognition Using A Parts Analysis Technique
Author(s):
G. A. Roberts
A knowledge-based system for aircraft recognition is described. This system uses a parts-matching technique to identify aircraft. The target aspect is determined using motion and skeletal feature analysis. Silhouette models for the particular aspect of the aircraft are generated using the aspect information. A parts analysis that compares the models' parts to the segmented aircraft is used to identify the aircraft. The techniques used for parts analysis, model generation, and aspect determination are described. Also, a study is presented which compares the performance of two statistical classifiers to the knowledge-based classifier.
Combining Motion And Segmentation Information For Localization Of Occlusion Boundaries
Author(s):
Keith Moler;
Alan Scherf
An algorithm is presented for the accurate detection of occlusion boundaries in an image sequence. Both segmentation and optical flow information are utilized to form a motion image. This fusion process employs an assumption of optical flow continuity within each segmentation region. Continuity is not assumed between segmentation regions, thereby preserving potential occlusion boundaries. Two approaches for the detection of occlusion boundaries in this motion image are presented and compared. The first is the Marr-Hildreth edge detector and the second applies cluster analysis and linear modeling techniques. The algorithm has been tested on infrared imagery, and occlusion boundaries have been detected with a high degree of localization accuracy.
Object Tracking Using Curvilinear Features
Author(s):
J. G. Landowski;
R. S. Loe
This article discusses a technique for following an object of interest through a sequence of time-varying imagery. The approach described is based on matching strong curvilinear edge features using a chain code representation for the edges. A simple match function suitable for efficient real-time implementation is described. It can handle reasonable amounts of obscuration, small rotations and edge fragmentation. We have applied this process to several infrared image sequences with results that are better than standard correlation strategies and that meet our operational requirements.
Precise Delineation Technique By Quantitative Computed Tomography
Author(s):
Tianhu Lei;
Wilfred Sewchand
Two techniques are presented for precise image delineation, based on the statistical properties of CT images. In the first, the image is directly sampled to obtain estimates of the statistical parameters of regions; filters are then designed (for a given confidence level) to extract the desired image regions. The second technique uses a clustering algorithm to obtain the parameter estimates of image classes and then uses a Bayesian segmentation procedure to obtain the delineated images. The two techniques verify each other. The results of applying these two techniques to the delineation of CT images are also described.
Edge Detection In Cytology Using Local Statistical Properties
Author(s):
Joseph Barba;
Paul Fenster;
Henrick Jeanty;
Joan Gil
Accurate edge detection is a fundamental problem in the areas of image processing and pattern recognition/classification. The lack of effective edge detection methods has slowed the application of image processing to many areas, in particular diagnostic cytology, and is a major factor in the lack of acceptance of image processing in service-oriented pathology. In this paper, we present a two-step edge detection procedure. Since most images are corrupted by noise and often contain artifacts, the first step is to clean up the image; our approach is to use a median filter to reduce noise and background artifacts. The second operation is to locate image pixels which are "information rich" by using local statistics. This step locates the regions of the image most likely to contain edges. The application of a threshold can then pinpoint those pixels forming the edges of structures of interest. The procedure has been tested on routine cytologic specimens.
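As an illustration of the two-step idea, the sketch below uses a median filter followed by a local standard-deviation map and a simple global threshold; the window sizes, the choice of local statistic, and the threshold rule are assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage

def edge_candidates(image, median_size=5, window=7, k=1.5):
    """Illustrative two-step edge-candidate detector: median filtering to
    suppress noise and artifacts, then a local standard deviation map
    (one possible local statistic) thresholded to flag 'information rich'
    pixels likely to lie on edges."""
    img = ndimage.median_filter(image.astype(float), size=median_size)
    mean = ndimage.uniform_filter(img, size=window)
    mean_sq = ndimage.uniform_filter(img ** 2, size=window)
    local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    threshold = local_std.mean() + k * local_std.std()   # simple global threshold
    return local_std > threshold                          # boolean edge-candidate map
```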
Detection Of Road Boundary
Author(s):
James P. Sowers;
Rajiv Mehrotra
Several researchers have proposed and implemented various systems pertaining to the development of Autonomous Land Vehicles (ALVs). One fundamental problem associated with the navigation of an ALV is the ability to efficiently extract the boundaries of the pathway that needs to be navigated. In this paper a method is presented that determines the road boundaries in one pass using a limited search area in the input image. The method employs statistical information regarding the gray levels present in the images along with geometrical constraints concerning the road. Some examples are given to demonstrate the efficacy of the method.
3D Arterial Trace Reconstruction From Biplane Multi-Valued Projections
Author(s):
Joseph Barba;
Paul Fenster;
Manuel Suardiaz
An automatic algorithm for reconstructing arterial center lines in three-dimensional (3D) space from two orthogonal angiographic views is presented. As a result of representing projected center lines by cubic spline polynomials, corresponding points in both views are automatically determined. A previous paper [1] showed automatic positional reconstruction to be possible when the projected center line can be expressed as a single-valued function. This algorithm generalizes the method to include cases where the center lines are described by multi-valued functions. Three-dimensional curves representing arterial center lines were sampled and projected onto two orthogonal planes to simulate the projected vessel center line in each view. Gaussian noise of different magnitudes was added to the projected coordinates in both views to simulate vessel center line estimation errors. Stenosed segments were simulated by deleting sections of the projected center lines. Positional reconstruction accuracy for various mean centering errors (MCE) and stenosis lengths is presented.
Function Minimisation With Partially Correct Data Via Simulated Annealing
Author(s):
Jean J. Lorre
The simulated annealing technique has been applied successfully to the problem of estimating the coefficients of a function in cases where only a portion of the data being fitted to the function is truly representative of the function, the rest being erroneous. Two examples are given, one in photometric function fitting and the other in pattern recognition. A schematic of the algorithm is provided.
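A minimal sketch of the idea, assuming a simple inlier-counting cost and a geometric cooling schedule (both illustrative choices, not the paper's cost function or schedule), is shown below.

```python
import numpy as np

def anneal_fit(x, y, model, n_params, inlier_tol=0.1, n_iter=20000,
               t0=1.0, cooling=0.9995):
    """Illustrative simulated annealing for estimating model coefficients
    when only part of the data follows the model: the cost counts how many
    points the candidate coefficients fail to explain within a tolerance,
    so erroneous points contribute only a constant penalty."""
    rng = np.random.default_rng(0)

    def cost(p):
        return np.count_nonzero(np.abs(y - model(x, p)) > inlier_tol)

    current = rng.normal(size=n_params)
    cur_cost = cost(current)
    best, best_cost, temp = current.copy(), cur_cost, t0
    for _ in range(n_iter):
        cand = current + rng.normal(scale=temp, size=n_params)   # random perturbation
        c = cost(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if c < cur_cost or rng.random() < np.exp((cur_cost - c) / max(temp, 1e-9)):
            current, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand.copy(), c
        temp *= cooling
    return best, best_cost

# example: fit a straight line when one fifth of the samples are corrupted
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + 0.5
y[::5] += 5.0                                   # erroneous points off the line
coef, misses = anneal_fit(x, y, lambda x, p: p[0] * x + p[1], n_params=2)
```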
An HVS-Based Image Quality Measure
Author(s):
John A. Saghri;
Patrick S. Cheatham;
Ali Habibi
A preliminary image quality measure which attempts to take into account the sensitivities of the human visual system (HVS) is described. The main sensitivities considered are the background illumination-level and spatial frequency sensitivities. Given a digitized image, the algorithm produces, among several other figures of merit, a plot of the information content (IC) versus the resolution. The IC for a given resolution is defined here as the sum of the weighted spectral components at that resolution. The HVS normalization is done by first intensity-remapping the image with a monotone increasing function representing the background illumination-level sensitivity, followed by spectral filtering with an HVS-derived weighting function representing the spatial frequency sensitivity. The developed quality measure is conveniently parameterized and interactive. It allows experimentation with numerous parameters of the HVS model to determine the optimum set for which the highest correlation with subjective evaluations can be achieved.