Proceedings Volume 1771

Applications of Digital Image Processing XV


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 12 January 1993
Contents: 9 Sessions, 68 Papers, 0 Presentations
Conference: San Diego '92 1992
Volume Number: 1771

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Image Representations and Models I
  • Systems and Implementations
  • Image Representations and Models II
  • Image Understanding Issues
  • Algorithms
  • Nonlinear Technology for Signal Processing, Communication, and Control
  • Image Coding and Transmission
  • Innovative Applications
  • Contributions of General Interest
Image Representations and Models I
Subpixel resolution for target tracking
John T. Reagan, Theagenis J. Abatzoglou, John A. Saghri, et al.
Optical sensing mechanisms are designed to provide adequate resolution for the images of the intended objects. Often, however, the image of an object is so small that it falls below the resolution of the sensing device, and some method must be used to attain finer resolution. In these cases, a model-based approach, in which a parametric object model is assumed, can attain the desired sub-pixel resolution. In the model-based approach, the object model is convolved with the optical system response and then matched against the limited number of available sensor samples. The unknown parameters of the object model are then determined by an appropriate estimation technique. This study focuses on estimating the two-dimensional location parameters (i.e., the (x, y) location) of a single point source from a limited number of sensor readings. We present a comparative study of three estimation techniques: maximum likelihood, centroiding, and conditional mean. The sub-pixel resolution capability of these techniques is studied as a function of signal-to-noise ratio (SNR). The Cramer-Rao theoretical lower bound for unbiased estimators is derived for this problem, and it is shown that the maximum likelihood solution attains the Cramer-Rao bound for the SNRs considered. The merits and deficiencies of the three estimation techniques and their applicability to the problem of multiple point sources are also addressed.
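To make the comparison concrete, here is a minimal Python/NumPy sketch of the simplest of the three estimators, centroiding, applied to a small patch of sensor samples; the Gaussian PSF, its width, and the crude background handling are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def centroid_estimate(patch):
    """First-moment (centroid) estimate of a point source's (x, y) sub-pixel
    location from a small patch of sensor samples. The paper's ML estimator
    would instead fit a parametric PSF model to the same samples."""
    patch = np.asarray(patch, dtype=float)
    patch = patch - patch.min()                  # crude background removal (assumption)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    return (xs * patch).sum() / total, (ys * patch).sum() / total

# toy usage: a Gaussian PSF centred at (x, y) = (2.3, 1.7) on a 5x5 grid, plus noise
ys, xs = np.mgrid[0:5, 0:5]
psf = np.exp(-((xs - 2.3) ** 2 + (ys - 1.7) ** 2) / (2 * 0.8 ** 2))
print(centroid_estimate(psf + 0.01 * np.random.randn(5, 5)))
```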
Automatic visual inspection system for thin film magnetic head wafer using optical enhancement and image processing techniques
Yukio Matsuyama, Hisafumi Iwata, Hitoshi Kubota
An automatic visual inspection system has been developed for Thin Film magnetic Head (TFH) wafers. Although there are only a few classes of defects to be detected, the difficulty of defect detection varies drastically depending on the location of the defect. When the optical characteristics of a defect and the underlying element pattern are similar, the defect becomes difficult to detect. To detect all defects reliably, we developed the following new techniques. (1) Optical enhancement: the wafer is illuminated by a slit-shaped light source from an oblique direction, and the scattered light is used for detecting flakes in the transparent protection layer. Reflected light from the surface is also used for detecting surface defects. Defects are easily extracted by thresholding the detected image. (2) Image processing: an element-to-element comparison method is employed to detect defects that cannot be enhanced optically. Many bright spots within the ceramic substrate cause discrepancies in the comparison; local minimum processing is used to eliminate these and stabilize defect detection. The system has been evaluated on an actual production line, and the defect detection rate achieved is approximately 13% higher than human performance.
Nonlinear techniques for the enhancement of images reconstructed from projections
Miguel Roser, G. Cisneros
A useful nonlinear technique is employed in this paper to enhance the quality of reconstructed images. These images are characterized by a loss of contrast and by a special effect that is inherent to the reconstruction process, which has been called the focus effect. The filter introduced by De Vries was chosen and modified in order to solve these problems, with excellent results. The results presented in this paper could find wide application in medical imaging and in non-destructive testing.
Novel digital unsharp masking algorithm for color images
Tie-Jun Shan
A Digital Unsharp Masking algorithm is proposed for enhancing color images in color reproduction systems. The proposed Digital Unsharp Masking filter uses a variable enhancement gain to adapt to edge and tone features of the image. By including edge and white/black color detection mechanisms, the new Digital Unsharp Masking algorithm is able to perform detail enhancement without increasing graininess. The tone detection enables the proposed Digital Unsharp Masking to control the sharpness of different tone regions of the scanned image, i.e., highlight, midtone, and shadow, separately. A luminance channel is used as the unsharp channel in the algorithm. The color shift problem faced by existing digital unsharp masking algorithms is eliminated.
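A minimal sketch of luminance-only unsharp masking is given below, assuming a simple box-blur unsharp channel and a gain that merely grows with local edge strength; the paper's actual edge/tone detection and gain schedule are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_luminance(rgb, base_gain=1.0, radius=2):
    """Sharpen a luminance channel only and rescale the RGB channels with it,
    so hue is preserved (the usual way to avoid colour shift). The variable
    gain used here is illustrative, not the authors' design."""
    rgb = rgb.astype(float)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    blur = uniform_filter(y, size=2 * radius + 1)     # unsharp (low-pass) channel
    detail = y - blur
    gain = base_gain * np.abs(detail) / (np.abs(detail).max() + 1e-6)
    y_sharp = y + gain * detail
    scale = y_sharp / np.maximum(y, 1e-6)
    return np.clip(rgb * scale[..., None], 0, 255)
```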
Intelligent-based optical Chinese character recognition system by using integrated character segmentation and text compression techniques
Dung-Ming Shieh, Ming-Wen Chang, Bing-Shan Chien, et al.
This paper presents a method for mixed character segmentation. Conventionally, a region-based approach has been proposed for mixed character segmentation, but it may fail because the block size of some English letters is similar to that of Chinese characters, and because some Chinese characters composed of two or three parts will be recognized as two or three English letters. In this paper, we propose an intelligent method to overcome this difficulty: an integrated region-based and recognition-based approach. Furthermore, we take the internal code of each character as the source symbol for coding, so that characters which appear repeatedly in the text have their redundancy removed by compressing the text with the Lempel-Ziv coding algorithm.
Segmentation of aneurysms via connectivity from MRA brain data
Jasjit Singh Suri, Ralph Bernstein, Chirinjeev Kathuria
This paper describes techniques to visualize aneurysms in three dimensions from Magnetic Resonance Angiographic data sets to aid surgeons and radiologists in surgical planning and treatment of cerebrovascular (brain) aneurysms. Maximum Intensity Projection using ray tracing is implemented to highlight the aneurysm zones in 2-D. Segmentation Via Voxel Connectivity (SVVC) supports the recognition of the blood vessels and aneurysms and presents them in 3-D.
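The two operations named in the abstract can be illustrated with a few lines of Python/NumPy. The sketch below shows an axis-aligned Maximum Intensity Projection and a toy 6-connected region growing; the seed point, threshold, and connectivity rule are assumptions for illustration, and the paper's ray-traced MIP at arbitrary view angles is not reproduced.

```python
import numpy as np
from collections import deque

def mip(volume, axis=0):
    """Maximum Intensity Projection along one axis: bright vessel/aneurysm
    voxels dominate the resulting 2-D image."""
    return np.asarray(volume).max(axis=axis)

def grow_connected(volume, seed, threshold):
    """Toy segmentation via voxel connectivity: collect all voxels above
    `threshold` reachable from `seed` through 6-connected neighbours."""
    vol = np.asarray(volume)
    visited = np.zeros(vol.shape, dtype=bool)
    visited[seed] = True
    queue, region = deque([seed]), []
    while queue:
        z, y, x = queue.popleft()
        region.append((z, y, x))
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not visited[n] and vol[n] >= threshold:
                visited[n] = True
                queue.append(n)
    return region
```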
Realization of fully automated keyword extraction in image database systems
Masao Sakauchi, Jun Yamane
Flexible image database retrieval systems in which image keywords can be captured automatically are strongly required in order to manage a practical number of images successfully, but current image recognition and understanding technology is generally not capable enough to meet this requirement. To overcome this problem, we propose a new image database framework. In the proposed system, image keywords are extracted in a fully automated fashion by a devised image recognition system. The image keywords used here are the collection of recognized objects in the image, where recognition is allowed to be intermediate or imperfect. In other words, each keyword has a different level of abstraction: some are high and satisfactory, others low and unsatisfactory. We introduce the concept of a "recognition thesaurus" for managing these varied-level keywords. The recognition thesaurus inherits the topology of the image recognition system and consists of the relations between keywords of different levels. When a user wishes to retrieve images, he addresses a conceptual request to the system; the system interprets the request, expands the keywords, and then seeks images that satisfy the expanded keywords by using the recognition thesaurus. As an embodiment of this concept, we implemented an image database of various types of sports scenes. Retrieval evaluations reveal the effectiveness of the proposed method.
Three-dimensional object modeling and recognition using absorption in a color liquid
Hao Shi, Fazel Naghdy, Christopher D. Cook
The application of computer vision in industry has been increasing as greater use is made of flexible automation and robotics. Quality control and sorting can also be heavily dependent on artificial vision interfaced to an intelligent decision-making system. Traditionally, industrial tasks requiring computer vision are simplified to a 2-D problem in a plane. This permits the use of a single camera and hence reduces the complexity of frame grabbing, image processing and decision making. Such a solution is, however, not suitable when 3-D information is vital to the control or decision-making processes; generation and processing of 3-D images are required for such applications. The work presented in this paper provides a simple method of deriving a 3-D computer model for a special class of industrial objects and then using this model for machine recognition. The object is immersed in a colour liquid and the intensity of the pixels of the captured image is modulated by the depth of the object along the camera axis. The depth maps generated from the image are represented by parallel layers located in planes normal to the camera axis. The 2-D features of the layers are derived and a 3-D model is constructed for the object based on these features. The object is distinguished by contour groups which are classified into three types according to their features. These 3-D features include object features and contour group features. Three steps are adopted for object recognition. The object features are first used in a basic test in order to reduce the number of possible models which an unknown object can match. Secondly, the contour features are used to test each of the contour group models. The models with a higher match rate are then selected for verification using chi-squared (χ²) statistical methods. Finally, the χ² test is employed to verify the above test results. The object match is governed by both the χ² test and the contour group test. From these tests, a model which best matches the object can be obtained.
Systems and Implementations
Thin linear network extraction NEXSYS: a knowledge-based system for SPOT images
Jean-Marc Gilliot, Jean-Louis Amat, Georges Stamon
The purpose of this paper is to describe a knowledge-based system for the detection and interpretation of thin linear networks in remote sensing images. The proposed system, NEXSYS (Network Extraction System), is based on a mixed object-oriented/rule-based approach. It is built on a frame-based representation and contains three levels of processing: (a) a low level for segmentation, using mathematical morphology; (b) an intermediate level for line tracking guided by a Hough transform; and (c) a high level for interpretation. A prototype implementation has been written in LISP and C, using the Knowledge Engineering Environment (KEE), on a Sun SPARC IPC 4/40 workstation.
Video enhancement workbench: an operational real-time video image processing system
Stephen R. Yool, David L. Van Vactor, Kirk G. Smedley
Video image sequences can be exploited in real time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of directly adjacent objects. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
Endoscopic inspection and measurement
John A. Gilbert, Donald R. Matthys, Christelle M. Lindner
The design of a panoramic viewing system and its application to cavity inspection and measurement are discussed, along with the general properties of a special panoramic annular lens (PAL). Various examples are described, showing how the PAL can be used for simple inspection or for precision contouring of interior walls of cavities using techniques such as moire, holointerferometry, and Electronic Speckle Pattern Interferometry (ESPI).
Real-time multiuser image processing system for research and development
Ingmar A. Andersson
This is a description of the image processing laboratory at the author's department at SAAB Missiles. At present, four persons can work simultaneously with advanced image processing, and the system is easy to expand. Since the system consists of 21 image processing boards, many functions can be realised in real time. It is the combination of extremely fast image processing and the multi-user capability that makes this system unique.
IDIPP: a user-friendly easily expansible image processing package
Rui Ribeiro, A. Marinheiro, Antonio Sousa Pereira, et al.
IDIPP (Interactive Digital Image Processing Package) is a general-purpose image processing package built upon two requirements: user friendliness and easy integration of new processing modules. To satisfy these requirements, a special graphical user interface (GUI), made of elements easily manipulated through a set of high-level tools, was designed. The availability of these tools allows the addition of new processing modules with very little effort. Around this interface, several image processing modules have been developed. This paper describes the user interface structure and the developed image processing modules.
Regularized multiframe restoration of Hubble space and ground-based telescope images with parallel implementation
Rudolf Klaus, Hans Burkhardt
The spherical aberration of the primary mirror of the Hubble Space Telescope (HST) has seriously reduced the sensitivity and degraded the resolution of images from its instruments. After the first period of observation, most astronomical researchers do not try to make photometric measurements on original or partially restored images, but instead see the opportunity to use the better resolution of the HST to upgrade their ground-based images. From that point of view, the need for multiframe restoration methods appeared. This paper first presents the problem of multiframe image restoration by examining several well-known single-frame methods and their generalization. The problem of weak radiation in astronomical images is modelled by Poisson processes, as is well known from positron emission tomography. The unconstrained maximum likelihood method for this class of problems leads to an equation that can be solved by an expectation-maximization type iterative algorithm. A regularizing subband method is proposed that incorporates the multiframe problem in a single-frame scheme through a subband-type solution. The filtering for the subbands incorporates the freedom inherent in the model. Taking the simplest case of ideal low- and high-pass filters, there is still the choice of the cut-off frequency. The low-frequency band is taken from the ground-based telescope with its good photometric properties; the high-frequency band is taken from the HST with its good resolution. A transputer-based parallel implementation with an underlying message passing system is shown as a time-efficient solution of the algorithms.
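For reference, the expectation-maximization iteration for the Poisson maximum-likelihood problem mentioned in the abstract is the familiar Richardson-Lucy update, sketched below for a single frame in Python/NumPy; the paper's regularizing subband combination of HST and ground-based frames is not reproduced, and the iteration count is an arbitrary choice.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iters=30):
    """EM (Richardson-Lucy) iteration for Poisson-noise maximum-likelihood
    deconvolution of a single frame with a known PSF."""
    est = np.full(observed.shape, observed.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        blurred = fftconvolve(est, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)
        est *= fftconvolve(ratio, psf_flip, mode='same')
    return est
```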
Highly functional and versatile camera platform for a multipurpose robotic vehicle
Dai Hyun Kim, Andrew A. Kostrzewski, Jenkin C. Chen, et al.
A highly functional and versatile camera platform for a multi-purpose robotic vehicle utilizing a novel modular approach is presented. The platform uses only three CCD cameras, four computer-controlled rotation stages and three modular optical imaging systems to provide both a stereo vision mode at any desired viewing angle and a panoramic vision mode with a 180° nonoverlapping field of view. The panoramic view from the imaging system of each module is collected by one of three mirrors and transmitted collectively towards a corresponding CCD camera. The stereo vision mode is accessed by aligning any two of the three modules in parallel using their respective rotation stages. When the entire assembly is rotated by another rotation stage, any desired viewing angle is obtained.
Real-time 2D and 3D imaging system based on parallel processor array
Anne Demoyen, David Gray
Digital Image Processing, with its ever-growing application base, is placing ever higher demands on processing power, high transmission bandwidth and large, online storage capacity. The recent availability of algorithm-specific, chip-level devices certainly increases raw throughput, but does not change the eternal compromise of high-performance, application-specific systems with lower-performance, open, versatile solutions. If a wide range of applications is to be dealt with by a single machine then it must be fully programmable. As even the fastest mono-processor today still falls well short of the required performance, the only viable solution seems to be in massive parallelization of easy-to-program processors. This paper presents a fully-programmable, complete image processing machine designed to handle 2-D and 3-D applications. Two important aspects of this machine will be covered: its internal data transfer & processor architecture, and its open, nonspecific modality interface.
Design and development of a 4D scanner
John N. Carter, Tim P. Monks, C. N. Paulson, et al.
This paper describes the current development of a novel 4D scanner, which can fully sample three-dimensional motion at 12.5 Hz. The scanner, a variant on structured light, produces a series of position lines every 40 ms, from each video frame. The reconstruction of the range information uses a colour-coded pattern, which is analyzed by a flexible and robust pattern matching algorithm.
Image Representations and Models II
Three-dimensional object recognition using a decision hierarchy
Anna Helena R.C Rillo
In this paper we suggest a computational model for the 3D interpretation of a 2D view, based on the selection and organization of useful and discriminatory features for a large number of objects designed on a CAD system. These features are used to construct a strategy hierarchy for recognition, which represents the associations between features detected from multi-object scenes and the database of object models, enabling the on-line recognition to be particularly efficient. The proposed model has been implemented in a model-based computer vision system that can recognize three-dimensional objects from unknown viewpoints in single gray-scale images. The system is based on an off-line model preprocessing stage and an on-line recognition stage. In the off-line processing, 3D recognition-oriented models, which are used in the verification process, and the strategy hierarchy, which is used in the matching process, are automatically determined. During the on-line recognition, three steps are performed: feature extraction, matching and verification. The feature extraction process first applies traditional low-level image processing to detect feature primitives and then produces a description in terms of contour features and the relationships between them, based on the principles of Perceptual Organization, such as collinearity, parallelism, connectivity, and repetitive patterns among image elements. Matches are then made on this intermediate representation, the feature groupings. This process generates initial hypotheses for objects and viewpoints by accessing the strategy hierarchy constructed in the off-line stage, integrating both top-down and bottom-up approaches. Finally, the verification phase solves the spatial correspondence, verifying whether the match leads to a legal interpretation of the image.
Human articulation simulation
Alain Perez
A dynamic model of the wrist articulation is presented in this article. Bones are reconstructed into three-dimensional static models from CT or MRI images. As many computer vision models are possible, the demonstration focuses on the consequences for the dynamic model, and for the contact detection procedure in particular; an adapted static model is then deduced. Two dynamic models are then presented, one based on mechanics, the other on geometric concepts.
Robust shape reconstruction from combined shading and stereo information
Kyoung Mu Lee, C.-C. Jay Kuo
In this research, we first show that single-image shape from shading (SFS) algorithms share an inherent limitation in the accuracy of reconstructed surfaces due to the properties of the reflectance map. That is, surface orientations can be accurately recovered if they lie along the gradient direction of the reflectance map function, but cannot be easily recovered if they lie along the tangential direction. We then consider two methods which incorporate stereo information with shading to improve the performance. One is to use multiple images taken under different lighting conditions, known as photometric stereo, and the other is to incorporate height information obtained from images taken from different viewing angles, known as geometric stereo. With photometric stereo, we compensate for the weakness of each reflectance map by combining several reflectance maps in a proper way in the gradient space and hence improve the accuracy of the results. With geometric stereo, absolute heights at sparse feature points are obtained and used as constraints on the resulting surface so that the ambiguity can be resolved. Simulation results for several test images are given to show the performance of our new robust algorithms.
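As background for the photometric-stereo part of the discussion, the following Python/NumPy sketch shows the classical Lambertian formulation: with several images under known light directions, the per-pixel normal (scaled by albedo) follows from a least-squares solve, which is what removes the single-image ambiguity. Lambertian reflectance and known, distant light sources are assumptions; the paper's gradient-space combination of reflectance maps is not reproduced.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical Lambertian photometric stereo: solve I = L g per pixel, where
    g is the albedo-scaled surface normal and L stacks the known light
    directions (k x 3, k >= 3)."""
    I = np.stack([im.reshape(-1) for im in images], axis=0)   # k x P intensities
    L = np.asarray(lights, dtype=float)                        # k x 3 light matrix
    G, *_ = np.linalg.lstsq(L, I, rcond=None)                  # 3 x P, albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    h, w = images[0].shape
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```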
Using the technique of color image processing to enhance the cytomorphological deformation
Sing T. Bow, Jian Zhang, Xia-fang Wang
A great deal of research has been carried out by biologists, pathologists, and biomedical physicists on the metamorphosis of various types of cells with symptoms of cancerous disease [1-8]. The color and texture of the cell and their interrelationships are important features for analyzing cells and have proved successful in differentiating abnormal cells from normal ones. However, to categorize an abnormal (or suspicious) cell as cancerous or non-cancerous, more information than can be obtained from observation of the microscopic images of the smear is needed, and therefore a further step, such as a biopsy, has to be taken. In this paper, a color image processing technique is introduced as a means to enhance the visualization and diagnostic capability of a human expert, in the hope that it will prove an effective tool for distinguishing non-cancerous cells from cancerous ones even when they look alike under the microscope. A real microscopic image is first resolved into several spectral-component images. Useful features are extracted from the 0.4 to 0.5 μm, 0.5 to 0.6 μm, and 0.6 to 0.7 μm spectral bands, respectively. Combining these different spectral-band images after separate processing yields a color image which sharpens the distinguishing features between cancerous and non-cancerous cells, even when they originally look alike both morphologically and chromatically under the microscope. Some recent promising results obtained with real color image processing in our laboratory are given. To improve the resolution, 512 x 512 pixel images, rather than 256 x 256, are employed for processing.
Systematic method of design and realization of applications in vision
Patrick J. Bonnin, Bertrand Zavidovique
Image processing is progressively becoming a science, but it remains closer to technique than to theory. Hence one needs to define a common frame in which to settle general enough problems and to build systems for various purposes in vision: a systematic method for tackling applications in computer vision. The tentative method presented here stems from three main principles: taking the operational framework into account, to derive constraints; introducing specific knowledge related to the application as early as possible; and extracting local and global image properties at both the segmentation and matching steps. As will be shown in the paper, all three principles require explicit expertise on classical image processing techniques, their application bounds and limits. They lead to less classical image procedures, to be specially developed, as in the present case: cooperative segmentation, and the use of planarity constraints. Two applications have been selected; they differ in both their operational framework and the image processing problems they pose: target tracking in IR imagery, and 3D scene reconstruction of classical mobile robot environments (indoor or outdoor urban scenes). Both systems have actually been designed and built. It is impossible to prove the generality of a method on the basis of only two applications, but these have been selected to be generic enough, and sufficiently different from each other, that our systematic method of designing applicative systems is likely to be used with success in many other image processing applications. After explaining the outline of the systematic method through the three basic principles, each principle is illustrated by examples drawn from both above-mentioned applications.
Quadtree-structured multiple resolution segmentation of some computed images
Tianhu Lei, Zuo Zhao
Image statistics of some computed images and statistical history of the corresponding quadtree are analyzed. A new multiple resolution segmentation (MRS) approach using quadtree for these computed images is presented. The results obtained by using this approach demonstrate the correctness of the derived statistical properties and efficacy of this MRS scheme.
Algorithms to separate text from a mixed text/graphic document and generate a succinct description for this complex graphic
Sing T. Bow, Jianjun Sa
The objective of this paper is to describe an approach to separate text from a mixed text/graphic document and to describe the graphic as overlapping meaningful shapes. Accuracy in the reconstruction of the mixed text/graphic document from the description file is also reported. This paper is a continuation of our previous work, which dealt mainly with engineering drawings with polygonal shapes; here we focus on documents consisting of curved shape components together with text. Algorithms are designed to automate the generation of loops with minimum redundancy from the bit map of the image, and to break interwoven complex loops into simpler, interpretable shapes of curved segments. Finally, a succinct description file can be established for the whole image, thus achieving drastic savings in memory when archiving the document images. The effectiveness of the algorithms has been evaluated through experiments on a large number of mixed text/graphic documents. Results show that the algorithms developed are computationally efficient. Once the text is separated from the graphic and the graphic image is decomposed into its meaningful component parts, the data reduction achieved through this succinct description is extremely high. Even for silhouettes of curved shape, an approach called concatenated-arc representation is developed for their description; with this approach, far fewer arc segments are needed than with line-segment approximation. Shapes reconstructed from these description files match the original ones closely, even for very complex graphics.
Image Understanding Issues
Projectively invariant structures in multisensor imagery
Eamon B. Barrett, Paul Max Payton
We describe in this paper several geometry problems in photogrammetry and machine vision; the geometric methods of projective invariants which we apply to these problems; and some new results and current areas of investigation involving geometric invariants for object structures and non-pinhole-camera imaging systems.
Task-focused modeling in automated agriculture
Mark Richard Vriesenga, K. Peleg, Jack Sklansky
Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.
Automatic 3D target model generation
Louis A. Oddo
Model-based target recognition is an active area of research. However, little attention has been given to the problem of target model generation for a model-based Automatic Target Recognition (ATR) system. This paper describes novel algorithms which automatically generate a 3-D object-oriented spatial database that is used to represent and manipulate 3-D target models.
Improved moment invariants for shape discrimination
Chaur-Chin Chen, Tung-I Tsai
Moment invariants have been widely used as features for shape recognition. Conventionally, the moments are computed using all the information of the shape boundary together with the interior region. This paper presents theoretically improved moments computed from the shape boundary only. Invariants derived from the improved moments that are invariant to translation, rotation, and scaling are presented. The computation of the improved moment invariants from a chain-code representation of a shape boundary can be done in real time. Experiments in discriminating country maps, industrial tools, and printed numerals using the improved moment invariants as features, examined via graphical plots, suggest that the improved moment invariants are good shape features close to human visual processing.
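For readers unfamiliar with moment invariants, the sketch below computes the first three classical (region-based) Hu invariants of a binary shape in Python/NumPy; it is meant only to make the notion concrete, since the paper's improved invariants are computed from the boundary chain code rather than from the filled region.

```python
import numpy as np

def hu_invariants(binary):
    """First three Hu moment invariants of a binary shape (invariant to
    translation, rotation, and scale). Region-based version for illustration."""
    ys, xs = np.nonzero(binary)
    m00 = float(len(xs))
    xc, yc = xs.mean(), ys.mean()
    mu = lambda p, q: (((xs - xc) ** p) * ((ys - yc) ** q)).sum()
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2)   # scale-normalized
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    return phi1, phi2, phi3
```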
Registration of noisy SAR imagery using morphological feature extractor and 2D cepstrum
Alok R. Kher, Sunanda Mitra
Registration of synthetic aperture radar (SAR) images is a non-trivial task because of the significant speckle noise associated with them. We have performed the registration using a 2-D cepstrum technique which has been verified to be more noise-tolerant and computationally more efficient than conventional correlation methods. The cepstral peaks revealed linear translations between SAR image pairs accurately. Further work is in progress to isolate the registration peaks from spurious peaks in a more reliable way than the present heuristic approach. Removal of speckle noise from the SAR images is also addressed. Spatial averaging is a standard technique used on SAR images to reduce speckle; however, it causes a loss of resolution. We have employed mathematical morphology techniques to remove more speckle than spatial averaging can, with little loss of resolution. Long, one-dimensional structuring elements in different orientations are used to filter speckle while maintaining the sharpness of region boundaries. Afterward, a small, two-dimensional structuring element is used to remove thin line elements. The targets appearing as small bright spots are separated from the original images by a thresholding operation and superimposed on the filtered images. The computational time required on a sequential machine is comparable to that for spatial averaging. In addition, like other morphological filters, this technique could be implemented on a real-time parallel architecture. The improvement in resolution and noise reduction over spatial averaging is demonstrated for images acquired at different wavelengths.
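A minimal sketch of speckle filtering with oriented one-dimensional structuring elements is shown below (Python/SciPy); the element length, the four orientations, and the use of a plain grey-scale opening with a maximum over orientations are illustrative assumptions rather than the paper's exact filter design.

```python
import numpy as np
from scipy.ndimage import grey_opening

def line_footprint(length, angle_deg):
    """Binary footprint approximating a 1-D line of `length` pixels at the
    given orientation."""
    fp = np.zeros((length, length), dtype=bool)
    c = length // 2
    t = np.deg2rad(angle_deg)
    for r in range(-c, c + 1):
        fp[int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))] = True
    return fp

def despeckle(img, length=9, angles=(0, 45, 90, 135)):
    """Grey-scale opening with long line elements in several orientations,
    keeping the maximum response so edges aligned with at least one element
    survive while isolated speckle is suppressed."""
    return np.max([grey_opening(img, footprint=line_footprint(length, a))
                   for a in angles], axis=0)
```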
Fractal model for digital image texture analysis
Michael G. Petrolekas, Sunanda Mitra
The present paper uses a fractal model for differentiating and quantifying image texture. The employment of the fractal model to texture classification involves evaluation of the fractal dimension of the images concerned. A parametric representation of the image texture in terms of fractal dimension is achieved by extending fractional Brownian motion to the discrete case and using a maximum likelihood estimator (MLE) for estimation of the fractal parameter H. The algorithm developed for this model is applied successfully to texture classification of synthetic polymeric membranes. Such texture classification provides us with a quantitative descriptor of polymeric membrane morphology for establishing a correlation between the morphology and the chemical transport phenomena in generating membranes for various industrial applications.
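The link between the fractional-Brownian-motion parameter H and fractal dimension can be illustrated with a simple estimator; the sketch below uses a log-log regression of increment variance on a 1-D intensity profile, which is a simpler stand-in for the maximum-likelihood estimator the paper employs, and the maximum lag is an arbitrary choice.

```python
import numpy as np

def hurst_estimate(profile, max_lag=16):
    """Estimate the fBm parameter H from the scaling E[(x(t+d) - x(t))^2] ~ d^(2H);
    the fractal dimension of the 1-D profile is then D = 2 - H."""
    profile = np.asarray(profile, dtype=float)
    lags = np.arange(1, max_lag + 1)
    var = [np.mean((profile[d:] - profile[:-d]) ** 2) for d in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(var), 1)
    H = slope / 2.0
    return H, 2.0 - H
```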
Algorithms
Quantifying the super-resolution capabilities of the CLEAN image processing algorithm
Bobby R. Hunt
The problem of image restoration has an extensive literature and can be expressed as the solution of an integral equation of the first kind. Conventional linear restoration methods reconstruct spatial frequencies below the diffraction-limited cutoff of the optical aperture. Nonlinear methods, such as maximum entropy, have the potential to reconstruct frequencies above the diffraction limit. Reconstruction of information above the diffraction limit we refer to as super-resolution. Specific algorithms developed for super-resolution are the iterative algorithms of Gerchberg [5] and Papoulis [6], the maximum likelihood method of Holmes [7], and the Poisson maximum-a-posteriori algorithm of Hunt [8]. The experimental results published with these algorithms show the potential of super-resolution, but are not as satisfactory as an analytical treatment. In the following we present a model to quantify the capability of super-resolution, and discuss the model in the context of the well-known CLEAN algorithm.
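For orientation, the textbook form of the CLEAN loop the paper analyzes is sketched below in Python/NumPy: the brightest residual pixel is repeatedly located, a gain-scaled shifted copy of the PSF is subtracted there, and the removed flux is accumulated as a point-source component. The loop gain, iteration limit, stopping threshold, and centred-PSF assumption are illustrative choices.

```python
import numpy as np

def clean(dirty, psf, gain=0.1, n_iter=500, threshold=0.0):
    """Basic CLEAN deconvolution loop (no final restoring-beam convolution)."""
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2        # assume PSF peak is centred
    for _ in range(n_iter):
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[y, x]
        if peak <= threshold:
            break
        components[y, x] += gain * peak
        # subtract the shifted, scaled PSF, clipped at the image borders
        y0, y1 = max(0, y - cy), min(residual.shape[0], y - cy + psf.shape[0])
        x0, x1 = max(0, x - cx), min(residual.shape[1], x - cx + psf.shape[1])
        py0, px0 = y0 - (y - cy), x0 - (x - cx)
        residual[y0:y1, x0:x1] -= gain * peak * psf[py0:py0 + (y1 - y0),
                                                    px0:px0 + (x1 - x0)]
    return components, residual
```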
Model adaptive optimal image restoration
Brian D. Jeffs, Wai Ho Pun
This work addresses the problem of restoring blurred and noise-corrupted images when typical deterministic methods (least squares, maximum entropy, etc.) are not known to be optimal. The proposed approach is to adapt the optimization criterion used in the restoration, based on the observed image data only, to the one best suited to the statistical properties of the observed image. This is done without prior knowledge of or restrictive assumptions about the data. Maximum likelihood (ML) image restoration is considered where the noise distribution is not known a priori, but is modeled by a general family of parametric distributions whose widely varying shapes are controlled by a small set of parameters. It is shown that the generalized p-Gaussian (gpG) distribution family can match a surprisingly wide range of typical noise distributions (uniform, Gaussian, exponential, Cauchy, etc.) by varying a single shape parameter p. Restoration is accomplished by adapting the noise model through adjusting p as part of the estimation problem. Once p is found, the ML estimate is simply the associated lp-norm minimization solution. The optimization criterion is thus adapted to suit the observation. Examples of improved reconstruction using this method, as compared with least squares and maximum entropy, are presented. The extension of model-adaptive restoration to maximum a posteriori (MAP) estimation is discussed. The potential applicability of another, more general parametric distribution, the generalized beta of the second kind (GB2), is also discussed.
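Once the shape parameter p is chosen, the restoration reduces to an lp-norm fit; a standard way to compute such a fit is iteratively reweighted least squares, sketched below for a small dense system in Python/NumPy. The matrix A stands in for the blur operator purely for illustration, and the damping constant and iteration count are assumptions.

```python
import numpy as np

def irls_lp(A, b, p=1.5, iters=50, eps=1e-8):
    """Iteratively reweighted least squares for min_x ||A x - b||_p^p."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # l2 solution as starting point
    for _ in range(iters):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2)              # IRLS weights
        Aw = A * w[:, None]                            # row-weighted A, i.e. W A
        x = np.linalg.solve(A.T @ Aw + eps * np.eye(A.shape[1]), Aw.T @ b)
    return x
```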
New fast implementation of cellular array for morphological filters, stack filters, and median filters
Long-Wen Chang, Wen-Shen Fong, Shang-Shung Yu
Cellular arrays are very important and well suited to image processing: local neighborhood operations can be implemented in the cellular array to increase their speed. Recently, morphological filters and stack filters have received much attention, and they are very appropriate for implementation in a cellular array. In this paper, we use a powerful method called threshold decomposition to implement these filters in the cellular array. Threshold decomposition transforms gray-level filtering into binary filtering; the filters include median, order-statistic (OS) and morphological filters. Filtering of an M-level image by threshold decomposition requires three stages: (1) thresholding, (2) binary filtering, (3) reconstruction. To increase the performance of the processing element in the array, a fast reconstructor with (2M - 2 - log2 M) half adders was developed.
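The threshold-decomposition idea can be demonstrated in a few lines: threshold the signal at every grey level, apply a binary median (a stack filter stage) to each slice, and sum the slices back up. The sketch below does this for a 1-D signal in Python/NumPy; it is functionally identical to a direct median filter and is shown only to illustrate why each stage maps onto simple binary (cellular) hardware. The window size and toy signal are arbitrary.

```python
import numpy as np

def median_by_threshold_decomposition(signal, window=3):
    """Median filtering of an M-level signal via threshold decomposition."""
    x = np.asarray(signal, dtype=int)
    half = window // 2
    out = np.zeros_like(x)
    for t in range(1, x.max() + 1):                       # one binary slice per level
        b = np.pad((x >= t).astype(int), half, mode='edge')
        windows = np.stack([b[i:i + len(x)] for i in range(window)])
        out += (windows.sum(axis=0) > half).astype(int)   # binary majority = median
    return out

print(median_by_threshold_decomposition(np.array([3, 0, 4, 4, 1, 2, 7, 2, 3])))
```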
New efficient algorithm for image binarization based on probabilistic interpretation of gray levels
Gerard Yahiaoui
This paper presents a new algorithm for the binarization of digital gray-level images. The algorithm may be useful for any application based on the interpretation of a binarized image. It was designed under two main constraints. First, fast binarization: we wanted to avoid any computation of averages and global distributions of grey levels, which often leads to heavy algorithms. Second, a "good looking" binarization: human vision needs a compromise between gray-level and spatial-frequency resolution in order to interpret an image properly. We therefore did not try to build an algorithm that loses as little information as possible in general, but rather one that does not lose the information used by human vision. First, we give a description of some properties of human vision. Second, we propose a simple method that binarizes an image while taking the constraints mentioned above into account. We give a detailed description of our algorithm, emphasizing the ease of its implementation. Finally, we show simulation results on real data.
Model-based 3D object recognition using reciprocal basis sets and direction of arrival techniques
David Cyganski, Richard F. Vaz, Charles R. Wright
This paper presents a new method for model-based object recognition which uses a single, comprehensive analytic object model representing the entirety of a suite of gray-scale views of the object. In this way, object orientation and identity can be directly established from arbitrary views, even though these views are not related by any geometric image transformation. The approach is also applicable to other real and complex sensed data, such as radar and thermal signatures. The unprocessed object model is comprised of a set of basis images with complex exponential harmonic terms as coefficients. A new model is formed comprised of the reciprocal set of the object basis set. The projection of an acquired image onto the reciprocal basis thus produces samples of a complex exponential, the phase of which reveals the pose parameters. Estimation of this phase for several degrees of freedom corresponds to the plane wave direction of arrival (DOA) problem; thus the pose parameters can be found using DOA solution techniques. Results are given which illustrate the performance of a simplified, preliminary, implementation of this method using real-world images.
Nonlinear Technology for Signal Processing, Communication, and Control
Putting chaos into communications
Helena S. Wisniewski
This paper presents an overview of the methods in nonlinear dynamics which are being applied to signal processing; and a new nonlinear coding method which is based on strange attractors. In particular, these nonlinear dynamic methods are being used for: noise reduction, discriminating signals from a noisy background - for example, when nonlinear media are involved, signal identification, classification, and prediction schemes, filtering out multipath propagation, speech modeling, as well as nonlinear codes, medical applications, and the monitoring and control of vibrations.
Experimental techniques for exploiting chaos
William L. Ditto, Mark L. Spano
We describe the implementation of the Ott-Grebogi-Yorke method of controlling chaos in a physical system. This method requires only small time dependent perturbations of one system parameter and does not demand the use of model equations to describe the dynamics of the system. One advantage of the OGY method is that, between these perturbations, the system remains on chaotic trajectories. One can thus use the sensitivity of the chaotic system to switch between different orbits at will.
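A toy version of the idea, small parameter perturbations applied only when the orbit passes close to the target, can be written for the logistic map in a few lines; the linearized control law below is a simplified OGY-style rule, and the map, window size, and perturbation limit are illustrative assumptions rather than anything from the authors' experiments.

```python
import numpy as np

def ogy_logistic(p0=3.9, n=2000, window=0.005, max_dp=0.05):
    """Stabilize the unstable fixed point of x_{n+1} = p x_n (1 - x_n) by
    nudging p only when the chaotic orbit wanders close to the fixed point."""
    x, xs = 0.4, []
    x_star = 1.0 - 1.0 / p0            # fixed point at the nominal parameter
    lam = 2.0 - p0                     # local slope f'(x*)  (unstable: |lam| > 1)
    g = x_star * (1.0 - x_star)        # sensitivity df/dp at x*
    for i in range(n):
        dp = 0.0
        if i > n // 2 and abs(x - x_star) < window:     # control on in second half
            dp = np.clip(-lam * (x - x_star) / g, -max_dp, max_dp)
        p = p0 + dp
        x = p * x * (1.0 - x)
        xs.append(x)
    return np.array(xs)                # chaotic first half, locked near x* afterwards
```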
Noise reduction for chaotic data by geometric projection
Robert G. Cawley, Guan-Hsong Hsu
We present a method for noise reduction that does not depend on detailed prior knowledge of system dynamics. The method has performed reasonably well for known maps and flows. Also we present an empirically based technique to estimate the initial signal-to-noise ratio for time series whose dynamical origin may be unknown.
Chaos, communications, and signal processing
Louis M. Pecora, Thomas L. Carroll
Recent work using chaotic signals to drive nonlinear systems shows that chaotic dynamics is rich in new application possibilities. Among these are stable system design and synchronization. New Driving Signals: Driven systems are easily visualized as dynamical systems which have, as one of their input parameters, a dynamical variable from another, often autonomous, dynamical system. We often refer to the source of the driving signal as the drive system and to the driven system as the response system. This can be viewed as the drive sending a signal to the response, which then alters its behavior according to the signal. Typically, when driven systems are studied or engineered, the driving signals come from constant forces or sine-wave forcing. The use of signals from a chaotic system to drive a nonlinear system offers a new type of driving signal. In our approach [1,2,3,4] two major themes stand out. One is the idea of stability as generalized to chaotic systems. Another is the use of a constructive approach to building useful, chaotically driven systems: we cut apart, duplicate, and paste together nonlinear dynamical systems. Many things can be done with some guidance from what is now known in nonlinear dynamics. We first examine stability. Stability of Chaotically Driven Systems: Consider a general n-dimensional, nonlinear response system, dw/dt = h(w, v), where the driving signal v is supplied by a chaotic system and w and h are n-dimensional vector functions. The question of stability arises when we ask: given a trajectory w(t) generated by this system for a particular drive v, when is w(t) immune to small differences in initial conditions, i.e., when is the final trajectory unique, in some sense? Fig. 1 shows this schematically.
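The drive/response construction described above can be illustrated with the Lorenz system: a full Lorenz drive supplies its x variable to a response made from copies of the y and z equations, and the response converges to the drive's trajectory despite different initial conditions. The Euler integration, step size, and initial conditions below are illustrative choices; only the standard Lorenz parameters are taken from the literature.

```python
import numpy as np

def lorenz_drive_response(T=40.0, dt=0.001, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Pecora-Carroll style synchronization sketch: the (y', z') response,
    driven by the drive's x signal, locks onto the drive's (y, z)."""
    n = int(T / dt)
    x, y, z = 1.0, 1.0, 1.0        # drive state
    yr, zr = -5.0, 30.0            # response state, deliberately mismatched
    err = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)       # drive equations (plain Euler step)
        dy = r * x - y - x * z
        dz = x * y - b * z
        dyr = r * x - yr - x * zr  # response: same equations, driven by x
        dzr = x * yr - b * zr
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        yr, zr = yr + dt * dyr, zr + dt * dzr
        err[i] = abs(y - yr) + abs(z - zr)
    return err                     # decays toward zero as synchronization sets in
```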
Image Coding and Transmission
Comparison of image coding techniques with a picture quality scale
V. Ralph Algazi, Yoshiaki Kato, Makoto M. Miyahara, et al.
A newly developed Picture Quality Scale (PQS) provides a numerical measure of image quality for monochrome images well correlated with the Mean Opinion Score. In this paper, we report some results on the evaluation and comparison of image coding methods with such an objective quality measure. The emphasis is given here to the evaluation of the quality of JPEG coding standard. We also review and discuss the important new areas of research that an objective distortion measure now makes possible.
Future trends in image coding
Ali Habibi
The objective of this article is to present a discussion of the future of image data compression over the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and development, the milestones in the advancement of technology, or the success of upcoming commercial products in the marketplace, which will be the main factors setting the future course of image coding. What we propose to do instead is to look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in the development of theory, software and hardware, together with the future needs for the use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding.
Image compression with QM-AYA adaptive binary arithmetic coder
Joe-Ming Cheng, Glen G. Langdon Jr.
The Q-Coder has been reported in the literature, and is a renorm-driven binary adaptive arithmetic coder. A similar renorm-driven coder, the QM coder, uses the same approach with an initial attack to more rapidly estimate the statistics in the beginning, and with a different state table. The QM coder is the adaptive binary arithmetic coder employed in the JBIG and JPEG image compression algorithms. The QM-AYA arithmetic coder is similar to the QM coder, with a different state table, that offers balanced improvements to the QM probability estimation for the less skewed distributions. The QM-AYA performs better when the probability estimate is near 0.5 for each binary symbol. An approach for constructing effective index change tables for Q-coder type adaptation is discussed.
Improved motion estimation algorithm with post processing and its application in motion-compensated interpolation
Kan Xie, Luc Van Eycken, Andre J. Oosterlinck
Motion-compensated interpolation has several applications in digital image processing, such as field-frequency conversion between different television standards and image coding with frame skipping. Unlike a motion-compensated (MC) predictive algorithm, which aims to minimize the prediction errors that must be coded and transmitted, the motion estimator for MC interpolation must provide reliable motion vector fields which closely approximate the actual motion in the scene in order to reconstruct the skipped images at the receiver end. Generally speaking, the motion vector fields obtained by conventional motion estimation algorithms are not good enough for MC interpolation. In this paper, a new motion estimation algorithm including smoothness constraints is proposed, and its application to MC interpolation of skipped images within a low bit-rate codec is investigated. The simulation results show that the motion vector fields obtained by the proposed algorithm are more homogeneous and more reliable than those obtained by conventional algorithms, and that the quality of the interpolated images is substantially improved for the examined sequence.
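As a baseline for what the paper improves upon, a plain exhaustive block-matching motion estimator is sketched below in Python/NumPy; the block size, search range, and SAD criterion are conventional illustrative choices, and the paper's smoothness constraints and post-processing are not included (a trivial post-step would be a median filter over the vector field).

```python
import numpy as np

def block_match(prev, curr, block=8, search=7):
    """Exhaustive block matching: for each block of `curr`, find the
    displacement into `prev` minimizing the sum of absolute differences."""
    h, w = curr.shape
    vy = np.zeros((h // block, w // block), dtype=int)
    vx = np.zeros_like(vy)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            cur = curr[y0:y0 + block, x0:x0 + block].astype(int)
            best = None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        sad = np.abs(cur - prev[yy:yy + block,
                                                xx:xx + block].astype(int)).sum()
                        if best is None or sad < best[0]:
                            best = (sad, dy, dx)
            vy[by, bx], vx[by, bx] = best[1], best[2]
    return vy, vx
```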
Subband coding of prediction error images using constrained-storage VQ
Christoph Stiller, Olaf Hirsch
A subband coding scheme for the prediction error in a hybrid coder at a rate of 8 kbit/s is presented. In order to achieve a small address overhead, we propose intraband encoding of rectangular-shaped regions. An iterative procedure of relaxation type, which jointly considers all subbands, determines the number, size and position of the update rectangles; it is driven by optimization of the coding gain per bit. The prediction error amplitudes within the selected rectangles compose vectors of different dimensions. They are quantized by a multi-codebook vector quantizer which contains one codebook per vector dimension. The problem of the large storage requirement of multi-codebook vector quantizers is circumvented by a codebook sharing approach. A training procedure for the constrained-storage VQ is presented. In terms of SNR it performs less than 0.5 dB worse than a common multi-codebook VQ while saving 90% of the storage.
New techniques for subband/wavelet transform coefficient coding applied to still image compression
Emmanuel Reusens, Touradj Ebrahimi
New techniques for redundancy removal of the quantized coefficients of a wavelet transform are discussed. Several strategies are developed to improve the redundancy reduction stage in subband/wavelet-based image compression. The influence of the scanning path in coding the coefficients of the subbands after transformation is pointed out, and solutions are proposed for exploiting the correlation between the coefficients more efficiently. New methods are also proposed to encode the addresses of non-zero coefficients using blocks, in both lossy and lossless approaches. Simulations show better performance of the proposed techniques when compared to classical methods, while maintaining an efficient implementation complexity.
Transform vector quantization: application and improvement
Herbert Plansky
This article presents a coding scheme using variable block-size transform coding together with vector quantization (VQ) called VBSTVQ (variable block-size transform vector quantization). The VBSTVQ shows a satisfying picture quality at bitrates of about 0.3-0.6 bit/pel (coding of the luminance signal only). The coding scheme is well suited for multimedia, computer and distribution applications due to its asymmetry in complexity and its inherent hierarchical structure. The picture is segmented into rectangles of different sizes. These rectangles are transformed by a two-dimensional DCT and coded by VQ based on analysis in the spatial and transform domain. A decomposition scheme of the rectangles into vectors which is adapted to non-stationary signals like edges will be introduced. Computer simulations compare the results of constant and variable blocksize TVQ.
Interlaced image sequence coding for digital TV
Bruno Rouchouze, Frederic Dufaux, Murat Kunt
This paper presents a method for interlaced image sequence coding for digital TV. The interlaced nature of the CCIR 601 format, which is the current standard for digital TV, is a serious drawback in most digital video codecs. In order to obtain a more efficient compression, we propose to process only the fields of one parity instead of processing the frames resulting from interlaced to progressive format change. The fields of the other parity are predicted using spatial interpolation based on the corresponding decoded fields, and the prediction error is also coded and transmitted. In this way, the decoder can reconstruct the odd and even parity fields with a reduced transmission cost. Experimental results, where the proposed interlaced coding method is applied to Gabor-like wavelet transform coding of MPEG2 image sequences, show a very good performance.
Image coding through predictive vector quantization
Ajai Narayan, Tenkasi V. Ramabadran
This paper describes a Predictive Vector Quantizer (PVQ) for coding grayscale images. The method described can be regarded as an extension of an existing one-dimensional speech coding algorithm to two-dimensional images. The method applies vector quantization (VQ) to the innovations generated by the well-known scalar Differential Pulse Code Modulation (DPCM) method. It tries to exploit both the simplicity of DPCM and the high compressibility of VQ. Two types of codebooks, random and deterministic, are used in the implementation. Performance results of the method with both types of codebooks are presented for industrial radiographic images. The results are also compared with reconstructions obtained using the Discrete Cosine Transform (DCT) method.
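The DPCM half of the scheme is easy to show in isolation. The sketch below is a closed-loop scalar DPCM along one image row with a uniform quantizer standing in for the vector quantizer that the paper applies to blocks of innovations; the step size and initial prediction are illustrative assumptions.

```python
import numpy as np

def dpcm_row(row, q_step=8):
    """Closed-loop scalar DPCM: predict each pixel from the previous
    reconstructed pixel, quantize the innovation, and feed the reconstruction
    back into the predictor (so encoder and decoder stay in step)."""
    rec = np.empty(len(row), dtype=float)
    pred = 128.0                                # fixed initial prediction (assumption)
    codes = []
    for i, pix in enumerate(np.asarray(row, dtype=float)):
        q = int(round((pix - pred) / q_step))   # quantized innovation (the transmitted code)
        codes.append(q)
        rec[i] = pred + q * q_step              # decoder-side reconstruction
        pred = rec[i]
    return np.array(codes), rec
```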
Transform encryption coding for performance improvement of JPEG standard
Chung Jung Kuo, Maw S. Chen
Transform encryption coding (TEC) is a universal technique for improving the performance of conventional transform coding (TC) techniques. It not only increases the compression ratio, quality, and security level of the coded image but also decreases the sensitivity of the coded image to channel noise. In TEC, the TC technique is applied to the encrypted image instead of the natural scene image; each sample of the encrypted image is a weighted sum of the original image samples. TEC is therefore compatible with all TC techniques. The JPEG system is applicable to continuous-tone gray-scale or color digital still image data, and since a TC technique is employed in the JPEG baseline system, TEC can be used to improve its performance. In this paper, the quantization tables in the JPEG system are redesigned to match the statistical characteristics of encrypted images. In addition, some parameters required by the encryption process are chosen. The performance of the JPEG baseline system on encrypted images can then be increased. According to the simulation results, an SNR increase of about 0.9 dB in luminance and 0.7 dB in chrominance on the JPEG demonstration images is obtained by the combined JPEG-TEC technique.
Innovative Applications
Enhancement and restoration of ancient manuscripts
Anil Christopher Kokaram, J. A. Stark, William J. Fitzgerald
This paper presents some results obtained by applying standard image enhancement techniques to improve the readability of ancient manuscripts. We implement adaptive histogram equalisation techniques for the removal of the stain on a particular Greek parchment. We also use the residual from a linear prediction technique to highlight edges in the text. Although the results can reduce the time it takes to decipher a document, the results in their present form do not highlight characters that the reader could not have seen given enough time. There is wide scope for future work in this area and it promises to be most beneficial to the study of ancient text.
Analysis and matching of degraded and noisy fingerprints
Emre Kaymaz, Sunanda Mitra
The present work reports a three stage matching algorithm for latent fingerprints. The algorithm includes preprocessing by a transform domain filter, computation of moments as invariant features and finally, use of a nearest neighbor clustering analysis for fingerprint matching. The transform domain filter involves selective amplification of the spectral band containing the highest energy, and subsequent use of a band-pass filter. The enhanced image is almost noise-free, and shows prominent features in the fingerprint that cannot be extracted by other conventional enhancement techniques. The moments of enhanced fingerprints provide invariant features. The classification of fingerprints is performed by a nearest neighbor clustering of the moment features characterizing a specific fingerprint.
Measurement of appearances of oxide residues on rolled metals
Robert C. Chang, John C. Montagna, Bernard J. Hobi, et al.
Oxide residues on rolled aluminum sheets appear as blemishes which deteriorate aesthetic surface quality. Consequently, the relative severities of surface oxides must be evaluated for product quality control, especially in packaging applications. In the current practice, an experienced QA person visually inspects sheet surface for oxide residues and assigns a grade based on the apparent severity. This procedure is limited by the ability of the operator to resolve varying levels of oxides, and inspection results vary from operator to operator. This paper presents an imaging technique developed to measure undesirable oxides on metal surfaces. The technique provides quantification of oxide severity with potentially finer resolution and improved repeatability than currently possible. Preliminary results from the evaluation of oxides on cold rolled aluminum samples show that oxide severities quantified using this technique correlate well with the discrete grades assessed by QA personnel.
Fast accessing method of color image
Machiko Sato, Jung-Kook Hong
This paper describes a method of indexing the color distribution of an image, which makes it easier to search for and access regions by their colors. The index has a quadtree structure, with color indexes for successively divided image quadrants stored in its nodes. The colors in each image quadrant are thus represented by a color index defined to take account of human visual perception. To perform a search, we descend through the index from the root, checking whether the color index satisfies a given condition. The nodes satisfying that condition correspond to image quadrants that contain the color being queried. Experimental results show that this method is useful for detecting regions according to their colors in the early stages of the search process.
Three-dimensional submicron tomography of interfacial defects in GaAs IC ohmic contacts
Michel Castagne, E. Baudry, P. Crudo
It is known that ohmic contact technology is a key problem in the development of GaAs MESFET circuits. The contact is usually achieved through a multilayer (Au, Ge, Ni) interdiffusion operation under controlled annealing, and its electrical quality derives from the texture of the complex alloy islands or micro-dots induced by the process. There is presently no nondestructive means of observing the physical nature of the interface between the contact and the bulk material. We propose using laser scanning tomography to explore the interfacial microprecipitates noninvasively and nondestructively. A new micro-scanning method, with corresponding data processing, allows us to obtain a three-dimensional view of the internal region underlying the contact. Its resolution is on the order of a micron in the lateral directions but largely sub-micron along the z direction perpendicular to the surface, so it gives a precise analysis of the critical region of electronic transfer in the transistor. Experimental results are presented for standard circuits that have undergone thermal aging.
Contact-free determination of human body segment parameters by means of videometric image processing of an anthropomorphic body model
Herbert Hatze, Arnold Baca
The development of noninvasive techniques for determining biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) is receiving increasing attention from the medical sciences (e.g. orthopaedic gait analysis), bioengineering, sport biomechanics and the various space programmes. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, permitting the application of shape recognition procedures incorporating edge detection and calibration algorithms. In this way, a total of 181 object-space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (Version 3.1 onwards) and incorporating a VGA board with a feature connector for connection to a Super Video Windows framegrabber board, for which a free 16-bit slot must be available. In addition, a VGA monitor (50 - 70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantages of the new method lie in its ease of application, its comparatively high accuracy, and the rapid availability of the body segment parameters, which is particularly useful in clinical practice. An example of its practical application illustrates the technique.
Tissue characterization by texture analysis of ultrasonic images
Olivier Basset, Zhigang Sun, Gerard Gimenez
Ultrasonic B-scan images present a particular texture known as "speckle" which may reveal information about the structure of the investigated tissue. The present work is devoted to the discrimination of various prostatic tissues (normal tissue, benign prostatic hypertrophy and cancer) on ultrasonic scans by means of texture analysis. Three methods have been implemented: the autocorrelation function, the co-occurrence matrices (both measuring second-order statistics) and the grey-level run-length matrices. Parameters derived from the co-occurrence matrices provide a fairly good tissue signature: processing 37 images gives a 78% rate of correctly classified samples, even though the images cannot be discriminated visually. These results are obtained when wide regions of interest are investigated (64 x 64 pixels); they are less significant when the sample size decreases, that is, when the pathological area is very small.
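As a small, hedged illustration of the co-occurrence-matrix route (the feature set, distances and classifier below are assumptions, not the authors' exact parameters), grey-level co-occurrence features can be computed on a 64 x 64 region of interest and classified with a nearest-neighbour rule:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    # roi: 64 x 64 uint8 region of interest from the B-scan.
    glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ('contrast', 'homogeneity', 'energy', 'correlation')
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def classify(roi, train_rois, train_labels):
    # 1-nearest-neighbour tissue label in feature space.
    f = glcm_features(roi)
    d = [np.linalg.norm(f - glcm_features(t)) for t in train_rois]
    return train_labels[int(np.argmin(d))]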
Three-dimensional reconstruction of the prostate from transverse or sagittal ultrasonic images
Olivier Basset, Isabelle R. Dautraix, Gerard Gimenez, et al.
A device devoted to the 3D representation of the prostate has been developed. It operates with either sagittal or transverse images. On each selected image, an operator outlines the prostate (and/or any pathological area) by means of a digitizing tablet. These contours are then described by a limited number of points. From these points, which belong to the object envelope, two 3D representation techniques have been implemented: the B-spline parametric surface and the triangulation method. The main advantage of the triangulation algorithm is its speed, in contrast to the parametric-surface approach; moreover, it yields satisfactory representations of simple anatomical shapes such as the prostate. Understanding of the object's geometry is improved by a fast rotation of the whole object.
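A rough sketch of the contour-stitching idea behind the triangulation method is shown below; it is an assumed reading of the approach in which successive outlined slices are resampled to the same number of points and joined by a strip of triangles, and the resampling and pairing rules are simplifications.

import numpy as np

def resample(contour, n=64):
    # Evenly resample a closed contour (k x 2 array of x, y) to n points
    # by arc length.
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(contour, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n, endpoint=False)
    return np.column_stack([np.interp(t, d, contour[:, i]) for i in range(2)])

def stitch(lower, upper, z0, z1, n=64):
    # Build the triangles joining two consecutive slice contours at heights z0, z1.
    a = np.column_stack([resample(lower, n), np.full(n, float(z0))])
    b = np.column_stack([resample(upper, n), np.full(n, float(z1))])
    tris = []
    for i in range(n):
        j = (i + 1) % n
        tris.append((a[i], a[j], b[i]))     # two triangles per quad of the strip
        tris.append((b[i], b[j], a[j]))
    return tris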
Contributions of General Interest
One automatic segmentation method of x-ray image
De-Chen Zhan, Jing-Chun Chen
This paper mainly addresses the automatic segmentation of X-ray images used in the nondestructive inspection of weld defects and proposes a new automatic segmentation method based not on the histogram but on the statistics of the image. A mathematical model and an implementation algorithm are proposed and discussed, and several experimental results (photographs) are given to show the effectiveness of the algorithm.
Merit maximization approach to binocular vision using dynamic programming
Suya You, Jian Liu, Faguang Wan
In this paper, we cast stereo matching as a problem in merit maximization. This is achieved by formulating a merit function that combines the similarity between primitives in the right and left images with the mutual dependency between primitives. Stereo matching is performed by finding the "best" paths that maximize the merit function, which is handled by a dynamic programming technique. With this algorithm, a globally optimal matching can be obtained. We give a mathematical description of the merit function, and the algorithm has been implemented. Experimental results are presented to show the efficacy of the proposed stereo matching method.
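A minimal sketch of the idea for one epipolar line is given below; the similarity cost, smoothness penalty and disparity-continuity constraint are illustrative choices, not the merit function of the paper.

import numpy as np

def scanline_match(left, right, max_disp=16, smooth=0.1):
    # left, right: 1-D intensity arrays along corresponding epipolar lines.
    n = len(left)
    merit = np.full((n, max_disp + 1), -np.inf)
    back = np.zeros((n, max_disp + 1), dtype=int)
    merit[0, 0] = -abs(float(left[0]) - float(right[0]))
    for x in range(1, n):
        for d in range(min(x, max_disp) + 1):
            sim = -abs(float(left[x]) - float(right[x - d]))         # similarity term
            cand = np.arange(max(0, d - 1), min(max_disp, d + 1) + 1)
            score = merit[x - 1, cand] - smooth * np.abs(cand - d)   # dependency term
            k = int(np.argmax(score))
            merit[x, d] = sim + score[k]
            back[x, d] = cand[k]
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmax(merit[-1]))
    for x in range(n - 2, -1, -1):                                   # trace the best path
        disp[x] = back[x + 1, disp[x + 1]]
    return disp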
Markov iterated function system model of images
Huiguo Luo, Yaoting Zhu, Guang-Xi Zhu, et al.
All images have two basic characteristics: geometry and texture. Based on these two characteristics, we present the Markov Iterated Function System (MIFS) model, which combines an Iterated Function System (IFS) with a Markov Random Field (MRF). Fractal images exhibit statistical self-similarity, especially in their geometric topological structure, and IFS is useful for describing these self-similarities. We consider a texture to be a stochastic field, usually anisotropic, and choose an MRF as the texture model. The geometric characteristics and texture are conveniently controlled by changing the parameters of the MIFS model. We have developed a fast iterative algorithm to generate an MIFS sample and an algorithm to estimate the parameters of the MIFS.
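As a toy illustration of the IFS half of the model (the MRF texture part is omitted, and the affine maps below are placeholders rather than fitted MIFS parameters), the "chaos game" iteration renders an attractor's geometric structure:

import numpy as np

def ifs_attractor(maps, probs, n_points=20000, seed=0):
    # maps: list of (A, b) affine contractions; probs: their selection probabilities.
    rng = np.random.default_rng(seed)
    pts = np.zeros((n_points, 2))
    x = np.zeros(2)
    for i in range(n_points):
        A, b = maps[rng.choice(len(maps), p=probs)]
        x = A @ x + b                       # apply a randomly chosen contraction
        pts[i] = x
    return pts

# Example: three half-scale contractions produce a Sierpinski-like attractor.
maps = [(0.5 * np.eye(2), np.array(b)) for b in ([0.0, 0.0], [0.5, 0.0], [0.25, 0.5])]
points = ifs_attractor(maps, probs=[1 / 3] * 3)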
Novel approach to human face recognition
Ke Liu, Frederic Jallut, Ying-Jiang Liu, et al.
This paper presents a new method of human face recognition based on a novel algebraic feature extraction method. An input face image is first transformed into a standard image; the projective feature vectors of the standard image are then extracted by projecting it onto the optimal discriminant projection vectors; finally, recognition is completed by classifying these projective feature vectors. Experimental results show that the method is effective.
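A hedged sketch of such a pipeline is given below; Fisher's linear discriminant stands in for the paper's optimal discriminant projection vectors, and the normalisation and classifier are assumptions.

import numpy as np
from skimage.transform import resize
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def standardise(face, size=(32, 32)):
    # Transform an input face into a fixed-size, zero-mean, unit-variance vector.
    f = resize(face, size, anti_aliasing=True).ravel()
    return (f - f.mean()) / (f.std() + 1e-8)

def train(faces, labels):
    X = np.array([standardise(f) for f in faces])
    lda = LinearDiscriminantAnalysis().fit(X, labels)          # discriminant projection
    knn = KNeighborsClassifier(n_neighbors=1).fit(lda.transform(X), labels)
    return lda, knn

def recognise(face, lda, knn):
    # Classify the projective feature vector of the standardised face.
    return knn.predict(lda.transform([standardise(face)]))[0]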
Aimpoint selecting method based on target image shape analysis
Zhiyong Li, Zhenkang Shen
A shape analysis method for target images is presented in this paper. It can be used for aimpoint selection in missile homing. For asymmetric and branching-structure targets, the method solves the problem of the aimpoint lying on the edge of, or even outside, the target, which results from traditional methods that take the centroid of the target image as the aimpoint. Our method gives a graph representation of the target image and finds the key part of the target to serve as the aimpoint, improving the intelligence and efficiency of target attack.
Automatic analysis of strongly noisy particle images for particle sizing
Tianshu Lai, Yushan Tan, Zhilin Xiang
Some of the problems related to the automatic analysis of strongly noisy particle images are discussed in this paper. New double-thresholding and correlation-focusing recognition methods are developed and used to automatically analyze a particle image reconstructed from the hologram of solid rocket propellant combustion. Test results are given and show that the new methods are effective for the automatic analysis of strongly noisy particle images.
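A small sketch of one possible double-thresholding scheme is shown below (an assumed interpretation, not necessarily the authors' method): a strict threshold seeds confident particle pixels, a looser threshold grows them, and weak components not attached to a seed are discarded as noise.

import numpy as np
from scipy import ndimage

def double_threshold(img, low, high):
    strong = img > high                       # confident particle pixels (seeds)
    weak = img > low                          # seeds plus uncertain pixels
    labels, _ = ndimage.label(weak)
    keep = np.unique(labels[strong])          # weak components touching a seed
    return np.isin(labels, keep[keep > 0])

def particle_sizes(mask):
    labels, n = ndimage.label(mask)
    return ndimage.sum(mask, labels, index=np.arange(1, n + 1))   # area per particle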
Digital filtering methods used in eliminating diffraction halo of speckle interferograms
Yibing Yang, Zhenya He
Homomorphic filtering and digital processing methods are proposed in this paper to eliminate the diffraction halo in speckle interferograms. According to the principle of speckle interferometry, a speckle pattern can be regarded as the product of two parts: a diffraction-halo part and a fringe part. The intensity of the diffraction halo varies much more slowly than that of the fringes in the direction perpendicular to the fringes; the halo not only reduces the contrast of the speckle fringes but also shifts the positions of the fringe extrema. Because all of the useful information is contained in the fringes, the homomorphic processing method converts the two multiplicative parts into two additive parts, so that digital filtering in the frequency domain can compress the diffraction halo and enhance the fringe contrast. Digital methods for eliminating the diffraction-halo effect are realized using a simulation of the halo's intensity distribution. These methods can effectively eliminate or reduce the diffraction-halo background in speckle patterns.
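The homomorphic idea can be sketched as follows; a spatial Gaussian low-pass stands in here for the frequency-domain filtering described above, and the filter width is an assumption.

import numpy as np
from scipy import ndimage

def remove_halo(speckle, sigma=25.0):
    # Logarithm turns the multiplicative halo-times-fringe model into a sum.
    log_img = np.log(speckle.astype(float) + 1.0)
    halo = ndimage.gaussian_filter(log_img, sigma)    # slowly varying halo estimate
    fringes = np.exp(log_img - halo)                  # halo suppressed, fringes kept
    return fringes / fringes.max()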
Effect of MTF of digital image acquisition device on phase-measuring profilometry
Xianyu Su, Xiang Zhou
Because of the MTF of the digital image acquisition device, the algorithm of phase-measuring profilometry (PMP) [1, 2] cannot be regarded as a "point to point" operation. The phase of a point depends not only on its N intensity values obtained by the digital phase-shifting technique but also on its adjacent points, so systematic errors are introduced by the MTF. A theoretical analysis is presented, and simulation and experimental results are given.
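For reference, a minimal version of the per-pixel N-step phase-shifting estimate that the "point to point" remark refers to is sketched below; MTF blurring couples neighbouring pixels and so violates this per-pixel assumption.

import numpy as np

def n_step_phase(frames):
    # frames: array of shape (N, H, W), the N phase-shifted fringe images.
    N = frames.shape[0]
    delta = 2 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(delta), frames, axes=1)   # sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(delta), frames, axes=1)   # sum_n I_n cos(delta_n)
    return -np.arctan2(num, den)                        # wrapped phase at each pixel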
Three-dimensional reconstruction of nodus sinuatrialis
Jinxiang Wang, Jianson Liu
An example is presented of applying the technique of constructing realistic images of three-dimensional (3D) objects on a two-dimensional (2D) display screen to the nodus sinuatrialis of the heart. The technique is suited to objects represented by slices. In the example, the various tissue structures are represented by different colors.
Quantitative Schlieren method using digital image processing for supersonic flow characterization
Ionel Valentin Vlad, Nicholas Ionescu-Pallas, Ion G. Apostol
New analytic formulas are reported relating the light intensity distribution in the output plane of the schlieren system to the refractive-index gradients of the air flow around a model. Digital processing of the schlieren image using this new algorithm, together with a proper calibration, led to precise measurements of the refractive-index (and hence mass-density) distribution, within error limits of 5%.
Innovative Applications
PALIMADAR: a PAL-optic-based imaging module for all-round data acquisition and recording
Pal Greguss, Attila Kertesz, Viktor Kertesz
The problems of nonscanning three-dimensional all-round data acquisition are discussed using the concept of the "sphere of vision". Based on the recently developed panoramic annular lens (PAL optics), which abandons the "see-through-a-window" (STW) concept of data acquisition, a new signal-collecting module has been developed using the visual strategy of birds. The PAL Imaging Module for All-round (spherical) Data Acquisition and Recording (PALIMADAR) uses two PAL optics juxtaposed on the same optical axis, with their flat surfaces facing each other. The unique feature of PALIMADAR is that its visual field consists of three regions, as in birds: a binocular or stereoscopic region, an anterior visual field, and a lateral visual field. As a consequence, this module - for the first time in imaging history - covers a spherical visual field without the need for any scanning technique.