Optimal nonlinear extension of linear filters based on distributed arithmetic
Author(s):
David Akopian;
Jaakko T. Astola
Distributed arithmetic (DA) based implementation of linear filters relies on the linear nature of this operation and has been suggested as a multiplication-free solution. In this work we introduce a nonlinear extension of linear filters by optimizing, under the MSE criterion, the memory function (MF, a multivariate Boolean function with non-binary output) that is at the core of the DA-based implementation. Such an extension improves the filtering of noise that may contain non-Gaussian components, without increasing the complexity of implementation. Experiments on real images have shown the superiority of the proposed filter over optimal linear filters. Different versions of these filters are also considered for the removal of impulsive noise, for processing with large input data windows, and for fast processing.
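To make the mechanism concrete, the sketch below shows plain linear distributed arithmetic in Python: the bit planes of the filter window address a lookup table whose entries are the memory function. In the paper's extension, the same table would instead hold MSE-optimized (nonlinear) values; the names, the 8-bit input format, and the example taps are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def da_filter(x, lut, bits=8):
    """Distributed-arithmetic evaluation of an N-tap filter.

    x   : integer input window, shape (N,), values in [0, 2**bits).
    lut : memory function, length 2**N; for a linear filter,
          lut[a] = sum of the taps selected by the bit pattern a.
          (The paper replaces these linear entries with optimized ones.)
    """
    n = len(x)
    y = 0.0
    for b in range(bits):                      # process one bit plane at a time
        addr = 0
        for k in range(n):                     # assemble the LUT address from
            addr |= ((x[k] >> b) & 1) << k     # the b-th bit of every sample
        y += lut[addr] * (1 << b)              # weight partial result by 2**b
    return y

# Linear memory function for example taps w:
w = np.array([0.25, 0.5, 0.25])
lut = np.array([sum(w[k] for k in range(3) if a >> k & 1) for a in range(8)])
print(da_filter(np.array([10, 20, 30]), lut))  # equals w @ [10, 20, 30] = 20.0
```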
Multifiltering approach to adaptive speckle reduction in textured SAR images
Author(s):
Bruno Aiazzi;
Luciano Alparone;
Stefano Baronti;
Roberto Carla;
S. Lolli
Speckle reduction in synthetic aperture radar (SAR) images is a key point in facilitating applicative tasks. A filter aimed at speckle reduction should strongly smooth homogeneous regions while preserving point targets, edges, and linear features. A tradeoff, however, must be struck on textured areas. Filtering capabilities depend on local image characteristics, and in general no single filter outperforms the others in every situation. In this work, a set of adaptive filters is considered, with attention to those oriented towards a multiresolution approach. Images are individually processed by each filter, and the output at each pixel position is obtained by choosing one of the channels. The selection is based on thresholding local features that account for both space-varying statistics and geometry. Results on real SAR images show that even an empirical choice of thresholds is noncritical: the novel scheme outperforms each filter taken individually, at least according to visual criteria.
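As a hedged illustration of per-pixel channel selection (the paper's filter bank and local features are richer), the following Python sketch switches between a smoothing channel and a detail-preserving channel by thresholding the local coefficient of variation, a common speckle statistic; the threshold value and both channels are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def multifilter_select(img, cv_threshold=0.25, size=5):
    """Per-pixel choice between two despeckling channels, driven by the
    local coefficient of variation (low = homogeneous, high = textured)."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size)             # channel 1: strong smoothing
    sq_mean = uniform_filter(img * img, size)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    cv = std / np.maximum(mean, 1e-9)            # local coefficient of variation
    detail = median_filter(img, size)            # channel 2: structure preserving
    return np.where(cv < cv_threshold, mean, detail)
```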
New training method for linear separable threshold Boolean filters
Author(s):
Octavian Valeriu Sarca;
Jaakko T. Astola;
Edward R. Dougherty
The key point of LS-TBF (Linear Separable Threshold Boolean Filter) design is the training of the Linear Separable Boolean Function (LSBF). The standard LS-TBF design method approximates the LSBF with a linear function. This procedure leads to a closed-form expression for the filter weights, but it does not provide the optimal solution. Other LSBF training algorithms are not really applicable in filter design because they either require too many iterations or do not offer reasonable stability. This paper introduces a new gradient-type method applicable to LS-TBF design. The proposed algorithm is able to reach the optimal solution in very few iterations. In order to provide a high convergence rate together with stability, the method uses multiple gain factors at the same time. In this way the proposed algorithm simulates a continuous-time implementation of the steepest-descent method. While the known training methods use many iterations, the proposed one minimizes the number of iterations at the cost of more calculations per step. Consequently, the computational effort spent on auxiliary operations like disk access, windowing, and thresholding becomes negligible, and the overall effort is greatly reduced. Among other advantages, the proposed training algorithm is well suited to parallel implementation.
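The following minimal Python sketch illustrates the multiple-gain idea in a generic steepest-descent setting: several trial steps with different gains are evaluated per iteration and the best is kept. It is not the paper's exact update rule for LSBF training; the quadratic test problem and the gain set are assumptions.

```python
import numpy as np

def multigain_descent(grad, loss, w0, gains=(1e-3, 1e-2, 1e-1, 1.0), iters=20):
    """Gradient descent that tries several gain factors per iteration and
    keeps the lowest-loss step -- a crude analogue of combining multiple
    gains to approximate continuous-time steepest descent."""
    w = w0.copy()
    for _ in range(iters):
        g = grad(w)
        candidates = [w - mu * g for mu in gains]   # one trial step per gain
        w = min(candidates, key=loss)               # keep the best candidate
    return w

# Tiny quadratic example (an assumption for demonstration): min ||Aw - b||^2.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
loss = lambda w: float(np.sum((A @ w - b) ** 2))
grad = lambda w: 2 * A.T @ (A @ w - b)
print(multigain_descent(grad, loss, np.zeros(2)))   # approaches [0.5, -1.0]
```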
Neural method of spatiotemporal filter design
Author(s):
Jaroslaw Szostakowski
There are many applications in medical imaging, computer vision, and communications where video processing is critical. Although many techniques have been successfully developed for filtering still images, significantly fewer techniques have been proposed for filtering noisy image sequences. In this paper a novel approach to spatio-temporal filter design is proposed. Multilayer perceptrons and functional-link nets are used for the 3D filtering. The spatio-temporal patterns are created from real motion video images, and the neural networks learn these patterns. Perceptrons with different numbers of layers and of neurons in each layer are tested, and different input functions for the functional-link net are explored. Practical examples of the filtering are shown and compared with traditional (non-neural) spatio-temporal methods. The results are very interesting, and neural spatio-temporal filters appear to be a very efficient tool for video noise reduction.
Adaptive-vector LQ filter for color image processing
Author(s):
Andrei A. Kurekin;
Vladimir V. Lukin;
Alexander A. Zelensky;
Jaakko T. Astola;
Pauli Kuosmanen;
Kari P. Saarinen
Robust adaptive vector filtering algorithms applicable to color and multichannel image processing are proposed. They are based on the Q-parameter, a vector analog of the quasirange. The considered algorithms offer a good combination of properties: effective noise reduction, the ability to remove spikes, edge and detail preservation, and low computational complexity. Their characteristics are evaluated quantitatively and compared to non-adaptive counterparts. The advantages of the proposed algorithms are also demonstrated by simulated image processing results.
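As an illustrative guess at the construction (the paper's exact Q-parameter definition is not reproduced here), one vector analog of the quasirange can be computed by ordering the window's vectors by aggregate distance, as in vector median filtering, and measuring the spread between trimmed extremes:

```python
import numpy as np

def vector_q_parameter(samples, q=1):
    """Illustrative vector quasirange for a window of c-channel pixels.

    samples : array (n, c). Vectors are ordered by their aggregate L2
    distance to all others (innermost first, as in vector median
    filtering); the spread between the (q+1)-th innermost and the
    (q+1)-th outermost sample is returned. An assumed definition.
    """
    d = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    order = np.argsort(d.sum(axis=1))            # innermost vectors first
    inner, outer = samples[order[q]], samples[order[-1 - q]]
    return np.linalg.norm(outer - inner)
```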
Estimating bidirectional reflectance parameters by forward modeling and statistical inversion of remotely sensed data
Author(s):
Robin P. Fletcher;
Howard J. Grubb;
Christopher Godsalve
We discuss the estimation of ground reflectance from remotely sensed measurements made by a satellite-borne instrument. The particular sensor under study includes a spatially varying weighting of the ground values within overlapping, large, low-resolution image pixels. Thus the problem is one of estimating sub-pixel information from a low-resolution observation. The observation process involves interaction with an unknown atmosphere, realistic modeling of which requires sophisticated and computationally expensive algorithms. These make it difficult to specify likelihood functions for use in a fully Bayesian approach to the inversion, so we work with atmospherically corrected data in a penalized least-squares framework. We formulate realistic physical models for the observation process, which can then be inverted using a forward-modeling approach. This can be solved by using a stochastic optimization algorithm on a suitably chosen energy function, which balances the accuracy of reconstruction of the observed satellite data against our prior beliefs about the spatial smoothness of the ground reflectance. A further complication is introduced as the sensor views the same ground point from two viewing angles at closely spaced times. As reflectance is a surface property that intrinsically varies with viewing angle, depending on the vegetation or ground cover, these two views allow us to jointly estimate two parameters of a suitable physical model for this variation. From this we can also derive other functionals of interest in environmental applications. This estimation can be considered a form of image fusion, via the forward model of the observation process. We consider practical aspects of the smoothness priors, the forward model, and its implications for the design of an energy function and stochastic inversion algorithms in this application. We compare their performance with an existing method on synthetic data generated with the important sensor properties. We discuss a 'stochastic refinement' algorithm, which improves on starting estimates and is a computationally cheap and effective alternative to full stochastic optimization when images are smooth.
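A minimal sketch of the kind of penalized least-squares energy described, assuming a linear observation operator H (rows of point-spread weights) and a quadratic smoothness prior; the paper's actual energy, priors, and reflectance model may differ:

```python
import numpy as np

def energy(r, y, H, lam):
    """Penalized least-squares energy for sub-pixel reflectance estimation.

    r   : candidate reflectance image, shape (m, n)
    y   : atmospherically corrected observations, shape (k,)
    H   : assumed linear observation operator, shape (k, m*n)
    lam : smoothness weight for an illustrative quadratic prior
    """
    data_term = np.sum((y - H @ r.ravel()) ** 2)
    smooth = np.sum(np.diff(r, axis=0) ** 2) + np.sum(np.diff(r, axis=1) ** 2)
    return data_term + lam * smooth
```

A stochastic optimizer (or the paper's 'stochastic refinement') would then seek the image r minimizing this energy.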
Statistical estimators of spatial vector fields in defect classification and texture modeling of high-tech surfaces
Author(s):
Hendrik Rothe;
Dorothee Hueser
For wafers, hard disks, and flat panel displays in particular, fast and accurate technical means for roughness measurement, texture modeling, defect detection, and classification are needed. However, speed and accuracy are often contradictory requirements in these fields. It is shown that by using scatter (ARS/BRDF) data a very fast acquisition of surface microtopography information is possible. Furthermore, it is pointed out that the von Mises distribution can replace the Gaussian distribution for circular or spherical vector fields, i.e., BRDF data obtained from a variety of technical surfaces by stray-light measurement or sensing. For the purpose of in-line quality control, formulae for the parameters corresponding to the mean and variance of Gaussian distributions, as well as parameter tests and confidence intervals for circular unimodal vector fields, are given. Finally, measurement and simulation results are compared to circular statistical inference.
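For reference, the standard moment estimates of the von Mises parameters, the circular analogues of the Gaussian mean and variance mentioned above, can be computed as follows; the kappa approximation is the usual piecewise formula from the circular-statistics literature (Fisher, 1993), not necessarily the authors' choice:

```python
import numpy as np

def von_mises_fit(angles):
    """Moment estimates of the von Mises parameters (mu, kappa) from a
    sample of angles in radians."""
    c, s = np.cos(angles).mean(), np.sin(angles).mean()
    mu = np.arctan2(s, c)                    # mean direction
    r = np.hypot(c, s)                       # mean resultant length in [0, 1]
    if r < 0.53:                             # standard piecewise approximation
        kappa = 2 * r + r ** 3 + 5 * r ** 5 / 6
    elif r < 0.85:
        kappa = -0.4 + 1.39 * r + 0.43 / (1 - r)
    else:
        kappa = 1 / (r ** 3 - 4 * r ** 2 + 3 * r)
    return mu, kappa
```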
Spectral and bispectral feature-extraction neural networks for texture classification
Author(s):
Keisuke Kameyama;
Yukio Kosugi
A neural network model (Kernel Modifying Neural Network: KM Net) specialized for image texture classification, which unifies the filtering kernels for feature extraction and the layered network classifier, is introduced. The KM Net consists of a layer of convolution kernels that are constrained to be 2D Gabor filters to guarantee efficient spectral feature localization. The KM Net enables automated feature extraction in multichannel texture classification through simultaneous modification of the Gabor kernel parameters (central frequency and bandwidth) and the connection weights of the subsequent classifier layers by a backpropagation-based training rule. The capability of the model and its training rule was verified via segmentation of common texture mosaic images. In comparison with the conventional multichannel filtering method, which uses numerous filters to cover the spatial frequency domain, the proposed strategy greatly reduces the computational cost of both feature extraction and classification. Since the adaptive Gabor filtering scheme is also applicable to band selection in moment spectra of higher orders, the network model was extended to adaptive bispectral filtering for extraction of the phase relations among frequency components. The ability of this bispectral KM Net was demonstrated in the discrimination of visually discriminable synthetic textures with identical local power spectral distributions.
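A minimal sketch of the constrained kernel at the heart of the KM Net: a 2D Gabor filter parameterized by central frequency, orientation, and bandwidth, the quantities the network tunes by backpropagation. The discretization and parameter names are assumptions, and the classifier layers are omitted.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Real 2D Gabor kernel: a Gaussian envelope (bandwidth sigma)
    modulating a cosine carrier at central frequency `freq` cycles/pixel
    along orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * freq * u)      # real part; complex in general
    return envelope * carrier
```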
Global and local translation and magnification
Author(s):
S. Umesh;
Aibing Rao;
Gabriel Cristobal;
Leon Cohen;
Jos H. van Deemter
If we translate a function, all information about the translation appears in the phase of the Fourier transform of the translated function. Similarly, if we magnify a function, all information about the magnification appears in the phase of the scale transform. For the case where the function is translated or magnified and also warped, we discuss how one can define approximate translation and magnification factors, and how these factors may depend on the phases and amplitudes of the functions. Partial answers to these questions are given. We also discuss how one can define local and global translation and magnification factors.
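The opening property, that translation lives entirely in the Fourier phase, can be demonstrated with a short phase-correlation sketch (1D, integer circular shifts; an illustration of the property, not the paper's method):

```python
import numpy as np

def translation_from_phase(f, g):
    """Recover the circular shift a in g(t) = f(t - a) from phase alone:
    the normalized cross-power spectrum has unit amplitude, so the shift
    is carried entirely by its phase (1D phase correlation)."""
    F, G = np.fft.fft(f), np.fft.fft(g)
    cross = np.conj(F) * G                       # phase: exp(-2*pi*i*w*a/N)
    cross /= np.maximum(np.abs(cross), 1e-12)    # discard amplitude entirely
    return int(np.argmax(np.fft.ifft(cross).real))

f = np.random.rand(128)
g = np.roll(f, 5)                                # f translated by a = 5
print(translation_from_phase(f, g))              # prints 5
```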
Probabilistic classification of forest structures by hierarchical modelling of the remote sensing process
Author(s):
Jeffrey L. Moffett;
Julian Besag;
S. D. Byers;
W.-H. Li
Satellite sensors observe upwelling radiant flux from the Earth's surface. Classification of forest structures from these measurements is a statistical inference problem. A hierarchical model has been developed by linking several sub-models which represent the image acquisition process and the spatial interaction of the classes. The model for blur assumes the underlying, unobserved image is degraded according to the system point spread function. The model for topographic effects assumes the unblurred pixel values are determined by the corresponding bidirectional reflectance distribution function (BRDF) and the mean spectral reflectance of each class. A discrete Markov random field (MRF) model provides information about the spatial contiguity of the classes. Prior distributions are specified for the mean and covariance parameters. Bayes' theorem is used to construct a posterior probability distribution for the classification given the data. Due to the high dimensionality of the resulting MRF, estimates of image attributes are obtained using a Markov chain Monte Carlo technique. The marginal posterior modes (MPM) point estimate minimizes the expected number of misclassifications by maximizing the marginal probability with which each pixel is classified. The advantages of this approach include the ability to specify a unique BRDF for each class and to have posterior probability estimates provide spatially explicit information about the certainty of the MPM estimate. Limitations of the model include the assumptions necessary for modeling bidirectional reflectance, the difficulty of defining classes at an appropriate scale, and the problem of assessing the accuracy of probabilistic classifications. Specimen results using Landsat TM data are presented.
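A sketch of the MPM point estimate computed from MCMC output, assuming the chain's sampled label images have been collected into an array (the sampler itself and the hierarchical model are omitted):

```python
import numpy as np

def mpm_estimate(samples):
    """Marginal posterior modes from MCMC output: label each pixel with
    the class it received most often across the chain's samples, which
    maximizes the (estimated) marginal posterior probability per pixel.

    samples : int array (n_samples, h, w) of sampled label images
    """
    n_classes = int(samples.max()) + 1
    counts = np.stack([(samples == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)                 # per-pixel modal class
```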
Robustness of optimal binary filters for sparse noise
Author(s):
Edward R. Dougherty;
Artyom M. Grigoryan
An optimal binary image filter is an operator defined on an observed random set (image) whose output random set estimates some ideal (uncorrupted) random set with minimal error. Assuming the probability law of the ideal process is determined by a parameter vector, the output law is also determined by a parameter vector, and the latter law is a function of the input law and a degradation operator producing the observed image from the ideal image. The robustness question regards the degree to which the performance of an optimal filter degrades when it is applied to an image process whose law differs (not too greatly) from the law of the process for which it is optimal. The present paper examines the robustness of the optimal translation-invariant binary filter for restoring images degraded by sparse salt-and-pepper noise. An analytical model is developed in terms of prior probabilities of the signal, and this model is used to compute a robustness surface.
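For context, the standard construction of the optimal translation-invariant binary window filter, whose robustness the paper analyzes, estimates a truth table from training pairs by choosing, for each observed window pattern, the conditionally more probable ideal value. A minimal sketch (the integer pattern encoding and the implicit MAE criterion are illustrative assumptions):

```python
import numpy as np

def design_optimal_binary_filter(obs_patterns, ideal_values, window_size):
    """Estimate the optimal translation-invariant binary window filter.

    obs_patterns : int array (m,) of observed windows encoded as integers
    ideal_values : int array (m,) of the corresponding ideal pixels (0/1)
    Returns a truth table of length 2**window_size: for each pattern,
    the ideal value that occurred more often in training.
    """
    counts = np.zeros((2 ** window_size, 2), dtype=np.int64)
    np.add.at(counts, (obs_patterns, ideal_values), 1)  # joint counts
    return (counts[:, 1] > counts[:, 0]).astype(np.uint8)
```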
Genetic algorithms for automated texture classification
Author(s):
Dan Ashlock;
Jennifer L. Davidson
In this paper we demonstrate that a genetic algorithm can be used to produce collections of pixel locations, termed foot patterns, useful for distinguishing between different types of binary texture images. The genetic algorithm minimizes the entropy of empirical samples taken with a particular foot pattern on a training image. The resulting low-entropy foot patterns for several texture types are then used to classify test images. To classify a given image, foot patterns for several texture types are applied to the image to obtain entropy scores; the lowest-entropy foot patterns are then used in a vote, with the majority among the ten lowest-scoring taken as the classification. On the original test set of sixty images, twelve each from five image types, the resulting classification was 98.3% accurate (one image was not classified). When a sixth texture type, picked specifically to confound the classification technique, was added to the original test set, the technique misclassified several images of the two similar types. This latter experiment helps explain much of the how and why of the texture classification technique. We discuss potential methods for overcoming its limitations.
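A hedged sketch of the fitness computation: the empirical entropy of the binary patterns seen through a foot pattern at random image locations, the quantity the genetic algorithm minimizes per texture class. The sample count and pattern encoding are assumptions; the GA itself is omitted.

```python
import numpy as np

def foot_pattern_entropy(img, offsets, n_samples=2000, seed=None):
    """Empirical entropy (bits) of the binary patterns observed through a
    foot pattern (a list of (dy, dx) pixel offsets) at random locations
    of a 0/1 texture image."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    margin = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    ys = rng.integers(margin, h - margin, n_samples)
    xs = rng.integers(margin, w - margin, n_samples)
    codes = np.zeros(n_samples, dtype=np.int64)
    for k, (dy, dx) in enumerate(offsets):       # encode each sampled pattern
        codes |= img[ys + dy, xs + dx].astype(np.int64) << k
    _, counts = np.unique(codes, return_counts=True)
    p = counts / n_samples
    return float(-(p * np.log2(p)).sum())
```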
Mine boundary detection using partially ordered Markov models
Author(s):
Xia Hua;
Jennifer L. Davidson;
Noel A. C. Cressie
Detection of objects in images in an automated fashion is necessary for many applications, including automated target recognition. In this paper, we present results of an automated boundary detection procedure using a new subclass of Markov random fields (MRFs), called partially ordered Markov models (POMMs). POMMs offer computational advantages over general MRFs. We show how a POMM can model the boundaries in an image. Our algorithm for boundary detection uses a Bayesian approach to build a posterior boundary model that locates edges of objects having a closed-loop boundary. We apply our method to images of mines with very good results.
Fast neural-network-based image segmentation
Author(s):
Slawomir Skoneczny;
Jaroslaw Szostakowski;
Marcin Iwanowski;
Andrzej Orlowski
Image segmentation is a process that extracts the important information from a picture, enabling objects to be distinguished from the image background. There are several methods of image segmentation, depending on the kind of specimen and usually on some a priori knowledge about the picture. Segmentation consists of clustering and classification. In this paper an improved neural network implementation of image segmentation methods is presented. Both the clustering and classification stages can be performed by a neural network approach. Theoretical investigations and practical results are described.
Detection of local objects in radiographic images by structural hypothesis-testing approach
Author(s):
Roman M. Palenichka;
Peter Zinterhof
Detection and binary segmentation of low-contrast flaws (defects) in noisy radiographic images is considered, with an application to the non-destructive evaluation of materials and industrial articles. Known approaches, such as edge detection or unsharp masking followed by a thresholding operation, yield poor results for such images. In the presented method of object detection, a model-based approach is adopted which relies on shape constraints on the objects to be detected and exploits a multiresolution representation of the image. For the detection of local objects, the maximum likelihood principle and statistical hypothesis testing are used, with confidence control at all stages of the image analysis. The proposed novel procedure for estimating image intensity from noisy pixels ensures robust evaluation of the basic model parameters in the presence of outliers, which are treated as impulsive noise.
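As one simple example of outlier-resistant intensity estimation in the spirit of the paper (not its exact procedure), a trimmed mean drops the most extreme window samples before averaging:

```python
import numpy as np

def robust_intensity(window, trim=0.2):
    """Trimmed-mean estimate of local intensity: sort the window samples
    and average only the central portion, so impulsive outliers at either
    extreme cannot bias the estimate. The trim fraction is an assumption."""
    x = np.sort(np.ravel(window))
    k = int(trim * x.size)                    # samples dropped at each end
    return x[k:x.size - k].mean()
```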
Minimum-description-length-based approach to CT reconstruction using truncated projections from objects with unknown boundaries
Author(s):
Tetsuya Yuasa;
Balasigamani Devaraj;
Yuuki Watanabe;
Tomoo Sato;
Yoshiaki Sasaki;
Atsunori Hoshino;
Humio Inaba;
Takao Akatsuka
This paper considers the interior problem of CT reconstruction, in which the outer data are deficient in each projection. For this problem it is effective to restrict the parameters to be estimated, i.e., the pixels, to the region in which the object exists. We investigate this problem using the minimum description length (MDL) principle proposed by Rissanen, which measures the amount of information required to describe a model on the basis of information theory. A reconstruction algorithm and a data structure for this model that reduce the amount of calculation and memory are proposed. Finally, the method's effectiveness is shown by simulation.
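A schematic two-part MDL score of the kind described, trading the number of coded pixels in the candidate support region against the code length of the projection residuals under a Gaussian model; this is Rissanen's principle in outline, not the paper's exact criterion or data structure:

```python
import numpy as np

def mdl_score(projections, predicted, region_mask, bits_per_pixel=16.0):
    """Two-part description length: bits for the model (pixels restricted
    to the candidate support region) plus bits for the projection
    residuals under an assumed Gaussian residual model."""
    resid = projections - predicted
    n = resid.size
    var = max(float(np.mean(resid ** 2)), 1e-12)
    data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * var)  # Gaussian code length
    model_bits = bits_per_pixel * np.count_nonzero(region_mask)
    return data_bits + model_bits
```

Minimizing this score over candidate support regions penalizes both poor data fit and needlessly large regions.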
3D conoscopic vision
Author(s):
Didier Gava;
Francoise J. Preteux
This paper describes a new 3D shape reconstruction method based on conoscopic techniques. Conoscopy is a novel interferometric technique which provides depth information. Such an approach has been integrated in a conoscopic range-finder which can be used to reconstruct 3D macroscopic as well as microscopic information either in a laboratory context or in a controlled or hostile industrial environment. Application examples of 3D profile reconstruction are presented and discussed.
Comparative analysis of discrete and continuous Boolean models
Author(s):
John C. Handley;
Edward R. Dougherty
The Boolean random set model is a tractable random set model used in image analysis, geostatistics, and queueing systems, among others. It can be formulated in the continuous and discrete settings, each of which offers certain advantages with regard to modeling and estimation. The continuous model enjoys more elegant theory but often results in intractable formulas. The discrete model, especially in the 1D directional case, provides flexible models, tractable estimators, and optimal filters.
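A minimal simulation of the discrete 1D Boolean model, for intuition: germ points arrive as a Bernoulli process and each grows a grain, here a segment of uniformly random length (the length law is an assumption); the observed set is the union of the grains.

```python
import numpy as np

def boolean_model_1d(n, p, max_len, seed=None):
    """Discrete 1D Boolean model: Bernoulli(p) germs, each anchoring a
    grain (segment) of uniform random length in {1, ..., max_len}; the
    realization is the union of all grains."""
    rng = np.random.default_rng(seed)
    out = np.zeros(n, dtype=np.uint8)
    germs = np.flatnonzero(rng.random(n) < p)
    lengths = rng.integers(1, max_len + 1, germs.size)
    for g, length in zip(germs, lengths):
        out[g:g + length] = 1                    # grain anchored at its germ
    return out
```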
Spatially adaptive local-feature-driven total variation minimizing image restoration
Author(s):
David M. Strong;
Peter Blomgren;
Tony F. Chan
Total variation (TV) minimizing image restoration is a fairly new approach to image restoration, and has been shown both analytically and empirically to be quite effective. Our primary concern here is to develop a spatially adaptive TV-minimizing restoration scheme. One way of accomplishing this is to locally weight the measure, or computation, of the total variation of the image. The weighting factor is chosen to be inversely proportional to the likelihood of an edge being present at each discrete location. This allows less regularization where edges are present and more regularization where there are no edges, which results in a spatially varying balance between noise removal and detail preservation, leading to better overall image restoration. In this paper, the likelihood of edge presence is determined from a partially restored image. The results are best for images with piecewise constant image features.
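A hedged sketch of the two ingredients described: a weighted TV objective and an edge-driven weight computed from a partially restored image. The discretization, the weight formula, and the parameter names are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def weighted_tv_energy(u, f, weight, lam):
    """Spatially adaptive TV objective: weight(x) scales the local total
    variation, so small weights near likely edges mean less smoothing
    there; lam balances fidelity to the observed image f."""
    ux = np.diff(u, axis=1, append=u[:, -1:])    # forward differences
    uy = np.diff(u, axis=0, append=u[-1:, :])
    tv = np.sum(weight * np.sqrt(ux ** 2 + uy ** 2))
    return tv + 0.5 * lam * np.sum((u - f) ** 2)

def edge_weight(u_partial, beta=10.0):
    """Weight inversely related to edge likelihood, estimated from the
    gradient magnitude of a partially restored image."""
    gx = np.diff(u_partial, axis=1, append=u_partial[:, -1:])
    gy = np.diff(u_partial, axis=0, append=u_partial[-1:, :])
    return 1.0 / (1.0 + beta * np.hypot(gx, gy))
```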
Adaptive Boolean predictive modeling with application to lossless image coding
Author(s):
Ioan Tabus;
Jaakko T. Astola
This paper develops new algorithms belonging to the class of context modeling methods, with direct application to the lossless coding of gray-level images. The prediction stage and the context modeling stage are performed using nonlinear techniques rooted in order-statistics-based nonlinear filtering, which has proved competitive in image restoration applications. The new nonlinear predictors introduced here can easily be rephrased as adaptive nonlinear filtering tools, useful in image restoration applications. We propose a new variant of the Context algorithm, in which the prediction, the modeling of errors, and the coding are realized using a finite state machine modeler (which reduces the complexity of tree modelers by lumping together similar nodes). The coding performance of the new Context algorithm is better than that of the best available algorithms, as illustrated in the experimental section.
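To illustrate context modeling in its simplest form (a binary sequence rather than the paper's gray-level images and FSM modeler), each symbol is predicted from Laplace-smoothed counts gathered in its context of previous bits:

```python
import numpy as np

def context_predict(bits, order=3):
    """Adaptive context modeling on a 0/1 sequence: predict each bit from
    counts accumulated so far in its context (the previous `order` bits),
    updating the counts only after predicting."""
    counts = np.ones((2 ** order, 2))            # Laplace-smoothed counts
    probs = np.empty(len(bits) - order)          # P(bit = 1) per position
    ctx = 0
    for i in range(order):                       # seed the initial context
        ctx = (ctx << 1) | bits[i]
    mask = (1 << order) - 1
    for i in range(order, len(bits)):
        probs[i - order] = counts[ctx, 1] / counts[ctx].sum()
        counts[ctx, bits[i]] += 1                # learn from the outcome
        ctx = ((ctx << 1) | bits[i]) & mask      # slide the context window
    return probs
```

An arithmetic coder driven by these probabilities would complete a lossless coding chain of the same general shape.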
Improved dynamic programming-based handwritten word recognition using optimal order statistics
Author(s):
Wen-Tsong Chen;
Paul D. Gader;
Hongchi Shi
Handwritten word recognition is a difficult problem. In the standard segmentation-based approach to handwritten word recognition, individual character class confidence scores are combined to estimate confidences concerning the various hypothesized identities for a word. The standard combination method is the mean. Previously, we demonstrated that the Choquet integral provided higher recognition rates than the mean. Our previous work with the Choquet integral relied on a restricted class of measures, for which operators based on the Choquet integral are equivalent to a subset of the class of operators known as linear combinations of order statistics (LOS). In this paper, we extend our previous work to find the optimal LOS operator for combining character class confidence scores. Experimental results are provided on about 1300 word images.
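An LOS operator is easy to state concretely: sort the per-character confidence scores and take a fixed weighted sum of the sorted values, with the mean as the equal-weights special case. A minimal sketch (the example weights are illustrative; the paper optimizes them):

```python
import numpy as np

def los_combine(scores, weights):
    """Linear combination of order statistics: sort the confidence scores
    (ascending) and dot with a fixed weight vector over the ranks."""
    return float(np.sort(scores) @ np.asarray(weights))

scores = np.array([0.9, 0.4, 0.7])
print(los_combine(scores, [1/3, 1/3, 1/3]))   # equal weights = the mean, 0.667
print(los_combine(scores, [0.0, 0.0, 1.0]))   # all weight on the top rank, 0.9
```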
Modeling, segmentation, and caliber estimation of bronchi in high-resolution computerized tomography
Author(s):
Francoise J. Preteux;
Catalin Iulian Fetita;
Philippe Grenier M.D.
In this paper, we address bronchi segmentation in high-resolution computerized tomography in order to estimate the bronchial caliber. The method developed is based on mathematical morphology theory and relies on morphological filtering, marking techniques derived from the concept of connection cost, and conditional watershed. In order to evaluate the robustness of the segmentation and the accuracy of the caliber estimates, realistic bronchi modeling based on physiological characteristics has been developed. Depending on the size of the bronchi, the accuracy is up to 90%. Results are presented and discussed.