Proceedings Volume 2299

Mathematical Methods in Medical Imaging III

Fred L. Bookstein, James S. Duncan, Nicholas Lange, et al.
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 8 July 1994
Contents: 9 Sessions, 33 Papers, 0 Presentations
Conference: SPIE's 1994 International Symposium on Optics, Imaging, and Instrumentation
Volume Number: 2299

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions:
  • Deformable Models I
  • Deformable Models II
  • Segmentation Methods I
  • Segmentation Methods II
  • Statistical Methods I
  • Statistical Methods II
  • Reconstruction I
  • Reconstruction II
  • Poster Session
Deformable Models I
Hinting about causes of deformation: visual explorations of a novel inverse problem in medical imaging
Fred L. Bookstein, William D. K. Green
In previous papers in this series, we have extended the praxis of image warping by thin-plate splines to accord with information about edge direction and other features of affine derivatives at landmarks. It is interesting to consider the derived warp that is induced solely by the changes of edge information induced by a landmark displacement. To the extent that this latter transformation resembles the former, we may speak of the landmark displacement as a hypothetical 'cause' of the change observed in the edge image, a cause to be estimated by an inverse regression between these energetically oblique subspaces. The apparent 'scales' of these specifications are discrepant even when they agree in all large-scale features of deformation. We show how nontrivial regressions arise when edge information is collinear with interlandmark segments. Other forms of affine constraints on mappings are not interchangeable with landmark displacements, but correspond rather to relationships among diverse subspaces of derivative specifications likewise apparently discrepant in scale and spatial localization. These derivative-constrained splines, which all arise as singular perturbations of the original landmark-driven spline, thereby supply a very flexible and diverse language of image changes to complement various current approaches emphasizing the tracking or 'evolution' of features pertaining to single images.
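For orientation, the landmark-driven thin-plate spline that these derivative-constrained extensions perturb can be written down very compactly. The following numpy sketch implements only the plain landmark interpolant (no edge-direction or other affine-derivative constraints); the function name and arguments are illustrative, not taken from the paper.

```python
import numpy as np

def tps_warp(src, dst, pts, eps=1e-12):
    """Map query points `pts` with the thin-plate spline interpolating the
    source landmarks `src` (n x 2) onto the target landmarks `dst` (n x 2)."""
    n = src.shape[0]

    def U(r2):
        # Radial basis U(r) = r^2 log(r^2), defined as 0 at r = 0.
        return np.where(r2 > eps, r2 * np.log(r2 + eps), 0.0)

    K = U(((src[:, None, :] - src[None, :, :]) ** 2).sum(-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    W = np.linalg.solve(L, rhs)                    # non-affine + affine coefficients

    Q = U(((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1))
    return Q @ W[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ W[n:]
```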
Image warping using derivative information
Kanti V. Mardia, John A. Little
Following the pioneering work of Bookstein and Green on using edgel information through splines, we provide a kriging approach which is general, exact and theoretically elegant. We first give the mathematical formulation, through the regression approach, of the kriging predictor with derivative information on a line. It is shown how Bookstein and Green's limiting approach leads to the same operational solution. Further, the predictor and the bending energy are decomposed into relevant parts. We extend our approach to any dimension, including the edgel case. Some applications to medical images are provided.
Face description from laser range data
J. T. Kent, Kanti V. Mardia, Sophia Rabe
Laser scanners are used to measure and store surface coordinates of the human face. Descriptions of shape can be extracted to provide visual interpretations or building blocks for further statistical analysis. We discuss three methods of describing shapes: (a) segmentation of the surface into regions of uniform 'surface-type' based on curvature information; (b) representation of the shape by the locations of a parsimonious set of landmarks and derivative information at these landmarks; (c) surface fitting by kriging using landmarks with derivative information. A surface-type segmentation provides an easily interpretable map of the face and highlights areas of the surface where the curvatures change abruptly. Landmark data may be used for statistical comparisons of shapes. And finally, kriging surfaces may be used to visualize the landmark information.
Analysis of cardiac motion with recursive comb filtering
John C. McEachen II, Arye Nehorai, James S. Duncan
A framework for temporal analysis of left ventricular (LV) endocardial wall motion is presented. This approach uses the technique of 2D comb filtering to model the periodic nature of cardiac motion. A method for flow vector computation is presented which defines a relationship between image-derived, shape-based correspondences and a more desirable, smoothly varying, set of correspondences. A recursive filter is then constructed which takes into consideration this relationship as well as knowledge of temporal trends. Experimental results for contours derived from cycles of actual cardiac magnetic resonance images are presented. Applications to the analysis of regional LV wall function are discussed.
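The paper applies its recursive comb filter to shape-based correspondence vectors; purely as an illustration of the underlying idea, a one-dimensional recursive comb filter that reinforces components repeating at the cardiac period might look like this (the period and blending weight are assumed values):

```python
import numpy as np

def recursive_comb(x, period, alpha=0.7):
    """Recursive comb filter: each sample is blended with the filtered value one
    period earlier, so components repeating with that period are reinforced and
    non-periodic disturbances are attenuated. `x` is a 1D signal (one value per
    frame); `period` is the cardiac period in frames."""
    y = np.asarray(x, dtype=float).copy()
    for n in range(period, len(y)):
        y[n] = (1.0 - alpha) * y[n] + alpha * y[n - period]
    return y
```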
Deformable Models II
Automated motion estimation from M-mode echocardiograms
Robert A. Close, James Stuart Whiting, Jack Sklansky, et al.
New algorithms for motion estimation from sequential images are applied to M-mode echocardiograms. Motion is estimated by finding a transformation which relates an initial and final image. The transformation includes a 1D displacement field and modifications in image intensity. The displacements and intensity modifications are adjusted iteratively using the method of convex projections applied to linearized constraint equations. Preliminary results indicate that this method is effective in estimating motion from M-mode images. Computed velocity vectors are approximately tangent to the visible heart wall boundary trajectories. Motion computed from a single reference time appears to provide a means for tracking individual heart wall boundaries.
Hybrid boundary-based and region-based deformable models for biomedical image segmentation
John M. Gauch, Homer H. Pien, Jayant Shah
The problem of segmenting an image into visually sensible regions has received considerable attention. Recent techniques based on deformable models show particular promise for this problem because they produce smooth closed object boundaries. These techniques can be broadly classified into two categories: boundary-based deformable models, and region-based deformable models. Both of these approaches have distinct advantages and disadvantages. In this paper, we introduce a hybrid deformable modeling technique which combines the advantages of both approaches and avoids some of their disadvantages. This is accomplished by first minimizing a region-based functional to obtain initial edge strength estimates. Smooth closed object boundaries are then obtained by minimizing a boundary-based functional which is attracted to the initial edge locations. In this paper, we discuss the theoretical advantages of this hybrid approach over existing image segmentation methods and show how this technique can be effectively implemented and used for the segmentation of 2D biomedical images.
Segmentation Methods I
Automatic contour tiler (CTI): automatic construction of complex 3D surfaces from contours using the Delaunay triangulation
Gregg S. Tracton, Jun Chen, Edward L. Chaney
An automatic contour tiler (CTI) has been designed and implemented for use with planar, simple, possibly concave, nonintersecting `wireloop' contours which are typical in medical applications. Without user interaction or guidance, CTI connects 2D contours into 3D branching structures and then produces the tiles by extracting the surface of the resulting volume. CTI is a perturbing tiler based on Boissonnat's method. Previous ideas are extended by offering implementation suggestions--above and beyond theoretical considerations--that result in a robust program even in the face of ill-formed contours. CTI is one of a suite of tools written to an NCI standard. This results in very portable code and makes it practical and economical to produce portable tools regardless of the site's local data formats.
Medical anatomy segmentation kit: combining 2D and 3D segmentation methods to enhance functionality
Gregg S. Tracton, Edward L. Chaney, Julian G. Rosenman, et al.
Image segmentation, in particular defining normal anatomic structures and diseased or malformed tissue from tomographic images, is common in medical applications. Defining tumors or arterio-venous malformations from computed tomography or magnetic resonance images are typical examples. This paper describes a program, Medical Anatomy Segmentation Kit (MASK), whose design acknowledges that no single segmentation technique has proven to be successful or optimal for all object definition tasks associated with medical images. A practical solution is offered through a suite of complementary user-guided segmentation techniques and extensive manual editing functions to reach the final object definition goal. Manual editing can also be used to define objects which are abstract or otherwise not well represented in the image data and so require direct human definition - e.g., a radiotherapy target volume which requires human knowledge and judgement regarding image interpretation and tumor spread characteristics. Results are either in the form of 2D boundaries or regions of labeled pixels or voxels. MASK currently uses thresholding and edge detection to form contours, and 2D or 3D scale-sensitive fill and region algebra to form regions. In addition to these proven techniques, MASK's architecture anticipates clinically practical automatic 2D and 3D segmentation methods of the future.
Automatic construction of an attributed relational graph representing the cortex topography using homotopic transformations
Jean-Francois Mangin, Vincent Frouin, Isabelle Bloch, et al.
We propose an algorithm allowing the construction of a high level representation of the cortical topography from a T1-weighted 3D MR image. This representation is an attributed relational graph (ARG) inferred from the 3D skeleton of the object made up of the union of gray matter and cerebro-spinal fluid enclosed in the brain hull. In order to increase the robustness of the skeletonization, topological and regularization constraints are included in the segmentation process using an original method: homotopically deformable regions. This method is halfway between deformable contour and Markovian segmentation approaches. The 3D skeleton is segmented into simple surfaces (SSs) constituting the ARG nodes (mainly sulcus parts). The ARG relations are of two types: first, the SS pairs connected in the skeleton; second, the SS pairs delimiting a gyrus. The described algorithm has been developed within the framework of a project aiming at the automatic detection and recognition of the main cortical sulci. Indeed, the ARG is a synthetic representation of all the information required for sulcus identification. This project will contribute to the development of new methodologies for human brain functional mapping.
Segmentation Methods II
Uncertainty associated with segmenting structure in echocardiographic images
David C. Wilson, Edward A. Geiser, Yongzhi Yang, et al.
The four main goals of this paper are as follows: First, address the different types of uncertainty and variability encountered when designing algorithms for automatic estimation of the epicardial and endocardial borders in 2D echocardiographic short-axis image sequences. Second, indicate certain choices forced by the setting. Third, indicate requirements expected of the algorithm. Fourth, address the question of what criteria should be used to decide when a method is a success and point out that wall thickness, chamber area, and the area change fraction tend to be unstable calculations.
Deblurring Gaussian blur
Bart M. ter Haar Romeny, Luc M. J. Florack, Mark de Swart, et al.
To enhance Gaussian blurred images, the structure of Gaussian scale-space is studied in a small environment along the scale axis. A local Taylor expansion in the negative scale direction requires the calculation of high order derivatives with respect to scale. The generating differential equation for linear scale-space, the isotropic diffusion equation, relates these derivatives to spatial Laplacians. The high order spatial derivatives are calculated by means of convolution with Gaussian derivative kernels, enabling well-posed differentiation. Deblurring incorporating even 32nd order spatial derivatives is accomplished successfully. A physical limit is experimentally shown for the Gaussian derivatives due to discrete raster representation and coarseness of the intensity discretization.
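A minimal sketch of the deblurring idea described above, truncated at second order and using Gaussian derivative kernels for well-posed differentiation, might read as follows; the scale step, kernel width and expansion order are illustrative, and the paper carries the expansion to far higher order:

```python
import numpy as np
from math import factorial
from scipy.ndimage import gaussian_filter

def gaussian_laplacian(img, sigma):
    """Laplacian computed with Gaussian derivative kernels (well-posed differentiation)."""
    return (gaussian_filter(img, sigma, order=(2, 0)) +
            gaussian_filter(img, sigma, order=(0, 2)))

def deblur(img, t, sigma=1.0, order=2):
    """Truncated Taylor expansion of linear scale-space in the negative scale
    direction: L(s - t) ~ L - t*Lap(L) + (t^2/2)*Lap(Lap(L)) - ..., using the
    diffusion equation dL/ds = Lap(L) to replace scale derivatives by spatial
    Laplacians. `t`, `sigma` and `order` are illustrative assumptions."""
    out = np.asarray(img, dtype=float).copy()
    term = np.asarray(img, dtype=float)
    for k in range(1, order + 1):
        term = gaussian_laplacian(term, sigma)   # k-th iterated (regularized) Laplacian
        out += ((-t) ** k / factorial(k)) * term
    return out
```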
Statistical Methods I
Variability and covariability in magnetic resonance functional neuroimaging
Nicholas Lange
Geometric statistical reasoning is useful for addressing variability and covariability in quantitative analyses of functional neuroimaging experiments. General image smoothness, i.e., intrinsic temporal and spatial autocorrelation, must be accommodated adequately in order to obtain reliable statistical inferences and meaningful practical conclusions. Several exploratory displays and models for such images in the temporal, spatial and spectral domains are summarized. These tools are applicable in functional magnetic resonance imaging, positron emission tomography and other modalities that produce spatial time series. Construction of an objective binary mask that excludes irrelevant voxels from the search volume increases statistical power. Further improvements in sensitivity and consistency are possible when intrasubject replications are available. A recent experiment that uses functional magnetic resonance imaging to detect focal activations significantly cross-correlated with a designed mental arithmetic task demonstrates the utility of these techniques.
Priors on scale-space templates
Alyson G. Wilson, Valen E. Johnson
Much of the Bayesian work in image analysis has focused on the incorporation of vague prior knowledge about the true image into the analysis and on the calculation of appropriate estimates of the resulting posterior distribution. However, in the field of medical imaging, there is a need to incorporate more specific prior information. This paper discusses various models for shape deformation and how they can be applied to the specification of priors on scale-space templates. A new model will be proposed that accounts for features at multiple spatial resolutions and the qualitative spatial relationships among those features.
Statistical methods for analysis of coordination of chest wall motion using optical reflectance imaging of multiple markers
C. M. Kenyon, R. H. Ghezzo, S. J. Cala, et al.
To analyze coordination of chest wall motion we have used principal component analysis (PCA) and multiple regression analysis (MRA) with respect to spirometry on the displacements of 93 optical reflective markers placed upon the chest wall (CW). Each marker is tracked at 10 Hz with an accuracy of 0.2 mm in each spatial dimension using the ELITE system (IEEE Trans. Biomed. Eng. 11:943-949, 1985). PCA enables the degree of linear coordination between all of the markers to be assessed using the eigenvectors and eigenvalues of the covariance matrix of the marker displacements in each dimension against time. Thus the number of linear degrees of freedom (DOF) which contribute more than a particular amount to the total variance can be determined and analyzed. MRA with respect to spirometrically measured lung volume changes enables identification of the CW points whose movement correlates best with lung volume. We have used this analysis to compare a quiet breathing sequence with one where tidal volume was increased fourfold involuntarily and show that the number of DOF with eigenvalues accounting for >5% of the covariance increased from 2 to 3. Also the point whose movement correlated best with lung volume changed from halfway down the lower costal margin to a more lateral point at the level of the bottom of the sternum. This quantification of CW coordination may be useful in analysis and staging of many respiratory disorders and is applicable to any nonrigid body motion where points can be tracked.
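The degrees-of-freedom count described above reduces to an eigenanalysis of the marker-displacement covariance matrix. A small numpy sketch, with an assumed data layout, could be:

```python
import numpy as np

def motion_dof(displacements, threshold=0.05):
    """Principal component analysis of chest-wall marker motion.
    `displacements`: (n_frames, n_markers * n_dims) array, each row the stacked
    marker displacements at one time sample (this layout is an assumption).
    Returns the covariance eigenvalues and the number of components that each
    explain more than `threshold` of the total variance."""
    X = displacements - displacements.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]   # descending order
    frac = eigvals / eigvals.sum()
    return eigvals, int((frac > threshold).sum())
```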
Point set pattern matching using the Procrustean metric
Jonathan Phillips
A fundamental problem in computer vision is to determine whether an approximate version of a geometric pattern P occurs in an observed set of points B. The pattern and the background are modeled as point sets P = {p_1, ..., p_m} and B = {b_1, ..., b_n} on the line or in the plane. We wish to find a transformation T, from a family of transformations, such that the distance between T(P) and B is minimized. The distance between T(P) and B is the sum of the squared distances between each T(p_i) and the closest point in B. This is the Procrustean metric where the set of allowable mappings between P and B is the space F of all functions from P into B. The algorithms in this paper also apply when the metric is the sum of the distances between points in P and B. We present algorithms that minimize the Procrustean metric for the following families of transformations: translations in R^1, translations in R^2, and combined translations and rotations in R^2. We prove that fixed point algorithms for computing the Procrustean metric converge to a fixed point and show a worst case lower bound on the number of fixed points.
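For the simplest family (translations in R^2), the fixed-point idea can be sketched as alternating nearest-point assignment with a least-squares translation update; this is an illustration of the general scheme, not the paper's exact algorithm or its complexity analysis:

```python
import numpy as np

def procrustes_translation(P, B, iters=100):
    """Fixed-point search for the translation t minimizing the Procrustean metric
    sum_i min_j ||(p_i + t) - b_j||^2, alternating nearest-point assignment with
    a least-squares translation update. P: (m, 2) pattern, B: (n, 2) background."""
    t = B.mean(axis=0) - P.mean(axis=0)                 # centroid alignment as a start
    for _ in range(iters):
        d2 = (((P + t)[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        nearest = B[d2.argmin(axis=1)]                  # closest background point to each p_i + t
        t_new = (nearest - P).mean(axis=0)              # optimal translation for this assignment
        if np.allclose(t_new, t):
            break
        t = t_new
    cost = (((P + t)[:, None, :] - B[None, :, :]) ** 2).sum(-1).min(axis=1).sum()
    return t, cost
```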
Statistical Methods II
Study of statistical methods applied in the spatial, wavelet and Fourier domain to enhance and analyze group characteristics of images: application to positron emission tomography brain images
Daniel E. Rio, Robert R. Rawlings, Urs E. Ruttimann, et al.
Statistical methods in the spatial, wavelet and Fourier domain were applied to two groups of subjects imaged by PET. Furthermore, simulated PET images were created to study the behaviour of these tests under restricted conditions. In particular, a rigorous statistical model in the Fourier domain was used to study general properties of group images, image enhancement and discrimination as it pertains to classification. In the spatial domain, detection of localized differences between groups is presented by applying the recent extension of the theory of Gaussian random fields to medical imaging. Finally, comparisons are made of the Fourier, spatial and wavelet domain methods for detection of localized differences between groups.
Simulated phantom images for optimizing wavelet-based image processing algorithms in mammography
Yunong Xing, Walter Huda, Andrew F. Laine, et al.
Image processing techniques using wavelet signal analysis have shown some promise in mammography. It is desirable, however, to optimize these algorithms before subjecting them to clinical evaluation. In this study, computer simulated images were used to study the significance of all the parameters available in a multiscale wavelet image processing algorithm designed to enhance mammograms. The simulated images had a Gaussian-shaped signal in half of the regions of interest and included added random noise. Signal intensity and noise levels were varied to determine the detection threshold contrast-to-noise ratio (CNR). An index given by the ratio of output to input contrast-to-noise ratios was used to optimize a wavelet-based image processing algorithm. Computed CNRs were generally found to correlate well with signal detection by human observers in both the original and processed images. Use of simulated phantom images enabled the parameters associated with multiscale wavelet-based processing techniques to be optimized.
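As an illustration of the kind of simulation used, a single region of interest with a Gaussian-shaped signal and added noise, together with a simple CNR measurement, might be generated as follows (all numerical values are assumptions, not those of the study):

```python
import numpy as np

def simulated_roi_cnr(signal_amp, noise_sigma, size=64, fwhm=8.0, seed=0):
    """Generate one region of interest containing a Gaussian-shaped signal plus
    white noise, then measure its contrast-to-noise ratio from small signal and
    background patches. All numerical values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[:size, :size] - size / 2.0
    sigma = fwhm / 2.355                                     # FWHM -> standard deviation
    roi = signal_amp * np.exp(-(x**2 + y**2) / (2 * sigma**2))
    roi += rng.normal(0.0, noise_sigma, roi.shape)
    c = size // 2
    signal_patch = roi[c - 2:c + 3, c - 2:c + 3].mean()      # centre of the Gaussian
    background = roi[:10, :10]                               # corner, far from the signal
    return (signal_patch - background.mean()) / background.std()
```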
Probabilistic constraint network representation of biological structure
Russ B. Altman
A constraint satisfaction paradigm is useful for modeling uncertain biological structure. Under this paradigm, we begin with a general model of a biological structure with a set of structural parameters and their uncertainty. Any new information about the structure is considered a constraint on the values of these parameters. The goal is to combine the initial model with the constraints to find a solution that is compatible with both. In this paper, we describe the basic notions of a constraint satisfaction problem and describe a method for representing biological structure that is based on the principles of Bayesian probability, and formulated as a constraint satisfaction problem. Biological structures are modeled using parameters that are assumed to be normally distributed, with a mean and a variance. We illustrate the application of this method to two different types of biological structural calculations: one in which there is a weak prior model and a large amount of data, and one in which there is a strong prior model and a relatively small amount of data. In each case, the method performs well, and produces not only good estimates of mean structure, but also a useful representation of the uncertainty in the estimate.
Reconstruction I
Overview of reconstruction algorithms for exact cone-beam tomography
Rolf Clackdoyle, Michel Defrise
An overview of the current state of cone-beam tomography using 'exact' analytic methods is presented. The fundamental theories of Smith and Grangeat are described as the even and odd parts of a generalized unifying formulation. Various versions of the orbit condition for data sufficiency are reviewed, with examples to illustrate their relative restrictiveness and applicability to situations with truncated projection data. Existing strategies for reconstruction algorithms, and methods of handling redundant data, are summarized. Finally, a brief discussion of 'equivalent methods' is included, with a demonstration that Tuy's inversion formula is equivalent to Grangeat's method.
Statistical model for tomographic reconstruction methods using spline functions
Habib Benali, Jeanpierre V. Guedon, Irene Buvat, et al.
The conventional approach to tomographic reconstruction in the presence of noise consists in finding some compromise between the likelihood of the noisy projections and the expected smoothness of the solution, given the ill-posed nature of the reconstruction problem. Modelling of noise properties is usually performed in iterative reconstruction schemes. In this paper, an analytical approach to reconstruction from noisy projections is proposed. A statistical model is used to separate the relevant part of the projections from noise before the reconstruction. As reconstruction from sampled noise-free projections is still an ill-posed problem, a continuity assumption regarding the object to be reconstructed is also formulated. This assumption allows us to derive a spline filtered backprojection in order to invert the Radon operator. Preliminary results show the value of combining continuity assumptions with noise modelling in an analytical reconstruction procedure.
Computerized tomographic angiography image segmentation for 3D-volume reconstruction
A computerized scheme for the automated segmentation of contrast-enhanced arteries is developed for computerized tomographic angiography (CTA) data. Segmentation is performed with two-dimensional (2D) images on a slice-by-slice basis. Image processing techniques include gray-level thresholding, eight-point connectivity tracking, region growing, moment analysis and morphological erosion. The results enable the generation of separate three-dimensional (3D) displays of both vascular and non-vascular structures. The method has been applied to several clinical cases and has shown great promise.
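A single 2D slice of such a pipeline might be sketched with standard tools as below; the threshold, minimum component size and erosion depth are illustrative assumptions rather than the values used in the paper:

```python
import numpy as np
from scipy import ndimage

def segment_vessels_slice(slice_img, threshold=150, min_size=20, erosion_iters=1):
    """One slice of a CTA segmentation pipeline of the kind described above:
    grey-level thresholding of the contrast-enhanced lumen, 8-connected
    component labelling, removal of small components, and a light morphological
    erosion. The threshold and size values are illustrative, not the paper's."""
    mask = slice_img > threshold
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))      # 8-point connectivity
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_size) + 1)   # drop tiny components
    return ndimage.binary_erosion(keep, iterations=erosion_iters)
```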
Reconstruction II
Ultrasonic reflection tomography: specific problems and adapted solutions
Sewa Enyonam Mensah, J. P. Lefebvre
Ultrasonic reflection tomography borrows from echography its fundamental physical basis (exploitation of the echoes diffracted by the imaged medium) and from X-ray tomography its numerical reconstruction procedure. The method results from a linearization of the inverse problem, justified from an acoustical point of view by the very small inhomogeneities of biological media. One can show that the inverse problem reduces to a Fourier synthesis problem based on lacunary data, since the measured spectra are angularly equidistributed slices of the Fourier plane. The spectral extent of these cuts is conditioned by the frequency band of the echograms; that is, the high frequencies of the image correspond to the high temporal frequencies of the signals. The two problems raised are, first, the restoration of high frequencies (in the limit, the extrapolation of the analyzed band), which directly conditions the resolution abilities of the instrument, and second, the angular interpolation needed to reduce reconstruction noise. Concerning the last point, we have developed a nonlinear filter operating on the data which contribute (in terms of energy) to the reconstruction of a given pixel. We have shown that these data are distributed on the "contribution circle", where high-frequency components mainly characterize artefacts induced by the reconstruction (backprojection) procedure, while lower-frequency components result from scattering phenomena. This distinction between useful information and artefact is revealed through a model of the interactions between biological interfaces and finite-aperture ultrasonic beams. For the extrapolation of the band, which allows good image restitution, we have integrated a deconvolution procedure based on a second-order statistics filter. This enables us to reduce the input noise by means of a detection threshold. In addition, the in-line procedure implemented is well adapted to real-time applications. The performance achieved with our experimental tomograph is described through comparisons of the images obtained.
Application of a constrained optimization algorithm to limited-view tomography
Jesse Kolman, Waleed S. Haddad, Dennis M. Goodman, et al.
The quality of images reconstructed from projections obtained by transmission tomography depends on the range of angles over which measurements can be made as well as the number of projections. Conventional methods such as filtered backprojection suffer when the number of measurements is small, and methods such as ART produce noticeable artifacts when the angular range is limited. Another possible approach is the direct minimization of the squared error between the measurements and the projection of the reconstructed image onto the measurement space. Alternatively, the unfiltered backprojection of the data can be modeled as a linear blur of the desired image, and this blur can be removed with a deconvolution algorithm. One way to handle the latter approach is to minimize the squared error between the backprojection and the reconstructed image blurred by an appropriately chosen point spread function. These methods result in higher quality images when the angular range is limited and the number of projections is small. We use a conjugate gradient based constrained optimization algorithm to do the minimization. The available constraints on the variables are upper and lower bounds and a hyperplane constraint. Since the variables in this case are the image pixels, we can enforce known bounds on the pixel values, such as nonnegativity, as well as keep the sum of the pixels at its known value. These constraints greatly improve the reconstruction quality and increase the rate of convergence of the algorithm.
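The bound-constrained squared-error formulation can be illustrated with a simple projected-gradient sketch (the paper itself uses a conjugate-gradient-based optimizer, and the operator A here stands for whichever projection or blur model is chosen):

```python
import numpy as np

def bounded_least_squares(A, b, lower=0.0, upper=None, total=None, iters=500):
    """Minimize ||A x - b||^2 subject to elementwise bounds (and optionally a
    fixed pixel sum) by projected gradient descent; a plain sketch of the
    constrained formulation, not the paper's conjugate-gradient algorithm.
    A is the projection (or blur) matrix, b the stacked measurements."""
    x = np.zeros(A.shape[1])
    step = 0.9 / np.linalg.norm(A, 2) ** 2        # safe step from the largest singular value
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))        # gradient step on the squared error
        x = np.clip(x, lower, upper)              # enforce pixel bounds (e.g. nonnegativity)
        if total is not None:
            x = np.clip(x + (total - x.sum()) / x.size, lower, upper)  # sum constraint
    return x
```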
Tomographic image reconstruction and rendering with texture-mapping hardware
Stephen G. Azevedo, Brian K. Cabral, Jim Foran
The image reconstruction problem, also known as the inverse Radon transform, for x-ray computed tomography (CT) is found in numerous applications in medicine and industry. The most common algorithm used in these cases is filtered backprojection (FBP), which, while a simple procedure, is time-consuming for large images on any type of computational engine. Specially designed, dedicated parallel processors are commonly used in medical CT scanners, whose results are then passed to a graphics workstation for rendering and analysis. However, a fast direct FBP algorithm can be implemented on modern texture-mapping hardware in current high-end workstation platforms. This is done by casting the FBP algorithm as an image warping operation with summing. Texture-mapping hardware, such as that on the Silicon Graphics RealityEngine, shows around a 600-fold speedup of backprojection over a CPU-based implementation (a 100 MHz R4400 in our case). This technique has the further advantages of flexibility and rapid programming. In addition, the same hardware can be used for both image reconstruction and for volumetric rendering. Our technique can also be used to accelerate iterative reconstruction algorithms. The hardware architecture also allows more complex operations than straight-ray backprojection if they are required, including fan-beam, cone-beam, and curved ray paths, with little or no speed penalty.
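The 'warping plus summing' view of backprojection is easy to state in software, which is also how one would verify the hardware result; the following sketch performs unfiltered backprojection by rotating smeared projections (array shapes and normalization are illustrative):

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Backprojection written as warping plus summation: each projection is
    smeared into a constant image along its ray direction, warped (rotated) to
    the acquisition angle, and accumulated. This is the operation the paper maps
    onto texture-mapping hardware; here it runs in software for clarity.
    `sinogram` has one row of detector samples per angle."""
    n_angles, size = sinogram.shape
    recon = np.zeros((size, size))
    for proj, angle in zip(sinogram, angles_deg):
        smear = np.tile(proj, (size, 1))                        # constant along the rays
        recon += rotate(smear, angle, reshape=False, order=1)   # warp to the view orientation
    return recon * np.pi / (2 * n_angles)
```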
Poster Session
EM-MAP algorithm versus ARTUR: theoretical and practical comparisons
Pierre Malick Koulibaly, P. Charbonnier, Michel Barlaud, et al.
A new algorithm for SPECT reconstruction called ARTUR is proposed, which uses a Bayesian approach toward the maximum a posteriori estimator. It preserves discontinuities in the image through the use of regularization. We propose to compare it with one of the regularized versions of the iterative method commonly employed in SPECT, i.e., the maximum a posteriori expectation-maximization one-step-late method developed by Green.
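Green's one-step-late update, which the paper takes as the comparison method, has a compact form; the sketch below uses a generic quadratic smoothness prior purely for illustration, whereas ARTUR and the paper's MAP formulation use edge-preserving priors:

```python
import numpy as np

def osl_map_em(A, y, beta=0.01, iters=50, eps=1e-8):
    """One-step-late MAP-EM update: the prior gradient is evaluated at the
    current estimate and added to the EM denominator. A quadratic smoothness
    prior over neighbouring entries of the flattened image is used here purely
    as an illustration. A: system matrix (n_bins x n_pixels), y: measured counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                          # sensitivity image: sum_i a_ij
    kernel = np.array([0.5, 0.0, 0.5])            # average of the two flattened neighbours
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, eps)        # measured over forward-projected counts
        prior_grad = x - np.convolve(x, kernel, mode='same')
        x = x * (A.T @ ratio) / np.maximum(sens + beta * prior_grad, eps)
    return x
```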
Global approach to multivariate correlation analysis of brain positron emission tomographic images
Chulhee Lee, Michael A. Unser, Terence A. Ketter
In this paper, we propose a multivariate correlation analysis of PET brain images. PET images provide the means to understand the functionality of the brain. By examining the correlations between the PET images and external variables, such as emotional or psychosensory experiences, it is possible to determine which parts of the brain are related to the given external variables. So far, most correlation analyses have concentrated on one variable at a time. However, this type of univariate approach rapidly becomes impractical as the number of external variables increases because the analysis produces an overwhelming amount of data. In this paper, we extend the correlation analysis to multiple variables. The technique makes it possible to analyze a large number of variables and shows the importance of each variable.
Adaptive trimmed mean filter for computed tomographic imaging
The image quality of a computed tomography (CT) scan is frequently degraded by severe streaking artifacts resulting from excessive x-ray quantum noise. When this occurs, a patient has to be re-scanned at a higher x-ray technique to obtain an acceptable image for diagnosis. This approach results not only in unnecessary dosage to the patient, but also in a delayed patient diagnosis and a reduced patient throughput. In this paper, we propose an adaptive trimmed mean filter (ATMF) in Radon space to combat this problem. The ATMF is an extension of the existing alpha-inner mean filter in that both the sample size M and the trimming parameter alpha are selected based on the local statistics. In addition, the 2D ATMF is unsymmetrical, to adapt to the sampling pattern in Radon space. Phantom studies and clinical evaluations have shown that this type of filter is very effective in reducing or eliminating quantum-noise-induced artifacts. At the same time, the impact on the image spatial resolution has been kept to a minimum.
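A one-dimensional sketch of an alpha-trimmed (inner) mean filter with a variance-driven trimming fraction is given below; the adaptation rule, window size and trimming limit are illustrative stand-ins for the paper's local-statistics rules:

```python
import numpy as np

def adaptive_trimmed_mean(projection, window=5, max_trim=0.4):
    """Alpha-trimmed mean filtering of one projection (detector row) in Radon
    space: within each window the samples are sorted, a fraction is trimmed from
    both ends, and the remainder averaged. Here the trimming fraction grows with
    the local variance as a stand-in for the paper's adaptation rule."""
    proj = np.asarray(projection, dtype=float)
    half = window // 2
    padded = np.pad(proj, half, mode='edge')
    ref_var = proj.var() + 1e-12
    out = np.empty_like(proj)
    for i in range(proj.size):
        win = np.sort(padded[i:i + window])
        alpha = max_trim * min(1.0, win.var() / ref_var)   # trim harder where noise is high
        k = int(alpha * window)
        out[i] = win[k:window - k].mean()
    return out
```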
Efficient algorithm for diffuse edge detection
Tian-Hu Yu, Sanjit K. Mitra
In many digital images, edges do not have a step-like shape but appear as a ramp with a very low slope because of noise and blurring effects. In such cases, the white-noise model does not hold and the edges are usually called diffuse edges. This paper describes a new second-order gradient-based algorithm for the detection of diffuse edges. In gradient-based edge detection algorithms, such as the LoG filter, there are three basic operations: smoothing as a preprocessing operation to reduce noise effects, gradient generation as the main operation, and edge linking as a post-processing operation. We propose to use median filtering instead of Gaussian lowpass filtering to reduce the noise, then introduce a complementary operator of 2D averaging as a second-order gradient generator, and a simple thresholding algorithm to detect zero-crossings. The edge image is finally constructed from edge thinning and linking operations. Simulation results obtained using the proposed edge detection algorithm, in comparison with some other commonly used algorithms, are included.
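The three-stage scheme described above (median smoothing, a second-order gradient formed as the complement of 2D averaging, thresholded zero-crossings) might be sketched as follows, with illustrative filter sizes and threshold:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def diffuse_edges(img, median_size=5, avg_size=5, thresh=2.0):
    """Diffuse edge detection sketch: median smoothing for noise reduction, a
    second-order gradient formed as the complement of 2D averaging (image minus
    its local mean), and thresholded zero-crossing detection. Filter sizes and
    the threshold are illustrative assumptions; edge thinning/linking omitted."""
    smooth = median_filter(np.asarray(img, dtype=float), size=median_size)
    second = smooth - uniform_filter(smooth, size=avg_size)   # complement of 2D averaging
    edges = np.zeros(second.shape, dtype=bool)
    # Mark a zero crossing where the sign flips between neighbours and the jump
    # exceeds the threshold (horizontal and vertical neighbours).
    h = (np.sign(second[:, :-1]) != np.sign(second[:, 1:])) & \
        (np.abs(second[:, :-1] - second[:, 1:]) > thresh)
    v = (np.sign(second[:-1, :]) != np.sign(second[1:, :])) & \
        (np.abs(second[:-1, :] - second[1:, :]) > thresh)
    edges[:, :-1] |= h
    edges[:-1, :] |= v
    return edges
```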
CAMIS: clustering algorithm for medical image sequences using a mutual nearest neighbor criterion
Habib Benali, Irene Buvat, Frederique Frouin, et al.
We present a new clustering algorithm for medical image sequences (CAMIS). It combines criteria of spatial contiguity, signal evolution similarity, and the rule of mutual nearest neighbors. The statistical properties of the signal in the images (CT, MRI, nuclear medicine) are taken into account when choosing the dissimilarity index, which is given explicitly for scintigraphic images. The partition, into an unknown number of classes, is updated by merging and pruning clusters. The efficiency of CAMIS as the first step of factor analysis of medical image sequences has been tested using simulated scintigraphic images.
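The mutual-nearest-neighbour rule at the core of the merging step can be illustrated on its own; the sketch below performs one merging pass over cluster signatures and omits the spatial-contiguity criterion and the signal-model-specific dissimilarity used by CAMIS:

```python
import numpy as np

def merge_mutual_nearest(features):
    """One merging pass of a mutual-nearest-neighbour rule: clusters i and j are
    merged only if each is the other's nearest neighbour in feature space.
    `features`: (n_clusters, n_features) array of cluster signatures."""
    d = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                                 # nearest neighbour of each cluster
    labels = np.arange(len(features))
    for i, j in enumerate(nn):
        if nn[j] == i and i < j:                          # mutual nearest neighbours
            labels[labels == labels[j]] = labels[i]       # merge the two clusters
    return labels
```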
Incorporating semantics into 2D strings to recognize Candida strains and species
Mary Lou Dorf
This paper summarizes research performed to develop an image information system to aid in the recognition of Candida strains/species, to analyze trends in Candida outbreaks, and to trace mutations of Candida. The development of an image information system that will (1) incorporate a larger amount of knowledge extracted from an image, (2) allow for query by example, and (3) allow for efficient pattern matching requires the development of an efficient and effective means of representing the images. The result is the addition of semantics to the classical 2D string approach. Through this addition, queries are reduced to matching strings rather than matching multiple features of objects. This paper reports on background information concerning Candida and the extension of 2D strings to include attribute information and the addition of semantic operators to facilitate this expansion.
Adaptive unsupervised contextual Bayesian segmentation: application on images of blood vessel
Anrong Peng, Wojciech Pieczynski
Mixture estimation has been widely applied to unsupervised contextual Bayesian segmentation. We first present the algorithms which estimate distribution mixtures prior to contextual segmentation, such as expectation-maximization (EM), iterative conditional estimation (ICE), and their adaptive versions valid for nonstationary class fields. Upon removing the stationarity hypothesis, contextual segmentation can give much better results in certain cases. The results obtained attest to the suitability of the adaptive versions of EM and ICE, valid in the case of nonstationary random class fields. We then present our experience with the application of unsupervised contextual Bayesian segmentation to images of blood vessels.
Recovery of 3D deformable models from echocardiographic images
M. Neveu, D. Faudot, B. Derdouri
Our aim is to develop 3D models to recover 3D deformable solids from a few nonparallel 2D cross-sections. This problem is difficult when solids have time-evolving shapes with no special structure (no symmetry axis, for instance). Our application deals with cardiac ventricles, for which only three or four nonparallel 2D cross-sections are given from a classical echocardiographic examination. Three-dimensional reconstruction of a solid consists in model deformation: it implies a matching between the model and the data. This matching is generally global (or rigid) in a first step, then local (or elastic) in a second step. The first step roughly matches the model with the data. The second one performs a more accurate matching. In this paper, we describe 2D deformable structures, then 3D deformable models.
Impedance tomography using internal current density distribution measured by nuclear magnetic resonance
Eung Je Woo, Soo Yeol Lee, Chi Woong Mun
We have proposed a new method of image reconstruction in EIT (electrical impedance tomography). In EIT, boundary current and voltage measurements are usually used to provide information about the spatial distribution of electrical impedance or resistivity. One of the major problems in EIT has been the inaccessibility of internal voltage or current data for finding the internal impedance values. The new method uses internal current density data measured by an NMR imaging technique. By knowing the internal current density, we can improve the accuracy of the impedance images.