Registration of multiple video images to preoperative CT for image-guided surgery
Author(s):
Matthew J. Clarkson;
Daniel Rueckert;
Derek L.G. Hill;
David John Hawkes
In this paper we propose a method which uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope, with a field of view of approximately 110 X 80 mm. We have extended an existing information theoretical framework for 2D-3D registration, so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered these individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel techniques we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
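The core similarity measure described above, mutual information between a video frame and a rendering of the CT volume, can be estimated from a joint grey-level histogram. The sketch below is not the authors' implementation; it assumes two pre-computed, equally sized 2D arrays (hypothetical names `video` and `rendering`) and shows only the measure that a pose optimizer would maximize.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Estimate mutual information between two equally sized 2D images
    from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Hypothetical usage: 'video' and 'rendering' are 2D numpy arrays of the same
# shape; a pose optimizer would maximize this value over the six rigid-body
# parameters of the CT volume, summing it over several views for the
# simultaneous multi-view registration described in the abstract.
```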
Novel binning method for improved accuracy and speed of volume image coregistration using normalized mutual information
Author(s):
Jon J. Camp;
Richard A. Robb
There is a growing consensus that voxel-based mutual information measures hold great promise for fully automated multimodal image registration. We have found that image greyscale binning using a specific variation of contrast-limited histogram equalization (which we call histogram preservation) provides significant reduction of noise and spurious local maxima in the normalized mutual information function without causing significant displacement or smoothing of the global maximum. These effects are also relatively robust in the presence of image subsampling, so that accurate subpixel coregistration of typical medical volume images may be achieved in a few seconds by a very simple optima search algorithm based on a few thousand sampled voxels. In this paper, we illustrate these effects by presenting the results of random tests on patient data. Intramodal performance is evaluated by image self-registration using a variety of patient image volumes. Reregistration error is measured as the mean of the residual Euclidean displacement of the eight corner points of the image volumes after reregistration. The performance of histogram preservation prebinning is compared to linear prebinning, and the effect of image subsampling and number of bins on algorithm speed and accuracy is also assessed.
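The abstract does not give the exact "histogram preservation" recipe, so the sketch below uses a rank-based (equalized) prebinning as a rough stand-in, alongside plain linear prebinning and a normalized mutual information measure; the function names are hypothetical.

```python
import numpy as np

def _entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def normalized_mutual_information(a, b, bins=32):
    """NMI (Studholme form): (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    return (_entropy(pxy.sum(axis=1)) + _entropy(pxy.sum(axis=0))) / _entropy(pxy.ravel())

def linear_prebin(img, bins=32):
    """Plain linear requantization of the grey values into 'bins' levels."""
    lo, hi = img.min(), img.max()
    return np.floor((img - lo) / (hi - lo + 1e-12) * (bins - 1))

def equalized_prebin(img, bins=32):
    """Rank-based (histogram-equalized) binning, used here only as a
    stand-in for the paper's 'histogram preservation' variant."""
    ranks = np.argsort(np.argsort(img.ravel())).reshape(img.shape)
    return np.floor(ranks / ranks.size * bins).clip(0, bins - 1)
```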
Performance of 3D differential operators for the detection of anatomical point landmarks in MR and CT images
Author(s):
Thomas Hartkens;
Karl Rohr;
H. Siegfried Stiehl
Point-based registration of images generally depends on the extraction of suitable landmarks. Recently, different 3D operators have been proposed in the literature to detect anatomical point landmarks in 3D images. While the localization performance of 3D operators has already been investigated (e.g., Frantz et al.), studies on the detection performance of 3D operators are hardly known. In this paper, we investigate nine 3D differential operators for the detection of 3D point landmarks in MR and CT images. These operators are based on either first, second, or first and second order partial derivatives of an image. In our investigation we use measures which reflect different aspects of the detection performance of the operators. In the first part of the investigation, we analyze the number of corresponding detections in 3D tomographic images, and in the second part we use statistical measures to determine the detection performance w.r.t. certain landmarks. It turns out that (1) operators based on only first order partial derivatives of an image yield a larger number of corresponding points than the other operators and that (2) their performance on the basis of the statistical measures is better.
Detecting small anatomical change with 3D serial MR subtraction images
Author(s):
Mark Holden;
Erica R. E. Denton;
J. M. Jarosz;
T. C.S. Cox;
Colin Studholme;
David John Hawkes;
Derek L.G. Hill
Spoiled gradient echo volume MR scans were obtained from 5 growth hormone (GH) patients and 6 normal controls. The patients were scanned before treatment and after 3 and 6 months of GH therapy. The controls were scanned at similar intervals. A calibration phantom was scanned on the same day as each subject. The phantom images were registered with a 9 degree of freedom algorithm to measure scaling errors due to changes in scanner calibration. The second and third images were each registered with a 6 degree of freedom algorithm to the first (baseline) image by maximizing normalized mutual information, and transformed, with and without scaling error correction, using sinc interpolation. Each registered and transformed image had the baseline image subtracted to generate a difference image. Two neuro-radiologists were trained to detect structural change with difference images containing synthetic misregistration and scale changes. They carried out a blinded assessment of anatomical change for the unregistered; aligned and subtracted; and scale corrected, aligned and subtracted images. The results show a significant improvement in the detection of structural change and inter-observer agreement when aligned and subtracted images were used instead of unregistered ones. The structural change corresponded to an increase in brain: CSF ratio.
Mutual information matching and interpolation artifacts
Author(s):
Josien P.W. Pluim;
J. B. Antoine Maintz;
Max A. Viergever
Registration algorithms often require the estimation of grey values at image locations that do not coincide with image grid points. Because of the intrinsic uncertainty, the estimation process will invariably be a source of error in the registration process. For measures based on entropy, such as mutual information, an interpolation method that changes the amount of dispersion in the probability distributions of the grey values of the images will influence the registration measure. With two images that have equal grid distances in one or more corresponding dimensions, a large number of grid points can be aligned for certain geometric transformations. As a result, the level of interpolation is dependent on the image transformation and hence, so is the interpolation-induced change in dispersion of the histograms. When an entropy based registration measure is plotted as a function of transformation, it will show sudden changes in value for the grid-aligning transformations. Such patterns of local extrema impede the optimization process. More importantly, they rule out subvoxel accuracy. Interpolation-induced artifacts are shown to occur in registration of clinical images, both for trilinear and partial volume interpolation. Furthermore, the results suggest that improved registration accuracy for scale-corrected MR images may be partly accounted for by the inequality of grid distances that is a result of scale correction.
MRI-SPECT image registration using multiple MR pulse sequences to examine osteoarthritis of the knee
Author(s):
John Andrew Lynch;
Charles G. Peterfy;
David L. White;
Randall A. Hawkins;
Harry K. Genant
We have examined whether automated image registration can be used to combine metabolic information from SPECT knee scans with anatomical information from MRI. Ten patients, at risk of developing OA due to meniscal surgery, were examined. 99mTc methyldiphosphonate SPECT, T2-weighted fast spin echo (FSE) MRI, and T1-weighted, 3D fat-suppressed gradient recalled echo (SPGR) MRI images were obtained. Registration was performed using normalized mutual information. For each patient, FSE data was registered to SPGR data, providing a composite MRI image with each voxel represented by two intensities (ISPGR, IFSE). Modifications to the registration algorithm were made to allow registration of SPECT data (one intensity per voxel) to composite MRI data (2 intensities per voxel). Registration success was assessed by visual inspection of uptake localization over expected anatomical locations, and the absence of uptake over unlikely sites. Three patients were discarded from SPECT-MRI registration tests since they had metallic artifacts that prevented co-registration of MR data. Registration of SPECT to SPGR or FSE data alone proved unreliable, with less than 50% of attempts succeeding. The modified algorithm, treating co-registered SPGR and FSE data as a two-value-per-voxel image, proved most reliable, allowing registration of all patients with no metallic artifacts on MRI.
Comparison and evaluation of rigid and nonrigid registration of breast MR images
Author(s):
Daniel Rueckert;
Luke I. Sonoda;
Erica R. E. Denton;
S. Rankin;
Carmel Hayes;
Martin O. Leach;
Derek L.G. Hill;
David John Hawkes
In this paper we present a new approach for the non-rigid registration of contrast-enhanced breast MRI. A hierarchical transformation model of the motion of the breast has been developed. The global motion of the breast is modelled by an affine transformation while the local breast motion is described by a free-form deformation based on B-splines. Normalized mutual information is used as a voxel-based similarity measure which is insensitive to intensity changes as the result of the contrast enhancement. Registration is achieved by minimizing a cost function which represents a combination of the cost associated with the smoothness of the transformation and the cost associated with the image similarity. The algorithm has been applied to the fully automated registration of 3D breast MRI in volunteers and patients. In particular, we have compared the results of the proposed non-rigid registration algorithm to those obtained using rigid and affine registration techniques. The results clearly indicate that the non-rigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms.
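A minimal sketch of the kind of cost function described above, combining a voxel similarity term with a smoothness term over a deformation controlled by a coarse grid. A cubic interpolation of the control grid stands in for the paper's B-spline free-form deformation, and the helper names (`nmi`, `upsample`, `cost`) are assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def nmi(a, b, bins=32):
    """Normalized mutual information (Studholme form) from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    def ent(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (ent(pxy.sum(axis=1)) + ent(pxy.sum(axis=0))) / ent(pxy.ravel())

def upsample(field, shape):
    """Smoothly interpolate a coarse control-point field to image resolution
    (a cubic-spline stand-in for a B-spline tensor product)."""
    gy, gx = field.shape
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    return map_coordinates(field, [yy * (gy - 1) / (h - 1), xx * (gx - 1) / (w - 1)],
                           order=3, mode='nearest')

def cost(params, fixed, moving, grid_shape, lam=0.01):
    """Registration cost: negative NMI of the warped image plus a smoothness
    penalty on the control-point displacements."""
    disp = params.reshape((2,) + grid_shape)
    h, w = fixed.shape
    dy, dx = upsample(disp[0], (h, w)), upsample(disp[1], (h, w))
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(moving, [yy + dy, xx + dx], order=1, mode='nearest')
    smooth = sum(np.sum(g ** 2) for g in np.gradient(disp[0]) + np.gradient(disp[1]))
    return -nmi(fixed, warped) + lam * smooth

# Hypothetical usage: scipy.optimize.minimize(cost, np.zeros(2 * gy * gx),
# args=(fixed, moving, (gy, gx)), method='Powell') optimizes the control grid.
```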
Global optimization of weighted mutual information for multimodality image registration
Author(s):
Claudia E. Rodriguez-Carranza;
Murray H. Loew
Failure to align images accurately often is due to the optimization algorithms being trapped in local maxima or spurious global maxima of the mutual information function. Strategies contemplated to improve registration involve modifying the optimization scheme or the registration measure itself. We recently found that normalized mutual information (for 2D image registration) provides a larger capture range and that it is more robust, with respect to the optimization parameters, than the non-normalized measure. In this paper we assessed the utility of a stochastic global optimization technique for image registration using normalized and non-normalized mutual information. By conducting large-scale studies with patient data in 2D, we established a success rate baseline with the local optimizer only. Formal proof has not yet been found that incorporating the global optimizer does not impair performance. However, experiments to date indicate that its inclusion leads to better (i.e., higher probability of correct convergence) overall performance. Moreover, studies now underway show good effectiveness of our approach in a variety of 3D cases.
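The abstract does not detail the specific stochastic global optimizer, so the sketch below uses a simple multi-start scheme around a local optimizer as a stand-in: the local optimizer is run from several randomly perturbed starting points and the best result is kept. `cost` is assumed to return the negative (normalized) mutual information for a candidate transformation.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_register(cost, x0, spread, n_starts=20, seed=0):
    """Hedge against local optima of the similarity surface by running a
    derivative-free local optimizer from several perturbed starting points.
    'spread' sets the standard deviation of the random perturbations
    (a scalar or one value per transformation parameter)."""
    rng = np.random.default_rng(seed)
    best = minimize(cost, x0, method='Powell')
    for _ in range(n_starts):
        start = np.asarray(x0) + rng.normal(scale=spread, size=len(x0))
        res = minimize(cost, start, method='Powell')
        if res.fun < best.fun:          # keep the lowest negative similarity
            best = res
    return best
```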
Effect of vertebral surface extraction on registration accuracy: a comparison of registration results for iso-intensity algorithms applied to computed tomography images
Author(s):
Jeannette L. Herring;
Calvin R. Maurer Jr.;
Diane M. Muratore;
Robert L. Galloway Jr.;
Benoit M. Dawant
This paper presents a comparison of iso-intensity-based surface extraction algorithms applied to computed tomography (CT) images of the spine. The extracted vertebral surfaces are used in surface-based registration of CT images to physical space, where our ultimate goal is the development of a technique that can be used for image-guided spinal surgery. The surface extraction process has a direct effect on image-guided surgery in two ways: the extracted surface must provide an accurate representation of the actual surface so that a good registration can be achieved, and the number of polygons in the mesh representation of the extracted surface must be small enough to allow the registration to be performed quickly. To examine the effect of the surface extraction process on registration error and run time, we have performed a large number of experiments on two plastic spine phantoms. Using a marker-based system to assess accuracy, we have found that submillimetric registration accuracy can be achieved using a point-to-surface registration algorithm with simplified and unsimplified members of the general class of iso-intensity-based surface extraction algorithms. This research has practical implications, since it shows that several versions of the widely available class of intensity-based surface extraction algorithms can be used to provide sufficient accuracy for vertebral registration. Since intensity-based algorithms are completely deterministic and fully automatic, this finding simplifies the pre-processing required for image-guided back surgery.
Level-set surface segmentation and registration for computing intrasurgical deformations
Author(s):
Michel A. Audette;
Terence M. Peters
We propose a method for estimating intrasurgical brain shift for image-guided surgery. This method consists of five stages: the identification of relevant anatomical surfaces within the MRI/CT volume, range-sensing of the skin and cortex in the OR, rigid registration of the skin range image with its MRI/CT homologue, non-rigid motion tracking over time of cortical range images, and lastly, interpolation of this surface displacement information over the whole brain volume via a realistically valued finite element model of the head. This paper focuses on the anatomical surface identification and cortical range surface tracking problems. The surface identification scheme implements a recent algorithm which imbeds 3D surface segmentation as the level-set of a 4D moving front. A by-product of this stage is a Euclidean distance and closest point map which is later exploited to speed up the rigid and non-rigid surface registration. The range-sensor uses both laser-based triangulation and defocusing techniques to produce a 2D range profile, and is linearly swept across the skin or cortical surface to produce a 3D range image. The surface registration technique is of the iterative closest point type, where each iteration benefits from looking up, rather than searching for, explicit closest point pairs. These explicit point pairs in turn are used in conjunction with a closed-form SVD-based rigid transformation computation and with fast recursive splines to make each rigid and non-rigid registration iteration essentially instantaneous. Our method is validated with a novel deformable brain-shaped phantom, made of Polyvinyl Alcohol Cryogel.
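The "closed-form SVD-based rigid transformation computation" used inside each ICP iteration is a standard building block (Arun/Umeyama style); a minimal sketch follows. It assumes the closest-point lookup has already produced paired N x 3 arrays `src` and `dst`.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Closed-form least-squares rigid transform (R, t) mapping the N x 3
    point set 'src' onto its paired set 'dst'."""
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Each ICP iteration would look up closest points (src_i, dst_i), call
# rigid_transform_svd, apply (R, t) to the moving surface, and repeat.
```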
Motion analysis of artery pulsation in neonatal cranial ultrasonogram
Author(s):
Masayuki Fukuzawa;
Hiroki Kubo;
Yoshiki Kitsunezuka;
Masayoshi Yamada
Using an optical-flow technique, we have quantitatively analyzed tissue motion due to artery pulsation accompanying blood flow in a neonatal cranial ultrasonogram. The tissue motion vector was successfully calculated at each pixel in a series of echo images (32 frames, 640 X 480 pixels/frame, 8 bits/pixel, 33 ms/frame) taken in the brightness mode by using an ultrasound probe of 5.0 MHz. The optical-flow technique used was a gradient method combined with local optimization over 3 X 3 neighborhoods. From 2D mappings of tissue motion vectors and their time-sequence variations, it was found that the tissue motion due to artery pulsation revealed periodic to-and-fro motion synchronized with the heartbeat (300 - 500 ms), clearly distinguishable from unwanted non-periodic motion due to the sway of the neonatal head during diagnosis.
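A gradient-based flow estimate with least-squares optimization over a small neighborhood, as described above, is essentially the Lucas-Kanade scheme; the sketch below is a generic version of that idea for two consecutive frames, not the authors' code, and the conditioning threshold is an arbitrary choice.

```python
import numpy as np

def local_optical_flow(frame0, frame1, win=3):
    """Gradient-based optical flow with a least-squares fit over a
    win x win neighborhood, producing one (vx, vy) vector per pixel."""
    fy, fx = np.gradient(frame0.astype(float))     # spatial derivatives
    ft = frame1.astype(float) - frame0.astype(float)  # temporal derivative
    h, w = frame0.shape
    r = win // 2
    flow = np.zeros((h, w, 2))
    for y in range(r, h - r):
        for x in range(r, w - r):
            Ix = fx[y - r:y + r + 1, x - r:x + r + 1].ravel()
            Iy = fy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            It = ft[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([Ix, Iy], axis=1)
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e6:          # skip ill-conditioned patches
                flow[y, x] = np.linalg.solve(ATA, -A.T @ It)
    return flow
```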
Full-leg/full-spine image stitching: a new and accurate CR-based imaging technique
Author(s):
Piet Dewaele;
Pieter Vuylsteke;
S. Van de Velde;
Emile P. Schoeters
This paper introduces a new imaging modality based on existing computed radiography technology to form a total body part image from a series of overlapping subimages. The subimages are exposed simultaneously but digitized individually as normal single-exposure examinations. During exposure, a rectangular grid of attenuating lines is present in the X-ray path to aid in the reconstruction process. A digital image-processing algorithm has been developed to assemble a composite image from the subimages, showing perfect geometric continuity of body parts, a technique termed image `stitching'.
Weighted least squares for point-based registration in digital subtraction angiography (DSA)
Author(s):
Thorsten M. Buzug;
Juergen Weese;
Cristian Lorenz
Four main problems have to be solved for template matching based motion compensation in digital subtraction angiography. All the problems are concerned with the similarity measure, which is the objective function to be optimized within the template matching procedure: (1) Due to the injection of contrast agent, mask and contrast image are dissimilar, which degrades the quality of some similarity measures. (2) Homogeneous areas in the fluoroscopic images lead to an insufficient quality of the similarity measure. (3) Shift-invariant structures in fluoroscopic images (e.g. straight lines or edges) lead to a ridge-like objective function that potentially gives wrong results from the optimization procedure: if a ridge-like structure of the objective function is present, movements along this direction cannot be detected. Therefore, the local accuracy of the estimated motion component parallel to these directions must be ranked low while the corresponding orthogonal direction must be ranked high. Here we present a technique to obtain local, directional rankings from the shape of the objective function, which especially improves the quality of DSA images obtained from peripheral areas like the shinbone. (4) Inhomogeneous movements inside a single template lead to ambiguous or even irrelevant optima of the objective function: this problem is outside the scope of the present paper and will therefore not be addressed here. The performance of the point-based registration using different weightings in the least squares procedure (equal, isotropic, and anisotropic weighting) has been compared. The isotropic and anisotropic weightings turned out to be superior to the equally weighted least squares procedure.
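The paper derives directional weights from the shape of the similarity surface itself; the sketch below only illustrates the weighted least-squares pooling step with 2 x 2 directional weight matrices, and uses the template's structure tensor as a simple stand-in for such a weight. All names are hypothetical.

```python
import numpy as np

def weighted_translation(displacements, weights):
    """Pool per-template displacement estimates d_i (N x 2) using 2 x 2
    directional weight matrices W_i (N x 2 x 2).  Minimizes
    sum_i (d_i - t)^T W_i (d_i - t), so directions judged unreliable
    (e.g. along a straight edge) contribute little to the estimate t."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for d, W in zip(displacements, weights):
        A += W
        b += W @ d
    return np.linalg.solve(A, b)

def directional_weight(grad_y, grad_x):
    """A simple anisotropic weight for one template: its structure tensor,
    which is small along shift-invariant structures.  This is only a
    stand-in for the objective-function-shape ranking used in the paper."""
    return np.array([[np.sum(grad_x * grad_x), np.sum(grad_x * grad_y)],
                     [np.sum(grad_x * grad_y), np.sum(grad_y * grad_y)]])
```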
Bayesian inference and Markov chain Monte Carlo in imaging
Author(s):
David M. Higdon;
James E. Bowsher
Over the past 20 years, many problems in Bayesian inference that were previously intractable have become fairly routine to handle using a computationally intensive technique for exploring the posterior distribution called Markov chain Monte Carlo (MCMC). Primarily because of insufficient computing capabilities, most MCMC applications have been limited to rather standard statistical models. However, with the computing power of modern workstations, a fully Bayesian approach with MCMC is now possible for many imaging applications. Such an approach can be quite useful because it leads not only to `point' estimates of an underlying image or emission source, but also to a means of quantifying uncertainties regarding the image. This paper gives an overview of Bayesian image analysis and focuses on applications relevant to medical imaging. Particular focus is on prior image models and on outlining MCMC methods for these models.
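A minimal random-walk Metropolis sampler illustrates the MCMC machinery referred to above; for real image models the proposal and likelihood would of course be far more structured. `log_post` is any user-supplied unnormalized log-posterior.

```python
import numpy as np

def metropolis(log_post, x0, step=0.1, n_samples=5000, seed=0):
    """Random-walk Metropolis sampler: draws from a posterior given only its
    (unnormalized) log density.  Summaries of the returned samples provide
    both point estimates and uncertainty quantification."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.normal(scale=step, size=x.shape)  # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:         # accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Hypothetical toy usage: for a 1D Gaussian posterior centred at 2,
# draws = metropolis(lambda x: -0.5 * np.sum((x - 2.0) ** 2), x0=[0.0])
# np.mean(draws) and np.std(draws) then approximate the posterior mean and spread.
```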
Fast automatic segmentation of the brain in T1-weighted volume MRI data
Author(s):
Louis Lemieux;
Georg Hagemann;
Karsten Krakow;
Friedrich G. Woermann
A fully automated algorithm was developed to segment the brain from T1-weighted volume MR images. Automatic non-uniformity correction is performed prior to segmentation. The segmentation algorithm is based on automatic thresholding and morphological operations. It is fully 3D and therefore independent of scan orientation. The validity and performance of the algorithm were evaluated by comparing the automatically calculated brain volume with semi-automated measurements in 10 subjects. The amount of non-brain tissue included in the automatic segmentation was calculated. To test reproducibility, the brain volume was calculated in repeated scans in another 10 subjects. The mean and standard deviation of the difference between the semi-automated and automated measurements were 0.6% and 2.8% of the mean brain volume, respectively, which is within the inter-observer variability of the semi-automatic method. The mean amount of non-brain tissue contained in the segmented brain mask was 0.3% of the mean brain volume, with a standard deviation of 0.2%. The mean and standard deviation of the difference between the total volumes calculated from repeated scans were 0.4% and 1.2% of the mean brain volume, respectively.
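A rough sketch of a threshold-plus-morphology brain extraction of the kind described, using SciPy's ndimage; the published method additionally applies non-uniformity correction and chooses its thresholds automatically, so the global mean threshold and iteration counts below are placeholders.

```python
import numpy as np
from scipy import ndimage as ndi

def brain_mask(volume, threshold=None):
    """Rough 3D brain extraction: global threshold, morphological opening to
    break thin connections to the scalp, selection of the largest connected
    component, then closing and hole filling."""
    if threshold is None:
        threshold = volume.mean()                     # crude automatic threshold
    mask = volume > threshold
    struct = ndi.generate_binary_structure(3, 1)
    mask = ndi.binary_opening(mask, struct, iterations=2)
    labels, n = ndi.label(mask)
    if n == 0:
        return mask
    sizes = ndi.sum(mask, labels, index=range(1, n + 1))
    mask = labels == (np.argmax(sizes) + 1)           # keep the largest component
    mask = ndi.binary_closing(mask, struct, iterations=3)
    return ndi.binary_fill_holes(mask)
```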
Hierarchical Markov random field modeling for mammographic structure segmentation using multiple spatial and intensity image resolutions
Author(s):
Rene Vargas-Voracek;
Carey E. Floyd Jr.
A hierarchical Markov random field (MRF) model for mammographic structure segmentation using multiple spatial and intensity image resolutions is proposed. The general image model is formed by a sequence of representations at different spatial and intensity scales. Through the hierarchical structure of the MRF model, components at different local spatial resolutions are used to condition the corresponding intensity resolution and the spatial distribution of the intensity components. As a first step, only the breast skin edge and non-fat breast parenchyma (Cooper's ligaments, blood vessels and fibroglandular tissue) have been included in the model and implemented. Three basic priors for the local spatial intensity distribution (texture) are defined. An iterated conditional modes (ICM) optimization procedure is implemented; the lower resolution representations are used sequentially to form the initial image configurations for the ICM procedure. The proposed approach was tested using 100 digitized mammograms (at a resolution of 100 microns and 12 bits per pixel). The mammograms are from three different views and different breast parenchyma densities. Results for breast skin edge and breast parenchyma were obtained and evaluated visually. For all cases, the location of the three possible structures (skin, parenchyma and background) was identified correctly.
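The ICM update at a single resolution can be written compactly for a Potts-like model: each pixel takes the label minimizing a data term plus a penalty on disagreeing neighbours. This is only the generic optimization step, not the paper's hierarchical multi-resolution model; `means` and `beta` are assumed inputs.

```python
import numpy as np

def icm_segmentation(image, means, beta=1.0, n_iter=5):
    """Iterated conditional modes for a simple Potts-like MRF: each pixel is
    assigned the label minimizing a squared-distance data term plus
    beta times the number of disagreeing 4-neighbours."""
    labels = np.argmin((image[..., None] - np.asarray(means)) ** 2, axis=-1)
    h, w = image.shape
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                best, best_cost = labels[y, x], np.inf
                for k in range(len(means)):
                    cost = (image[y, x] - means[k]) ** 2
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != k:
                            cost += beta
                    if cost < best_cost:
                        best, best_cost = k, cost
                labels[y, x] = best
    return labels
```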
Probabilistic multiobject deformable model for MR/SPECT brain image registration and segmentation
Author(s):
Christophoros Nikou;
Fabrice Heitz;
Jean-Paul Armspach
A probabilistic deformable model for the representation of brain structures is described. The statistically learned deformable model represents the relative location of head (skull and scalp) and brain surfaces in MR/SPECT image pairs and accommodates the significant variability of these anatomical structures across different individuals. To provide a training set, a representative collection of 3D MRI volumes of different patients has first been registered to a reference image. The head and brain surfaces of each volume are parameterized by the amplitudes of the vibration modes of a deformable spherical mesh. For a given MR image in the training set, a vector containing the largest vibration modes describing the head and the brain is created. This random vector is statistically constrained by retaining the most significant variation modes of its Karhunen-Loeve expansion on the training population. By these means, both head and brain surfaces are deformed according to the anatomical variability observed in the training set. Two applications of the probabilistic deformable model are presented: the deformable model-based registration of 3D multimodal (MR/SPECT) brain images and the segmentation of the brain from MRI using the probabilistic constraints embedded in the deformable model. The multi-object deformable model may be considered as a first step towards the development of a general purpose probabilistic anatomical atlas of the brain.
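The Karhunen-Loeve (principal component) constraint on the training vectors can be sketched as follows, assuming each training shape has already been reduced to a fixed-length vector of vibration-mode amplitudes; the helper names are hypothetical.

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.95):
    """Statistical shape model from training shapes (N x d, one row per
    subject): mean plus the leading Karhunen-Loeve (principal) modes."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # SVD of the centred data gives the eigenmodes of the sample covariance
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    return mean, Vt[:k], var[:k]

def synthesize(mean, modes, variances, b):
    """New shape instance from mode weights b, typically constrained to a
    few standard deviations to stay within the observed variability."""
    return mean + (b * np.sqrt(variances)) @ modes
```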
Ultrafast user-steered image segmentation paradigm: live-wire-on-the-fly
Author(s):
Alexandre Xavier Falcao;
Jayaram K. Udupa;
Flavio K. Miyazawa
In the past, we have presented three user-steered image segmentation paradigms: live wire, live lane, and the 3D extension of the live-wire method. In this paper, we introduce an ultra-fast live-wire method, referred to as live-wire-on-the-fly, for further reducing the user's time compared to live wire. For both approaches, given a slice and a 2D boundary of interest in this slice, we translate the problem of finding the best boundary segment between any two points specified by the user on this boundary to the problem of finding the minimum-cost path between two vertices in a weighted and directed graph. The entire 2D boundary is identified as a set of consecutive boundary segments, each specified and detected in this fashion. A drawback of live wire is that the speed of optimal path computation depends on image size, compromising the overall segmentation efficiency. In this work, we solve this problem by exploiting some properties of graph theory to avoid unnecessary minimum-cost path computation during segmentation. Based on 164 segmentation experiments from an actual medical application, we demonstrate that live-wire-on-the-fly is about 1.5 to 33 times faster than live wire for actual segmentation, although the pure computational part alone is found to be over a hundred times faster.
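The minimum-cost path computation that live wire relies on is Dijkstra's algorithm on the pixel graph; a plain (not on-the-fly) version is sketched below for an 8-connected 2D cost image. The speed-up that avoids redundant path computations is the paper's contribution and is not shown.

```python
import heapq
import numpy as np

def min_cost_path(cost, start, end):
    """Dijkstra's algorithm on the 8-connected pixel graph.  'cost' is a 2D
    array of per-pixel weights (low on object boundaries); the returned path
    is the optimal boundary segment between two user-selected points."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue                                   # stale queue entry
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny, nx]
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
    path, node = [], end
    while node != start:                               # backtrack to the start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]
```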
Statistical shape description using Gaussian Markov random fields and its application to medical image segmentation
Author(s):
Anke Neumann;
Cristian Lorenz
This paper introduces global shape modeling by means of Markov random fields and describes its use in medical image segmentation. The key point positions representing the shape of an object are assumed to be multivariate Gaussian distributed with a certain covariance structure which relates to the Markov property with respect to some neighborhood system. Since the neighborhood of a key point potentially contains both nearby and distant key points, global key point interaction is not only realized by propagated local key point interaction, but also directly by interaction between distant key points. We restrict ourselves to the subclass of decomposable models, since a closed form expression for the maximum likelihood estimate of the covariance matrix from a set of training shapes is available in this case. The neighborhood system is either defined a priori or estimated. Our model building procedure is demonstrated for the 2D shape of a spinal vertebra. The suitability of the derived shape models is investigated by generating new shape samples according to the models. Finding the object's boundary in a grey value image is formulated as maximum a posteriori estimation incorporating the shape model as the prior model. Our model-based segmentation procedure includes an easy and effective interactive improvement of the segmentation outcome.
Topological refinement of volumetric data
Author(s):
David W. Shattuck;
Richard M. Leahy
We present a method for enforcing a topological constraint, homeomorphism to a sphere, on a set of volumetric data. A graph-based topological representation is created from voxel connectivity within the volume and automatically edited to have the desired topology. The volume is forced to match this structure, resulting in a topologically spherical surface. The method is fully automated and has the advantage of operating on the volume data prior to tessellation, significantly reducing computational costs compared to mesh-based methods. We demonstrate the method on a simple test volume and on the surface of a cerebral cortex obtained from a magnetic resonance image volume.
Development and evaluation of a semiautomatic 3D segmentation technique of the carotid arteries from 3D ultrasound images
Author(s):
Jeremy D. Gill;
Hanif M. Ladak;
David A. Steinman;
Aaron Fenster
In this paper, we report on a semi-automatic approach to segmentation of carotid arteries from 3D ultrasound (US) images. Our method uses a deformable model which first is rapidly inflated to approximately find the boundary of the artery, then is further deformed using image-based forces to better localize the boundary. An operator is required to initialize the model by selecting a position in the 3D US image, which is within the carotid vessel. Since the choice of position is user-defined, and therefore arbitrary, there is an inherent variability in the position and shape of the final segmented boundary. We have assessed the performance of our segmentation method by examining the local variability in boundary shape as the initial selected position is varied in a freehand 3D US image of a human carotid bifurcation. Our results indicate that high variability in boundary position occurs in regions where either the segmented boundary is highly curved, or the 3D US image has poorly defined vessel edges.
Multiresolution segmentation of medical images using shape-restricted snakes
Author(s):
Christian Juan Knoll;
Mariano Luis Alcaniz-Raya;
Carlos Monserrat;
Vincente Grau Colomer;
M. Carmen Juan
We propose a new technique for restricting the elastic deformation of active contour models to particular object shapes. For this purpose we apply localized multi-scale contour parametrization based on the 1D dyadic Wavelet Transform (WT) as a multi-scale boundary curve analysis tool. Our approach determines the WT-coefficients within a certain scale range which differ significantly from the corresponding WT-coefficients of the most similar model in a training set. Those WT-coefficients are replaced by the corresponding model WT-coefficients to reconstruct the contour. The difference between the original deformed contour and the reconstructed contour is used as the internal snake force. This technique prevents the deformable contour from being trapped in spurious local minima of the snake's potential caused by noise or irrelevant image features. The contour deformation method is integrated into a coarse-to-fine segmentation framework based on a multiscale image edge representation using the local modulus maxima of the dyadic Wavelet Transform. To detect the object's position and initialize the snake, we apply a multiresolution binary matched filter at a coarse scale containing little detail.
Fuzzy connected object definition in images with respect to co-objects
Author(s):
Jayaram K. Udupa;
Punam K. Saha;
Roberto Alencar Lotufo
Tangible solutions to practical image segmentation are vital to ensure progress in many applications of medical imaging. Toward this goal, we previously proposed a theory and algorithms for fuzzy connected object definition in n-dimensional images. Their effectiveness has been demonstrated in several applications including multiple sclerosis lesion detection/delineation, MR angiography, and craniofacial imaging. The purpose of this work is to extend the earlier theory and algorithms to fuzzy connected object definition that considers all relevant objects in the image simultaneously. In the previous theory, delineation of the final object from the fuzzy connectivity scene required the selection of a threshold that specifies the weakest `hanging-togetherness' of image elements relative to each other in the object. Selection of such a threshold was not trivial and has been an active research area. In the proposed method of relative fuzzy connectivity, instead of defining an object on its own based on the strength of connectedness, all co-objects of importance that are present in the image are also considered and the objects are allowed to compete among themselves in having image elements as their members. In this competition, every pair of elements in the image will have a strength of connectedness in each object. The object in which this strength is highest will claim membership of the elements. This approach to fuzzy object definition using a relative strength of connectedness eliminates the need for a threshold of strength of connectedness that was part of the previous definition. It seems more natural since it relies on the fact that an object gets defined in an image by the presence of other objects that coexist in the image. All specified objects are defined simultaneously in this approach. The concept of iterative relative fuzzy connectivity has also been introduced. Robustness of relative fuzzy objects with respect to selection of reference image elements has been established. The effectiveness of the proposed method has been demonstrated using a patient's 3D contrast-enhanced MR angiogram and a 2D phantom scene.
Scale-based fuzzy connectivity: a novel image segmentation methodology and its validation
Author(s):
Punam K. Saha;
Jayaram K. Udupa
This paper extends a previously reported theory and algorithms for fuzzy connected object definition. It introduces `object scale' for determining the neighborhood size for defining affinity, the degree of local hanging togetherness between image elements. Object scale allows us to use a varying neighborhood size in different parts of the image. This paper argues that scale-based fuzzy connectivity is natural in object definition and demonstrates that it leads to more effective object segmentation than fuzzy connectedness without scale. Affinity is described as consisting of a homogeneity-based and an object-feature-based component. Families of non-scale-based and scale-based affinity relations are constructed. An effective method for giving a rough estimate of scale at different locations in the image is presented. The original theoretical and algorithmic framework remains more-or-less the same but considerably improved segmentations result. A quantitative statistical comparison between the non-scale-based and the scale-based methods was made based on phantom images generated from patient MR brain studies by first segmenting the objects, and then adding noise, blurring, and a background component. Both the statistical and the subjective tests clearly indicate the superiority of the scale-based method in capturing details and in robustness to noise.
Fuzzy rule-based approach to segment the menisci regions from MR images
Author(s):
Takashi Sasaki;
Yutaka Hata;
Yoshiro Ando;
Makato Ishikawa;
Hitoshi Ishikawa
Injuries of the menisci are among the most common internal derangements of the knee. To examine them noninvasively, we propose an automated segmentation method for the menisci region from MR images. The method is composed of two steps based on fuzzy logic. First, we segment the cartilage region by thresholding the intensity. We then extract the candidate region of the menisci as the region between the cartilages. Second, we segment the menisci voxels from the candidate region based on fuzzy if-then rules obtained from knowledge of location and intensity. We applied our method to five MR data sets. Three of them are normal knees and the others have some injuries. Quantitative evaluation by a physician shows that this method can successfully segment the menisci in all cases. The generated visualizations will help physicians diagnose the menisci noninvasively.
Near-automatic quantification of breast tissue glandularity via digitized mammograms
Author(s):
Punam K. Saha;
Jayaram K. Udupa;
Emily F. Conant;
Dev Prasad Chakraborty
Studies reported in the literature indicate that breast cancer risk is associated with mammographic densities. Although an objective, repeatable quantitative measure of risk derived from mammographic densities would be of great use in recommending alternative screening paradigms and/or preventive measures, image processing efforts toward this goal seem to be very sparse in the literature, and automatic and efficient methods do not seem to exist. In this paper, we describe and validate an automatic and reproducible method to segment glandular tissue regions from fat within breasts from digitized mammograms using scale-based fuzzy connectivity methods. Different measures for characterizing density are computed from the segmented regions and their accuracies in terms of their linear correlation across two different projections (CC and MLO) are studied. It is shown that quantification of glandularity taking into account the original intensities is more accurate than just considering the segmented areas. This makes the quantification less dependent on the shape of the glandular regions and the angle of projection. A simple phantom experiment is done that supports this observation.
Three-dimensional segmentation of bone structures in CT images
Author(s):
Guenther Boehm;
Christian Juan Knoll;
Vincente Grau Colomer;
Mariano Luis Alcaniz-Raya;
Salvador Estela Albalat
This work is concerned with the implementation of a fully 3D-consistent, automatic segmentation of bone structures in CT images. The morphological watershed algorithm has been chosen as the basis of the low-level segmentation. Over-segmentation, a phenomenon normally associated with this transformation, has been addressed successfully by inserting modifying modules that act within the algorithm itself. When dealing with a maxillofacial image, this approach can also provide two different divisions of the image: a fine-grained tessellation geared to the subsequent high-level segmentation and a coarser-grained one for the segmentation of the teeth. In the knowledge-based high-level segmentation, probabilistic considerations make use of specific properties of the 3D low-level regions to find the most probable tissue for each region. Low-level regions that cannot be classified with the necessary certainty are passed to a second stage, where--embedded in their respective environment--they are compared with structural patterns deduced from anatomical knowledge. The tooth segmentation takes the coarse-grained tessellation as its starting point. The few regions making up each tooth are grouped into 3D envelopes--one envelope per tooth. Matched filtering detects the bases of these envelopes. After a refinement they are fitted into the fine-grained, high-level segmented image.
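The paper controls over-segmentation by modules inside the watershed itself; a common, simpler stand-in is a marker-controlled watershed, sketched below for one CT slice with scikit-image. The HU thresholds used to seed bone and background are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def watershed_bone_regions(ct_slice, bone_hu=300, air_hu=-500):
    """Marker-controlled watershed on the gradient magnitude of a CT slice.
    Flooding only from confident bone and background seeds reduces the
    over-segmentation an unconstrained watershed transform produces."""
    grad = ndi.gaussian_gradient_magnitude(ct_slice.astype(float), sigma=1.0)
    markers = np.zeros(ct_slice.shape, dtype=np.int32)
    markers[ct_slice < air_hu] = 1        # confident background / air seeds
    markers[ct_slice > bone_hu] = 2       # confident bone seeds
    return watershed(grad, markers)       # label image: 1 = background, 2 = bone
```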
Segmentation of the skull in MRI volumes using a deformable model and taking the partial volume effect into account
Author(s):
Hilmi Rifai;
Isabelle Bloch;
Seth A. Hutchinson;
Joe Wiart;
Line Garnero
In this paper, we present a new approach for segmenting regions of bone in MRI volumes using a deformable model. Our method takes into account the partial volume effects that occur with MRI data, thus permitting a precise segmentation of these bone regions. Partial volume is estimated, in a narrow band around the deformable model, at each iteration of the propagation of the model. Segmentation of the skull in medical imagery is an important stage in applications that require the construction of realistic models of the head. Such models are used, for example, to simulate the behavior of electro-magnetic fields in the head and to model the electrical activity of the cortex in EEG and MEG data.
Unsupervised statistical segmentation of multispectral volumetric MRI images
Author(s):
Jose Gerardo Tamez-Pena;
Saara Totterman;
Kevin J. Parker
This work presents a reliable automatic segmentation algorithm for multispectral MRI data sets. We propose the use of an automatic statistical region growing algorithm based on a robust estimation of local region mean and variance for every voxel on the image. The best region growing parameters are automatically found via the minimization of a cost functional. Furthermore, we propose a hierarchical use of relaxation labeling, region splitting, and constrained region merging to improve the quality of the MRI segmentation. We applied this approach to the segmentation of MRI images of anatomically complex structures which suffer signal fading and noise degradations.
Statistical analysis of brain sulci based on active ribbon modeling
Author(s):
Christian Barillot;
Georges Le Goualher;
Pierre Hellier;
Bernard Gibaud
This paper presents a general statistical framework for modeling deformable objects. The model is intended for use in digital brain atlases. We first present a numerical modeling of brain sulci. We also present a method to characterize the high inter-individual variability of basic cortical structures on which the description of the cerebral cortex is based. The intended applications use numerical modeling of brain sulci to assist non-linear registration of human brains by inter-individual anatomical matching, or to better compare neuro-functional recordings performed on a series of individuals. The utilization of these methods is illustrated using a few examples.
Identification of vessel contours from three-dimensional magnetic resonance angiograms
Author(s):
Yung-Nien Sun;
Shu-Chien Huang;
Fwn-Jeng Chen;
Chin-Yin Yu;
Tong-Yee Lee
3D MR angiography offers a means for visualizing the entire cerebral vasculature from any orientation and for detecting stenotic lesions. In this paper we propose a new 3D tracking method for reconstructing cerebral vessels from MR angiograms. Based on the image analysis and visualization results of the vessel trees, various kinds of important information can then be computed. In vessel extraction, an urchin tracking technique is proposed to extract each segment of the vessel trees. The behaviors of the urchin, including pinprick growing, moving, and sub-urchin generation, are designed and implemented. Our scheme is also capable of handling vessel bifurcations. In vessel display, a semi-boundary technique was employed. Vessel extraction takes roughly 15 seconds on a Pentium 166 PC for a 256 X 256 X 60 8-bit 3D angiogram. Compared with conventional methods, ours is a fast and stable method for obtaining the 3D vessel trees.
Model-based reconstruction of organ surfaces from two-dimensional CT or MRT data of the head
Author(s):
Sebastian von Klinski;
Andreas Glausch;
Thomas Tolxdorff
Surface-based interpolation and registration, radiation treatment, and 3D visualization of 2D sliced data from CT or MRT require a precise reconstruction of 3D organ surfaces from 2D segmentation results. Current surface-reconstruction algorithms are based on surface triangulations using heuristics to correlate and connect adjacent object slices. The approaches described in the literature can be divided into triangulations using optimization procedures, Delaunay triangulations, and topology-based correlations. All approaches assume a global and invariant vertically oriented correlation strategy that can be applied equally to every organ and every slice. Surface and correlation characteristics vary greatly among bony structures and organs such as the eyes and the brain. An adjusted reconstruction of each organ according to its individual tissue characteristics is necessary to avoid errors in following processing steps such as interpolation, registration, and radiation treatment. To this end, we have designed a model-based surface-reconstruction algorithm that takes individual surface characteristics into account and allows the integration of anatomical knowledge. 3D surface models are generated from sliced data or any other source of anatomical knowledge. These models are later adjusted to the segmentations, compensating for artifacts and incomplete data.
Fast computation of the covariance of MAP reconstructions of PET images
Author(s):
Jinyi Qi;
Richard M. Leahy
We develop an approximate theoretical formula for fast computation of the covariance of PET images reconstructed using maximum a posteriori (MAP) estimation. The results assume a Poisson likelihood for the data and a quadratic prior on the image. The covariance for each voxel is computed using 2D FFTs and is a function of a single data dependent parameter. This parameter is computed using a modified backprojection. For a small region of interest (ROI), the correlation can be assumed to be locally stationary so that computation of the variance of an ROI can be performed very rapidly. Previous approximate formulae for the variance of MAP estimators have performed poorly in areas of low activity since they do not account for the non-negativity constraints that are routinely used in MAP algorithms. Here a `truncated Gaussian' model is used to compensate for the effect of the non-negativity constraints. Accuracy of the theoretical expressions is evaluated using both Monte Carlo simulations and a multiple-frame 15O-water brain study. The Monte Carlo studies show that the truncated Gaussian model is effective in compensating for the effect of the non-negativity constraint. These results also show good agreement between Monte Carlo covariances and the theoretical approximations. The 15O-water brain study further confirms the accuracy of the theoretical approximations.
Tomographic reconstruction using free-form deformation models
Author(s):
Xavier L. Battle;
Yves J. Bizais;
Catherine Le Rest;
A. Turzo
We address the issue of using deformable models to reconstruct the shape of unknown objects in the context of 3D tomography. We focus on the reconstruction of piecewise-uniform radioactive distributions such as in blood pool or lung imaging. We represent the unknown distribution by a set of closed surfaces defining uniformly emitting regions in space. The methods implemented so far tend to directly deform the surfaces. Rather than deforming the surface models themselves, we explore the deformation of the space in which the surfaces are contained to match a set of scintigraphic measurements. We focus on the use of free-form deformations to describe the continuous transformation of space. We illustrate this approach by reconstructing simulated scintigraphic data of the lungs.
Automatic motion correction of clinical shoulder MR images
Author(s):
Armando Manduca;
Kiaran P. McGee;
Edward B. Welch;
Joel P. Felmlee;
Richard L. Ehman
A technique for the automatic correction of motion artifacts in MR images was developed. The algorithm uses only the raw (complex) data from the MR scanner, and requires no knowledge of the patient motion during the acquisition. It operates by searching over the space of possible patient motions and determining the motion which, when used to correct the image, optimizes the image quality. The performance of this algorithm was tested in coronal images of the rotator cuff in a series of 144 patients. A four-observer comparison of the autocorrected images with the uncorrected images demonstrated that motion artifacts were significantly reduced in 48% of the cases. The improvements in image quality were similar to those achieved with a previously reported navigator echo-based adaptive motion correction. The results demonstrate that autocorrection is a practical technique for retrospectively reducing motion artifacts in a demanding clinical MRI application. It achieves performance comparable to a navigator-based correction technique, which is significant because autocorrection does not require an imaging sequence that has been modified to explicitly track motion during acquisition. The approach is flexible and should be readily extensible to other types of MR acquisitions that are corrupted by global motion.
Truncated projection computer tomography and objects of annular support
Author(s):
William J. Dallas
Truncated Projection Computer Tomography (TPCT) is an imaging method that, in contrast to conventional CT, collects projection data along only a segment of each integration path. We examine three aspects of TPCT in this article. First, we present computer simulations of data acquisition and image reconstruction for TPCT. The reconstruction algorithm is iterative and sometimes referred to as the Algebraic Reconstruction Technique or ART. Failings of the technique, when used in this application, motivate our examination of objects with annular support. We find that, for a rect truncation window, TPCT naturally segments an object whose support is a disk into annular sub-objects. Finally, we introduce an analytical TPCT reconstruction formula. This reconstruction formula amounts to inversion of the incomplete Radon transform.
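The iterative reconstruction referred to above, ART, is the Kaczmarz method applied to the discretized projection model; a generic dense-matrix sketch follows (a real implementation would use a sparse or on-the-fly system matrix, and the relaxation factor here is arbitrary).

```python
import numpy as np

def art_reconstruct(A, b, n_iters=10, relax=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz iteration): cycle through
    the measured line integrals b and project the current image estimate x
    onto the hyperplane defined by each row of the system matrix A."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A ** 2, axis=1)
    for _ in range(n_iters):
        for i in range(m):
            if row_norms[i] > 0:
                residual = b[i] - A[i] @ x
                x += relax * residual / row_norms[i] * A[i]
    return x
```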
Wavelet-based multiresolution expectation maximization image reconstruction algorithm for positron emission tomography (PET)
Author(s):
Amar Raheja;
Atam P. Dhawan
The maximum-likelihood expectation maximization (EM) reconstruction algorithm has been shown to provide good quality reconstructions for PET. Our previous work introduced the multigrid (MGEM) and multiresolution (MREM) concepts for PET image reconstruction using EM. This work transforms the MGEM and MREM algorithms into a wavelet-based multiresolution EM algorithm by extending the concept of switching resolutions in both image and data spaces. The multiresolution data space is generated by performing a 2D wavelet transform on the acquired tube data, which is then used to reconstruct images at different spatial resolutions. The wavelet transform is used for multiresolution reconstruction and is also incorporated into the criterion for switching resolution levels. The advantage of the wavelet transform is that it provides very good frequency and spatial (time) localization and allows the use of these coarse resolution data spaces in the EM estimation process. The multiresolution algorithm recovers low-frequency components of the reconstructed image at coarser resolutions in fewer iterations, reducing the number of iterations required at finer resolutions to recover high-frequency components. This paper also presents the design of customized biorthogonal wavelet filters using the lifting method, which are used for data decomposition and image reconstruction.
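For reference, the single-resolution EM (MLEM) update that the multiresolution scheme builds on can be written in a few lines; the sketch below uses a dense system matrix `A` and measured counts `y`, and omits the wavelet decomposition and resolution switching that are the subject of the paper.

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """Maximum-likelihood EM for emission tomography.  A is the (m x n)
    matrix of detection probabilities, y the measured counts.
    Update: x <- x / (A^T 1) * A^T (y / (A x))."""
    m, n = A.shape
    x = np.ones(n)                         # strictly positive start
    sens = A.sum(axis=0)                   # sensitivity image A^T 1
    sens[sens == 0] = 1e-12
    for _ in range(n_iters):
        proj = A @ x
        proj[proj == 0] = 1e-12            # avoid division by zero
        x *= (A.T @ (y / proj)) / sens
    return x
```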
Comparison of angular interpolation approaches in few-view tomography using statistical hypothesis testing
Author(s):
Patrick J. La Riviere;
Xiaochuan Pan
In this work we examine the accuracy of four periodic interpolation methods--circular sampling theorem interpolation, zero-padding interpolation, periodic spline interpolation, and linear interpolation with periodic boundary conditions--for the task of interpolating additional projections in a few-view sinogram. We generated 100 different realizations each of two types of numerical phantom--Shepp-Logan and breast--by randomly choosing the parameters that specify their constituent ellipses. Corresponding sinograms of 128 bins X 1024 angles were computed analytically and subsampled to 16, 32, 64, 128, 256, and 512 views. Each subsampled sinogram was interpolated to 1024 views by each of the methods under consideration and the normalized root-mean-square-error (NRMSE) with respect to the true 1024-view sinogram computed. In addition, images were reconstructed from the interpolated sinograms by FBP and the NRMSE with respect to the true phantom computed. The non-parametric signed rank test was then used to assess the statistical significance of the pairwise differences in mean NRMSE among the interpolation methods for the various conditions: phantom family (Shepp-Logan or breast), number of measured views (16, 32, 64, 128, 256, or 512), and endpoint (sinogram or image). Periodic spline interpolation was found to be superior to the others in a statistically significant way for virtually every condition.
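A sketch of two pieces of the evaluation pipeline, assuming hypothetical inputs: linear view interpolation with periodic boundary handling, the NRMSE figure of merit, and the Wilcoxon signed-rank test (SciPy's `wilcoxon`) for the paired comparison across realizations.

```python
import numpy as np
from scipy.stats import wilcoxon

def interpolate_views_linear(sino, n_target):
    """Linearly interpolate a (n_views x n_bins) sinogram to n_target views,
    treating the view axis as periodic."""
    n_views, n_bins = sino.shape
    src = np.arange(n_views + 1)                      # append the wrapped view
    wrapped = np.vstack([sino, sino[:1]])
    tgt = np.linspace(0, n_views, n_target, endpoint=False)
    out = np.empty((n_target, n_bins))
    for j in range(n_bins):
        out[:, j] = np.interp(tgt, src, wrapped[:, j])
    return out

def nrmse(est, ref):
    """Root-mean-square error normalized by the reference dynamic range."""
    return np.sqrt(np.mean((est - ref) ** 2)) / (ref.max() - ref.min())

# Hypothetical comparison over many phantom realizations: errs_a and errs_b are
# paired NRMSE arrays for two interpolation methods;
# stat, p = wilcoxon(errs_a, errs_b) tests whether their paired differences
# are significant.
```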
Bayesian inference for neural electromagnetic source localization: analysis of MEG visual evoked activity
Author(s):
David M. Schmidt;
John S. George;
C. C. Wood
We have developed a Bayesian approach to the analysis of neural electromagnetic (MEG/EEG) data that can incorporate or fuse information from other imaging modalities and addresses the ill-posed inverse problem by sampling the many different solutions which could have produced the given data. From these samples one can draw probabilistic inferences about regions of activation. Our source model assumes a variable number of variable size cortical regions of stimulus-correlated activity. An active region consists of locations on the cortical surface, within a sphere centered on some location in cortex. The number and radii of active regions can vary to defined maximum values. The goal of the analysis is to determine the posterior probability distribution for the set of parameters that govern the number, location, and extent of active regions. Markov Chain Monte Carlo is used to generate a large sample of sets of parameters distributed according to the posterior distribution. This sample is representative of the many different source distributions that could account for given data, and allows identification of probable (i.e. consistent) features across solutions. Examples of the use of this analysis technique with both simulated and empirical MEG data are presented.
High-temporal-resolution volume heart imaging with multirow computed tomography
Author(s):
Herbert Bruder;
Stefan Schaller;
Bernd Ohnesorge;
Thomas Mertelmeier
Functional cardiac imaging with 3rd generation CT scanners is challenging because the temporal resolution seems to be limited to approximately 2/3 of the rotation time of the gantry. We propose a new method for high temporal resolution volume heart imaging with multirow detectors based on a retrospective electrocardiogram-gated rebinning procedure. The limited time resolution is overcome using time-consistent projection data retrieved from more than one cardiac cycle. In principle the method provides volume heart imaging with adjustable time resolution at arbitrary cardiac phases. It can be applied both for spiral and axial scan imaging. The presented study is based on computer simulations incorporating a model of the human heart taking into account anatomy, motion and heart rate variability. For multirow detectors we were able to show that good image quality can be obtained even during systole, with a temporal resolution that exceeds that provided by an Electron Beam Scanner in its standard mode of operation. Using an area detector with detector height > 3 cm (at the center of rotation), the total measurement time is within one breathhold for complete volume imaging of the heart. Furthermore, freezing the motion of the coronary arteries during end-diastole allows high quality 3D display of the coronary anatomy.
Back-projection spiral scan region-of-interest cone beam CT
Author(s):
Kwok C. Tam;
B. Ladendorf;
Frank Sauer;
Guenter Lauritsch;
Andreas Steinmetz
We present a spiral scan cone beam reconstruction algorithm in which image reconstruction proceeds via backprojection in the object space. In principle the algorithm can reconstruct a sectional ROI in a long object. The approach generalizes the cone beam backprojection technique developed by Kudo and Saito in two respects: the resource-demanding normalization step in Kudo and Saito's algorithm is eliminated through the technique of data combination which we published earlier, and the restriction that the detector be big enough to capture the entire image of the ROI is removed. Restricting the projection data to the appropriate angular range required by data combination can be accomplished by a masking process. The mask consists of a top curve and a bottom curve formed by projecting the spiral turn above and the turn below from the current source position. Because of the simplification resulting from the elimination of the normalization step, the most time-consuming operations of the algorithm can be approximated by the efficient step of line-by-line ramp filtering of the cone beam image in the direction of the scan path, plus a correction term. The correction term is needed because data combination is not properly matched at the mask boundary when ramp filtering is involved. This correction for the mask boundary effect can be computed exactly. The results of testing the algorithm on simulated phantoms are presented.
Reconstruction bias resulting from weighted projection and iso-center misalignment
Author(s):
Jiang Hsieh
Show Abstract
The use of computer graphic techniques to produce 3D and reformatted images from a set of axial computed tomography (CT) images has generated significant interest in recent years. The axial CT images are generated with the projection data set weighted, prior to the reconstruction, to combat motion artifacts, data inconsistency, or redundant data samples. In this paper, we investigate the potential bias introduced into the reconstruction as a result of the interaction of the projection weights and the iso-center misalignment (ISM). Although the error is not easily detected in axial CT images, it can be quite visible in 3D or multi-planar reformatted images. We first present a theoretical framework to analyze and predict the bias. The theoretical model quantitatively links the amount of object shift in the reconstructed images with the ISM and the projection weighting function. The theoretical prediction is validated by both computer simulations and phantom experiments. Based on the analytical model, several schemes to combat this artifact are subsequently presented.
Helical CT imaging performance of a new multislice scanner
Author(s):
Hui Hu;
H. David He;
W. Dennis Foley;
Stanley H. Fox
Show Abstract
The principles and operations of 4-slice helical CT are discussed. The slice sensitivity profile (SSP), image noise, and artifacts of a 4-slice scanner are measured by phantom scans and compared with theoretical predictions and with measurements of single-slice CT. The evaluation of these physical attributes for all helical imaging modes of the 4-slice scanner is summarized in an operation chart. The preliminary studies indicate that, compared with single-slice helical CT, the volume coverage speed of 4-slice helical CT can be at least twice as fast with fully comparable image quality, or, in many cases, three times as fast with diagnostically comparable image quality. Two examples are given to illustrate the clinical benefit of the speed performance improvement (and operation flexibility) provided by 4-slice helical CT.
Hybrid unsupervised-supervised approach for computerized classification of malignant and benign masses on mammograms
Author(s):
Lubomir M. Hadjiiski;
Berkman Sahiner;
Heang-Ping Chan;
Nicholas Petrick;
Mark A. Helvie M.D.
Show Abstract
A hybrid classifier which combines an unsupervised adaptive resonance network (ART2) and a supervised linear discriminant classifier (LDA) was developed for analysis of mammographic masses. Initially the ART2 network separates the masses into different classes based on the similarity of the input feature vectors. The resulting classes are subsequently divided into two groups: (1) classes containing only malignant masses and (2) classes containing both malignant and benign or only benign masses. All masses belonging to the second group are used to formulate a single LDA model to classify them as malignant or benign. In this approach, the ART2 network identifies the highly suspicious malignant cases and removes them from the training set, thereby facilitating the formulation of the LDA model. In order to examine the utility of this approach, a data set of 348 regions of interest (ROIs) containing biopsy-proven masses (169 benign and 179 malignant) was used. Ten different partitions of training and test groups were randomly generated using 73% of the ROIs for training and 27% for testing. Classifier design including feature selection and weight optimization was performed with the training group. The test group was kept independent of the training group. The performance of the hybrid classifier was compared to that of an LDA classifier alone. Receiver operating characteristic (ROC) analysis was used to evaluate the accuracy of the classifier. The average area under the ROC curve (Az) for the hybrid classifier was 0.81 as compared to 0.78 for LDA. The Az values for the partial areas above a true positive fraction of 0.9 were 0.34 and 0.27 for the hybrid and the LDA classifier, respectively. These results indicate that the hybrid classifier is a promising approach for improving the accuracy of classification in CAD applications.
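A rough sketch of the two-stage idea on synthetic features, with k-means standing in for the ART2 network (an explicit substitution, not the authors' classifier): clusters containing only malignant training cases are flagged directly, and the remaining cases train a single LDA.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic mass features (not the paper's data): y = 1 malignant, y = 0 benign.
X = np.vstack([rng.normal(0.0, 1.0, (170, 8)), rng.normal(0.8, 1.0, (180, 8))])
y = np.r_[np.zeros(170, int), np.ones(180, int)]

# Stage 1: unsupervised partition (k-means used here as a stand-in for ART2).
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
clusters = km.labels_

# Clusters containing only malignant training cases are treated as highly suspicious
# and removed; the remaining (mixed or benign-only) cases train a single LDA model.
pure_malignant = [c for c in range(10) if np.all(y[clusters == c] == 1)]
keep = ~np.isin(clusters, pure_malignant)
lda = LinearDiscriminantAnalysis().fit(X[keep], y[keep])

def hybrid_score(x_new):
    """Malignancy score: 1.0 for pure-malignant clusters, otherwise the LDA posterior."""
    c = km.predict(np.atleast_2d(x_new))[0]
    if c in pure_malignant:
        return 1.0
    return lda.predict_proba(np.atleast_2d(x_new))[0, 1]

print(hybrid_score(X[0]), hybrid_score(X[-1]))
```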
Multi-image CAD employing features derived from ipsilateral mammographic views
Author(s):
Walter F. Good;
Bin Zheng;
Yuan-Hsiang Chang;
Xiao Hui Wang;
Glenn S. Maitz;
David Gur
Show Abstract
On mammograms, certain kinds of features related to masses (e.g., location, texture, degree of spiculation, and integrated density difference) tend to be relatively invariant, or at least predictable, with respect to breast compression. Thus, ipsilateral pairs of mammograms may contain information not available from analyzing single views separately. To demonstrate the feasibility of incorporating multi-view features into a CAD algorithm, `single-image' CAD was applied to each individual image in a set of 60 ipsilateral studies, after which all possible pairs of suspicious regions, consisting of one from each view, were formed. For these 402 pairs we defined and evaluated `multi-view' features such as: (1) relative position of centers of regions; (2) ratio of lengths of region projections parallel to nipple axis lines; (3) ratio of integrated contrast difference; (4) ratio of the sizes of the suspicious regions; and (5) a measure of relative complexity of region boundaries. Each pair was identified as either a `true positive/true positive' (T) pair (i.e., two regions which are projections of the same actual mass), or as a falsely associated pair (F). Distributions for each feature were calculated. A Bayesian network was trained and tested to classify pairs of suspicious regions based exclusively on the multi-view features described above. Distributions for all features were significantly different for T versus F pairs, as indicated by likelihood ratios. Performance of the Bayesian network, which was measured by ROC analysis, indicates a significant ability to distinguish between T pairs and F pairs (Az = 0.82 ± 0.03), using information that is attributed to the multi-view content. This study is the first demonstration that there is a significant amount of spatial information that can be derived from ipsilateral pairs of mammograms.
Case-based reasoning as a computer aid to diagnosis
Author(s):
Carey E. Floyd Jr.;
Joseph Y. Lo;
Georgia D. Tourassi
Show Abstract
A Case-Based Reasoning (CBR) system has been developed to predict the outcome of excisional biopsy from mammographic findings. CBR is implemented by comparing the current case to all previous cases and examining the outcomes for those previous cases that match the current case. Patients from breast screening who have suspicious findings on their diagnostic mammogram are candidates for biopsy. The false positive rate for the decision to biopsy is currently between 66% and 90%. The CBR system is designed to support the decision to biopsy. The mammograms are read by clinicians using a standard reporting lexicon (BI-RADS™). These findings are compared to a database of findings from cases with known outcomes (from biopsy). The fraction of similar cases that were malignant is returned. The clinician can then consider this result when making the decision regarding biopsy. The system was evaluated using a round-robin sampling scheme and performed with a receiver operating characteristic area of 0.77.
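A minimal illustration of the retrieval step, with a hypothetical categorical feature encoding and matching rule (the BI-RADS lexicon itself is not reproduced): matching past cases are looked up and the fraction that proved malignant is returned.

```python
import numpy as np

# Hypothetical database of past cases: categorical findings plus biopsy outcome.
# Columns (invented): mass margin (0-4), mass shape (0-3), calcification type (0-5), age decade.
db_features = np.array([
    [1, 0, 0, 5], [4, 3, 2, 6], [0, 1, 0, 4], [4, 2, 5, 7],
    [2, 1, 0, 5], [3, 3, 4, 6], [1, 0, 1, 4], [4, 3, 5, 8],
])
db_malignant = np.array([0, 1, 0, 1, 0, 1, 0, 1])

def cbr_malignancy_fraction(case, tol=1):
    """Fraction of matching past cases that proved malignant at biopsy.

    A past case 'matches' if every finding differs from the query by at most `tol`
    categories (one simple matching rule among many possible).
    """
    match = np.all(np.abs(db_features - case) <= tol, axis=1)
    if not match.any():
        return None                     # no similar cases on record
    return db_malignant[match].mean()

print(cbr_malignancy_fraction(np.array([4, 3, 4, 7])))   # suspicious-looking query
print(cbr_malignancy_fraction(np.array([1, 0, 0, 5])))   # benign-looking query
```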
Improving mass detection by adaptive and multiscale processing in digitized mammograms
Author(s):
Lihua Li;
Wei Qian;
Laurence P. Clarke;
Robert A. Clark M.D.;
Jerry A. Thomas
Show Abstract
A new CAD mass detection system was developed using adaptive and multi-scale processing methods to improve detection sensitivity/specificity and robustness to variation among mammograms. The major techniques developed in the system design include: (1) image standardization, by applying a series of preprocessing steps to remove extrinsic signal, extract the breast area, and normalize the image intensity; (2) multi-mode processing, by decomposing image features using a directional wavelet transform and a non-linear multi-scale representation using anisotropic diffusion; (3) adaptive processing in image segmentation, using localized adaptive thresholding and adaptive clustering; and (4) combined `hard'-`soft' classification, using a modified fuzzy decision tree and a committee decision-making method. Evaluations and comparisons were performed with a training dataset containing 30 normal and 47 abnormal mammograms with a total of 70 masses, and an independent testing dataset consisting of 100 normal images, 39 images with 48 minimal cancers, and 25 images with 25 benign masses. A high detection performance of sensitivity TP = 93% at a false positive rate of FP = 3.1 per image, and good generalizability with TP = 80% and FP = 2.0 per image, were obtained.
Stepwise linear discriminant analysis in computer-aided diagnosis: the effect of finite sample size
Author(s):
Berkman Sahiner;
Heang-Ping Chan;
Nicholas Petrick;
Robert F. Wagner;
Lubomir M. Hadjiiski
Show Abstract
In computer-aided diagnosis, a frequently-used approach is to first extract several potentially useful features from a data set. Effective features are then selected from this feature space, and a classifier is designed using the selected features. In this study, we investigated the effect of finite sample size on classifier accuracy when classifier design involves feature selection. The feature selection and classifier coefficient estimation stages of classifier design were implemented using stepwise feature selection and Fisher's linear discriminant analysis, respectively. The two classes used in our simulation study were assumed to have multidimensional Gaussian distributions, with a large number of features available for feature selection. We investigated the effect of different covariance matrices and means for the two classes on feature selection performance, and compared two strategies for sample space partitioning for classifier design and testing. Our results indicated that the resubstitution estimate was always optimistically biased, except in cases where too few features were selected by the stepwise procedure. When feature selection was performed using only the design samples, the hold-out estimate was always pessimistically biased. When feature selection was performed using the entire finite sample space, and the data was subsequently partitioned into design and test groups, the hold-out estimates could be pessimistically or optimistically biased, depending on the number of features available for selection, number of available samples, and their statistical distribution. All hold-out estimates exhibited a pessimistic bias when the parameters of the simulation were obtained from texture features extracted from mammograms in a previous study.
Improved method for detection of microcalcification clusters in digital mammograms
Author(s):
Wouter J. H. Veldkamp;
Nico Karssemeijer
Show Abstract
In this study it is shown that the performance of a statistical method for detection of microcalcification clusters in digital mammograms can be improved substantially by using a second step of classification. During this second step, detected clusters are automatically classified into true positive and false positive detected clusters. For classification the k-nearest neighbor method was used in a leave-one-patient-out procedure. The sensitivity level of the method was adjusted both in the first detection step and in the second classification step. The Mahalanobis distance was used as the criterion in the sequential forward selection procedure for selection of features. This primary feature selection method was combined with a classification performance criterion for the final feature selection. By applying the initial detection at various levels of sensitivity, various sets of false and true positive detected clusters were created. On each of these sets the classification can be performed. Results show that the overall best FROC performance after secondary classification is obtained by varying sensitivity levels in both the first and the second step. Furthermore, it was shown that performing a new feature selection for each different set of false and true positives is essential. A large database of 245 digitized mammograms with 341 clusters was used for evaluation of the method.
Components of variance in ROC analysis of CADx classifier performance: II. Applications of the bootstrap
Author(s):
Robert F. Wagner;
Heang-Ping Chan;
Berkman Sahiner;
Nicholas Petrick;
Joseph T. Mossoba
Show Abstract
We review components-of-variance models for the uncertainty in estimates of the area under the ROC curve, Az, for the case of classical discriminants where we wish the uncertainty to generalize to a population of training cases as well as to a population of testing cases. A key observation from our previous work facilitates the use of resampling strategies to analyze a finite data set and classifier in terms of the components-of-variance models. In particular, we demonstrate the use of the statistical bootstrap in combination with a four-term variance model to solve for the contributions of the uncertainty in Az that result from a given finite training sample, a given finite test sample, and their interaction. At the same time one obtains an expression from which one can predict the change in uncertainty in estimates of Az that would result from a given change in the number of training samples and change in the number of test samples. This expression provides a quantitative design tool for estimating the size that would be required in a larger pivotal study from the results of a smaller pilot study for the purpose of achieving a desired precision in Az and the desired generalizability.
Statistical fractal border features for mammographic breast mass analysis
Author(s):
Alan I. Penn;
Scott F. Thompson;
Murray H. Loew;
Radhika Sivaramakrishna;
Kimerly Powell
Show Abstract
We present preliminary results of a study in which Fractal Interpolation Function Models (FIFM) are used to generate a fractal dimension (fd) feature to discriminate between benign and malignant masses on digitized mammograms. The FIFM method identifies boundary segments that are approximately self-affine and can be accurately modeled with multiple fractal interpolation functions (FIF). The fd of a segment is estimated as the mean of the fds from the FIF models of that segment. An overall fd feature is computed as the mean of multiple segment fds. This statistical approach provides stability to the overall fd feature. The FIFM feature may be useful in improving the performance of computer-assisted diagnosis systems.
Curvature-based characterization of shape and internal intensity structure for classification of pulmonary nodules using thin-section CT images
Author(s):
Yoshiki Kawata;
Noboru Niki;
Hironobu Ohmatsu;
Masahiko Kusumoto;
Ryutaro Kakinuma;
Kiyoshi Mori;
Hiroyuki Nishiyama;
Kenji Eguchi;
Masahiro Kaneko;
Noriyuki Moriyama
Show Abstract
This paper presents a curvature-based approach to characterize the internal intensity structure of pulmonary nodules in thin-section CT images. This approach makes use of shape index, curvedness, and CT value to represent each voxel locally in a 3D pulmonary nodule image. From the distribution of shape index, curvedness, and CT value over the 3D pulmonary nodule image, a set of 3D moment features, histogram features, and 3D texture features is computed to classify benign and malignant pulmonary nodules. Linear discriminant analysis is used for classification, and receiver operating characteristic (ROC) analysis is used to evaluate the classification accuracy. The potential usefulness of the curvature-based features in computer-aided differential diagnosis is demonstrated using ROC curves as the performance measure.
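For reference, the shape index and curvedness can be computed per voxel from the two principal curvatures; the sketch below uses Koenderink's definitions, and the sign convention may differ from the one used in the paper.

```python
import numpy as np

def shape_index_and_curvedness(k1, k2):
    """Koenderink's shape index S in [-1, 1] and curvedness C from principal curvatures.

    Assumes the ordering k1 >= k2 (enforced below).  Planar points (k1 = k2 = 0) have
    an undefined shape index; arctan2 returns 0 for them here.
    """
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    s = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    c = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
    return s, c

# Examples: a cap (k1 = k2 > 0) gives S = 1, a symmetric saddle (k1 = -k2) gives S = 0.
print(shape_index_and_curvedness(0.5, 0.5))
print(shape_index_and_curvedness(0.5, -0.5))
```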
Three-dimensional approach to lung nodule detection in helical CT
Author(s):
Samuel G. Armato III;
Maryellen Lissak Giger;
James T. Blackburn;
Kunio Doi;
Heber MacMahon
Show Abstract
We are developing an automated method for the detection of lung nodules in helical computed tomography (CT) images. This technique incorporates 2D and 3D analyses to exploit the volumetric image data acquired during a CT examination. Gray-level thresholding is used to segment the lungs within the thorax. A rolling ball algorithm is applied to more accurately define the segmented lung regions. The set of segmented CT sections, which represents the complete lung volume, is iteratively thresholded, and a 10-point connectivity scheme is used to identify contiguous 3D structures. Structures with volumes less than a predefined maximum value comprise the set of nodule candidates, which is then subjected to 2D and 3D feature analysis. To distinguish between candidates representing nodule and non-nodule structures, the values of the features are merged through linear discriminant analysis. When applied to a database of 17 helical thoracic CT cases, gray-level thresholding combined with the volume criterion detected 82% of the lung nodules. Linear discriminant analysis yielded an area under the receiver operating characteristic curve of 0.93 in the task of distinguishing between nodule and non-nodule structures within this set of nodule candidates.
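A simplified stand-in for the candidate-generation step: iterative gray-level thresholding followed by 3D connected-component labeling and a volume criterion. Note that scipy's default 6-connectivity is used here rather than the 10-point connectivity scheme described in the paper, and the toy volume and thresholds are arbitrary.

```python
import numpy as np
from scipy import ndimage

def nodule_candidates(lung_volume, thresholds, max_volume_voxels=500):
    """Candidate structures from multiple gray-level thresholds and a volume criterion.

    At each threshold, contiguous 3D structures are labeled (scipy's default
    6-connectivity) and those below the volume limit are kept as candidates.
    """
    candidates = []
    for t in thresholds:
        labels, n = ndimage.label(lung_volume > t)
        sizes = ndimage.sum(np.ones(lung_volume.shape), labels, index=range(1, n + 1))
        for lab, size in zip(range(1, n + 1), sizes):
            if size <= max_volume_voxels:
                centroid = ndimage.center_of_mass(lung_volume, labels, lab)
                candidates.append({"threshold": t, "volume": int(size), "centroid": centroid})
    return candidates

# Toy volume: smooth noise background plus two small bright blobs.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.05, (40, 64, 64))
vol[18:22, 28:32, 28:32] = 1.0
vol[8:12, 13:17, 43:47] = 1.0
vol = ndimage.gaussian_filter(vol, 1.5)
print(len(nodule_candidates(vol, thresholds=[0.1, 0.2, 0.3])))
```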
Bayesian estimation of regularization parameters for deformable surface models
Author(s):
Gregory S. Cunningham;
Andre Lehovich;
Kenneth M. Hanson
Show Abstract
In this article we build on our past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels, and background count level are produced. The strengths of the priors, or hyperparameters, are determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. We demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.
Synthesizing average 3D anatomical shapes using deformable templates
Author(s):
Gary E. Christensen;
Hans J. Johnson;
John W. Haller;
Jenny Melloy;
Michael W. Vannier M.D.;
Jeffrey L. Marsh M.D.
Show Abstract
A major task in diagnostic medicine is to determine whether or not an individual has a normal or abnormal anatomy by examining medical images such as MRI, CT, etc. Unfortunately, there are few quantitative measures that a physician can use to discriminate between normal and abnormal beyond a few length, width, height, and volume measurements. In fact, there is no definition or picture of what normal anatomical structures, such as the brain, look like, let alone of normal anatomical variation. The goal of this work is to synthesize average 3D anatomical shapes using deformable templates. We present a method for empirically estimating the average shape and variation of a set of 3D medical image data sets collected from a homogeneous population of topologically similar anatomies. Results are shown for synthesizing the average brain image volume from a set of six normal adults and synthesizing the average skull/head image volume from a set of five 3- to 4-month-old infants with sagittal synostosis.
Nonrigid matching of tomographic images based on a biomechanical model of the human head
Author(s):
Alexander Hagemann;
Karl Rohr;
H. Siegfried Stiehl;
Uwe Spetzger;
Joachim M. Gilsbach
Show Abstract
The accuracy of image-guided neurosurgery generally suffers from brain deformations due to intraoperative changes, e.g., brain shift or tumor resection. In order to improve the accuracy, we developed a biomechanical model of the human head which can be employed for the correction of preoperative images. At present, the model comprises two different materials. The correction of the preoperative image is driven by a set of given landmark correspondences. Our approach has been tested using synthetic images and yields physically plausible results. Additionally, we carried out registration experiments with a preoperative MR image and a corresponding postoperative image simulating an intra-operative image. We found that our approach yields good prediction results, even when correspondences are given only in a small area of the image.
Likelihood estimation in image warping
Author(s):
Alexei Manso Correa Machado;
Mario F.M. Campos;
James C. Gee
Show Abstract
The problem of matching two images can be posed as the search for a displacement field which assigns each point of one image to a point in the second image in such a way that a likelihood function is maximized, subject to topological constraints. Since the images may be acquired by different scanners, the relationship between intensity levels is generally unknown. The matching problem is usually solved iteratively by optimization methods. The evaluation of each candidate solution is based on an objective function which favors smooth displacements that yield likely intensity matches. This paper is concerned with the construction of a likelihood function that is derived from the information contained in the data and is thus applicable to data acquired from an arbitrary scanner. The basic assumption of the method is that the pair of images to be matched contains roughly the same proportions of tissues, which will be reflected in their gray-level histograms. Experiments with MR images corrupted with strong non-linear intensity shading show the method's effectiveness for modeling intensity artifacts. Image matching can thus be made robust to a wide range of intensity degradations.
New experimental results in atlas-based brain morphometry
Author(s):
James C. Gee;
Brian A. Fabella;
Siddharth E. Fernandes;
Bruce I. Turetsky;
Ruben C. Gur;
Raquel E. Gur
Show Abstract
In a previous meeting, we described a computational approach to MRI morphometry, in which a spatial warp mapping a reference or atlas image into anatomic alignment with the subject is first inferred. Shape differences with respect to the atlas are then studied by calculating the pointwise Jacobian determinant for the warp, which provides a measure of the change in differential volume about a point in the reference as it transforms to its corresponding position in the subject. In this paper, the method is used to analyze sex differences in the shape and size of the corpus callosum in an ongoing study of a large population of normal controls. The preliminary results of the current analysis support findings in the literature that have observed the splenium to be larger in females than in males.
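The pointwise Jacobian determinant is straightforward to compute from a sampled displacement field; a 2D sketch (the study itself is not limited to 2D) is shown below, where values above 1 indicate local expansion of the atlas and values below 1 local contraction.

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Pointwise Jacobian determinant of a 2D warp x -> x + u(x).

    `disp` has shape (2, H, W): disp[0] is the row (y) displacement and disp[1] the
    column (x) displacement, both in pixel units.
    """
    duy_dy, duy_dx = np.gradient(disp[0])
    dux_dy, dux_dx = np.gradient(disp[1])
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

# Toy warp: uniform 10% expansion about the image centre, determinant ~1.21 everywhere.
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W].astype(float)
disp = np.stack([0.1 * (yy - H / 2), 0.1 * (xx - W / 2)])
print(jacobian_determinant_2d(disp).mean())   # ~1.21
```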
Midsagittal surface measurement of the head: an assessment of craniofacial asymmetry
Author(s):
Gary E. Christensen;
Hans J. Johnson;
Tron Darvann;
Nuno Hermann;
Jeffrey L. Marsh M.D.
Show Abstract
Left/right craniofacial asymmetry is typically measured by comparing distances between standard anatomical landmarks. However, these measurements are of limited use for visualizing and quantifying the asymmetry at non-landmark locations. This work presents a method for calculating, measuring, and visualizing the planar deviation of the midsagittal surface for the purpose of craniofacial dysmorphology assessment, pre-operative corrective surgery planning, and post-operative evaluation. A set of midsagittal landmarks is used to define a reference midsagittal plane and a non-planar surface that passes through the landmarks. The surface is modeled as a thin-plate spline that can be visualized in 3D using a virtual reality modeling language (VRML) browser and can be fused with the original volume-rendered CT data using VoxelView™.
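A small sketch of the surface-fitting step under assumed coordinates: a thin-plate spline interpolating the lateral deviation of hypothetical midsagittal landmarks from the reference plane, evaluated on a grid for visualization. The landmark values are invented for illustration.

```python
import numpy as np
from scipy.interpolate import Rbf

# Hypothetical midsagittal landmarks: (y, z) positions along the head and their
# lateral deviation x from the reference midsagittal plane (x = 0), in mm.
landmarks_yz = np.array([[10, 20], [40, 25], [70, 30], [100, 28],
                         [30, 60], [60, 65], [90, 62], [55, 95]], float)
deviation_x = np.array([0.0, 1.5, 2.2, 1.0, -0.8, 0.4, 1.8, -1.2])

# Thin-plate spline surface x = f(y, z) passing through the landmarks.
tps = Rbf(landmarks_yz[:, 0], landmarks_yz[:, 1], deviation_x, function='thin_plate')

# Evaluate the planar deviation on a regular grid for visualization / asymmetry maps.
yy, zz = np.meshgrid(np.linspace(10, 100, 50), np.linspace(20, 95, 50))
deviation_map = tps(yy, zz)
print(deviation_map.shape, float(deviation_map.max()))
```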
Development of a point-based shape representation of arbitrary three-dimensional medical objects suitable for statistical shape modeling
Author(s):
Nils Krahnstoever;
Cristian Lorenz
Show Abstract
A novel method that allows the development of surface-point-based 3D statistical shape models is presented. Fourier decomposition and multiple 2D contours have previously been proposed for the development of statistical shape models of 3D medical objects. Unlike Fourier decomposition, the presented method can be applied to shapes of arbitrary topology. Furthermore, the method described here results in a true 3D shape model, independent, for example, of the slice orientations of contour images. Given a set of medical objects, a statistical shape model can be obtained by principal component analysis. This technique requires that a set of complex-shaped objects be represented as a set of vectors that on the one hand uniquely determine the shapes of the objects and on the other hand are suited for statistical analysis. The correspondence between the vector components and the respective shape features has to be the same for all shape parameter vectors to be considered. We present a novel approach to the correspondence problem for complex 3D objects. The underlying idea is to develop a template shape and to fit this template to all objects to be analyzed. Although we used surface triangulation to represent the shape, the method can easily be adapted to work with other representations. The method is successfully applied to obtain a statistical shape model for the lumbar vertebrae.
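Once correspondence is established, the statistical model itself reduces to a principal component analysis of the stacked point coordinates; a minimal sketch on toy corresponded shapes is given below (template fitting, the paper's actual contribution, is not shown).

```python
import numpy as np

def build_shape_model(shapes):
    """PCA shape model from corresponded surface points.

    `shapes` has shape (n_samples, n_points, 3); every row of every sample must
    correspond to the same anatomical location.  Returns mean shape, modes, variances.
    """
    n, p, _ = shapes.shape
    X = shapes.reshape(n, p * 3)
    mean = X.mean(axis=0)
    # SVD of the centred data gives the principal modes of variation.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = s ** 2 / (n - 1)
    return mean.reshape(p, 3), Vt.reshape(-1, p, 3), variances

def synthesize(mean, modes, variances, b):
    """New shape from mode weights b (in units of standard deviations)."""
    coeffs = b * np.sqrt(variances[:len(b)])
    return mean + np.tensordot(coeffs, modes[:len(b)], axes=1)

# Toy data: 12 noisy, rescaled copies of a unit sphere sampled at 100 corresponded points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
shapes = np.stack([pts * rng.normal(1.0, 0.05) + rng.normal(0, 0.01, (100, 3))
                   for _ in range(12)])
mean, modes, var = build_shape_model(shapes)
print(synthesize(mean, modes, var, b=np.array([2.0])).shape)   # (100, 3)
```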
Automatic landmark identification in 3D image volumes by topography conserving approximation of contour data
Author(s):
Heinrich Martin Overhoff;
Andre Mastmeyer;
Jan Ehrhardt
Show Abstract
If organs are represented by a compact and unambiguous mathematical function, many diagnostic tasks that are currently performed interactively on tomographic images can be automated. Such a representation is constructed from the organ's surface by mapping characteristic topographic structures (landmarks) onto identical function variables. Organs scanned in tomographic image series (here, the periacetabular region of 14 hip joints imaged by X-ray CT) may be segmented. Their surface contour points are approximated by tensor-product B-splines (TPBSs). In a reference TPBS surface model, landmarks are marked interactively to define a mapping between TPBS variable pairs and landmarks. The patient TPBS models are mapped onto the reference model by fitting the model function values. The fit, and thus the landmark identification, is performed by a homology function, which is applied to the patient model's variable plane. For simply shaped organs, the transformation from the tomographic to the topographic representation was possible using only the values and first-order derivatives of the TPBSs. The presented landmark identification method avoids unnecessary assumptions about the model deformation mechanism and has low computational costs.
Mammographic structure: data preparation and spatial statistics analysis
Author(s):
Arthur E. Burgess
Show Abstract
Detection of tumors in mammograms is limited by the very marked statistical variability of normal structure rather than by image noise. This presentation reports an investigation of the statistical properties of patient tissue structures in digitized x-ray projection mammograms, using a database of 105 normal pairs of craniocaudal images. The goal is to understand the statistical properties of patient structure, and their effects on lesion detection, rather than the statistics of the images per se, so it was necessary to remove the effects of the x-ray imaging and film digitizing procedures. Work is based on the log-exposure scale. Several algorithms were developed to estimate the breast image region corresponding to a constant thickness between the mammographic compression plates. Several analysis methods suggest that the tissue within that region, assuming second-order stationarity, is described by a power law spectrum of the form P(f) = A/f^β, where f is radial spatial frequency and β is about 3. There is no evidence of a flattening of the spectrum at low frequencies. Power law processes can have a variety of statistical properties that seem surprising to an intuition gained from mildly random processes such as smoothed Gaussian or Poisson noise. Some of these will be mentioned. Since P(f) is approximately a third-order pole at zero frequency, spectral estimation is challenging.
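A hedged sketch of one way to obtain such a spectral estimate: a radially averaged 2D power spectrum and a log-log fit of the exponent β over a mid-frequency band. The band limits and the synthetic test image are illustrative choices, not the paper's protocol.

```python
import numpy as np

def radial_power_spectrum(image):
    """Radially averaged power spectrum of a 2D image (assumed second-order stationary)."""
    img = image - image.mean()
    P = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    radial = np.bincount(r.ravel(), weights=P.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(radial.size) / max(h, w)        # cycles per pixel (approximate)
    return freqs[1:], radial[1:]                      # drop the DC term

def fit_power_law(freqs, power, fmin=0.01, fmax=0.3):
    """Least-squares fit of log P = log A - beta * log f over a frequency band."""
    m = (freqs > fmin) & (freqs < fmax)
    slope, intercept = np.polyfit(np.log(freqs[m]), np.log(power[m]), 1)
    return -slope, np.exp(intercept)                  # (beta, A)

# Toy test: synthesize noise with power ~ 1/f^3 and recover beta close to 3.
rng = np.random.default_rng(0)
h = w = 256
fy, fx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing='ij')
f = np.hypot(fy, fx)
f[0, 0] = 1.0
spectrum = (f ** -1.5) * np.exp(2j * np.pi * rng.uniform(size=(h, w)))
img = np.real(np.fft.ifft2(spectrum))
beta, A = fit_power_law(*radial_power_spectrum(img))
print(f"estimated beta ~ {beta:.2f}")
```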
Using phase information to characterize coherent scattering from regular structures in ultrasound signals
Author(s):
Rashidus S. Mia;
Murray H. Loew;
Keith A. Wear;
Robert F. Wagner;
Brian S. Garra
Show Abstract
The detection of sources responsible for coherent components in ultrasound signals has been a difficult task. In this work, we explore the idea of using phase coherence as a measure of the level of structured regularity present in the scattering medium. If the scattering sites are located randomly, then the reflected signal should be incoherent. This is the case of purely diffuse scattering. If, however, there is structure in the scattering medium, then the reflections from those sites will have some non-random phase relationship. In this work, the phase distribution is characterized as follows. For each demodulation frequency, we plot the power as a function of phase. This is computed for all frequencies in the usable bandwidth of the transducer. For each frequency, the power is uniformly distributed across phase from 0 to 2π for a purely incoherent signal. Systematic deviations from the uniform distribution may indicate the presence of coherent scattering components. This approach was first verified using simulation data, then applied to two sets of clinical ultrasound data. We have achieved good classification performance (area under the ROC curve, Az = 0.86 ± 0.04) using two features extracted from this analysis of phase.
Comparison of contrast-to-noise ratios for Bayesian processing and grids in digital chest radiography
Author(s):
Alan H. Baydush;
Wendy C. Gehm;
Carey E. Floyd Jr.
Show Abstract
Previously, we have demonstrated the ability of Bayesian image estimation (BIE) to reduce scatter and improve image contrast to noise ratio (CNR) in chest radiography without degradation of resolution. Here, we compare the effectiveness of BIE to a standard 12:1 grid. Images of a geometric phantom with two inches of added polystyrene were obtained both with and without a 12:1, 150 lp/mm grid. Images were acquired with standard protocols: 120 kVp, 72 inch source to image distance, and PA positioning. Images were acquired on a calibrated photostimulable phosphor system. An image exposure was used corresponding to the same patient dose as when acquiring film/screen chest images using a phototimer. The image acquired without the grid was processed by BIE for 6 iterations. Contrast, noise, and CNR were calculated and compared for the image acquired with the grid and the BIE processed image in different regions. BIE processing improved image CNR by 200 to 350% over that provided by the anti-scatter grid for the different regions. BIE provides higher CNR than that of a 12:1 grid. Because of this increase in CNR, Bayesian processed images will show an increase in detectability of low contrast objects, such as subtle lung nodules.
Transform neural network for Fourier detection task
Author(s):
David G. Brown;
Mary S. Pastel;
Kyle J. Myers
Show Abstract
Complex-valued weights are used in the first layer of a feed forward neural network to produce a `transform' neural network. This network was applied to a phase-uncertain sine wave detection task against a Gaussian white noise background. When compared with results of a human observer study on this task by Burgess et al., performance of the transform network was found to be nearly equal to that of an ideal observer and far superior to that of the human observers. Performance was found to be dramatically affected by initial values of the weights, which is explained in terms of concepts from statistical decision theory.
Automatic classification of urinary sediment images by using a hierarchical modular neural network
Author(s):
Satoshi Mitsuyama;
Jun Motoike;
Hitoshi Matsuo
Show Abstract
We have developed an automated image-classification method for the examination of urinary sediment. Urine contains many kinds of particles of various colors and sizes. To classify these particles automatically, we developed a hierarchical modular neural network (HMNN) to enable accurate classification of urinary-sediment images. Simulation results showed that a neural network with a modular structure can classify artificially generated patterns more accurately than a single neural network (SNN). By using an HMNN, any kind of particle contained in urine can be automatically classified. We compared the classification accuracy of the HMNN to that of an SNN and found that the classification accuracy for some classes of particles was 25% to 30% higher with the HMNN than with the SNN. With the HMNN, the examination accuracy was sufficient to allow automation of the examination process.
Noise reduction in x-ray microtomographic images by anisotropic diffusion filtration of the scanner projection images
Author(s):
Omer Demirkaya;
Erik Leo Ritman M.D.
Show Abstract
In this study, we investigated the efficacy of an anisotropic diffusion filter in reducing noise and suppressing and/or removing image artifacts with minimal degradation of image resolution in 3D reconstructed microtomography (micro-CT) images. The preprocessed projection images of the micro-CT scanner were filtered using the anisotropic diffusion filter. The tomographic images of test phantoms and of real tissue specimens were reconstructed from the filtered projections using a cone-beam filtered backprojection technique, and compared to the images reconstructed from the original projection images. The image variance in the 3D reconstructed image slices was estimated by computing the spatial variance inside a selected region of interest in the tomographic image of a plexiglass cylinder. The computed tomography (CT) image grayscale profiles of glass microspheres and of rat kidneys in the 3D reconstructed image slices were compared to show the effect of the filtering on image resolution. The anisotropic diffusion filtering reduced the variance in the selected region of the micro-CT image. The comparison of intensity profiles across the glass spheres in the tomographic slices indicated that the filtering did not result in any significant loss of resolution. The filtering either reduced the magnitude of the streak artifacts or removed them completely, while suppressing the ring artifacts substantially.
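As a generic illustration of projection-domain diffusion filtering (the paper's specific filter and parameters are not reproduced), a classic Perona-Malik scheme on a 2D image is sketched below.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, step=0.2):
    """Perona-Malik diffusion: smooth homogeneous regions while preserving strong edges.

    The conduction coefficient g = exp(-(|grad| / kappa)^2) approaches zero at strong
    gradients, so edges diffuse far more slowly than noise.  np.roll gives periodic
    borders, which is adequate for a sketch.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u += step * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

# Toy "projection" image: a disc plus additive noise; the filtered image is less noisy.
rng = np.random.default_rng(0)
y, x = np.mgrid[-64:64, -64:64]
img = 100.0 * (x ** 2 + y ** 2 < 40 ** 2) + rng.normal(0, 10, (128, 128))
print(anisotropic_diffusion(img).std(), img.std())
```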
Image deconvolution as an aid to mammographic artifact identification: I. Basic techniques
Author(s):
Phillip Abbott;
Andrew Shearer;
Triona O'Doherty;
Wil van der Putten
Show Abstract
Digital mammography has the potential to provide radiologists with a tool which can detect tumors earlier and with greater accuracy than film-based systems. Although a digital mammography system can provide much greater contrast when compared with a conventional film system, the ability to detect small artifacts associated with breast cancer is limited by a reduced spatial resolution due to screen unsharpness and scatter-induced fog. In this paper we model the radiological image formation process as the convolution of a linear shift-invariant point spread function (PSF) with the projected tissue density source function. We model the PSF as consisting of two components: screen unsharpness and scatter. We present results from a method designed to compensate for screen unsharpness. The screen PSF was measured and subsequently used in an iterative deconvolution algorithm which incorporated wavelet-based de-noising between steps in order to reduce noise amplification. When applied to a University of Leeds TORMAX breast phantom, the results show as much as a two-fold improvement in resolution at the 50 percent MTF level. Our results show that the regularized deconvolution algorithm significantly improves the signal-to-noise ratio in the restored image.
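For illustration only, a plain Richardson-Lucy iteration with a measured (here, synthetic Gaussian) screen PSF is sketched below; the paper's regularized algorithm additionally interleaves wavelet-based de-noising between iterations, which is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=20, eps=1e-8):
    """Iterative deconvolution of a blurred image by a known PSF (Richardson-Lucy).

    Used as a generic stand-in for the paper's regularized deconvolution; the wavelet
    de-noising step between iterations is omitted.
    """
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate

# Toy example: blur point-like "microcalcifications" with a Gaussian screen PSF.
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
truth = np.zeros((64, 64))
truth[32, 32] = 1.0
truth[20, 40] = 0.7
observed = fftconvolve(truth, psf / psf.sum(), mode='same') + 1e-3
restored = richardson_lucy(observed, psf)
print(float(restored.max()), float(observed.max()))   # peaks are partially recovered
```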
Landmark matching on brain surfaces via large deformation diffeomorphisms on the sphere
Author(s):
Muge M. Bakircioglu;
Sarang C. Joshi;
Michael I. Miller
Show Abstract
This paper extends the diffeomorphic landmark matching described by Joshi in R^2 and R^3 to spherical geometries. Spherical maps, which are in one-to-one correspondence with a cortical surface, are useful in visualization since they bring the buried cortex into full view and preserve topology.
Coordinate system for hexagonal pixels
Author(s):
Wesley E. Snyder;
Hairong Qi;
William A. Sander
Show Abstract
A coordinate system is described which provides a natural means for representing hexagonally-organized pixels. The coordinate system has the unusual property that its basis vectors are not orthogonal. Vector-space properties and operations are described in this coordinate system, and shown to be straightforward computations. Some image processing algorithms, including calculations of image gradients and variable-conductance diffusion, are expressed and analyzed.
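The flavor of such a coordinate system can be illustrated with one plausible non-orthogonal basis (two unit vectors 60 degrees apart); the paper's specific basis and conventions are not reproduced. Inner products are then computed through the metric tensor rather than the usual Euclidean dot product.

```python
import numpy as np

# One plausible non-orthogonal basis for a hexagonal lattice: unit vectors 60 degrees apart.
E1 = np.array([1.0, 0.0])
E2 = np.array([0.5, np.sqrt(3.0) / 2.0])
B = np.column_stack([E1, E2])          # columns are the basis vectors
G = B.T @ B                            # metric tensor of the skewed coordinate system

def hex_to_cartesian(a, b):
    """Cartesian position of the hexagonal pixel with integer coordinates (a, b)."""
    return a * E1 + b * E2

def hex_dot(u, v):
    """Inner product of two vectors given in hexagonal coordinates (via the metric G)."""
    return u @ G @ v

def hex_norm(u):
    return np.sqrt(hex_dot(u, u))

# Neighbouring pixels (1, 0) and (0, 1) are both at unit distance, 60 degrees apart.
print(hex_norm(np.array([1, 0])), hex_norm(np.array([0, 1])))
print(hex_dot(np.array([1, 0]), np.array([0, 1])))   # 0.5 = cos(60 deg): basis is not orthogonal
```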
Subpixel shift with Fourier transform to achieve efficient and high-quality image interpolation
Author(s):
Qin-Sheng Chen;
Martin S. Weinhous
Show Abstract
A new approach to image interpolation is proposed. Unlike conventional schemes, interpolation of a digital image is achieved with a sub-unity coordinate-shift technique. In the approach, the original image is first shifted by sub-unity distances matching the locations where the image values need to be restored. The original and the shifted images are then interspersed together, yielding an interpolated image. High-quality sub-unity image shifting, which is crucial to the approach, is accomplished by implementing the shift theorem of the Fourier transform. It is well known that, under the Nyquist sampling criterion, the most accurate image interpolation can be achieved with the ideal interpolating (sinc) function; a major drawback is its computational cost. The present approach can achieve an interpolation quality as good as that of the sinc function, since a sub-unity shift in the Fourier domain is equivalent to shifting the sinc function in the spatial domain, while the efficiency, thanks to the fast Fourier transform, is very much improved. In comparison to conventional interpolation techniques such as linear or cubic B-spline interpolation, the interpolation accuracy is significantly enhanced. In order to compensate for under-sampling effects in the interpolation of 3D medical images owing to a larger inter-slice distance, appropriate window functions are recommended. The application of the approach to 2D and 3D CT and MR images produced satisfactory interpolation results.
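A minimal 1D sketch of the core idea: a fractional-sample shift implemented by multiplying the spectrum by a linear phase, then interleaving the original and shifted samples to double the sampling rate. Boundary handling, windowing for 3D data, and the 2D/3D extension are omitted.

```python
import numpy as np

def subpixel_shift_1d(signal, delta):
    """Shift a sampled signal by `delta` samples (fractional allowed) via the FFT shift theorem.

    Multiplying the spectrum by exp(-2*pi*i*f*delta) corresponds to convolution with a
    shifted sinc in the signal domain, assuming adequate (Nyquist) sampling.
    """
    f = np.fft.fftfreq(len(signal))
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.exp(-2j * np.pi * f * delta)))

def interpolate_by_two(signal):
    """Double the sampling rate by interleaving the original with a half-sample shift."""
    half = subpixel_shift_1d(signal, -0.5)   # values at the half-sample positions
    out = np.empty(2 * len(signal))
    out[0::2] = signal
    out[1::2] = half
    return out

# Check against the analytic signal: the error is at numerical precision for this band-limited case.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
dense = interpolate_by_two(np.sin(3 * x))
x_dense = np.linspace(0, 2 * np.pi, 128, endpoint=False)
print(np.max(np.abs(dense - np.sin(3 * x_dense))))
```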
Is there texture information in standard brain MRI?
Author(s):
Hamid Soltanian-Zadeh;
Reza Nezafat;
Joe P. Windham
Show Abstract
We have developed a texture feature extraction method for MRI utilizing the recently developed multiwavelet theory. Texture-based features are used in eigenimage filtering to enhance analysis results of tumor patient MRI studies. The steps of the proposed method are as follows: (1) Each original image is convolved with a Gaussian filter; this step suppresses the image noise. (2) Each of the resulting images is convolved with eight multiwavelet coefficient matrices. (3) The output of each filter is stored in a separate image (feature plane); this step generates features (images) in which texture information is enhanced. (4) The local energy of each feature is calculated by squaring the feature values; this step converts variance disparities into mean value differences and transforms large values of local pass-band energy into large image gray levels. (5) The eigenimage filter is applied to different sets of MRI images and the results are compared. First, it is applied to the conventional MRI images (T1-weighted, T2-weighted, and proton density weighted). Then, it is applied to the set consisting of these images and the texture feature images generated for them in the previous step. Finally, it is applied to four original images (three conventional and one non-conventional). (6) The eigenimages obtained in the previous step are compared; this step illustrates the presence and significance of the texture information present in MRI and the role of the proposed method in extracting these features. Applications of the proposed method to MRI studies of brain tumor patients illustrate that the method successfully extracts texture features which are useful in tumor segmentation and characterization.
Hierarchical watershed transformation based on a-priori information for spot detection in 2D gel electrophoresis images
Author(s):
Susan Wegner;
Klaus-Peter Pleissner;
Helmut Oswald;
Eckart Fleck
Show Abstract
For spot detection in 2D electrophoresis images, an approach based on the combination of the watershed transformation (WST) with a priori knowledge is presented. To identify spot regions in the over-segmented result of the WST, two types of regions have to be found: regions that correspond to a complete spot and regions that cover only part of a spot. The first localization step, the gray-value analysis, is based on the assumptions that spot regions have significantly higher gray values than the background and that they border on a background region. Since not all remaining regions are spot or partial-spot regions, a curvature analysis is additionally performed. Here the a priori knowledge is used that, regarding the gel image as a surface, the shape of a spot is convex. Consequently, by considering the second derivative, all required spot and partial-spot regions can be obtained as the regions of convex curvature. In a final merging step, all partial-spot regions covering one spot have to be combined into a single spot region. Two spot characteristics are used as merging criteria: a spot should have an approximately elliptical shape, and partial-spot regions of one spot should have a locally convex curvature in a small neighborhood along their boundary.
Advanced 3D image processing techniques for liver and hepatic tumor location and volumetry
Author(s):
Stephane Chemouny;
Henri Joyeux;
Bruno Masson;
Frederic Borne;
Marc Jaeger;
Olivier Monga
Show Abstract
To assist radiologists and physicians in diagnosis, treatment planning, and treatment evaluation in liver oncology, we have developed a fast and accurate segmentation of the liver and its lesions within CT exams. The first step of our method is to reduce the spatial resolution of the CT images. This has two effects: it yields a nearly isotropic 3D data space and drastically decreases the computational time for further processing. In a second step, a 3D non-linear `edge-preserving' smoothing filter is applied throughout the entire exam. In a third step, the 3D regions resulting from the second step are homogeneous enough to allow a fairly simple segmentation process, based on morphological operations under supervisor control, ending up with accurate 3D regions of interest (ROIs) for the liver and all the hepatic tumors. In a fourth step, the ROIs are finally mapped back into the original images, and features such as volume and location are immediately computed and displayed. The segmentation we get is as precise as a manual one but is much faster.
Nonlinear registration of medical images using Cauchy-Navier spline transformation
Author(s):
Mie Sato;
Aboul Ella Hassanien;
Masayuki Nakajima
Show Abstract
In this paper a new image registration methodology for matching anatomical medical images is presented. It is based on a point-to-point matching methodology that uses the Cauchy-Navier spline transformation to model the deformable anatomical behavior associated with non-rigid-body medical image registration. These transformations are illustrated by matching corresponding CT and MR images of the thorax. By applying the Cauchy-Navier spline transformation to landmarks created using a segmentation method, improved performance is achieved. The Cauchy-Navier spline is compared to the thin-plate spline and multiquadric methods.
Elastic registration of MRA brain images using salient blood vessel features
Author(s):
Kit-Cheng Ng;
Timothy S. Newman
Show Abstract
In this paper, a new local registration method is introduced. The method enables regional refinements to an initial coarse global alignment of two magnetic resonance angiography (MRA) volume datasets through the use of snake-based local elastic deformations of blood vessel curves. The curves are deformed based on corresponding salient features (points of bifurcation and high curvature) that are extracted using a multi-stage method. The extraction involves first tracing blood vessel curves from depth-enhanced maximum intensity projections of the original volume data and then fitting B-splines to the traced structures. The framework for a new approach to volume warping, which completes the refinement process for all points in the datasets, is also introduced. The local registration method offers the promise of registering data collected over time from a patient.
Statistical analysis of structural changes in a whole brain based on nonlinear image registration
Author(s):
Christian Gaser;
Stefan Kiebel;
Stefan Riehemann;
Hans-Peter Volz;
Heinrich Sauer
Show Abstract
This paper describes a new method for detecting structural brain differences based on the analysis of deformation fields. Deformations are obtained by an intensity-based nonlinear registration routine which transforms one brain onto another. We present a general multivariate statistical approach to analyze deformation fields in different subjects. This multivariate general linear model provides the implementation of most forms of experimental designs. We apply our method to the brains of 85 schizophrenic patients and 75 healthy volunteers to examine whether low-frequency deformations are sufficiently sensitive to detect regional deviations between the brains of the two groups. We demonstrate the application of the multivariate general linear model to a subtractive design (modeling group differences) and a parametric design (testing a linear relationship between one variable and the deformation field).
Fast voxel-based 2D/3D registration algorithm using a volume rendering method based on the shear-warp factorization
Author(s):
Juergen Weese;
Roland Goecke;
Graeme P. Penney;
Paul Desmedt;
Thorsten M. Buzug;
Heidrun Schumann
Show Abstract
2D/3D registration makes it possible to use pre-operative CT scans for navigation purposes during X-ray fluoroscopy guided interventions. We present a fast voxel-based method for this registration task, which uses a recently introduced similarity measure (pattern intensity). This measure is especially suitable for 2D/3D registration, because it is robust with respect to structures such as a stent visible in the X-ray fluoroscopy image but not in the CT scan. The method uses only a part of the CT scan for the generation of digitally reconstructed radiographs (DRRs) to accelerate their computation. Nevertheless, computation time is crucial for intra-operative application and a further speed-up is required, because numerous DRRs must be computed. For that reason, the suitability of different volume rendering methods for 2D/3D registration has been investigated. A method based on the shear-warp factorization of the viewing transformation turned out to be especially suitable and builds the basis of the registration algorithm. The algorithm has been applied to images of a spine phantom and to clinical images. For comparison, registration results have been calculated using ray-casting. The shear-warp factorization based rendering method accelerates registration by a factor of up to seven compared to ray-casting without degrading registration accuracy. Using a vertebra as feature for registration, computation time is in the range of 3-4s (Sun UltraSparc, 300 MHz) which is acceptable for intra-operative application.
Multimodality image registration based on hierarchical shape representation
Author(s):
Li-Yueh Hsu;
Murray H. Loew;
John Ostuni
Show Abstract
Image registration is a correlation procedure that either allows the complementary study of images obtained from different modalities or enables the analysis of images obtained with the same modality at different times. Applied to a variety of clinical and investigational problems, image registration can offer a major advance in diagnostic imaging. In this paper, we present an automated multi-modality registration algorithm based on hierarchical feature extraction. Two kinds of shape representations, edges and surfaces (skin surface, inner skull surface, and outer brain surface), are extracted hierarchically from the different image modalities. The registration is then performed using the user-specified (but automatically extracted) corresponding features. Both the robustness of the algorithm and the registration accuracy using different registration features are compared in this paper. The preliminary results show that the use of edge and surface features can succeed over a large range of geometric displacements. The results also indicate that neither the edge nor the surface feature is clearly superior in terms of registration accuracy. Using the edge feature could, however, have the advantage of eliminating the surface segmentation step, which requires extra complexity, variability, and time. We have shown that the proposed 3D registration algorithm provides a simple and fast method for automatic registration of CT and MR image modalities. Preliminary results using our registration algorithm are comparable to those obtained by other techniques.
Optimization of image quality for DSA warping registration
Author(s):
Ashoke S. Talukdar;
Arun Krishnan;
David L. Wilson
Show Abstract
X-ray digital subtraction angiography (DSA) images frequently suffer from misregistration artifacts. Commercial systems use whole-image manual and semi-automated registration techniques that can be tedious to use. Frequently, patient motion leads to complex artifacts that whole-image registration cannot remove. Available computer technology makes warping registration feasible and timely. We evaluated six different warping registration algorithms using 10 subjects. Image quality of the subtracted images was evaluated using numerical scoring of specific image quality questions. To aid image quality comparison, images were displayed side-by-side on a single 21-inch monitor. The case mix consisted of 15 DSA images, with significant subtraction artifacts, taken from the feet, legs, abdomen, chest, and head. In 92% of cases, warping registration dramatically improved subtraction image quality while whole-image translation methods showed little or no improvement. It was also found that the most successful warping method varied from case to case. Based on this study, we propose a combination of warping registration techniques.
Effect of ultrasonic transducer frequency on the registration of ultrasound to CT vertebral images
Author(s):
Diane M. Muratore;
Jeannette L. Herring;
Benoit M. Dawant;
Robert L. Galloway Jr.
Show Abstract
Researchers of computer-assisted surgical systems are seeking to reduce the invasiveness of spinal procedures through the use of intra-operative ultrasound (US). Given a favorable registration of vertebral US images to pre-operative CT scans, the individual vertebrae in physical space would be mapped to the patient's corresponding image space. In this work a method is proposed for transcutaneous localization of a lumbar vertebra in US images and a subsequent registration of vertebral surfaces from US and CT. In this study, US scans of a life-size plastic spine phantom were obtained using B-mode transducers with frequencies of 3.5 and 4.5 MHz. The spine was immersed in a water tank and images of the L2 vertebra were captured in the transverse plane. A point-to-surface registration that is a modification of the Besl/McKay algorithm was applied to the extracted US vertebral surface points and a triangulated surface representation of the corresponding CT scans. The results of this registration have been qualitatively assessed, and both data sets visually align along the entire L2 vertebra. Presently, more than 250,000 lumbo-sacral spinal surgeries are performed annually; consequently, minimizing the intervention in this region could have an extensive positive effect for both the procedure and the patient.
Registration and superimposition of a coronary arterial tree onto a bull's eye map of a myocardial SPECT
Author(s):
Naozo Sugimoto;
Ryo Haraguchi;
Shigeru Eiho;
Chikao Uyama;
Yoshio Ishida
Show Abstract
To assist in understanding the relationship between stenotic changes in the coronary arteries and changes in myocardial function, we are developing a method for registration and superimposed display of the coronary arterial tree on a myocardial SPECT (single photon emission computed tomography) study. First, we introduce a fully 3-dimensional registration and display method, in which the 3-dimensionally reconstructed coronary arterial tree data are also transferred to a bull's eye map representation. We also introduce a semi-3-dimensional method. With this method, the original 2-dimensional coronary angiograms are transferred directly, i.e., without 3-dimensional reconstruction, to a bull's eye map representation by referring to a manually defined left ventricular long axis on the angiograms and to the epicardial surface data obtained from SPECT.
Brain imaging registration by correlation of first-order geometry
Author(s):
Diego A. Socolinsky;
Aswin Krishnamoorthy;
Lawrence B. Wolff
Show Abstract
A new method is introduced for the registration of MRI and CT scans of the head, based on the first-order geometry of the images. Registration is accomplished by optimal alignment of gradient vector fields between the respective MRI and CT images. We show that the summation of the squared inner products of gradient vectors between images is well-behaved, having a strongly peaked maximum when the images are exactly registered. This supports our premise that both the magnitude and orientation of edge information are important features for image registration. A number of experimental results are presented demonstrating the accuracy of the method.
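A small sketch of the similarity measure as described (sum of squared inner products of the gradient fields), evaluated on a toy image and shifted copies to show the peak at correct alignment; the optimization strategy used in the paper is not reproduced.

```python
import numpy as np

def gradient_correlation(img_a, img_b):
    """Similarity measure: sum of squared inner products of the two images' gradient fields.

    Large when strong edges coincide and their orientations agree (up to sign).
    """
    gay, gax = np.gradient(img_a.astype(float))
    gby, gbx = np.gradient(img_b.astype(float))
    inner = gay * gby + gax * gbx
    return np.sum(inner ** 2)

# Toy check: the measure peaks when a shifted copy is brought back into alignment.
rng = np.random.default_rng(0)
y, x = np.mgrid[-64:64, -64:64]
fixed = (x ** 2 + y ** 2 < 40 ** 2).astype(float) + 0.05 * rng.normal(size=(128, 128))
scores = [gradient_correlation(fixed, np.roll(fixed, shift, axis=1)) for shift in range(-5, 6)]
print(int(np.argmax(scores)) - 5)    # 0: best score at zero displacement
```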
Assessing the registration of CT-scan data to intraoperative x rays by fusing x rays and preoperative information
Author(s):
Andre P. Gueziec
Show Abstract
This paper addresses a key issue of providing clinicians with visual feedback to validate a computer-generated registration of pre-operative and intra-operative data. With this feedback information, the clinician may decide to proceed with a computer-assisted intervention, revert to a manual intervention, or potentially provide information to the computer system to improve the registration. The paper focuses on total hip replacement (THR) surgery, but similar techniques could be applied to other types of interventions or therapy, including orthopedics, neurosurgery, and radiation therapy. Pre-operative CT data is used to plan the surgery (select an implant type, size, and precise position), and is registered to intra-operative X-ray images, allowing the plan to be executed: milling a cavity with the implant's shape. (Intra-operative X-ray images must be calibrated with respect to the surgical device executing the plan.) One novel technique presented in this paper consists of simulating a post-operative X-ray image of the tissue of interest before performing the procedure, by projecting the registered implant onto an intra-operative X-ray image (corrected for distortion or not), providing clinicians with familiar and easy-to-interpret images. As an additional benefit, this method provides new means for comparing various strategies for registering pre-operative data to the physical space of the operating room.
MRI segmentation based on region growing with robust estimation
Author(s):
Leopoldo Gonzalez-Santos;
Rafael R. Rojas;
Juan H. Sossa-Azuela;
Fernando A. Barrios
Show Abstract
An image segmentation algorithm based on region growing with robust estimation, using the mode of the region elements, is presented. With this algorithm it is possible to do brain MRI segmentation with reasonable results and speed, especially in connected regions, for which region growing methods are well suited. Head MRIs from eight normal volunteers were used to run the segmentation program and the results were validated by an MR neuroradiologist. Border elements can be included with a small correction; white and gray matter were segmented easily, and more complex functional areas can be obtained as well.
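The following sketch illustrates region growing that uses the mode of the region's gray levels as the robust estimate, assuming an 8-bit image and a 4-connected neighborhood; it is a simplified reading of the method, not the authors' implementation.

```python
import numpy as np
from collections import deque

def region_grow_mode(image, seed, tol=10, n_bins=256):
    """Region growing sketch using the mode of the region's intensities as a
    robust estimate of the region value (instead of the mean).
    image: 2-D integer array with values in [0, n_bins); seed: (row, col);
    tol: maximum allowed deviation from the current mode."""
    visited = np.zeros(image.shape, dtype=bool)
    region = np.zeros(image.shape, dtype=bool)
    hist = np.zeros(n_bins, dtype=int)          # running histogram of the region
    queue = deque([seed])
    visited[seed] = True
    while queue:
        r, c = queue.popleft()
        mode = np.argmax(hist) if hist.sum() else image[r, c]
        if abs(int(image[r, c]) - int(mode)) <= tol:
            region[r, c] = True
            hist[int(image[r, c])] += 1
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                        and not visited[rr, cc]):
                    visited[rr, cc] = True
                    queue.append((rr, cc))
    return region
```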
Quantitative intracerebral brain hemorrhage analysis
Author(s):
Sven Loncaric;
Atam P. Dhawan;
Dubravko Cosic;
Domagoj Kovacevic;
Joseph Broderick;
Thomas Brott
Show Abstract
In this paper a system for 3-D quantitative analysis of human spontaneous intracerebral brain hemorrhage (ICH) is described. The purpose of the developed system is to perform quantitative 3-D measurements of the parameters of the ICH region from computed tomography (CT) images. The parameter measured in this phase of the system development is the volume of the hemorrhage region. The goal of the project is to measure parameters for a large number of patients having ICH and to correlate the measured parameters with patient morbidity and mortality.
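The volume measurement itself reduces to counting labeled voxels and scaling by the voxel size; a minimal sketch, assuming the ICH region has already been segmented into a binary mask:

```python
import numpy as np

def hemorrhage_volume_ml(ich_mask, spacing_mm):
    """Sketch of the volume measurement: count the voxels labeled as ICH and
    multiply by the voxel volume.  spacing_mm = (dz, dy, dx) in millimetres,
    taken from the CT header."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return ich_mask.sum() * voxel_volume_mm3 / 1000.0   # mm^3 -> millilitres
```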
Quantification of the progression of CMV infection as observed from retinal angiograms in patients with AIDS
Author(s):
Djamel Brahmi;
Nathalie Cassoux;
Camille Serruys;
Alain Giron;
Phuc Lehoang;
Bernard Fertil
Show Abstract
To support ophthalmologists in their daily routine and enable the quantitative assessment of the progression of Cytomegalovirus infection as observed on series of retinal angiograms, a methodology allowing an accurate comparison of retinal borders has been developed. In order to evaluate the accuracy of borders, ophthalmologists have been asked to repeatedly outline boundaries between infected and noninfected areas. As a matter of fact, the accuracy of drawing relies on local features such as contrast, quality of image, and background, all factors which make the boundaries more or less perceptible from one part of an image to another. In order to estimate the accuracy of retinal borders directly from image analysis, an artificial neural network (a succession of unsupervised and supervised neural networks) has been designed to correlate the accuracy of drawing (as calculated from the ophthalmologists' hand-outlines) with local features of the underlying image. Our method has been applied to the quantification of CMV retinitis. It is shown that border accuracy is properly predicted and characterized by a confidence envelope that allows, after a registration phase based on fixed landmarks such as vessel forks, the evolution of CMV infection to be accurately assessed.
Semantic object segmentation scheme for x-ray body images
Author(s):
Jaeyoun Yi;
Hyun Sang Park;
Jong Beom Ra
Show Abstract
In a segmentation process based on a watershed algorithm, proper seed extraction is very important for segmentation quality because improper seeds can produce undesirable results such as over-segmentation or under-segmentation. An appropriate seed-extraction algorithm is especially indispensable in segmenting X-ray CT body images, where many organs, except lungs and bones, lie in very narrow gray-level ranges with very low contrast. In the proposed scheme, we divide an image into 4 sub-images by windowing its gray-level histogram, and extract proper seeds from each sub-image by a different method according to its characteristics. Then, using all the seeds obtained from the four separated sub-images, we apply the watershed algorithm to complete the image segmentation. The proposed segmentation method has been successfully applied to X-ray CT body images.
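A hedged sketch of the seed-then-watershed idea, using common scipy/scikit-image routines; the gray-level windows, the cleanup parameters, and the gradient-based flooding surface are illustrative placeholders, not the values or the per-window seed methods used in the paper:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed   # marker-based watershed

def segment_with_window_seeds(ct_slice, windows):
    """Seed extraction by gray-level windowing followed by a marker-based
    watershed.  `windows` is a list of (low, high) gray-level ranges, one per
    sub-image (e.g. air/lung, fat, soft tissue, bone)."""
    markers = np.zeros(ct_slice.shape, dtype=int)
    next_label = 1
    for low, high in windows:
        mask = (ct_slice >= low) & (ct_slice < high)
        mask = ndi.binary_opening(mask, iterations=2)     # drop small specks
        labeled, n = ndi.label(mask)
        markers[labeled > 0] = labeled[labeled > 0] + next_label - 1
        next_label += n
    # Flood the gradient magnitude image from the seeds.
    gradient = ndi.gaussian_gradient_magnitude(ct_slice.astype(float), sigma=1.5)
    return watershed(gradient, markers)
```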
Discrete dynamic contour model for mass segmentation in digital mammograms
Author(s):
Guido M. te Brake;
Mark J. Stoutjesdijk;
Nico Karssemeijer
Show Abstract
In recent years, deformable models have become popular in the field of medical image analysis. We have applied a member of this family, a discrete dynamic contour model, to the task of mass segmentation in digital mammograms. The method was compared to a recently published region growing method on a dataset of 214 mammograms. Both methods need a starting point. In a first experiment, for each mass the center of gravity of the annotation was used. In a second experiment, a pixel-based initial detection step was used to generate starting points. The latter starting points are often less favorably located for good segmentation, requiring the methods to be robust. The performance was measured using an overlap criterion based on the annotation made by an experienced radiologist and the segmented region. The discrete contour model proved to be a robust method for segmenting masses, and outperformed a probabilistic region growing method. However, just as for the region growing method, a good choice of the seed point appeared to be of great importance.
Automatic segmentation and measurement of axons in microscopic images
Author(s):
Olivier Cuisenaire;
Eduardo Romero;
C. Veraart;
Benoit M. M. Macq
Show Abstract
We propose a method for the automatic segmentation, recognition and measurement of neuronal fibers in microscopic images of nerves. This permits a quantitative analysis of the distribution of fiber areas, whereas such morphometrical methods are currently limited by the practical impossibility of processing large numbers of fibers in routine histology. First, the image is thresholded to provide a coarse classification between myelin (black) and non-myelin (white) pixels. The resulting binary image is simplified using connected morphological operators. These operators simplify the zonal graph, whose vertices are the connected areas of the binary image. An appropriate set of semantic rules allows us to identify a number of white areas as axon candidates, some of which are isolated, some of which are connected. To separate connected fibers -- candidates sharing the same neighboring black area -- we evaluate the thickness of the myelin ring around each candidate area through a Euclidean distance transformation by propagation, with a stopping criterion on the pixels in the propagation front. Finally, properties of each detected fiber are computed and false alarms are suppressed. The computational cost of the method is evaluated and the robustness of the method is assessed by comparison to the manual procedure. We conclude that the method is fast and accurate for our purpose.
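The ring-thickness step can be approximated with a standard Euclidean distance transform; the sketch below is a simplification of the propagation-with-stopping-criterion scheme described above, with illustrative names:

```python
import numpy as np
from scipy import ndimage as ndi

def myelin_ring_thickness(myelin_mask, axon_mask):
    """Approximate thickness of the myelin ring around one axon candidate.
    myelin_mask: boolean image of myelin (dark) pixels.
    axon_mask:   boolean image of the single axon candidate (white) region.
    The Euclidean distance to the axon, sampled on the outer edge of the
    myelin ring, approximates the distance propagated across the ring."""
    # Distance from every pixel to the axon candidate region.
    dist_to_axon = ndi.distance_transform_edt(~axon_mask)
    # Outer edge of the ring: myelin pixels adjacent to plain background
    # (pixels that are neither myelin nor part of the axon candidate).
    background = ~myelin_mask & ~axon_mask
    outer_edge = ndi.binary_dilation(background) & myelin_mask
    return float(dist_to_axon[outer_edge].mean()) if outer_edge.any() else 0.0
```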
Segmentation and representation of lesions in MRI brain images
Author(s):
Yi Tao;
William I. Grosky;
Lucia J. Zamorano;
Zhaowei Jiang;
JianXing Gong
Show Abstract
In this paper, we address the NSPS (a Neurological Surgery Planning System developed at the Neurological Surgery Department of Wayne State University) approaches for segmenting and representing lesions in MRI brain images. Initially, the 2D segmentation algorithm requires the input of a seed (an individual pixel or a small region) and a threshold to control the formation of a lesion region. The 3D segmentation algorithm requires the input of a seed, along with a threshold computed automatically from the corresponding three sample thresholds of lesion regions in the sagittal, coronal, and axial views, to form a lesion volume. Then, a novel method is developed to represent the segmented lesion regions with feature point histograms, obtained by discretizing and counting the angles produced from the resulting Delaunay triangulation of a set of feature points which characterize the shape of the lesion region. The proposed shape representation technique is translation, scale, and rotation independent. Through various experimental results, we demonstrate the efficacy of the NSPS methodologies. Finally, based on the lesion representation scheme, we present a prototype system architecture for neurological surgery training. The implemented system will work in a Web-based environment, allowing neurosurgeons to query and browse various patient-related medical records in an effective and efficient way.
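A minimal sketch of the feature point histogram idea, assuming 2-D feature points extracted from one slice; a histogram of discretized Delaunay angles is inherently invariant to translation, scaling and rotation of the point set. The bin count and helper names are illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

def feature_point_angle_histogram(points, n_bins=18):
    """Shape descriptor sketch: histogram of the interior angles of the
    Delaunay triangulation of a set of 2-D feature points."""
    tri = Delaunay(points)
    angles = []
    for simplex in tri.simplices:            # each simplex is one triangle
        p = points[simplex]
        for i in range(3):
            a, b, c = p[i], p[(i + 1) % 3], p[(i + 2) % 3]
            v1, v2 = b - a, c - a
            cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, 180.0))
    return hist / hist.sum()                 # normalized angle histogram
```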
Effects of image resolution and segmentation method on automated mammographic mass shape classification
Author(s):
Lori Mann Bruce;
Maria Kallergi
Show Abstract
This article investigates the effects of resolution on the automated segmentation and classification of mammographic masses. A set of 39 mammographic images containing 40 masses are digitized at two resolutions: 220 micrometers with 8 bits per pixel, and 180 micrometers with 16 bits per pixel. An expert mammographer classified the shape of all 40 masses as round, lobular, or irregular, and manually segmented the masses from the lower resolution images. The masses in both sets are automatically segmented with a Markov Random Field-based method. Two groups of shape features are extracted from the segmented masses in each set of images: (1) compactness, radial distance mean, standard deviation, entropy, zero-crossing count, and roughness index, and (2) wavelet-based scalar-energy features. Linear discriminant analysis and a minimum Euclidean distance classifier are used to automatically separate the mass shapes into the three classes determined by the expert. The effects of the resolution and method of segmentation on the classification process are analyzed for both groups of shape features.
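The group-(1) features can be computed from the segmented mass boundary; the sketch below uses common textbook definitions of these descriptors, which may differ in detail from the authors' exact formulations:

```python
import numpy as np

def radial_distance_features(contour):
    """Shape features from a closed mass contour.
    contour: (N, 2) array of boundary points ordered along the contour."""
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    d_norm = d / d.max()                              # scale invariance
    mean, std = d_norm.mean(), d_norm.std()
    # Radial distance entropy from a histogram of normalized distances.
    p, _ = np.histogram(d_norm, bins=32, range=(0, 1))
    p = p / p.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    # Zero crossings of the radial distance about its mean.
    zero_cross = int(np.sum(np.diff(np.sign(d_norm - mean)) != 0))
    # Roughness: mean absolute difference between neighboring radial distances.
    roughness = np.mean(np.abs(np.diff(d_norm)))
    # Compactness: perimeter^2 / (4*pi*area); equals 1.0 for a circle.
    closed = np.vstack([contour, contour[:1]])
    perim = np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))
    x, y = closed[:, 0], closed[:, 1]
    area = 0.5 * abs(np.sum(x[:-1] * y[1:] - x[1:] * y[:-1]))  # shoelace formula
    compactness = perim ** 2 / (4 * np.pi * area)
    return dict(mean=mean, std=std, entropy=entropy,
                zero_crossings=zero_cross, roughness=roughness,
                compactness=compactness)
```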
3D ultrasound image segmentation using multiple incomplete feature sets
Author(s):
Liexiang Fan;
David M. Herrington;
Peter Santago II
Show Abstract
We use three features, intensity, texture and motion, to obtain robust results for the segmentation of intracoronary ultrasound images. Using a parameterized equation to describe the lumen-plaque and media-adventitia boundaries, we formulate the segmentation as a parameter estimation through a cost functional based on the posterior probability, which can handle the incompleteness of the features in ultrasound images by employing outlier detection.
3D segmentation of a medical image using the geometric active contour model
Author(s):
Dong-pyo Jang;
Yong-ho Cho;
Sun Il Kim
Show Abstract
Accurate segmentation is a key issue in medical image analysis. Segmentation aids the tasks of object representation and structure quantification. Currently, many methods have been introduced for 3D slice medical image segmentation. Conventionally, each slice image was segmented in 2D space and then reconstructed in 3D space for 3D analysis, which requires more time and effort. Thus, this paper presents a modified geometric active contour model in the 3D domain for reducing the user's effort, shortening the processing time and detecting exact object edges. This method uses a new speed function based on the level set approach developed by Sethian and the fast marching method for its fast implementation. The main idea of the level set methodology is to embed the propagating interface as a particular level set of a higher dimensional function (hypersurface) flowing along gradient and curvature forces. This technique retains the attractive topological and geometric flexibility of the contour in recovering objects with complex shapes and unknown topologies. Another feature is that the method is easily applied to the 3D domain. The fast marching method is a fast algorithm that exploits the fact that the interface in the level set method flows in one direction. We apply the proposed model to various 2D and 3D images such as synthetic images, CT and MR angiograms. The results confirm that the presented model works naturally and efficiently with the desired features of 2D and 3D medical images.
Automatic segmentation of blood vessels from MR angiography volume data by using fuzzy logic technique
Author(s):
Syoji Kobashi;
Yutaka Hata;
Yasuhiro Tokimoto;
Makato Ishikawa
Show Abstract
This paper shows a novel medical image segmentation method applied to blood vessel segmentation from magnetic resonance angiography volume data. The principal idea of the method is the fuzzy information granulation concept. The method consists of 2 parts: (1) quantization and feature extraction, (2) iterative fuzzy synthesis. In the first part, volume quantization is performed with the watershed segmentation technique. Each quantum is represented by three features: vascularity, narrowness and histogram consistency. Using these features, we estimate the fuzzy degrees of each quantum for knowledge models about MRA volume data. In the second part, the method increases the fuzzy degrees by selectively synthesizing neighboring quanta. As a result, we obtain some synthesized quanta. We regard them as fuzzy granules and classify them into blood vessel or fat by evaluating the fuzzy degrees. In the experimental results, three dimensional images are generated using target maximum intensity projection (MIP) and surface shaded display. The comparison with conventional MIP images shows that regions which are unclear in the conventional images are clearly depicted in ours. The qualitative evaluation done by a physician shows that our method can extract the blood vessel region and that the results are useful for diagnosing cerebral diseases.
Segmentation of CT images using the watershed transformation on graphs with a-priori tissue models
Author(s):
Susan Wegner;
Ralf Huetter;
Helmut Oswald;
Eckart Fleck
Show Abstract
The combination of the watershed transformation on graphs with a tissue classification is presented. The watershed transformation on graphs results in hierarchical segmentation volumes that differ in region number and size. If an anatomical object corresponds to a region in a segmentation volume, it has to be selected, since the region would be merged with the most similar neighboring region in the following volume. Such an object selection can be done using object descriptions. A possible approach is presented in this paper.
Multiobject segmentation of brain structures in 3D MRI using a computerized atlas
Author(s):
Matthieu Ferrant;
Olivier Cuisenaire;
Benoit M. M. Macq
Show Abstract
We present a hierarchical multi-object surface-based deformable atlas for the automatic localization and identification of brain structures in MR images. The atlas is a multi-object mesh of 3D fully connected surfaces built upon a face-centered cubic grid. The registration of the atlas to a patient's MR image is done in two steps: a global registration followed by a multi-object active surface deformation. First, the cortical surface and the ventricular system are segmented using directional watersheds. The global registration is a second-degree transformation whose coefficients minimize a distance measure between these surfaces and the equivalent surfaces in the atlas. As a refinement step, the globally registered atlas surfaces are locally deformed using multi-object active surfaces. The external force driving the surfaces towards the edges in the image is a decreasing function of the gradient, and includes prior image information. The active surface equations are then solved using the finite element method. The surfaces of the multi-object mesh are deformed in a hierarchical way, starting with objects exhibiting very well defined features in the image and ending with objects showing less obvious features. Experiments involving several sub-cortical atlas objects are presented.
Hierarchical approach for automated segmentation of the brain volume from MR images
Author(s):
Li-Yueh Hsu;
Murray H. Loew;
Reza Momenan
Show Abstract
Image segmentation is considered one of the essential steps in medical image analysis. Cases such as classification of tissue structures for quantitative analysis, reconstruction of anatomical volumes for visualization, and registration of multi-modality images for complementary study often require segmentation of the brain to accomplish the task. In many clinical applications, parts of this task are performed either manually or interactively. Not only is this process often tedious and time-consuming, it also introduces additional external factors of inter- and intra-rater variability. In this paper, we present a 3D automated algorithm for segmenting the brain from various MR images. This algorithm consists of a sequence of pre-determined steps: First, an intensity window for initial separation of the brain volume from the background and non-brain structures is selected by fitting probability curves to the intensity histogram. Next, a 3D isotropic volume is interpolated and an optimal threshold value is determined to construct a binary brain mask. Morphological and connectivity processes are then applied to this 3D mask to eliminate non-brain structures. Finally, a surface extraction kernel is applied to extract the 3D brain surface. Preliminary results from the same subjects with different pulse sequences are compared with manual segmentation. The automatically segmented brain volumes are compared with the manual results using the correlation coefficient and percentage overlap. The automatically detected surfaces are then compared with the manual contours in terms of RMS distance. The introduced automatic segmentation algorithm is effective on different sequences of MR data sets without any parameter tuning. It requires no user interaction, so variability introduced by manual tracing or interactive thresholding can be eliminated. Currently, the introduced segmentation algorithm is applied in automated inter- and intra-modality image registration. It will furthermore be used in different applications such as quantitative analysis of normal and abnormal brain tissues.
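A condensed sketch of the thresholding, morphology and connectivity steps, with the histogram-derived thresholds passed in as plain numbers; the structuring element and iteration counts are illustrative, and the surface extraction step is omitted:

```python
import numpy as np
from scipy import ndimage as ndi

def brain_mask(volume, low, high):
    """Intensity window -> binary mask -> morphology and connectivity ->
    largest connected component as the brain.  `low`/`high` stand in for the
    thresholds derived by fitting probability curves to the histogram."""
    mask = (volume >= low) & (volume <= high)
    # Morphological opening removes thin connections to scalp/skull.
    mask = ndi.binary_opening(mask, structure=np.ones((3, 3, 3)), iterations=2)
    # Keep only the largest 3-D connected component.
    labeled, n = ndi.label(mask)
    if n == 0:
        return mask
    sizes = ndi.sum(mask, labeled, index=range(1, n + 1))
    brain = labeled == (np.argmax(sizes) + 1)
    # Fill interior holes (e.g. ventricles) to obtain a solid brain mask.
    return ndi.binary_fill_holes(brain)
```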
Three-dimensional active surface approach to lymph node segmentation
Author(s):
David M. Honea;
Wesley E. Snyder
Show Abstract
A three-dimensional active surface model has been suggested as a possible solution to the difficult problem of segmenting lymph nodes in X-ray CT images. In this paper, a computationally simple active surface, or balloon, is proposed which requires minimal user interaction or a priori shape knowledge. A structure is proposed for the balloon model in which each balloon point is guaranteed a fixed number of neighbors, and a method is provided for adding points to the model while still maintaining that structural regularity. Equations are provided for deriving surface energy from 3D shape and 3D image data. Minimal user interaction is required, with only a single point somewhere inside the node needed to initialize the algorithm. The balloon naturally inflates to find the correct surface due to a unique candidate point selection algorithm that is biased in favor of outward moves. Preliminary results show the model to be successful in finding boundaries in synthetic 3-D images.
Consistent segmentation of repeat CT scans for growth assessment in pulmonary nodules
Author(s):
Binsheng Zhao;
William Kostis;
Anthony P. Reeves;
David Yankelevitz;
Claudia I. Henschke
Show Abstract
Nodule growth is a key characteristic of malignancy. The measurement of nodule diameter on chest radiographs has been unsatisfactory due to insufficient accuracy and reproducibility. Additionally, the frequent use of high resolution CT scanners has increased the detection rate of very small nodules. On one hand, small nodules present even greater diagnostic difficulties and, on the other hand, they are more frequently benign, resulting in higher rates of unnecessary surgery. In this paper we present a 3-D algorithm to improve the consistency of nodule segmentation on multiple scans. The multi-criterion, multi-scan segmentation algorithm has been developed based on the fact that a typical small pulmonary nodule has a distinct density difference at its boundary and a relatively compact shape, and that other tissues in the lung do not change in size over time. Our preliminary results with in-vivo nodules have shown the potential of applying this practical 3-D segmentation algorithm in clinical settings.
Segmentation of acute pyelonephritis area on kidney SPECT images using binary shape analysis
Author(s):
Chia-Hsiang Wu;
Yung-Nien Sun;
Nan-Tsing Chiu
Show Abstract
Acute pyelonephritis is a serious disease in children that may result in irreversible renal scarring. The ability to localize the site of urinary tract infection and the extent of acute pyelonephritis has considerable clinical importance. In this paper, we segment the acute pyelonephritis area from kidney SPECT images. A two-step algorithm is proposed. First, the original images are translated into binary versions by automatic thresholding. Then the acute pyelonephritis areas are located by finding convex deficiencies in the obtained binary images. This work provides important diagnostic information for physicians and improves the quality of medical care for children with acute pyelonephritis.
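A sketch of the two-step idea, using Otsu's method as a stand-in for the paper's automatic thresholding and the convex hull to expose convex deficiencies; thresholds, cleanup parameters and the minimum-area value are illustrative:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import convex_hull_image

def convex_deficiency(spect_slice, min_area=20):
    """Automatic thresholding of a kidney SPECT slice, then location of
    convex deficiencies (hull minus region), which correspond to photopenic
    defects in the renal cortex."""
    kidney = spect_slice > threshold_otsu(spect_slice)
    kidney = ndi.binary_opening(kidney, iterations=1)      # clean small noise
    hull = convex_hull_image(kidney)
    deficiency = hull & ~kidney                            # concave "bites"
    # Keep only deficiencies above a minimal size to suppress boundary noise.
    labeled, n = ndi.label(deficiency)
    keep = np.zeros_like(deficiency)
    if n:
        sizes = ndi.sum(deficiency, labeled, index=range(1, n + 1))
        for i, s in enumerate(sizes, start=1):
            if s >= min_area:
                keep |= labeled == i
    return keep
```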
Technique for evaluation of semiautomatic segmentation methods
Author(s):
Fei Mao;
Jeremy D. Gill;
Aaron Fenster
Show Abstract
In this paper we describe an evaluation technique that quantifies both the accuracy and variability of semiautomatic segmentation algorithms. The particular interest of the study is the evaluation of an active contour method for 2-D carotid artery lumen segmentation in ultrasound images. The active contour method used is known as the Geometrically Deformed Model (GDM). The segmentation method to be evaluated requires a single seed to be placed in the target region by the operator. The evaluation approach is based on the contour probability distribution (CPD), which is obtained by generating contours of the object using a set of possible seed locations. A contour matching procedure provides local displacement measures between any two contours, which in turn allow the calculation of the local CPD of a group of contours. The mean contour can be compared to an operator-defined contour to provide accuracy measurements, and the variance can provide measures of local and global variability. The evaluation results from multiple images can be pooled to generate statistics for a more complete evaluation of a semi-automatic segmentation method.
Segmentation and feature extraction of cervical spine x-ray images
Author(s):
L. Rodney Long;
George R. Thoma
Show Abstract
As part of an R&D project in mixed text/image database design, the National Library of Medicine has archived a collection of 17,000 digitized x-ray images of the cervical and lumbar spine which were collected as part of the second National Health and Nutrition Examination Survey (NHANES II). To make this image data available and usable to a wide audience, we are investigating techniques for indexing the image content by automated or semi-automated means. Indexing of the images by features of interest to researchers in spine disease and structure requires effective segmentation of the vertebral anatomy. This paper describes work in progress toward this segmentation of the cervical spine images into anatomical components of interest, including anatomical landmarks for vertebral location, and segmentation and identification of individual vertebrae. Our work includes developing a reliable method for automatically fixing an anatomy-based coordinate system in the images, and work to adaptively threshold the images, using methods previously applied by researchers in cardioangiography. We describe the motivation for our work and present our current results in both areas.
3D MR image segmentation of prostate gland using two scan planes and Fourier descriptor technique
Author(s):
Pi Chih Wang;
Kang-Ping Lin;
Shyhliang A. Lou;
Hong-Dun Lin;
Te-Shin Chen
Show Abstract
The purpose of this paper is to develop a method for prostate gland segmentation in MR images by combining two scan planes with a Fourier descriptor technique. Not all MR image slices show a clear boundary between the prostate gland and its surrounding soft tissues. This study addresses the problem with a two-scan-plane method and reconstructs the prostate gland with a deformable image model that integrates the Fourier descriptor method and an energy continuity concept. The two-scan-plane method uses two MR prostate image sets, one axial and the other coronal. The coronal image set supplements the base and the apex regions of the prostate gland in the axial image set, so the prostate gland segmentation in MR images is obtained more correctly. The known boundary images of the axial and coronal images can then be reconstructed into a 3-D image with the Fourier descriptor technique. The technique integrates spatial coordinates, the energy continuity concept and Fourier descriptors to describe objects in time sequences or spatially related images. The model can therefore estimate interpolated or extrapolated images from the known images and reconstruct them into a three-dimensional object efficiently and accurately.
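The Fourier descriptor step can be sketched as a truncated FFT of the complex boundary signal; interpolating the descriptors of adjacent slices then gives the in-between contours mentioned above. The names and the number of retained harmonics are illustrative:

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Fourier descriptor representation of a closed prostate boundary.
    contour: (N, 2) ordered boundary points of one slice."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary signal
    coeffs = np.fft.fft(z)
    # Keep only the lowest `n_coeffs` harmonics (both ends of the spectrum).
    kept = np.zeros_like(coeffs)
    kept[:n_coeffs // 2] = coeffs[:n_coeffs // 2]
    kept[-n_coeffs // 2:] = coeffs[-n_coeffs // 2:]
    return kept

def reconstruct_contour(kept_coeffs):
    """Smooth boundary reconstructed from the truncated descriptor."""
    z = np.fft.ifft(kept_coeffs)
    return np.column_stack([z.real, z.imag])
```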
Texture analysis and tissue segmentation of cryosection images
Author(s):
Tamara S. Williams;
Jennifer L. Casper
Show Abstract
This paper outlines the exploration of two methods to detect texture in a digital cryosection image from the Visible Human Project. For the purpose of this research, texture is defined as a regular or irregular placement of color in an image. A higher-level decision-making algorithm was employed to extract different body tissues: fat, muscle, and bone. This algorithm was designed on the premise that each body tissue has a different visible texture. Another method utilized an artificial intelligence approach, a neural net, to extract textured tissues. Each problem demands a unique neural net; hence, this neural net is customized in terms of the image dataset and the goal of texture detection.
Interactive tools for image segmentation
Author(s):
Marcel P. Jackowski;
Ardeshir Goshtasby;
Martin Satter
Show Abstract
Interactive tools for segmenting 2-D and 3-D images are presented. These tools allow a user to quickly revise a segmentation result obtained from an automatic method. A thresholding technique is described that finds a unique threshold value for each homogeneous region in an image. The threshold value is found such that variance in the region is minimized under change in the threshold value. Curve- and surface-fitting methods are described that can accurately represent a region boundary in 2-D or 3-D with a parametric curve or a surface, respectively. A curve or a surface is optimized to minimize the number of control points representing a region with a prescribed accuracy. The optimized curve or surface is then revised by moving its control points interactively. Once a curve or a surface is found to accurately enclose a region of interest, it is quantized to produce the final 2-D region contour or 3-D region surface. These interactive tools can be used to revise unsatisfactory results obtained from any automatic segmentation method.
Fuzzy fusion of results of medical image segmentation
Author(s):
Denise Guliato;
Rangaraj M. Rangayyan;
Walter A. Carnielli;
Joao Antonio Zuffo;
J. E. Leo Desautels
Show Abstract
We propose an abstract concept of data fusion based on finite automata and fuzzy sets to integrate and evaluate different sources of information, in particular results of multiple image segmentation procedures. We give an example of how the method may be applied to the problem of mammographic image segmentation to combine results of region growing and closed-contour detection techniques. We further propose a measure of fuzziness to assess the agreement between a segmented region and a reference contour. Results of application to breast tumor detection in mammograms indicate that the fusion results agree with reference contours provided by a radiologist to a higher extent than the results of the individual methods.
Iterative method for automatic detection of masses in digital mammograms for computer-aided diagnosis
Author(s):
Victor Gimenez Martinez;
Daniel Manrique Gamo;
Juan Rios;
Amparo Vilarrasa
Show Abstract
An iterative algorithm has been developed for the automatic detection of breast masses in digitized mammograms. The procedure is divided into two stages. The first is based on histogram analysis of the input image; the second applies a topological analysis to the results of the first stage. The final output is a set of regions of interest that the system defines as suspicious areas. These suspicious regions should be studied further in order to reach a final diagnosis. The developed system may be used together with any other suspicious-area diagnosis algorithm. In this way, a computer-assisted diagnosis (CAD) program to assist radiologists in mammography interpretation could easily be developed.
Automated calculation of the axial orientation of intravascular ultrasound images by fusion with biplane angiography
Author(s):
Andreas Wahle;
Guido P. M. Prause;
Clemens von Birgelen;
Raimund Erbel;
Milan Sonka
Show Abstract
This paper presents an approach for fusion of the two major cardiovascular imaging modalities, angiography and intravascular ultrasound (IVUS). While the path of the IVUS catheter, which follows the vessel curvature during pullback, is reconstructed from biplane angiograms, cross-sectional information about the vessel is derived from IVUS. However, after mapping of the IVUS frames into their correct 3-D locations along the catheter path, their orientations remain ambiguous. We determine the relative catheter twisting analytically, followed by a statistical method for finding the absolute orientation from the out-of-center position of the IVUS catheter. Our results, obtained from studies with cadaveric pig hearts and from three patients undergoing routine coronary intervention, showed that the algorithm matched the absolute orientation well. In all tested cases, the method determined the visually correct orientations of the IVUS frames. Local distortions were reliably identified and discarded.
Application of image processing techniques for contrast enhancement in dense breast digital mammograms
Author(s):
Fatima de Lourdes dos Santos Nunes;
Homero Schiabel;
Rodrigo Henrique Benatti
Show Abstract
Dense breasts, which are usually characteristic of women under 40 years of age, often hamper early detection of breast cancer. In this work we present the application of some image processing techniques intended to enhance the contrast in dense breast images, with regard to the detection of clustered microcalcifications. The procedure began by identifying in the literature the main techniques used for contrast enhancement of mammographic images. The results indicate that, in general: (1) as expected, the overall performance of the CAD scheme for cluster detection decreased when applied exclusively to dense breast images, compared to its application to a set of images without this characteristic; (2) most of the techniques for contrast enhancement used successfully on generic mammography image databases are not able to enhance structures of interest in databases formed only of dense breast images, due to the very poor contrast between microcalcifications, for example, and other tissues. These findings stress, therefore, the need to develop a methodology specifically for this type of image in order to provide better conditions for the detection of suspicious breast structures in this group of women.
Characterization of skin tumors in dermatoscopic images
Author(s):
Camille Serruys;
Djamel Brahmi;
Alain Giron;
Joseph Vilain;
Raoul Triller;
Bernard Fertil
Show Abstract
Purpose: The prognosis of melanoma, an invasive and malignant skin tumor, strongly relies on early detection. Unfortunately, differentiating early melanomas from other less dangerous pigmented lesions is a difficult task even for trained observers, since they may have very similar physical characteristics. Dermatoscopy, a new non-invasive technique which makes subsurface structures of skin accessible to in vivo examination, provides standardized images of black tumors that seem convenient for numerical analysis. The objective of this project is to develop a computer-based diagnostic system which takes advantage of dermatoscopic images to characterize black tumors and help to detect melanoma. Methods: Dermatologists ground their diagnosis on the observation of some characteristic features in images of black tumors. Similarly, our approach consists in classifying parts of images of skin tumors (called windows hereafter) by a two-stage procedure. First, a contextual coding of windows is achieved by a GHA network (Generalized Hebbian Algorithm). The second stage involves a classical feedforward network (a multilayer perceptron) which performs a classification of the coded windows. Both stages rely on learning to achieve their task. The GHA network operates a Principal Component-like analysis of windows. During that phase, sets of primitive images fitted to various contexts are constituted, each set being appropriate for the description of some aspects of the windows (contrast, texture, border, color, ...). Windows can subsequently be coded by projection on these bases. Finally, a supervised learning is carried out to build up the classifier, using parts of characterized images with respect to the features under consideration. Results: Most of the interesting features detectable in black tumors can be observed in 16*16 pixel windows, provided the resolution is properly chosen. The analysis of such windows by our system shows that classification is properly achieved when at least 20 primitive windows are considered for window coding. Preliminary results dealing with the detection of several features of lesions have been found encouraging. Conclusion: The use of a model-free approach, applied directly to the images and based on learning by example, has been found efficient. Adapting this approach to the detection of melanoma appears promising.
Oral lesion classification using true-color images
Author(s):
Artur Chodorowski;
Ulf Mattsson;
Tomas Gustavsson
Show Abstract
The aim of the study was to investigate effective image analysis methods for the discrimination of two oral lesions, oral lichenoid reactions and oral leukoplakia, using only color information. Five different color representations (RGB, Irg, HSI, I1I2I3 and La*b*) were studied and their use for color analysis of mucosal images evaluated. Four common classifiers (Fisher's linear discriminant, Gaussian quadratic, k-Nearest Neighbor (kNN) and Multilayer Perceptron) were chosen for the evaluation of classification performance. The feature vector consisted of the mean color difference between abnormal and normal regions extracted from digital color images. Classification accuracy was estimated using resubstitution and 5-fold cross-validation methods. The best classification results were achieved in the HSI color system using a linear discriminant function. In total, 70 out of 74 (94.6%) lichenoid reactions and 14 out of 20 (70.0%) leukoplakias were correctly classified using only color information.
Multiresolution simulated annealing for brain image analysis
Author(s):
Sven Loncaric;
Zoran Majcenic
Show Abstract
Analysis of biomedical images is an important step in the quantification of various diseases such as human spontaneous intracerebral brain hemorrhage (ICH). In particular, the study of outcome in patients having ICH requires measurements of various ICH parameters such as hemorrhage volume and their change over time. A multiresolution probabilistic approach for segmentation of CT head images is presented in this work. This method views the segmentation problem as a pixel labeling problem. In this application the labels are: background, skull, brain tissue, and ICH. The proposed method is based on the Maximum A-Posteriori (MAP) estimation of the unknown pixel labels. The MAP method maximizes the a-posteriori probability of the segmented image given the observed (input) image. A Markov random field (MRF) model has been used for the posterior distribution. The MAP estimate of the segmented image has been determined using the simulated annealing (SA) algorithm. The SA algorithm is used to minimize the energy function associated with the MRF posterior distribution function. A multiresolution SA (MSA) has been developed to speed up the annealing process and is presented in detail in this work. A knowledge-based classification based on brightness, size, shape and relative position toward other regions is performed at the end of the procedure. The regions are identified as background, skull, brain, ICH and calcifications.
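A minimal single-resolution sketch of MAP-MRF labeling with Metropolis-type simulated annealing, using a Potts prior and a Gaussian data term; the class means, noise level and cooling schedule are illustrative, and the paper's multiresolution speedup is omitted:

```python
import numpy as np

def sa_mrf_segmentation(image, means, sigma=15.0, beta=1.5,
                        t0=4.0, cooling=0.97, n_sweeps=60, rng=None):
    """MAP labeling with a Potts MRF prior, optimized by simulated annealing.
    image: 2-D array; means: one expected intensity per label
    (e.g. background, skull, brain tissue, ICH)."""
    rng = np.random.default_rng() if rng is None else rng
    n_labels = len(means)
    labels = np.argmin([(image - m) ** 2 for m in means], axis=0)
    T = t0
    rows, cols = image.shape
    for _ in range(n_sweeps):
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                cur, new = labels[r, c], rng.integers(n_labels)
                if new == cur:
                    continue
                nbrs = [labels[r - 1, c], labels[r + 1, c],
                        labels[r, c - 1], labels[r, c + 1]]
                def energy(k):
                    data = (image[r, c] - means[k]) ** 2 / (2 * sigma ** 2)
                    prior = beta * sum(k != n for n in nbrs)
                    return data + prior
                dE = energy(new) - energy(cur)
                # Metropolis rule: always accept downhill, sometimes uphill.
                if dE < 0 or rng.random() < np.exp(-dE / T):
                    labels[r, c] = new
        T *= cooling                      # geometric cooling schedule
    return labels
```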
Results of wavelet image compression on CT-based clinical radiation oncology treatment planning
Author(s):
Charles L. Smith;
Wei-Kom Chu;
Randy Wobig;
Hong-Yang Chao;
Charles Enke
Show Abstract
Computed Tomography (CT) images have become an essential element in the derivation of a clinical radiation oncology treatment plan. The intent of this study was to assess how wavelet compression influences calculated dose distribution in radiotherapy treatment planning due to changes in pixel values in the CT. Chest CT images for radiotherapy were put into a 2D wavelet compression engine and compressed to a ratio of 30:1. A radiotherapy treatment plan was constructed to generate a dose distribution within the CT image. Images subjected to compression were analyzed using Dose Volume Histograms (DVHs) and compared to the DVHs generated for the uncompressed chest CT. The lossy wavelet compression operation irreversibly changes pixel values in the CT. These changes in the CT can give rise to errors in the dose calculations performed by treatment planning systems that account for tissue inhomogeneities in the image. The DVHs for 30:1 compression using this wavelet engine were highly similar to the DVHs obtained using the uncompressed image. A paired comparison test was used to compare the DVH data. Image compression of CT for radiation therapy treatment planning results in changes in the dose distribution within the patient.
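A cumulative dose-volume histogram can be computed directly from the dose grid and a structure mask; a minimal sketch with illustrative names:

```python
import numpy as np

def cumulative_dvh(dose, structure_mask, bin_width=0.1):
    """Cumulative dose-volume histogram for one structure.
    dose: 3-D dose grid (Gy); structure_mask: boolean mask of the structure."""
    d = dose[structure_mask]
    edges = np.arange(0.0, d.max() + bin_width, bin_width)
    # Fraction of the structure volume receiving at least each dose level.
    volume_fraction = np.array([(d >= e).mean() for e in edges])
    return edges, volume_fraction
```

Comparing the curves computed on the original and the 30:1 wavelet-compressed CT, for example by paired differences at matched dose levels, corresponds to the comparison described in the abstract.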
Random sets technique for information fusion applied to estimation of brain functional images
Author(s):
Therese M. Smith;
Patrick A. Kelly
Show Abstract
A new mathematical technique for information fusion based on random sets, developed and described by Goodman, Mahler and Nguyen (The Mathematics of Data Fusion, Kluwer, 1997), can be useful for the estimation of functional brain images. Many image estimation algorithms employ prior models that incorporate general knowledge about sizes, shapes and locations of brain regions. Recently, algorithms have been proposed using specific prior knowledge obtained from other imaging modalities (for example, Bowsher, et al., IEEE Trans. Medical Imaging, 1996). However, there is more relevant information than is presently used. A technique that permits the use of additional prior information about activity levels would improve the quality of prior models, and hence, of the resulting image estimate. The use of random sets provides this capability because it allows seemingly non-statistical (or ambiguous) information such as that contained in inference rules to be represented and combined with observations in a single statistical model, corresponding to a global joint density. This paper illustrates the use of this approach by constructing an example global joint density function for brain functional activity from measurements of functional activity, anatomical information, clinical observations and inference rules. The estimation procedure is tested on a data phantom with Poisson noise.
Describing the structural shape of melanocytic lesions
Author(s):
Tim Kam Lee;
M. Stella Atkins;
Richard P. Gallagher;
Calum E. MacAulay;
Andy Coldman;
David I. McLean M.D.
Show Abstract
This paper presents an automatic computer system for analyzing the structural shape of cutaneous melanocytic lesion borders. The computer system consists of two steps: preprocessing the skin lesion images and lesion border shape analysis. In the preprocessing step, the lesion border is extracted from the skin images after the dark thick hairs are removed by a program called DullRazor. The second step analyzes the structural shape of the lesion border using a new measure called the sigma-ratio. The new measure is derived from the scale-space filtering technique with an extended scale-space image. When comparing the new measure with other common shape descriptors, such as the compactness index and the fractal dimension, the sigma-ratio is more sensitive to structural protrusions and indentations. In addition, the extended scale-space image can be used to pinpoint the locations of the structural indentations and protrusions, the potential problem areas of the lesion.
Multiscale image restoration for photon imaging systems
Author(s):
Ghada Jammal;
Albert Bijaoui
Show Abstract
Nuclear medicine imaging is a widely used commercial imaging modality which relies on photon detection as the basis of image formation. As a diagnostic tool, it is unique in that it documents organ function and structure. It is a way to gather information that may otherwise be unavailable or require surgery. Practical limitations on imaging time and the amount of activity that can be administered safely to patients are serious impediments to substantial further improvements in nuclear medicine imaging. Hence, improvements of image quality via optimized image processing represent a significant opportunity to advance the state of the art in this field. We present in this paper a new multiscale image restoration method that is concerned with eliminating one of the major sources of error in nuclear medicine imaging, namely Poisson noise, which degrades images in both quantitative and qualitative senses and hinders image analysis and interpretation. The paper then quantitatively evaluates the performance of the proposed method.
Analysis of the effects of discrete wavelet compression on automated mammographic mass shape classification
Author(s):
Lori Mann Bruce;
Ravi Kalluri
Show Abstract
This pilot study investigates the effect of discrete wavelet compression on automated mammographic mass shape classification. Commonly used shape features are extracted from masses for uncompressed and compressed images. These features include radial distance mean, standard deviation, entropy, zero-crossing count, roughness index, area-ratio, and compactness. The effects of the compression on these features are analyzed. Next, linear discriminant analysis is used to appropriately weight the features, and a minimum Euclidean distance classifier is used to separate the mass shapes into three classes: round, nodular, and stellate. The classification results are compared between the uncompressed and compressed images.
CT projection estimation and applications to fast and local reconstruction
Author(s):
Guy M. Besson
Show Abstract
In this paper, a straightforward method of estimating the CT projections is applied to simplified pre-processing, simplified reconstruction filtering, and to low-dose and local CT image reconstruction. The method relies on the projection-to-projection data redundancy that is shown to exist in CT. In the pre-processing application, the output of a few angularly sparse, fully pre-processed projections is utilized in a linearization model to estimate directly the output of pre-processing for all the other projections. In the reconstruction filtering application, with projections i and k being fully filtered, the low frequency components of an intermediate projection j are estimated by a linear combination of projections i and k. That estimate is then subtracted from projection j, and the resulting high-frequency components are then filtered without zero-padding. By linearity, the same combination of the fully filtered projections i and k is added back to projection j. A factor-of-two simplification is obtained, which can be leveraged for reconstruction speed or cost reduction. The local reconstruction application builds on the filtering method by showing that truncated data is sufficient for calculating a filtered projection's high frequencies, while a very simple projection completion model is shown to be effective in estimating the low frequencies. Image quality comparisons are described.
Morphological texture-based classification of abnormalities in mammograms
Author(s):
S. Baeg;
Nasser Kehtarnavaz;
Edward R. Dougherty
Show Abstract
This paper presents a computer-based classification scheme for masses in mammograms. The developed scheme is based on an introduced measure of surface fluctuation that captures the texture roughness associated with the surface of an abnormality mass area. First, local maxima/minima along the rows and columns of a marked abnormality area in a mammogram are located. Morphological erosion is then applied to these maxima/minima to obtain the degree of surface roughness or coarseness within this area. The erosion is done for many sizes of a structuring element. This process is similar to the texture 'feeling' one gets by moving a finger horizontally and vertically over a surface. The developed scheme was tested on 108 mammograms with pathologically proven results: 55 benign and 53 malignant masses. All mammograms were digitized at 50-micron resolution. The Receiver Operating Characteristic (ROC) curves for different sizes of a structuring element were plotted. The average area underneath these curves was 0.92. The corresponding clinical evaluation by the radiologist gave an area of 0.86. The results obtained indicate the potential of using this classification scheme as an electronic second opinion to lower the number of unnecessary biopsies.
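The multiscale erosion idea can be sketched as a profile of gray-scale erosions with growing structuring elements; this simplified version omits the row/column maxima-minima extraction and uses illustrative sizes:

```python
import numpy as np
from scipy import ndimage as ndi

def erosion_roughness_profile(roi, max_size=15):
    """Surface-fluctuation sketch: grey-scale erosion of the marked
    abnormality area with growing structuring elements.  The faster the
    eroded surface drops, the rougher the texture."""
    roi = roi.astype(float)
    profile = []
    for size in range(1, max_size + 1, 2):             # odd structuring elements
        eroded = ndi.grey_erosion(roi, size=(size, size))
        profile.append((roi - eroded).mean())           # average surface loss
    return np.asarray(profile)                          # feature vector per mass
```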
Evaluation of cerebral 31-P chemical shift images utilizing statistical parametric mapping
Author(s):
Stefan Riehemann;
Christian Gaser;
Hans-Peter Volz;
Heinrich Sauer
Show Abstract
We present a technique for the evaluation of two-dimensional (2D) nuclear magnetic resonance (NMR) chemical shift images (CSI) to analyze spatial differences in metabolite distributions and/or concentrations between groups of probands. Thus, chemical shift imaging is not only used as a localization technique for NMR spectroscopy, but the information of the complete spectroscopic image is used for the evaluation process. 31P CSI of the human brain were acquired with a Philips Gyroscan ACSII whole-body scanner at 1.5 T. CSI for different phosphorus metabolites were generated, all representing the same anatomical location. For each metabolite the CSI of two groups of subjects were compared with each other using the general linear model implemented in the widely distributed SPM96 software package. With this approach, even covariates or confounding variables like age or medication can be considered. As an example of the application of this technique, variations in the distribution of the 31P metabolite phosphocreatine between unmedicated schizophrenic patients and healthy controls were visualized. To our knowledge, this is the first approach to analyze spatial variations in metabolite concentrations between groups of subjects on the basis of chemical shift images. The presented technique opens a new perspective in the evaluation of 2D NMR spectroscopic data.
Hierarchical automated clustering of cloud point set by ellipsoidal skeleton: application to organ geometric modeling from CT-scan images
Author(s):
Frederic Banegas;
Dominique Michelucci;
Marc Roelens;
Marc Jaeger
Show Abstract
We present a robust method for automatically constructing an ellipsoidal skeleton (e-skeleton) from a set of 3D points taken from NMR or TDM images. To ensure steadiness and accuracy, all points of the objects are taken into account, including the inner ones, which is different from the existing techniques. This skeleton will be essentially useful for object characterization, for comparisons between various measurements and as a basis for deformable models. It also provides good initial guess for surface reconstruction algorithms. On output of the entire process, we obtain an analytical description of the chosen entity, semantically zoomable (local features only or reconstructed surfaces), with any level of detail (LOD) by discretization step control in voxel or polygon format. This capability allows us to handle objects at interactive frame rates once the e-skeleton is computed. Each e-skeleton is stored as a multiscale CSG implicit tree.
Physiologic classification of bovine ovarian follicles with wavelet packet texture analysis
Author(s):
Ujwala Mannivannan;
Gordon E. Sarty;
Heather Sirounis;
Jaswant Singh;
Gregg P. Adams;
Roger A. Pierson
Show Abstract
The purpose of the study was to develop a computer tool that can distinguish between atretic (non-ovulatory) and viable (ovulatory) bovine ovarian follicles on the basis of ultrasonographic image texture. Ovarian follicles of heifers (n = 14) were removed at four physiologically important time points during the estrous cycle. Regions of interest (ROI) in the center of the follicular antrum of the largest follicle in in vitro images were selected manually. The follicles were classified as viable or atretic on the basis of texture quantified with two metrics: the standard deviation (D) and energy (E) of gray-scale values in the wavelet-transformed ROI images. The sensitivities S and specificities Sp varied between S = 0.30, Sp = 1.00 and S = 0.93, Sp = 0.67 when a minimum distance classifier was used. The computer algorithm was able to distinguish between atretic and viable ovarian follicles based on ultrasonographic textures of follicular fluid that were not discernible to the human eye. The ROC analysis demonstrated that the texture classification of follicular fluid may be developed into a clinically useful diagnostic tool. It is anticipated that the computer tool will allow diagnostic assessment of the reproductive competence of ovarian follicles in women.
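A sketch of the two texture metrics computed on a single-level 2-D Haar transform, standing in for the wavelet packet transform actually used; the ROI is assumed rectangular with even side lengths:

```python
import numpy as np

def haar_texture_metrics(roi):
    """Standard deviation (D) and energy (E) of wavelet detail coefficients
    of an ROI, using a hand-rolled single-level 2-D Haar transform."""
    x = roi.astype(float)
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)     # row averages
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)     # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)    # coarse approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    details = np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()])
    D = details.std()                               # standard deviation metric
    E = np.mean(details ** 2)                       # energy metric
    return D, E, ll
```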
Source detection and separation of tomographic artifacts in nuclear medicine imaging using the concept of weighted scaling indices
Author(s):
Bjoern Poppe;
Gerald Kirchner;
Helmut Fischer
Show Abstract
The Scaling-Index-Method (SIM) has been extended by applying weighting functions which take into account the information available on the noise characteristics. The transformation of an image by the weighted Scaling-Index-Method yields a measure which depends on the local correlation of the image pixels and the noise structure in the image. In nuclear medicine imaging, the SIM-based methods enable the extraction of sources superimposed by strong noise and the discrimination of real sources from tomographic artifacts.
Hand radiograph analysis for fully automatic bone age assessment
Author(s):
Philippe Chassignet;
Teodor Nitescu;
Max Hassan;
Ruxandra Stanescu
Show Abstract
This paper describes a method for the fully automatic and reliable segmentation of the bones in a radiograph of a child's hand. The problem consists in identifying the contours of the bones, and the difficulty lies in the large variability of the anatomical structures according to age, hand pose or individual. The model shall not force any standard interpretation, hence we use a simple hierarchical geometric model that provides only the information required for the identification of the chunks of contours. The resulting phalangeal and metacarpal segmentation has proved robust over a set of many hundreds of images, and measurements of shapes, sizes, areas, ..., are now possible. The next step consists in extending the model for more accurate measurements and also for the localization of the carpal bones.
Adaptive median filter algorithm to remove impulse noise in x-ray and CT images and speckle in ultrasound images
Author(s):
Amit R. Sawant;
Herbert D. Zeman;
Diane M. Muratore;
Sanjiv S. Samant;
Frank A. DiBianca
Show Abstract
An adaptive median filter algorithm to remove impulse noise in x-ray images and speckle in ultrasound images is presented. The ordinary median filter tends to distort or lose fine details in an image. Also, a significant amount of the original information in the image is altered. The proposed algorithm considers the local variability over the entire image to ensure that the fine details are preserved and more than 90 percent of the original information is retained. The robustness of the algorithm is demonstrated by applying it to images from different modalities like diagnostic x-ray, CT, portal imaging and ultrasound.
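A hedged sketch of an adaptive median filter in which only pixels that deviate strongly from their local median are replaced, so fine detail elsewhere is preserved; the deviation rule is an illustrative local-variability criterion, not necessarily the authors' exact test:

```python
import numpy as np
from scipy import ndimage as ndi

def adaptive_median(image, window=3, k=3.0):
    """Replace a pixel by the local median only if it looks like an impulse,
    i.e. it deviates from the local median by more than k times the local
    median absolute deviation (a robust spread estimate)."""
    img = image.astype(float)
    med = ndi.median_filter(img, size=window)
    mad = ndi.median_filter(np.abs(img - med), size=window)
    impulse = np.abs(img - med) > k * (mad + 1e-6)
    out = img.copy()
    out[impulse] = med[impulse]
    return out
```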
Voxel clustering for visible human data
Author(s):
Zhongke Wu;
Edmond C. Prakhash
Show Abstract
In this paper we describe the design and implementation of a new approach for voxel clustering. Clustering helps to group voxels with similar properties to enable manipulation of voxels as a single cluster. This new algorithm performs a clustering based on a marching slice segmentation algorithm. We propose to use this clustering for volume deformation, volume analysis and volume morphing. The algorithm has been implemented and tested using the visible human dataset.
Progressive multiresolution reconstruction in MRI
Author(s):
Yong Man Ro
Show Abstract
A progressive reconstruction technique is presented for magnetic resonance (MR) imaging. To do so, the matching pursuit (MP) algorithm is applied to MR imaging, with a Fourier basis set used as the dictionary of the MP algorithm. In the progressive reconstruction algorithm proposed in this paper, the inner products of the object signal and the basis functions are utilized instead of the inner products of the residual signals and the basis functions used in the original matching pursuit algorithm. By reconstructing the object hierarchically, one can achieve quick recognition of the image during data acquisition: at each phase encoding step, the image is reconstructed and updated progressively, whereas conventionally image reconstruction is performed only after all phase encoding steps. To verify the proposed technique, computer simulations and experiments with a whole body MRI system were performed.
Pattern-histogram-based temporal change detection using personal chest radiographs
Author(s):
Yucel Ugurlu;
Takashi Obi;
Akira Hasegawa;
Masahiro Yamaguchi;
Nagaaki Ohyama
Show Abstract
The accurate and reliable detection of temporal changes from a pair of images is of considerable interest in medical science. Traditional registration and subtraction techniques can be applied to extract temporal differences when the object is rigid or corresponding points are obvious. However, in radiological imaging, loss of depth information, the elasticity of the object, the absence of clearly defined landmarks and three-dimensional positioning differences constrain the performance of conventional registration techniques. In this paper, we propose a new method to detect interval changes accurately without using an image registration technique. The method is based on the construction of a so-called pattern histogram and a comparison procedure. The pattern histogram is a graphic representation of the frequency counts of all allowable patterns in the multi-dimensional pattern vector space. The K-means algorithm is employed to partition the pattern vector space successively. Any differences in the pattern histograms imply that different patterns are involved in the scenes. In our experiment, a pair of chest radiographs of pneumoconiosis is employed and the changing histogram bins are visualized on both images. We found that the method can be used as an alternative way of detecting temporal change, particularly when precise image registration is not available.
Pulmonary organ analysis method and its evaluation based on thoracic thin-section CT images
Author(s):
Akira Tanaka;
Tetsuya Tozaki;
Yoshiki Kawata;
Noboru Niki;
Hironobu Ohmatsu;
Ryutaro Kakinuma;
Masahiro Kaneko;
Kenji Eguchi;
Noriyuki Moriyama
Show Abstract
To diagnose lung cancer and determine whether a nodule is malignant or benign, it is important to understand the spatial relationship between the abnormal nodule and the other pulmonary organs. However, the lung field has a very complicated structure, so it is difficult to understand the connectivity of the pulmonary organs from thin-section CT images. This method consists of two parts. The first is the classification of the pulmonary structure based on anatomical information. The second is the quantitative analysis that is then applicable to differential diagnosis, such as the differentiation of malignant from benign abnormal tissue.
Lung cancer detection based on helical CT images using curved-surface morphology analysis
Author(s):
Hiroshi Taguchi;
Yoshiki Kawata;
Noboru Niki;
Hitoshi Satoh;
Hironobu Ohmatsu;
Ryutaro Kakinuma;
Kenji Eguchi;
Masahiro Kaneko;
Noriyuki Moriyama
Show Abstract
Lung cancer is known as one of the most difficult cancers to cure, and detection at an early stage allows medical treatment to limit the danger. A conventional technique that assists detection uses helical CT, which provides 3D cross-sectional images of the lung. However, mass screening based on helical CT images produces a considerable number of images to diagnose; this time-consuming workload makes it difficult to use in the clinic. To increase the efficiency of the mass screening process, we have proposed a computer-aided diagnosis (CAD) system, and we expect that the proposed technique will increase diagnostic confidence. In this paper, we describe lung cancer detection based on helical CT images using curved-surface morphology analysis. First, we extract the lung area from the original image. Second, we compute the shape index of the lung area. Third, we extract regions of interest (ROIs) from the computed shape index values. Finally, we apply a diagnosis rule using a neural network and detect the suspicious regions. We show the results of applying our algorithm to helical CT images of 390 patients.
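The shape index step can be illustrated with a simplified 2-D sketch: principal curvatures are approximated by the eigenvalues of a Gaussian-smoothed Hessian of the slice intensity, and the index is formed as s = (2/pi) * arctan((k1 + k2)/(k2 - k1)). Sign and range conventions vary between papers, so this is an assumed formulation rather than the authors' exact definition on 3-D curved surfaces.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shape_index_2d(image, sigma=2.0, eps=1e-12):
    """Approximate Koenderink-style shape index on a 2-D CT slice.

    Principal curvatures k1 >= k2 are approximated by the eigenvalues of a
    Gaussian-smoothed Hessian of the image intensity; treat the output only
    as a relative "cup ... cap" shape descriptor.
    """
    Iyy = gaussian_filter(image, sigma, order=(2, 0))
    Ixx = gaussian_filter(image, sigma, order=(0, 2))
    Ixy = gaussian_filter(image, sigma, order=(1, 1))
    disc = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    k1 = (Ixx + Iyy) / 2.0 + disc          # larger eigenvalue
    k2 = (Ixx + Iyy) / 2.0 - disc          # smaller eigenvalue
    return (2.0 / np.pi) * np.arctan((k1 + k2) / (k2 - k1 - eps))

# Candidate nodule pixels can then be selected by thresholding the index in
# the band that corresponds to dome/cap-like structures.
```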
CAD system for coronary calcifications based on helical CT images
Author(s):
Yuji Ukai;
Noboru Niki;
Hitoshi Satoh;
Sigeru Watanabe;
Hironobu Ohmatsu;
Kenji Eguchi;
Noriyuki Moriyama
Show Abstract
In this paper, we describe a computer-assisted diagnosis algorithm for coronary calcifications based on helical X-ray CT images acquired during mass screening for lung cancer. Our diagnostic algorithm consists of four processes. First, we choose the heart slices from the CT images taken during mass screening and classify them into three sections. Second, we extract the heart region on each slice using the shape of the lung area and the vertebral body. Third, candidate regions of coronary calcification are detected by a difference-calculus and thresholding process. Finally, to increase the effectiveness of the diagnostic processing, we remove artifacts from the candidate regions using diagnostic rules we have defined. We show the results of applying our algorithm to helical CT images of 462 patients analyzed for lung cancer screening.
Computer-aided diagnosis system for lung cancer based on retrospective helical CT images
Author(s):
Hitoshi Satoh;
Yuji Ukai;
Noboru Niki;
Kenji Eguchi;
Kiyoshi Mori;
Hironobu Ohmatsu;
Ryutaro Kakinuma;
Masahiro Kaneko;
Noriyuki Moriyama
Show Abstract
In this paper, we present a computer-aided diagnosis (CAD) system for lung cancer that detects nodule candidates at an early stage from the present and earlier helical CT screenings of the thorax. We developed an algorithm that automatically compares the slice images of the present and earlier CT scans to assist retrospective comparative reading. The algorithm consists of ROI detection and shape analysis based on a comparison of each slice image in the present and earlier CT scans. The slice images of both scans are displayed in parallel and analyzed quantitatively to detect changes in size and intensity. We validated the efficiency of this algorithm by applying it to image data from mass screening of 50 subjects (150 CT scans in total); the algorithm compared the slice images correctly in most combinations from the physician's point of view. We also validated the efficiency of the algorithm that automatically detects lung nodule candidates within the CAD system, applying it to the helical CT images of 450 subjects. Currently, we are carrying out a clinical field test program using the CAD system. The results of our CAD system indicate good performance when compared with the physicians' diagnoses, and the experimental results indicate that our CAD system is useful for increasing the efficiency of the mass screening process: CT screening of the thorax can be performed using the CAD system as a counterpart to the double-reading technique currently used in helical CT screening programs, rather than by film display.
Color adjustment techniques to improve utility of stereo flicker chronoscopy and chronometry assessment of serial optic disk photographs in glaucoma patients
Author(s):
Robert H. Eikelboom;
Kanagasingam Yogesan;
Christopher J. Barry;
Ludmila Jitskaia;
Phillip H. House;
William H. Morgan
Show Abstract
The aim of this study was to develop a computerized stereo-flicker chronoscopy and chronometry system to improve the technique of neuroretinal optic disc rim assessment. Digitized stereo photographs of 22 eyes of glaucoma patients were analyzed subjectively by computerized flickering of serial images, and objectively by measuring the width of the neuroretinal rim at 18 positions around the optic disc. A major source of error was identified as color changes in the images over time. Color adjustment algorithms were developed and the assessments and measurements were repeated. For chronometry after color adjustment there was improvement in most of the measures: agreement (50% to 73%), specificity (45% to 84%), positive agreement (50% to 71%) and negative agreement (50% to 73%). Sensitivity remained constant at about 55%.
Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling
Author(s):
Christiaan Schiepers;
Carl K. Hoh;
Magnus Dahlbom;
Hsiao-Ming Wu;
Michael E. Phelps
Show Abstract
PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but its perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high, and linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling, and output (tissue) functions can be generated simultaneously. The method is simple, requires no sophisticated operator interaction and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
Computerized diagnosis of breast calcifications using specimen radiography and simulated calcifications
Author(s):
Janne J. Naeppi;
Peter B. Dean;
Olli Nevalainen;
Sakari Toikkanen
Show Abstract
Several image-degrading factors limit the diagnosis of breast calcifications from preoperative mammograms. We use specimen radiographs and algorithmically 3D-simulated calcifications to produce high-quality data for testing a computerized differential diagnosis system. The preliminary results show that a computer can indeed differentiate explicitly between several mammographically characteristic calcification types. For a radiologist, such a detailed diagnosis could be more useful than simply characterizing the malignancy of the calcifications. The results point to new possibilities for future diagnosis systems based on direct digital preoperative imaging.
3D reconstruction of clustered microcalcifications from two mammograms: information preservation
Author(s):
Rainer Stotzka;
Juergen Haase;
Tim Oliver Mueller
Show Abstract
This work describes the three-dimensional reconstruction of clustered microcalcifications based on only two digitized mammograms. First, the mammograms are examined separately to detect suspicious areas automatically. A further investigation separates microcalcifications from other structures. Based on an optimized region matching and a specially adapted inverse discrete Radon transform, the corresponding volume is estimated from the two projections and visualized as a continuously rotating object. But do two projections of a cluster carry enough information to reconstruct its three-dimensional arrangement sufficiently well? We use Shannon's definition of information to estimate a lower bound on the preserved information, defined as the ratio of the average information contained in the projections to the average information contained in the volume, for simplified scenarios. Assuming two orthogonal projections of a cubic volume containing k binary representations of microcalcification positions, the average information in the projections is determined by the combinatorial number of admissible arrangements and the size n^3 of the volume. The combinatorial number of legal three-dimensional arrangements of microcalcification positions describes the average information carried by the volume. We show that the amount of preserved information in the projections is more than 95% if k equals n/2 positions are found in both projections; it exceeds 98% if k equals n/4 positions are set.
Measurement of hippocampal volume changes in serial MRI scans
Author(s):
Julia Anne Schnabel;
Louis Lemieux;
U. C. Wieshmann;
Simon Robert Arridge
Show Abstract
We present a new method for the detection and measurement of volume changes in human hippocampi in serial Magnetic Resonance Imaging (MRI). The method follows a two-stage approach: (1) precise co-registration and intensity matching of the initial (baseline) and follow-up scan, and (2) refinement and segmentation propagation of the hippocampi outlines drawn in the baseline scan by an expert observer to the matched scan (the co-registered and intensity-matched follow-up scan of the time series). The first step is performed using MRreg, a rigid registration tool based on cross-correlation and intensity matching, and the second step makes use of the concept of active contour models for tracking the hippocampi outlines in the time series.
Reliable identification of sphere-shaped femoral heads in 3D image data
Author(s):
Heinrich Martin Overhoff;
Sven Ehrich;
Ute von Jan
Show Abstract
A new method is presented which enables reliable measurement of the femoral head sphere parameters (center coordinates and diameter) from tomographic image data, even when the raw data are erroneous. The hip joints of 13 newborns were scanned with a self-developed 3-D ultrasound system. After automatic image segmentation, the femoral head is represented by spatially arranged voxel clouds. The 3-D image data are substantially corrupted by different types of errors. Moreover, the data describe only a segment of a sphere, whose area is about 10% of the full sphere, which makes the identification problem harder. The problem of fitting the sphere parameters is solved by a robust technique based on rejection strategies for irrelevant points and data sets. The method was applicable in 21 of 26 cases. Substantial differences between automatically and expert-determined sphere parameters were only observed for highly corrupted data sets, where the identification problem is inherently unstable. The identification method yielded correct and reliable identification of geometric measures from 3-D ultrasound image volumes and promises to be applicable to other parameterized geometries and other tomographic imaging modalities such as X-ray CT or MRI.
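A minimal sketch of a robust sphere fit in the spirit described: an algebraic least-squares fit combined with a simple trimming rule that repeatedly rejects the worst-fitting points. The trimming fraction and iteration count are assumptions; the paper's rejection strategies are more elaborate.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit. points: (N, 3) array.
    Returns (center, radius)."""
    A = np.c_[2.0 * points, np.ones(len(points))]
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def robust_fit_sphere(points, n_iter=10, keep=0.8):
    """Refit repeatedly, each time rejecting the points whose distance to the
    current sphere surface deviates the most (a simple trimming strategy
    standing in for the paper's rejection rules)."""
    pts = np.asarray(points, dtype=float)
    for _ in range(n_iter):
        center, radius = fit_sphere(pts)
        residual = np.abs(np.linalg.norm(pts - center, axis=1) - radius)
        cutoff = np.quantile(residual, keep)
        pts = pts[residual <= cutoff]
    return fit_sphere(pts)
```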
Visualization of a newborn's hip joint using 3D ultrasound and automatic image processing
Author(s):
Heinrich Martin Overhoff;
Djordje Lazovic;
Ute von Jan
Show Abstract
Graf's method is a successful procedure for diagnostic screening of developmental dysplasia of the hip. In a defined 2-D ultrasound (US) scan, which virtually cuts through the hip joint, landmarks are interactively identified to derive congruence indicators. Because the indicators do not reflect the spatial joint structure, and the femoral head is not clearly visible in the US scan, 3-D US is used here to gain insight into the spatial form of the hip joint. Hip joints of newborns were free-hand scanned using a conventional ultrasound transducer with a localizer system fixed to the scanhead. To overcome examiner-dependent findings, the landmarks were detected by automatic segmentation of the image volume. The landmark image volumes and an automatically determined virtual sphere approximating the femoral head were visualized color-coded on a computer screen. The visualization was found to be intuitive and to simplify diagnosis substantially. By visualizing the 3-D relations between the acetabulum and the femoral head, the reliability of diagnosis is improved because the entire joint geometry is taken into account.
Motion-compensated digital subtraction angiography
Author(s):
Magnus Hemmendorff;
Hans Knutsson;
Mats T. Andersson;
Torbjorn Kronander
Show Abstract
Digital subtraction angiography, whether based on traditional X-ray or MR, suffers from patient motion artifacts. Until now, the usual remedy has been to pixel shift by hand or, in some cases, to perform a global pixel shift semi-automatically. This is time-consuming and cannot handle rotations or locally varying deformations over the image. We have developed a fully automatic algorithm that provides motion compensation in the presence of large local deformations. Our motion compensation is very accurate for ordinary motions, including large rotations and deformations, and it does not matter if the motions are irregular over time. For most images, it takes about a second per image to reach adequate accuracy. The method is based on using the phase from banks of quadrature filters tuned to different directions and frequencies. Unlike traditional methods based on optical flow and correlation, our method is more accurate and less susceptible to disturbing changes in the image, e.g. a moving contrast bolus. The implications for common practice are that radiologists' time can be significantly reduced in ordinary peripheral angiographies and that the number of retakes due to large or local motion artifacts will be much reduced.
Noise-resistant weak-structure enhancement for digital radiography
Author(s):
Martin Stahl;
Til Aach;
Thorsten M. Buzug;
Sabine Dippel;
Ulrich Neitzel
Show Abstract
Today's digital radiography systems mostly use unsharp-masking-like image enhancement techniques based on splitting input images into two or three frequency channels. This approach allows enhancement of very small structures (edge enhancement) as well as enhancement of global contrast (harmonization). However, structures of medium size are not accessible to such enhancement. We develop and test a nonlinear enhancement algorithm based on hierarchically repeated unsharp masking, resulting in a multiscale architecture that allows consistent access to structures of all sizes. The algorithm is noise-resistant in the sense that it prevents unacceptable noise amplification. Clinical tests performed in the radiology departments of two major German hospitals so far strongly indicate the superior performance and high acceptance of the new processing.
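A compact sketch of hierarchically repeated unsharp masking with a soft nonlinearity that limits the gain applied to noise-level details; the pyramid depth, gain and noise parameter are assumptions, not the clinical algorithm's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_unsharp(image, n_levels=4, gain=1.5, noise_level=2.0):
    """Hierarchically repeated unsharp masking.

    At each scale the detail layer (image minus its blurred version) is
    amplified, but details whose magnitude is comparable to the expected
    noise level receive a gain close to 1 to limit noise amplification.
    """
    base = image.astype(float)
    enhanced_details = []
    for level in range(n_levels):
        sigma = 2.0 ** level                 # coarser blur at every level
        blurred = gaussian_filter(base, sigma)
        detail = base - blurred
        # Soft nonlinearity: small (noise-like) details get gain ~1,
        # larger structures get the full gain.
        weight = 1.0 + (gain - 1.0) * (np.abs(detail) /
                                       (np.abs(detail) + noise_level))
        enhanced_details.append(weight * detail)
        base = blurred                       # recurse on the residual
    return base + sum(enhanced_details)
```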
Adaptive anisotropic noise filtering for magnitude MR data
Author(s):
Jan Sijbers;
Arnold Jan den Dekker;
Marleen Verhoye;
Anne-Marie Van der Linden;
Dirk Van Dyck
Show Abstract
In general, conventional noise filtering schemes applied to magnitude magnetic resonance (MR) images assume Gauss-distributed noise. Magnitude MR data, however, are Rice distributed. Not incorporating this knowledge inevitably leads to biased results, in particular when applying those filters in regions with a low signal-to-noise ratio. In this work, we show how the Rice data probability distribution can be incorporated to construct a noise filter that is far less biased.
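One standard way to exploit the Rice distribution, shown below as an assumed illustration rather than the authors' adaptive anisotropic filter, is to use the second-moment relation E[M^2] = A^2 + 2*sigma^2: averaging the squared magnitude locally and subtracting 2*sigma^2 yields a far less biased amplitude estimate at low SNR.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rician_corrected_filter(magnitude, sigma, size=3):
    """Locally averaged, Rician-bias-corrected signal estimate.

    Averages M^2 in a neighborhood, subtracts the Rician bias term
    2*sigma^2, and takes the square root; negative values are clipped to
    zero in background regions.
    """
    m2 = uniform_filter(magnitude.astype(float) ** 2, size=size)
    return np.sqrt(np.maximum(m2 - 2.0 * sigma ** 2, 0.0))
```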
Microcalcification texture analysis in a hybrid system for computer-aided mammography
Author(s):
Galina L. Rogova;
Paul C. Stomper;
Chih-Chung Ke
Show Abstract
Characterization of microcalcifications with a high level of confidence is a very challenging problem, since microcalcifications are very small and the difference between benign and malignant clusters is often very subtle. The overall goal of the presented research is to develop a hybrid evidential system for characterization of microcalcifications in order to provide radiologists with a computerized decision aid. The hybrid system intelligently combines a domain-knowledge-based subsystem with a computer vision subsystem to improve the confidence level of microcalcification characterization. This paper is mainly devoted to the description of the computer vision part of the hybrid system. The computer vision subsystem is a hierarchical evidential classifier that computes evidence about the class membership of individual microcalcifications based on their texture and then uses this evidence in a neural network for cluster characterization. The texture of each individual microcalcification is represented by two features: the fractal dimension and a four-dimensional vector defined by coefficients of the Gabor expansion of the microcalcification image. The results obtained in our experiment demonstrate the feasibility of using this method in the hybrid system.
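The fractal-dimension feature can be estimated, for example, with a box-counting scheme on a binary texture mask; the sketch below is an assumed illustration (box sizes, masking strategy) rather than the paper's exact estimator, and the Gabor coefficients are not reproduced here.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary texture
    mask: count occupied boxes at several scales and fit a line to
    log(count) versus log(1/box size)."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in box_sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)),
                          np.log(np.array(counts) + 1e-12), 1)
    return slope
```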
Automatic detection of pulmonary nodules in low-dose screening thoracic CT examinations
Author(s):
Martin Fiebich;
Christian Wietholt;
Bernhard C. Renger;
Samuel G. Armato III;
Kenneth R. Hoffmann;
Dag Wormanns;
Stefan Diederich
Show Abstract
Computed tomography of the chest can be used as a screening method for lung cancer in a high-risk population. However, the detection of lung nodules is a difficult and time-consuming task for radiologists. The developed technique should improve the sensitivity of lung nodule detection without producing too many false-positive nodules. In a study evaluating the feasibility of lung cancer screening, about 1400 thoracic studies were acquired. Scanning parameters were 120 kVp, 5 mm collimation, a pitch of 2, and a reconstruction index of 5 mm, resulting in data sets of about 60 to 70 images per exam. In the images the detection technique first eliminates all air outside the patient, then soft tissue and bony structures are removed. In the remaining lung fields a three-dimensional region detection is performed and rule-based analysis is used to detect possible lung nodules. This technique was applied to a small subset (n equals 17) of the above studies. Computation time is about 5 min on an O2 workstation. The use of low-dose exams proved not to be a hindrance to the detection of lung nodules. All of the nodules (n equals 23), except one with a size of 3 mm, were detected. The false-positive rate was less than 0.3 per image. We have developed a technique which might help the radiologist in the detection of pulmonary nodules in CT exams of the chest.
Feature choice for detection of cancerous masses by constrained optimization
Author(s):
Galina L. Rogova;
Chih-Chung Ke;
Raj S. Acharya;
Paul C. Stomper
Show Abstract
This paper reports progress in research on the detection of cancerous masses with an evidential constrained optimization method. The method performs unsupervised partitioning of mammograms into homogeneous regions using 'generic' labels. Domain knowledge is employed to forbid certain configurations of regions during segmentation in order to reduce the false alarm rate. A constrained stochastic relaxation algorithm is used to build an optimal label map that separates tissue and masses. At the heart of this mammogram partitioning procedure is an evidential disparity measure function that estimates the similarity of two blocks of pixels in the feature space. The specific objective of the research described in this paper is the selection of independent features that represent the difference between tissue and mass texture more adequately for any type of lesion and give the best segmentation results when combined in the disparity measure. Three types of features were selected as the result of our experiments: the fractal dimension, a vector computed from pixel values, and a vector computed from the coefficients of the Gabor expansion of the pixel block. Experiments with the MIAS database have been conducted and show the feasibility of using these features.
Homothetical warping of brain MR images
Author(s):
Amir Ghanei;
Hamid Soltanian-Zadeh
Show Abstract
We have developed a new method for warping MR brain images to other brain images or to atlas data. We first use a deformable contour model to extract and warp the boundaries of the two brain images. A balloon force is used in this stage to ensure good matching of the final contour to the brain boundaries regardless of the initial contour. The deformable contour model captures the general shape of each brain from its boundary contour, after which the outer boundaries can be mapped to each other. A mesh grid coordinate system is then constructed for each brain by applying a distance transformation to the resulting contours. The first image is mapped to the other image based on a one-to-one mapping between the different layers defined by the mesh grid coordinate system.
Improving x-ray image resolution using subpixel shifts of the detector
Author(s):
Jean-Pierre Bruandet;
Jean-Marc Dinten
Show Abstract
The resolution of digitized images is linked to the detector array pixel size, and aliasing effects result from a mismatch between the detector sampling and the signal bandwidth. The aim of this study is to develop a super-resolution algorithm for X-ray images. Our technique uses controlled horizontal and vertical subpixel shifts. The generalized sampling theorem of Papoulis, based on a multichannel approach, is the theoretical justification for the recovery of a high-resolution image from a set of low-resolution ones. A higher-resolution image is recovered by minimizing a quadratic criterion; an iterative relaxation method is used to compute the minimum. To regularize the problem, a priori knowledge about the signal is introduced in order to counteract noise effects. Because regularization and super-resolution have opposing effects, an adapted regularization that preserves discontinuities has to be used. The results show that our algorithm recovers high-frequency components of X-ray images without noise amplification. An analysis of real acquisitions in terms of the modulation transfer function (MTF) shows that this method yields a 'virtual' detector better than a low-resolution one and equivalent to a real high-resolution detector.
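A much-simplified sketch of the reconstruction step, assuming integer shifts on the high-resolution grid, average-pooling as the detector model, plain gradient descent, circular boundary handling, and a quadratic (non-edge-preserving) smoothness penalty; the paper's adapted, discontinuity-preserving regularization is not reproduced.

```python
import numpy as np

def downsample(x, factor):
    """Average-pool a high-resolution image to the detector resolution
    (the image size is assumed divisible by the factor)."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def super_resolve(low_res_frames, shifts, factor, n_iter=200, step=0.5, lam=0.01):
    """Recover a high-resolution image from subpixel-shifted low-resolution
    frames by gradient descent on a quadratic data term plus a quadratic
    smoothness penalty.

    low_res_frames: list of 2-D arrays; shifts: list of integer (dy, dx)
    offsets expressed in high-resolution pixels.
    """
    h, w = low_res_frames[0].shape
    x = np.zeros((h * factor, w * factor))
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y, (dy, dx) in zip(low_res_frames, shifts):
            shifted = np.roll(x, (-dy, -dx), axis=(0, 1))
            residual = downsample(shifted, factor) - y
            # Transpose of (shift then average-pool): replicate, rescale, shift back.
            up = np.kron(residual, np.ones((factor, factor))) / factor ** 2
            grad += np.roll(up, (dy, dx), axis=(0, 1))
        # Quadratic (Laplacian) smoothness term.
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        grad -= lam * lap
        x -= step * grad
    return x
```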
The need to develop guidelines for the evaluation of medical image processing procedures
Author(s):
Irene Buvat;
Virginie Chameroy;
Florent Aubry;
Melanie Pelegrini;
Georges El Fakhri;
Celine Huguenin;
Habib Benali;
Andrew Todd-Pokropek;
Robert Di Paola
Show Abstract
Evaluations of procedures in medical image processing are notoriously difficult and often unconvincing. From a detailed bibliographic study, we analyzed the way evaluation studies are conducted and extracted a number of entities common to any evaluation protocol. From this analysis, we propose here a generic evaluation model (GEM). The GEM includes the notion of hierarchical evaluation, identifies the components which always have to be defined when designing an evaluation protocol, and shows the relationships that exist between these components. By suggesting rules that apply to the different components of the GEM, we also show how this model can be used as a first step towards guidelines for evaluation.
3D image display of fetal ultrasonic images by thin shell
Author(s):
Shyh-Roei Wang;
Yung-Nien Sun;
Fong-Ming Chang;
Ching-Fen Jiang
Show Abstract
Due to its convenience and non-invasiveness, ultrasound has become an essential tool in obstetrics for the diagnosis of fetal abnormalities during pregnancy. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of the ultrasound data. In addition, to accelerate rendering, a thin shell is defined from the detected contours to separate the observed organ from unrelated structures. In this way we can support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.
3D segmentation and quantification of magnetic resonance data: application to the osteonecrosis of the femoral head
Author(s):
Catherine S. Klifa;
John A. Lynch;
Souhil Zaim;
Harry K. Genant
Show Abstract
The general objective of our study is the development of a clinically robust three-dimensional segmentation and quantification technique for Magnetic Resonance (MR) data, for the objective and quantitative evaluation of osteonecrosis (ON) of the femoral head. This method will help evaluate the effects of joint-preserving treatments for femoral head osteonecrosis from MR data. The disease is characterized by tissue changes (death of bone and marrow cells) within the weight-bearing portion of the femoral head. Due to the fuzzy appearance of lesion tissues and their different intensity patterns in various MR sequences, we propose a semi-automatic multispectral segmentation of MR data that introduces anatomical and geometrical data constraints and uses a classical K-means unsupervised clustering algorithm. The method was applied to ON patient data. Results of volumetric measurements and the configuration of various tissues obtained with the semi-automatic method were compared with quantitative results delineated by a trained radiologist.
Vertebral surface extraction from ultrasound images for technology-guided therapy
Author(s):
Diane M. Muratore;
Benoit M. Dawant;
Robert L. Galloway Jr.
Show Abstract
The use of intra-operative spinal ultrasound (US) for surgical navigation of the spine has been proposed as a novel extension of interactive image-guided surgery (IIGS) procedures. For the proposed spinal applications, a surface-based registration of US images to preoperative CT or MR scans will be applied. This type of registration requires the extraction of numerous vertebral surface points from each slice in a series of US images. A plastic spine phantom immersed in a water tank was scanned posteriorly with one of several US transducers at frequencies of 3.5, 4.5, and 7.5 MHz. Images from the lumbar region of the spine were captured in the transverse plane and were processed for surface detection of the spinous process, transverse processes and laminae. Steps in image processing included application of a morphological open operator, a linear threshold, a ray-tracing algorithm, and a 3-D point identifier. Despite the generally noisy environment of US, the three regions of interest from the lumbar vertebrae were successfully extracted from all three sets of images. With a successful segmentation of corresponding CT or MR images, registration of these modalities to intra-operative US images will provide surgeons with a step towards a less invasive method of spinal therapy.
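A stand-in for the per-slice surface extraction chain (morphological opening, thresholding, ray tracing), assuming that the first strong echo along each image column approximates the posterior bone surface; the threshold and structuring-element size are placeholders, not the paper's values.

```python
import numpy as np
from scipy.ndimage import grey_opening

def extract_bone_surface(us_image, threshold, open_size=3):
    """Per-column extraction of the first strong echo in a transverse US slice.

    Returns an array of row indices (one per column, -1 where no surface echo
    was found); the resulting 2-D points can then be mapped to 3-D using the
    tracked probe pose.
    """
    opened = grey_opening(us_image, size=(open_size, open_size))  # suppress speckle spikes
    binary = opened > threshold                                   # keep strong echoes
    surface_rows = np.full(us_image.shape[1], -1, dtype=int)
    for col in range(binary.shape[1]):
        hits = np.flatnonzero(binary[:, col])   # "ray" cast from the transducer
        if hits.size:
            surface_rows[col] = hits[0]         # first echo along the ray
    return surface_rows
```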
Java interface to a computer-aided diagnosis system for acute pulmonary embolism using PIOPED findings
Author(s):
Erik D. Frederick;
Georgia D. Tourassi;
Matthew Gauger;
Carey E. Floyd Jr.
Show Abstract
An interface to a Computer Aided Diagnosis (CAD) system for diagnosis of Acute Pulmonary Embolism (PE) from PIOPED radiographic findings was developed. The interface is based on Internet technology which is user-friendly and available on a broad range of computing platforms. It was designed to be used as a research tool and as a data collection tool, allowing researchers to observe the behavior of a CAD system and to collect radiographic findings on ventilation-perfusion lung scans and chest radiographs. The interface collects findings from physicians in the PIOPED reporting format, processes those findings and presents them as inputs to an artificial neural network (ANN) previously trained on findings from 1,064 patients from the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED) study. The likelihood of PE predicted by the ANN and by the physician using the system is then saved for later analysis.
Decomposition of coronary angiograms into nonrigid moving layers
Author(s):
Robert A. Close;
James Stuart Whiting
Show Abstract
We present a method for decomposition of angiographic image sequences into moving layers undergoing translation, rotation, and scaling. We first describe a regularization method for scatter-glare correction which can be used to obtain good estimates of projected x-ray attenuation coefficient. We then compute a set of weighted correlation functions to determine the motion of each layer, and compute the layer densities in the spatial domain by averaging along moving trajectories. We demonstrate the utility of our method by successfully decomposing simulated angiograms into moving layers. We also demonstrate visually acceptable layer decomposition of actual angiograms.
Lesion detection and characterization in digital mammography by Bezier histograms
Author(s):
Hairong Qi;
Wesley E. Snyder
Show Abstract
Due to some important properties of Bezier splines, they have great potential for use in computer-aided mammogram diagnosis. In this paper, Bezier splines are applied in both the lesion detection and the lesion characterization processes: lesion detection is achieved by segmentation using a natural threshold computed from the Bezier-smoothed histogram, and lesion characterization is achieved by measuring the fit between Gaussian and Bezier histograms of data projected onto the principal components of the segmented lesions. Experimental results show that this approach is efficient, easy to use, and can achieve high sensitivity.
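A hedged sketch of the Bezier-smoothed histogram and one plausible 'natural threshold' rule (the deepest valley of the smoothed curve); the paper's exact threshold definition and the characterization step are not reproduced.

```python
import numpy as np
from math import comb

def bezier_smoothed_histogram(hist, n_samples=256):
    """Smooth a grey-level histogram by treating its bins as control points
    of a Bezier curve (Bernstein-polynomial weighting)."""
    n = len(hist) - 1
    t = np.linspace(0.0, 1.0, n_samples)
    B = np.array([[comb(n, k) * ti ** k * (1.0 - ti) ** (n - k)
                   for k in range(n + 1)] for ti in t])
    return B @ np.asarray(hist, dtype=float)

def natural_threshold(image, bins=64):
    """Use the deepest valley of the Bezier-smoothed histogram as a
    segmentation threshold (one plausible reading of the approach)."""
    hist, edges = np.histogram(image, bins=bins)
    smooth = bezier_smoothed_histogram(hist)
    t = np.linspace(0.0, 1.0, len(smooth))
    valleys = np.flatnonzero((smooth[1:-1] < smooth[:-2]) &
                             (smooth[1:-1] < smooth[2:])) + 1
    if valleys.size == 0:
        return 0.5 * (edges[0] + edges[-1])
    deepest = valleys[np.argmin(smooth[valleys])]
    return edges[0] + t[deepest] * (edges[-1] - edges[0])  # map back to grey level
```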
Evaluation of renal function with contrast MRI: mathematical modeling and error analysis
Author(s):
Roza Rusinek
Show Abstract
Dynamic MR imaging with contrast media is increasingly used to provide a safe and noninvasive assessment of renal function. Following intravenous injection of a paramagnetic tracer such as Gd-DTPA, the time course of the MR signal is measured in arterial blood and in the kidneys. We use mathematical modeling and Monte Carlo trials to evaluate the error sigma_m in computed renal parameters, such as the mean transit time, as a function of the injected dose. The model assumes that the tracer concentration in the renal compartments is the result of the convolution of the arterial curve with unit response functions. Results indicate that sigma_m is not a monotonic function of the dose: it reaches a minimum for 2.5 - 3.5 ml of a 500 mmol/l solution of Gd-DTPA and rapidly increases for doses lower than 1 ml. These results can help optimize MR protocols and establish the feasibility of MR measurements using reduced doses of Gd-DTPA.
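A toy Monte Carlo sketch of how the spread sigma_m of a transit-time estimate can be evaluated as a function of dose, assuming a gamma-variate-like arterial curve, an exponential unit response and additive Gaussian noise; none of these functional forms or parameter values come from the paper.

```python
import numpy as np

def mtt_error_vs_dose(dose, n_trials=500, noise_sd=1.0, dt=1.0, n_t=120,
                      true_mtt=20.0, seed=0):
    """Monte Carlo estimate of the spread sigma_m of a mean-transit-time
    estimate at a given injected dose, under a toy convolution model: the
    renal curve is the arterial input convolved with an exponential unit
    response."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_t) * dt
    aif = dose * (t / 5.0) * np.exp(-t / 5.0)        # gamma-variate-like arterial curve
    unit_response = np.exp(-t / true_mtt) * dt       # exponential retention function
    tissue = np.convolve(aif, unit_response)[:n_t]   # noise-free renal curve
    estimates = []
    for _ in range(n_trials):
        noisy = tissue + rng.normal(0.0, noise_sd, n_t)
        # Area ratio: area(tissue)/area(AIF) approximates the mean transit time.
        estimates.append(noisy.sum() / max(aif.sum(), 1e-9))
    return float(np.std(estimates))

# Sweeping the dose shows how sigma_m grows sharply at very low doses:
# errors = {d: mtt_error_vs_dose(d) for d in (0.5, 1.0, 2.5, 5.0)}
```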
Front-end data reduction in computer-aided diagnosis of mammograms: a pilot study
Author(s):
Hamed Sari-Sarraf;
Shaun S. Gleason;
Robert M. Nishikawa
Show Abstract
This paper presents the results of a pilot study whose primary objective was to further substantiate the efficacy of front-end data reduction in computer-aided diagnosis (CAD) of mammograms. This concept is realized by a preprocessing module that can be utilized at the front-end of most mammographic CAD systems. Based on fractal encoding, this module takes a mammographic image as its input and generates, as its output, a collection of subregions called focus-of-attention regions (FARs). These FARs contain all structures in the input image that appear to be different from the normal background tissue. Subsequently, the CAD systems need only to process the presented FARs, rather than the entire input image. This accomplishes two objectives simultaneously: (1) an increase in throughput via a reduction in the input data, and (2) a reduction in false detections by limiting the scope of the detection algorithms to FARs only. The pilot study consisted of using the preprocessing module to analyze 80 mammographic images. The results were an average data reduction of 83% over all 80 images and an average false detection reduction of 86%. Furthermore, out of a total of 507 marked microcalcifications, 467 fell within FARs, representing a coverage rate of 92%.
Object-based deformation technique for 3D CT lung nodule detection
Author(s):
Shyhliang A. Lou;
Chun-Long Chang;
Kang-Ping Lin;
Te-Shin Chen
Show Abstract
Helical CT scans have shown effectiveness in detecting lung nodules compared with conventional thoracic radiography. However, in a two-dimensional (2-D) image slice, it is difficult to differentiate nodules from vertically oriented pulmonary blood vessels. This paper reports an object-based deformation method to detect lung nodules from CT images in three dimensions (3-D). The object-based deformation method consists of preprocessing and nodule detection. CT numbers are used to identify the pulmonary region and the candidate objects: nodules, blood vessels, and airways. The Hough transform is used to identify each circular shape within the pulmonary region. Circles in different slices are then grouped into the same nodule, airway, or blood vessel to form a target object. To differentiate lung nodules from blood vessels and airways, we use a deformable seed-object technique: for a given target object within the pulmonary region, a seed object grows within the target object until it reaches the wall of the target object, and the seed object is then deformed to match the target object. A cost function is used to match the seed object and the target object. Eight patient cases with 18 nodules were included in this study; the average size of the nodules was approximately 2.4 cm.
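A minimal circular Hough transform sketch for the per-slice circle detection step; the edge map, angle sampling and radius set are assumptions, and peak picking plus slice-to-slice grouping are only outlined in the closing comment.

```python
import numpy as np

def hough_circles(edge_mask, radii):
    """Minimal circular Hough transform: vote for circle centers at each
    candidate radius using the edge pixels of one CT slice.

    edge_mask: 2-D boolean array of edge pixels (e.g., from thresholding the
    gradient magnitude inside the lung region).
    Returns {radius: accumulator array} for later peak picking.
    """
    h, w = edge_mask.shape
    ys, xs = np.nonzero(edge_mask)
    angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    accumulators = {}
    for r in radii:
        acc = np.zeros((h, w), dtype=np.int32)
        for theta in angles:
            cy = np.round(ys - r * np.sin(theta)).astype(int)
            cx = np.round(xs - r * np.cos(theta)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc, (cy[ok], cx[ok]), 1)
        accumulators[r] = acc
    return accumulators

# Peaks of the accumulators give candidate circle centers per slice; circles
# that line up across consecutive slices can then be grouped into one 3-D
# target object (nodule, vessel, or airway).
```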
Application of a Bayesian belief network in a computer-assisted diagnosis scheme for mass detection
Author(s):
Bin Zheng;
Yuan-Hsiang Chang;
Xiao Hui Wang;
Walter F. Good;
David Gur
Show Abstract
This study investigates the use of a Bayesian belief network (BBN) in a computer-assisted diagnosis (CAD) scheme for mass detection in digitized mammograms. Two independent image sets were used in the experiments. After the initial processing of image segmentation and adaptive topographic region growth in our CAD scheme, 288 true-positive mass regions and 2,204 false-positive regions were identified in the training image set; in the testing set, 304 true-positive and 1,586 false-positive regions were identified. Fifty features were computed for each region. After a genetic algorithm search, a BBN was constructed based on 12 local and four global features in order to classify these regions as positive or negative for mass. The performance of the BBN was evaluated using ROC methodology. The BBN achieved an area under the ROC curve of 0.873 plus or minus 0.009 in classifying the 304 positive and 1,586 negative regions in the testing set. This result was better than that of an artificial neural network using the same set of input features. After incorporating the BBN into our CAD scheme as the last classification stage, we detected 80% of 189 positive mass cases (in 433 testing images) at an average rate of 0.76 false-positive regions per image. This study therefore demonstrates that a BBN approach can yield performance comparable to that of other classifiers. With its probabilistic learning concept and interpretable topology, the BBN provides a flexible approach to improving CAD schemes.
Generalized procrustean image deformation for subtraction of mammograms
Author(s):
Walter F. Good;
Bin Zheng;
Yuan-Hsiang Chang;
Xiao Hui Wang;
Glenn S. Maitz
Show Abstract
This project is a preliminary evaluation of two simple, fully automatic nonlinear transformations which can map any mammographic image onto a reference image while guaranteeing registration of specific features. The first method automatically identifies skin lines, after which each pixel is given coordinates in the range [0,1] X [0,1], where the value of each coordinate is the fractional distance of the pixel between tissue boundaries in either the horizontal or vertical direction. This ensures that skin lines are put in registration. The second method, which is the method of primary interest, automatically detects pectoral muscles, skin lines and nipple locations. For each image, a polar coordinate system is established with its origin at the intersection of the nipple axis line (NAL) and a line indicating the pectoral muscle. Points within a mammogram are identified by the angle of their position vector relative to the NAL, and by their fractional distance between the origin and the skin line. This deforms mammograms in such a way that their pectoral lines, NALs and skin lines are all in registration. After the images are deformed, their grayscales are adjusted by applying linear regression to pixel-value pairs for corresponding tissue pixels. In a comparison of these methods to a previously reported 'translation/rotation' technique, evaluation of difference images clearly indicates that the polar coordinate method results in the most accurate registration among the transformations considered.
Optimizing the feature set for a Bayesian network for breast cancer diagnosis using genetic algorithm techniques
Author(s):
Xiao Hui Wang;
Bin Zheng;
Yuan-Hsiang Chang;
Walter F. Good
Show Abstract
This study investigates the degree to which the performance of Bayesian belief networks (BBNs) for computer-assisted diagnosis of breast cancer can be improved by optimizing their input feature sets using a genetic algorithm (GA). 421 cases (all women) were used in this study, of which 92 were positive for breast cancer. Each case contained both non-image information and image information derived from mammograms by radiologists. A GA was used to select an optimal subset of features, from a total of 21, to use as the basis for a BBN classifier. The figure-of-merit used in the GA's evaluation of feature subsets was Az, the area under the ROC curve produced by the corresponding BBN classifier. For each feature subset evaluated by the GA, a BBN was developed to classify positive and negative cases. Overall performance of the BBNs was evaluated using a jackknife testing method to calculate Az for their respective ROC curves. The Az value of the BBN incorporating all 21 features was 0.851 plus or minus 0.012. After a 93-generation search, the GA found an optimal feature set with four non-image and four mammographic features, which achieved an Az value of 0.927 plus or minus 0.009. This study suggests that GAs are a viable means of optimizing feature sets, and that optimizing feature sets can result in significant performance improvements.
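A hedged sketch of GA-based feature-subset search; since the Bayesian belief network itself is not reproduced here, a cross-validated logistic-regression AUC stands in for the Az figure-of-merit, and all GA settings (population size, mutation rate, selection scheme) are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def ga_feature_selection(X, y, n_generations=50, pop_size=30,
                         mutation_rate=0.05, rng=None):
    """Genetic-algorithm search over binary feature masks; fitness is the
    cross-validated ROC AUC of a simple stand-in classifier."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    population = rng.integers(0, 2, size=(pop_size, n_features))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X[:, mask.astype(bool)], y,
                               cv=5, scoring="roc_auc").mean()

    for _ in range(n_generations):
        scores = np.array([fitness(m) for m in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[:pop_size // 2]]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < mutation_rate   # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        population = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in population])
    return population[np.argmax(scores)].astype(bool)       # best feature mask
```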
Automatic retinal image quality assessment and enhancement
Author(s):
Samuel C. Lee;
Yiming Wang
Show Abstract
This paper describes a method for machine (computer) assessment of the quality of a retinal image. The method provides an overall quantitative and objective measure using a quality index Q. The Q of a retinal image is calculated by the convolution of a template intensity histogram, obtained from a set of typically good retinal images, with the intensity histogram of the retinal image. After normalization, Q has a maximum value of 1, indicating excellent quality, and a minimum value of 0, indicating bad quality. The paper also presents several application examples of Q in image enhancement. It is shown that the use of Q can help computer scientists evaluate the suitability and effectiveness of image enhancement methods, both quantitatively and objectively. It can further help computer scientists improve retinal image quality on a more scientific basis. Additionally, this machine image quality measure can also help physicians make medical diagnoses with more certainty and higher accuracy. Finally, it should be noted that although retinal images are used in this study, the methodology is applicable to the image quality assessment and enhancement of other types of medical images.
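An assumed, illustrative reading of the histogram-based index Q: both histograms are normalized to unit length and Q is taken as the peak of their cross-correlation, which lies in [0, 1] and reaches 1 for a perfect match; the paper's exact normalization may differ.

```python
import numpy as np

def quality_index(image, template_hist, bins=256, value_range=(0, 255)):
    """Histogram-based quality index Q in [0, 1].

    The image histogram and the template histogram (built from known good
    retinal images) are normalized to unit length; Q is the peak of their
    cross-correlation.
    """
    hist, _ = np.histogram(image, bins=bins, range=value_range)
    h = hist / (np.linalg.norm(hist) + 1e-12)
    t = np.asarray(template_hist, dtype=float)
    t = t / (np.linalg.norm(t) + 1e-12)
    return float(np.max(np.correlate(h, t, mode="full")))

# The template would typically be the mean histogram of a set of good images:
# template_hist = np.mean([np.histogram(im, 256, (0, 255))[0] for im in good_images], axis=0)
```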
Investigation of a band-pass filter using view-specific image sequences for edge enhancement in quantitative coronary angiography
Author(s):
Craig A. Morioka;
Francois O. Bochud;
Craig K. Abbey;
Miguel P. Eckstein;
James Stuart Whiting
Show Abstract
Quantitative coronary angiography (QCA) diameter measurements are important in determining the extent of coronary artery disease progression and the course of treatment in a patient. Traditional QCA techniques filter the X-ray angiographic image in order to enhance the edge profiles. We investigated a new method of obtaining an edge enhancement filter based on the power spectrum of an ensemble of view-specific background images for X-ray angiography. The band-pass filter is obtained from the power spectra of a particular view imaged with (1) background only and (2) contrast-filled arteries plus background. We tested our band-pass filter by measuring the diameters of a coronary artery phantom. The angiograms were filtered with a Sobel kernel to highlight the edges. The same angiograms were band-pass filtered and then Sobel filtered to see whether our band-pass filter had any effect on the accuracy of the artery diameter measurement. The mean absolute percent error of the diameter measurements decreased with the use of the band-pass filter (14.2% plus or minus 16.5%, n equals 57). A two-way analysis of variance showed no statistically significant difference between the diameter measurements (0.5 - 5.0 mm) from the Sobel-only filtered images and those from band-pass edge enhancement plus Sobel filtering.
Feature analysis of lung nodules using sector geometry and multiple circular path neural network
Author(s):
Shih-Chung Benedict Lo;
Matthew T. Freedman M.D.;
Jyh-Shyan Lin;
Xin-Wei Xu;
Andrzej Delegacz
Show Abstract
In this study, each chest radiograph was processed by a three-dimensional Gaussian-like matched filter. Edge tracking and region growing techniques were applied to the filtered image to segment all possible nodules. Each suspicious area and its boundary were then divided into 36 sectors (i.e., 10 degrees per sector) using 36 equi-angle dividers radiating from the center. For each suspicious area, the radius, the average gradient within the sector, the average gradient near the boundary, and the contrast were computed as features within each 10-degree sector. A total of 144 computed features per suspicious area were used as input values for a newly designed three-layer neural network to perform pattern recognition studies. The neural network was constructed to emphasize the correlation information associated with the features. In this part of the research, several circular-path connections between the input and the first hidden layers were used, including (1) self-correlation networking and (2) neighborhood-correlation networking. The networks for self correlation and neighborhood correlation were designed to extract the common factors within a sector and between sectors, respectively. In this study, neighborhood correlations across sectors of 20, 30, 40, and 50 degrees were used. We tested this approach on the JPST chest radiograph database consisting of 154 chest radiographs using a grouped jack-knife method. The performance in detecting medium-sized nodules was 75% sensitivity at 5.9 false positives per image, and the performance remained the same for large nodules (75% sensitivity at 5.6 false positives per image). This work presents a new and effective way of analyzing tumor objects: instead of lumping together global features for each object and analyzing them with a conventional classifier, the new method computes features in sectors and analyzes them using a fan-oriented, multiple circular path neural network (MCPNN). We also found that the MCPNN technique performs slightly more effectively on larger nodules than on smaller nodules.
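A reduced sketch of the sector-feature step, computing only the mean radius and mean gradient magnitude per 10-degree sector (the paper uses four features per sector and a dedicated MCPNN classifier, neither of which is reproduced here).

```python
import numpy as np

def sector_features(image, mask, centroid, n_sectors=36):
    """Per-sector descriptors for a segmented suspicious area: mean radius
    and mean gradient magnitude of the pixels falling in each 10-degree
    sector around the area's center.

    image: 2-D intensity array; mask: boolean array of the suspicious area;
    centroid: (row, col) of the area's center.
    """
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gy, gx)
    rows, cols = np.nonzero(mask)
    dy, dx = rows - centroid[0], cols - centroid[1]
    angles = np.mod(np.arctan2(dy, dx), 2.0 * np.pi)
    radii = np.hypot(dy, dx)
    sector_of = (angles * n_sectors / (2.0 * np.pi)).astype(int) % n_sectors
    mean_radius = np.zeros(n_sectors)
    mean_gradient = np.zeros(n_sectors)
    for s in range(n_sectors):
        in_sector = sector_of == s
        if in_sector.any():
            mean_radius[s] = radii[in_sector].mean()
            mean_gradient[s] = grad_mag[rows[in_sector], cols[in_sector]].mean()
    return np.concatenate([mean_radius, mean_gradient])  # 2 features x 36 sectors
```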
Progressive transmission of high-fidelity radiographic images at very low bit rates
Author(s):
Shuyu Yang;
Sunanda Mitra
Show Abstract
Compression of medical images has always been viewed with skepticism, since the loss of information involved is thought to affect diagnostic information. Recent reports, however, indicate that some wavelet-based compression techniques may not noticeably reduce image quality even at compression ratios (CRs) up to 30:1. Although the generation of minimum distortion at a specific bit rate by vector quantization (VQ) was proven theoretically from rate-distortion theory almost half a century ago, practical implementation of VQ for small sizes and classes of images has been accomplished only relatively recently. Many of the earlier algorithms using simple statistical clustering suffer from a number of problems, namely lack of convergence, getting trapped in local minima, and inability to handle large datasets. More advanced vector quantization algorithms have eliminated some of these problems. However, vector quantization of the large data sets encountered in many medical images remains a challenging problem. We present here an adaptive vector quantization technique, including an entropy coding module, that is capable of encoding large radiographic as well as color images with minimum distortion in the decoded images even at CRs above 100:1.
Fast iterative deconvolution technique for echographic imaging
Author(s):
Riccardo Carotenuto;
Giovanni Cardone;
Gabriella Cincotti;
Paola Gori;
Massimo Pappalardo
Show Abstract
Many deconvolution techniques proposed in the literature are based on knowledge of the point spread function or on its estimation from the observed image. In this paper, we propose an alternative approach to deconvolution in medical imaging, which performs a local inversion through an iterative technique. The proposed iterative deconvolution combines accuracy with fast execution and is well suited to fast hardware implementation, because only multiplications and summations are used in the algorithm. It has local characteristics and can operate on a limited number of data at a time, with great advantage for memory storage requirements. A discussion of the convergence of the algorithm is also presented. Although its efficiency is demonstrated in this work only for improving lateral resolution, the technique can easily be applied to full two-dimensional deconvolution. The feasibility of the technique for medical image deconvolution is analyzed theoretically, experimental results are presented, and the results are compared with those obtained by a conventional Fourier-based method.
Method for recognizing multiple radiation fields in computed radiography
Author(s):
Xiaohui Wang;
Jiebo Luo;
Robert A. Senn;
David H. Foos
Show Abstract
An algorithm is developed to detect collimation regions in computed radiography images. Based on a priori knowledge of the collimation process, the algorithm consists of four major stages of operations: (1) pixel-level detection and classification of collimation boundary transition pixels; (2) line-level delineation of candidate collimation blades; (3) estimation of the most likely partitioning; and (4) region-level determination of the collimation configuration. This algorithm has been tested on a set of 8,436 images, which includes 7,703 single-exposure images and 733 multiple-exposure images. An overall success rate in excess of 99% has been achieved.
Accurate tracking of blood vessels and EEG electrodes by consecutive cross-section matching
Author(s):
Herke Jan Noordmans;
Arnold W. M. Smeulders;
Max A. Viergever
Show Abstract
We discuss the quality with which image algorithms are able to characterize 3D line structures such as blood vessels, nerves, chromosomes, and electrodes. In this study, we examine how well the methods can determine the axial position, local intensity, local diameter, and local orientation under conditions of noise, bifurcations and neighboring structures. We present the Consecutive Cross-Section Matching (CCSM) method and compare it with the global method of Lorentz and the local slice method of Zhou. When applying the methods to a circular test image and a 3D phase-contrast MR angiography image, we find that the Lorentz method gives reasonable estimates of the width of the line structures but has great difficulty passing bifurcations, while the local slice method is more accurate but sensitive to noise. Because the CCSM method takes more samples along the axis, it appears to be much more accurate and robust in characterizing line structures.
Knowledge-based localization of hippocampus in human brain MRI
Author(s):
Hamid Soltanian-Zadeh;
Mohammad-Reza Siadat
Show Abstract
The hippocampus is an important structure of the human brain limbic system. Variations in the volume and architecture of this structure have been related to certain neurological diseases such as schizophrenia and epilepsy. This paper presents a two-stage method for localizing the hippocampus in human brain MRI automatically. The first stage utilizes image processing techniques such as nonlinear filtering and histogram analysis to extract information from the MRI; it generates binary images and locates the lateral and third ventricles and the inferior limit of the Sylvian fissure. The second stage uses an expert system shell named VP-EXPERT to analyze the information extracted in the first stage, utilizing absolute and relative spatial rules and spatial symmetry rules to locate the hippocampus. The system has been tested using MRI studies of six epilepsy patients, comprising a total of 128 images. The system correctly identified all of the slices without hippocampus, and correctly localized the hippocampus in about 78% of the slices with hippocampus.