Proceedings Volume 4322

Medical Imaging 2001: Image Processing


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 3 July 2001
Contents: 17 Sessions, 209 Papers, 0 Presentations
Conference: Medical Imaging 2001
Volume Number: 4322

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Tomographic Reconstruction I
  • Tomographic Reconstruction II
  • Tomographic Reconstruction III/Segmentation I
  • Segmentation II
  • Deformable Geometry I
  • Poster Session: Tomographic Reconstruction, Statistical Methods, Shape, Data Retrieval, Motion, Multiresolution, Preprocessing, Pattern Recognition and Coding
  • Shape
  • Deformable Geometry II
  • Segmentation III
  • Pattern Recognition/Preprocessing
  • Registration I
  • Registration II
  • Computer-Aided Diagnosis I
  • Computer-Aided Diagnosis II
  • Computer-Aided Diagnosis III
  • Restoration/Deblurring
  • Poster Session: Segmentation, Deformable Geometry, Registration, and Computer-Aided Diagnosis
Tomographic Reconstruction I
Statistical x-ray-computed tomography image reconstruction with beam hardening correction
This paper describes two statistical iterative reconstruction methods for X-ray CT. The first method assumes a mono-energetic model for X-ray attenuation. We approximate the transmission Poisson likelihood by a quadratic cost function and exploit its convexity to derive a separable quadratic surrogate function that is easily minimized using parallelizable algorithms. Ordered subsets are used to accelerate convergence. We apply this mono-energetic algorithm (with edge-preserving regularization) to simulated thorax X-ray CT scans. A few iterations produce reconstructed images with lower noise than conventional FBP images at equivalent resolutions. The second method generalizes the physical model and accounts for the poly-energetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume the object consists of a given number of non-overlapping tissue types. The attenuation coefficient of each tissue is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this poly-energetic model and develop an iterative algorithm for estimating the unknown densities in each voxel. Applying this method to simulated X-ray CT measurements of a phantom containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
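The ordered-subsets acceleration of the mono-energetic method can be illustrated with a generic separable-quadratic-surrogate (SQS) update for a weighted least-squares data term. This is a minimal, unregularized sketch with a hypothetical dense system matrix A and per-ray statistical weights w, not the authors' implementation:

```python
import numpy as np

def os_sqs(A, y, w, n_subsets=4, n_iters=10):
    """Ordered-subsets SQS for min_x 0.5 * sum_i w_i (y_i - [Ax]_i)^2.
    The diagonal majorizer d makes the voxel updates separable/parallel."""
    n_rays, n_vox = A.shape
    x = np.zeros(n_vox)
    d = A.T @ (w * A.sum(axis=1))           # d_j = sum_i w_i a_ij sum_k a_ik
    d[d == 0] = 1.0
    subsets = [np.arange(s, n_rays, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for S in subsets:                   # one update per subset of rays
            r = w[S] * (y[S] - A[S] @ x)    # weighted residual on the subset
            x += n_subsets * (A[S].T @ r) / d
            np.maximum(x, 0.0, out=x)       # enforce nonnegative attenuation
    return x
```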
Development of a high-performance noise-reduction filter for tomographic reconstruction
We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information about the source image, allowing derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as Hanning and Butterworth filters, that are commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion subject to a known class of source image intensity distributions.
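A minimal sketch of the filtering step: given per-frequency power spectra of the ideal sinogram and of the noise (here hypothetical arrays signal_psd and noise_psd, one value per detector-frequency bin), the closed-form gain below is applied row-wise before standard ramp-filtered FBP. The single parameter lam stands in for the user-adjustable regularization knob:

```python
import numpy as np

def wiener_like_filter(sino, signal_psd, noise_psd, lam=1.0):
    """Suppress sinogram noise with gain S / (S + lam * N) per detector
    frequency; sino has shape (n_angles, n_detectors)."""
    H = signal_psd / (signal_psd + lam * noise_psd)
    F = np.fft.fft(sino, axis=1)            # detector-direction FFT per view
    return np.fft.ifft(F * H, axis=1).real  # filtered sinogram, ready for FBP
```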
Reconstruction of electron paramagnetic resonance images using iterative methods
Electron Paramagnetic Resonance (EPR) allows for the non-invasive imaging of free radicals in biological systems. Although a number of physical factors have hindered the development of EPR as an imaging modality, EPR offers the potential for tissue oximetry. EPR images are typically reconstructed using a traditional filtered back-projection technique. We are attempting to improve the quality of EPR images by using maximum-entropy-based iterative image reconstruction algorithms. Our investigation has so far focused on two methods: the multiplicative algebraic reconstruction technique (MART), and an algorithm that is motivated by interior-point reconstruction. MART is a row-action method that maintains strict equality in the constraints while minimizing the entropy functional. The latter method, which we have named Least-Squares Barrier Entropy (LSBEnt), transforms the constrained problem into an unconstrained problem and maximizes entropy at a prescribed distance from the measured data. EPR studies are frequently characterized by low signal-to-noise ratios and wide line widths. The effect of the backprojection streaking artifact can be quite severe and can seriously compromise a study. We have compared the iterative results with filtered backprojection on two-dimensional (2-D) EPR acquisitions of various phantoms. Encouraging preliminary results have demonstrated that one clear advantage of the iterative methods is their freedom from the streaking artifacts that plague filtered backprojection.
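MART itself is compact enough to sketch. Below is a generic row-action MART loop for a small dense system (hypothetical matrix A, measured projections y); zero-count rays are skipped as a common practical guard, and no stopping rule is included:

```python
import numpy as np

def mart(A, y, n_iters=20, relax=1.0, eps=1e-12):
    """Multiplicative ART: the multiplicative update keeps x > 0, which is
    what gives the method its maximum-entropy character."""
    n_rays, n_vox = A.shape
    x = np.ones(n_vox)                      # strictly positive start
    for _ in range(n_iters):
        for i in range(n_rays):             # one row (ray) at a time
            if y[i] <= 0:
                continue                    # skip zero-count rays
            est = max(A[i] @ x, eps)
            x *= (y[i] / est) ** (relax * A[i])   # per-voxel exponent a_ij
    return x
```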
Variational approach to tomographic reconstruction
We formulate the tomographic reconstruction problem in a variational setting. The object to be reconstructed is considered as a continuous density function, unlike in pixel-based approaches. The measurements are modeled as linear operators (Radon transform), integrating the density function along the ray path. The criterion that we minimize consists of a data term and a regularization term. The data term represents the inconsistency between applying the measurement model to the density function and the real measurements. The regularization term corresponds to the smoothness of the density function. We show that this leads to a solution lying in a finite-dimensional vector space which can be expressed as a linear combination of generating functions. The coefficients of this linear combination are determined from a linear equation set, solvable either directly or by using an iterative approach. Our experiments show that our new variational method gives results comparable to the classical filtered back-projection for a high number of measurements (projection angles and sensor resolution). The new method performs better for a medium number of measurements. Furthermore, the variational approach gives usable results even with very few measurements, when filtered back-projection fails. Our method reproduces amplitudes more faithfully and can cope with high noise levels; it can be adapted to various characteristics of the acquisition device.
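For a quadratic smoothness regularizer, the linear equation set the abstract refers to reduces to a regularized normal-equation system; a dense toy sketch, with hypothetical discretized Radon matrix R and smoothness operator L acting on the basis coefficients:

```python
import numpy as np

def variational_recon(R, y, L, lam=0.1):
    """Minimize ||R c - y||^2 + lam ||L c||^2 over the basis coefficients c:
    solve (R^T R + lam L^T L) c = R^T y directly for small problems."""
    return np.linalg.solve(R.T @ R + lam * (L.T @ L), R.T @ y)
```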
Resolution recovery for list-mode reconstruction in SPECT
Luc Bouwens, Howard C. Gifford, Rik Van de Walle, et al.
We developed an iterative reconstruction method for SPECT which uses list-mode data instead of binned data and a more accurate model of the collimator structure. The purpose of the study was to evaluate the resolution recovery and to compare its performance to other iterative resolution-recovery methods in the case of high noise levels. The source distribution is projected onto an intermediate layer; in doing so we obtain the complete emission radiance distribution as an angular sinogram. This step is independent of the acquisition system. To incorporate the resolution of the system we project the individual list-mode events over the collimator wells to the intermediate layer. This projection onto the angular sinogram defines the probability that a photon from the source distribution will reach this specific location on the surface of the crystal and thus be accepted by the collimator hole. We compared the SPECT list-mode reconstruction to MLEM, OSEM, and RBI. We used Gaussian-shaped point sources with different FWHM at different noise levels. For these distributions we calculated the reconstructed images at different numbers of iterations. The modeling of the resolution in this algorithm leads to better resolution recovery than the other methods, which tend to overcorrect.
Cone-beam image reconstruction for detectors with nonsquare detector elements
Kwok C. Tam
In many cone beam reconstruction algorithms square geometry for the detector elements and cubic geometry for the reconstruction voxels are assumed. With such geometry the various operations in image reconstruction, notably projection and backprojection, can be carried out in the same manner at all angles, resulting in uniform image quality and ease of operation. However, in current multi-row cone beam CT systems the spacing between the detector rows is typically greater than the lateral spacing between the detector elements for the reconstruction of objects with slice thickness greater than the lateral resolutions. The asymmetric voxel dimensions and detector element dimensions introduce complications in the image reconstruction operations, and potentially non-uniform image quality. We have developed a procedure to preprocess the cone beam data detected on non-square detector elements for compatibility with cone beam reconstruction algorithms in which symmetric voxel dimensions and detector element dimensions are assumed. Complications and potential non-uniformity in image reconstruction operations are eliminated through simple scaling of the input cone beam projection data and decompression/elongation of the intermediate reconstructed image while retaining the symmetric geometry in the reconstruction algorithm. The procedure is a generic solution to cone beam image reconstruction applicable to all types of reconstruction algorithms.
Tomographic Reconstruction II
Correction of geometric distortions in EP images using nonrigid registration to corresponding anatomic images
Darko Skerl, Shiyan Pan, Rui Li, et al.
The spatial resolution of echo planar image (EPI) data acquired for functional MRI (fMRI) studies is low. To facilitate their interpretation, conventional T1-weighted anatomical images are often acquired prior to the acquisition of the EP images. T1-weighted and EP images are then registered and activation patterns computed from the EP images are superimposed on the anatomic images. Registration between the anatomic and the EP images is required to compensate for patient motion between the acquisitions and for geometric distortions affecting EP images. Recently, methods have been proposed to register EP and anatomic images using non-rigid registration techniques. In these approaches, the transformation is parameterized using splines. Here, we propose an alternative solution to this problem based on optical flow with non-stationary stiffness constraint. The approach we propose also includes several preprocessing steps such as automatic skull removal and intensity remapping. Results obtained with eight studies on normal volunteers are presented.
Sources and correction of higher-order geometrical distortion for serial MR brain imaging
Mark Holden, Marcel M. Breeuwer, Kate McLeish, et al.
A specially designed phantom consisting of a 3D array of 427 accurately manufactured spheres together with a point-based registration algorithm was used to detect distortion described by polynomial orders 1-4. More than thirty 3D gradient echo (FFE) and multi-slice spin echo (SE) phantom scans were acquired with a Philips 1.5T Gyroscan. Distortion was measured as a function of: readout gradient strength (0.72 <= Gr <= 1.7 mT/m), TR/TE/flip angle, shim settings, and temporal distortion change for 11 weekly scans for the FFE sequence, and TR/TE/slice gap for SE. Precision measurements for linear distortion were: scale <= 0.03%, shear <= 0.04 degrees. Linear distortion in the readout-dependent directions increased with decreased readout strength (r > 0.93). There was a significantly higher (p < 0.01) sagittal shear for 5 SE scans compared with 5 FFE ones with the same Gr, possibly because of slice selection. Different shim settings produced only linear distortion change: up to 2% scale and 1 degree shear. There was negligible distortion change over time: scale < 0.1%, shear <= 0.05 degrees. There was a decrease in distortion as a function of polynomial order (r > 0.9, n = 33); 75% of the distortion was either first or second order.
Efficient reconstruction algorithm for CT fluoroscopy
In interventional procedures, computed tomography (CT) has demonstrated its effectiveness in guiding operators through otherwise difficult tasks. By providing near real time feedback, CT fluoroscopy (CTF) enables the operator to dynamically adjust the location and orientation of the biopsy instrument. The most important performance parameter of CTF is the system temporal response. In this paper, we present a mathematical framework that accurately characterizes the system response of a needle type instrument. Based on the model, a modified halfscan reconstruction algorithm is presented. The algorithm makes use of the redundant nature of CTF data processing and enables efficient generation of CT images. The effectiveness of the algorithm is demonstrated by theoretical analysis and phantom experiments.
MRI isotropic resolution reconstruction from two orthogonal scans
An algorithm for the reconstruction of iso-resolution volumetric MR data sets from two standard orthogonal MR scans having anisotropic resolution has been developed. The reconstruction algorithm starts by registering a pair of orthogonal volumetric MR data sets. The registration is done by maximizing the correlation between the gradient magnitudes using a simple translation-rotation model in a multi-resolution approach. The algorithm then assumes that the individual voxels in the MR data are an average of the magnetic resonance properties of an elongated imaging volume. The process is modeled as the projection of MR properties onto a single sensor. This model allows the derivation of a set of linear equations that can be used to recover the MR properties of every single voxel in the iso-resolution volume given only two orthogonal MR scans. Projection onto convex sets (POCS) was used to solve the set of linear equations. Experimental results show the advantage of having iso-resolution reconstructions for the visualization and analysis of small and thin muscular structures.
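Each voxel-averaging constraint contributes one linear equation, so a generic POCS solver can be sketched as cyclic projections onto the constraint hyperplanes plus a nonnegativity projection. This is a sketch of the idea, not the paper's exact constraint sets:

```python
import numpy as np

def pocs_linear(A, b, n_iters=50):
    """Cyclically project x onto each hyperplane a_i . x = b_i (Kaczmarz-style
    POCS), then onto the nonnegative orthant, another convex set."""
    m, n = A.shape
    x = np.zeros(n)
    norms2 = (A * A).sum(axis=1)
    for _ in range(n_iters):
        for i in range(m):
            if norms2[i] > 0:
                x += (b[i] - A[i] @ x) / norms2[i] * A[i]
        np.maximum(x, 0.0, out=x)
    return x
```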
Reconstruction of 3D angiography data using the algebraic reconstruction technique (ART)
Carnell J. Hampton, Paul F. Hemler
Three-dimensional angiographic reconstruction has emerged as an alternative to the traditional depiction of aneurysm angioarchitecture provided by 2-D perspective projections acquired by digital subtraction angiography (DSA) and fluoroscopy. One clinical application of research involving 3-D angiographic reconstruction is intraoperative localization and visualization during aneurysm embolization procedures. For this procedure, reconstruction quality is important for the 3-D reconstruction of anatomy as well as for the reconstruction of intra-aneurysm coils imaged endovascularly and subsequently rendered within an existing 3-D anatomic representation. Rotational angiography involves the acquisition of a series of 2-D cone-beam projections of intracranial anatomy by a rotating x-ray gantry following a single injection of contrast media. Our investigation focuses on the practicality of using methods that employ algebraic reconstruction techniques (ART) to reconstruct 3-D data from 2-D cone-beam projections acquired using rotational angiography during embolization procedures. Important to our investigation are issues that arise within the implementation of the projection, correction, and backprojection steps of the reconstruction algorithm that affect reconstruction quality. Several methods are discussed to perform accurate voxel grid projection and backprojection. Various parameters of the reconstruction algorithm implementation are also investigated. Preliminary results indicating that quality 3-D reconstructions can be obtained from 2-D projections of synthetic volumes are presented. Further modifications to our implementation hold the promise of achieving accurate reconstruction results at a lower computational cost than the implementation used for this study. We have concluded that methods extending the traditional ART algorithm to cone-beam projection acquisition produce quality 3-D reconstructions.
Tomographic Reconstruction III/Segmentation I
Frequency-domain compensation scheme for multislice helical CT reconstruction with tilted gantry
Jiang Hsieh, Hui Hu
Multi-slice CT (MCT) is one of the most recent technical advancements in computed tomography. MCT offers high volume coverage, faster scan speed, and reduced x-ray tube loading. When combined with the helical scan mode, MCT provides even higher volume coverage as a result of the elimination of inter-scan delays. As in single-slice CT, the projection data collected in MCT are inherently inconsistent. To combat projection inconsistency, many image reconstruction algorithms have been investigated and developed. Recent studies have shown, however, that the image quality of MCT can be significantly degraded when the data are collected in a tilted helical mode. The degradation can be so severe that it is unacceptable for routine clinical usage. In this paper, we present a detailed investigation to show that the root cause of the image quality degradation is the deviation of the reconstruction iso-center from the gantry iso-center. An analytical model is derived to establish a quantitative relationship between the iso-center shift and other system parameters. The model serves as the mathematical basis for the derivation of a frequency-domain correction algorithm. A detailed performance evaluation is provided in terms of spatial resolution, noise, computational efficiency, and image artifacts.
Novel approximate approach for high-quality image reconstruction in helical cone-beam CT at arbitrary pitch
Stefan Schaller, Karl Stierstorfer, Herbert Bruder, et al.
We present a novel approximate image reconstruction technique for helical cone-beam CT, called the Advanced Multiple Plane Reconstruction (AMPR). The method is an extension of the ASSR algorithm presented in Medical Physics vol. 27, no. 4, 2000 by Kachelriess et al. In the ASSR, the pitch is fixed to a certain value and dose usage is not optimal. These limitations have been overcome in the AMPR algorithm by reconstructing several image planes from any given half-scan range of projection angles. The image planes are tilted in two orientations so as to optimally use the data available on the detector. After reconstruction of several sets of tilted images, a subsequent interpolation step reformats the oblique image planes to a set of voxels sampled on a Cartesian grid. Using our novel approach on a scanner with 16 slices, we can achieve image quality superior to what is currently the standard for four-slice scanners. Dose usage on the order of 95% for all pitch values can be achieved. We present simulations of semi-anthropomorphic phantoms using a standard CT scanner geometry and a 16-slice design.
Automatic 3D segmentation of the liver from abdominal CT images: a level-set approach
Shiyan Pan, Benoit M. Dawant
Computer-aided surgery requires the accurate registration of tomographic pre-operative images with intra-operative surface points acquired with 3D spatial localizers. Surface-based registration of these tomographic images with the surface points does, in turn, require a precise and accurate surface representation of the structures to be registered. This paper presents a level set technique for the automatic segmentation of the liver in abdominal CT scans. The main difficulty with level set methods is the design of appropriate speed functions for particular applications. Here we propose a novel speed function that is designed to (1) stop the propagating front at organ boundaries with weak edges, and (2) incorporate a priori information on the relative position of the liver and other structures. We show that the new speed function we have designed to stop the front at weak edges is superior to other approaches proposed in the literature, both on simulated images and on CT images. Results obtained with our approach for the segmentation of the liver in several CT scans are compared to contours obtained manually.
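A toy 2D version of such a design: an edge-stopping factor drives the speed to zero at strong gradients, and a hypothetical prior map (values in [0, 1], small where a neighboring organ is expected) damps the front elsewhere. The evolution uses the standard first-order upwind scheme for an expanding front; this is a generic sketch, not the paper's speed function:

```python
import numpy as np

def evolve_front(phi, grad_mag, prior, n_steps=200, dt=0.2, beta=5.0):
    """Upwind evolution of phi_t = -F |grad phi| with F >= 0 (expansion).
    phi < 0 inside the front; boundaries wrap (toy setting)."""
    F = prior / (1.0 + (grad_mag / beta) ** 2)   # small at strong edges
    for _ in range(n_steps):
        dxm = phi - np.roll(phi, 1, axis=1)      # backward differences
        dxp = np.roll(phi, -1, axis=1) - phi     # forward differences
        dym = phi - np.roll(phi, 1, axis=0)
        dyp = np.roll(phi, -1, axis=0) - phi
        g = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                    np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
        phi -= dt * F * g
    return phi
```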
Segmentation of medical images by feature tracing in a self-dual morphological scale-space
The multiscale approach derives a segmentation from the evolution of appropriate signal-descriptive features in scale-space. Features that are stable over a wide range of scales are assumed to belong to visually sensible regions. To compensate for the well-known drawbacks of linear scale-spaces, the shape-preserving properties of morphological scale-space filtering are utilized. The limiting duality of morphological filters is overcome by a self-dual morphological approach considering both light and dark structures in either the opening or the closing branch of the scale-space. Reconstructive opening/closing filters enable the scale-analysis of 2D signals, since they are causal with respect to regional maxima/minima. This makes it possible to identify important regions in scale-space via their extrema. Each extremum is assigned a region by a gradient watershed at the corresponding scale. Due to the morphological filtering, the scale behavior of the regions can be represented by a tree structure describing the spatial inter- and intra-scale relations among regions. The significance of a watershed region is automatically derived from its scale behavior by considering various attributes describing scale-dependent, morphological, and statistical properties of the region. The most significant regions form the segmentation of the image. The algorithm was verified for various medical image domains, such as cytological micrographs, bone x-rays, and cranial NMR slices.
Autosegmentation of ultrasonic images by the genetic algorithm
Ching-Fen Jiang
Textural-feature-based segmentation methods have been widely applied to the segmentation of ultrasonic images. However, the manual selection of textural features in previous approaches not only makes these segmentation methods inadaptable but can also bias the results. Herein we propose an auto-feature-selection algorithm to solve these problems. The algorithm consists of three steps. First, a feature library composed of 32 textural features is established. The genetic algorithm is then used to auto-select the features and assign each of them a different weight according to its importance. The fitness of each gene is evaluated by five factors: region dissimilarity, number of edge points, edge fragmentation, edge thickness, and curvature. Finally, a K-means process classifies the image into three different tissues using the selected features with their weights. The segmentation outcomes of various ultrasonic images by this auto-feature-selection algorithm show better correspondence with human comprehension than the results of previous works. In addition, it provides a more adaptive way to adjust the weights of the features used in the clustering process and therefore avoids takeover by large-valued features, a problem that has received little attention in the traditional K-means process, in which all features have the same weight.
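The weighted clustering step can be illustrated with a K-means variant whose distance is scaled per feature by the GA-evolved weights w, so large weights, not large-valued features, dominate the grouping. A minimal sketch with the weights assumed given:

```python
import numpy as np

def weighted_kmeans(X, w, k=3, n_iters=50, seed=0):
    """K-means with per-feature weights: d(x, c) = sum_f w_f (x_f - c_f)^2."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iters):
        d = ((X[:, None, :] - centers[None]) ** 2 * w).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep empty clusters unchanged
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```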
Automatic, accurate, and reproducible segmentation of the brain and cerebro-spinal fluid in T1-weighted volume MRI scans and its application to serial cerebral and intracranial volumetry
Louis Lemieux
A new fully automatic algorithm for the segmentation of the brain and cerebro-spinal fluid (CSF) from T1-weighted volume MRI scans of the head was specifically developed in the context of serial intra-cranial volumetry. The method is an extension of a previously published brain extraction algorithm. The brain mask is used as a basis for CSF segmentation based on morphological operations, automatic histogram analysis, and thresholding. Brain segmentation is then obtained by iterative tracking of the brain-CSF interface. Grey matter (GM), white matter (WM), and CSF volumes are calculated based on a model of intensity probability distribution that includes partial volume effects. Accuracy was assessed using a digital phantom scan. Reproducibility was assessed by segmenting pairs of scans from 20 normal subjects scanned 8 months apart and 11 patients with epilepsy scanned 3.5 years apart. Segmentation accuracy as measured by overlap was 98% for the brain and 96% for the intra-cranial tissues. The volume errors were: total brain (TBV): -1.0%; intra-cranial (ICV): +0.1%; CSF: +4.8%. For repeated scans, matching resulted in improved reproducibility. In the controls, the coefficient of reliability (CR) was 1.5% for the TBV and 1.0% for the ICV. In the patients, the CR for the ICV was 1.2%.
Segmentation II
Knowledge-based extraction of cerebral vasculature from anatomical MRI
Lasse Riis Oestergaard, Ole Vilhelm Larsen, Jens Haase, et al.
A vessel extraction approach is presented that permits visualization of the cerebral vasculature in 3D from anatomical proton density (PD) weighted magnetic resonance imaging (MRI) volumes. The approach presented utilizes general knowledge about the shape and size of the cerebral vasculature and is divided into multi-scale vessel enhancement filtering, centre-line extraction, and surface modeling. To improve the discrimination between blood vessels and other tissue a multi-scale filtering method that enhances tubular structures is used as a pre-processing step. Centre-line extraction is applied to roughly estimate the centre-line of the vasculature involving both segmentation and skeletonization. The centre-line is used to initialize an active contour modeling process where cylinders are used to model the 3D surface of the blood vessels. The accuracy and robustness of the vessel extraction approach have been demonstrated on both simulated and real data (1mm3 voxels). On simulated data, the mean error of the estimated radii was found to be less than 0.4mm. On real data, the vasculature was successfully extracted from 20 MRI data sets using the same input parameters. An expert found the extracted vessel surfaces to coincide with the vessel walls in the data. Results from CTA data indicate that the approach will work successfully with other imaging modalities as well.
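Multi-scale tubular-structure filters of this family are typically built from the eigenvalues of the image Hessian (Frangi-style vesselness). The abstract does not specify the filter, so the following single-scale, 2D-slice sketch is only illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, sigma=2.0, beta=0.5, c=15.0):
    """Single-scale, 2D Frangi-style response: a bright tube has one strongly
    negative Hessian eigenvalue and one near zero."""
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    l2 = 0.5 * (Hxx + Hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)           # order so that |l1| <= |l2|
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    Rb = np.abs(l1) / np.maximum(np.abs(l2), 1e-12)   # blobness ratio
    S = np.sqrt(l1 ** 2 + l2 ** 2)                    # second-order energy
    v = np.exp(-Rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)          # bright-on-dark structures only
```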
Statistical segmentation of multidimensional brain datasets
Manuel Desco, Juan D. Gispert, Santiago Reig, et al.
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter, and grey matter. Using this procedure, our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested on a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
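The EM stage maps naturally onto a full-covariance Gaussian mixture. A sketch with scikit-learn, where t1, t2, and brain_mask are hypothetical co-registered arrays for the voxels that survive stage 1:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Rows = voxels, columns = co-registered T1 and T2 intensities.
X = np.stack([t1[brain_mask], t2[brain_mask]], axis=1)

# covariance_type='full' keeps the full covariance matrix per class, so the
# T1/T2 correlation within each tissue is modeled, not just the diagonal.
gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0)
labels = gmm.fit(X).predict(X)   # 3 classes ~ CSF / grey / white (order arbitrary)
```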
Automatic segmentation editing for cortical surface reconstruction
Xiao Han, Chenyang Xu, Maryam E. Rettmann, et al.
Segmentation and representation of the human cerebral cortex from magnetic resonance images is an important goal in neuroscience and medicine. Accurate cortical segmentation requires preprocessing of the image data to separate certain subcortical structures from the cortex in order to generate a good initial white-matter/gray-matter interface. This step is typically manual or semi-automatic. In this paper, we propose an automatic procedure that is based on a careful analysis of the brain anatomy. Following a fuzzy segmentation of the brain image, the method first extracts the ventricles using a geometric deformable surface model. A region force, derived from the cerebrospinal fluid membership function, is used to deform the surface towards the boundary of the ventricles, while a curvature force controls the smoothness of the surface and prevents it from growing into the outer pial surface. Next, region growing identifies and fills the subcortical regions in each cortical slice using the detected ventricles as seeds and the white matter and several automatically determined sealing lines as boundaries. To make the method robust to segmentation artifacts, a putamen mask drawn in the Talairach coordinate system is also used to help the region growing process. Visual inspection and initial results on 15 subjects show the success of the proposed method.
Multiobject relative fuzzy connectedness and its implications in image segmentation
The notion of fuzzy connectedness captures the idea of the hanging-togetherness of image elements in an object by assigning a strength of connectedness to every possible path between every possible pair of image elements. This concept leads to powerful image segmentation algorithms based on dynamic programming whose effectiveness has been demonstrated on thousands of images in a variety of applications. In a previous framework, we introduced the notion of relative fuzzy connectedness for separating a foreground object from a background object. In this framework, an image element c is considered to belong to that object among the two with respect to whose reference image element c has the higher strength of connectedness. In fuzzy connectedness, a local fuzzy relation called affinity is used on the image domain. This relation was required, for theoretical reasons, to be of fixed form in the previous framework. In the present paper, we generalize relative connectedness to multiple objects, allowing all objects (of importance) to compete among themselves to grab membership of image elements based on their relative strength of connectedness to reference elements. We also allow affinity to be tailored to the individual objects. We present a theoretical and algorithmic framework and demonstrate that the objects defined are independent of the reference elements chosen as long as they are not in the fuzzy boundary between objects. Examples from medical imaging are presented to illustrate visually the effectiveness of multiple-object relative fuzzy connectedness. A quantitative evaluation based on 160 mathematical phantom images demonstrates objectively the effectiveness of relative fuzzy connectedness with an object-tailored affinity relation.
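The key quantity, the strength of connectedness (maximum over paths of the minimum affinity along the path), can be computed with a Dijkstra-like propagation. A small-graph sketch with a dense pairwise affinity matrix; in practice affinity is defined only between neighboring voxels:

```python
import heapq
import numpy as np

def connectedness(affinity, seed):
    """conn[c] = max over paths seed->c of the min affinity along the path;
    affinity is an (n, n) matrix with values in [0, 1]."""
    n = affinity.shape[0]
    conn = np.zeros(n)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                    # max-heap via negated strengths
    while heap:
        s, u = heapq.heappop(heap)
        s = -s
        if s < conn[u]:
            continue                         # stale heap entry
        for v in range(n):
            new = min(s, affinity[u, v])     # path strength = weakest link
            if new > conn[v]:
                conn[v] = new
                heapq.heappush(heap, (-new, v))
    return conn
```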
Silver standards obtained from Fourier-based texture synthesis to evaluate segmentation procedures
Segmentation is fundamental for automated analysis of medical images. However, a unified approach for evaluation does not yet exist. Gold standards are often inapplicable because they require invasive preparations or tissue extraction. Empirical evaluations only reflect the conformity of a segmentation with the subjective visual expectation of users, which is subject to inter- as well as intra-observer variability. This paper presents a consistent approach to create synthetic but realistic images with a priori known object boundaries (silver standards), which are suitable for optimization and evaluation of various segmentation algorithms. Rectangular example patches are collected for each tissue (interior, exterior, and a contour zone). Fourier amplitude and phase images are stored together with the mean gray value. For silver standard generation, a reference contour is either manually given or automatically extracted from real data applying the algorithm under evaluation. For each class of tissue, the amplitude of one patch is randomly combined with the perturbed phase of another. A randomly chosen mean from the same class is superimposed on the inverse Fourier transform. Numerous silver standards are obtained from only a few texture patches of each tissue. Based on microscopy, CT, and functional MRI data, the applicability of silver standards is proven in two, three, and four dimensions. They are analyzed with respect to systematic deviations. Minor deviations occur for two-dimensional images, while those for three or four dimensions are larger but still acceptable.
Validation of semiautomated segmentation algorithm with partial volume redistribution
Purpose - To reduce partial volume contamination, we present a linear interpolation combining quantitative T1 information with segmented base images. In addition, manual segmentation was completed for comparison with both of the techniques. Methods - To quantitatively assess T1, a precise and accurate inversion recovery (PAIR) sequence was acquired. The Kohonen SOM segmentation algorithm used the four base images as inputs and had nine output neurons. The segmented regions were manually classified by an expert for training a multi-layered backpropagation neural network to automate this process. A linear interpolation based on mean T1 relaxivity for each segmented class (regional method) and on a pixel-by-pixel basis (pixel method) was performed. Manual segmentation was performed directly on base images by three observers. Differences between the techniques are reported as percent errors of the mean difference divided by the mean estimates of the manual segmentation. Results and Discussion - Within-observer variances for the manual segmentation were less than 5.6%, while between-observer variances were 11.7% and 7.2% for white and gray matter, respectively. The regional method had variances of 4.1% and 1.0%, while the pixel method produced variances of 5.8% and 1.5% for white and gray matter, respectively, compared to the manual segmentation.
Deformable Geometry I
Statistical models of appearance for medical image analysis and computer vision
Tim F. Cootes, Christopher J. Taylor
Statistical models of shape and appearance are powerful tools for interpreting medical images. We assume a training set of images in which corresponding landmark points have been marked on every image. From this data we can compute a statistical model of the shape variation, a model of the texture variation and a model of the correlations between shape and texture. With enough training examples such models should be able to synthesize any image of normal anatomy. By finding the parameters which optimize the match between a synthesized model image and a target image we can locate all the structures represented by the model. Two approaches to the matching will be described. The Active Shape Model essentially matches a model to boundaries in an image. The Active Appearance Model finds model parameters which synthesize a complete image which is as similar as possible to the target image. By using a difference decomposition approach the current difference between target image and synthesized model image can be used to update the model parameters, leading to rapid matching of complex models. We will demonstrate the application of such models to a variety of different problems.
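The step that keeps a fitted shape plausible is easy to state: project the candidate shape onto the PCA subspace and clamp each mode coefficient. A minimal sketch with the mean shape, eigenvector matrix P, and eigenvalues from training:

```python
import numpy as np

def constrain_shape(shape, mean_shape, P, eigvals, limit=3.0):
    """Project a candidate shape into the model subspace and clamp each mode
    to +/- limit * sqrt(lambda_k), the usual plausibility bound."""
    b = P.T @ (shape - mean_shape)           # mode coefficients
    bound = limit * np.sqrt(eigvals)
    b = np.clip(b, -bound, bound)
    return mean_shape + P @ b                # nearest plausible shape
```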
Time-continuous segmentation of cardiac MR image sequences using active appearance motion models
S. C. Mitchell, Boudewijn P. F. Lelieveldt, Rob J. van der Geest, et al.
Active Appearance Models (AAMs) are useful for segmentation of static cardiac MR images since they exploit prior knowledge about the cardiac shape and image appearance. However, applying 2D AAMs to full cardiac cycle segmentation would require multiple models for different phases of the cardiac cycle, because traditional AAMs account only for the variations within image classes and not temporal classes. This paper presents a novel 2D+time Active Appearance Motion Model (AAMM) that represents the dynamics of the cardiac cycle in combination with the shape and image appearance of the heart, ensuring a time-continuous segmentation of a complete cardiac MR sequence. In the AAMM, single-beat sequences are phase-normalized into sets of 2D images, and the shape points and gray intensities between frames are concatenated into a shape vector and an intensity vector. Appearance variations over time are captured using Principal Component Analysis on both vectors in the training set. Time-continuous segmentation is achieved by minimizing the model appearance-to-target differences, adjusting the model eigen-coefficients using a gradient descent approach. In matching tests, the model proved robust to the initial position and approximated the true segmentation very well. Large-scale clinical validation in patients is ongoing.
Active appearance motion models for endocardial contour detection in time sequences of echocardiograms
Active Appearance Models (AAM) are suitable for segmenting 2D images, but for image sequences time-continuous results are desired. Active Appearance-Motion Models (AAMM) model shape and appearance of the heart over the full cardiac cycle. Single-beat sequences are phase-normalized into stacks of 16 2D images. In a training set, corresponding shape points on the endocardium are defined for each image based on expert-drawn contours. Shape (2D) and intensity vectors are derived as in the AAM. Intensities are normalized non-linearly to handle ultrasound-specific problems. For all time frames, the shape vectors are simply concatenated, as are the intensity vectors. Principal Component Analysis extracts appearance eigenvariations over the cycle, capturing typical motion patterns. AAMMs perform segmentation on complete sequences by minimizing model-to-target differences, adjusting AAMM eigenvariation coefficients using gradient descent minimization. This results in a time-continuous segmentation. The method was trained and tested on echocardiographic 4-chamber sequences of 129 unselected patients split randomly into a training set (n=65) and a test set (n=64). In all sequences, an independent expert manually drew endocardial contours. On the test set, the fully automated AAMM performed well in 97% of cases (average distance 3.3 mm, 9.3 pixels, comparable to human inter- and intraobserver variabilities).
Streamlined volumetric landmark placement method for building three-dimensional active shape models
Molly M. Dickens, Hamed Sari-Sarraf, Shaun S. Gleason
For the purpose of building a statistical shape model of a 3D object, a landmarked training set must be obtained. The purpose of this work is to develop a streamlined, slice-by-slice, volumetric landmarking scheme that requires no prior segmentation of the object and directly produces a set of corresponding landmark points for each 3D object in the training set. To achieve inter-object correspondence, the procedure's main feature is the establishment of an object-based coordinate system. Minimal user interaction is employed to locate an origin and axes on each volume by presenting the user with appropriate image slices and annotation tools, as dictated by the data under study. The volumes can then be aligned so that locating a physical point, relative to the established coordinate system, is repeatable from one instance of the object to another. For the purpose of verifying and demonstrating the proposed procedure, a set of test data consisting of image volumes of ten mangoes was obtained. Results of the landmarking scheme for this data, as well as for medical image data, demonstrate good object segmentation and landmark point correspondence.
Poster Session: Tomographic Reconstruction, Statistical Methods, Shape, Data Retrieval, Motion, Multiresolution, Preprocessing, Pattern Recognition and Coding
Factor analysis of high-dimensional heterogeneous data for structural characterization
In this work, we present a method for exploring the relationships among morphometric variables and the possible anatomic significance of these relationships. The analysis is based on the Jacobian determinant field resulting from the registration of a template to a set of subjects, which is represented as a factorial analytic model. In addition to morphometric variables, information about medical diagnosis is considered in the analytic model and contributes to the exploratory investigation of the relationship between regions of interest and pathologies. The definition of the number of factors to be considered is based on a robust analysis of the statistical fit of the factor model, instead of on ad hoc criteria. The advantages of the proposed methodology are demonstrated in a study of shape differences between the corpora callosa of schizophrenic patients and normal controls. We show that the regions where these differences occur can be determined by unsupervised analysis, indicating the method's potential for exploratory studies.
Shape
Hemispherical map for the human brain cortex
Duygu Tosun, Jerry L. Prince
Understanding the function of the human brain cortex is a primary goal in human brain mapping. Methods to unfold and flatten the cortical surface for visualization and measurement have been described in previous literature, but comparison across multiple subjects is still difficult because of the lack of a standard mapping technique. We describe a new approach that maps each hemisphere of the cortex to a portion of a sphere in a standard way, making comparison of anatomy and function across different subjects possible. Starting with a three-dimensional magnetic resonance image of the brain, the cortex is segmented and represented as a triangle mesh. Defining a cut around the corpus callosum identifies the left and right hemispheres. Together, the two hemispheres are mapped to the complex plane using a conformal mapping technique. A Mobius transformation, which is conformal, is used to transform the points on the complex plane so that a projective transformation maps each brain hemisphere onto a spherical segment comprising a sphere with a cap removed. We determined the best size of the spherical cap by minimizing the relative area distortion between hemispherical maps and original cortical surfaces. The relative area distortion between the hemispherical maps and the original cortical surfaces for fifteen human brains is analyzed.
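Both maps involved are elementary. A sketch, assuming the mesh vertices have already been conformally flattened to complex coordinates z:

```python
import numpy as np

def mobius(z, a, b, c, d):
    """Conformal Mobius map z -> (a z + b) / (c z + d), with a d - b c != 0."""
    return (a * z + b) / (c * z + d)

def inverse_stereographic(z):
    """Lift the complex plane onto the unit sphere (north pole = infinity);
    returns an array of (x, y, z) coordinates."""
    d = 1.0 + np.abs(z) ** 2
    return np.stack([2 * z.real / d, 2 * z.imag / d, (np.abs(z) ** 2 - 1) / d],
                    axis=-1)
```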
Shape-based and texture-based feature extraction for classification of microcalcifications in mammograms
Hamid Soltanian-Zadeh, Siamak Pourabdollah-Nezhad, Farshid Rafiee Rad
This paper presents and compares two image processing methods for differentiating benign from malignant microcalcifications in mammograms. The gold standard method for differentiating benign from malignant microcalcifications is biopsy, which is invasive. The goal of the proposed methods is to reduce the rate of biopsies with negative results. In the first method, we extract 17 shape features from each mammogram. These features are related to the shapes of individual microcalcifications or of their clusters. In the second method, we extract 44 texture features from each mammogram using Haralick's co-occurrence method. Next, we select the best features from each set using a genetic algorithm, so as to maximize the area under the ROC curve. This curve is created using a k-nearest-neighbor (kNN) classifier and a malignancy criterion. Finally, we evaluate the methods by comparing the ROC curves with the greatest areas obtained using each method. We applied the proposed methods, with different values of k in the kNN classifier, to 74 malignant and 29 benign microcalcification clusters. Truth for each mammogram was established based on the biopsy results. We found the greatest area under the ROC curve for each set of features used in each method. For shape features this area was 0.82 (k = 7) and for Haralick features it was 0.72 (k = 9).
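The classifier/criterion pairing is standard to reproduce with scikit-learn; X and y below are hypothetical stand-ins for the selected feature matrix and the biopsy-proven labels (1 = malignant):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# X: (n_clusters, n_features) selected shape or Haralick features; y: 0/1.
knn = KNeighborsClassifier(n_neighbors=7)   # k = 7, as reported for shape features
scores = cross_val_predict(knn, X, y, cv=5, method='predict_proba')[:, 1]
print('Area under ROC curve:', roc_auc_score(y, scores))
```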
Automatic generation of object shape models and their application to tomographic image segmentation
We describe a novel method to build 3D statistical shape models for anatomic objects in tomographic images, and demonstrate the use of the model to guide image segmentation. Our method consists of two main steps. In the first step, a statistical shape model is built from a collection of training images. Boundary similarities between adjacent transverse slices are matched to guide inter-slice interpolation. Slice-by-slice correspondences are established between images in the training sets by matching mean boundary curvatures. A statistical shape model is then obtained by principal components analysis. During the second step, the model is used to guide image segmentation. Segmentation is initialized by placing the mean shape into the image under analysis. The model deforms iteratively by updating its shape and pose parameters using the principles of the active shape model. Following the active shape model, an active contour model (snake) is used to refine the object boundary. The proposed methods have been tested using ten volumetric chest HRCT images. The results show that the new method is able to automatically generate 3D object shape models without the need for manual landmark identification. The combination of the active shape model with the active contour model yields fast, accurate object segmentation.
Fast skeletonization algorithm for 3D elongated objects
Ali Shahrokni, Hamid Soltanian-Zadeh, Reza A. Zoroofi
A novel one-pass 3D thinning algorithm is proposed in this paper. 3D thinning can be regarded as an essential step in many image-processing tasks such as image quantification, image compression, motion tracking, path generation, and object navigation. The proposed algorithm generates both connected and non-connected skeletons. It is faster and more accurate than currently available techniques in the literature. In addition, it adaptively removes spurious branches of the skeleton, and hence, generates a smooth and refined skeleton, representing the essential structure of 3D elongated objects such as vessels, airways, and other similar organic shapes.
Efficient representation of shape variability using surface-based free-vibration modes
Cristian Lorenz, Michael R. Kaus, Vladimir Pekar, et al.
The efficient representation of shape and shape variability is a key issue in computerized 3D image processing. One of the common goals is the ability to express as much shape variability as necessary with as few parameters as possible. In this paper we focus on the capture of shape variability on the basis of free surface vibration modes. We do not model the interior of an elastic object, but rather its triangulated surface. As in the case of 3D statistical point-distribution models (PDM), we assume that the shape of an anatomical object can efficiently be approximated by a weighted sum of a mean shape and a number of variation modes. The variation modes are in our case eigenvectors of a stiffness matrix. Based on a given surface triangulation, we define a physical model by placing mass points at the vertices and coil- and leaf-spring elements at the edge positions of the triangulation. Ordered by wavelength, the resulting free vibration modes can be used to efficiently approximate shape variability in a coarse-to-fine manner, similar to a Fourier decomposition. As real-object examples from the medical image-processing domain, we applied the method to triangulated surfaces of segmented lumbar vertebrae and femoral heads from CT data sets. A comparison to corresponding statistical shape models shows that the natural variability of anatomical shape can efficiently be approximated by free surface vibration modes.
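Once the mass-spring surface's (symmetric) stiffness matrix K is assembled, the variation modes are its low-eigenvalue eigenvectors, and new shapes follow the same "mean plus weighted modes" recipe as a PDM. A minimal sketch:

```python
import numpy as np

def vibration_modes(K, n_modes):
    """Free-vibration modes of a symmetric stiffness matrix K; small
    eigenvalues correspond to long-wavelength, low-energy deformations."""
    vals, vecs = np.linalg.eigh(K)           # eigenvalues in ascending order
    return vals[:n_modes], vecs[:, :n_modes]

def synthesize_shape(mean_shape, modes, weights):
    """Approximate a shape as mean + sum_k weights_k * mode_k."""
    return mean_shape + modes @ weights
```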
Deformable Geometry II
Integration of multiple segmentation methods using evaluation
Dongsung Kim, Hanyoung Kim, Heung Sik Kang
This paper proposes an approach that integrates multiple segmentation methods in a systematic way, which can improve overall accuracy without deteriorating the accuracy of highly confident segments of a boundary. A segmentation method produces boundary segments, which are then evaluated with an evaluation function considering the pros and cons of the current method and of the next method to apply. Boundary segments with low confidence are replaced by the next method, while the other segments are kept. These steps are repeated until all segmentation methods have been applied. Coarser and more robust methods are applied earlier than the others. The proposed approach is implemented for the segmentation of muscles in the Visible Human color images. A balloon method, a minimum-cost path finding method, and a Seeded Region Growing method are integrated. The final segmentation results showed improvements in both overall evaluation and segment-based evaluation.
Unifying approach and interface for spline-based snakes
Mathews Jacob, Thierry Blu, Michael A. Unser
In this paper, we present different solutions for improving spline-based snakes. First, we demonstrate their minimum curvature interpolation property, and use it as an argument to get rid of the explicit smoothness constraint. We also propose a new external energy obtained by integrating a non-linearly pre-processed image in the closed region bounded by the curve. We show that this energy, besides being efficiently computable, is sufficiently general to include the widely used gradient-based schemes, Bayesian schemes, their combinations and discriminant-based approaches. We also introduce two initialization modes and the appropriate constraint energies. We use these ideas to develop a general snake algorithm to track boundaries of closed objects, with a user-friendly interface.
Knowledge-based deformable surface model with application to segmentation of brain structures in MRI
Amir Ghanei, Hamid Soltanian-Zadeh, Kost Elisevich, et al.
We have developed a knowledge-based deformable surface for segmentation of medical images. This work has been done in the context of segmentation of the hippocampus from brain MRI, due to its challenge and clinical importance. The model has a polyhedral discrete structure and is initialized automatically by analyzing the brain MRI slice by slice and finding a few landmark features in each slice using an expert system. The expert system decides on the presence of the hippocampus and its general location in each slice. The landmarks found are connected together by a triangulation method to generate a closed initial surface. The surface thereafter deforms under defined internal and external force terms to generate an accurate and reproducible boundary for the hippocampus. The anterior and posterior (AP) limits of the hippocampus are estimated by automatic analysis of the location of the brain stem and some of the features extracted in the initialization process. These data are combined with a priori knowledge using the Bayes method to estimate a probability density function (pdf) for the length of the structure in the sagittal direction. The hippocampus AP limits are found by optimizing this pdf. The model has been tested on real clinical data and the results show very good model performance.
Improved conformal metrics for 3D geometric deformable models in medical images
Christopher L. Wyatt, Yaorong Ge, David J. Vining
The Geometric Deformable Model (GDM) is a useful segmentation method that combines the energy minimization concepts of physically deformable models and the flexible topology of implicit deformable models in a mathematically well-defined framework. The key aspect of the method is the measurement of length and area using a conformal metric derived from the image. This conformal metric, usually a monotonically decreasing function of the gradient, defines a Riemannian space in which the surface evolves. The success of the GDM for 3D segmentation in medical applications is directly related to the definition of the conformal metric. Like all deformable models, the GDM is susceptible to poor initialization, varying contrast, partial volume, and noise. This paper addresses these difficulties via the definition of the conformal metric and describes a new method for computing the metric in 3D. This method, referred to as a confidence-based mapping, incorporates a new 3D scale selection mechanism and an a priori image model. A comparison of the confidence-based approach and previous formulations of the conformal metric is presented using computer phantoms. A preliminary application to two clinical examples is given.
Estimation of orientation and position of cervical vertebrae for segmentation with active shape models
Radiologists are always looking for more reliable and robust methods to help them assess, describe, and classify bone structures in x-ray images. Although computer-assisted techniques have proven useful in this regard in recent years, they still face difficult challenges such as inter-subject variability in shape and a lack of contrast in the digitized images of radiographs. These challenges have focused the attention of the computer vision research community on techniques that employ deformable models. One such technique, Active Shape Models (ASM), has received significant attention due to its ability to capture shape variability and to deal with the poor quality of the images in a straightforward manner. However, as is often the case with iterative optimization techniques, the success of the ASM search step is highly dependent on the initial positioning of the mean shape on the target image. Within the specific framework of automatic cervical vertebra segmentation, we have developed and tested an up-front preprocessing algorithm that estimates the orientation and position of the cervical vertebrae in x-ray images and leads to a more accurate initial placement of the mean shape. The algorithm estimates the orientation of the spine by calculating parallel-beam line integrals of the x-ray images. The position of the spine is estimated by considering the density of edges perpendicular to the line integral that gives the estimate of the orientation. The output of the algorithm is a bounding box surrounding the cervical spine area. Morphometric points placed by expert radiologists on a set of 40 digitized radiographs were used to quantify the efficacy of the estimation. This test yielded acceptable results in estimating the orientation and location of the cervical spine.
3D image analysis of abdominal aortic aneurysm
Marko Subasic, Sven Loncaric, Erich Sorantin
In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. The output data (a 3-D model) from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in planning the minimally invasive procedure, that is, for selection of an appropriate stent graft device for treatment of the AAA. The technique is based on a 3-D deformable model and utilizes the level-set algorithm for implementation of the method. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available it is easy to perform all required measurements for appropriate stent graft selection. The method proposed in this paper uses the level-set algorithm for deformable models instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, surpassing most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA, which helps direct the evolution of the deformable model to correctly segment the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.
Segmentation III
Semiautomatic aortic endograft localization for postoperative evaluation of endovascular aneurysm treatment
A semi-automatic method for localization and segmentation of bifurcated aortic endografts in CTA images is presented. The graft position is established through detection of radiopaque markers sewn on the outside of the graft. The user indicates the first and the last marker, whereupon the rest of the markers are detected automatically by second-order scaled derivative analysis combined with prior knowledge of graft shape and marker configuration. The marker centers obtained approximate the graft sides and central axis. The graft boundary is determined, either in the original CT slices or in reformatted slices orthogonal to the local graft axis, by maximizing the local gradient in the radial direction along a deformable contour passing through both sides. The method has been applied to ten CTA images. In all cases, an adequate segmentation is obtained. Compared to manual segmentations, an average similarity (i.e., relative volume of overlap) of 0.93 +/- 0.02 for the graft body and 0.84 +/- 0.05 for the limbs is found.
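One plausible reading of the second-order scaled derivative analysis is scale-normalized Laplacian-of-Gaussian blob detection; the sketch below flags compact bright marker-like spots this way. This is our assumption about the approach, and sigma and the response threshold are hypothetical parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, minimum_filter

def detect_markers(image, sigma=2.0, threshold=-5.0):
    """Scale-normalized LoG response: bright compact blobs of radius ~sigma
    give strong negative responses, kept if they are also local minima."""
    response = sigma**2 * gaussian_laplace(image.astype(float), sigma=sigma)
    local_min = minimum_filter(response, size=2 * int(sigma) + 1)
    return np.argwhere((response == local_min) & (response < threshold))
```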
Edge surface extraction from 3D images
PhengAnn Heng, Lisheng Wang, TienTsin Wong, et al.
Within a 3D image, edge surfaces usually correspond to structural boundaries. Therefore, their recognition and modeling are of basic importance in 3D image analysis. Several approaches have been proposed to find or visualize them, such as 3D edge detection and volume rendering algorithms. But edge detectors mainly seek discrete 3D edge-like points in a 3D image, and volume rendering is mainly used for display. Neither extracts a continuous edge surface model, which is usually needed for further analysis, understanding and interpretation of structures within a 3D image. We present two simple, easy-to-implement methods to extract surface models of step-like edge surfaces directly from a 3D image.
Hybrid atlas-based and image-based approach for segmenting 3D brain MRIs
Gloria Bueno, Olivier Musse, Fabrice Heitz, et al.
This work is a contribution to the problem of localizing key cerebral structures in 3D MRIs and its quantitative evaluation. In pursuing it, the cooperation between an image-based segmentation method and a hierarchical deformable registration approach has been considered. The segmentation relies on two main processes: homotopy modification and contour decision. The first one is achieved by a marker extraction stage where homogeneous 3D regions of an image, I(s), from the data set are identified. These regions, M(I), are obtained by combining information from a deformable atlas, achieved by warping eight previously labeled maps onto I(s). Then, the goal of the decision stage is to precisely locate the contours of the 3D regions set by the markers. This contour decision is performed by a 3D extension of the watershed transform. The anatomical structures taken into consideration and embedded into the atlas are the brain, ventricles, corpus callosum, cerebellum, right and left hippocampus, medulla and midbrain. The hybrid method operates fully automatically and in 3D, successfully providing segmented brain structures. The quality of the segmentation has been studied in terms of the detected volume ratio by using the kappa statistic and ROC analysis. Results of the method are shown and validated on a 3D MRI phantom. This study forms part of an on-going long-term research effort aiming at the creation of a 3D probabilistic multi-purpose anatomical brain atlas.
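The marker/watershed pipeline can be sketched in a few lines (a generic marker-based watershed, not the authors' atlas-driven marker extraction): labeled marker regions seed a flooding of the gradient magnitude, and the flood lines become the contours. The gradient scale is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, label
from skimage.segmentation import watershed

def contour_decision(volume, marker_mask):
    """Flood the 3D gradient image from the labeled markers; watershed lines
    settle on the strongest intensity transitions between regions."""
    gradient = gaussian_gradient_magnitude(volume.astype(float), sigma=1.0)
    markers, _ = label(marker_mask)     # one integer label per marker region
    return watershed(gradient, markers)
```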
Local contralateral subtraction based on simultaneous segmentation and registration method for computerized detection of pulmonary nodules
Many of the existing computer-aided diagnosis (CAD) schemes for detection of nodules in chest radiographs suffer from a large number of false positives. Previously, we reported a local contralateral subtraction method that effectively removes false positives due to the presence of normal structures. In this approach, registration of the left and right lung regions is performed for extraction of lung nodules while the normal anatomic structures are removed. In this study, we developed a novel method for simultaneous registration and segmentation, which registers two similar images while a region with significant difference is adaptively segmented, and incorporated it into the local contralateral subtraction method. In this method, a non-linear functional that models the statistical properties of the subtraction of the two images is formulated, and this functional is minimized by a coarse-to-fine approach to yield a mapping that yields the registration and a boundary that yields the segmentation. A preliminary result shows that the new method is effective in segmenting the abnormal structures and removing normal structures. The local contralateral subtraction based on the new segmentation and registration method was shown to be effective in reducing the number of false detections reported by our computer-aided diagnosis scheme for detection of lung nodules in chest radiographs.
Automating measurement of subtle changes in articular cartilage from MRI of the knee by combining 3D image registration and segmentation
John Andrew Lynch, Souhil Zaim, Jenny Zhao, et al.
In osteoarthritis, articular cartilage loses integrity and becomes thinned. This usually occurs at sites which bear weight during normal use. Measurement of such loss from MRI scans requires precise and reproducible techniques, which can overcome the difficulties of patient repositioning within the scanner. In this study, we combine a previously described technique for segmentation of cartilage from MRI of the knee with a technique for 3D image registration that matches localized regions of interest at followup and baseline. Two patients who had recently undergone meniscal surgery and developed lesions during the 12-month followup period were examined. Image registration matched regions of interest (ROI) between baseline and followup, and changes within the cartilage lesions were estimated to be about a 16% reduction in cartilage volume within each ROI. This was more than 5 times the reproducibility of the measurement, but only represented a change of between 1 and 2% in total femoral cartilage volume. Changes in total cartilage volume may be insensitive for quantifying changes in cartilage morphology. A combined use of automated image segmentation with 3D image registration could be a useful tool for the precise and sensitive measurement of localized changes in cartilage from MRI of the knee.
Pattern Recognition/Preprocessing
Feature-extraction method based on the ideal observer
We discuss the design of feature-extraction methods based on the strategy of the ideal observer. We restrict our discussion to a binary hypothesis-testing task where the observer has to decide whether a signal is added onto a nonuniform background.
Nonlinear discriminant analysis
We describe a new nonlinear discriminant analysis method for feature extraction. This method applies a nonsingular transform to the data such that the transformed data have a Gaussian distribution. Then a Bayes likelihood ratio is calculated for the transformed data. The nonsingular transform makes use of wavelet transforms and histogram matching techniques. Wavelet transforms are an effective tool in analyzing data structures. Histogram matching is applied to the wavelet coefficients and the ordinary image pixel values in order to create a transformed image that has the desired Gaussian statistics.
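A minimal sketch of the histogram-matching step (illustrative only; the wavelet-domain part of the authors' transform is omitted): a rank-preserving mapping sends any sample distribution to N(0,1), after which Gaussian likelihood-ratio machinery applies.

```python
import numpy as np
from scipy.stats import norm

def gaussianize(x):
    """Monotone transform matching the empirical histogram of x to a standard
    normal: replace each value by the normal quantile of its mid-rank."""
    flat = np.asarray(x, dtype=float).ravel()
    ranks = flat.argsort().argsort()           # 0 .. n-1
    u = (ranks + 0.5) / flat.size              # quantiles strictly in (0, 1)
    return norm.ppf(u).reshape(np.shape(x))
```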
Markov chain Monte Carlo posterior sampling with the Hamiltonian method
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. I show that the efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs remains constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
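For readers unfamiliar with the method, here is a compact leapfrog sketch of one Hamiltonian MCMC update for a target with potential φ = -log pdf; this is the textbook formulation under assumed step size and trajectory length, not the paper's exact implementation.

```python
import numpy as np

def hmc_step(x, phi, grad_phi, eps=0.1, steps=20, rng=None):
    """One trajectory: draw a fresh momentum, leapfrog along (approximately)
    constant H = phi(x) + |p|^2 / 2, then Metropolis-accept the endpoint."""
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(x.shape)
    x1, p1 = x.copy(), p0 - 0.5 * eps * grad_phi(x)   # half momentum step
    for i in range(steps):
        x1 += eps * p1                                # full position step
        if i < steps - 1:
            p1 -= eps * grad_phi(x1)                  # full momentum step
    p1 -= 0.5 * eps * grad_phi(x1)                    # final half step
    h0 = phi(x) + 0.5 * p0 @ p0
    h1 = phi(x1) + 0.5 * p1 @ p1
    return x1 if np.log(rng.random()) < h0 - h1 else x

# Isotropic Gaussian target, as in the paper's efficiency test.
phi = lambda x: 0.5 * x @ x
grad_phi = lambda x: x
x = np.zeros(100)
for _ in range(200):
    x = hmc_step(x, phi, grad_phi)
```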
Design of connected operators using the image foresting transform
The Image Foresting Transform (IFT) reduces optimal image partition problems from seed pixels into a shortest-path forest problem in a graph, whose solution can be obtained in linear time. It has allowed a unified and efficient approach to edge tracking, region growing, watershed transforms, multiscale skeletonization, and the Euclidean distance transform. In this paper, we extend the IFT to introduce two connected operators: cutting-off-domes and filling-up-basins. The former simplifies grayscale images by reducing the height of its domes, while the latter reduces the depth of its basins. By automatically or interactively specifying seed pixels in the image and computing a shortest-path forest, whose trees are rooted at these seeds, the IFT creates a simplified image where the brightness of each pixel is associated with the length of the corresponding shortest path. A label assigned to each seed is propagated, resulting in a labeled image that corresponds to the watershed partitioning from markers. The proposed operators may also be used to provide regional image filtering and labeling of connected components. We combine cutting-off-domes and filling-up-basins to implement regional minima/maxima, h-domes/basins, opening/closing by reconstruction, leveling, area opening/closing, closing of holes, and removal of spikes. Their applications are illustrated with respect to medical image segmentation.
Denoising of ultrasound sector scans by nonlinear filtering of a morphological and linear ratio pyramid
Volker H. Metzler, Marc Puls, Til Aach
The quality of ultrasound images is limited by granular speckle noise. The presented despeckle algorithm compensates for the depth-dependent shape of granular speckles in sector scans by an initial coordinate transform. This yields a horizontally oriented speckle pattern of constant resolution and hence allows the use of constant filter templates. The signal-dependent nature of multiplicative speckle noise is accounted for by a ratio pyramid containing noise-normalized, subsampled scales corrupted by signal-independent noise. Since speckles can be identified as positive and negative impulses on the subsampled scales, they are removed by self-dual nonlinear multistage filters (NMF). The templates are adapted to the granular appearance of the speckles and the degree of filtering is individually controlled by the local noise power in each scale. We propose a new self-dual morphological pyramid with the common erosion/dilation as analysis operators and the reconstructive dilation/erosion as synthesis operators. The resulting closing-by-reconstruction and opening-by-reconstruction branches consider local intensity amplifications and attenuations, respectively. They are generated separately and combined only for scale-selective restoration by NMFs. Besides the morphological decomposition, a ratio Laplacian pyramid is evaluated and its performance is compared with that of the proposed morphological decomposition. Both methods lead to significant noise reduction, with the morphological method introducing less signal degradation.
Improved approaches to hair removal from skin images
Zhishun She, Peter J. Fish, Andrew W.G. Duller
Two new approaches to hair removal from skin images, using auto-regressive (AR) model signal interpolation and band-limited (BL) signal interpolation, have been compared to conventional linear interpolation and found to lead to smaller interpolation errors. Experimental results illustrate that the proposed approaches are able to reduce the disruption of the skin line pattern caused by hair removal.
Illumination normalization of retinal images using sampling and interpolation
Yiming Wang, Weining Tan, Samuel C. Lee
Blood vessels in retinal images are often spread widely across the image surface. By using this feature, this paper presents a novel approach for illumination normalization of retinal images. With the assumption that the reflectance of the vessels (including both major and small vessels) is constant, it was found in our study that the illumination distribution of a retinal image can be estimated based on the locations of the vessel pixels and their intensity values. The procedure for estimating the illumination consists of two steps: (1) obtain the vessel map of the retinal image, and (2) estimate the illumination function (IF) of the image by interpolating the intensity values (luminance) at non-vessel pixels from the locations and intensity values of the vessel pixels, using a bicubic model function. The illumination-normalized image can then be obtained by subtracting the original image from the estimated IF. Twenty non-uniformly illuminated sample retinal images were tested using the proposed method. The results showed that the overall standard deviation of the illumination for the image background was reduced by 56.8%, from 19.82 to 8.56, and the signal-to-noise ratio of the normalized images was greatly improved in the application of global thresholding for image/region segmentation. Furthermore, when measured by the local luminosity histograms, the contrast of regions with low illumination containing features that are normally difficult to detect (such as small lesions and vessels) was also enhanced significantly. Therefore, it is concluded that the proposed method can be used to produce a desirable illumination-normalized image, from which region segmentation can be made easier and more accurate.
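A rough sketch of the interpolation step: fit a smooth low-order polynomial surface to the luminance of the chosen sample pixels and evaluate it everywhere as the illumination estimate. A full cubic polynomial in (x, y) stands in here for the paper's bicubic model function; the degree and the mask convention are assumptions.

```python
import numpy as np

def estimate_illumination(image, sample_mask, degree=3):
    """Least-squares fit of a 2D polynomial surface to the pixels flagged in
    sample_mask, evaluated over the full image grid."""
    h, w = image.shape
    ys, xs = np.nonzero(sample_mask)
    u, v = xs / w, ys / h                      # normalized, well-conditioned
    powers = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack([u**i * v**j for i, j in powers], axis=1)
    coef, *_ = np.linalg.lstsq(A, image[ys, xs].astype(float), rcond=None)
    vv, uu = np.meshgrid(np.arange(h) / h, np.arange(w) / w, indexing="ij")
    B = np.stack([uu.ravel()**i * vv.ravel()**j for i, j in powers], axis=1)
    return (B @ coef).reshape(h, w)
```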
Registration I
Multifunction extension of simplex optimization method for mutual information-based registration of ultrasound volumes
Vladimir Zagrodsky, Raj Shekhar, J. Fredrick Cornhill
Mutual information has been demonstrated to be an accurate and reliable criterion function for performing registration of medical data. Due to speckle noise, ultrasound volumes do not provide a smooth mutual information function. Consequently the optimization technique used must be robust enough to avoid local maxima and eventually converge on the desired global maximum. While the well-known downhill simplex optimization uses a single criterion function, our extension to multi-function optimization uses three criterion functions, namely mutual information computed at three levels of intensity quantization and hence three degrees of noise suppression. Registration was performed with rigid as well as simple non-rigid transformation modes for real-time 3D ultrasound datasets of the left ventricle. Pairs of frames corresponding to the most stationary end-diastolic cardiac phase were chosen, and an initial misalignment was artificially introduced between them. The multi-function simplex optimization reduced the failure rate by a factor of two in comparison to the standard simplex optimization, while the average accuracy for the successful cases was unchanged. A more robust registration resulted from the parallel use of criterion functions. The additional computational cost was negligible, as each of the three implementations of the mutual information used the same joint histogram and required no extra spatial transformation.
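The three criterion functions can be sketched directly (our reconstruction of the idea, not the authors' code): mutual information from a joint histogram, evaluated at three intensity quantization levels, with coarser bins giving stronger noise suppression. The specific bin counts are assumptions.

```python
import numpy as np

def mutual_information(a, b, bins):
    """MI of two intensity volumes from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def criterion_functions(a, b, levels=(16, 32, 64)):
    """One MI value per quantization level, for multi-function optimization."""
    return [mutual_information(a, b, n) for n in levels]
```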
Active edge maps for medical image registration
William Kerwin, Chun Yuan
Applying edge detection prior to performing image registration yields several advantages over raw intensity-based registration. Advantages include the ability to register multicontrast or multimodality images, immunity to intensity variations, and the potential for computationally efficient algorithms. In this work, a common framework for edge-based image registration is formulated as an adaptation of snakes used in boundary detection. Called active edge maps, the new formulation finds a one-to-one transformation T(x) that maps points in a source image to corresponding locations in a target image using an energy minimization approach. The energy consists of an image component that is small when edge features are well matched in the two images, and an internal term that restricts T(x) to allowable configurations. The active edge map formulation is illustrated here with a specific example developed for affine registration of carotid artery magnetic resonance images. In this example, edges are identified using a magnitude of gradient operator, image energy is determined using a Gaussian weighted distance function, and the internal energy includes separate, adjustable components that control volume preservation and rigidity.
Evaluating template bias when synthesizing population averages
Blake L. Carlson, Gary E. Christensen, Hans J. Johnson, et al.
Establishing the average shape and spatial variability for a set of similar anatomical objects is important for detecting and discriminating morphological differences between populations. This may be done using deformable templates to synthesize a 3D CT/MRI image of the average anatomy from a set of CT/MRI images collected from a population of similar anatomical objects. This paper investigates the error associated with the choice of template selected from the population used to synthesize the average population shape. Population averages were synthesized for a population of five infant skulls with sagittal synostosis and a population of six normal adult brains using a consistent linear-elastic image registration algorithm. Each data set from the populations was used as the template to synthesize a population average. This resulted in five different population averages for the skull population and six different population averages for the brain population. The displacement variance distance from a skull within the population to the other skulls in the population ranged from 5.5 to 9.9 mm² while the displacement variance distance from the synthesized average skulls to the population ranged from 2.2 to 2.7 mm². The displacement variance distance from a brain within the population to the other brains in the population ranged from 9.3 to 14.2 mm² while the displacement variance distance from the synthesized average brains to the population ranged from 3.2 to 3.6 mm². These results suggest that there was no significant difference between the choice of template with respect to the shape of the synthesized average data set for these two populations.
Statistical approach to anatomical landmark extraction in AP radiographs
Rok Bernard, Franjo Pernus
A novel method for the automated extraction of important geometrical parameters of the pelvis and hips from AP radiograph (APR) images is presented. The shape and intensity variations in APR images are encompassed by statistical shape and appearance models built from a set of training images for each of the three anatomies (pelvis, right hip, and left hip) separately. The identification of the pelvis and hips is defined as a flexible object recognition problem, which is solved by generating anatomically plausible object instances and matching them to the APR image. The criterion function minimizes the resulting match error and considers the object topology. The obtained flexible object defines the positions of anatomical landmarks, which are further used to calculate the hip joint contact stress. A leave-one-out test was used to evaluate the performance of the proposed method on a set of 26 APR images. The results show that the method is able to properly treat image variations and can reliably and accurately identify anatomies in the image and extract the anatomical landmarks needed in the hip joint contact stress calculation.
Template selection and rejection for robust nonrigid 3D registration in the presence of large deformations
Peter Roesch, Torsten Mohs, Thomas Netsch, et al.
The purpose of the proposed template propagation method is to support the comparative analysis of image pairs even when large deformations (e.g. from movement) are present. Starting from a position where valid starting estimates are known, small sub-volumes (templates) are registered rigidly. Propagating registration results to neighboring templates, the algorithm proceeds layer by layer until corresponding points for the whole volume are available. Template classification is important for defining the templates to be registered, for propagating registration results and for selecting successfully registered templates which finally represent the motion vector field. This contribution discusses a template selection and classification strategy based on the analysis of the similarity measure in the vicinity of the optimum. For testing the template propagation and classification methods, deformation fields of four volume pairs exhibiting considerable deformations have been estimated and the results have been compared to corresponding points picked by an expert. In all four cases, the proposed classification scheme was successful. Based on homologous points resulting from template propagation, an elastic transformation was performed.
Multiscale image and multiscale deformation of brain anatomy for building average brain atlases
Colin Studholme, Valerie A. Cardenas, Michael W. Weiner
In this work we consider the process of aligning a set of anatomical MRI scans, from a group of subjects, to a single reference MRI scan as accurately as possible. A key requirement of this anatomical normalization is the ability to bring into alignment brain images with different ages and disease states with equal accuracy and precision, enabling the unbiased comparison of different groups. Typical images of such anatomy may vary in terms of tissue shape, location, and contrast. To address this we have developed a highly localized free-form inter-subject registration algorithm driven by normalized mutual information. This employs an efficient multi-image-resolution and multi-deformation-resolution registration procedure. In this paper we examine the behavior of this algorithm when applied to aligning high-resolution MRI of groups of younger, older and atrophied brain anatomy to different target anatomies. To gain an insight into the quality of the spatial normalization, we have examined two properties of the transformations: the residual intensity differences between spatially normalized MRI values, and the spatial discrepancies in transformation estimates between group and reference, derived from transformations between 168 different image pairs. These are examined with respect to the coarseness of the deformation model employed.
Registration II
Similarity measures for nonrigid registration
Peter Rogelj, Stanislav Kovacic
Non-rigid multimodal registration requires a similarity measure with two important properties: locality and multi-modality. Unfortunately all commonly used multimodal similarity measures are inherently global and cannot be directly used to estimate local image properties. We have derived a local similarity measure based on joint entropy, which can operate on extremely small image regions, e.g. individual voxels. Using such small image regions results in higher sensitivity to noise and partial volume voxels, consequently reducing registration speed and accuracy. To cope with these problems we enhance the similarity measure with image segmentation. Image registration and image segmentation are related tasks, as segmentation can be performed by registering an image to a pre-segmented reference image, while on the other hand registration yields better results when the images are pre-segmented. Because of these interdependences it was anticipated that simultaneous application of registration and segmentation should improve registration as well as segmentation results. Several experiments based on synthetic images were performed to test this assumption. The results obtained show that our method can improve the registration accuracy and reduce the required number of registration steps.
f-information measures in medical image registration
A much-used measure for registration of three-dimensional medical images is mutual information, which originates from information theory. However, information theory offers many more measures that may be suitable for image registration. Such measures denote the divergence of the joint grey value distribution of two images from the joint distribution for complete independence of the images. This paper compares the performance of mutual information as a registration measure with that of other information measures. The measures are applied to rigid registration of clinical PET/MR and MR/CT images, for 35 and 41 image pairs, respectively. An accurate gold standard transformation is available for the images, based on implanted markers. Both registration performance and accuracy of the measures are studied. The results indicate that some information measures perform very poorly for the chosen registration problems, yielding many misregistrations, even when using a good starting estimate. Other measures, however, were shown to produce significantly more accurate results than mutual information.
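The family being compared can be sketched generically: every f-information applies a convex function f to the ratio of the joint distribution to the product of its marginals. The sketch below is a minimal illustration for functions with f(0) = 0 (such as mutual information), so that empty joint-histogram bins contribute nothing.

```python
import numpy as np

def f_information(pxy, f):
    """Sum over joint-histogram bins of q * f(p / q), with p the normalized
    joint probability and q = px * py the product of the marginals. Bins
    with p = 0 are dropped, which is exact whenever f(0) = 0."""
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    q = np.outer(px, py)
    m = pxy > 0
    return float((q[m] * f(pxy[m] / q[m])).sum())

# Mutual information is the member of the family with f(r) = r * log(r).
mutual_info = lambda pxy: f_information(pxy, lambda r: r * np.log(r))
```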
Constrained localized warping reduced registration errors due to lesions in functional neuroimages
Perry E. Radau, Piotr J. Slomka, Per Julin, et al.
The constrained, localized warping (CLW) algorithm was developed to minimize the registration errors caused by hypoperfusion lesions. SPECT brain perfusion images from 21 Alzheimer patients and 35 controls were analyzed. CLW automatically determines homologous landmarks on patient and template images. CLW was constrained by anatomy and where lesions were probable. CLW was compared with third-degree polynomial warping (AIR 3.0). Accuracy was assessed by correlation, overlap, and variance. Sixteen lesion types were simulated, each repeated with five images. The errors in defect volume and intensity after registration were estimated by comparing the images resulting from warping transforms calculated when the defects were or were not present. Registration accuracy of normal studies was very similar between the CLW and polynomial warping methods, and showed marked improvement over linear registration. The lesions had minimal effect on the CLW algorithm accuracy, with small errors in volume (> -4%) and intensity (< +2%). The accuracy improvement compared with not warping was nearly constant regardless of defect: +1.5% overlap and +0.001 correlation. Polynomial warping caused larger errors in defect volume (< -10%) and intensity (> +2.5%) for most defects. CLW is recommended because it caused small errors in defect estimation and improved the registration accuracy in all cases.
Structural outlier detection for automatic landmark extraction
Julian Mattes, Jacques Demongeot
The aim of this paper is to introduce a structural dissimilarity measure which allows the detection of outliers in automatically extracted landmark pairs in two images. In previous work, to extract landmarks automatically, candidate points have been defined using invariance criteria coming from differential geometry, such as maximum curvature; or they are statistical entities such as gravity centers of confiners, where the confiners are defined as the connected components of the level sets. After a first estimation of the semi-rigid transformation (representing translation, rotation, and scaling) relating the candidate point sets, outliers are detected by applying the Euclidean distance between corresponding points. However, this approach does not allow us to distinguish between real deformations and outliers coming from noise or additional features in one of the images. In this paper, we define a structural dissimilarity measure which we use to decide if two associated candidate points come from two corresponding confiners. We select landmark pairs with a dissimilarity value smaller than a given threshold and we calculate the affine transformation that best relates all selected landmark pairs. We evaluated our technique on successive slices of an MRI image of the human brain and show that we obtain a significantly sharper error reduction using the new dissimilarity measure instead of the Euclidean distance for outlier rejection.
Point-based registration under a similarity transform
Jay B. West, J. Michael Fitzpatrick, Philippe G. Batchelor
This paper investigates the problem of point-based registration under a similarity transformation. This is a transformation that consists of rotation, translation, and isotropic scaling. There are many applications for registration under a similarity transform. First, the medical applications that usually use rigid-body registration may in some cases be improved by using a scale factor to account for particular types of distortion (for example, drift in gradient strength in MR image volumes). Second, similarity transforms are often used in biometrics to analyze and compare different sets of data. It was shown by Gower in 1971 that the choice of scale factor is independent of the choice of rotation and translation. We use a well-known solution for the rotation and translation parts of the transformation, and concentrate on the problem of choosing the scale factor. We examine three different methods of scaling, one of which is a novel maximum likelihood approach. We derive the target registration error and show the bias for each method. We introduce two different models of fiducial localization error, and we show that for one error model, Gower's method of scaling to minimize the sum of squared distances between corresponding points is also the maximum likelihood solution. Under the other error model, however, maximum likelihood leads to a new method of scaling.
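For reference, here is the standard closed-form solution for the similarity fit (rotation, isotropic scale, translation) of paired points, using Gower's least-squares scale; this is the classical Procrustes-style estimator, shown as an illustration rather than the paper's maximum likelihood variants.

```python
import numpy as np

def similarity_fit(src, dst):
    """Least-squares R (rotation), s (scale), t (translation) such that
    dst ≈ s * R @ src + t, for matched point sets of shape (n, d)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(Y.T @ X)
    d = np.sign(np.linalg.det(U @ Vt))             # forbid reflections
    D = np.ones(src.shape[1]); D[-1] = d
    R = (U * D) @ Vt                               # U @ diag(D) @ Vt
    s = (S * D).sum() / (X ** 2).sum()             # Gower's scale choice
    t = mu_d - s * R @ mu_s
    return R, s, t
```

Gower's observation that the optimal scale decouples from the rotation and translation is visible here: s is computed after R without changing it.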
Automated registration of 3D x-ray angiography images to magnetic resonance images
Erwan Kerrien, Olivier Levrier, Rene Anxionnat, et al.
An automated algorithm for frameless registration of intra-cranial 3D X-ray angiograms (3DXA) to Magnetic Resonance (MR) images is described and evaluated. The registration procedure starts with the manual designation of a pre-defined anatomical point in both modalities. Then, the registration is performed through an iterative process that alternately estimates the rotation and translation using an original correlation optimization scheme. The evaluation procedure involved comparisons with both manual (11 cases) and stereotactic frame based (9 cases) registration. The results show that manual registration is not a viable reference method for registering such volumes, whereas a stereotactic frame (or equivalent means) is acceptable for validation purposes. A maximum error of 4 mm was measured for our automated algorithm while variations of up to 5 mm were considered on the initial point location. Convergence time was below 1 minute while an average time of 30 minutes was required to perform manual registration. This validation procedure demonstrates good precision for the automated algorithm when compared to stereotactic frame based matching. Such an algorithm could make intra-cranial pathology assessment more reliable, enable frameless radiotherapy planning of AVMs in 3D, ease biopsy planning in neurosurgery, or be helpful for educational purposes.
Multiresolution parameterization of meshes for improved surface-based registration
Sylvain Jaume, Matthieu Ferrant, Simon Keith Warfield, et al.
Common problems in medical image analysis involve surface-based registration. The applications range from atlas matching to tracking an object's boundary in an image sequence, or segmenting anatomical structures out of images. Most proposed solutions are based on deformable surface algorithms. The main problem of such methods is that the local accuracy of the matching must often be traded off against global smoothness of the surface in order to reach global convergence of the deformation process. Our contribution is to first build a Multi-Resolution (M-R) surface from a reference segmented image, and then match this surface onto the target image in an M-R fashion using a deformable surface-like algorithm. As we proceed from lower to higher resolution, the smoothing effect of the deformable surface is more and more localized, and the surface gets closer and closer to the target boundary. We present initial results of our algorithm for atlas registration onto brain MRI showing improved convergence and accuracy over classical deformable surface methods.
Computer-Aided Diagnosis I
Integer wavelet compression guided by a computer-aided detection system in mammography
Shih-Chung Benedict Lo, Erini Makariou M.D., Andrzej Delegacz, et al.
Since an image data compression technique is usually associated with a low-pass filter, the unsharpness of calcifications and edges is of clinical concern in mammography. The same effect may turn film defects into calcification-like spots and could produce false-positive detection by the radiologist. In this study, we employed a highly sensitive calcification detection system to guide an S+P integer wavelet compression, so that the data fidelity of calcifications or unknown spots is fully preserved. The prediction component of the S+P decomposition is based on Daubechies' D8. Our results indicated that the modified CAD program detected an average of 1,193 potential calcifications on CC view mammograms and an average of 948 potential calcifications on MLO view mammograms, respectively. Compressed data rates between 0.1 and 0.43 bit/pixel were studied. The compressed images were evaluated by subjective comparison studies. The results indicated that no difference could be observed between the original and the 0.43 bit rate decompressed images. The radiologist identified 20% of the compressed images at 0.1 bit rate as suffering from minor blurry artifacts and 6% of the compressed images as possessing greater edge sharpness. Without a lossless compression for microcalcifications, the radiologist identified 20% of the microcalcifications on the compressed mammograms at 0.1 bit rate as suffering from minor compression artifacts.
Recognition of lesion correspondence on two mammographic views: a new method of false-positive reduction for computerized mass detection
We used the correspondence of detected structures on two views of the same breast for false-positive (FP) reduction in computerized detection of mammographic masses. For each initially detected object on one view, we considered all possible pairings with objects on the other view that fell within a radial band defined by the nipple-to-object distances. We designed a 'correspondence classifier' to classify these pairs as either the same mass (a TP-TP pair) or a mismatch (a TP-FP, FP-TP or FP-FP pair). For each pair, similarity measures of morphological and texture features were derived and used as input features in the correspondence classifier. Two-view mammograms from 94 cases were used as a preliminary data set. Initial detection provided 6.3 FPs/image at 96% sensitivity. Further FP reduction in single view resulted in 1.9 FPs/image at 80% sensitivity and 1.1 FPs/image at 70% sensitivity. By combining single-view detection with the correspondence classifier, detection accuracy improved to 1.5 FPs/image at 80% sensitivity and 0.7 FPs/image at 70% sensitivity. Our preliminary results indicate that the correspondence of geometric, morphological, and textural features of a mass on two different views provides valuable additional information for reducing FPs.
Computer-aided diagnosis of lesions on multimodality images of the breast
Maryellen Lissak Giger, Zhimin Huo, Karla Horsch, et al.
We have developed computerized methods for the analysis of lesions that combine results from different imaging modalities, in this case digitized mammograms and sonograms of the breast, for distinguishing between malignant and benign lesions. The computerized classification method, applied here to mass lesions seen on both digitized mammograms and sonograms, includes: (1) automatic lesion extraction, (2) automated feature extraction, and (3) automatic classification. The results for both modalities are then merged into an estimate of the likelihood of malignancy. For the mammograms, computer-extracted lesion features include degree of spiculation, margin sharpness, lesion density, and lesion texture. For the ultrasound images, lesion features include margin definition, texture, shape, and posterior acoustic attenuation. Malignant and benign lesions are better distinguished when features from both mammograms and ultrasound images are combined.
Analysis of temporal change of mammographic features for computer-aided characterization of malignant and benign masses
A new classification scheme was developed to classify mammographic masses as malignant or benign by using interval change information. The masses on both the current and the prior mammograms were automatically segmented using an active contour method. From each mass, 20 run length statistics (RLS) texture features, 3 spiculation features, and mass size were extracted. Additionally, 20 difference RLS features were obtained by subtracting the prior RLS features from the corresponding current RLS features. The feature space consisted of the current RLS features, the difference RLS features, the current and prior spiculation features, and the current and prior mass sizes. Stepwise feature selection and linear discriminant analysis (LDA) classification were used to select and merge the most useful features. A leave-one-case-out resampling scheme was applied to train and test the classifier using 140 temporal image pairs (85 malignant, 55 benign) obtained from 57 biopsy-proven masses (33 malignant, 24 benign) in 56 patients. An average of 10 features were selected from the 56 training subsets: 4 difference RLS features, 4 RLS features and 1 spiculation feature from the current image, and 1 spiculation feature from the prior image were most often chosen. The classifier achieved an average training Az of 0.92 and a test Az of 0.88. For comparison, a classifier was trained and tested using features extracted from the 120 current single images. This classifier achieved an average training Az of 0.90 and a test Az of 0.82. The information on the prior image significantly (p=0.01) improved the accuracy for classification of the masses.
Computer-assisted diagnosis of chest radiographs for pneumoconioses
Peter Soliz, Marios S. Pattichis, Janakiramanan Ramachandran, et al.
A Computer-assisted Chest Radiograph Reader System (CARRS) was developed for the detection of pathological features in lungs presenting with pneumoconioses. CARRS applies novel techniques in automatic image segmentation, incorporates neural network-based pattern classification, and integrates these into a graphical user interface. The three aspects of CARRS are described: chest radiograph digitization and display, rib and parenchyma characterization, and classification. The quantization of the chest radiograph film was optimized to maximize the information content of the digital images. Entropy was used as the benchmark for optimizing the quantization. From the rib-segmented images, regions of interest were selected by the pulmonologist. A feature vector composed of image characteristics such as entropy, textural statistics, etc. was calculated. A laterally primed adaptive resonance theory (LAPART) neural network was used as the classifier. LAPART classification accuracy averaged 86.8%. Truth was determined by the two pulmonologists. CARRS has demonstrated potential as a screening device. Today, 90% or more of the chest radiographs seen by the pulmonologist are normal. A computer-based system that can screen 50% or more of the chest radiographs represents a large savings in time and dollars.
Quantitative MR assessment of structural changes in white matter of children treated for ALL
Wilburn E. Reddick, John O. Glass, Raymond K. Mulhern
Our research builds on the hypothesis that white matter damage resulting from therapy spans a continuum of severity that can be reliably probed using non-invasive MR technology. This project focuses on children treated for ALL with a regimen containing seven courses of high-dose methotrexate (HDMTX), which is known to cause leukoencephalopathy. Axial FLAIR, T1-, T2-, and PD-weighted images were acquired, registered and then analyzed with a hybrid neural network segmentation algorithm to identify normal brain parenchyma and leukoencephalopathy. Quantitative T1 and T2 maps were also analyzed at the level of the basal ganglia and the centrum semiovale. The segmented images were used as masks to identify regions of normal-appearing white matter (NAWM) and leukoencephalopathy in the quantitative T1 and T2 maps. We assessed the longitudinal changes in volume, T1 and T2 in NAWM and leukoencephalopathy for 42 patients. The segmentation analysis revealed that 69% of patients had leukoencephalopathy after receiving seven courses of HDMTX. The leukoencephalopathy affected approximately 17% of the patients' white matter volume on average (range 2% - 38%). Relaxation rates in the NAWM were not significantly changed between the 1st and 7th courses. Regions of leukoencephalopathy exhibited a 13% elevation in T1 and a 37% elevation in T2 relaxation rates.
Computer-Aided Diagnosis II
Computerized lung nodule detection on thoracic CT images: combined rule-based and statistical classifier for false-positive reduction
We are developing a computer-aided diagnosis (CAD) system for lung nodule detection on thoracic helical computed tomography (CT) images. In the first stage of this CAD system, lung regions are identified and suspicious structures are segmented. These structures may include true lung nodules or normal anatomy, mainly vascular structures. We have designed rule-based classifiers to distinguish nodules and normal structures using 2D and 3D features. After rule-based classification, linear discriminant analysis (LDA) is used to further reduce the number of false positive (FP) objects. We have performed a preliminary study using CT images from 17 patients with 31 lung nodules. When only LDA classification was applied to the segmented objects, the sensitivity was 84% (26/31) with 2.53 (1549/612) FP objects per slice. When the LDA followed the rule-based classifier, the number of FP objects per slice decreased to 1.75 (1072/612) at the same sensitivity. These preliminary results demonstrate the feasibility of our approach for nodule detection and FP reduction on CT images. The inclusion of rule-based classification leads to an improvement in detection accuracy for the CAD system.
Patient-specific models for lung nodule detection and surveillance in CT images
Matthew S. Brown, Michael F. McNitt-Gray, Jonathan G. Goldin, et al.
The purpose of this work is to automatically detect lung nodules in CT images, and then relocalize them in follow-up scans so that changes in size or morphology can be measured. We propose a new method that uses a patient's baseline image data to assist in the segmentation of subsequent images. The system uses a generic, a priori model to analyze the baseline scan of a previously unseen patient, and then a user confirms or rejects nodule candidates. For analysis of follow-up scans of that particular patient, a patient-specific model is derived that narrows the search in feature space for previously labeled nodules based on the feature values measured on the baseline scan. Also, some previously identified false positives can be automatically relocalized and eliminated. In the baseline scans of eleven patients, a radiologist identified a total of 14 nodules. All 14 nodules were detected automatically by the system with an average of 11 false positives per case. In follow-up scans, using patient-specific models, 12 of the 14 nodules were relocalized. One previously unseen nodule was detected by the system, with 9 false positives per follow-up case.
Improvement of method for computer-assisted detection of pulmonary nodules in CT of the chest
Martin Fiebich, Dag Wormanns, Walter Heindel
Computed tomography of the chest can be used as a screening method for lung cancer in a high-risk population. However, the detection of lung nodules is a difficult and time-consuming task for radiologists. The developed technique should improve the sensitivity of the detection of lung nodules without showing too many false positive nodules. In the first step, the CAD technique for nodule detection in CT examinations of the lung eliminates all air outside the patient; then soft tissue and bony structures are removed. In the remaining lung fields a three-dimensional region detection is performed and rule-based analysis is used to detect possible lung nodules. In a study intended to evaluate the feasibility of screening for lung cancer, about 2000 thoracic examinations were performed. The CAD system was used for reporting in a consecutive subset (n=100) of those studies. Computation time is about 5 min on a Silicon Graphics O2 workstation. Of the total number of nodules >= 5 mm found (n=68), 26 were found by the CAD scheme and 59 were detected by the radiologist. The CAD workstation helped the radiologist to identify 9 additional nodules. The false positive rate was less than 0.1 per image. The nodules missed by the CAD scheme were analyzed and the reasons for failure categorized as: nodule density too low, nodule connected to the chest wall, segmentation error, and misclassification. Possible solutions for those problems are presented. We have developed a technique which increased the radiologist's detection rate for pulmonary nodules in CT exams of the chest. Correction of the CAD scheme using the analysis of the missed nodules will further enhance the performance of this method.
Initial development of a computer-aided diagnosis tool for solitary pulmonary nodules
This paper describes the development of a computer-aided diagnosis (CAD) tool for solitary pulmonary nodules. This CAD tool is built upon physically meaningful features that were selected because of their relevance to shape and texture. These features included a modified version of the Hotelling statistic (HS), a channelized HS, three measures of fractal properties, two measures of spicularity, and three manually measured shape features. These features were measured from a difficult database consisting of 237 regions of interest (ROIs) extracted from digitized chest radiographs. The center of each 256x256 pixel ROI contained a suspicious lesion which was sent to follow-up by a radiologist and whose nature was later clinically determined. Linear discriminant analysis (LDA) was used to search the feature space via sequential forward search using percentage correct as the performance metric. An optimized feature subset, selected for the highest accuracy, was then fed into a three layer artificial neural network (ANN). The ANN's performance was assessed by receiver operating characteristic (ROC) analysis. A leave-one-out testing/training methodology was employed for the ROC analysis. The performance of this system is competitive with that of three radiologists on the same database.
Image analysis of pulmonary nodules using micro CT
Noboru Niki, Yoshiki Kawata, Masashi Fujii, et al.
We are developing a micro-computed tomography (micro CT) system for imaging pulmonary nodules. The purpose is to enhance physician performance in assessing the micro-architecture of the nodule for classification between malignant and benign nodules. The basic components of the micro CT system consist of a microfocus X-ray source, a specimen manipulator, and an image intensifier detector coupled to a charge-coupled device (CCD) camera. 3D image reconstruction was performed slice by slice. A standard fan-beam convolution and backprojection algorithm was used to reconstruct the center plane intersecting the X-ray source. The preprocessing of the 3D image reconstruction included the correction of the geometrical distortions and the shading artifact introduced by the image intensifier. The main advantage of the system is that it obtains a high spatial resolution, which ranges between b micrometers and 25 micrometers. In this work we report on preliminary studies performed with the micro CT for imaging resected tissues of normal and abnormal lung. Experimental results reveal the micro-architecture of lung tissues, such as the alveolar wall, septal wall of the pulmonary lobule, and bronchiole. From the results, the micro CT system is expected to have interesting potential for highly confident differential diagnosis.
Computer-Aided Diagnosis III
Improved detection of simulated thrombus by layer decomposition of coronary angiograms
Robert A. Close, Craig A. Morioka, Craig K. Abbey, et al.
Layer decomposition is a promising technique for background removal and noise reduction in coronary angiograms. Our layer decomposition algorithm decomposes a projection image sequence into multiple 2D layers undergoing translation, rotation, and scaling. We apply this layer decomposition algorithm to simulated angiograms containing stenotic vessels with and without thrombus. We constructed 85 pairs of simulated angiographic sequences by embedding each of 5 simulated vessels (with and without thrombus) in 17 clinical angiograms. We computed the response of a matched eye filter applied to (1) one raw image of each sequence at the time of minimal motion (RAW), (2) a layered digital subtraction angiography (LDSA) image of the same frame, and (3) the time-averaged vessel layer image (LAYER). We find that on average the LAYER and LDSA images have higher signal-to-noise ratio and larger area under the receiver-operator characteristic curve (AUC) than the raw images.
Automatic quantitative analysis of cardiac MR perfusion images
Marcel M. Breeuwer, Luuk J. Spreeuwers, Marcel J. Quist
Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the myocardium (the heart muscle) from MR images, using contrast-enhanced ECG-triggered MRI. We have developed an automatic quantitative analysis method, which works as follows. First, image registration is used to compensate for translation and rotation of the myocardium over time. Next, the boundaries of the myocardium are detected and for each position within the myocardium a time-intensity profile is constructed. The time interval during which the contrast agent passes for the first time through the left ventricle and the myocardium is detected, and various parameters are measured from the time-intensity profiles in this interval. The measured parameters are visualized as color overlays on the original images. Analysis results are stored, so that they can later be compared for different stress levels of the heart. The method is described in detail in this paper and preliminary validation results are presented.
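To make the parameter-measurement step concrete, here is a small sketch of typical first-pass perfusion parameters extracted from a time-intensity profile (peak enhancement, time to peak, maximum upslope); the specific parameter set and the baseline convention are illustrative assumptions, not necessarily the quantities measured by the authors.

```python
import numpy as np

def first_pass_parameters(t, intensity, n_baseline=3):
    """Baseline-corrected peak, time to peak, and maximum upslope of a
    myocardial time-intensity profile during the first pass."""
    t = np.asarray(t, dtype=float)
    curve = np.asarray(intensity, dtype=float) - np.mean(intensity[:n_baseline])
    i_peak = int(np.argmax(curve))
    upslope = np.gradient(curve, t)[: i_peak + 1].max() if i_peak > 0 else 0.0
    return {"peak": curve[i_peak],
            "time_to_peak": t[i_peak] - t[0],
            "max_upslope": float(upslope)}
```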
Four-dimensional coronary morphology and computational hemodynamics
Andreas Wahle, Steven C. Mitchell, Sharan D. Ramaswamy, et al.
Conventional reconstructions from intravascular ultrasound (IVUS) stack the frames as acquired during the pullback of the catheter to form a straight three-dimensional volume, thus neglecting the vessel curvature and merging images from different heart phases. We are developing a comprehensive system for fusion of the IVUS data with the pullback path as determined from x-ray angiography, to create a geometrically accurate 4-D (3-D plus time) model of the coronary vasculature as a basis for computational hemodynamics. The overall goal of our work is to correlate shear stress with plaque thickness. The IVUS data are obtained in a single pullback using an automated pullback device; the frames are afterwards assigned to their respective heart phases based upon the ECG signal. A set of 3-D models is reconstructed by fusion of IVUS and angiographic data corresponding to the same ECG-gated heart phase; methods of computational fluid dynamics (CFD) are applied to obtain important hemodynamic data. Combining these models yields the final 4-D reconstruction. Visualization is performed using the platform-independent VRML standard for user-friendly manipulation of the scene. An extension for virtual angioscopy allows easy assessment of the vessel features within their local context. Validation was successfully performed both in-vitro and in-vivo.
Toward automated bone fracture classification
Michael W. Funk, Essam A. El-Kwae, James F. Kellam
A model is developed for the automated classification of bone fractures via image analysis techniques. The model is based on the widely used fracture classification system developed by the M.E. Mueller Foundation of Bern, Switzerland. The system describes a hierarchy of fractures, six layers deep. It also describes a series of questions to be asked about a given fracture, in which each question answered classifies the fracture into more descriptive subcategories. The model developed considers fracture classification as a tree traversal problem, in which the lower layers of the tree represent more precise categorizations. At each of the tree's nodes, algorithms specific to that subcategory determine which of the child nodes will be visited. Digital image processing techniques are most readily applicable to the largest number of nodes. Thus, the initial algorithms in this work are based on image processing techniques. The main contributions of this paper include a model for automated bone fracture classification and the algorithms for classification of a subset of long bone fractures. This work aims to provide a solid model and initial results that will serve as the basis for further research into this challenging and potentially rewarding field.
Multigenerational analysis and visualization of large 3D vascular images
Methods exist for extracting vascular tree structures from very large 3D medical images, such as those arising from micro-CT scanners. Techniques have not been well addressed, however, for characterizing the detailed statistical structure of the tree or for interacting with such data. In this paper, we present our ongoing efforts on the detailed generational analysis of large 3D vascular trees. Our previously proposed system discussed the initial image analysis and tree representation of 3D vascular images. Our current work improves the performance of the image analysis process and gives new means for evaluating the quantitative information and geometrical characteristics of the vasculature. Furthermore, we have made it more feasible to perform multigenerational analysis and topology manipulation interactively by incorporating visualization tools. Our current implementation of the image processing and analysis methods generates varied details of the branching geometry at generation, inter-branch, and intra-branch levels. Variations of vessel surfaces, blood volumes, cross-sectional areas, and branch lengths in a whole tree are studied. The visualization tools provide functionality for displaying slices, projections of the 3D images, and surface renderings of the segmented trees. Also, a tree-editing capability permits a user to interactively manipulate the vascular topology, such as modification of extraneous (generally peripheral, artifactual) branches and generations, and to update the statistical details of the tree in real time. We present results for 3D micro-CT rat heart images.
Restoration/Deblurring
Pseudo-biased coherent diffusion for robust real-time ultrasound speckle reduction
Khaled Z. Abd-Elmoniem, Abou-Bakr M. Youssef, Yasser M. Kadah
We propose a novel technique for ultrasound speckle reduction based on iterative solutions to the coherent diffusion equation with the speckled image considered as the initial heat distribution. According to the extent of speckle, the model changes progressively from isotropic diffusion through anisotropic coherent diffusion to mean curvature motion. This structure maximally low-pass filters those parts of the image corresponding to fully-formed speckle, while preserving information associated with resolved object structures. The distance measure used to assess the deviation between images is embedded within the diffusivity tensor and is utilized as an intrinsic stopping criterion that ends the diffusion process completely in all directions when the deviation between the original and the filtered image exceeds the speckle limit. This model is termed pseudo-biased diffusion due to this unique formulation. Hence, there is no need for specifying the number of iterations in advance as with previous methods. Moreover, the steady state solution does not converge to the trivial single gray level solution, but rather to an image that is close in structure to the original but with speckle noise substantially reduced. Efficient discretization schemes allow large time steps to be used in obtaining the solution to achieve real-time processing.
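A heavily simplified sketch of the idea (scalar Perona-Malik-style diffusion rather than the paper's tensor-driven coherent model) with an intrinsic stopping criterion based on deviation from the original image; kappa, the time step, and the deviation limit are assumptions.

```python
import numpy as np

def diffuse_with_stop(image, kappa=10.0, dt=0.15, max_rmse=5.0, max_iter=200):
    """Iterate 2D nonlinear diffusion; halt once the RMS deviation between the
    filtered and original images exceeds max_rmse (the 'speckle limit' idea),
    instead of fixing the number of iterations in advance."""
    u = image.astype(float).copy()
    c = lambda g: np.exp(-(g / kappa) ** 2)        # edge-stopping diffusivity
    for _ in range(max_iter):
        dn = np.roll(u, -1, axis=0) - u            # differences to 4 neighbors
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (c(dn)*dn + c(ds)*ds + c(de)*de + c(dw)*dw)
        if np.sqrt(np.mean((u - image) ** 2)) > max_rmse:
            break
    return u
```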
Parallel image restoration with spatially variant point spread function: description and first clinical results
Andrew Shearer, Gerard Gorman, Triona O'Doherty, et al.
In this paper we present a parallel code which performs iterative image deconvolution using either a spatially-invariant point spread function (SI-PSF) or a spatially-variant point spread function (SV-PSF). We describe the basic algorithm as well as the parallel implementation. Applications and results in the area of medical x-ray imaging are discussed.
Spherical navigator echoes for full 3D rigid body motion measurement in MRI
Edward B. Welch, Armando Manduca, Roger Grimm, et al.
We are developing a 3-D spherical navigator (SNAV) echo technique for MRI that can measure rigid-body motion in all six degrees of freedom simultaneously, in a single echo, by sampling a spherical shell in k-space. MRI pulse sequences were developed to acquire varying amounts of data on such a shell. 3-D rotations of an imaged object simply rotate the data on this shell and can be detected by registration of magnitude values. 3-D translations add phase shifts to the data on the shell and can be detected with a weighted least-squares fit to the phase differences at corresponding points. Data collected with a computer-controlled motion phantom undergoing known rotational and translational motions were used to evaluate the technique. The accuracy and precision of the technique depend on the sampling density, with roughly 1000 sample points necessary for accurate detection to within the error limits of the motion phantom. This number of samples can be captured in a single SNAV echo with a 3-D helical spiral trajectory. Motion detection in MRI with spherical navigator echoes is thus feasible and practical. Accurate motion measurements about all three axes, suitable for retrospective or prospective correction, can be obtained in a single pulse sequence.
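To make the translation step concrete: a translation t multiplies the k-space signal by a linear phase, so the phase difference at shell point k is (up to the Fourier sign convention) 2π k·t, and t can be recovered by a weighted linear least-squares fit. Array shapes, the weighting scheme, and the wrapping step in this sketch are illustrative assumptions.

```python
import numpy as np

def estimate_translation(k_pts, phase_ref, phase_mov, weights):
    """Weighted least-squares estimate of a 3D translation from phase
    differences on a spherical k-space shell.  k_pts is (N, 3); the
    weights could be, e.g., the magnitude values at each point."""
    dphi = np.angle(np.exp(1j * (phase_mov - phase_ref)))  # wrap to (-pi, pi]
    w = np.sqrt(weights)
    A = 2.0 * np.pi * k_pts * w[:, None]    # weighted design matrix
    b = dphi * w
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t                                 # translation, in units of 1/k
```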
Motion detection in hybrid PET/SPECT imaging based on the spatial cross-correlation of temporal sinograms
Claire J. M. Pellot-Barakat, Marija Ivanovic, Kjell Erlandsson, et al.
Patient motion in gamma camera coincidence imaging results in severe reconstruction artifacts. A protocol is proposed to automatically detect and correct motion in SPECT coincidence studies. The method is based on fractionating the acquisition into three full temporal sets of coincidence data. For each set and camera position, partial sinograms are calculated by rebinning events acquired at the same rotation. Partial sinograms from successive angular positions, as well as from successive sets, are cross-correlated along their common range of projections. Decreases in the cross-correlation values indicate that data from two successive rotations or sets became inconsistent, and permit localization of the motion that occurred during the study. Events acquired during motion are eliminated, while pre- and post-motion events are recombined into sets of consistent rebinned data that are reconstructed independently and fused to provide a motion-artifact-free reconstructed image. The methods were tested using a wide range of experimental motion data obtained from cylindrical phantoms containing spheres filled with Fluorine-18. Single arbitrary motions that occurred during the study could be detected and corrected in all phantom studies when the total number of coincident events acquired was greater than 5×10^6 for lesion-to-background ratios greater than 5.
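The detection step reduces to flagging drops in the normalized cross-correlation between successive partial sinograms; a minimal sketch follows, with the threshold value being an illustrative assumption rather than the study's.

```python
import numpy as np

def motion_breakpoints(sinograms, threshold=0.9):
    """Flag motion between successive partial sinograms when the
    normalized cross-correlation over their common projections drops.
    `sinograms` is a list of 2D arrays of identical shape."""
    flags = []
    for a, b in zip(sinograms[:-1], sinograms[1:]):
        a0, b0 = a - a.mean(), b - b.mean()
        ncc = (a0 * b0).sum() / (np.linalg.norm(a0) * np.linalg.norm(b0))
        flags.append(ncc < threshold)   # True -> inconsistent data: motion
    return flags
```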
Construction and simplification of bone density models
Jianhua Yao, Russell H. Taylor
This paper presents a hierarchical tetrahedral mesh model to represent a bone density atlas. We propose and implement an efficient and automatic method to construct hierarchical tetrahedral meshes from CT data sets of bony anatomy. The tetrahedral mesh is built based on contour tiling between CT slices and is then smoothed using an enhanced Laplacian algorithm. We approximate bone density variations by means of continuous density functions, written as smooth Bernstein polynomial splines expressed in terms of barycentric coordinates associated with each tetrahedron. We further perform tetrahedral mesh simplification by collapsing tetrahedra and build a hierarchical structure with multiple resolutions. Both the shape and the density error bounds are preserved during the simplification. Furthermore, a deformable prior model is computed from a collection of training models; a point distribution model is used to compute the variability of the prior model. Both the shape information and the density statistics are parameterized in the prior model. Our model demonstrates good accuracy, high storage efficiency, and processing efficiency. We also compute digitally reconstructed radiographs from our model and use them to evaluate its accuracy and efficiency. Our method has been tested on femur and pelvis data sets. This research is part of our effort to build density atlases for bony anatomies and apply them in deformable density-based registrations.
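For intuition, a Bernstein polynomial density over one tetrahedron is a weighted sum of multinomial basis functions of the barycentric coordinates. The sketch below evaluates such a density; the coefficient dictionary and degree are hypothetical illustrations.

```python
import numpy as np
from math import factorial
from itertools import product

def bernstein_density(lam, coeff, degree=2):
    """Evaluate a Bernstein polynomial density at barycentric
    coordinates lam = (l1, l2, l3, l4) inside one tetrahedron.
    `coeff` maps multi-indices (i, j, k, l) with i+j+k+l == degree
    to control densities (hypothetical calibration values)."""
    val = 0.0
    for idx in product(range(degree + 1), repeat=4):
        if sum(idx) != degree:
            continue
        multinom = factorial(degree) / np.prod([factorial(i) for i in idx])
        val += coeff[idx] * multinom * np.prod(np.power(lam, idx))
    return val
```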
Defect interpolation in digital radiography: how object-oriented transform coding helps
Til Aach, Volker H. Metzler
Today's solid-state flat-panel radiography detectors provide images which contain artifacts caused by lines, columns, and clusters of inactive pixels. If not too large, such defects can be filled by interpolation algorithms, which usually work in the spatial domain. This paper describes an alternative spectral-domain approach to defect interpolation. The acquired radiograph is modeled as the undistorted image multiplied by a known binary defect window. The window effect is then removed by deconvolving the window spectrum from the spectrum of the observed, distorted radiograph. The basic ingredient of our interpolation algorithm is an earlier approach to block transform coding of arbitrarily shaped image segments, which extrapolates the segment-internal intensities over a block into which the segment is embedded. For defect interpolation, the arbitrarily shaped segment is formed by a local image region with defects, thus turning extrapolation into defect interpolation. Our algorithm reconstructs both oriented structures and noise-like information in a natural-looking manner, even for large defects. Moreover, our concept can also be applied to non-binary defect windows, e.g. for gain correction.
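The paper's deconvolution operates on spectra of arbitrarily shaped segments; a classical relative of this idea, Gerchberg-Papoulis-style alternating projections between known pixels and a band-limited spectrum, conveys the flavor and is sketched here. The low-pass image model, `keep_frac`, and the iteration count are assumptions, and this is not the authors' exact algorithm.

```python
import numpy as np

def interpolate_defects(img, good_mask, keep_frac=0.25, n_iter=50):
    """Fill defect pixels by alternating projections: re-impose measured
    pixels in the spatial domain, keep only low frequencies in the
    Fourier domain.  `good_mask` is a boolean array marking intact pixels."""
    h, w = img.shape
    ky = np.fft.fftfreq(h)[:, None]
    kx = np.fft.fftfreq(w)[None, :]
    lowpass = (np.abs(ky) < keep_frac / 2) & (np.abs(kx) < keep_frac / 2)
    est = img * good_mask
    for _ in range(n_iter):
        spec = np.fft.fft2(est) * lowpass        # band-limit the estimate
        est = np.real(np.fft.ifft2(spec))
        est[good_mask] = img[good_mask]          # re-impose known data
    return est
```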
Poster Session: Tomographic Reconstruction, Statistical Methods, Shape, Data Retrieval, Motion, Multiresolution, Preprocessing, Pattern Recognition and Coding
Evaluation of reconstruction algorithms for triple-head coincidence imaging by hot sphere detectability
Stefaan Vandenberghe, Yves D'Asseler, Chris Matthews, et al.
Simulations and measurements of triple-head PET acquisitions of a hot-sphere phantom were performed to evaluate the performance of two different reconstruction algorithms (projection-based ML-EM and list-mode ML-EM) for triple-head gamma camera coincidence systems. A geometric simulator was used that assumes a detector with 100 percent detection efficiency and detection of true coincidences only; the simulated resolution matched that of the camera system. The measurements were performed with a triple-headed gamma camera. Simulated and measured data were stored in list-mode format, which allowed the flexibility to apply different reconstruction algorithms. Hot-sphere detectability was taken as the performance measure because tumor imaging is the most important clinical application of gamma camera coincidence systems. Detectability was evaluated by calculating the recovered contrast and the contrast-to-noise ratio. Results show a slightly improved contrast, but a clearly higher contrast-to-noise ratio, for list-mode reconstruction.
Nonlinear compensation of distortions introduced by the presence of metal objects in magnetic resonance imaging
Francis M. Bui, Ken Bott, Martin P. Mintchev
Magnetic resonance imaging exhibits technical characteristics that make it invaluable in medical diagnosis. However, its full potential has been severely limited by the presence of imaging artifacts. These artifacts cause distortions in the obtained images, which then no longer faithfully represent the object being imaged. In this work, we study the particular type of artifact known as the magnetic susceptibility difference artifact, caused by the presence of a ferromagnetic source. Previously, we quantified these artifacts from three different perspectives: (1) pixel displacement, (2) blurring, and (3) non-linearity. However, the non-linear distortions were quantified using a single-parameter non-linear function. In the present study, a multiparameter non-linear function based on a polynomial series is used in the optimization. The results show that this modification allows for more flexibility and, as a result, improves the optimization algorithm.
Fast automatic correction of motion artifacts in shoulder MRI
Armando Manduca, Kiaran P. McGee, Edward B. Welch, et al.
The ability to correct certain types of MR images for motion artifacts from the raw data alone, by iterative optimization of an image quality measure, has recently been demonstrated. In the first study on a large data set of clinical images, we showed that such an autocorrection technique significantly improved the quality of clinical rotator cuff images, and performed almost as well as navigator echo correction while never degrading an image. One major criticism of such techniques is that they are computationally intensive, and reports of the processing time required have ranged from a few minutes to tens of minutes per slice. In this paper we describe a variety of improvements to our algorithm, as well as approaches to correct sets of adjacent slices efficiently. The resulting algorithm is able to correct 256x256x20 clinical shoulder data sets for motion at an effective rate of 1 second/image on a standard commercial workstation. Future improvements in processor speeds and/or the use of specialized hardware will translate directly into corresponding reductions in this calculation time.
Development and evaluation of minimal-scan reconstruction algorithms for diffraction tomography
Diffraction tomography (DT) is a tomographic inversion technique that reconstructs the spatially variant refractive index distribution of a scattering object, and it can be viewed as a generalization of conventional X-ray computed tomography (CT) where X-rays have been replaced with a diffracting acoustical or electromagnetic wavefield. It is widely believed that measurements from a full angular range of 2π are generally required to exactly reconstruct a complex-valued refractive index distribution. However, we have recently revealed that one needs measurements only over the angular range zero to 270 degrees to perform an exact reconstruction, and we developed minimal-scan filtered backpropagation (MS-FBPP) algorithms to achieve this. In this work, we review and numerically implement a recently developed family of minimal-scan reconstruction algorithms for DT that effectively operate by transforming the minimal-scan DT reconstruction problem into a conventional full-scan X-ray CT reconstruction problem.
Cone-beam image reconstruction from equiangular sampling using spherical harmonics
Katsuyuki Taguchi, Gengsheng Larry Zeng, Grant T. Gullberg
None of the exact cone-beam reconstruction algorithms for the so-called long-object problem uses equi-angular sampling; they all use equi-spatial sampling. However, cylindrical detectors (equi-angular sampling in xy and equi-spatial sampling in z) have an advantage in their compact design. As a step toward the long-object problem with equi-angular sampling, the purpose of this study is to develop an equi-angular cone-beam reconstruction algorithm for the short-object problem. A novel implementation of Grangeat's algorithm using equi-angular sampling has been developed for the short-object problem with and without detector truncation. First, both the cone-beam projection g_Ψ(θ, φ) and the first derivative of the plane integral (3D Radon transform) p_Ψ(θ, φ) are expressed in spherical harmonics with equi-angular sampling. Then, using Grangeat's formula, the relationship between the spherical-harmonic coefficients of g_Ψ(θ, φ) and p_Ψ(θ, φ) is found. Finally, a method has been developed to obtain p_Ψ(θ, φ) from cone-beam projection data in which the object is partially scanned. Images are reconstructed using 3D Radon backprojection with rebinning. Computer simulations were performed to verify this approach: isolated (axially bounded) objects were scanned with both circular and helical orbits. When the orbit of the cone vertex does not satisfy Tuy's data sufficiency condition, strong oblique shadows and axial blurring appear in the reconstructed coronal images. On the other hand, if the trajectory satisfies Tuy's condition, the proposed algorithm provides an exact reconstruction. In conclusion, a novel implementation of Grangeat's algorithm for cone-beam image reconstruction using equi-angular sampling has been developed.
Template-based scatter correction in clinical brain perfusion SPECT
Michel Koole, Rik Van de Walle, Koen Van Laere, et al.
A practical method for scatter compensation in SPECT imaging is the triple-energy-window (TEW) technique, which estimates the fraction of scattered photons in the projection data pixel by pixel. This technique requires acquiring counts in three windows of the energy spectrum for each projection bin, which is not possible on every gamma camera. The aim of this study is to set up a scatter template for brain perfusion SPECT imaging by means of scatter data acquired with the triple-energy-window technique. This scatter template can be used for scatter correction as follows: the scatter template is realigned with the acquired, scatter-degraded, reconstructed image by means of the corresponding emission template, which also includes scatter counts. The ratios between the voxel values of this emission template and the acquired, reconstructed image are used to locally adjust the scatter template. Finally, the acquired, reconstructed image is corrected for scatter by subtracting the scatter estimates thus obtained. We compared the template-based approach with the TEW scatter correction technique for data acquired with the same gamma camera system and found a similar performance for both correction methods.
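For reference, the per-bin TEW estimate is commonly computed as a trapezoidal area under the two narrow flanking windows; a minimal sketch follows, with the window widths being typical illustrative values rather than necessarily those used in this study.

```python
def tew_scatter(c_low, c_high, w_low=3.0, w_high=3.0, w_main=20.0):
    """Triple-energy-window scatter estimate for one projection bin:
    counts in two narrow windows flanking the photopeak approximate
    the scatter spectrum under the main window by a trapezoid.
    Window widths are in keV; works element-wise on numpy arrays."""
    return (c_low / w_low + c_high / w_high) * w_main / 2.0
```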
Least-squares framework for projection MRI reconstruction
Jens Gregor, Fernando Rannou
Magnetic resonance signals that have very short relaxation times are conveniently sampled in a spherical fashion. We derive a least squares framework for reconstructing three-dimensional source distribution images from such data. Using a finite-series approach, the image is represented as a weighted sum of translated Kaiser-Bessel window functions. The Radon transform thereof establishes the connection with the projection data that one can obtain from the radial sampling trajectories. The resulting linear system of equations is sparse, but quite large. To reduce the size of the problem, we introduce focus of attention. Based on the theory of support functions, this data-driven preprocessing scheme eliminates equations and unknowns that merely represent the background. The image reconstruction and the focus of attention both require a least squares solution to be computed. We describe a projected gradient approach that facilitates a non-negativity constrained version of the powerful LSQR algorithm. In order to ensure reasonable execution times, the least squares computation can be distributed across a network of PCs and/or workstations. We discuss how to effectively parallelize the NN-LSQR algorithm. We close by presenting results from experimental work that addresses both computational issues and image quality using a mathematical phantom.
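The non-negativity-constrained least-squares step can be illustrated by a simple projected-gradient loop; this stands in for, and is far less sophisticated than, the paper's NN-LSQR variant, and the dense-matrix step-size bound is an assumption made for the sketch.

```python
import numpy as np

def nn_least_squares(A, b, n_iter=500):
    """Projected-gradient solution of min ||Ax - b||^2 subject to x >= 0.
    A simple stand-in for an NN-constrained LSQR; A is assumed dense here
    so that a Frobenius-norm step-size bound is cheap to compute."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, "fro") ** 2)   # conservative step size
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                   # gradient of the LS cost
        x = np.maximum(x - step * grad, 0.0)       # step, then project onto x >= 0
    return x
```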
Bayesian image reconstruction for transmission tomography using mixture model priors and deterministic annealing algorithms
Ing-Tsung Hsiao, Anand Rangarajan, Gene R. Gindi
We previously introduced a new Bayesian reconstruction method for transmission tomographic reconstruction that is useful for attenuation correction in SPECT and PET. To make it practical, we apply a deterministic annealing algorithm to the method in order to avoid the dependence of the MAP estimate on the initial conditions. The Bayesian reconstruction method uses a novel pointwise prior in the form of a mixture of gamma distributions. The prior models the object as comprising voxels whose values (attenuation coefficients) cluster into a few classes (e.g. soft tissue, lung, bone). This model is particularly applicable to transmission tomography since the attenuation map is usually well clustered and the approximate values of the attenuation coefficients in each region are known. The algorithm is implemented as two alternating procedures: a regularized likelihood reconstruction and a mixture parameter estimation. The Bayesian reconstruction algorithm can be effective, but it is sensitive to initial conditions since the overall objective is non-convex. To make it more practical, it is important to avoid such dependence on initial conditions. Here, we implement a deterministic annealing (DA) procedure on top of the alternating algorithm. We present Bayesian reconstructions with and without DA and show the independence from initial conditions achieved with DA.
Deformation of MR images using a local linear transformation
Pilar Castellanos, Pedro L. del Angel, Veronica Medina
A fully automatic method to deform medical images is proposed. The procedure is based on the application of a set of consecutive local linear transformations at fixed landmarks, generating a global non-linear deformation. Continuity is guaranteed by a smooth change from the landmark point to its neighborhood, which is a homotopy between an affine transformation and the identity map. Landmarks are distributed uniformly throughout both the reference and target images, and their density is increased to reach the desired similarity between the two images. A hybrid genetic optimization algorithm is used to search for the transformation parameters by maximizing the normalized mutual information. It is shown, by means of the transformation of a circle into a triangle and vice versa, that the method has the capability to generate either sharp or smooth deformations. For magnetic resonance images, it is shown that the successive application of the local linear transformations allows us to increase the similarity between the geometrically deformed images and the target. The results suggest that the method can be applied to a wide range of non-rigid image registration problems.
Reduction of noise and image artifacts in computed tomography by nonlinear filtration of projection images
Omer Demirkaya
This study investigates the efficacy of filtering two-dimensional (2D) projection images of computed tomography (CT) by nonlinear diffusion filtering to remove statistical noise prior to reconstruction. The projection images of a Shepp-Logan head phantom were degraded by Gaussian noise. The variance of the Gaussian distribution was adaptively changed depending on the intensity at a given pixel in the projection image. The corrupted projection images were then filtered using a nonlinear anisotropic diffusion filter. The filtered projections, as well as the original noisy projections, were reconstructed using filtered backprojection (FBP) with a Ram-Lak filter and/or a Hanning window. The ensemble variance was computed for each pixel on a slice. The nonlinear filtering of projection images improved the SNR substantially, on the order of fourfold, in these synthetic images. The comparison of intensity profiles across a cross-sectional slice indicated that the filtering did not result in any significant loss of image resolution.
Fast-image filters as an alternative to reconstruction kernels in computed tomography
Thomas Flohr, Stefan Schaller, Alexander Stadler, et al.
In computed tomography, axial resolution is determined by the slice collimation and the spiral algorithm, while in-plane resolution is determined by the reconstruction kernel. Both choices select a tradeoff between image resolution (sharpness) and pixel noise. We investigated an alternative approach using default settings for image reconstruction which provide a narrow reconstructed slice width and high in-plane resolution. If smoother images are desired, we filter the original (sharp) images instead of performing a new reconstruction with a smoother kernel. A suitable filter function in the frequency domain is the ratio of the smooth and the original (sharp) kernel. Efficient implementation was achieved by a Fourier transform of this ratio to the spatial domain. Separating the 2D spatial filtering into two subsequent 1D filtering stages in the x- and y-directions further reduces computational complexity. Using this approach, arbitrarily oriented multi-planar reformats (MPRs) can be treated in exactly the same way as axial images. Due to the efficient implementation, interactive modification of the filter settings becomes possible, which completely replaces the variety of different reconstruction kernels. We also implemented a further promising application of the method to thorax imaging, where different regions of the thorax (lungs and mediastinum) are jointly presented in the same images using different filter settings and different windowing.
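The core idea, filtering sharp images with the ratio of the two kernels' transfer functions and applying it separably in x and y, can be sketched as follows. The array conventions and the regularization of the division are assumptions made for illustration.

```python
import numpy as np

def smooth_from_sharp(image, sharp_mtf, smooth_mtf):
    """Emulate a smoother reconstruction kernel by filtering an already
    reconstructed (sharp) image with the kernel ratio.  `sharp_mtf` and
    `smooth_mtf` are 1D samples of the kernels' transfer functions on
    an np.fft.fftfreq grid."""
    ratio = smooth_mtf / np.maximum(sharp_mtf, 1e-6)  # guard tiny denominators
    h = np.fft.fftshift(np.real(np.fft.ifft(ratio)))  # spatial-domain 1D filter
    # separable application: filter rows, then columns
    out = np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, image)
    out = np.apply_along_axis(lambda c: np.convolve(c, h, mode="same"), 0, out)
    return out
```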
Physical phantom evaluation of EM-IntraSPECT (EMIS) algorithm for nonuniform attenuation correction in cardiac imaging
Andrzej Krol, James E. Bowsher, David H. Feiglin, et al.
The purpose of this study was to evaluate the performance of the EM-IntraSPECT (EMIS) algorithm for non-uniform attenuation correction in the chest. EMIS is a maximum-likelihood expectation-maximization (MLEM) algorithm for simultaneously estimating SPECT emission and attenuation parameters from emission data alone. EMIS uses the activity within the patient as transmission tomography sources, from which attenuation coefficients can be estimated. A thorax phantom with a normal heart was used. The activity images reconstructed by EMIS were compared to images reconstructed using a conventional MLEM with a fixed uniform attenuation map. Uniformity of the normal heart was improved with EMIS as compared to the conventional MLEM.
Renormalization method for inhomogeneity correction of MR images
Dongqing Chen, Lihong Li, Daeki Yoon, et al.
A correction method for inhomogeneity of magnetic resonance (MR) images was developed based on a renormalization transformation. It is a post-processing algorithm that operates on the images. Unlike previous post-processing methods, which need to determine either a filter size or a free adjustable parameter for different applications, the presented method is fully automated. Tests on physical phantom data and on patients' brain and neck MR images are presented.
Combined transformation of ordering SPECT sinograms for signal extraction from measurements with Poisson noise
A theoretically based transformation, which reorders SPECT sinograms degraded by Poisson noise according to their signal-to-noise ratio (SNR), has been proposed. The transformation is equivalent to the maximum noise fraction (MNF) approach developed for Gaussian noise treatment. It is a two-stage transformation. The first stage is the Anscombe transformation, which converts a Poisson-distributed variable into an approximately Gaussian one with constant variance. The second is the Karhunen-Loeve (K-L) transformation along the direction of the slices, which simplifies the complex task of three-dimensional (3D) filtering into a 2D spatial process performed slice by slice. In the K-L domain, the noise property of constant variance holds for all components, while the SNR of each component decreases in proportion to its eigenvalue, providing a measure of the significance of each component. The availability of the noise covariance matrix in this method eliminates the difficulty of separating noise from signal. Thus we can construct an accurate 2D Wiener filter for each sinogram component in the K-L domain, and design a weighting window to make the filter adaptive to the SNR of each component, leading to an improved restoration of SPECT sinograms. Experimental results demonstrate that the proposed method provides better noise reduction without sacrificing resolution.
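The two stages are standard building blocks and can be sketched directly: the Anscombe transform stabilizes Poisson variance, and the K-L transform along the slice axis is an eigendecomposition of the inter-slice covariance. Array shapes here are assumptions for illustration.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson counts become
    approximately Gaussian with unit variance."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def kl_transform(sinograms):
    """K-L transform along the slice direction.  `sinograms` has shape
    (n_slices, n_angles, n_bins); components are returned ordered by
    decreasing eigenvalue, i.e. decreasing SNR."""
    flat = sinograms.reshape(len(sinograms), -1)
    flat = flat - flat.mean(axis=1, keepdims=True)
    cov = flat @ flat.T / flat.shape[1]        # inter-slice covariance
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    comps = (vecs[:, order].T @ flat).reshape(sinograms.shape)
    return comps, vals[order]
```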
Performance evaluation of oblique surface reconstruction algorithm in multislice cone-beam CT
Laigao Michael Chen, Yun Liang, Dominic J. Heuscher
This paper presents the implementation of an algorithm termed oblique surface reconstruction (OSR) for multi-slice cone-beam reconstruction. Simulations on several mathematical phantoms are performed. The theoretical background is reviewed, and the reconstruction of simulated phantom data is presented in comparison to the current standard 180° LI algorithm. The OSR is shown to produce high-quality images when practical spiral cone-beam scanning utilizing the multi-slice configuration is considered.
High-speed cone-beam reconstruction on PC
Rongfeng Yu, Ruola Ning, Biao Chen
Cone-beam reconstruction has attracted a great deal of attention in the medical imaging community. However, high-resolution cone-beam reconstruction (CBR) involves a huge set of data and very time-consuming computation; it usually needs customized hardware or a large-scale computer to achieve acceptable speed. Although the Feldkamp algorithm is an approximate CBR algorithm, it is a practical and efficient 3D reconstruction algorithm and is a basic component in several exact cone-beam reconstruction algorithms (CBRAs). In this paper, we present a practical implementation of high-speed CBR on a commercially available PC based on hybrid computing (HC). We implement Feldkamp CBR with multi-level acceleration. We use HC utilizing single instruction multiple data (SIMD), making the execution units (EUs) in the processor work effectively. We also utilize the multi-thread and fiber support of the operating system, which automatically enables reconstruction parallelism in a multi-processor environment and makes data I/O to the hard disk more effective. Memory and cache access optimization is achieved by proper data partitioning. This approach was tested on an Intel Pentium III 500 MHz computer and compared to the traditional implementation. It reduces the filtering time by more than 75% for 288 projections, saves more than 60% of the reconstruction time for a 512³ volume, and maintains good precision, with less than 0.08% average error. Our system is cost-effective and fast: an effective reconstruction engine can be built with a market-available symmetric multi-processor (SMP) computer. This is an easy and cheap upgrade and is compatible with newer PC processors.
Segmentation of MR images of the brain based on statistical and spatial properties
Joao E. Batista
Segmentation based solely on statistical approaches does not take into account the spatial properties of the images. However, regions are not characterized in statistical terms only; structural and/or spatial properties are also important, and both should be considered. This paper presents a method which incorporates statistical and spatial image properties in a unified scheme for segmentation of MR images of the brain. It combines a pyramidal, or quad-tree, smoothing operation with statistical segmentation performed at variable levels of the quad-tree, followed by a downward boundary estimation. After the segmentation step (a k-means clustering algorithm), all regions and their constituent pixels are computed and stored in a data structure suitable for the quad-tree smoothing and boundary estimation. This paper describes the technique in detail and shows results obtained on both test and MR data.
Efficient voxel lookup in nonuniformly spaced images using virtual uniform axes
Medical image data is usually represented by a uniformly spaced grid of voxels. However, CT scanners, for example, are capable of producing non-uniformly spaced slice images. This is desirable when, for a particular patient, some regions (lesions) need to be imaged with high resolution while a lower resolution is sufficient in other areas. Such an adaptive slice spacing can significantly reduce X-ray dose, thus directly benefiting the patient. Unfortunately, computational handling of the resulting volume data is far less efficient than that of uniformly spaced images. To deal with this problem, the present paper introduces a novel data structure for non-uniformly spaced image coordinates: the so-called virtual uniform axes. By a generalization of Euclid's greatest common divisor (GCD) algorithm, a table of virtual voxels on a uniform grid is produced. Each of the uniform voxels in the virtual grid holds a pointer to the corresponding voxel in the original, non-uniform grid. Finding a voxel in the virtual uniform image can be done in constant time, as compared to logarithmic time for finding a voxel in a non-uniform image. This is achieved with significantly less additional storage than by resampling the image data itself to a uniform grid. Interpolation artifacts are also completely avoided.
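A minimal sketch of the idea follows: quantize the slice positions, take the integer GCD of the spacings as the virtual grid pitch, and fill a lookup table mapping each virtual slot to its real slice. The quantization unit and the assumption that spacings are multiples of it are illustrative choices, not the paper's exact generalization.

```python
from math import gcd
from functools import reduce

def virtual_axis(slice_positions, unit=0.01):
    """Build a virtual uniform axis for non-uniformly spaced slices.
    Positions (sorted, in mm) are quantized to `unit` so the integer GCD
    of the spacings gives the virtual pitch; each virtual slot stores
    the index of the real slice it belongs to."""
    q = [round(p / unit) for p in slice_positions]
    gaps = [b - a for a, b in zip(q[:-1], q[1:])]
    pitch = reduce(gcd, gaps)                   # needs at least two slices
    table = []
    for i, gap in enumerate(gaps):
        table.extend([i] * (gap // pitch))      # virtual voxels -> real slice
    table.append(len(slice_positions) - 1)
    # O(1) lookup: table[int((z - slice_positions[0]) / (pitch * unit))]
    return pitch * unit, table
```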
Deformable Geometry I
Shape-model-based adaptation of 3D deformable meshes for segmentation of medical images
Vladimir Pekar, Michael R. Kaus, Cristian Lorenz, et al.
Segmentation methods based on the adaptation of deformable models have found numerous applications in medical image analysis. Many efforts have been made in recent years to improve their robustness and reliability. In particular, more and more methods use a priori information about the shape of the anatomical structure to be segmented. This reduces the risk of the model being attracted to false features in the image and, as a consequence, makes the need for close initialization, which remains the principal limitation of elastically deformable models, less critical for the segmentation quality. In this paper, we present a novel segmentation approach which uses a 3D anatomical statistical shape model to initialize the adaptation process of a deformable model represented by a triangular mesh. As the first step, the anatomical shape model is parametrically fitted to the structure of interest in the image. The result of this global adaptation is used to initialize the local mesh refinement based on an energy minimization. We applied our approach to segment spine vertebrae in CT datasets. The segmentation quality was quantitatively assessed for 6 vertebrae, from 2 datasets, by computing the mean and maximum distance between the adapted mesh and a manually segmented reference shape. The results of the study show that the presented method is a promising approach for the segmentation of complex anatomical structures in medical images.
Poster Session: Tomographic Reconstruction, Statistical Methods, Shape, Data Retrieval, Motion, Multiresolution, Preprocessing, Pattern Recognition and Coding
Unsupervised partial volume estimation using 3D and statistical priors
Pierre Martin Tardif
Our main objective is to compute the volume of interest in images from magnetic resonance imaging (MRI). We suggest a method based on maximum a posteriori (MAP) estimation. Using texture models, we propose a new partial-volume determination. We model tissues using generalized Gaussian distributions fitted from a mixture of their gray levels and texture information. The texture information relies on estimation errors from multiresolution and multispectral autoregressive models. A uniform distribution accounts for large estimation errors when dealing with unknown tissues. An initial segmentation, needed by the multiresolution segmentation deterministic-relaxation algorithm, is found using an anatomical atlas. To model the a priori information, we use a full 3-D extension of Markov random fields. Our 3-D extension is straightforward, easily implemented, and includes single-label probability. Starting from the initial segmentation map and initial tissue models, iterative updates are made to the segmentation map and the tissue models; updating the tissue models removes field inhomogeneities. Partial volumes are computed from the final segmentation map and tissue models. Preliminary results are encouraging.
PDE-based approach for medial axis detection in x-ray angiographies
Benoit Tremblais, Bertrand Augereau, Michel Leard
In the present work we deal with assistance to the diagnosis of coronary stenosis from X-ray angiographies. Our goal is a 3D reconstruction of the coronary tree, so the extraction of some 2D characteristics is necessary. Here, we treat the problem of extracting the 2D medial axes of the vessels. The vessel geometry looks like valleys embedded in the image surface. Using differential geometry, we can locally characterize the medial axes as the bottom lines of valleys. However, we have to calculate the local derivatives of the image, which is an ill-posed and noise-sensitive problem. To overcome this drawback, we use a PDE-based approach. We first consider the PDE's numerical scheme as an iterative method known as a fixed-point search, obtaining a new method which assures the stability of the resolution process. The combination of this method and an appropriate PDE generates a scale-space in which we can detect arteries of various diameters. We then use the eigenvalues and eigenvectors of the Weingarten endomorphism to define a new valley-ness measure. We have tested this technique on several angiographies, where the medial axes were well extracted, even in the presence of strong stenoses. Furthermore, the extracted axes are one pixel wide and quite continuous.
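For a flavor of eigenvalue-based valley detection, the sketch below scores dark tubular structures from the eigenvalues of a Gaussian-smoothed Hessian. This is a generic Hessian criterion for illustration, not the paper's exact Weingarten-based measure, and the scale parameter is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def valley_measure(img, sigma=2.0):
    """Valley-ness from smoothed-Hessian eigenvalues: a dark vessel
    gives one strongly positive curvature across the vessel and a
    near-zero curvature along it."""
    ixx = gaussian_filter(img, sigma, order=(0, 2))   # d2/dx2 (x = axis 1)
    iyy = gaussian_filter(img, sigma, order=(2, 0))   # d2/dy2
    ixy = gaussian_filter(img, sigma, order=(1, 1))
    tmp = np.sqrt(((ixx - iyy) / 2) ** 2 + ixy ** 2)
    l1 = (ixx + iyy) / 2 + tmp                        # larger eigenvalue
    l2 = (ixx + iyy) / 2 - tmp                        # smaller eigenvalue
    return np.where(l1 > 0, l1 - np.abs(l2), 0.0)     # high inside valleys
```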
Partial volume estimation using continuous representations
This paper presents a new method for partial volume estimation using the standard eigenimage method and B-splines. The proposed method is applied to multi-parameter volumetric images such as MRI. The approach uses B-spline bases (kernels) to interpolate a continuous 2D surface or 3D density function for a sampled image dataset, using the Fourier domain to calculate the interpolation coefficients for each data point. This interpolation is then incorporated into the standard eigenimage method; the incorporation provides a particular mask depending on the B-spline basis used. To estimate the partial volumes, this mask is convolved with the interpolation coefficients, and the eigenimage transformation is applied to the convolution result. To evaluate the method, images scanned from a 3D simulation model are used. The simulation provides images similar to CSF, white matter, and gray matter of the human brain in T1-, T2-, and PD-weighted MRI. The performance of the new method is also compared to that of the polynomial estimators. The results show that the new estimators have standard deviations less than those of the eigenimage method (up to 25%) and larger than those of the polynomial estimators (up to 45%). The new estimators have capabilities superior to those of the polynomial ones in that they provide an arbitrary degree of continuity at the boundaries of pixels/voxels. As a result, employing the new method, a continuous, smooth, and very accurate contour/surface of the desired object can be generated. The new B-spline estimators are faster than the polynomial estimators but slower than the standard eigenimage method.
Knowledge representation for image-content analysis in medical image database
Hui Luo, Roger S. Gaborski, Raj S. Acharya
Object-oriented knowledge representation is considered a natural and effective approach. Nevertheless, the use of object-oriented techniques within complex image analysis has not undergone the rapid growth seen in other fields. We argue that one of the major problems comes from the difficulty of conceiving a comprehensive framework for coping with the different abstraction levels and the vision task operations. With the goal of overcoming such a drawback, we present a new knowledge model for medical image content analysis based on the object-oriented paradigm. The new model abstracts common properties from different types of medical images by using three attribute parts: description, component, and semantic graph. It also specifies its actions to schedule the detection procedure, properly deform the shape of model components to match the corresponding anatomies in images, select the best matching candidates, and verify combination graphs from detected candidates against the semantic graph defined in the model. The performance of the proposed model has been tested on pelvis digital radiographs. Initial results are encouraging.
Content-based image retrieval strategies for medical image libraries
Ahmed M. Ghanem, M. Emad M. Rasmy, Yasser M. Kadah
The objective of content-based image retrieval (CBIR) in the medical field is to permit radiologists to retrieve images with similar features that lead to a similar diagnosis as the input image. This differs from other fields, where the objective is to find the nearest images from the same category or to match a part of an image; therefore, such techniques cannot be directly applied in the medical field. In this study, a modified wavelet-based matching technique is introduced that is more robust to motion, noise, and brightness changes within the image. We also propose a description-based technique in which a semantic net is built, where each node represents a specific region and its spatial relation to other regions in the image. This semantic net can be considered a hierarchical relationship tree with links between nodes at the same level to describe the geographic relations between them. Nodes contain region-specific features, such as the moments of region boundaries, in addition to local textural features. In the matching phase, the semantic net is built for the input image and then used in the matching process, which starts from the highest level in the hierarchical relationship tree for fast convergence.
Compensation of motion artifacts in MR mammography by elastic deformation
Ingo A. Boesnach, Martin Haimerl, Harmut Friedburg, et al.
In contrast-agent-aided dynamic MR mammography, the diagnostic relevance of the subtraction images is often reduced by artifacts which arise from misalignment of successive images due to patient motion. In this article, a new registration technique is presented to compensate for such motion artifacts. For this purpose we use a three-dimensional elastic deformation model because, in contrast to other registration problems such as CT of the brain, patient motion is usually neither rigid nor confined to parallel planes. For the registration, a set of control points with locally high gradients is selected, uniformly distributed over the mammae. Template-based matching of small regions around the control points gives their optimal displacement vectors between the pre-contrast and post-contrast images. The Delaunay triangulation of the control points defines a set of tetrahedra, which are affinely transformed according to the displacements of their corners. The developed method works very fast and is thus suitable for high-resolution MR devices. Evaluation with a synthetically transformed post-contrast image shows a reduction of the mean error per voxel from 5.81 mm to 0.75 mm after registration.
Efficient multiresolution approach for image segmentation based on Markov random field
This paper proposes a computationally efficient hierarchical technique for object detection and segmentation and compares it with two other segmentation algorithms. The multiresolution (MR) algorithm performs segmentation at coarse resolution based on a maximum a posteriori (MAP) estimate of the field of pixel classifications, which is modeled as a Markov random field (MRF). Each resolution corresponds to a hierarchical level in a quad-tree, so the classification of a pixel at one resolution corresponds to the classification of four pixels at the next finer resolution; two levels down, each coarse pixel relates to 16 pixels at the finer resolution. The MAP criterion estimates the pixel classes given the observed field. The segmentation at each pixel is performed by searching randomly within the corresponding 4x4 block of finer-resolution pixels to find the minimum global energy at coarse resolution. Images from simulated head phantoms, degraded by Gaussian noise, are used to compare the proposed method with simulated annealing (SA) and minimum gray-level distance (MGLD) approaches. The computational cost and segmentation accuracy of these methods are studied. It is shown that the proposed MR method offers a robust and computationally inexpensive approach for the segmentation of noisy images.
Acceleration and evaluation of block-based motion estimation algorithms for x-ray fluoroscopy
Claudia Mayntz, Til Aach, Georg Schmitz
This paper discusses acceleration methods for block-based motion estimation, with a focus on application to moving low-dose X-ray images (X-ray fluoroscopy). These images often exhibit a very low signal-to-noise ratio. If the frame rate is sufficiently high, these degradations can be at least partly compensated by temporally motion-compensated filtering, which first requires a motion estimation step. Due to the low signal-to-noise level and strong local motions, standard algorithms are not suitable for application to fluoroscopy sequences. In earlier work we developed a full-search Bayesian block matching algorithm using spatial and temporal regularization. Based on the so-called Successive Elimination Algorithm of Li and Salari, we developed several acceleration methods for the motion estimation step. Here, these methods and a detailed evaluation based on several synthetically generated motion types in fluoroscopy sequences are provided. Using a weighted block-norm-based inequality, combined with an efficient calculation of the error measure partitioned into local measures, the number of search positions is markedly reduced without significant loss in estimation quality.
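The elimination idea rests on the inequality |sum(R) − sum(C)| ≤ SAD(R, C): a candidate whose block-sum difference already exceeds the best SAD found so far cannot win and is skipped. The sketch below shows the basic (unweighted) form; arrays are assumed to be floats, and in practice the block sums would come from a precomputed integral image.

```python
import numpy as np

def sea_block_match(ref_block, frame, top_left, search=8):
    """Full-search block matching accelerated by the Successive
    Elimination Algorithm bound |sum(R) - sum(C)| <= SAD(R, C)."""
    h, w = ref_block.shape
    y0, x0 = top_left
    ref_sum = ref_block.sum()
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            cand = frame[y:y + h, x:x + w]
            if abs(ref_sum - cand.sum()) >= best:
                continue                      # rejected by the SEA bound
            sad = np.abs(ref_block - cand).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```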
Efficient color image reconstruction by color multiplexing and vector quantization in the wavelet domain
Color images are usually represented by three color planes. For high-fidelity, low-bit-rate image storage or transmission, the redundancy of information across these color planes can be further reduced when combined with other image coding techniques. Instead of generating three codebooks for the three color planes individually when vector quantization (VQ) is applied, as is regularly done in color image compression with VQ, our new approach is to reduce the three color planes to one multiplexed plane using a spatial color-multiplexing technique, achieving a 3:1 compression before quantization. A wavelet transform is then applied to the multiplexed plane. Vector quantization of the wavelet coefficients is performed using an adaptive fuzzy leader clustering (AFLC) approach. An inverse wavelet transform and demultiplexing at the decoder side recover the color image in the spatial domain. Our experiments show that this new scheme yields high-fidelity reconstruction at a considerably lower bit rate than is achievable without color multiplexing.
Multiscale shape representation by image foresting transform
Alexandre Xavier Falcao, Bruno S. Cunha
The image foresting transform (IFT) reduces optimal image partition problems into a shortest-path forest problem in a graph, whose solution can be obtained in linear time. Such a strategy has allowed a unified and effective approach to the design of image processing operators, such as edge detection, region growing, watershed transforms, distance transforms, and connected filters. This paper presents a fast and simple IFT-based approach to multiscale shape representation with applications to medical imaging. Given an image with multiple contours, each contour pixel defines a seed with a contour label and a pixel label. The IFT computes the Euclidean distance transform, propagating both types of labels to other pixels in the image. A difference image is created from the propagated labels. The skeleton by influence zone (SKIZ) and multiscale skeletons are produced by thresholding the difference image. As compared to other approaches, including multiscale skeletonization based on the Voronoi diagram, the presented method can generate high-quality one-pixel-wide connected skeletons and SKIZ for objects of arbitrary topologies, simultaneously. Multiscale shape reconstructions can then be obtained by considering the SKIZ, the skeletons and the Euclidean distance values. The method allows non-linear multiscale shape filtering without border shifting, as illustrated with medical images.
Multiresolution spline-based 3D/2D registration of CT volume and C-arm images for computer-assisted surgery
We propose an algorithm for aligning a preoperative computed tomography (CT) volume and intraoperative C-arm images, with applications in computer-assisted spinal surgery. Our three-dimensional (3D)/two-dimensional (2D) registration algorithm is based on splines and is tuned to a multiresolution strategy. Its goal is to establish the mutual relations of locations in the real-world scene to locations in the 3D CT and in the 2D C-arm images. The principle of the solution is to simulate a series of C-arm images using CT data only. Each numerical simulation of a C-arm image is defined by its pose. Our registration algorithm then adjusts this pose until the given C-arm projections and the simulated projections exhibit the greatest degree of similarity. We show the performance of the algorithm in experiments in a controlled environment, which allows for an objective validation of its quality. For each of 100 randomly generated disturbances around the optimum solution, the 3D/2D registration algorithm was successful and resulted in image registration with subpixel error.
Detection and correction of geometric distortion in 3D MR images
Marcel M. Breeuwer, Mark Holden, Waldemar Zylka
Three-dimensional magnetic resonance medical images may contain scanner- and patient-induced geometric distortion. For qualitative diagnosis, geometric errors of a few millimeters are often tolerated. However, quantitative applications such as image-guided neurosurgery and radiotherapy can require an accuracy of a millimeter or better. We have developed a method to accurately measure scanner-induced geometric distortion and to correct the MR images for this type of distortion. The method involves a number of steps. First, a specially designed phantom is scanned that contains a large number of reference structures at positions with a manufacturing error of less than 0.05 mm. Next, the positions of the reference structures are automatically detected in the scanned images and a higher-order polynomial distortion-correction transformation is estimated. Then the patient is scanned and the transformation is applied to correct the patient images for the detected distortion. The distortion-correction method is explained in detail in this paper. The accuracy of the method has been measured with synthetically generated phantom scans that contain an exactly known amount and type of distortion. The reproducibility of the method has been measured by applying it to a series of consecutive phantom scans. Validation results are briefly described in this paper; a more detailed description is given in another submission to SPIE Medical Imaging 2001.
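The estimation step amounts to a linear least-squares fit of polynomial coefficients mapping detected marker positions to their manufactured positions. The sketch below shows this under the assumption of a dense monomial basis; the polynomial order and basis choice are illustrative, not the paper's exact parameterization.

```python
import numpy as np

def fit_distortion(measured, true, order=3):
    """Least-squares fit of a 3D polynomial distortion-correction map.
    `measured` and `true` are (N, 3) arrays of detected and manufactured
    marker positions; returns a function correcting arbitrary points."""
    def basis(p):
        x, y, z = p.T
        cols = [x ** i * y ** j * z ** k
                for i in range(order + 1)
                for j in range(order + 1 - i)
                for k in range(order + 1 - i - j)]
        return np.stack(cols, axis=1)
    A = basis(measured)                               # (N, n_terms)
    coef, *_ = np.linalg.lstsq(A, true, rcond=None)   # (n_terms, 3)
    return lambda pts: basis(pts) @ coef
```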
Adaptive template filter and its medical applications
Shuqian Luo, Jing Han
In this paper an adaptive template filtering method is described which can be used to increase the signal-to-noise ratio (SNR) while keeping the important edge information of medical images. To date, various filtering approaches have been reported; most of them enhance SNR to varying degrees at the cost of some useful information. We try to develop a robust algorithm. Unlike conventional filtering, where the template shape and coefficients are fixed, multiple templates are defined in the proposed algorithm. For each pixel, an optimal template is selected automatically depending on its neighboring pixels. Simulation and MRI image tests, both 2D and 3D, show that the new adaptive template filter provides higher SNR and sharper edges. Our method improves on the existing adaptive template filtering technique, corrects the scale factor of the threshold adjustment, and extends it to a 3D algorithm.
ImprocRAD software components for mammogram display and analysis
ImprocRAD is a software package for processing medical images generated by a radiology department. We first give a brief overview of the package and its components. In the context of the overview, we describe the components that we use for tasks related to mammogram display. These tasks are:
  - Rapid display of mammogram image sets (240 Mbytes simultaneously memory resident)
  - Re-sampling mammograms (speed vs. quality, including full 2D cubic MMSE)
  - Comparison of processing results (combining image matching with subtraction and difference metrics)
  - Real-time processing, including MTF compensation and a region of interest shown at full resolution
  - Automated film-scanner calibration (an automated user interface for generating lookup tables)
  - Timing and optimization tools (a stopwatch tool, troubleshooting window)
In addition, we briefly describe the workstation we are using for mammogram display.
Additional speed-up technique to fuzzy clustering using a multiresolution approach
Martin Buerki, Helmut Oswald, Karl Loevblad, et al.
The fuzzy clustering algorithm (FCA) is a powerful tool for unsupervised investigation of complex data in functional MRI. The original, computationally very expensive algorithm has been adapted in various ways to increase its performance while keeping it stable and sensitive. A simple and highly efficient way to speed up the FCA is preselection (screening) of potentially interesting time-courses, such that time-courses in which only noise is expected are discarded. Although quite successful, preselecting data by some criterion is a step back toward model-driven analysis and should therefore be used with deliberation; furthermore, some screening methods run the risk of missing non-periodic signals. We propose an additional adaptation using a multi-resolution approach that first scales down the data volumes. Starting with the lowest resolution, the FCA is applied to that level, and the computed centroids are used as initial values for the FCA at the next higher resolution, and so on until the original resolution is reached. The processing of all lower resolution levels serves as a good and fast initialization of the FCA, resulting in stable convergence and improved performance without loss of information.
Polynomial transformation for MRI feature extraction
We present a non-linear (polynomial) transformation to minimize the scattering of data points around normal tissue clusters in a normalized MRI feature space, in which normal tissues are clustered around pre-specified target positions. This transformation is motivated by the non-linear relationship between MRI pixel intensities and intrinsic tissue parameters (e.g., T1, T2, PD). To quantify the amount of scattering, we use the ratio of the sum of within-class distances of the clusters to the sum of their between-class distances, and we find the transformation by minimizing this scattering measure. Next, we generate a 3D visualization of the MRI feature space and define regions of interest (ROIs) on the clusters seen for normal and abnormal tissues. We use these ROIs to estimate signature vectors (cluster centers). Finally, we use the signature vectors for segmenting and characterizing tissues. We used simulation, phantom, and brain MRI studies to evaluate the polynomial transformation and compare it to the linear transformation. In all studies, we were able to identify clusters for normal and abnormal tissues and segment the images. Compared to the linear method, the non-linear approach yields enhanced clustering properties and better separation of normal and abnormal tissues. On the other hand, the linear transformation is more appropriate than the non-linear method for capturing partial volume information.
Antiscatter stationary-grid artifacts automated detection and removal in projection radiography images
Igor Belykh, Craig W. Cornelius
Antiscatter grids absorb scattered radiation and increase X-ray image contrast. Stationary grids leave line artifacts or create Moiré patterns on resized digital images. Various grid designs were investigated to determine the relevant physical properties that affect an image. The detection algorithm is based on determining the grid peak in the image's averaged 1D Fourier spectrum. Grid artifact removal is based on frequency filtering in the spatial dimension orthogonal to the grid stripes. Different filter design algorithms were investigated to choose the transfer function that maximizes the suppression of grid artifacts with minimal image distortion. The algorithms were tested on synthetic data containing a variety of SNRs and grid spatial inclinations, on radiographic data containing phantoms with and without grids, and on a set of real CR images. Detector and filter performance were optimized using the Intel Signal Processing Library, resulting in a time of about 3 seconds to process a 2K×2.5K CR image on a Pentium II PC. No grid artifacts and no image blur were revealed in processed images as evaluated by third-party technical and medical experts. This automated grid artifact suppression method is built into a new version of the Kodak PACS Link Medical Image Manager.
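A minimal version of the detect-then-filter pipeline follows: locate the grid peak in the row-averaged 1D spectrum and zero a narrow notch around it. The orientation assumption, the minimum-frequency cutoff, and the notch width are illustrative parameters; the paper's optimized transfer functions are more refined than a hard notch.

```python
import numpy as np

def remove_grid_lines(img, peak_min_frac=0.25, notch_halfwidth=2):
    """Detect a stationary-grid peak in the averaged 1D spectrum across
    rows (grid stripes assumed vertical) and suppress it with a narrow
    notch along that dimension."""
    spec = np.fft.rfft(img, axis=1)
    avg = np.abs(spec).mean(axis=0)              # averaged 1D spectrum
    lo = int(peak_min_frac * len(avg))           # skip low-frequency content
    peak = lo + np.argmax(avg[lo:])
    notch = np.ones(len(avg))
    notch[max(peak - notch_halfwidth, 0):peak + notch_halfwidth + 1] = 0.0
    return np.fft.irfft(spec * notch, n=img.shape[1], axis=1)
```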
Adaptive EZW coding using a rate-distortion criterion
Che-Yi Yin
This work presents a new method that improves on the EZW image coding algorithm. The standard EZW image coder uses a uniform quantizer with a threshold (deadzone) that is identical in all subbands; the quantization step sizes are not optimized in the rate-distortion sense. We modify the EZW by applying the Lagrange multiplier method to search for the best step size for each subband and allocate the bit rate to each subband accordingly. We then implement the adaptive EZW codec to code the wavelet coefficients. Two coding environments, independent and dependent, are considered for the optimization process. The proposed image coder retains all the good features of the EZW, namely embedded coding, progressive transmission, and ordering of the important bits, and enhances it through rate-distortion optimization with respect to the step sizes.
Preliminary validation of content-based compression of mammographic images
This paper presents some preliminary validation results from the content-based compression (CBC) of digitized mammograms for transmission, archiving, and, ultimately, telemammography. Unlike traditional compression techniques, CBC is a process by which the content of the data is analyzed before the compression takes place. In this approach the data is partitioned into two classes of regions and a different compression technique is performed on each class. The intended result achieves a balance between data compression and data fidelity. For mammographic images, the data is segmented into two non-overlapping regions: (1) background regions, and (2) focus-of-attention regions (FARs) that contain the clinically important information. Subsequently, the former regions are compressed using a lossy technique, which attains large reductions in data, while the latter regions are compressed using a lossless technique in order to maintain the fidelity of these regions. In this case, results show that compression ratios averaging 5-10 times greater than that of lossless compression alone can be achieved, while preserving the fidelity of the clinically important information.
High-fidelity adaptive medical image reconstruction using real-time wavelet filter design for fast Internet transmission and display
Vadim Kustov, Sunanda Mitra, Ryan B. Casey, et al.
Most high-resolution medical images, such as X-ray radiographic images, require enormous storage space and considerable time for transmission and viewing. We propose a wavelet design that adaptively creates the optimal filter taps for any class of images, enabling high-fidelity image reconstruction from an energy-compacted section of the wavelet-decomposed original image, with considerable reduction in memory requirements as well as in execution, transmission, and viewing time. This optimal filter tap design is based on two-channel perfect reconstruction quadrature mirror filter (PR-QMF) banks using an interior-point-based optimization algorithm. The algorithm finds, in real time, wavelet filter taps that leave the smallest amount of energy in the detail sections of the wavelet decomposition of an image. Once the filter taps have been created and a one-level wavelet transform has been performed, the energy-compacted component of the image, containing one fourth of the number of elements in the original image, is retained without any significant loss in information content. This energy-compacted section of the image is then used with any chosen advanced compression algorithm. This technique provides a significant reduction in execution time without an appreciable increase in distortion for advanced lossy image compression algorithms.
Poster Session: Segmentation, Deformable Geometry, Registration, and Computer-Aided Diagnosis
Cardiac image segmentation using spatiotemporal clustering
Sasa Galic, Sven Loncaric
Image segmentation is an important and challenging problem in image analysis. Segmentation of moving objects in image sequences is even more difficult and computationally expensive. In this work we propose a technique for spatio-temporal segmentation of medical image sequences based on K-means clustering in the feature vector space. The motivation for the spatio-temporal segmentation approach comes from the fact that motion is a useful clue for object segmentation. A two-dimensional feature vector has been used for clustering in the feature space. In this paper we apply the proposed technique to the segmentation of cardiac images. The first feature used in this particular application is image brightness, which reveals the structure of interest in the image. The second feature is the Euclidean norm of the optical flow vector. The third feature is the three-dimensional optical flow vector, which consists of the computed motion in all three dimensions. The optical flow itself is computed using the Horn-Schunck algorithm. The fourth feature is the mean brightness of the input image in a local neighborhood. By applying the clustering algorithm it is possible to detect moving objects in the image sequence. The experiment has been conducted using a sequence of ECG-gated magnetic resonance (MR) images of a beating heart, sampled both in time and in space.
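A minimal sketch of the clustering step follows: plain K-means over per-pixel feature vectors built from brightness and optical-flow magnitude. The frames, flow field, and cluster count are placeholder assumptions; the paper computes the flow with the Horn-Schunck algorithm and considers several feature combinations.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: alternate nearest-center assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

h, w = 64, 64
brightness = np.random.rand(h, w)      # placeholder frame
flow_mag = np.random.rand(h, w)        # placeholder optical-flow magnitude
X = np.stack([brightness.ravel(), flow_mag.ravel()], axis=1)
labels, _ = kmeans(X, k=3)
segmentation = labels.reshape(h, w)    # cluster map over the image grid
```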
Automatic detection of the myocardial boundaries of the right and left ventricles in MR cardio perfusion scans
Luuk J. Spreeuwers, Marcel M. Breeuwer
Recent advances in Magnetic Resonance Imaging allow fast recording of contrast-enhanced myocardial perfusion scans. MR perfusion scans are made by recording, during a period of 20-40 seconds, a number of short-axis slices through the myocardium. The scanning is triggered by the patient's ECG, typically resulting in one set of slices per heart beat. For the perfusion analysis, the myocardial boundaries must be traced in all images. Currently this is done manually, a tedious procedure prone to inter- and intra-observer variability. In this paper a method for automatic detection of myocardial boundaries is proposed. This results in a considerable reduction of analysis time and is an important step towards automatic analysis of cardiac MR perfusion scans. The most important consideration in the proposed approach is the use of not only spatial-intensity information, but also intensity-time and shape information, to realize a robust segmentation. The procedure was tested on a total of 30 image sequences from 14 different scans. In 26 out of 30 sequences the myocardial boundaries were correctly found. The remaining 4 sequences were of very low quality and would most likely not be used for analysis.
Procedure to detect anatomical structures in optical fundus images
Langis Gagnon, Marc Lalonde, Mario Beaulieu, et al.
We present an overview of the design and testing of an image processing procedure for detecting all important anatomical structures in color fundus images. These structures are the optic disk, the macula and the retinal network. The algorithm proceeds through five main steps: (1) automatic mask generation using pixel-value statistics and color thresholding, (2) visual image quality assessment using histogram matching and Canny edge distribution modeling, (3) optic disk localization using pyramidal decomposition, Hausdorff-based template matching and confidence assignment, (4) macula localization using pyramidal decomposition, and (5) vessel network tracking using recursive dual edge tracking and connectivity recovery. The procedure has been tested on a database of about 40 color fundus images acquired from a digital non-mydriatic fundus camera. The database is composed of images of various types (macula- and optic disk-centered) and of various visual quality (with or without abnormal bright or dark regions, blurred, etc.).
Morphological texture assessment of oral bone as a screening tool for osteoporosis
Mostafa Analoui, Hafsteinn Eggertsson, George Eckert
Three classes of texture analysis approaches have been employed to assess the textural characteristics of oral bone. A set of linear structuring elements was used to compute granulometric features of trabecular bone. Multifractal analysis was also used to compute the fractal dimension of the corresponding tissues. In addition, some statistical features and histomorphometric parameters were computed. To assess the proposed approach we acquired digital intraoral radiographs of 47 subjects (14 males and 33 females). All radiographs were captured at 12 bits/pixel. Images were converted to binary form through a sliding locally adaptive thresholding approach. Each subject was scanned by DEXA for bone densitometry. Subjects were classified into one of the following three categories according to the World Health Organization (WHO) standard: (1) healthy, (2) osteopenia, and (3) osteoporosis. In this study, fractal dimension showed very low correlation with bone mineral density (BMD) measurements, which did not reach a level of statistical significance (p<0.5). However, entropy of pattern spectrum (EPS), along with statistical features and histomorphometric parameters, showed correlation coefficients ranging from low to high, with statistical significance for both males and females. The results of this study indicate the utility of this approach for bone texture analysis. It is conjectured that designing a 2-D structuring element specially tuned to trabecular bone texture will increase the efficacy of the proposed method.
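The granulometric feature mentioned above can be sketched as follows: openings with progressively longer linear structuring elements, whose removed-area differences form a pattern spectrum, from which the entropy (EPS) is computed. This is a hedged illustration on a synthetic binary image; the structuring-element lengths and orientation are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_opening

def pattern_spectrum(binary, lengths, horizontal=True):
    """Area removed by openings with growing linear structuring elements."""
    areas = [binary.sum()]
    for L in lengths:
        se = np.ones((1, L), bool) if horizontal else np.ones((L, 1), bool)
        areas.append(binary_opening(binary, structure=se).sum())
    return -np.diff(np.asarray(areas, dtype=float))

binary = np.random.rand(128, 128) > 0.5        # placeholder binary bone image
ps = pattern_spectrum(binary, lengths=[3, 5, 9, 15, 25])
p = ps / ps.sum()                              # normalized pattern spectrum
eps = float(-(p[p > 0] * np.log2(p[p > 0])).sum())   # entropy of the spectrum
print("EPS:", eps)
```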
Fusion for optimal path recovering in cerebral x-ray angiography
Cezary Boldak, Christine Toumoulin, Jean-Louis Coatrieux
The objective is to extract the most plausible graph of 2D vascular branches (e.g. with respect to some basic vessel features) or, in other words, to find the best pairing of vascular segments forming these branches. The assumption here is that a previous detection has been carried out which provides the vessel centerlines. The method, based on a sparse-to-dense description, has been designed to eliminate irrelevant lines and to extract the most important branches. The key feature of the method is the use of data fusion concepts in a simple but efficient way, capable of later integrating possibilistic or fuzzy decisions. It makes use of local fusion decisions at each node (vessel forking, crossing or ending), based on intensity, continuity and shape properties. Several criteria have been explored with hierarchically structured features. A global fusion allows multiple optimal paths to be set and further merged in order to derive a final graph. Local and global fusions are applied by traversing the vessel network from the extremities to the root and vice versa. Segments and branches are defined as objects for any further selection, manipulation and measurement. This method can be used both in pre-operative and intra-operative situations.
Automated detection of midsagittal plane in MR images of the head
Deming Wang, Jonathan B. Chalk, David M. Doddrell, et al.
A fully automated and robust method is presented for dividing MR 3D images of the human brain into two hemispheres. The method is developed specifically to deal with pathologically affected brains, or brains in which the longitudinal fissure (LF) is significantly widened due to ageing or atrophy associated with neuro-degenerative processes. To provide a definitive estimate of the mid-sagittal plane, the method combines longitudinal fissure lines detected in both axial and coronal slices of T1-weighted MR images and then fits these lines to a 3D plane. The method was applied to 36 brain MR image data sets (15 of them arising from subjects with probable Alzheimer's disease), all exhibiting some degree of fissure widening and/or significant asymmetry due to pathology. Visual inspection of the results revealed that the separation was highly accurate and satisfactory. In some cases (5 in total), there were minor degrees of asymmetry in the posterior fossa structures despite successful splitting of the cerebral cortex.
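The final plane-fitting step admits a closed-form least-squares solution: the plane normal is the direction of least variance of the fissure-line points, i.e. the smallest singular vector of the centered point matrix. A minimal sketch with synthetic points:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points: normal n and offset d in n.x = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                     # direction of least variance
    return normal, float(normal @ centroid)

# Synthetic fissure-line points scattered near the plane x = 0.
pts = np.random.randn(200, 3)
pts[:, 0] = 0.05 * np.random.randn(200)
n, d = fit_plane(pts)
print(n, d)                             # n approximately (+/-1, 0, 0), d near 0
```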
Stent detection for presentation by overlay in injected x-ray cardiac images
Vincent Courboulay, Michel Eboueya, Michel Menard, et al.
Coronary stenting is now a widely used technique for the treatment of stenosis of the coronary arteries. One of the main difficulties for interventionists is locating the dilated stent in the image and controlling its accurate positioning with respect to the lesion. Indeed, the stent is less contrasted than injected vessels and can hardly be located in the injected frames. We propose to help them by identifying the stent in the non-injected frames and then showing it by overlay in the injected frames. Physicians can then more easily appreciate the relative positioning of the stent and the pathology. For the detection task, the performance of our algorithm varies around 80% with a standard deviation of about 10%. This result has been obtained with more than 20 series. Each of them includes about 20 non-injected frames acquired on an LC+ system. At least one stent is present in each of these series. Our database includes stents whose radio-opacity varies over a large range. With our non-optimized implementation, the computation time for a 512x512 image is about 20 seconds. This work is a first step in the use of image processing techniques for the segmentation of the prostheses that are routinely used by interventionists. By locating them in the image, the x-ray imaging system will be able to provide a better display of them.
Automatic bone-free rendering of cerebral aneurysms via 3D-CTA
Punam K. Saha, John M. Abrahams, Jayaram K. Udupa
3D computed tomographic angiography (3D-CTA) has been described as an alternative to digital subtraction angiography (DSA) in the clinical evaluation of cerebrovascular diseases. A bone-free rendition of 3D-CTA facilitates a quick and accurate clinical evaluation of the disease. We propose a new bone removal process that is accomplished in three sequential steps: (1) primary delineation and removal of bones, (2) removing the effect of partial voluming around bone surfaces, and (3) removal of thin bones around the nose, mouth and eyes. The bone-removed image of vasculature and aneurysms is rendered via maximum intensity projection (MIP). The method has been tested on 10 patients' 3D-CTA images acquired on a General Electric Hi-Speed spiral CT scanner. The algorithm successfully subtracted bone, showing the cerebral vasculature, in all 10 patients' data. The method allows for a unique analysis of 3D-CTA data with near-automatic removal of bones. This greatly reduces the need for the manual removal of bones that is currently employed and greatly facilitates the visualization of the anatomy of vascular lesions.
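For the rendering step, a maximum intensity projection reduces the bone-removed volume to a 2D image by taking, along each ray, the brightest voxel. The sketch below compresses the whole pipeline into a single crude threshold followed by a MIP; the volume and threshold are placeholders, and the paper's actual bone removal is the three-step procedure described above, not a single threshold.

```python
import numpy as np

volume = np.random.rand(64, 128, 128) * 2000.0   # placeholder CTA volume
bone_mask = volume > 1200.0                      # crude bone threshold (assumed)
vessels = np.where(bone_mask, 0.0, volume)       # suppress bone voxels
mip = vessels.max(axis=0)                        # maximum intensity projection
```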
Semiautomatic bone removal technique from CT angiography data
Cortical bone is the major barrier to visualizing the 3-D blood vessel tree from CT Angiography (CTA) data. We have therefore developed a novel semi-automatic technique that removes the cortical bone and retains the clinically diagnostic information such as blood vessels, aneurysms, and calcifications. The technique is based on a methodical composite set of filters that use region-growing, adaptive, and morphological filtering algorithms. While using only voxel intensity values and region size information, this technique leaves most of the CTA data untouched. We have applied this method to 10 CTA abdomen and head data sets. The accuracy of the method was tested and proved successful by visual inspection of all segmented slices. The segmented CTA data were also visualized in 3-D with different ray-casting volume rendering techniques (e.g. maximum intensity projection). The blood vessels, along with other diagnostic information, were clearly visualized in 3-D without the obstruction of bone. The segmentation technique ran in under one second per slice (image size 512x512x2 bytes) on a PC with a 550 MHz processor.
Extraction of the contours of left ventricular cavity according with those traced by medical doctors from left ventriculograms using a neural edge detector
Kenji Suzuki, Isao Horiba, Noboru Sugie, et al.
In this paper, we propose a novel edge detector using a multilayer neural network, called the neural edge detector (NED), and a new contour-extraction method using the NED to extract contours consistent with those traced by medical doctors. The NED is a supervised edge detector: through training with a set of input images and desired edges, it acquires the function of the desired edge detector. The proposed contour-extraction method consists of (a) edge detection using the NED, (b) extraction of rough contours based on band-pass filtering, and (c) contour tracking based on contour candidates synthesized from the edges detected by the NED and the rough contours. Experiments to extract the contours of the left ventricular cavity from left ventriculograms were performed. Comparative evaluation with conventional edge detectors showed that the NED has the highest performance. Experiments evaluating the performance of contour extraction demonstrated the following: the proposed method can extract contours consistent with those traced by medical specialists; the performance of the proposed method is higher than that of the conventional method; and the proposed method has approximately the same ability as medical specialists.
Segmentation of bone and soft tissue regions in digital radiographic images of extremities
This paper presents an algorithm for segmentation of computed radiography (CR) images of extremities into bone and soft tissue regions. The algorithm is region-based: the regions are constructed using a growing procedure with two different statistical tests. Following the growing process, a tissue classification procedure is employed. The purpose of the classification is to label each region as either bone or soft tissue. This binary classification goal is achieved by using a voting procedure that clusters the regions in each neighborhood system into two classes. The voting procedure provides a crucial compromise between local and global analysis of the image, which is necessary due to the strong exposure variations seen on the imaging plate. Also, the existence of regions large enough that exposure variations can be observed across them makes it necessary to use overlapping blocks during classification. After the classification step, the resulting bone and soft tissue regions are refined by fitting a second-order surface to each tissue and re-evaluating the label of each region according to the distance between the region and the surfaces. The performance of the algorithm was tested on a variety of extremity images using manually segmented images as the gold standard. The experiments showed that our algorithm provided a bone boundary with an average area overlap of 90% compared to the gold standard.
RBF network with cylindrical coordinate features for multispectral MRI segmentation
Spatial quantification of relevant brain structures is usually carried out through the analysis of a stack of magnetic resonance (MR) images by means of some image segmentation approach. In this paper, multispectral MR image segmentation based on a modified radial-basis function network is presented. Multispectral MR image sets are constructed by collecting data for the same anatomical structures under T1, T2 and FLAIR excitation sequences. Classification features for the network are extended beyond the normalized intensities in each band to also include the cylindrical coordinates of the image pixels. Such coordinates are determined within a reference image space onto which all targets are registered. The network classifier was designed to differentiate three structures: gray matter, white matter and image background. The classification layer was also modified to accommodate the pixel cylindrical coordinates as inputs. With the designed network, background pixels are correctly classified in all cases, while gray and white matter pixels are misclassified in about 10% of the cases in the validation set. The source of these errors can be traced to smooth transitions in the output nodes for these two classes. Thresholding the outputs of these nodes to include a reject class reduces the misclassification error. The small and simple architecture of the network shows good generalization, and thus good segmentation over unseen stacks.
Real-time retinal tracking for laser treatment planning and administration
Nahed Solouma, Abou-Bakr M. Youssef, Yehia Badr, et al.
We propose a computerized system to accurately point a laser at the diseased areas within the retina based on predetermined treatment planning. The proposed system consists of a fundus camera using a red-free illumination mode interfaced to a computer that allows real-time capture of video input. The first image acquired is used as the reference image for treatment planning. A new segmentation technique was developed to accurately discern the image features using deformable models. A grid of seed contours over the whole image is initiated and allowed to deform by splitting and/or merging according to preset criteria until the whole vessel tree is extracted. This procedure extracts the whole area of small vessels but only the boundaries of the large vessels. Correlating the image with a one-dimensional Gaussian filter in two perpendicular directions is used to extract the core areas of such vessels. Faster segmentation can be obtained for subsequent images by automatic registration to compensate for eye movement and saccades. Comparing the two sets of landmark points using a least-squares error provides an optimal transformation between the two point sets. This allows for real-time location determination and tracking of treatment positions.
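The landmark-based registration admits a closed-form least-squares solution (the SVD-based Procrustes/Kabsch construction), shown below for matched 2D point sets. This is a hedged sketch with synthetic points; it assumes a rigid rotation-plus-translation model, which may differ from the paper's exact transformation model.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares R, t with R @ p + t ~ q for matched 2D point rows."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.random.rand(30, 2)               # synthetic landmark set
Q = P @ R_true.T + np.array([2.0, -1.0])
R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true), t)        # recovers the rotation and shift
```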
Extraction of coronary arteries by using a sequence of x-ray angiographic images
Chih-Yang Lin, Yu-Tai Ching, Shiuh-Yung James Chen
A method was developed for extracting coronary arteries from a contiguous sequence of angiographic images. Since the coronary arteries in these images usually have poor local contrast, with ribs, spine, and other tissues in the background, we remove the background using temporal-continuity information. A set of multi-size matched filters is then applied to enhance vessels with poor local contrast. A wavelet-transform-based method is then employed to remove noise and enhance image quality. We also design a stencil mask to further remove stationary tissues.
Vascular segmentation algorithm using locally adaptive region growing based on centerline estimation
In this paper, we propose a new region-based approach, on the basis of centerline estimation, to segment vascular networks in 3D CTA/MRA images. The proposed algorithm is applied repeatedly to newly updated local cubes. It consists of three tasks: local region growing, surfacic connected component labeling, and next-local-cube detection. The cube size is adaptively determined according to the estimated vessel diameter. After region growing inside a local cube, we perform the connected component labeling procedure on all 6 faces of the current local cube (surfacic component labeling). The detected surfacic components are then put into a queue to serve as seeds of the following local cubes. Contrary to conventional centerline-tracking methods, the proposed algorithm can detect all bifurcations without restriction because a region-based method is used at every local cube. And by confining region growing to a local cube, it can be more effective in producing the expected results. It should be noted that the segmentation result is divided into several branches, so a user can easily edit the result branch by branch. The proposed method can automatically generate a fly-through path in a virtual angioscopic system, since it provides a tree structure of the detected branches.
Segmentation of medical images using adaptive region growing
Interaction increases the flexibility of segmentation, but it leads to undesirable algorithm behavior if the knowledge requested from the user is inappropriate. In region growing, this is the case when the user must define the homogeneity criterion, since its specification also depends on image formation properties that are not known to the user. We developed a region growing algorithm that learns its homogeneity criterion automatically from characteristics of the region to be segmented. The method is based on a model that describes homogeneity and simple shape properties of the region. Parameters of the homogeneity criterion are estimated from sample locations in the region. These locations are selected sequentially in a random walk starting at the seed point, and the homogeneity criterion is updated continuously. The method was tested for segmentation on test images and on structures in CT images. We found the method to work reliably if the model assumptions on homogeneity and region characteristics hold. Furthermore, the model is simple but robust, allowing for a certain degree of deviation from the model constraints while still delivering the expected segmentation result. This approach was extended to a fully automatic and complete segmentation method by using the pixel with the smallest gradient magnitude in the not-yet-segmented image region as the next seed point.
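In the same spirit, the sketch below grows a region from a seed while continuously re-estimating the homogeneity criterion (mean plus or minus k times the standard deviation) from the pixels accepted so far. It is a simplified stand-in for the paper's approach: parameters here are updated from the grown region rather than from a random walk, and k and the minimum sigma are assumed values.

```python
import numpy as np
from collections import deque

def adaptive_grow(img, seed, k=2.5, min_sigma=1.0):
    """Region growing with a continuously re-estimated homogeneity criterion."""
    h, w = img.shape
    region = np.zeros((h, w), bool)
    region[seed] = True
    vals = [float(img[seed])]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        mu, sigma = np.mean(vals), max(np.std(vals), min_sigma)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if abs(img[ny, nx] - mu) <= k * sigma:   # adaptive criterion
                    region[ny, nx] = True
                    vals.append(float(img[ny, nx]))
                    queue.append((ny, nx))
    return region

img = np.random.normal(100, 5, (64, 64))
img[20:40, 20:40] += 60                     # bright synthetic object
mask = adaptive_grow(img, seed=(30, 30))
```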
Segmentation of ulcerated plaque: evaluation and optimization of a semiautomatic method for tracking the progression of carotid atherosclerosis
Jeremy D. Gill, Hanif M. Ladak, Aaron Fenster
A semi-automatic method for segmenting the carotid lumen and plaque from three-dimensional vascular ultrasound (US) images has been developed. We examine its ability to distinguish changes in carotid vessel and plaque surface morphology, such as those caused by plaque ulceration. Two stenosed vessel phantoms were imaged using a 3D US imaging system. The phantoms were identical except for the inclusion of a hemispherical cut in the side of one of the vessels, to simulate the development of an ulceration. Ultrasound images of the phantoms were segmented using our algorithm, and the resulting surfaces were then registered to one another using a rigid-body iterative closest point (ICP) algorithm. The volume of the ulceration was determined by taking the difference between the two segmented surfaces in a region of interest surrounding the ulceration. Since the true volume of the ulceration was known a priori, an optimization strategy was used to tune the deformable model to better segment the ulceration. Analysis of ulceration volume as a function of the deformable model's parameters shows that (1) large ulcerations are easily identified in our test case, and (2) the model is well behaved with respect to its parameters, suggesting that an automatic strategy for volumetric optimization is feasible.
Local-cost computation for efficient segmentation of 3D objects with live wire
Andrea Schenk, Guido P. M. Prause, Heinz-Otto Peitgen
We present an approach for the optimization of the live wire algorithm applied to 3D medical images. Our method restricts the computation of the cost function to relevant areas and considers regionally specific properties of the object boundary. As a consequence, precise contours can be obtained in reduced computation and interaction time. For the calculation of the cost function on the current image slice, the nearest contour on an adjacent slice is taken as reference. The reference contour is divided into local segments and the image pixels are classified into regions with respect to their distance to the contour segments. The size of these regions is controlled by a given maximum distance. Cost function parameters are learned separately from every local contour segment of the reference slice and define the cost function for the respective region on the current slice. We used the local cost computation for the interactive definition of object contours, as well as for the optimization of interpolated contours between user-defined contours. Applied to CT and MR data of the liver, our method showed considerable advantages over the conventional algorithm based on a global cost function, particularly for objects with inhomogeneities or with different surrounding tissue.
Automated detection of venous beading in retinal images
Samuel C. Lee, Yiming Wang, Weining Tan
The purpose of this paper is to present a method for automatic detection of venous beading in the spatial domain. The method consists of three steps: (1) creating an accurate vein map, (2) creating an accurate vein width map, in which the width of every vein is represented by a string of numbers, each indicating the width of the vein in pixels at that location, and (3) applying an automatic venous beading detection algorithm. The parameters used in the beading detection algorithm include the widths of the veins at two adjacent local maxima and minima, the difference between the two widths, and the lengths of the broad and narrow sections. The ranges of the values of these parameters were obtained empirically. Standard Photographs 6B were used to test the algorithm and the results were quite satisfactory.
Retinal blood vessel detection using frequency analysis and local-mean-interpolation filters
Weining Tan, Yiming Wang, Samuel C. Lee
Unlike existing automatic retinal blood vessel detection methods, in which the vessels are detected by edge detection, thresholding, or both (such as successive local probing) in the spatial domain, this paper presents a frequency-domain approach to the vessel detection problem. Through frequency-domain analysis of the vessel signals, we found that the vessel signals between 0.1 and 0.25 on the normalized frequency scale showed a relatively high signal-to-noise ratio, and thus could be separated from the other image signals by using a band-pass filter. Instead of a conventional digital filter, a bank of Local-Mean-Interpolation (LMI) filters was employed. These provide not only the band-pass function that is needed, but also a number of features desirable from a practical point of view, such as ease of implementation, computational speed, and high filtering performance. Twenty randomly selected color retinal images were used to test the proposed method. The results showed that the vessel details could be successfully detected by this new method. When compared with the hand-labeled ground-truth segmentation and measured by the Figure of Merit (FOM = true positives/(1 + false positives)), the method achieved an FOM of up to 0.79. As a final note, with some modifications, the method presented may be extended to the automatic detection of vessels (or other features/objects) in other 2D or 3D medical images, such as ultrasound, CAT, and MRI images.
Multimodality image quantification using the Talairach grid
Manuel Desco, Javier Pascau, Santiago Reig, et al.
We present an application of the widely accepted anatomical reference of the Talairach atlas as a system for semiautomatic segmentation and analysis of MRI and PET images. The proposed methodology can be seen as a multimodal application in which the anatomical information of the MRI is used to build the Talairach grid and a co-registered PET image is superimposed on the same grid. By doing so, the Talairach-normalized tessellation of the brain is directly extended to PET images, allowing for a convenient regional analysis of volume and activity rates of brain structures, defined in the Talairach atlas as sets of cells. This procedure requires minimal manipulation of brain geometry, thus fully preserving individual brain morphology. To illustrate the potential of the Talairach method for neurological research, we applied our technique in a comparative study of volume and activity rate patterns in MRI and PET images of a group of 51 schizophrenic patients and 24 healthy volunteers. With regard to previous applications of the Talairach grid as an automatic segmentation system, the procedure presented here features two main improvements: the enhanced possibility of measuring metabolic activity in a variety of brain structures, including small ones like the caudate nucleus, hippocampus or thalamus; and its conception as an easy-to-use tool developed to work in a standard PC Windows environment.
Evaluation of segmentation using lung nodule phantom CT images
Philip F. Judy, Francine L. Jacobson
Segmentation of chest CT images has several purposes. In lung-cancer screening programs, for nodules below 5 mm, growth measured from sequential CT scans is the primary indication of malignancy. Automatic segmentation procedures have been used as a means to ensure a reliable measurement of lung nodule size. A lung nodule phantom was developed to evaluate the validity and reliability of size measurements using CT images. Thirty acrylic spheres and cubes (2-8 mm) were placed in a 15 cm diameter disk of uniform material that simulated the lung. To demonstrate the use of the phantom, it was scanned using our hospital's lung-cancer screening protocol. A simple, yet objective, threshold technique was used to segment all of the images in which the objects were visible. All the pixels above a common threshold (the mean of the lung material and acrylic CT numbers) were considered within the nodule. The relative bias did not depend on the shape of the objects and ranged from -18% for the 2 mm objects to -2.5% for the 8 mm objects. DICOM image files of the phantom are available to investigators with an interest in using the images to evaluate and compare segmentation procedures.
Automatic scale selection for medical image segmentation
Ersin Bayram, Christopher L. Wyatt, Yaorong Ge
The scale of interesting structures in medical images is space variant because of partial volume effects, the spatial dependence of resolution in many imaging modalities, and differences in tissue properties. Existing segmentation methods either apply a single scale to the entire image or try fine-to-coarse/coarse-to-fine tracking of structures over multiple scales. While single-scale approaches fail to fully recover the perceptually important structures, multi-scale methods have problems in providing reliable means of selecting proper scales and in integrating information over multiple scales. A recent approach proposed by Elder and Zucker addresses the scale selection problem by computing a minimal reliable scale for each image pixel. The basic premise of this approach is that, while the scale of structures within an image varies spatially, the imaging system is fixed. Hence, sensor noise statistics can be calculated. Based on a model of the edges to be detected, and the operators to be used for detection, one can locally compute a unique minimal reliable scale at which the likelihood of error due to sensor noise is less than or equal to a predetermined threshold. In this paper, we improve the segmentation method based on minimal reliable scale selection and evaluate its effectiveness on both simulated and actual medical data.
Segmentation-based method incorporating fractional volume analysis for quantification of brain atrophy on magnetic resonance images
Deming Wang, David M. Doddrell
The partial volume effect is a major problem in brain tissue segmentation on digital images such as magnetic resonance (MR) images. In this paper, special attention is paid to the partial volume effect in developing a method for quantifying brain atrophy. Specifically, the partial volume effect is minimized in the process of parameter estimation prior to segmentation by identifying and excluding those voxels with possible partial volume effect. A quantitative measure of the partial volume effect was also introduced through a model that calculates fractional volumes for voxels containing mixtures of two different tissues. For quantifying cerebrospinal fluid (CSF) volumes, fractional volumes are calculated for two classes of mixture: gray matter with CSF, and white matter with CSF. Tissue segmentation is carried out using 1D and 2D thresholding techniques after the images are intensity-corrected. Threshold values are estimated using the minimum error method. Morphological processing and region identification analysis are used extensively in the algorithm. As an application, the method was employed to evaluate rates of brain atrophy based on serially acquired structural brain MR images. Consistent and accurate rates of brain atrophy have been obtained for patients with Alzheimer's disease as well as for elderly subjects undergoing the normal aging process.
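The two-tissue mixture model implied above has a simple closed form: if a voxel intensity v is modeled as f*mu_a + (1-f)*mu_b, the fraction of tissue a is f = (v - mu_b)/(mu_a - mu_b). A minimal sketch, with assumed class means:

```python
import numpy as np

def tissue_fraction(v, mu_a, mu_b):
    """Fraction of tissue 'a' in a voxel modeled as a two-tissue mixture."""
    f = (v - mu_b) / (mu_a - mu_b)
    return np.clip(f, 0.0, 1.0)        # fractions outside [0, 1] are clamped

mu_gm, mu_csf = 120.0, 40.0            # assumed class mean intensities
voxels = np.array([40.0, 80.0, 120.0])
print(tissue_fraction(voxels, mu_gm, mu_csf))   # -> [0.0, 0.5, 1.0]
```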
Automatic boundary modification of warped basal ganglia template
Enmin Song, Valerie A. Cardenas, Frank Ezekiel, et al.
Accurate segmentation of magnetic resonance images of the brain is of increasing interest in the study of many brain disorders. This paper reports our approach to obtaining the segmentation by warping our segmented template to a target and then automatically modifying the boundary of each structure. Test results show that our approach can increase the overlap between the warped template of the lenticular nucleus and the manually delineated lenticular nucleus by 10% compared with the warping-only approach.
Segmentation of brain MR images: a self-adaptive online vector quantization approach
We present a fully automatic algorithm for brain magnetic resonance (MR) image segmentation. The three-dimensional (3D) volumetric MR dataset is first interpolated to provide an adequate local intensity vector at each voxel. Then a morphological dilation filter and a region growing technique are applied to extract the brain volume, stripping away the skull, scalp and other tissues. Principal component analysis (PCA) is utilized to generate a series of feature vectors from the local vectors via the Karhunen-Loeve (K-L) transformation for those voxels within the extracted region. We choose the first few principal components that account for at least 90% of the total variance, to optimize the dimensionality of the feature vectors. A modified self-adaptive online vector quantization algorithm is then applied to these feature vectors for classification. The presented algorithm requires no prior knowledge of the data distribution except for a maximum number of distinct groups for classification, which can be set based on anatomical knowledge. Numerical analysis of the algorithm and experimental tests on brain MR images are presented. The results demonstrate the efficient, robust, and self-adaptive properties of the presented algorithm.
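The dimensionality-reduction step can be sketched as below: the K-L transform (PCA) of the local intensity vectors, keeping the leading eigenvectors of the covariance matrix until they account for at least 90% of the total variance. The input vectors here are synthetic placeholders.

```python
import numpy as np

def kl_features(X, var_frac=0.90):
    """Project rows of X onto the leading principal components."""
    Xc = X - X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    w, V = w[::-1], V[:, ::-1]              # sort eigenvalues descending
    m = int(np.searchsorted(np.cumsum(w) / w.sum(), var_frac) + 1)
    return Xc @ V[:, :m]                    # reduced feature vectors

# Synthetic 9-dimensional local intensity vectors with decaying variances.
X = np.random.randn(1000, 9) @ np.diag([5, 3, 2, 1, .5, .3, .2, .1, .05])
F = kl_features(X)
print(F.shape)                              # only a few components survive
```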
Efficient 3D volume segmentation of MR images by a modified deterministic annealing approach
This paper presents the results of applying the deterministic annealing (DA) algorithm to simulated magnetic resonance image segmentation. The applicability of this methodology to 3-D segmentation has been rigorously tested using the simulated MRI volumes of normal brain at BrainWeb [8], for the 181 slices and whole volume of different modalities (T1, T2, and PD), without and with various levels of noise and intensity inhomogeneity. With proper thresholding of the clusters formed by the modified DA, almost zero misclassification was achieved in the absence of noise. Even with up to 7% added noise and 40% inhomogeneity, the average misclassification rates of the voxels belonging to white matter, gray matter, and cerebrospinal fluid were found to be less than 5% after median filtering. The accuracy, stability, global optimization and speed of the DA algorithm for 3-D MR image segmentation could provide a more rigorous tool for identifying diseased brain tissues from 3-D MR images than other existing 3-D segmentation techniques. Further inquiry into the DA algorithm shows that it is a Bayesian classifier under the assumption that the data to be classified follow a multivariate normal distribution. Its character as a Bayesian classifier guarantees its achievement of global optimization.
Computerized lung nodule detection: comparison of performance for low-dose and standard-dose helical CT scans
The vast amount of image data acquired during a computed tomography (CT) scan makes lung nodule detection a burdensome task. Moreover, the growing acceptance of low-dose CT for lung cancer screening promises to further impact radiologists' workloads. Therefore, we have developed a computerized method to automatically analyze structures within a CT scan and identify those structures that represent lung nodules. Gray-level thresholding is performed to segment the lungs in each section to produce a segmented lung volume, which is then iteratively thresholded. At each iteration, remaining voxels are grouped into contiguous three-dimensional structures. Structures that satisfy a volume criterion then become nodule candidates. The set of nodule candidates is subjected to feature analysis. To distinguish candidates representing nodule and non-nodule structures, a rule-based approach is combined with an automated classifier. This method was applied to 43 standard-dose (diagnostic) CT scans and 13 low-dose CT scans. The method achieved an overall detection sensitivity of 71% with 1.5 false-positive detections per section on the standard-dose database and 71% sensitivity with 1.2 false-positive detections per section on the low-dose database. This automated method demonstrates promising performance in its ability to accurately detect lung nodules in standard-dose and low-dose CT images.
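The candidate-generation stage described above can be pictured with a short sketch: threshold the segmented lung volume at a series of gray levels, group the surviving voxels into 3D connected components, and keep the components that satisfy a volume criterion. The volume, thresholds, and size limits below are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import label

lungs = np.random.rand(40, 128, 128)             # placeholder lung volume
candidates = []
for t in np.linspace(0.95, 0.80, 4):             # iterative thresholds
    labeled, n = label(lungs > t)                # 3D connected components
    sizes = np.bincount(labeled.ravel())         # voxel count per label
    for i in np.nonzero((sizes >= 5) & (sizes <= 500))[0]:
        if i > 0:                                # label 0 is background
            candidates.append((float(t), int(i), int(sizes[i])))
print(len(candidates), "nodule candidates")
```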
Brain tumor segmentation in MRI by using the fuzzy connectedness method
Jian-Guo Liu, Jayaram K. Udupa, David Hackney, et al.
The aim of this paper is the precise and accurate quantification of brain tumor via MRI. This is very useful in evaluating disease progression, response to therapy, and the need for changes in treatment plans. We use multiple MRI protocols, including FLAIR, T1, and T1 with Gd enhancement, to gather information about different aspects of the tumor and its vicinity: edema, active regions, and scar tissue left over from surgical intervention. We have adapted the fuzzy connectedness framework to segment the tumor and to measure its volume. The method requires only limited user interaction in routine clinical MRI. The first step in the process is to apply an intensity normalization method to the images so that the same body region has the same tissue meaning independent of the scanner and patient. Subsequently, a fuzzy connectedness algorithm is utilized to segment the different aspects of the tumor. The system has been tested for its precision, accuracy, and efficiency, utilizing 40 patient studies. The percent coefficient of variation (%CV) in volume due to operator subjectivity in specifying seeds for fuzzy connectedness segmentation is less than 1%. The mean operator and computer time taken per study is 3 minutes. The package is designed to run under operator supervision. The delineation has been found to agree with the operators' visual inspection most of the time, except in some cases when the tumor is close to the boundary of the brain. In the latter case, the scalp is included in the delineation and the operator has to exclude it manually. The methodology is rapid, robust, consistent, yields highly reproducible measurements, and is likely to become part of the routine evaluation of brain tumor patients in our health system.
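A hedged, minimal 2D sketch of the fuzzy connectedness idea follows: affinity between neighboring pixels is derived from intensity similarity, the strength of a path is its weakest affinity, and a pixel's connectedness to the seed is the strength of its best path, computed by Dijkstra-like propagation. The Gaussian affinity and its sigma are assumptions, not the paper's full affinity model.

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=10.0):
    """Seed-to-pixel connectedness map via max-min path propagation."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        strength, (y, x) = heapq.heappop(heap)
        strength = -strength
        if strength < conn[y, x]:
            continue                        # stale heap entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                diff = float(img[y, x]) - float(img[ny, nx])
                aff = np.exp(-diff * diff / (2 * sigma * sigma))
                s = min(strength, aff)      # path strength = weakest link
                if s > conn[ny, nx]:
                    conn[ny, nx] = s
                    heapq.heappush(heap, (-s, (ny, nx)))
    return conn

img = np.random.normal(100, 3, (64, 64))
img[20:40, 20:40] += 50                     # bright synthetic "tumor"
conn = fuzzy_connectedness(img, seed=(30, 30))
tumor = conn > 0.5                          # threshold the connectedness map
```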
Segmentation of cerebral MRI scans using a partial volume model, shading correction, and an anatomical prior
Aljaz Noe, Stanislav Kovacic, James C. Gee
A mixture-model clustering algorithm is presented for robust MRI brain image segmentation in the presence of partial volume averaging. The method uses additional classes to represent partial volume voxels of mixed tissue type in the image, and probability distributions for partial volume voxels are modeled accordingly. The image model also allows for tissue-dependent variance values, and voxel neighborhood information is taken into account in the clustering formulation. Additionally, we extend the image model to account for a low-frequency intensity inhomogeneity that may be present in an image. This so-called shading effect is modeled as a linear combination of polynomial basis functions and is estimated within the clustering algorithm. We also investigate the possibility of using additional anatomical prior information obtained by registering tissue class template images to the image to be segmented. The final result is the estimated fractional amount of each tissue type present within a voxel, in addition to the label assigned to the voxel. A parallel implementation of the method is evaluated using synthetic and real MRI data.
Identification and classification of spine vertebrae by automated methods
We are currently working toward developing computer-assisted methods for the indexing of a collection of 17,000 digitized x-ray images by biomedical content. These images were collected as part of a nationwide health survey and form a research resource for osteoarthritis and bone morphometry. This task requires the development of algorithms to robustly analyze the x-ray contents for key landmarks, to segment the vertebral bodies, to accurately measure geometric features of the individual vertebrae and inter-vertebral areas, and to classify the spine anatomy into normal or abnormal classes for conditions of interest, including anterior osteophytes and disc space narrowing. Subtasks of this work have been created and divided among collaborators. In this paper, we provide a technical description of the overall task, report on progress made by collaborators, and provide the most recent results of our own research into obtaining a first-order location of the spine region of interest by automated methods. We are currently concentrating on images of the cervical spine, but will expand the work to include the lumbar spine as well. Development of successful image processing techniques for computer-assisted indexing of medical image collections is expected to have a significant impact within the medical research and patient care systems.
Prostate brachytherapy seed segmentation using spoke transform
Steve Lam, Robert Jackson Marks II, Paul S. Cho
Permanent implantation of radioactive seeds is a viable and effective therapeutic option widely used today for early-stage prostate cancer. In order to perform intraoperative dosimetry, the seed locations must be determined accurately and with high efficiency. However, the task of seed segmentation is often hampered by the wide range of signal-to-noise ratios present in the x-ray images due to the highly non-uniform background. To circumvent this problem we have developed a new method, the spoke transform, to segment the seeds from the background. The method uses spoke-like rotating line segments within two concentric windows; for each position, the mean intensity of the pixels falling on the rotated line segment that best describes the intersection with the seed being segmented is chosen. The inner window gives an indication of the background level immediately surrounding the seeds. The outer window is an isolated region not being segmented and represents a non-seed area needed for enhancement and the detection decision. The advantages of the method are its ability (1) to work with spatially varying local backgrounds and (2) to segment hidden seeds. Pd-103 and I-125 images demonstrate the effectiveness of the spoke transform.
Detection of bone disease by hybrid SST-watershed x-ray image segmentation
Saeid Sanei, Mohammad Azron, Ong Sim Heng
Detection of diagnostic features from X-ray images is attractive due to the low cost of these images. Accurate detection of the bone metastasis region greatly assists physicians in monitoring treatment and in removing the cancerous tissue by surgery. Here, a hybrid SST-watershed algorithm efficiently detects the boundary of the diseased regions. The Shortest Spanning Tree (SST), based on graph theory, is one of the most powerful tools in grey-level image segmentation. The method converts the image into arbitrary-shaped closed segments of distinct grey levels. To do so, the image is initially mapped to a tree. Then, using the RSST algorithm, the image is segmented into a certain number of arbitrary-shaped regions. However, in fine segmentation, over-segmentation causes loss of objects of interest. In coarse segmentation, on the other hand, the SST-based method suffers from merging regions belonging to different objects. By applying the watershed algorithm, the large segments are divided into smaller regions based on the number of catchment basins in each segment. The process exploits a bi-level watershed concept to separate each multi-lobe region into a number of areas, each corresponding to an object (in our case, a cancerous region of the bone), disregarding their homogeneity in grey level.
Objective and reproducible segmentation and quantification of tuberous sclerosis lesions in FLAIR brain MR images
Tanja Alderliesten, Wiro J. Niessen, Koen L. Vincken, et al.
A semi-automatic segmentation method for Tuberous Sclerosis (TS) lesions in the brain has been developed. Both T1 images and Fluid Attenuated Inversion Recovery (FLAIR) images are integrated in the segmentation procedure. The segmentation procedure is mainly based on the notion of fuzzy connectedness. This approach uses the two basic concepts of adjacency and affinity to form a fuzzy relation between voxels in the image. The affinity is defined using two quantities that are both based on characteristics of the intensities in the lesion and the surrounding brain tissue (grey and white matter). The semi-automatic method has been compared with the results of manual segmentation. Manual segmentation is prone to inter-observer and intra-observer variability. This was especially true for this particular study, where large variations were observed, which implies that a gold standard for comparison was not available. The method did perform within the variability of the observers and therefore has the potential to improve the reproducibility of quantitative measurements.
Investigation of a method to assess breast ultrasound level of suspicion
Michael P. Andre, Michael Galperin, Linda K. Olson, et al.
Research studies indicate that careful application of breast ultrasound is capable of reducing the number of unnecessary biopsies by 40%, with potential cost savings of as much as $1 billion per year in the U.S. A well-defined rule-based system has been developed for scoring the Level of Suspicion (LOS) based on parameters describing the ultrasound appearance of a breast lesion. Acceptance and utilization of LOS is increasing, but it has proven difficult to teach the method, and many radiologists have felt uncomfortable with the number of benign and malignant masses that overlap in appearance. In practice, the quality of breast ultrasound is highly operator dependent, it is often difficult to reproduce a finding, and there is high variability of lesion description and assessment between radiologists. The goal of this research is to improve the uniformity and accuracy of applying the LOS scheme by automatically detecting, analyzing and comparing breast masses using sophisticated software developed for satellite imagery applications. The aim is to reduce biopsies of the masses with lower levels of suspicion, rather than to increase the accuracy of diagnosis of cancers, which will require biopsy anyway. In this paper we present our approach to developing a system to process, segment, analyze and classify medical images based on information content. A feasibility study was completed on a digital database of biopsy-proven image files from 46 women retrieved chronologically from our image library. Segmentation and classification were sufficiently accurate to correctly group all benign cystic masses, all benign solid masses and all solid malignant masses. The image analysis, computer-aided detection and image classification software system Image Companion, developed by Almen Laboratories, Inc., was used to achieve the presented results.
Ultrasound image texture processing for evaluating fatty liver in peripartal dairy cows
Viren R. Amin, Gerd Bobe, Jerry Young, et al.
The objective of this work is to characterize liver ultrasound texture as it changes in the diffuse disease of fatty liver. This technology could allow non-invasive diagnosis of fatty liver, a major metabolic disorder in early-lactation dairy cows. More than 100 liver biopsies were taken from fourteen dairy cows as part of a USDA-funded study of the effects of glucagon on the prevention and treatment of fatty liver. Up to nine liver biopsies were taken from each cow during a seven-week peripartal period, and total lipid content was determined chemically. Just before each liver biopsy was taken, ultrasonic B-mode images were digitally captured using a 3.5 or 5 MHz transducer. Effort was made to capture images that were non-blurred, free of large blood vessels and multiple echoes, and of consistent texture. From each image, a region of interest of 100-by-100 pixels was processed. Texture parameters were calculated using algorithms such as first- and second-order statistics, 2D Fourier transformation, the co-occurrence matrix, and gradient analysis. Many cows had normal liver (3% to 6% total lipid) and a few developed fatty liver with total lipid up to 15%. The selected texture parameters showed consistent change with changing lipid content and could potentially be used to diagnose early fatty liver non-invasively. The texture analysis approach and initial results on its potential for evaluating total lipid percentage are presented here.
Texture image analysis for osteoporosis detection with morphological tools
Sylvie Sevestre-Ghalila, Amel Benazza-Benyahia, Hichem Cherif, et al.
The disease of osteoporosis manifests itself both in a reduction of bone mass and in a degradation of the microarchitecture of the bone tissue. Radiological images of the heel bone are analyzed in order to extract information about microarchitectural patterns. We first extract the gray-scale skeleton of the microstructures contained in the underlying images. More precisely, we apply the thinning procedure proposed by Mersal, which preserves the connectivity of the microarchitecture. A post-processing of the resulting skeleton then consists of detecting the points of intersection of the trabecular bones (multiple points). The modified skeleton can be considered a powerful tool for extracting discriminant features between Osteoporotic Patients (OP) and Control Patients (CP). For instance, computing the distance between two horizontally (respectively, vertically) adjacent trabecular bones is a straightforward task once the multiple points are available. Statistical tests indicate that the proposed method is more suitable for discriminating between OP and CP than conventional methods based on the binary skeleton.
Energy minimization and region-growing-based interactive image segmentation
Domagoj Kovacevic, Sven Loncaric
In this work, two novel methods for semi-automatic segmentation of medical images are presented. The two developed approaches are compared and the results are discussed. The first method is based on interactive boundary detection, in which the user is assisted by the computer in selecting the border of the desired anatomical region. The user performs segmentation by selecting a sparse series of points along the desired region border. The optimal path (border segment) is calculated based on the minimization of an energy function. The second method is based on an interactive region growing process. The algorithm computes image features along a user-defined path and the computed features are used in the region-growing process. The region selection process is repeated until the user is satisfied with the appearance of the segmented region. The two methods are also compared with two other semi-automatic edge-based and region-based image segmentation techniques.
Total-variational-based optical flow for cardiac-wall motion tracking
Arun Kumar, Steven Haker, Arthur Stillman, et al.
In this note, we apply an L1-based approach to optical flow to measure heart wall motion. Our method captures discontinuities and sudden changes in the flow field much better than conventional quadratic gradient approaches.
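For comparison, plausible textbook forms of the two energies are shown below: an L1/total-variation functional and a quadratic Horn-Schunck-type functional for a flow field (u, v) with image derivatives I_x, I_y, I_t. These are standard forms, not necessarily the authors' exact functional; the quadratic penalties smooth across motion boundaries, while the L1 terms allow piecewise-smooth flow with sharp discontinuities.

```latex
% L1 (total-variation) optical-flow energy vs. the quadratic form.
\begin{align}
E_{\mathrm{L1}}(u,v)   &= \int_{\Omega} \Big( \alpha\,(|\nabla u| + |\nabla v|)
      + \left| I_x u + I_y v + I_t \right| \Big)\, d\mathbf{x},\\
E_{\mathrm{quad}}(u,v) &= \int_{\Omega} \Big( \alpha\,(\|\nabla u\|^2 + \|\nabla v\|^2)
      + \left( I_x u + I_y v + I_t \right)^2 \Big)\, d\mathbf{x}.
\end{align}
```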
Active double-contour for segmentation of vessels in digital subtraction angiography
Manfred Hinz, Klaus D. Toennies, Markus Grohmann, et al.
Successful extraction of small vessels in DSA images requires the inclusion of prior knowledge about vessel characteristics. We developed an active double contour (ADC) that uses a vessel template as a model. The template is fitted to the vessel using an adapted ziplock snake approach based on two user-specified end locations. The external energy terms of the ADC describe an ideal vessel whose projections change their course, width and intensity slowly. A backtracking ability was added that enables overturning local decisions that may cause the ziplock snake to be trapped in a local minimum; this is necessary because the optimization of the ADC is carried out locally. If the total energy indicates such a case, vessel boundary points are removed and the ziplock process starts again without this location in its current configuration. The method was tested on artificial data and DSA data. The former showed good agreement between the artificial vessel and the segmented structure at an SNR as low as 1.5:1. Results from DSA data showed the robustness of the method in the presence of noise and its ability to cope with branchings and crossings. The backtracking was found to overcome local minima of the energy function at artefacts, vessel crossings and in regions of low SNR.
Nonlinear registration using B-spline feature approximation and image similarity
June-Sic Kim, Jae Seok Kim, In Young Kim, et al.
Warping methods are broadly classified into image-matching methods, based on similarity of pixel intensity distributions, and feature-matching methods, which use distinct anatomical features. Feature-based methods may fail to match local variations between two images, although they match features well globally. Similarity-based methods, on the other hand, can produce false matches corresponding to local minima of the underlying energy functions. To avoid the local minima problem, we propose a non-linear deformable registration method utilizing the global information of feature matching and the local information of image matching. To define the features, the gray matter and white matter of brain tissue are segmented by the Fuzzy C-Means (FCM) algorithm. A B-spline approximation technique is used for feature matching. We use a multi-resolution B-spline approximation method that modifies the multilevel B-spline interpolation method: it locally changes the resolution of the control lattice in proportion to the distance between the features of the two images. Mutual information is used as the similarity measure. The deformation fields are locally refined until the similarity is maximized. In tests on two 3D T1-weighted MRI data sets, this method maintained the accuracy of conventional image-matching methods without the local minima problem.
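The similarity measure can be sketched compactly: mutual information estimated from the joint intensity histogram of the two images. The bin count and test images below are placeholder assumptions; registration frameworks often use smoother (e.g. Parzen-window) estimates instead of a raw histogram.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

a = np.random.rand(128, 128)
b = 0.7 * a + 0.3 * np.random.rand(128, 128)    # partially dependent image
print(mutual_information(a, b))                  # higher than for independent images
```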
New fully automatic method for CR image composition by white-band detection and consistency rechecking
Guo-Qing Wei, JianZhong Qian, Zhenyu Wu, et al.
This paper proposes a novel, fully automatic method for composing mosaic images in Computed Radiography. The method combines the detection of white-band edges with a cross-correlation technique. The white-band edges are the positions of overlap lines. Several new kinds of measurements are proposed to evaluate the likelihood of a position being a white-band edge. Multiple checks are used to reject less likely candidates, and an error measure is defined for selecting the most likely ones. The white-band candidate positions are fed to a cross-correlation method to compute the final alignment parameters for mosaic composition.
Adaptive free-form deformation for interpatient medical image registration
A number of methods have been proposed recently to solve nonrigid registration problems. One of these involves optimizing a Mutual Information (MI) based objective function over a regularly spaced grid of basis functions. This approach has produced good results but its computational complexity is inversely proportional to the compliance of the transformation. Transformations able to register two high resolution images on a very local scale need a large number of degrees of freedom. Finding an optimum in such a search space is lengthy and prone to convergence to local maxima. In this paper, we propose a modification to this class of algorithms that reduces their computational complexity and improves their convergence properties. The approach we propose adapts the compliance of the transformation locally. Registration is achieved iteratively, from a coarse to a fine scale. At each level, the gradient of the cost function with respect to the coefficients of a set of compactly supported radial basis functions spread over a regular grid is used to estimate a local adaptation of the grid. Optimization is then conducted over the estimated irregular grid one region at a time. Results show the advantage of the approach we propose over a method without local grid adaptation.
Task-specific comparison of 3D image registration methods
Laszlo G. Nyul, Jayaram K. Udupa, Punam K. Saha
We present a new class of approaches to rigid-body registration and their evaluation in studying multiple sclerosis via multiprotocol MRI. Two pairs of rigid-body registration algorithms were implemented, using cross-correlation and mutual information, operating on the original gray-level images and on the intermediate images resulting from our new scale-based method. In the scale image, every voxel is assigned its local scale value, defined as the radius of the largest sphere centered at the voxel with homogeneous intensities. 3D data of the head were acquired from 10 MS patients using 6 MRI protocols; images in some of the protocols were acquired in registration, and the co-registered pairs were used as ground truth. Accuracy and consistency of the 4 registration methods were measured within and between protocols for known amounts of misregistration. Our analysis indicates that there is no single best method: for medium and large misregistrations, methods using mutual information give the best results, while for small misregistrations and for the consistency tests, correlation methods using the original gray-level images do. We have previously demonstrated the use of local scale information in fuzzy connectedness segmentation and image filtering; as this work suggests, scale may also have considerable potential for image registration.
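As a rough illustration of the scale image, here is a 2-D sketch that assigns each pixel the radius of the largest disk over which intensities stay within a tolerance of the center value; the homogeneity criterion and the parameters `tol` and `r_max` are simplifying assumptions, not the authors' exact definition.

```python
import numpy as np

def local_scale(img, tol=10.0, r_max=10):
    """Per-pixel local scale: radius of the largest disk centered at the
    pixel whose intensities all stay within `tol` of the center value.
    A 2-D analogue of the scale image used for scale-based registration."""
    h, w = img.shape
    scale = np.zeros((h, w), dtype=int)
    yy, xx = np.mgrid[-r_max:r_max + 1, -r_max:r_max + 1]
    dist = np.hypot(yy, xx)                       # distances inside the window
    pad = np.pad(img.astype(float), r_max, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * r_max + 1, x:x + 2 * r_max + 1]
            diff = np.abs(patch - img[y, x])
            r = 1
            # Grow the disk until homogeneity fails or r_max is reached.
            while r <= r_max and np.all(diff[dist <= r] <= tol):
                r += 1
            scale[y, x] = r - 1
    return scale
```

Large scale values mark the interiors of homogeneous regions, which is why registration on scale images can behave differently from registration on raw gray levels.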
Fast noniterative registration of magnetic resonance images
Nicolino J. Pizzi, Murray Alexander, Rodrigo A. Vivanco, et al.
EvIdent (EVent IDENTification) is an exploratory data analysis system for the detection and investigation of novelty, identified by a region of interest and its characteristics, within a set of images. In functional magnetic resonance imaging, for instance, a characteristic of the region of interest is its time course, which represents the intensity values of voxels over several discrete instances in time. An essential preprocessing step is the rapid registration of these images prior to analysis. Two-dimensional image registration coefficients are obtained within EvIdent by solving a regression problem based on integrating a linearized matching equation over a set of patches in the image space. The registration method is robust to noise, offers a flexible hierarchical procedure, generalizes easily to 3D registration, and is well suited to parallel processing. EvIdent, written in Java and C++, offers a sophisticated data model, an extensible algorithm framework, and a suite of graphical user interface constructs. We describe the registration algorithm and its implementation within the EvIdent software.
Nonrigid multimodality image registration
David Mattes, David R. Haynor, Hubert Vesselle, et al.
We have designed, implemented, and validated an algorithm capable of 3D PET-CT registration in the chest, using mutual information as the similarity criterion. Inherent differences in the imaging protocols produce significant nonlinear motion between the two acquisitions. To recover this motion, local deformations modeled with cubic B-splines are incorporated into the transformation. The deformation is defined on a regular grid and is parameterized by potentially several thousand coefficients. Together with a spline-based continuous representation of the images and Parzen histogram estimates, the deformation model allows closed-form expressions for the criterion and its gradient. A limited-memory quasi-Newton optimization package is used in a hierarchical multiresolution framework to automatically align the images. To characterize the performance of the algorithm, 27 scans from patients involved in routine lung cancer screening were used in a validation study. The registrations were assessed visually by two observers in specific anatomic locations using a split-window validation technique. The visually reported errors are in the 0-6 mm range and the average computation time is 100 minutes.
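For readers unfamiliar with the criterion, the following sketch computes mutual information from a joint histogram; a plain histogram stands in for the smooth Parzen-window estimate used in the paper, which is what makes the criterion differentiable with respect to the B-spline coefficients.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from their joint
    intensity histogram:  MI = sum p(x,y) log( p(x,y) / (p(x) p(y)) )."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)            # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of image b
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Sanity check: an image shares maximal information with itself,
# and almost none with independent noise.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
print(mutual_information(a, a))                    # large
print(mutual_information(a, rng.random((64, 64)))) # near zero
```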
Automatic nonrigid registration of MR and PET images
Junki Lee, June-Sic Kim, Jae Seok Kim, et al.
Registration of functional PET and MR images is a necessary step for combining functional information from PET with anatomical information from MR. However, published methods are either non-automatic or limited to rigid-body transformations. In this paper, we present a method that maps the PET image onto the MR image with an automatic nonrigid transformation. The method consists of two main parts. The first part segments and extracts features from both MR and PET images using FCM clustering and morphological methods. The second part nonrigidly maps the PET image onto a PET template and the MR image onto an MR template; the templates were made from 20 PET images and 20 MR images, respectively, and the MR template is registered with the PET template. For the nonrigid mapping, we use a Bayesian framework in which statistical information about the imaging process is combined with prior information on expected template deformations to make inferences about the parameters of the deformation field. The method defines a new intensity similarity between the deforming scan and the target brain; this similarity, combined with the prior information, is used to generate the deformation field. We applied our algorithm to PET and T1-weighted MR images from many patients. The registered images were validated by physicians, with satisfactory results.
Automatic detection of endplates and pedicles in spine x-ray images
Guo-Qing Wei, JianZhong Qian, Helmut F. Schramm
Endplates and pedicles are important anatomical structures for deformity analysis of the spine in radiographs. The first part of the paper presents an evidence-reasoning approach to endplate detection: multiple pieces of local visual evidence about the presence of an endplate at an image point are first computed and then combined with prior knowledge about vertebra shape to arrive at a consistent and robust detection. The second part presents a learning-based method for pedicle detection, in which variations in pedicle shape are learned automatically. Data compression techniques are used both to reduce the data dimension for fast training and detection and to enable a multi-scale search without multi-scale training.
Characterizing populations and searching for diagnostics via elastic registration of MRI images
David Pettey, James C. Gee
Given image data from two distinct populations and a family of functions, we find the scalar discriminant function that best discriminates between the populations. The goals are two-fold: first, to construct a discriminant function that can accurately and reliably classify subjects via the image data; second, the best discriminant allows us to see which features in the images distinguish between the populations, and these features can guide us to characteristic differences between the two groups even when the differences are not sufficient to perform classification. We apply our method to mid-sagittal MRI sections of the corpus callosum from 34 males and 52 females. While we are not certain of the ability of the derived discriminant function to perform sex classification, we find that regions in the anterior of the corpus callosum appear to be more important for the discriminant function than other regions. This indicates there may be significant differences in the relative size of the splenium in males and females, as has been reported elsewhere. Notably, however, when we applied previous methods that support this view to our larger data set, we found that they no longer show statistically significant differences between the male and female splenium.
Self-organizing features for regularized standardization of brain images
Didem Gokcay, John G. Harris, Christiana M. Leonard, et al.
Normal variability of anatomy is a key issue in standardization. By reducing normal variability, functional activity from multiple subjects can be overlaid to study localization, and variability outside normal ranges can be used to report abnormalities. Most existing global standardization methods fail to align individual anatomic structures. We propose a semi-automatic, feature-based standardization technique to complement these global methods; its benefits are speed and accuracy in local alignment. The method consists of three phases. In phase one, templates are generated from the atlas structures using self-organizing maps (SOMs), with the parameters of each SOM determined by a new topology evaluation technique. In phase two, the atlas templates are reconfigured using points from individual features to establish a one-to-one correspondence between the atlas and individual structures; during training, a regularization procedure can optionally be invoked to guarantee smoothness in areas where the discrepancy between the atlas and the individual feature is high. In the final phase, difference vectors are generated from the corresponding points of the atlas and individual structures, and the whole image is warped by interpolating the difference vectors with Gaussian radial basis functions, whose parameters are determined by minimizing the membrane energy. Results are demonstrated on simulated features, as well as on selected sulci in brain MRIs.
Mammogram registration using the Cauchy-Navier spline
Michael A. Wirth, Christopher Choi
Comparative analysis involves inspecting mammograms for characteristic signs of potential cancer by comparing analogous mammograms. Factors such as the deformable behavior of the breast, changes in breast positioning, and the amount and geometry of compression may contribute to spatial differences between corresponding structures in corresponding mammograms, significantly complicating comparative analysis. Mammogram registration is a process whereby these spatial differences can be reduced. This paper presents a nonrigid approach to matching corresponding mammograms based on a physical registration model. Many of the earliest approaches to mammogram registration used spatial transformations that were innately rigid or affine in nature; more recently, algorithms have incorporated radial basis functions such as the thin-plate spline. The approach presented here focuses on the Cauchy-Navier spline, a deformable registration model which offers approximate nonrigid registration. The utility of the Cauchy-Navier spline is illustrated by matching both temporal and bilateral mammograms.
Tissue color image segmentation and analysis for automated diagnostics of adenocarcinoma of the lung
Mohamed Sammouda, Noboru Niki, Toshiro Niki, et al.
Designing and developing computer-assisted image processing techniques to help doctors improve their diagnoses has received considerable interest over the past years. In this paper, we present a method for segmentation and analysis of lung tissue that can assist in the diagnosis of adenocarcinoma of the lung. The segmentation problem is formulated as the minimization of an energy function analogous to that of a Hopfield neural network (HNN) for optimization; we modify the HNN to reach a state close to the global minimum in a pre-specified convergence time. The energy function is constructed from two terms: a cost term, defined as a sum of squared errors, and a temporary noise term added to the network as an excitation to escape certain local minima and approach the global minimum. Each lung color image is represented in the RGB and HSV color spaces, and the segmentation results are presented comparatively. Furthermore, the nuclei are automatically extracted based on a threshold of the green-channel histogram, and the nucleus radius is computed as the radius of the maximum circle drawable inside the object. Finally, all nuclei with abnormal size are extracted, and their morphology is drawn automatically on the raw tissue image. These results can provide pathologists with more accurate quantitative information that can greatly help in the final decision.
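The "maximum drawable circle" radius can be computed directly from a Euclidean distance transform, as in this sketch; the use of scipy here is our choice of tooling, not necessarily the authors'.

```python
import numpy as np
from scipy import ndimage

def nucleus_radius(mask):
    """Radius of the maximum circle drawable inside a binary nucleus mask.
    The Euclidean distance transform gives, at each interior pixel, the
    distance to the nearest background pixel; its maximum is the radius
    of the largest inscribed circle."""
    dist = ndimage.distance_transform_edt(mask)
    return float(dist.max())

# Toy nucleus: a filled disk of radius 5 recovers a radius of about 5.
yy, xx = np.mgrid[:32, :32]
mask = (yy - 16) ** 2 + (xx - 16) ** 2 <= 5 ** 2
print(nucleus_radius(mask))   # ~5
```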
Stimulus representations that are invariant under invertible transformations of sensor data
David N. Levin M.D.
Humans have a remarkable ability to perceive the constancy of a stimulus even though its appearance has changed due to factors extrinsic to it. This paper shows how sensory devices can invariantly represent stimuli even though their sensor states may have been transformed by factors extrinsic to the stimuli. Such transformations may be caused by changes of observational conditions such as: 1) alterations of the device's sensory apparatus, 2) changes in the observational environment external to the sensory device and the stimuli, and 3) modifications of the presentation of the stimuli themselves. The stimulus representations are invariant because they describe certain relationships of each sensor state to the time series of recently encountered sensor states, and these relationships are unchanged by any invertible transformation of sensor states. The paper describes three analytic methods of creating such representations, drawing on tools that include tensor calculus and affine-connected differential geometry. These techniques may be useful for designing a representation engine that comprises the front end of an intelligent sensory device: it could create stimulus representations that are amenable to pattern analysis because they are unaffected by many factors extrinsic to the stimuli.
3D quantitative analysis of brain SPECT images
Sven Loncaric, Ivan Ceskovic, Ratimir Petrovic, et al.
The main purpose of this work is to develop a computer-based technique for the quantitative analysis of 3-D brain images obtained by single photon emission computed tomography (SPECT). In particular, the volume and location of the ischemic lesion and penumbra are important for the early diagnosis and treatment of infarcted regions of the brain, and SPECT imaging is typically used as a diagnostic tool to assess the size and location of the ischemic lesion. The segmentation method presented in this paper utilizes a 3-D deformable model to determine the size and location of the regions of interest; the evolution of the model is computed using a level-set implementation of the algorithm. In addition to the 3-D deformable model, the method utilizes edge detection and region growing for pre-processing. Initial experimental results have shown that the method is useful for SPECT image analysis.
New approach to evaluate rotation of cervical vertebrae
Matthias Hahn
Functional deficits after whiplash injury can be analyzed with a relatively novel radiologic method by examining joint blocks at C0/1 and C1/2. To this end, the mobility of C0, C1 and C2 is determined with three spiral CT scans of the patient's cervical spine: one series in neutral position and one each in maximal active lateral rotation to the right and to the left. Previous methods were slice-based and time-consuming when evaluated manually. We propose a new approach that computes these angles in 3D. After a threshold segmentation of bone tissue, a rough 2D classification of C0, C1 and C2 takes place in each rotation series. The center of axial rotation for each vertebra is obtained from an approximation of its center of gravity, and the rotation itself is estimated by cross-correlation of the radial distance functions. These rotation estimates are used to initialize a 3D matching algorithm based on the sum of squared intensity differences; the optimal match of the vertebrae is computed with the multidimensional Powell minimization algorithm over a six-dimensional search space of three translational and three rotational components. The vertebra detection and rotation computation are fully automatic.
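A sketch of the rotation-initialization idea: build a radial distance function around each vertebra's center of gravity and recover the in-plane rotation as the peak of the circular cross-correlation of the two profiles. The angular binning and the max-radius profile definition are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def rotation_by_radial_profile(mask_a, mask_b, n_angles=360):
    """Estimate the in-plane rotation (in degrees) between two binary
    vertebra masks by circular cross-correlation of their radial distance
    functions about the centroid."""
    def radial_profile(mask):
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()                  # center of gravity
        ang = np.arctan2(ys - cy, xs - cx)
        rad = np.hypot(ys - cy, xs - cx)
        bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
        prof = np.zeros(n_angles)
        for b, r in zip(bins, rad):                    # max radius per angle bin
            prof[b] = max(prof[b], r)
        return prof

    pa, pb = radial_profile(mask_a), radial_profile(mask_b)
    # Circular cross-correlation via FFT; the peak lag is the rotation.
    corr = np.fft.ifft(np.fft.fft(pa) * np.conj(np.fft.fft(pb))).real
    lag = int(np.argmax(corr))
    return lag if lag <= n_angles // 2 else lag - n_angles
```

Because the profile is periodic, the FFT-based correlation handles the wrap-around at 360 degrees for free, which is why it is a natural fit for rotation estimation.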
Digital mammography: a weak continuity texture representation for detection of microcalcifications
Barbara Caputo, Giovanni E. Gigante
This paper proposes a weak continuity texture representation (WCTR) method for detecting clustered microcalcifications in digitized mammograms, and compares it with other texture-analysis methods (co-occurrence matrices, Gabor energy masks, and wavelet filters). WCTR is a new method for texture representation based on characterizing textures through statistics of their coarseness. From edge maps obtained by a weak membrane at different noise levels, density values are computed that are representative of the texture coarseness. We chose six different noise levels; each texture class is thus represented by six edge-density values. Textural features extracted using the four methods are used to discriminate between positive ROIs containing clustered microcalcifications and negative ROIs containing normal tissue; a three-layer backpropagation neural network is employed as the classifier, and ROC analysis is used to evaluate classification performance. From an original database of 151 ROIs, two different combinations of training and testing sets are used: 50/70 training cases and 101/81 testing cases. The best performance is obtained with the WCTR method in both cases (92% and 93%, respectively). These results show the effectiveness of WCTR for the detection of microcalcifications in mammographic images.
Application of adaptive boosting to EP-derived multilayer feed-forward neural networks (MLFN) to improve benign/malignant breast cancer classification
Walker H. Land Jr., Timothy D. Masters, Joseph Y. Lo, et al.
A new neural network technology was developed for improving the benign/malignant diagnosis of breast cancer using mammogram findings. Adaptive boosting (AB), a new machine learning paradigm, uses a markedly different theory for solving computational intelligence (CI) problems: it focuses on finding weak learning algorithms that initially need to perform only slightly better than chance (i.e., approximately 55%) on a mammogram training set. Then, by successively developing additional architectures on the training set, the adaptive boosting process improves the performance of the basic evolutionary programming (EP) derived neural network architectures. The outputs of these several EP-derived hybrid architectures are then intelligently combined and tested on a similar validation mammogram data set. Optimization focused on improving specificity and positive predictive value at very high sensitivities, where an analysis of the hybrid's performance would be most meaningful. Using the DUKE mammogram database of 500 biopsy-proven samples, this hybrid was able to achieve, on average under 5-fold cross-validation, a specificity of 48.3% and a positive predictive value (PPV) of 51.8% while maintaining 100% sensitivity. At 97% sensitivity, a specificity of 56.6% and a PPV of 55.8% were obtained.
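A compact sketch of the adaptive boosting loop, using decision stumps as the weak learners; the paper's weak learners are EP-derived neural networks, so the stump here is only a stand-in to make the reweighting mechanics concrete.

```python
import numpy as np

def adaboost(X, y, weak_learner, n_rounds=10):
    """Minimal AdaBoost: reweight the training cases so that each new weak
    learner focuses on the cases the ensemble so far got wrong.
    y must be coded +1 (malignant) / -1 (benign)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        h = weak_learner(X, y, w)              # train on the weighted set
        pred = h(X)
        err = np.sum(w * (pred != y))
        if err >= 0.5:                         # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * pred)         # up-weight the mistakes
        w /= w.sum()
        ensemble.append((alpha, h))
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in ensemble))

def stump(X, y, w):
    """Weak learner: best single-feature threshold under weights w."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.sign(X[:, j] - thr + 1e-12)
                e = np.sum(w * (pred != y))
                if best is None or e < best[0]:
                    best = (e, j, thr, s)
    _, j, thr, s = best
    return lambda Xq: s * np.sign(Xq[:, j] - thr + 1e-12)
```

The key point is the weight update: cases the current ensemble misclassifies gain weight, so the next learner is forced to concentrate on exactly the hard cases near the decision boundary.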
Analyzing multimodality tomographic images and associated regions of interest with MIDAS
Wai-Hon Tsui, Henry Rusinek, Peter Van Gelder, et al.
This paper outlines the design and features incorporated in a software package for analyzing multi-modality tomographic images. The package MIDAS has been evolving for the past 15 years and is in wide use by researchers at New York University School of Medicine and a number of collaborating research sites. It was written in the C language and runs on Sun workstations and Intel PCs under the Solaris operating system. A unique strength of the MIDAS package lies in its ability to generate, manipulate and analyze a practically unlimited number of regions of interest (ROIs). These regions are automatically saved in an efficient data structure and linked to associated images. A wide selection of set theoretical (e.g. union, xor, difference), geometrical (e.g. move, rotate) and morphological (grow, peel) operators can be applied to an arbitrary selection of ROIs. ROIs are constructed as a result of image segmentation algorithms incorporated in MIDAS; they also can be drawn interactively. These ROI editing operations can be applied in either 2D or 3D mode. ROI statistics generated by MIDAS include means, standard deviations, centroids and histograms. Other image manipulation tools incorporated in MIDAS are multimodality and within modality coregistration methods (including landmark matching, surface fitting and Woods' correlation methods) and image reformatting methods (using nearest-neighbor, tri-linear or sinc interpolation). Applications of MIDAS include: (1) neuroanatomy research: marking anatomical structures in one orientation, reformatting marks to another orientation; (2) tissue volume measurements: brain structures (PET, MRI, CT), lung nodules (low dose CT), breast density (MRI); (3) analysis of functional (SPECT, PET) experiments by overlaying corresponding structural scans; (4) longitudinal studies: regional measurement of atrophy.
Computerized classification of liver disease in MRI using an artificial neural network
Xuejun Zhang, Masayuki Kanematsu, Hiroshi Fujita, et al.
We developed software called LiverANN, based on the artificial neural network (ANN) technique, for distinguishing the pathologies of focal liver lesions in magnetic resonance (MR) imaging. It helps radiologists integrate the imaging findings from different pulse sequences and raises diagnostic accuracy, even for radiologists inexperienced in liver MR imaging. In each patient, regions of focal liver lesions on T1-weighted, T2-weighted, and gadolinium-enhanced dynamic MR images obtained in the hepatic arterial and equilibrium phases were delineated by a radiologist (M.K.); the program then automatically converted the brightness and homogeneity within the selected areas into numerical data used as input signals to the ANN. The ANN outputs were five categories of focal hepatic disease: liver cyst, cavernous hemangioma, dysplasia, hepatocellular carcinoma, and metastasis. Fifty cases were used for training the ANN and 30 cases for testing its performance. The LiverANN classified the five types of focal liver lesions with a sensitivity of 93%, demonstrating the ability of the ANN to fuse the complex relationships among imaging findings from different sequences; ANN-based software may thus provide radiologists with a referential opinion during the radiologic diagnostic procedure.
Detection of clustered microcalcifications in masses on mammograms by artificial neural networks
The existence of a cluster of microcalcifications in a mass area on a mammogram is one of the important features for distinguishing between benign and malignant breast cancer. However, missed detections often occur because of the low subject contrast in the denser background and the small number of microcalcifications. To achieve higher performance in detecting such clusters in mass areas, we combined a shift-invariant artificial neural network (SIANN) with a triple-ring filter (TRF) method in our computer-aided diagnosis (CAD) system. 150 regions of interest around masses, containing both positive and negative microcalcifications, were selected for training the network by a modified error-back-propagation algorithm. A variable-ring filter was used to eliminate false-positive (FP) detections after the outputs of the SIANN and TRF. The remaining FPs were then reduced by a conventional three-layer artificial neural network. Finally, the program distinguished clustered microcalcifications from individual microcalcifications. In a practical detection of 30 cases with 40 clusters in masses, the sensitivity of detecting clusters was improved from 90% by our previous method to 95% using both SIANN and TRF, while the number of FP clusters was decreased from 0.85 to 0.40 clusters per image.
Computerized analysis of lesions in 3D MR breast images
He Wang, Bin Zheng, Walter F. Good, et al.
In this paper, a novel method is used for computerized lesion detection and analysis in three-dimensional (3D) contrast-enhanced MR breast images. The automatic analysis involves three steps: 1) alignment between series; 2) extraction of suspicious regions; and 3) application of feature classification to each region. Assuming that only small geometric deformations remain after global registration, we adopted a 3D thin-plate-spline based registration method in which the control points are determined using the 3D gradient and local correlation. Experiments show superior correlation between neighboring slices with 3D alignment compared to a previous two-dimensional (2D) method. After registration, a new series of enhancement rate images (ERIs) is created, and suspicious volumes of interest (VOIs) are identified by 3D region labeling after thresholding the ERIs. Since carcinomas can typically be characterized by irregular borders and rapid, high uptake of contrast followed by washout, a set of morphological features (irregularity, spiculation index, etc.) and enhancement features (small-volume enhancement rate, slope of average rate, etc.) is calculated for the selected VOIs and evaluated in a rule-based classifier to distinguish malignant lesions from benign lesions or normal tissue.
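The abstract does not spell out how the ERIs are defined; one plausible formulation, shown below purely as an assumption, is the percent signal increase per minute between a pre-contrast and a post-contrast volume.

```python
import numpy as np

def enhancement_rate_image(pre, post, t_minutes, floor=1.0):
    """Voxelwise enhancement rate between a pre-contrast and a post-contrast
    volume: percent signal increase per minute.  A hypothetical stand-in
    for the paper's ERI definition."""
    pre = np.maximum(pre.astype(float), floor)   # guard against divide-by-zero
    return 100.0 * (post - pre) / (pre * t_minutes)

# Threshold the ERI to obtain a candidate-VOI mask for region labeling.
eri = enhancement_rate_image(np.full((4, 4, 4), 100.0),
                             np.full((4, 4, 4), 180.0), t_minutes=2.0)
voi_mask = eri > 30.0   # 30 %/min is an illustrative threshold
```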
Development of a multiple-template matching technique for removal of false positives in a computer-aided diagnostic scheme
Qiang Li, Shigehiko Katsuragawa, Roger M. Engelmann, et al.
A problem with most current computer-aided diagnostic (CAD) schemes is the relatively large number of false positives that are incorrectly reported as nodules. Therefore, in this study, we developed a multiple-template matching technique to significantly reduce the number of false positives in our CAD scheme. Applying this technique to our CAD scheme for the detection of pulmonary nodules in chest radiographs, we removed a large fraction of the false positives (44.3%) at the cost of only a small fraction of the true positives (2.3%). We believe that this technique has the potential to significantly improve the performance of many different CAD schemes for the detection of various lesions in medical images, including nodules in chest radiographs, masses and microcalcifications in mammograms, and nodules, colon polyps, liver tumors, and aneurysms in CT images.
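A minimal sketch of template-based false-positive rejection using normalized cross-correlation; a single correlation threshold against a library of known false-positive templates stands in for the paper's multiple-template technique, whose details will differ.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized ROIs."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def is_false_positive(candidate_roi, fp_templates, threshold=0.8):
    """Reject a nodule candidate if it correlates strongly with any
    template in a library of known false-positive patterns.
    `threshold` is an illustrative value."""
    return max(ncc(candidate_roi, t) for t in fp_templates) > threshold
```

Because NCC is invariant to linear intensity changes, templates collected from different radiographs can be compared without explicit intensity calibration.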
Computer-aided diagnosis system for coronary artery stenosis using a neural network
Kenji Suzuki, Isao Horiba, Noboru Sugie, et al.
We have developed a new computer-aided diagnosis system for coronary artery stenosis that can learn medical doctors' clinical experience and medical knowledge. To develop such a system, we employed a multilayer neural network (NN), which has the capability to learn experts' experience and knowledge. The proposed system consists of (a) automatic vessel tracking, (b) automatic extraction of the vessel edges, and (c) estimation of stenosis based on the NN. To evaluate the performance of the proposed system, two experiments were performed, with phantoms and with clinical images. The stenoses estimated by the proposed system agreed well both with stenoses based on actual measurement of the phantoms and with those diagnosed by a medical specialist from coronary arteriograms. The experimental results show that the proposed system can learn medical doctors' clinical experience and medical knowledge, and that it is a useful aid in diagnosing coronary artery stenosis.
Automated classification of mammographic microcalcifications by using artificial neural networks and ACR BI-RADS criteria
Takeshi Hara, Akitsugu Yamada, Hiroshi Fujita, et al.
We have been developing an automated detection scheme for mammographic microcalcifications as part of a computer-assisted diagnosis (CAD) system. The purpose of this study is to develop an automated classification technique for the detected microcalcifications. The distribution type of calcifications is known to be significantly relevant to the probability of malignancy and is described in the ACR BI-RADS (Breast Imaging Reporting and Data System), which illustrates five typical types: diffuse/scattered, regional, segmental, linear, and clustered. Microcalcifications detected by our CAD system are classified automatically into one of these five types based on the shape of the grouped microcalcifications and the number of microcalcifications within the grouped area. The distribution type and other general image feature values are analyzed by artificial neural networks (ANNs), which indicate the probability of malignancy. Eighty mammograms with biopsy-proven microcalcifications were employed, digitized with a laser scanner at a pixel size of 0.1 mm and 12-bit density depth. The sensitivity and specificity were both 93%. The performance was significantly improved in comparison with the case in which the five BI-RADS criteria were not employed.
CAD system for the assistance of a comparative reading for lung cancer using retrospective helical CT images
The objective of our study is to develop a new computer-aided diagnosis (CAD) system that effectively supports comparative reading of serial helical CT images for lung cancer screening without using film display. The positions of pulmonary shadows can differ between serial helical CT images because the size and shape of the lung change with the amount of inspired air. We analyzed the motion of the pulmonary structures using 17 serial case pairs that differ in inspired air. The algorithm consists of a process for extracting regions of interest, such as the lung, heart, and blood vessel regions, using thresholding and the fuzzy c-means method, and a process for comparing each region in the serial CT images using template matching. We validated the efficiency of this algorithm by applying it to images of 60 subjects. The algorithm compared the slice images correctly in most combinations from the physician's point of view. The experimental results indicate that our CAD system, which does not require film display, is useful for increasing the efficiency of the mass screening process.
Computer-aided differential diagnosis of pulmonary nodules based on a hybrid classification approach
Yoshiki Kawata, Noboru Niki, Hironobu Omatsu, et al.
We are developing computerized feature extraction and classification methods to analyze malignant and benign pulmonary nodules in 3D thoracic CT images. Internal structure features were derived from CT density and 3D curvatures to characterize the inhomogeneity of the CT density distribution inside the nodule. In the classification step, we combined an unsupervised k-means clustering (KMC) procedure with a supervised linear discriminant (LD) classifier. The KMC procedure classified the sample nodules into two classes using the mean CT density values of two regions, a core region and the complement of the core region, in the 3D nodule image. An LD classifier was then designed for each class using the internal structure features, with a forward stepwise procedure used to select the best feature subset from the multi-dimensional feature spaces. The discriminant scores output from the classifier were analyzed by receiver operating characteristic (ROC) methods, and the classification accuracy was quantified by the area, Az, under the ROC curve. We analyzed a data set of 248 pulmonary nodules in this study. The hybrid classifier was more effective than the LD classifier alone in distinguishing malignant from benign nodules, and the improvement was statistically significant. These results indicate the potential of combining the KMC procedure and the LD classifier for computer-aided classification of pulmonary nodules.
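A sketch of the two stages of the hybrid classifier, under simplifying assumptions: k-means with k = 2 on a scalar density feature, followed by a Fisher linear discriminant trained within each resulting class (the paper's stepwise feature selection is omitted).

```python
import numpy as np

def kmeans_two_class(values, n_iter=50):
    """Split nodules into two classes by k-means (k = 2) on a scalar
    feature, e.g. the mean CT density of the core region.  Assumes the
    feature actually separates into two groups."""
    v = np.asarray(values, dtype=float)
    c = np.array([v.min(), v.max()])               # initial centers
    for _ in range(n_iter):
        labels = np.abs(v[:, None] - c[None, :]).argmin(axis=1)
        new_c = np.array([v[labels == k].mean() for k in (0, 1)])
        if np.allclose(new_c, c):
            break
        c = new_c
    return labels

def fisher_ld(X, y):
    """Fisher linear discriminant trained within one k-means class.
    Returns a function mapping feature vectors to discriminant scores."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # pooled scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return lambda Xq: Xq @ w
```

Training one LD per k-means class lets each discriminant specialize, which is the plausible source of the hybrid's advantage over a single global LD.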
3D image guidance in radiotherapy: a feasibility study
Matthias Ebert, Burkhard A. Groh, Mike Partridge, et al.
Currently, a major research field in radiotherapy is patient setup verification and the detection of organ motion and deformation. A phantom study is performed to demonstrate the feasibility of image guidance in radiotherapy. Patient setup errors are simulated with a humanoid phantom, which is imaged using a linear accelerator and a therapy simulator for megavoltage and kilovoltage (kV) computed tomography (CT), respectively. Projections are recorded by a flat-panel imager. The various data sets of the humanoid phantom are compared by mutual information matching. The CT investigations show that the spatial resolution is better than 1.6 mm for high-contrast objects. The uncertainties remaining after mutual information matching are less than 1 mm for translations and 1 degree for rotations. The phantom study indicates that patient setup errors as well as organ motion or deformation can be detected with high accuracy, especially if a kV x-ray tube can be attached to the linear accelerator. The presented method allows sophisticated quality assurance of beam delivery in each fraction and may even enable new concepts of adaptive radiotherapy.
Accurate lumen surface roughness measurement method in carotid atherosclerosis
Chao Han, Thomas S. Hatsukami, Chun Yuan
Lumen surface quality is one characteristic used to characterize flow disturbances generated by small atherosclerotic lesions. Mean curvature and Gaussian curvature are local differential-geometric shape descriptors from classical differential geometry: Gaussian curvature represents intrinsic surface geometry, whereas mean curvature is extrinsic, at individual surface points. Here, we have chosen the Gaussian curvature to characterize the lumen surface quality of the carotid artery, referred to as roughness. An accurate roughness measurement method for carotid arteries, based on a surface triangulation representation, is presented. The method addresses three sub-problems: 1) representation of contours, 2) optimal surface tiling, and 3) calculation of roughness. Its main advantages are that 1) high-curvature points are preserved, 2) roughness is calculated without explicit derivative estimates, and 3) the accuracy of the roughness measurement is controlled using an area threshold, which determines the approximation error of the surface. The technique is theoretically sound and will permit further studies to determine the association between roughness and the pathogenesis of carotid atherosclerosis.
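Discrete Gaussian curvature on a triangulated surface can indeed be computed without derivative estimates, for example via the classical angle-deficit formula; the sketch below assumes a closed, consistently triangulated surface and is not necessarily the paper's exact estimator.

```python
import numpy as np

def gaussian_curvature(vertices, triangles):
    """Discrete Gaussian curvature at each vertex of a triangulated surface
    via the angle deficit:  K = (2*pi - sum of incident angles) / (A / 3),
    where A is the total area of the incident triangles.  Boundary
    vertices (open surfaces) are not handled correctly by this formula."""
    V, T = np.asarray(vertices, float), np.asarray(triangles, int)
    deficit = np.full(len(V), 2 * np.pi)
    area = np.zeros(len(V))
    for tri in T:
        p = V[tri]
        for i in range(3):
            a, b, c = p[i], p[(i + 1) % 3], p[(i + 2) % 3]
            u, v = b - a, c - a
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            deficit[tri[i]] -= np.arccos(np.clip(cosang, -1.0, 1.0))
        tri_area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
        area[tri] += tri_area / 3.0        # distribute area to the 3 vertices
    return deficit / np.maximum(area, 1e-12)
```

On a flat region the incident angles sum to exactly 2π and K vanishes, so only genuinely curved (rough) patches of the lumen contribute.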
Automatic detection of lung nodules from multislice low-dose CT images
Li Fan, Carol L. Novak, JianZhong Qian, et al.
We describe in this paper a novel, efficient method to automatically detect lung nodules from low-dose, high-resolution CT (HRCT) images taken with a multi-slice scanner. First, the program identifies initial anatomical seeds, including lung nodule candidates, airways, vessels, and other features that appear as bright opacities in CT images. Next, a 3D region growing method is applied to each seed; the thresholds for segmentation are adaptively adjusted based on automatic analysis of the local histogram. Once an object has been examined, vessels and other non-nodule objects are quickly excluded from further study, saving computation time. Finally, the extracted 3D objects are classified as nodule candidates or non-nodule structures, using anatomical knowledge and multiple measurements, such as volume and sphericity, to categorize each object. The detected nodules are presented to the user for examination and verification. The proposed method was applied to 14 low-dose HRCT patient studies. Since the CT images were taken with a multi-slice scanner, the average number of slices per study was 292. In every case the x-ray exposure was about 20 mAs, a dosage suitable for screening. In our preliminary results, the method detected an average of 8 nodules per study, with an average size of 3.3 mm in diameter.
Automatic detection of cellular necrosis in epithelial cell cultures
Andres Santos, Cristina Ramiro, Manuel Desco, et al.
Automatic discrimination and quantification of live and dead cells in phase contrast microscopy images allows in vivo analysis of the viability of cultured cells without staining. Unsupervised segmentation based on texture analysis classifies each image region into three groups: live cells, necrotic cells, and background. The segmentation relies on three discriminant functions built from a total of 12 parameters derived from the histogram and the co-occurrence matrix; these parameters were selected by performing a discriminant analysis on a training set that included images from three different cultures. Once images are automatically segmented, the approximate numbers of live and dead cells are obtained by dividing each area by the average size of each cell type. The number and percentage of live and necrotic cells were obtained for primary cell cultures at 48-hour intervals over two weeks. The results were compared with the figures given by an experienced human observer, showing very good correlation (Pearson's coefficient 0.95, kappa 0.87). A reliable and easy-to-use tool has been developed. It provides quantitative results on phase contrast microscopy images of cell cultures, with preliminary results showing accuracy similar to that of an expert while allowing a larger number of fields to be counted.
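For reference, a small sketch of a gray-level co-occurrence matrix (GLCM) and two classic features derived from it; the displacement, quantization, and feature choices here are illustrative, not the paper's selected 12-parameter set.

```python
import numpy as np

def glcm_features(img, levels=16, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel displacement (dx, dy),
    plus two classic Haralick-style features (contrast and energy)."""
    # Quantize the image to `levels` gray levels.
    q = (img.astype(float) / max(float(img.max()), 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    # Pair each pixel with its displaced neighbor.
    src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)
    glcm /= glcm.sum()                              # joint probabilities
    i, j = np.mgrid[:levels, :levels]
    contrast = float(np.sum(glcm * (i - j) ** 2))   # local intensity variation
    energy = float(np.sum(glcm ** 2))               # texture uniformity
    return glcm, contrast, energy
```

Necrotic regions typically differ from live cells in such second-order statistics even when their mean intensities overlap, which is the rationale for texture-based discriminant functions.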
Computer-aided diagnosis of the solitary pulmonary nodule imaged on CT: 2D, 3D, and contrast enhancement features
Michael F. McNitt-Gray, Nathaniel Wyckoff, Jonathan G. Goldin, et al.
The goal of this work is to determine whether malignant solitary pulmonary nodules (SPNs) can be discriminated from benign lesions based on quantitative features derived from CT images, so as to reach an accurate diagnosis quickly and without the need for additional imaging or more invasive tests. CT images were obtained from 54 patients identified as having an SPN. Of these, 24 were scanned by a spiral volumetric technique before and after the injection of an intravenous contrast agent. Diagnostic truth was determined either from pathology results of biopsy or surgical resection or from radiographic follow-up. All images were acquired using a volumetric CT scan protocol of ≤3 mm beam collimation, pitch 1, and were reconstructed ≤3 mm apart (typically 1.5 mm). For patients receiving the contrast-enhanced spiral CT protocol, the nodule was scanned prior to the contrast injection and at 45, 90, 180, and 300 seconds after injection. Nodule boundaries were isolated using a semi-automated contouring procedure on each image in which the nodule appeared. The contour boundaries, as well as their internal pixels, were combined to form 3D regions of interest (ROIs). These ROIs were then used to extract two-dimensional (from a representative slice) and 3D (from the complete volume of interest) measures. Two-dimensional measure categories include attenuation, size, texture, and boundary shape; 3D categories include attenuation properties, size, and surface boundary shape. Each nodule's contrast enhancement was measured using individual pre-contrast and post-contrast images acquired at the same location. Stepwise analyses were performed for each category of features and then again using the combined results from all categories. Once features were selected, they were used as input variables to a linear discriminant and a logistic regression classifier, and the performance of each was evaluated using ROC analysis. Because false negatives are much more serious than false positives, we also evaluated performance using the false positive fraction (FPF) at which the true positive fraction (TPF) reaches 1.0. When the logistic regression classifier was used with 2D non-texture features, it achieved >90% accuracy, an area under the ROC curve of >0.96, and an FPF of 0.28 at TPF = 1.0. When the logistic regression classifier was used with 3D and contrast enhancement features, it achieved 92% accuracy, an area under the ROC curve of 0.969, and an FPF of 0.063 at TPF = 1.0. When combinations of group features were used, the performance was not as good as for the individual groups listed above; the best combined result was obtained when enhancement and sphericity were used in a linear discriminant model, which achieved 87.5% accuracy and an area under the ROC curve of 0.820. The combination of contrast enhancement measures with other morphological descriptors holds promise for accurate classification of solitary pulmonary nodules imaged on CT. These results are preliminary, as they are based on small numbers of cases and may be sensitive to the results of individual cases.
Comparison of three multiscale vessel enhancement filters intended for intracranial MRA: initial phantom results
Brian E. Chapman, Dennis L. Parker
Vessel enhancement filtering has been proposed by a number of researchers as a preprocessing step to improve the vessel detail obtained in maximum intensity projection (MIP) images of 3D MRA data. In this paper we compare the performance of three proposed vessel enhancement filters as a function of contrast, signal-difference-to-noise ratio (SDNR) and vessel size. Filters were applied to simple digital phantoms where the signal, background and noise characteristics were known and could be precisely controlled.
Investigating different similarity measures for a case-based reasoning classifier to predict breast cancer
Anna O. Bilska-Wolak, Carey E. Floyd Jr.
This paper investigates the effects of using different similarity measures in a case-based reasoning (CBR) classifier to predict breast cancer. The CBR classifier used a mammographer's BI-RADS™ description of a lesion to predict breast biopsy outcome: the case to be examined was compared to a reference collection of cases, those that were similar were identified, and the decision variable was formed as the ratio of similar cases that were malignant to all similar cases. A reference collection of 1027 biopsy-proven cases from Duke University Medical Center was used as input. Euclidean and Hamming distance measures were compared using all possible combinations of nine BI-RADS™ features and age. Performance was evaluated using jackknife sampling and ROC analysis. For all combinations of features, the Euclidean distance measure produced greater ROC areas and partial ROC areas than the Hamming measure; the differences were significant at an alpha level of 0.05. The greatest ROC area, 0.82 +/- 0.01, was generated using six of the features and the Euclidean distance measure. Both distance measures yielded greater ROC areas than previously reported values, similar to results generated with an artificial neural network using 10 features.
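The decision variable is simple enough to state in a few lines. In this sketch the similarity threshold `radius` is an illustrative assumption, and labels are coded 1 for malignant and 0 for benign.

```python
import numpy as np

def cbr_malignancy(query, reference_features, reference_labels, radius=2.0):
    """Case-based reasoning decision variable: the fraction of reference
    cases within `radius` (Euclidean distance over BI-RADS features plus
    age) that proved malignant."""
    d = np.linalg.norm(reference_features - query, axis=1)
    similar = d <= radius
    if not similar.any():
        return 0.5                               # no similar case: indeterminate
    return float(reference_labels[similar].mean())
```

Swapping `np.linalg.norm` for a count of mismatched features would give the Hamming variant compared in the paper.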
Automatic temporal subtraction of chest radiographs and its enhancement for lung cancers
In this study, we demonstrate an approach that uses automatic edge detection and registration techniques for the temporal subtraction of chest radiographs, and show that the contrast of lung cancers is greatly enhanced. Difference images were obtained by subtracting an earlier chest radiograph from the later chest radiograph of the same patient. Prior to subtraction, the lung areas were extracted by a convolution neural network. Segmented lungs and rib cages on both images were used to perform registration using directional edge extractors, and control points were selected prior to a warping process for the final registration. We also investigated the contrast enhancement of the lung cancers by analyzing the local area on the images. The signal-to-noise ratios at each cancer location were compared to evaluate the degree of improvement between the later chest image and the subtraction image. Our results indicated that the average signal-to-noise ratios at cancer locations increased by 50% to 80%. The cases were selected for this study based on their subtlety: in a recent observer study, at least 4 of 15 radiologists missed the cancer in each case, with a mean of 7.2 radiologists missing each cancer.
Validation of brain segmentation and tissue classification algorithm for T1-weighted MR images
Vikram Chalana, Lydia Ng, Larry R. Rystrom, et al.
Volumetric analysis of the brain from MR images is an important biomedical research tool. Segmentation of the brain parenchyma and its constituent tissue types, gray matter and white matter, is necessary for volumetric information in longitudinal and cross-sectional studies. We have implemented and compared two different classes of algorithms for segmentation of the brain parenchyma. In the first algorithm, a combination of automatic thresholding and 3-D mathematical morphology was used to segment the brain; in the second, an optical-flow-based 3-D non-rigid registration approach was used to warp an MR head atlas to the subject brain. For tissue classification within the brain area, a 3-D Markov random field model was used in conjunction with supervised and unsupervised classification. The algorithms were validated on a data set from the Internet Brain Segmentation Repository consisting of 20 normal T1 volumes (3 mm slice thickness) with manually segmented brains and manually classified tissues. The morphological segmentation algorithm had an average similarity index of 0.918, while the atlas-based brain segmentation algorithm had an average similarity index of 0.953. The supervised tissue classification had an average similarity index of 0.833 for gray matter voxels and 0.766 for white matter voxels. The performance of these algorithms is quite acceptable to end-users in terms of both accuracy and speed.
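The similarity index reported here is the standard Dice overlap between a binary segmentation and a manual ground-truth mask, sketched below.

```python
import numpy as np

def similarity_index(seg, truth):
    """Dice similarity index: 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 for perfect agreement, 0.0 for disjoint masks."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    denom = seg.sum() + truth.sum()
    return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0
```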
Improvement of mammographic lesion detection by fusion of information from different views
In screening mammography, two standard views, craniocaudal (CC) and mediolateral oblique (MLO), are commonly taken, and radiologists use information from both views for lesion detection and diagnosis. Current computer-aided diagnosis (CAD) systems are designed to detect lesions on each view separately. We are developing a CAD method that utilizes information from the two views to reduce false positives (FPs). Our two-view detection scheme consists of two main stages, a one-view pre-screening stage and a two-view correspondence stage; the one-view and two-view scores are then fused to estimate the likelihood that an object is a true mass. In this study, we analyzed the effectiveness of the proposed fusion scheme for FP reduction and its dependence on the number of objects per image in the pre-screening stage. The preliminary results demonstrate that the fusion of information from the CC and MLO views significantly reduced the FP rate in comparison to the one-view scheme. When the pre-screening stage produced 10 objects per image, the two-view fusion technique reduced the FP rate from an average of 2.1 FPs/image in our current one-view CAD scheme to 1.2 FPs/image at a sensitivity of 80%. The results also indicate that the improvement in detection accuracy was essentially independent of the number of initial objects per image obtained at the pre-screening stage for this data set.
Analysis of evolving processes in pulmonary nodules using a sequence of three-dimensional thoracic images
Yoshiki Kawata, Noboru Niki, Hironobu Omatsu, et al.
This paper presents a method to analyze the volume evolution of pulmonary nodules for discrimination between malignant and benign nodules. Our method consists of four steps: 3D rigid registration of two successive 3D thoracic CT images, 3D affine registration of the two successive region-of-interest (ROI) images, nonrigid registration between local volumetric ROIs, and analysis of the local displacement field between successive temporal images. In a preliminary study, the method was applied to successive 3D thoracic images of two pulmonary lesions, a malignant metastasis and an inflammatory benign case, to quantify the evolving process in the nodules and surrounding structures. The time intervals between successive images for the benign and malignant cases were 120 and 30 days, respectively. From the display of the displacement fields and the image contrasted by the Jacobian-based vector field operator, it was observed that the benign case decreased in volume, with the surrounding structure drawn into the nodule during the evolution process, whereas the malignant case expanded in volume. These experimental results indicate that our method is a promising tool for quantifying how lesions evolve in volume together with their surrounding structures.
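A sketch of the Jacobian-based vector field operator: for a displacement field u(x), the determinant of the Jacobian of the mapping x + u(x) flags local expansion (det > 1) or shrinkage (det < 1). The (3, Z, Y, X) array layout and unit voxel spacing are assumptions of this sketch.

```python
import numpy as np

def jacobian_determinant(disp):
    """Determinant of the Jacobian of x + u(x) for a 3-D displacement
    field `disp` of shape (3, Z, Y, X), assuming unit voxel spacing.
    det > 1 indicates local expansion, det < 1 local shrinkage."""
    # grads[i, j] = d u_i / d x_j on the voxel grid.
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0)
                      for i in range(3)], axis=0)          # (3, 3, Z, Y, X)
    J = np.eye(3)[:, :, None, None, None] + grads          # identity + du/dx
    return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1))) # (Z, Y, X)

# Zero displacement gives det == 1 everywhere (no volume change).
disp = np.zeros((3, 8, 8, 8))
print(np.allclose(jacobian_determinant(disp), 1.0))        # True
```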
Clinical test results of computer-aided detection system for lung cancer using helical CT images
Kazuhori Kubota, Mitsuru Kubo, Yoshiki Kawata, et al.
We have developed a computer-assisted automatic detection system for lung cancer that detects tumor candidates at an early stage from helical CT images. In July 1997, we started a prospective comparative field trial using our system. Chest CT images obtained by helical CT scanners have drawn great interest for the detection of suspicious regions; however, mass screening based on helical CT images produces a considerable number of images to be diagnosed. We expect that our system can reduce the reading time and increase diagnostic confidence. In this paper, we describe the detection results of the system for nodules with a definite diagnosis, presenting both prospective and retrospective results. These results show that the system can successfully detect lung cancer candidates at an early stage and can be applied to mass screening. In addition, we describe the need for a CAD system with a function for comparison with previous CT images.
Automated selection of computed tomography display parameters using neural networks
Di Zhang, Scott Neu, Daniel J. Valentino
A collection of artificial neural networks (ANNs) was trained to identify simple anatomical structures in a set of x-ray computed tomography (CT) images. These neural networks learned to associate a point in an image with the anatomical structure containing it, using the image pixels located on the horizontal and vertical lines running through the point. The neural networks were integrated into a software tool that selects an index into a list of CT window/level values from the location of the user's mouse cursor. Based upon the anatomical structure selected by the user, the tool automatically adjusts the image display to optimally view the structure.
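Once a structure is identified, applying the looked-up window/level pair is a simple linear mapping of Hounsfield units to display gray levels, as sketched below; the lung and mediastinum settings shown are typical values assumed for illustration, not those from the paper.

```python
import numpy as np

def apply_window_level(ct, window, level):
    """Map CT values (HU) to display gray levels in [0, 255] for a given
    window/level pair: values below level - window/2 map to black,
    values above level + window/2 map to white."""
    lo, hi = level - window / 2.0, level + window / 2.0
    return (np.clip(ct, lo, hi) - lo) / (hi - lo) * 255.0

# The networks' role reduces to choosing the (window, level) entry for the
# structure under the cursor; the display is then re-mapped, e.g.:
lung = apply_window_level(np.array([-1000.0, -600.0, 40.0]), 1500, -600)
mediastinum = apply_window_level(np.array([-1000.0, -600.0, 40.0]), 350, 50)
```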
Novel use of the Hotelling observer for computer-aided diagnosis of solitary pulmonary nodules
We propose to investigate a novel use of the Hotelling observer for the task of discriminating solitary pulmonary nodules within a database of regions that were all deemed suspicious. A database of 239 regions of interest (ROIs) was collected from digitized chest radiographs; each of these 256x256 pixel ROIs contained a suspicious lesion in the center for which we have a truth file. For our study, 25 separate Hotelling observers were set up in a 5x5 grid across the center of the ROIs, each designed to 'observe' a 15x15 pixel area of the image. Leave-one-out training was used to generate 25 observer output features. These 25 features were then narrowed down using a sequential forward-searching linear discriminant analysis; the forward search was stopped when the accuracy declined, at 13 features, and this subset was used as the input layer to an artificial neural network (ANN). The network was trained to minimize mean squared error, and its output was evaluated by the area under the ROC curve. The trained ANN gave an ROC area of 0.86; in comparison, three radiologists performed at ROC area indexes of 0.72, 0.79, and 0.83.
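The Hotelling observer for one 15x15 region is a linear template computable in closed form. The sketch below adds a small ridge term to keep the pooled pixel covariance invertible with limited training data; that regularization is our assumption, not the authors' stated method.

```python
import numpy as np

def hotelling_template(class0, class1, ridge=1e-6):
    """Hotelling (linear ideal) observer for stacks of small ROIs:
    template w = S^-1 (m1 - m0), with S the average intra-class covariance
    of the flattened pixel vectors.  Returns a scoring function
    t(x) = w . x, whose value discriminates the two classes."""
    X0 = class0.reshape(len(class0), -1).astype(float)   # (n0, 225) for 15x15
    X1 = class1.reshape(len(class1), -1).astype(float)
    S = 0.5 * (np.cov(X0.T) + np.cov(X1.T))              # pooled covariance
    w = np.linalg.solve(S + ridge * np.eye(S.shape[0]),
                        X1.mean(axis=0) - X0.mean(axis=0))
    return lambda roi: float(roi.ravel() @ w)
```

Applied 25 times over the 5x5 grid of sub-regions, each such scoring function yields one of the observer features fed to the feature-selection stage.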
Automated radiographic absorptiometry system for quantitative rheumatoid arthritis assessment
Vivek Swarnakar, Bo Fan, Harry K. Genant
Quantifying disease progression in patients with early-stage rheumatoid arthritis (RA) presents special challenges. Establishing a robust and reliable method that combines the ACR criteria with bone and soft-tissue measurement techniques would make possible the diagnosis of early RA and/or the monitoring of the progress of the disease. In this paper, an automated, reliable, and robust system that combines the ACR criteria with radiographic-absorptiometry-based bone and soft-tissue density measurement techniques is presented. The system comprises an image digitization component and an automated image analysis component. Radiographs of the hands and the calibration wedges are acquired and digitized following a standardized procedure. The image analysis system segments the relevant joints into soft-tissue and bone regions and computes density values for each of these regions relative to the density of the reference wedges. Each joint is also scored by trained radiologists using the well-established ACR criteria. The results of this work indicate that the use of standardized imaging procedures and robust image analysis techniques can significantly improve the reliability of quantitative measurements for rheumatoid arthritis assessment. Furthermore, the methodology has the potential for clinical use in assessing the disease condition of early-stage RA subjects.
Computerized characterization of contrast enhancement patterns for classifying pulmonary nodules
This paper presents a computerized classification scheme for pulmonary nodules in contrast-enhanced dynamic CT images. Previously, we extracted 3D nodule images using a deformable surface model; however, this approach was limited when segmenting 3D nodule images in contact with vessels and bronchi. To improve segmentation accuracy, we developed a software tool for interactively eliminating regions of the 3D nodule image that leak into vessels and bronchi. Using our data set of 62 cases (27 benign and 35 malignant), we demonstrate how segmentation accuracy affects the classification accuracy of our scheme.
Preliminary performance analysis of breast MRI CAD system
Alan I. Penn, Nitin Kumar M.D., Scott F. Thompson, et al.
Alan Penn & Associates, Inc. is developing a computer-aided-diagnosis (CAD) system to assist radiologists in distinguishing benign from malignant breast lesions in gadolinium-enhanced magnetic resonance (MR) images. The CAD system uses reader interpretations and computer analysis to generate numeric and visualization aids that help the radiologist make more informed decisions. This paper presents results of analyses that evaluate the statistical and diagnostic effectiveness of the breast-MR CAD system: 1) evaluation of a first-generation CAD system by five radiologists at the University of Pennsylvania Medical Center (UPMC); 2) validation of the computer-generated fractal measure using a second set of images. The paper also describes a second-generation system that includes kinetic interpretations.
Eliminating false-positive microcalcification clusters in a mammography CAD scheme using a Bayesian neural network
Darrin C. Edwards, John Papaioannou, Yulei Jiang, et al.
We have applied a Bayesian neural network (BNN) to the task of distinguishing between true-positive (TP) and false-positive (FP) detected clusters in a computer-aided diagnosis (CAD) scheme for detecting clustered microcalcifications in mammograms. Because BNNs can approximate ideal-observer decision functions given sufficient training data, this approach should perform better than our previous FP cluster elimination methods. Eight cluster-based features were extracted from the TP and FP clusters detected by the scheme in a training dataset of 39 mammograms. This set of features was used to train a BNN with eight input nodes, five hidden nodes, and one output node. The trained BNN was tested on the TP and FP clusters detected by our scheme in an independent testing set of 50 mammograms, and its output was analyzed using ROC and FROC analysis. The detection scheme with the BNN for FP cluster elimination had substantially better cluster sensitivity at low FP rates (below 0.8 FP clusters per image) than the original detection scheme without the BNN. Our preliminary research shows that a BNN can improve the performance of our scheme for detecting clusters of microcalcifications.