Proceedings Volume 5032

Medical Imaging 2003: Image Processing

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 15 May 2003
Contents: 15 Sessions, 204 Papers, 0 Presentations
Conference: Medical Imaging 2003
Volume Number: 5032

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • CAD I
  • CAD II
  • Registration I
  • CAD III
  • CAD I
  • Cellular Image Analysis
  • Shape/Motion
  • Registration II
  • Deformable Geometry
  • Tissue Image Analysis
  • Segmentation I
  • Pattern Recognition and Database Retrieval
  • Segmentation II
  • Tomographic Reconstruction
  • Poster Session
  • Multiresolution/Multispectral Image Processing
CAD I
Segmentation of multiple sclerosis lesions using support vector machines
Ricardo José Ferrari, Xingchang Wei M.D., Yunyan Zhang M.D., et al.
In this paper we present preliminary results on the automatic segmentation of multiple sclerosis (MS) lesions in multispectral magnetic resonance datasets using support vector machines (SVM). A total of eighteen studies (each composed of T1-weighted, T2-weighted, and FLAIR images) acquired on a 3T GE Signa scanner were analyzed. A neuroradiologist used a computer-assisted technique to identify all MS lesions in each study. These results were later used in the training and testing stages of the SVM classifier. A preprocessing stage comprising anisotropic diffusion filtering, intensity non-uniformity correction, and tissue intensity normalization was applied to the images. The SVM kernel used in this study was the radial basis function (RBF). The kernel parameter (γ) and the penalty value for errors were determined by using a very loose stopping criterion for the SVM decomposition. Overall, a 5-fold cross-validation accuracy of 80% was achieved in the automatic classification of MS lesion voxels using the proposed SVM-RBF classifier.
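As a rough illustration of the classification step described above (not the authors' implementation), the following sketch runs a 5-fold cross-validation of an RBF-kernel SVM on a hypothetical matrix of per-voxel multispectral intensities using scikit-learn; the feature values, γ, and penalty C are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))          # stand-in T1/T2/FLAIR voxel intensities
y = rng.integers(0, 2, size=1000)       # stand-in lesion / non-lesion labels

clf = SVC(kernel="rbf", gamma=0.5, C=10.0)   # gamma and C would be tuned in practice
scores = cross_val_score(clf, X, y, cv=5)    # 5-fold cross-validation
print("mean 5-fold accuracy: %.2f" % scores.mean())
```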
Full automation of morphological segmentation of retinal images: a comparison with human-based analysis
Age-Related Macular Degeneration (ARMD) is the leading cause of irreversible visual loss among the elderly in the US and Europe. A computer-based system has been developed to provide the ability to track the position and margin of the ARMD-associated lesion, drusen. Variations in the subject's retinal pigmentation, size and profusion of the lesions, and differences in image illumination and quality present significant challenges to most segmentation algorithms. An algorithm is presented that first classifies the image to optimize the variables of a mathematical morphology algorithm. A binary image is found by applying Otsu's method to the reconstructed image. Lesion size and area distribution statistics are then calculated. For training and validation, the University of Wisconsin provided longitudinal images of 22 subjects from their 10-year Beaver Dam Study. Using the Wisconsin Age-Related Maculopathy Grading System, three graders classified the retinal images according to drusen size and area of involvement. The percentages within the acceptable error between the three graders and the computer are as follows: Grader-A: Area: 84% Size: 81%; Grader-B: Area: 63% Size: 76%; Grader-C: Area: 81% Size: 88%. To validate the segmented position and boundary, one grader was asked to digitally outline the drusen boundary. The average accuracy based on sensitivity and specificity was 0.87 for thirty-four marked regions.
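The thresholding step can be illustrated with a small sketch: Otsu's method applied to a morphologically reconstructed (background-suppressed) image, here using scikit-image on a stock placeholder image; the seed offset and the test image are assumptions for illustration only.

```python
import numpy as np
from skimage import data, filters, morphology

fundus = data.camera() / 255.0                 # placeholder grayscale image in [0, 1]
seed = np.clip(fundus - 0.1, 0.0, 1.0)         # seed slightly below the image
background = morphology.reconstruction(seed, fundus, method="dilation")
residual = fundus - background                 # bright structures (drusen-like) remain
level = filters.threshold_otsu(residual)       # Otsu threshold on the residual
drusen_mask = residual > level                 # binary lesion candidates
print(drusen_mask.sum(), "candidate pixels")
```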
Automated classification of wall motion abnormalities by principal component analysis of endocardial shape motion patterns in echocardiograms
Johan G. Bosch, Francisca Nijland, Steven C. Mitchell, et al.
Principal Component Analysis of sets of temporal shape sequences renders eigenvariations of shape/motion, including typical normal and pathological endocardial contraction patterns. A previously developed Active Appearance Model for time sequences (AAMM) was employed to derive AAMM shape coefficients (ASCs), and we hypothesized these would allow classification of wall motion abnormalities (WMA). A set of stress echocardiograms (single-beat 4-chamber and 2-chamber sequences with expert-verified endocardial contours) of 129 infarct patients was split randomly into training (n=65) and testing (n=64) sets. AAMMs were generated from the training set, and for all sequences ASCs were extracted and statistically related to regional/global Visual Wall Motion Scoring (VWMS) and to clinical infarct severity and volumetric parameters. Linear regression showed clear correlations between ASCs and VWMS. Infarct severity measures correlated poorly to both ASCs and VWMS. Discriminant analysis showed good prediction of both segmental (85% correctness) and global WMA (90% correctness) from a small number of ASCs. Volumetric parameters correlated poorly to regional VWMS. Conclusions: 1) ASCs show promising accuracy for automated WMA classification. 2) VWMS and endocardial border motion are closely related; with accurate automated border detection, automated WMA classification should be feasible. 3) ASC shape analysis allows contour set evaluation by direct comparison to clinical parameters.
Detection of abnormal diffuse perfusion in SPECT using a normal brain atlas
Jean-Francois Laliberte, Jean Meunier, Max Mignotte, et al.
Despite the advent of sophisticated image analysis algorithms, most SPECT (Single Photon Emission Computerized Tomography) cerebral perfusion studies are assessed visually, leading to unavoidable and significant inter- and intra-observer variability. Here, we present an automatic method for evaluating SPECT studies based on a computerized atlas of normal regional cerebral blood flow (rCBF). To generate the atlas, normal (screened volunteers) brain SPECT studies are registered with an affine transformation to one of them, arbitrarily selected as reference, to remove any size and orientation variations that are assumed irrelevant for our analysis. Then a smooth non-linear registration is performed to reveal the local activity pattern displacement among the normal subjects. By computing and applying the mean displacement to the reference SPECT image, one obtains the atlas, that is, the normal mean distribution of the rCBF (up to an affine transformation difference). To complete the atlas we add the intensity variance along with the displacement mean and variance of the activity pattern. To investigate a patient's condition, we proceed similarly to the atlas construction phase. We first register the patient's SPECT volume to the atlas with an affine transformation. Then the algorithm computes the non-linear 3D displacement of each voxel needed for an almost perfect shape (but not intensity) fit with the atlas. For each brain voxel, if the intensity difference between the atlas and the registered patient is higher than normal differences, the voxel is counted as "abnormal"; the same applies if the 3D motion necessary to move the voxel to its registered position is not within the normal displacements. Our hypothesis is that this number of abnormal voxels discriminates between normal and abnormal studies. A Markovian segmentation algorithm that we have presented elsewhere is also used to identify the white and gray matter for regional analysis. We validated this approach using 23 SPECT perfusion studies (99mTc ECD) selected visually for clear diffuse anomalies (a much more stringent test than "easy" focal lesion detection) and 21 normal studies. A leave-one-out strategy was used to test our approach to avoid any bias. Based on the number of "abnormal" voxels, two simple supervised classifiers were tested: (1) minimum distance-to-mean and (2) Bayesian. A voxel was considered "abnormal" if its P value with respect to the atlas was lower than 0.01 (1%). The results show that for the whole brain, a combination of the numbers of intensity and displacement "abnormal" voxels is a powerful discriminant, with a 91% classification rate. If we focus only on the voxels in the segmented gray matter, the rates are slightly higher.
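For illustration only, a minimal sketch of the voxel-wise abnormality test: given an atlas mean and variance (synthetic placeholders here), voxels whose intensity deviation exceeds the two-sided P < 0.01 limit are counted as "abnormal"; the displacement test would be handled analogously.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
atlas_mean = rng.normal(100.0, 10.0, size=(64, 64, 64))          # stand-in atlas rCBF mean
atlas_std = np.full((64, 64, 64), 10.0)                          # stand-in atlas std dev
patient = atlas_mean + rng.normal(0.0, 12.0, size=(64, 64, 64))  # registered patient volume

z_limit = norm.ppf(1.0 - 0.01 / 2.0)      # two-sided P < 0.01 threshold (~2.58)
z = np.abs(patient - atlas_mean) / atlas_std
n_abnormal = int((z > z_limit).sum())     # count used as the discriminant feature
print(n_abnormal, "abnormal voxels")
```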
Ophthalmologic diagnostic tool using MR images for biomechanically-based muscle volume deformation
Michael Buchberger, Thomas Kaltofen
We would like to give a work-in-progress report on our ophthalmologic diagnostic software system, which performs biomechanically-based muscle volume deformations using MR images. For reconstructing a three-dimensional representation of an extraocular eye muscle, a sufficient number of high-resolution MR images is used, each representing a slice of the muscle. In addition, threshold values are given, which restrict the amount of data used from the MR images. The Marching Cubes algorithm is then applied, resulting in a polygonal 3D representation of the muscle, which can be rendered efficiently. A transformation to a dynamic, deformable model is applied by calculating the center of gravity of each muscle slice, approximating the muscle path and subsequently adding Hermite splines through the centers of gravity of all slices. Then, a radius function is defined for each slice, completing the transformation of the static 3D polygon model. Finally, this paper describes future extensions to our system. One of these extensions is the support for additional calculations and measurements within the reconstructed 3D muscle representation. Globe translation, localization of muscle pulleys by analyzing the 3D reconstruction in two different gaze positions, and other diagnostic measurements will be available.
Model-based imaging of cardiac electrical function in human atria
Robert Modre, Bernhard Tilg, Gerald Fischer, et al.
Noninvasive imaging of electrical function in the human atria is attained by the combination of data from electrocardiographic (ECG) mapping and magnetic resonance imaging (MRI). An anatomical computer model of the individual patient is the basis for our computer-aided diagnosis of cardiac arrhythmias. Three patients, suffering from Wolff-Parkinson-White syndrome, from paroxysmal atrial fibrillation, and from atrial flutter, underwent an electrophysiological study. After successful treatment of the cardiac arrhythmia with an invasive catheter technique, pacing protocols with stimuli at several anatomical sites (coronary sinus, left and right pulmonary vein, posterior site of the right atrium, right atrial appendage) were performed. Reconstructed activation time (AT) maps were validated with catheter-based electroanatomical data, with invasively determined pacing sites, and with pacing at anatomical markers. The individual complex anatomical model of the atria of each patient, in combination with high-quality mesh optimization, enables accurate AT imaging, resulting in a localization error for the estimated pacing sites within 1 cm. Our findings may have implications for imaging of atrial activity in patients with focal arrhythmias.
CAD II
Evaluation of quantitative measures of breast tissue density from mammography with truth from MRI data
Xiao Hui Wang, Brian E. Chapman, Cynthia A. Britton M.D., et al.
Breast tissue density is one of the most cited risk factors in breast cancer development. Nevertheless, estimates of the magnitude of breast cancer risk associated with density vary substantially because of the inadequacy of methods used in tissue density assessment (e.g., subjective and/or qualitative assessment) and the lack of a reliable gold standard. We have developed automated algorithms for quantitatively measuring breast composition from digitized mammograms. The results were compared to objective truth as determined by quantitative measures from breast MR images, as well as to subjective truth as determined by radiologists' readings from digitized mammograms using the BI-RADS standard. The higher linear correlation between estimates calculated from mammograms using the methods developed herein and estimates derived from breast MR images demonstrates that the mammography-based methods will likely improve our ability to accurately determine the breast cancer risk associated with breast density. By using volumetric measures from breast MR images as a gold standard, we are able to estimate the adequacy and accuracy of our algorithms. The results can be used for providing a calibrated method for estimating breast composition from mammograms.
Computerized analysis of mammographic parenchymal patterns using fractal analysis
Mammographic parenchymal patterns have been shown to be associated with breast cancer risk. Fractal-based texture analyses, including box-counting methods and Minkowski dimension, were performed within parenchymal regions of normal mammograms of BRCA1/BRCA2 gene mutation carriers and within those of women at low risk for developing breast cancer. Receiver Operating Characteristic (ROC) analysis was used to assess the performance of the computerized radiographic markers in the task of distinguishing between high and low-risk subjects. A multifractal phenomenon was observed with the fractal analyses. The high frequency component of fractal dimension from the conventional box-counting technique yielded an Az value of 0.84 in differentiating between two groups, while using the LDA to estimate the fractal dimension yielded an Az value of 0.91 for the high frequency component. An Az value of 0.82 was obtained with fractal dimensions extracted using the Minkowski algorithm.
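As an illustration of the conventional box-counting technique mentioned above (not the authors' code), the sketch below estimates a fractal dimension from the slope of log(box count) versus log(1/box size) on a placeholder binary texture.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = binary.shape
        trimmed = binary[: h - h % s, : w - w % s]       # crop to a multiple of s
        nh, nw = trimmed.shape
        # count boxes of side s containing at least one "on" pixel
        boxes = trimmed.reshape(nh // s, s, nw // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # slope of log(count) vs log(1/size) approximates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
roi = rng.random((128, 128)) > 0.5            # placeholder binary parenchymal ROI
print(box_counting_dimension(roi))
```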
ROC study: effects of computer-aided diagnosis on radiologists' characterization of malignant and benign breast masses in temporal pairs of mammograms
We conducted an observer performance study using receiver operating characteristic (ROC) methodology to evaluate the effects of computer-aided diagnosis (CAD) on radiologists’ performance for characterization of masses on serial mammograms. The automated CAD system, previously developed in our laboratory, can classify masses as malignant or benign based on interval change information on serial mammograms. In this study, 126 temporal image pairs (73 malignant and 53 benign) from 52 patients containing masses on serial mammograms were used. The corresponding masses on each temporal pair were identified by an experienced radiologist and automatically segmented by the CAD program. Morphological, texture, and spiculation features of the mass on the current and the prior mammograms were extracted. The individual features and the difference between the corresponding current and prior features formed a multidimensional feature space. A subset of the most effective features that contained the current, prior, and interval change information was selected by a stepwise procedure and used as input predictor variables to a linear discriminant classifier in a leave-one-case-out training and testing resampling scheme. The linear discriminant classifier estimated the relative likelihood of malignancy of each mass. The classifier achieved a test Az value of 0.87. For the ROC study, 4 MQSA radiologists and 1 breast imaging fellow assessed the masses on the temporal pairs and provided estimates of the likelihood of malignancy without and with CAD. The average Az value for the likelihood of malignancy estimated by the radiologists was 0.79 without CAD and improved to 0.87 with CAD. The improvement was statistically significant (p=0.0003). This preliminary result indicated that CAD using interval change analysis can significantly improve radiologists’ accuracy in classification of masses and thereby may increase the positive predictive value of mammography.
Comparison of approaches for risk-modulated CAD
We compared computerized methods that incorporate automated lesion characterization and methods for the assessment of the breast parenchymal pattern on mammograms in order to better predict the pathological status of a breast lesion. Computer-extracted mass features automatically characterized the shape, spiculation, contrast, and margin of each lesion. On the digitized mammogram of the contralateral breast, texture features were automatically extracted to characterize the radiographic breast parenchymal pattern. Three approaches were investigated. A computerized risk-modulated analysis system for mammographic images is expected to improve characterization of lesions by incorporating cancer-risk information into the decision-making process.
Computerized detection and classification of lesions on breast ultrasound
Karen Drukker, Maryellen Lissak Giger, Carl J. Vyborny, et al.
We are developing a computerized method that detects suspicious areas on ultrasound images, and then distinguishes between malignant and benign-type lesions. The computerized scheme identifies potential lesions based on expected lesion shape and margin characteristics. All potential lesions are subsequently classified by a Bayesian neural net based on computer-extracted lesion features. The scheme was trained on a database of 400 cases (757 images) - consisting of complex cysts, benign and malignant lesions - and tested on a comparable database of 458 cases (1740 images) including 578 normal images. We investigated the performances of lesion detection and subsequent classification by a Bayesian neural net for two tasks. The first task was the distinction between actual lesions and false-positive (FP) detections, and the second task was the distinction between actual malignant lesions and all detected lesion candidates. In training, the detection and classification method obtained an Az value of 0.94 in the distinction of false-positive detections from actual lesions, and an Az of 0.91 was obtained on the testing database. The task of distinguishing malignant lesions from all other detections (false-positives plus all benign-type lesions) proved to be more challenging, and Az values of 0.87 and 0.81 were obtained during training and testing, respectively. For the testing database, the combined detection and classification scheme correctly identified lesions in 82% (0.45 FP per image) of all the patients, and in 100% (0.43 FP malignancies per image) of the cancer patients.
A mammographic mass CAD system incorporating features from shape, fractal, and channelized Hotelling observer measurements: preliminary results
In this paper, we present preliminary results from a highly sensitive and specific CAD system for mammographic masses. For false positive reduction, the system incorporated features derived from shape, fractal, and channelized Hotelling observer (CHO) measurements. The database for this study consisted of 80 craniocaudal mammograms randomly extracted from USF's digital database for screening mammography. The database contained 49 mass findings (24 malignant, 25 benign). To detect initial mass candidates, a difference of Gaussians (DOG) filter was applied through normalized cross correlation. Suspicious regions were localized in the filtered images via multi-level thresholding. Features extracted from the regions included shape, fractal dimension, and the output from a Laguerre-Gauss (LG) CHO. Influential features were identified via feature selection techniques. The regions were classified with a linear classifier using leave-one-out training/testing. The DOG filter achieved a sensitivity of 88% (23/24 malignant, 20/25 benign). Using the selected features, the false positives per image dropped from ~20 to ~5 with no loss in sensitivity. This preliminary investigation of combining multi-level thresholded DOG-filtered images with shape, fractal, and LG-CHO features shows great promise as a mass detector. Future work will include the addition of more texture and mass-boundary descriptive features as well as further exploration of the LG-CHO.
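A minimal sketch of the candidate-detection front end, assuming SciPy and a placeholder image: a difference-of-Gaussians (DoG) response followed by a single threshold. The sigmas and percentile are illustrative, and the paper's normalized cross-correlation and multi-level thresholding are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
mammogram = rng.random((256, 256))                    # placeholder image

# difference-of-Gaussians response; sigmas are illustrative only
dog = gaussian_filter(mammogram, sigma=4) - gaussian_filter(mammogram, sigma=8)
candidates = dog > np.percentile(dog, 99)             # crude single threshold
print(candidates.sum(), "candidate pixels")
```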
Mammographic mass characterization using sharpness and lobulation measures
For radiologists, lesion margin appearance is of high importance when classifying breast masses as malignant or benign lesions. In this study, we developed different measures to characterize the margin of a lesion. Towards this goal, we developed a series of algorithms to quantify the degree of sharpness and lobulation of a mass margin. In addition, to estimate the spiculation of a margin, features previously developed for mass detection were used. Images selected from the publicly available data set "Digital Database for Screening Mammography" were used for development and evaluation of these algorithms. The data set consisted of 777 images corresponding to 382 patients. To extract lesions from the mammograms, a segmentation algorithm based on dynamic programming was used. Features were extracted for each lesion. A k-nearest neighbor algorithm was used in combination with a leave-one-out procedure to select the best features for classification purposes. Classification accuracy was evaluated using the area Az under the receiver operating characteristic curve. The average test Az value for the task of classifying masses on a single mammographic view was 0.79. In a case-based evaluation we obtained an Az value of 0.84.
Registration I
Overcoming activation-induced registration errors in fMRI
Jeffery J. Orchard, Chen Greif, Gene H. Golub, et al.
It has been shown that the presence of a blood oxygen level dependent (BOLD) signal in high-field (3T and higher) fMRI datasets can cause stimulus-correlated registration errors, especially when using a least-squares registration method. These errors can result in systematic inaccuracies in activation detection. The authors have recently proposed a new method to solve both the registration and activation detection least-squares problems simultaneously. This paper gives an outline of the new method, and demonstrates its robustness on simulated fMRI datasets containing various combinations of motion and activation. In addition to a discussion of the merits of the method and details on how it can be efficiently implemented, it is shown that, compared to the standard approach, the new method consistently reduces false-positive activations by two thirds and reduces false-negative activations by one third.
Simultaneous registration and bias correction of brain intra-operative MR images
Mathieu De Craene, Torsten Butz, Eduard Solanas, et al.
Intra-operative MR imaging is an emerging tool for image-guided (neuro)surgery. Due to the small size of the magnets and the short acquisition time, the images produced by such devices are often subject to distortions. In this work, we consider the particular case of images provided by an ODIN device (Odin Medical Technologies, Newton, MA 02458, USA). Such images suffer from geometric distortions and a significant bias field in the image intensity. In order to simultaneously correct these deformations, we propose to register a preoperative ODIN image with a high-resolution diagnostic MR image while compensating for the bias field.
Diffusion tensor orientation matching for image registration
Kathleen M. Curran, Daniel C. Alexander
We present a new method to perform registration of DT-MRI (Diffusion Tensor Magnetic Resonance Imaging) data. The goal of image registration is to determine the spatial alignment between multiple images of the same or different subjects, acquired intra- or inter-modality. Registration of DT-MRI is more complex than for scalar data because it contains additional directional information. The exploitation of DT-MR data for registration should improve on the accuracy of image matching achievable with scalar data, because the information in DT-MRI is complementary to that contained in standard MR images and thus provides additional cues for matching, which can be used both to test registration quality and to improve it. Moreover, developing techniques for spatial normalisation of DT-MR images allows cross-population studies to be performed using the whole tensor. The novelty of the proposed approach is that it uses the tensor orientation to calculate the registration transformation. We have quantitatively shown that this new algorithm reconstructs some synthetic transformations more closely than current techniques. However, further analysis of our results is necessary to quantify the advantage of our methods more clearly.
Two algorithms for non-rigid image registration and their evaluation
This paper presents two non-rigid image registration algorithms: Thirion's Demons method and its spline-based extension, and compares their performance on the task of inter-subject registration of MRI brain images. The methods are designed to be fast and derive their speed from the uncoupling of the correspondence calculation and deformation interpolation procedures, each of which are then amenable to efficient implementation. The evaluation results indicate that this uncoupling does not significantly limit the registration accuracy that can be achieved.
Finite-element deformable sheet-curve models for registration of breast MR images
It is clinically important to develop novel approaches to accurately assess early response to chemoprevention. We propose to quantitatively measure changes of breast density and breast vascularity in glandular tissue to assess early response to chemoprevention. In order to accurately extract glandular tissue using pre- and post-contrast magnetic resonance (MR) images, non-rigid registration is the key to aligning MR images by recovering the local deformations. In this paper, a new registration method has been developed using finite-element deformable sheet-curve models to accurately register MR breast images for extraction of glandular tissue. Finite-element deformable sheet-curve models are coupled dynamic systems that physically model the boundary deformation and image deformation. Specifically, deformable curves are used to obtain a reliable matching of the boundaries using physically constrained deformations. A deformable sheet with a thin-plate-spline energy functional is used to model complex local deformations between the MR breast images. Finite-element deformable sheet-curve models have been applied to register both digital phantoms and MR breast images. The experimental results have been compared to point-based methods such as the thin-plate-spline (TPS) approach, demonstrating that our method offers a substantial improvement over point-based registration methods in both boundary alignment and local deformation recovery.
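For comparison purposes, the point-based thin-plate-spline baseline mentioned above can be sketched as follows, assuming SciPy's RBFInterpolator and hypothetical landmark pairs; this illustrates the baseline method, not the authors' sheet-curve model.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical corresponding landmarks in the two breast images
src = np.array([[10.0, 10], [10, 90], [90, 10], [90, 90], [50, 50]])
dst = src + np.array([[2.0, 1], [1, -2], [-1, 2], [2, 2], [0, 3]])

# thin-plate-spline interpolation of the landmark displacements
tps = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")
grid = np.stack(np.meshgrid(np.arange(100.0), np.arange(100.0), indexing="ij"), -1)
displacement = tps(grid.reshape(-1, 2)).reshape(100, 100, 2)   # dense warp field
print(displacement[50, 50])
```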
CAD III
Effect of the number of cases in image database on the performance of computer-aided diagnosis (CAD) for the detection of pulmonary nodules in chest radiographs
Junji Shiraishi, Hiroyuki Abe, Roger M. Engelmann, et al.
We investigated the effect of the number of cases included in an image database on development of a computer-aided diagnosis (CAD) scheme for the detection of lung nodules, in terms of the performance of the CAD scheme. A total number of 1000 chest radiographs with nodules was used in this study. All images were divided randomly into subsets consisting of the same number of cases from different sources. The subsets we used in this study were 10 sets of 100 cases, 5 sets of 200 cases, and 2 sets of 500 cases. The entire database and all of the subsets were tested by use of the same CAD scheme, but with different parameter settings for consistency tests. When the sensitivities of the CAD scheme for each subset were kept at a level of 70.0 %, the numbers of false positives per image were 0.1 for 100 cases, 0.6 for 200 cases, 2.9 for 500 cases, and 6.2 for 1000 cases. Therefore, the performance of the CAD scheme in detecting lung nodules was strongly affected by the number of cases used. We conclude that a large-scale image database is needed for reliable evaluation of the performance of CAD.
Classification of lung nodules in diagnostic CT: an approach based on 3D vascular features, nodule density distribution, and shape features
We have developed various segmentation and analysis methods for the quantification of lung nodules in thoracic CT. Our methods include the enhancement of lung structures followed by a series of segmentation methods to extract the nodule and to form a 3D configuration of the area of interest. The vascular index, aspect ratio, circularity, irregularity, extent, compactness, and convexity were also computed as shape features for quantifying the nodule boundary. The density distribution of the nodule was modeled based on its internal homogeneity and/or heterogeneity. We also used several density-related features, including entropy and difference entropy, as well as other first- and second-order moments. We have collected 48 cases of lung nodules scanned by thin-slice diagnostic CT. Of these cases, 24 are benign and 24 are malignant. A jackknife experiment was performed using a standard back-propagation neural network as the classifier. The LABROC result showed that the Az of this preliminary study is 0.89.
Recognition method of lung nodules using blood vessel extraction techniques and 3D object models
Gentaro Fukano, Hotaka Takizawa, Kanae Shigemoto, et al.
In this paper, we propose a method for reducing false positives in X-ray CT images using ridge shadow extraction techniques and 3D geometric object models. Suspicious shadows are detected by our variable N-quoit (VNQ) filter, which is a type of mathematical morphology filter. This filter can detect lung cancer shadows with a sensitivity of over 95%, but it also detects many false positives, which are mainly related to blood vessel shadows. We have developed two algorithms to distinguish lung nodule shadows from blood vessel shadows. In the first algorithm, the ridge shadows, which come from blood vessels, are emphasized by our Tophat by Partial Reconstruction filter, which is also a type of mathematical morphology filter. Then, the region of the ridge shadow is extracted using binary distance transformation. In the second algorithm, we propose a recognition method for nodules using 3D geometric lung nodule and blood vessel models. The anatomical knowledge about the 3D structures of nodules and blood vessels can be reflected in the recognition process. Applying our new method to actual CT images (37 patient images) yielded good results.
Method for analysis and display of distribution of emphysema in CT scans
William J. Kostis, Simina C. Fluture, David F. Yankelevitz, et al.
A novel method for the assessment and display of the distribution of emphysema in low-dose helical CT scans has been developed. The automated system segments the lung volume and estimates the degree of emphysema as a function of slice position within the lung. Eighty low-dose (120 kVp, 40 mA) high-resolution (2.5 mm slice thickness) CT scans were randomly selected from our lung cancer screening program. Three emphysema assessments were performed on each scan: the traditional method of averaging the degree of emphysema on four pre-selected CT slices, the total volumetric percentage of emphysema, and a graphical display of emphysema burden as a function of slice position based on a sliding window algorithm. The traditional four-slice estimates showed a high correlation (0.98) with the total volumetric percentages, yet provided limited spatial information. In those cases with a higher overall percentage of emphysema, the distribution within the lung as quantified by the new method was more skewed than that of less severe cases or normals. Analysis and display of the spatial distribution of emphysema allows for assessment of emphysema burden within each lung zone, which may be useful for quantitating the type of emphysema and the progression of disease over time.
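A simplified sketch of the per-slice emphysema profile idea: the fraction of lung voxels below an attenuation cutoff is computed within a sliding window of slices. The -950 HU cutoff, window size, and synthetic data are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
ct = rng.normal(-800.0, 120.0, size=(120, 64, 64))   # placeholder lung voxels (HU)
lung_mask = np.ones(ct.shape, dtype=bool)            # placeholder lung segmentation

def emphysema_profile(ct, mask, cutoff=-950.0, window=5):
    low = (ct < cutoff) & mask                        # emphysema-like voxels
    profile = []
    for z in range(ct.shape[0] - window + 1):
        sl = slice(z, z + window)
        profile.append(low[sl].sum() / mask[sl].sum())
    return np.array(profile)                          # burden vs. slice position

print(emphysema_profile(ct, lung_mask)[:5])
```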
Validation of a constraint satisfaction neural network for breast cancer diagnosis: new results from 1,030 cases
Previously, we presented a Constraint Satisfaction Neural Network (CSNN) to predict the outcome of breast biopsy using mammographic and clinical findings. Based on 500 cases, the study showed that the CSNN was able to operate not only as a predictive but also as a knowledge discovery tool. The purpose of this study is to validate the CSNN on a database of an additional 1,030 cases. An auto-associative backpropagation scheme was used to determine the CSNN constraints based on the initial 500 patients. Subsequently, the CSNN was applied to 1,030 new patients (358 patients with malignant and 672 with benign lesions) to predict breast lesion malignancy. For every test case, the CSNN reconstructed the diagnosis node given the network constraints and the external inputs to the network. The activation level achieved by the diagnosis node was used as the decision variable for ROC analysis. Overall, the CSNN continued to perform well over this large dataset with an ROC area of Az=0.81±0.02. However, the diagnostic performance of the network was inferior in cases with missing clinical findings (Az=0.80±0.02) compared to those with complete findings (Az=0.84±0.03). The study also demonstrated the ability of the CSNN to effectively impute missing findings while performing as a predictive tool.
Improving CAD performance in detecting masses depicted on prior images
Bin Zheng, Xiao Hui Wang, Luisa Wallace M.D., et al.
We investigated a new approach to improve the performance of a computer-aided detection (CAD) scheme in identifying masses depicted on images acquired earlier ("prior" images). The scheme was trained using a dataset with simulated mass features. From a database with images acquired during two consecutive examinations, 100 location-matched pairs of malignant mass regions were selected in both the "current" and the most recent "prior" images. While reviewing the current images, mass regions were identified and, as a result, biopsies were ultimately performed. The prior images had not been identified as suspicious by radiologists during the original interpretation. The same number of false-positive regions was also selected in both current and prior images. The selected regions were then randomly divided into training and testing datasets with 50 true-positive and 50 false-positive regions in each. For each selected region, five features (area, contrast, circularity, normalized standard deviation of radial length, and conspicuity) were computed. The ratios of the average difference of the five feature values between current and prior mass regions in the training datasets were also computed. Multiplying these ratios by the computed values in current mass regions, we generated a new dataset of simulated features of "prior" mass regions. Three artificial neural networks (ANN) were trained. ANN-1 and ANN-2 were trained using the training datasets of current and prior regions, respectively. ANN-3 was trained using the simulated "prior" dataset. The performance of the three ANNs was then evaluated using the testing dataset of prior images. Areas under the ROC curves (Az) were 0.613 ± 0.026 for ANN-1, 0.678 ± 0.029 for ANN-2, and 0.667 ± 0.029 for ANN-3, respectively. This preliminary study demonstrated that one could estimate an average change of feature values over time and "adjust" CAD performance for better detection of masses at an earlier stage.
CAD I
Time-lapse microscopy and image processing for stem cell research modeling cell migration
This paper presents hardware and software procedures for automated cell tracking and migration modeling. A time-lapse microscopy system equipped with a computer-controllable motorized stage was developed. The performance of this stage was improved by incorporating software algorithms for stage motion displacement compensation and auto-focus. The microscope is suitable for in-vitro stem cell studies and allows for multiple cell culture image sequence acquisition. This enables comparative studies concerning the rate of cell divisions, average cell motion velocity, cell motion as a function of cell sample density, and more. Several cell segmentation procedures are described, as well as a cell tracking algorithm. Statistical methods for describing cell migration patterns are presented. In particular, the Hidden Markov Model (HMM) was investigated. Results indicate that if the cell motion can be described as a non-stationary stochastic process, then the HMM can adequately model aspects of its dynamic behavior.
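As a sketch of the HMM modeling step (assuming the third-party hmmlearn package and a synthetic cell track), a Gaussian HMM is fit to frame-to-frame displacement vectors and used to label hidden motility states; the number of states is an arbitrary choice here, not the paper's.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM    # third-party package, assumed available

rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(size=(200, 2)), axis=0)   # synthetic (x, y) cell track
steps = np.diff(track, axis=0)                         # frame-to-frame displacements

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
model.fit(steps)                       # two hidden motility states (arbitrary choice)
states = model.predict(steps)          # most likely state sequence along the track
print(np.bincount(states))
```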
Cellular Image Analysis
Partially independent component analysis of tumor heterogeneities by DCE-MRI
JunYing Zhang, Rujirutana Srikanchana, Jianhua Xuan, et al.
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has emerged as an effective tool to assess tumor vascular characteristics. DCE-MRI can be used to noninvasively characterize microvasculature, providing information about tumor microvessel structure and function (e.g., tumor blood volume, vascular permeability, tumor perfusion). However, pixels of DCE-MRI represent a composite of more than one distinct functional biomarker (e.g., microvessels with fast or slow perfusion) whose spatial distributions are often heterogeneous. Complementary to various existing methods (e.g., compartment modeling, factor analysis), this paper proposes a blind source separation method which allows for simultaneous computed imaging of multiple biomarkers from composite DCE-MRI sequences. The algorithm is based on a partially-independent component analysis, whose parameters are estimated using a subset of informative pixels defining the independent portion of the observations. We demonstrate the principle of the approach on a simulated image data set, and we then apply the method to the tissue heterogeneity characterization of breast tumors, where the spatial distributions of tumor blood volume, vascular permeability, and tumor perfusion, as well as their time-activity curves (TACs), are simultaneously estimated.
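The unmixing idea can be illustrated with plain spatial ICA on synthetic time-activity curves; the paper's partially-independent variant additionally restricts the estimation to informative pixels. scikit-learn's FastICA and the two-exponential source curves below are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
# two synthetic source TACs standing in for fast and slow perfusion
sources = np.stack([1 - np.exp(-5.0 * t), 1 - np.exp(-1.5 * t)])
weights = rng.random((500, 2))                         # per-pixel abundances
pixels = weights @ sources + 0.01 * rng.normal(size=(500, 40))

ica = FastICA(n_components=2, random_state=0)
abundance_maps = ica.fit_transform(pixels)    # per-pixel component weights
estimated_tacs = ica.mixing_.T                # recovered source time curves
print(estimated_tacs.shape)
```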
Recognition of viruses by electron microscopy using higher order spectral features
C.L. Hannah Ong, Vinod Chandran
A limitation of using electron microscopy as a diagnostic tool in virology is the expertise required in analysing and interpreting the images. EM images of different viruses can be very similar in shape. An automated recognition method is proposed in this paper. It is based on radial spectra of higher-order spectral parameters robust to translation, scaling and noise. These features are also rotation invariant and can be averaged for a population of viral particles without the need to normalize and align them. They extract symmetry information and are sensitive enough to distinguish viruses that appear nearly circular to the human eye. The method was tested using three such viruses with very similar morphologies - the Adeno, the HAV and the Astro. 70 viral particles of each class from three images were used for training. In the first test, random unseen sets of viral particles from the same images were chosen. In the second test, images of viruses from other sources, where the specimen preparation and the microscope were different, were used to determine the reliability of the system. Both tests showed high classification accuracy, improving rapidly to 100% as the test ensemble grew to 20 particles.
Automatic identification of bacterial types using statistical imaging methods
Sigal Trattner, Hayit Greenspan, Gapi Tepper, et al.
The objective of the current study is to develop an automatic tool to identify bacterial types using computer-vision and statistical modeling techniques. Bacteriophage (phage) typing methods are used to identify and extract representative profiles of bacterial types, such as Staphylococcus aureus. Current systems rely on the subjective reading of plaque profiles by a human expert. This process is time-consuming and prone to errors, especially as technology is enabling an increase in the number of phages used for typing. The statistical methodology presented in this work provides for an automated, objective and robust analysis of visual data, along with the ability to cope with increasing data volumes.
Shape/Motion
Hippocampal shape analysis: surface-based representation and classification
Li Shen, James Ford, Fillia Makedon, et al.
Surface-based representation and classification techniques are studied for hippocampal shape analysis. The goal is twofold: (1) develop a new framework of salient feature extraction and accurate classification for 3D shape data; (2) detect hippocampal abnormalities in schizophrenia using this technique. A fine-scale spherical harmonic expansion is employed to describe a closed 3D surface object. The expansion can then easily be transformed to extract only shape information (i.e., excluding translation, rotation, and scaling) and create a shape descriptor comparable across different individuals. This representation captures shape features and is flexible enough to do shape modeling, identify statistical group differences, and generate similar synthetic shapes. Principal component analysis is used to extract a small number of independent features from high dimensional shape descriptors, and Fisher's linear discriminant is applied for pattern classification. This framework is shown to be able to perform well in distinguishing clear group differences as well as small and noisy group differences using simulated shape data. In addition, the application of this technique to real data indicates that group shape differences exist in hippocampi between healthy controls and schizophrenic patients.
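A generic sketch of the classification back end (not the authors' code): PCA reduces hypothetical spherical-harmonic shape descriptors to a few features, which are then classified with Fisher's linear discriminant via scikit-learn.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(40, 300))      # 40 subjects x SH shape coefficients
descriptors[20:, :10] += 0.5                  # synthetic group difference
groups = np.repeat([0, 1], 20)                # controls vs. patients

features = PCA(n_components=5).fit_transform(descriptors)   # low-dimensional features
clf = LinearDiscriminantAnalysis().fit(features, groups)    # Fisher's discriminant
print("resubstitution accuracy:", clf.score(features, groups))
```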
Shape classification of malignant lymphomas and leukemia by morphological watersheds and ARMA modeling
Mehmet Celenk, Yinglei Song, Limin Ma, et al.
A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watershed to extract boundaries of cells from their grey-level images. It generates a sequence of Euclidean distances by selecting pixels in a clockwise direction on the boundary of the cell and calculating the Euclidean distances of the selected pixels from the centroid of the cell. A feature vector associated with each cell is then obtained by applying the auto-regressive moving-average (ARMA) model to the generated sequence of Euclidean distances. The clustering measure J3 = trace{Sw^-1 Sm}, involving the within-class (Sw) and mixed (Sm) class-scattering matrices, is computed for both cell classes to provide an insight into the extent to which the different cell classes in the training data are separated. Our test results suggest that the algorithm is highly accurate for the development of an interactive, computer-assisted diagnosis (CAD) tool.
Quantitative analysis of three-dimensional tubular tree structures
Quantitative assessment of tree structures is very important for evaluation of airway or vascular tree morphology and its associated function. Our skeletonization and branch-point identification method provides a basis for tree quantification or tree matching, tree-branch diameter measurement in any orientation, and labeling individual branch segments. All main components of our method were specifically developed to deal with imaging artifacts typically present in volumetric medical image data. The proposed method has been tested in a computer phantom subjected to changes of its orientation as well as in a repeatedly CT-scanned rigid plastic phantom. In all cases, our method produced reliable and well positioned centerlines and branch-points.
Skeletonization on 3D tree-embedded graphs
Cherng-Min Ma, Shu-Yen Wan, Jiann-Der Lee
Thinning extracts unit-width skeletons from original objects. Such unit-width skeletons are useful in analyzing tree-structured objects, such as bronchi or blood vessels. A tree-structured object could be segmented as a graph, since the tails of different branches of the object may be too close together and taken as cycles. One possible approach for extracting a tree structure from an original tree-oriented object is to extract a unit-width skeleton and then extract a tree structure from the unit-width skeleton. One major drawback of this approach is that the information about the thickness of each branch vanishes in the first step, whereas the thickness of a branch is important in deciding which voxel should be reduced and which should not. This paper proposes an approach to obtain unit-width tree structures from original tree-embedded objects directly through the thinning process.
Combined intra- and interslice motion artifact suppression in magnetic resonance imaging
Haitham M. Ahmed, Refaat E. Gabr, Abou-Bakr M. Youssef, et al.
We propose a technique for simultaneous suppression of both intra-slice and inter-slice motion artifacts. Starting from the general assumption of rigid body motion, we consider the case when the acquisition of k-space is in the form of bands of a finite number of lines arranged in a rectilinear fashion to cover the k-space area of interest. We also assume that an averaging factor of at least 2 is desired. Instead of acquiring a full k-space of each image and then averaging the results, we propose a new acquisition strategy based on acquiring the k-space in consecutive bands with 50% overlap, going from one end of the phase-encoding direction to the other end. In the case of no motion, this overlap can be used as the second acquisition (NEX=2). When motion is encountered, both types of motion are reduced to the same form under this acquisition strategy. In particular, detection and correction of motion between consecutive bands result in suppression of both motion types. In this work, this is achieved by utilizing the overlap area to estimate the motion, which is then taken into consideration in further reconstruction (or even acquisition, if real-time control is available on the MR system). We demonstrate the accuracy and computational efficiency of this motion estimation approach. Once the motion is estimated, we propose a simple strategy to reconstruct artifact-free images from the acquired data that takes into account the possible voids in the acquired k-space resulting from rotational motion between blades.
Myocardial motion analysis and visualization from echocardiograms
We present a new framework to estimate and visualize heart motion from echocardiograms. For velocity estimation, we have developed a novel multiresolution optical flow algorithm. In order to account for typical heart motions like contraction/expansion and shear, we use a local affine model for the velocity in space and time. The motion parameters are estimated in the least-squares sense inside a sliding spatio-temporal window. The estimated velocity field is used to track a region of interest which is represented by spline curves. In each frame, a set of sample points on the curves is displaced according to the estimated motion field. The contour in the subsequent frame is obtained by a least-squares spline fit to the displaced sample points. This ensures robustness of the contour tracking. From the estimated velocity, we compute a radial velocity field with respect to a reference point. Inside the time-varying region of interest, the radial velocity is color-coded and superimposed on the original image sequence in a semi-transparent fashion. In contrast to conventional Tissue Doppler methods, this approach is independent of the incident angle of the ultrasound beam. The motion analysis and visualization provides an objective and robust method for the detection and quantification of myocardial malfunctioning. Promising results are obtained from synthetic and clinical echocardiographic sequences.
Registration II
Tensor scale-based image registration
Tangible solutions to image registration are paramount in longitudinal as well as multi-modal medical imaging studies. In this paper, we introduce tensor scale - a recently developed local morphometric parameter - into rigid image registration. A tensor scale-based registration method incorporates local structure size, orientation and anisotropy into the matching criterion, and therefore allows efficient multi-modal image registration and holds potential to overcome the effects of intensity inhomogeneity in MRI. Two classes of two-dimensional image registration methods are proposed - (1) one that computes the angular shift between two images by correlating their tensor scale orientation histograms, and (2) one that registers two images by maximizing the similarity of tensor scale features. Results of applying the proposed methods to proton density and T2-weighted MR brain images of (1) the same slice of the same subject, and (2) different slices of the same subject are presented. The basic superiority of tensor scale-based registration over intensity-based registration is that it may allow the use of local Gestalts formed by the intensity patterns over the image instead of simply considering intensities as isolated events at the pixel level. This would be helpful in dealing with the effects of intensity inhomogeneity and noise in MRI.
Registration of medical images using an interpolated closest point transform: method and validation
Zhujiang Cao, Shiyan Pan, Rui Li, et al.
Image registration is an important procedure for medical diagnosis. Since the large inter-site retrospective validation study led by Fitzpatrick at Vanderbilt University, voxel-based methods and more specifically mutual information (MI) based registration methods have been regarded as the method of choice for rigid-body intra-subject registration problems. In this study we propose a method that is based on the iterative closest point (ICP) algorithm and a pre-computed closest point map obtained with a slight modification of the fast marching method proposed by Sethian. We also propose an interpolation scheme that allows us to find the corresponding points with a sub-voxel accuracy even though the closest point map is defined on a regular grid. The method has been tested both on synthetic and real images and registration results have been assessed quantitatively using the data set provided by the Retrospective Registration Evaluation Project. For these volumes, MR and CT head surfaces were extracted automatically using a level-set technique. Results show that on these data sets this registration method leads to accuracy numbers that are comparable to those obtained with voxel-based methods.
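The core idea of scoring a candidate pose against a precomputed map with sub-voxel interpolation can be sketched as below; note that this sketch uses a Euclidean distance map rather than the paper's fast-marching closest-point map, and the surfaces and pose are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates

# synthetic target object; its distance map is precomputed once
target_mask = np.zeros((64, 64, 64), dtype=bool)
target_mask[20:40, 20:40, 20:40] = True
dist_map = distance_transform_edt(~target_mask)     # distance to target voxels

src_pts = np.argwhere(target_mask)[::50].astype(float)   # placeholder source points

def mean_surface_distance(points, translation):
    # trilinear interpolation of the distance map at transformed point locations
    coords = (points + translation).T               # shape (3, N) for map_coordinates
    return map_coordinates(dist_map, coords, order=1).mean()

# a registration loop would minimize this score over rigid-body parameters
print(mean_surface_distance(src_pts, np.array([1.5, 0.0, -0.5])))
```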
Symmetric image registration
Peter Rogelj, Stanislav Kovacic
The quality of an image match is usually estimated by measuring image similarity. Unfortunately, similarity measures assess only those transformations that change the appearance of the deformed image, and in the case of non-rigid registration the results of the similarity measurement depend on the registration direction. This asymmetric relation leads to registration inconsistency and reduces the quality of registration. In this work we propose a symmetric registration approach, which improves the registration by measuring similarity in both registration directions. The solution presented in this paper is based on the interaction of both images involved in the registration process. Images interact with forces which, according to Newton's action-reaction law, form a symmetric relationship. These forces may transform both of the images, although in our implementation one of the images remains fixed. The experiments performed to demonstrate the advantages of the symmetric registration approach involve registration of simple objects, recovery of synthetic deformations, and inter-patient registration of real head images. The results show improvements in registration consistency and also indicate an improvement in registration correctness.
Overcoming the distortion problem in image-enhanced fluoroscopy
In this paper, we examine the problem of image distortion in fluoroscopy, in particular its effect on applications in image-guided surgery that make use of tracking and calibration techniques to relate a point in physical space in the operating room (OR) to the corresponding point in an intraoperative fluoroscopic image. We call such applications image-enhanced fluoroscopy. In order to derive the relationship between physical space and a fluoroscopic image, two sets of parameters must be known. The first set describes the pose of the fluoroscope at the time when the image was taken; these parameters are usually estimated by rigidly attaching a device to the fluoroscope that can be tracked by either an optical or magnetic tracking system present in the OR. The second set of parameters describes the projection model of the fluoroscope itself: this set includes the focal length and the X-ray source position relative to the image intensifier. In the case that we examine here, these values are estimated using a set of BBs of known geometry placed between the X-ray source and the image intensifier. Because the image intensifier is not perfectly planar, and also because of ambient magnetic fields, the image displayed by the fluoroscope does not match that predicted by a linear projection model. Using the known geometry of the BBs appearing in the image, we may attempt to recover the image that would be given by a linear projection. We call this process dewarping. In this paper, we address the question of the optimal dewarp function. Although a fifth-order polynomial is commonly used, we show that a third-order polynomial is superior. We use real fluoroscope images to demonstrate the potential dangers of using too high a polynomial order for the dewarping process.
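A small sketch of fitting a third-order polynomial dewarp by least squares, mapping detected BB positions to their ideal linear-projection positions; the point pairs and the synthetic distortion below are placeholders for the real BB grid.

```python
import numpy as np

def poly_terms(u, v, order=3):
    # all monomials u^i * v^j with i + j <= order (10 terms for order 3)
    return np.stack([u**i * v**j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)], axis=-1)

rng = np.random.default_rng(0)
detected = rng.uniform(0, 512, size=(100, 2))                 # distorted BB centers
ideal = detected + 0.002 * (detected - 256.0) ** 2 / 256.0    # synthetic distortion

A = poly_terms(detected[:, 0], detected[:, 1], order=3)
coeff_x, *_ = np.linalg.lstsq(A, ideal[:, 0], rcond=None)
coeff_y, *_ = np.linalg.lstsq(A, ideal[:, 1], rcond=None)

dewarped = np.stack([A @ coeff_x, A @ coeff_y], axis=1)       # corrected positions
print("max residual:", np.abs(dewarped - ideal).max())
```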
Validation of 3D ultrasound: CT registration of prostate images
Evelyn A. Firle, Stefan Wesarg, Grigoris Karangelis, et al.
Worldwide, 20% of men are expected to develop prostate cancer at some time in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. The most interesting radiation treatment for prostate cancer is the brachytherapy procedure. For the safe delivery of that therapy, imaging is critically important. In several cases where a CT device is available, a combination of the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications the CT and U/S scans should be registered and fused in a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes with 3D U/S images of the same anatomical region, i.e. the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost, real-time U/S imaging and offers a multi-modality imaging environment for further target and anatomy delineation. This tool has been integrated into the visualization software "InViVo", which has been developed over several years at Fraunhofer IGD in Darmstadt.
Segmentation of three-dimensional images using non-rigid registration: methods and validation with application to confocal microscopy images of bee brains
Torsten Rohlfing, Robert Brandt, Randolf Menzel, et al.
This paper describes the application and validation of automatic segmentation of three-dimensional images by non-rigid registration to atlas images. The registration-based segmentation technique is applied to confocal microscopy images acquired from the brains of 20 bees. Each microscopy image is registered to an already segmented reference atlas image using an intensity-based non-rigid image registration algorithm. This paper evaluates and compares four different approaches: registration to an individual atlas image (IND), registration to an average shape atlas image (AVG), registration to the most similar image from a database of individual atlas images (SIM), and registration to all images from a database of individual atlas images with subsequent fuzzy segmentation (FUZ). For each strategy, the segmentation performance of the algorithm was quantified using both a global segmentation correctness measure and the similarity index. Manual segmentation of all microscopy images served as a gold standard. The best segmentation result (median correctness 91 percent of all voxels) was achieved using the FUZ paradigm. Robustness was also the best for this strategy (minimum correctness over all individuals 84 percent). The mean similarity index value of segmentations produced by the FUZ paradigm is 0.86 (IND, 0.81; AVG, 0.84; SIM, 0.82). The superiority of the FUZ paradigm is statistically significant (two-sided paired t-test, P<0.001).
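The similarity index used for validation is commonly defined as the Dice overlap between the automatic and manual label masks; a minimal computation on synthetic masks is shown below for illustration.

```python
import numpy as np

def similarity_index(auto_mask, manual_mask):
    # Dice overlap: 2|A intersect M| / (|A| + |M|)
    inter = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * inter / (auto_mask.sum() + manual_mask.sum())

rng = np.random.default_rng(0)
manual = rng.random((64, 64, 64)) > 0.7        # placeholder manual label
auto = manual.copy()
auto[:2] = ~auto[:2]                           # perturb to mimic segmentation error
print("similarity index: %.3f" % similarity_index(auto, manual))
```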
Deformable Geometry
Independent component analysis in statistical shape models
Mehmet Üzümcü, Alejandro F. Frangi, Johan H. C. Reiber, et al.
Statistical shape models generally use Principal Component Analysis (PCA) to describe the main directions of shape variation in a training set of example shapes. However, PCA assumes a number of restrictions on the data that do not always hold. In this paper we explore the use of an alternative shape decomposition, Independent Component Analysis (ICA), which does not assume a Gaussian distribution of the input data. Several different methods for performing ICA are available. The three most frequently used methods were tested in order to evaluate their effect on the resulting vectors. In statistical shape models, generally not all the eigenvectors that result from the PCA are used. Vectors describing noise are discarded to obtain a compact description of the data set. The selection of these vectors is based on the natural ordering of the vectors according to the variance in that direction, which is inherent to PCA. With ICA, however, there is no natural ordering of the vectors. Four methods for sorting the ICA vectors are investigated. The different ICA methods yielded highly similar yet not identical results. Vectors obtained with ICA showed localized shape variations, whereas eigenvectors obtained with PCA show global shape variations. From the results of the ordering methods it can be seen that PCA is better suited for dimensionality reduction. Of the ordering methods that were tested, the best results were obtained with the ordering according to the locality of the shape variations.
Three-dimensional active shape model matching for left ventricle segmentation in cardiac CT
Hans C. van Assen, Rob J. van der Geest, Mikhail G. Danilouchkine, et al.
Manual quantitative analysis of cardiac left ventricular function using multi-slice CT is labor intensive because of the large datasets. We present an automatic, robust and intrinsically three-dimensional segmentation method for cardiac CT images, based on 3D Active Shape Models (ASMs). ASMs describe shape and shape variations over a population as a mean shape and a number of eigenvariations, which can be extracted by e.g. Principal Component Analysis (PCA). During the iterative ASM matching process, the shape deformation is restricted within statistically plausible constraints (±3σ). Our approach has two novel aspects: the 3D-ASM application to volume data of arbitrary planar orientation, and the application to image data from a modality other than the one used to train the model, without the necessity of retraining it. The 3D-ASM was trained on MR data and quantitatively evaluated on 17 multi-slice cardiac CT data sets, with respect to calculated LV volume (blood pool plus myocardium) and endocardial volume. In all cases, model matching was convergent and the final results showed good model performance. Bland-Altman analysis, however, showed that blood pool volume was slightly underestimated and LV volume slightly overestimated by the model. Nevertheless, these errors remain within clinically acceptable margins. Based on this evaluation, we conclude that our 3D-ASM combines robustness with clinically acceptable accuracy. Without retraining for cardiac CT, we could adapt a model trained on cardiac MR data sets for application to cardiac CT volumes, demonstrating the flexibility and feasibility of our matching approach. Possible causes for the systematic errors lie in edge detection, model constraints, or image data reconstruction. For all these categories, solutions are discussed.
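The ±3σ constraint on the shape parameters mentioned above can be illustrated with a short sketch; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def constrain_shape(b, eigenvalues, n_sigma=3.0):
    """Clamp ASM shape parameters b to within +/- n_sigma standard
    deviations, where eigenvalues are the variances of the PCA modes."""
    limits = n_sigma * np.sqrt(eigenvalues)
    return np.clip(b, -limits, limits)

def reconstruct_shape(mean_shape, modes, b):
    """x = x_mean + P b, with the modes stored as rows of P."""
    return mean_shape + modes.T @ b
```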
Left ventricle contour detection in x-ray angiograms using multiview active appearance models
Elco Oost, Boudewijn P. F. Lelieveldt, Gerhard Koning, et al.
Automatic Left Ventricle (LV) border detection in X-ray angiograms for the quantitative assessment of cardiac function has proven to be a highly challenging task. The main difficulty is segmenting the End Systolic (ES) phase, in which much of the contrast dye has been squeezed out of the LV due to contraction, resulting in poor LV definition. 2D Active Appearance Models (AAMs) have shown utility for segmenting End Diastolic (ED) angiograms, but do not perform satisfactorily on individual ES angiograms. In this work, we present a new Multi-view AAM in which we exploit the existing correlation in shape and texture between the ED and ES phases to steer the segmentation of both frames simultaneously. Model position and orientation remain independent, whereas appearance statistics are coupled. In addition, an AAM is presented in which the gray-value information of the inner part of the LV is not taken into account. This so-called boundary AAM is applied mainly to enhance local boundary localization performance. Both models are applied in a combined manner and are validated quantitatively. In 61 out of 70 experiments good convergence for both ED and ES segmentation was achieved, with average border positioning errors of 1.86 mm (ED) and 1.93 mm (ES).
Three-dimensional active contour model for characterization of solid breast masses on three-dimensional ultrasound images
Berkman Sahiner, Aditya Ramachandran, Heang-Ping Chan, et al.
The accuracy of discrimination between malignant and benign solid breast masses on ultrasound images may be improved by using computer-aided diagnosis and 3-D information. The purpose of this study was to develop automated 3-D segmentation and classification methods for 3-D ultrasound images, and to compare the classification accuracy based on 2-D and 3-D segmentation techniques. The 3-D volumes were recorded by translating the transducer across the lesion in the z-direction while conventional 2-D images were acquired in the x-y plane. 2-D and 3-D segmentation methods based on active contour models were developed to delineate the mass boundaries. Features were automatically extracted based on the segmented mass shapes, and were merged into a malignancy score using a linear classifier. 3-D volumes containing biopsy-proven solid breast masses were collected from 102 patients (44 benign and 58 malignant). A leave-one-out method was used for feature selection and classifier design. The areas Az under the test receiver operating characteristic curves for the classifiers using the 3-D and 2-D active contour boundaries were 0.88 and 0.84, respectively. More than 45% of the benign masses could be correctly identified using the 3-D features without missing a malignancy. Our results indicate that an accurate computer classifier can be designed for differentiation of malignant and benign solid breast masses on 3-D sonograms.
Integration of ultrasound-based registration with statistical shape models for computer-assisted orthopedic surgery
We present the first use of ultrasound to instantiate and register a statistical shape model of bony structures. Our aim is to provide accurate image-guided total hip replacement without the need for a preoperative computed tomography (CT) scan. We propose novel methods to determine the location of the bone surface intraoperatively using percutaneous ultrasound and, with the aid of a statistical shape model, reconstruct a complete three-dimensional (3D) model of the relevant anatomy. The centre of the femoral head is used as a further constraint to improve accuracy in regions not accessible to ultrasound. CT scans of the femur from a database were aligned to one target CT scan using a non-rigid registration algorithm. The femur surface from the target scan was then propagated to each of the subjects and used to produce a statistical shape model. A cadaveric femur not used in the shape model construction was scanned using freehand 3D ultrasound. The iterative closest point (ICP) algorithm was used to match points corresponding to the bone surface derived from ultrasound with the statistical bone surface model. We used the mean shape and the first five modes of variation of the shape model. The resulting root mean square (RMS) point-to-surface distance from ICP was minimised to provide the best fit of the model to the ultrasound data.
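A minimal sketch of the ICP matching step, assuming the ultrasound-derived bone points and a sampled model surface are given as Nx3 arrays; this is a generic rigid ICP with a Kabsch solve, not the authors' exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source_pts, target_pts, n_iter=50):
    """Rigidly align source_pts (Nx3, e.g. bone points from ultrasound)
    to target_pts (Mx3, e.g. points sampled from the instantiated shape
    model).  Returns rotation R, translation t and the final RMS distance."""
    tree = cKDTree(target_pts)
    src = source_pts.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src)                  # closest model points
        matched = target_pts[idx]
        # Kabsch: optimal rigid transform between the matched point pairs
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    dist, _ = tree.query(src)
    return R_total, t_total, np.sqrt(np.mean(dist ** 2))
```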
Three-dimensional appearance model for hippocampus segmentation from MRI
Jan Klemencic, Josien P. W. Pluim, Max A. Viergever, et al.
A generic approach to building a 3D active appearance model (AAM) for medical image segmentation is presented. Provided a training set of manually segmented images, the model-building procedure is fully automatic. Shape information is obtained from a free-form image registration algorithm. Our AAM is evaluated using the hippocampus as the structure of interest (SOI); the training set consists of 28 manually segmented brain MR scans. The main contributions of this work are: (a) A concept of incorporating the SOI surroundings into the AAM. The concept is also applicable to other medical image based AAMs. (b) A two-step free-form registration procedure (matching the grayscale images first, then matching the segmented images). In this manner, landmark correspondence is improved, and the expert knowledge (i.e., manual segmentations) is less compromised by registration inaccuracies. (c) Two distinct AAM versions are compared: one without and one with statistical texture variation information. Compared to segmentation of an unknown image by registration to a reference image, the main advantage of the AAM is speed: the computation time is brought down from around 5 hours (for free-form deformation computation) to only a few minutes (for optimizing the model parameters), with only a slight degradation in segmentation accuracy.
Tissue Image Analysis
MRI tissue segmentation incorporating a bias field modulated smoothness prior
Enmin Song, Valerie A. Cardenas, Diana Sacrey, et al.
This paper examines a refinement to probabilistic intensity based tissue segmentation methods, which makes use of knowledge derived from an MRI bias field estimate. Intensity based labeling techniques have employed local smoothness priors to reduce voxel level tissue labeling errors, by making use of the assumption that, within uniform regions of tissue, a voxel should be highly likely to have a similar tissue assignment to its neighbors. Increasing the size of this neighborhood provides more robustness to noise, but reduces the ability to describe small structures. However, when intensity bias due to RF field inhomogeneity is present within the MRI data, local contrast to noise may vary across the image. We therefore propose an approach to refining the labeling by making use of the bias field estimate, to adapt the neighborhood size applied to reduce local labeling errors. We explore the use of a radially symmetric Gaussian weighted neighborhood, and the use of the mean and median of the adapted region probabilities, to refine local probabilistic labeling. The approach is evaluated using the Montreal brainweb MRI simulator as a gold standard providing known gray, white and CSF tissue segmentation. These results show that the method is capable of improving the local tissue labeling in areas most influenced by inhomogeneity. The method appears most promising in its application to regional tissue volume analysis or higher field MRI data where bias field inhomogeneity can be significant.
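A rough sketch of the idea, assuming per-class tissue probability maps and a multiplicative bias field estimate are available; blending two fixed Gaussian scales, as done below, is a simplification of the continuously adapted neighborhood described above, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_adaptive_smoothing(prob_maps, bias_field,
                            sigma_small=1.0, sigma_large=3.0):
    """Refine per-class tissue probability maps with a Gaussian-weighted
    neighborhood whose effective size grows where the estimated bias field
    (and hence the loss of local contrast-to-noise) is large.
    prob_maps: (n_classes, X, Y, Z); bias_field: (X, Y, Z), normalized so
    that 1.0 means no inhomogeneity."""
    # 0 where the bias is negligible, 1 where it deviates strongly from unity
    w = np.clip(np.abs(bias_field - 1.0) / 0.2, 0.0, 1.0)
    refined = np.empty(prob_maps.shape, dtype=float)
    for c, p in enumerate(prob_maps):
        p_small = gaussian_filter(p, sigma_small)
        p_large = gaussian_filter(p, sigma_large)
        refined[c] = (1.0 - w) * p_small + w * p_large
    refined /= refined.sum(axis=0, keepdims=True)   # renormalize per voxel
    return refined.argmax(axis=0), refined
```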
Improved inversion of MR elastography images by spatiotemporal directional filtering
Armando Manduca, David S. Lake, Richard L. Ehman
MR elastography can visualize and measure propagating shear waves in tissue-like materials subjected to harmonic mechanical excitation. This allows the calculation of local values of material parameters such as shear modulus and attenuation. Various inversion algorithms to perform such calculations have been proposed, but they are sensitive to areas of low displacement amplitude (and hence low SNR) that result from interference patterns due to reflection and refraction. A spatio-temporal directional filter applied as a pre-processing step can separate interfering waves so they can be processed separately. Weighted combinations of inversions from such directionally separated data sets can significantly improve reconstructions of shear modulus and attenuation.
Differentiating therapy-induced leukoencephalopathy from unmyelinated white matter in children treated for acute lymphoblastic leukemia (ALL)
Reliably detecting subtle therapy-induced leukoencephalopathy in children treated for cancer is a challenging task because its MR properties and location are nearly identical to those of unmyelinated white matter. T1, T2, PD, and FLAIR images were collected for 44 children aged 1.7-18.7 (median 5.9) years near the start of therapy for ALL. The ICBM atlas and corresponding a priori maps were spatially normalized to each patient and resliced using SPM99 software. A combined imaging set consisting of MR images and WM, GM and CSF a priori maps was then analyzed with a Kohonen Self-Organizing Map. Vectors from hyperintense regions were compared to normal-appearing genu vectors from the same patient. Analysis of the distributions of the differences, calculated on T2 and FLAIR images, revealed two distinct groups. The first, large group, assumed to be normal unmyelinated white matter, consisted of 37 patients with changes in FLAIR ranging from 80 to 147 (mean 117±17) and T2 ranging from 92 to 217 (mean 144±28). The second group, assumed to represent leukoencephalopathy, consisted of seven patients with changes in FLAIR ranging from 154 to 196 (mean 171±19) and T2 ranging from 190 to 287 (mean 216±33). A threshold was established for both FLAIR (change > 150) and T2 (change > 180).
Quantification of trabecular bone anisotropy by means of tensor scale
Trabecular bone (TB) is a network of interconnected struts and plates that constantly remodels to adapt dynamically to the stresses to which it is subjected, in such a manner that the trabeculae are oriented along the major stress lines (Wolff's Law). Structural anisotropy can be expressed in terms of the fabric tensor. Next to bone density, TB architecture has been found to be the largest determinant of bone biomechanical behavior. Existing methods, including mean intercept length (MIL), provide only a global statistical average of TB anisotropy and, generally, require a large sample volume. In this paper, we present a new method, based on the recently conceived notion of tensor scale, which provides regional information on TB orientation and anisotropy. A preliminary evaluation of the method in terms of its sensitivity to resolution and image rotation is reported. The characteristic differences between TB anisotropy computed from transverse and longitudinal sections have been studied, and potential applications of the method to in vivo MR imaging are demonstrated. Finally, the ongoing extension of three-dimensional tensor scale to quantitative analysis of tissue morphology is discussed.
Application of the standard Hough transform to high-resolution MRI of human trabecular bone to predict mechanical strength
In this study we introduce two non-linear structural measures based on the Standard Hough Transform (SHT) that are applied to high-resolution MR images of human trabecular bone specimens in order to predict biomechanical properties. The results are compared to bone mineral density (BMD) and linear morphometric parameters. Axial MR images (voxel size: 117x156x300 μm3) of 33 human femoral and 10 spinal specimens are obtained using a 3D gradient-echo sequence. After measurement of BMD by quantitative computed tomography (QCT), all specimens are tested destructively for maximum compressive strength (MCS). The SHT is applied to the binarized and Sobel-filtered images, and the peak value (maxH) and its corresponding bin (posH) of the normalized Hough spectrum are determined, as well as linear measures (apparent bone fraction (app.BV/TV), apparent trabecular separation (app.Tb.Sp), apparent trabecular perimeter per unit area (app.Tb.Perim)). For the spinal [femoral] specimens, R2 for MCS vs. maxH is 0.72 (p=0.004) [0.49 (p<0.001)], R2 for MCS vs. posH is 0.56 (p=0.013) [0.55 (p<0.001)], and R2 for MCS vs. BMD is 0.43 (p=0.041) [0.72 (p<0.001)]. Correlations of the conventional, linear morphometric parameters and MCS are lower than those for the SHT-based measures or BMD, ranging from 0.20 (p=0.003) for app.BV/TV to 0.46 (p<0.001) for app.Tb.Sp. Prediction of MCS by maxH, posH, or BMD alone is improved by combination with the linear morphometric parameters in a linear regression model (R2 = 0.79). In conclusion, the biomechanical strength of human trabecular bone in vitro can effectively be predicted from high-resolution MR images by structural measures based on the SHT. In the vertebral specimens these are superior to BMD or conventional structural measures in predicting bone strength.
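A hedged sketch of the maxH/posH computation on a single slice: the exact definition of the normalized Hough spectrum is not given in the abstract, so the histogram-of-accumulator-values definition below, and all parameter choices, are assumptions.

```python
import numpy as np
from skimage.filters import sobel, threshold_otsu
from skimage.transform import hough_line

def hough_spectrum_features(slice_2d, n_bins=64):
    """Compute maxH (peak of a normalized Hough spectrum) and posH (its bin)
    from a Sobel-filtered, Otsu-binarized trabecular bone image."""
    edges = sobel(slice_2d.astype(float))
    binary = edges > threshold_otsu(edges)
    accumulator, angles, dists = hough_line(binary)
    acc = accumulator.ravel().astype(float)
    spectrum, _ = np.histogram(acc / acc.max(), bins=n_bins, range=(0, 1))
    spectrum = spectrum / spectrum.sum()        # normalized Hough spectrum
    posH = int(np.argmax(spectrum))
    maxH = float(spectrum[posH])
    return maxH, posH
```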
Biomechanical simulation of atrophy in MR images
Andrew D. Castellano Smith, William R. Crum, Derek L. G. Hill, et al.
Progressive cerebral atrophy is a physical component of the most common forms of dementia - Alzheimer's disease, vascular dementia, Lewy-Body disease and fronto-temporal dementia. We propose a phenomenological simulation of atrophy in MR images that provides gold-standard data; the origin and rate of progression of atrophy can be controlled and the resultant remodelling of brain structures is known. We simulate diffuse global atrophic change by generating global volumetric change in a physically realistic biomechanical model of the human brain. Thermal loads are applied to either single, or multiple, tissue types within the brain to drive tissue expansion or contraction. Mechanical readjustment is modelled using finite element methods (FEM). In this preliminary work we apply these techniques to the MNI brainweb phantom to produce new images exhibiting global diffuse atrophy. We compare the applied atrophy with that measured from the images using an established quantitative technique. Early results are encouraging and suggest that the model can be extended and used for validation of atrophy measurement techniques and non-rigid image registration, and for understanding the effect of atrophy on brain shape.
Segmentation I
Large three-dimensional data-set segmentation using a graph-theoretic energy-minimization approach
Brian Parker, Dagan David Feng
A new graph algorithm for the multiscale segmentation of large three-dimensional medical data sets is presented. It is a region-merging segmentation algorithm based on minimizing the Mumford-Shah energy. The Mumford-Shah functional formulation leads to improved segmentation results compared with alternative approaches; and the graph theoretic approach yields improved performance and simplified data structures. Also, the graph algorithm acts on only a subset of the full data set at a given time, allowing its application to large data sets such as whole-body scans. Results on a head MRI data set are presented and compared with a manual segmentation of this data set.
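For the piecewise-constant form of the Mumford-Shah energy, the cost of merging two regions has a simple closed form, sketched below; the greedy merging loop and graph data structures are omitted, and the symbol names are illustrative rather than the authors' notation.

```python
def merge_cost(n1, mean1, n2, mean2, shared_boundary_len, nu):
    """Change in the piecewise-constant Mumford-Shah energy when two regions
    are merged: the squared-error term always increases by
    n1*n2/(n1+n2) * (mean1 - mean2)**2, while the boundary term drops by
    nu times the length of the removed shared boundary.  Greedy region
    merging repeatedly merges the pair with the lowest cost."""
    data_term = n1 * n2 / float(n1 + n2) * (mean1 - mean2) ** 2
    return data_term - nu * shared_boundary_len

def merged_stats(n1, mean1, n2, mean2):
    """Size and intensity mean of the region obtained by merging."""
    n = n1 + n2
    return n, (n1 * mean1 + n2 * mean2) / n
```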
Image enhancement and segmentation of fluid-filled structures in 3D ultrasound images
Vikram Chalana, Stephen Dudycha, Gerald McMorrow
Segmentation of fluid-filled structures, such as the urinary bladder, from three-dimensional ultrasound images is necessary for measuring their volume. This paper describes a system for image enhancement, segmentation and volume measurement of fluid-filled structures on 3D ultrasound images. The system was applied for the measurement of urinary bladder volume. Results show an average error of less than 10% in the estimation of the total bladder volume.
Scatter segmentation in dynamic SPECT images using principal component analysis
Klaus D. Toennies, Anna Celler, Stephan Blinder, et al.
Dynamic single photon emission computed tomography (dSPECT) provides time-varying spatial information about changes of tracer distribution in the body from data acquired using a standard (single slow rotation) protocol. Variations of tracer distribution observed in the images might be due to physiological processes in the body, but may also stem from reconstruction artefacts. These two possibilities are not easily separated because of the highly underdetermined nature of the dynamic reconstruction problem. Since it is expected that temporal changes in tracer distribution may carry important diagnostic information, the analysis of dynamic SPECT images should consider and use this additional information. In this paper we present a segmentation scheme for aggregating voxels with similar time activity curves (TACs). Voxel aggregates are created through region merging based on a similarity criterion on a reduced set of features, which is derived after transformation into eigenspace. Region merging was carried out on dSPECT images from simulated and patient myocardial perfusion studies using various stopping criteria and ranges of accumulated variances in eigenspace. Results indicate that segmentation clearly separates heart and liver tissues from the background. The segmentation quality did not change significantly if more than 99% of the variance was incorporated into the feature vector. The heart behaviour followed an expected exponential decay curve while some variation of time behaviour in liver was observed. Scatter artefacts from photons originating from liver could be identified in long as well as in short studies.
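A minimal sketch of the eigenspace feature reduction for time-activity curves, assuming the dynamic reconstruction is available as a 4D array and using scikit-learn's PCA; the plain Euclidean distance shown as the merging criterion may differ from the authors' similarity criterion.

```python
import numpy as np
from sklearn.decomposition import PCA

def tac_features(dynamic_volume, variance_kept=0.99):
    """Project per-voxel time-activity curves (TACs) of a dSPECT
    reconstruction into eigenspace, keeping enough components to explain
    the requested fraction of the variance.  dynamic_volume has shape
    (n_time, X, Y, Z); the result has shape (X, Y, Z, n_components)."""
    n_time = dynamic_volume.shape[0]
    tacs = dynamic_volume.reshape(n_time, -1).T            # voxels x time
    pca = PCA(n_components=variance_kept, svd_solver='full')
    feats = pca.fit_transform(tacs)
    return feats.reshape(dynamic_volume.shape[1:] + (feats.shape[1],))

def tac_similarity(f1, f2):
    """Euclidean criterion used to decide whether two neighboring regions
    (represented by their mean feature vectors) should be merged."""
    return np.linalg.norm(f1 - f2)
```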
Automatic segmentation of brain structures for radiation therapy planning
Pierre-Francois D D'Haese, Valerie Duay, Rui Li, et al.
Delineation of structures to irradiate (the tumors) as well as structures to be spared (e.g., optic nerve, brainstem, or eyes) is required for advanced radiotherapy techniques. Due to a lack of time and the number of patients to be treated, these cannot always be segmented accurately, which may lead to suboptimal plans. A possible solution is to develop methods to identify these structures automatically. This study tests the hypothesis that a fully automatic, atlas-based segmentation method can be used to segment most brain structures needed for radiotherapy plans even though tumors may deform normal anatomy substantially. This is accomplished by registering an atlas with a subject volume using a combination of rigid and non-rigid registration algorithms. Segmented structures in the atlas volume are then mapped to the corresponding structures in the subject volume using the computed transformations. The method we propose has been tested on two sets of data, i.e., adults and children/young adults. For the first set of data, contours obtained automatically have been compared to contours delineated manually by three physicians. For the other set, qualitative results are presented.
Bone suppression in CT angiography data by region-based multiresolution segmentation
Multi-slice CT (MSCT) scanners have the advantage of high and isotropic image resolution, which broadens the range of examinations for CT angiography (CTA). A very important method to present the large amount of high-resolution 3D data is visualization by maximum intensity projections (MIP). A problem with MIP projections in angiography is that bones often hide the vessels of interest, especially the skull and vertebral column. Software tools for a manual selection of bone regions and their suppression in the MIP are available, but processing is time-consuming and tedious. A highly computer-assisted or even fully automated suppression of bones would considerably speed up the examination and probably increase the number of examined cases. In this paper we investigate the suppression (or removal) of bone regions in 3D CT data sets for vascular examinations of the head with a visualization of the carotids and the circle of Willis.
Automatic model-based 3D lesion segmentation for evaluation of MR-guided thermal ablation therapy
We are investigating magnetic resonance imaging-guided radiofrequency ablation of pathologic tissue. For many tissues, the resulting lesions have a characteristic two-boundary appearance featuring an inner region and an outer hyper-intense margin in both contrast-enhanced (CE) T1 and T2 weighted MR images. We created a twelve-parameter, three-dimensional, globally deformable model with two quadratic surfaces that describe both lesion zones. We present an energy minimization approach to automatically fit the model to a grayscale MR image volume. We applied the automatic model to in vivo lesions (n = 5) in a rabbit thigh model, using CE T1 and T2 weighted MR images, and compared the results to multi-operator manually segmented boundaries. For all lesions, the median error was <1.0 mm for both the inner and outer regions, values that compare favorably to a voxel width of 0.7 mm. These results suggest that our method provides a precise, automatic approximation of lesion shape. We believe that the method has applications in lesion visualization, volume estimation, image quantification, and volume registration.
Pattern Recognition and Database Retrieval
Application of support vector machines to breast cancer screening using mammogram and clinical history data
Walker H. Land Jr., Dan McKee, Roberto Velazquez, et al.
The objectives of this paper are to discuss: (1) the development and testing of a new Evolutionary Programming (EP) method to optimally configure Support Vector Machine (SVM) parameters for facilitating the diagnosis of breast cancer; (2) evaluation of EP derived learning machines when the number of BI-RADS and clinical history discriminators are reduced from 16 to 7; (3) establishing system performance for several SVM kernels in addition to the EP/Adaptive Boosting (EP/AB) hybrid using the Digital Database for Screening Mammography, University of South Florida (DDSM USF) and Duke data sets; and (4) obtaining a preliminary evaluation of the measurement of SVM learning machine inter-institutional generalization capability using BI-RADS data. Measuring performance of the SVM designs and EP/AB hybrid against these objectives will provide quantitative evidence that the software packages described can generalize to larger patient data sets from different institutions. Most iterative methods currently in use to optimize learning machine parameters are time-consuming processes, which sometimes yield sub-optimal values resulting in performance degradation. SVMs are new machine intelligence paradigms, which use the Structural Risk Minimization (SRM) concept to develop learning machines. These learning machines can always be trained to provide global minima, given that the machine parameters are optimally computed. In addition, several system performance studies are described which include EP derived SVM performance as a function of: (a) population and generation size as well as a method for generating initial populations and (b) iteratively derived versus EP derived learning machine parameters. Finally, the authors describe a set of experiments providing preliminary evidence that both the EP/AB hybrid and SVM Computer Aided Diagnostic C++ software packages will work across a large population of patients, based on a data set of approximately 2,500 samples from five different institutions.
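A toy sketch of evolutionary tuning of RBF-SVM parameters with cross-validated fitness; this is a generic EP-style mutation-and-selection loop, not the authors' EP or EP/AB hybrid, and all parameter ranges are illustrative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ep_tune_svm(X, y, generations=20, pop_size=10, seed=0):
    """Search over (log10 C, log10 gamma) for an RBF support vector
    machine, scored by 5-fold cross-validation accuracy."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(low=[-2.0, -5.0], high=[4.0, 1.0], size=(pop_size, 2))

    def fitness(ind):
        clf = SVC(kernel='rbf', C=10.0 ** ind[0], gamma=10.0 ** ind[1])
        return cross_val_score(clf, X, y, cv=5).mean()

    for _ in range(generations):
        offspring = pop + rng.normal(scale=0.3, size=pop.shape)    # mutation
        candidates = np.vstack([pop, offspring])
        scores = np.array([fitness(ind) for ind in candidates])
        pop = candidates[np.argsort(-scores)[:pop_size]]           # selection
    best_c, best_gamma = 10.0 ** pop[0]
    return best_c, best_gamma
```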
Investigation of automated feature extraction techniques for applications in cancer detection from multispectral histopathology images
Recent developments in imaging technology mean that it is now possible to obtain high-resolution histological image data at multiple wavelengths. This allows pathologists to image specimens over a full spectrum, thereby revealing (often subtle) distinctions between different types of tissue. With this type of data, the spectral content of the specimens, combined with quantitative spatial feature characterization may make it possible not only to identify the presence of an abnormality, but also to classify it accurately. However, such are the quantities and complexities of these data, that without new automated techniques to assist in the data analysis, the information contained in the data will remain inaccessible to those who need it. We investigate the application of a recently developed system for the automated analysis of multi-/hyper-spectral satellite image data to the problem of cancer detection from multispectral histopathology image data. The system provides a means for a human expert to provide training data simply by highlighting regions in an image using a computer mouse. Application of these feature extraction techniques to examples of both training and out-of-training-sample data demonstrate that these, as yet unoptimized, techniques already show promise in the discrimination between benign and malignant cells from a variety of samples.
Design of three-class classifiers in computer-aided diagnosis: Monte Carlo simulation study
For the development of computer-aided diagnosis (CAD) systems, a classifier that can effectively differentiate more than two classes is often needed. For example, a detected object on an image may need to be classified as a malignant lesion, a benign lesion, or normal tissue. Currently, a three-class problem is usually treated as a two-stage, two-class problem, in which the detected object is first differentiated as a lesion or normal tissue, and, in the second stage, the lesion is further classified as malignant or benign. In this work, we explored methods for classification of an object into one of the three classes, and compared the three-class approach with the common two-class approach. We conducted Monte Carlo simulation studies to evaluate the dependence of the performance of 3-class classification schemes on design sample size and feature space configurations. A k-dimensional multivariate normal feature space with three classes having different means was assumed. Linear classifiers and artificial neural networks (ANNs) were examined. ROC analysis for the 3-class approach was explored under simplifying conditions. A performance index representing the normalized volume under the ROC surface (NVUS) was defined. Linear classifiers for classification of three classes and two classes were compared. We found that a 3-class approach with a linear classifier can achieve a higher NVUS than that of a 2-class approach. We further compared the performance of an ANN having three or one output nodes with a linear classifier. At large sample sizes, the performance of a 3-output-node ANN was basically the same as that of a one-output-node ANN. When the three class distributions had equal covariance matrices and the distances between pairs of class means were equal, the linear classifiers could reach a higher performance for the test samples than the ANN when the design sample size was small; the linear classifier and the ANNs approached the same performance in the limit of large design sample size. However, under complex feature space configurations such as the class means located along a line, the class in the middle was poorly differentiated from the other two classes by the linear classifiers for any dimensionality; the ANN outperformed the linear classifier at all design sample sizes studied. This simulation study may provide some useful information to guide the design of 3-class classifiers for various CAD applications.
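For intuition, a Monte Carlo estimate of the volume under a three-class ROC surface for a single ordered decision variable might look as follows; the NVUS defined in the paper may be normalized differently, so this sketch illustrates the general quantity only.

```python
import numpy as np

def vus_monte_carlo(scores_1, scores_2, scores_3, n_samples=200000, seed=0):
    """Monte Carlo estimate of the volume under the three-class ROC surface
    for one decision variable and ordered classes: the probability that a
    random triple (one score per class) is correctly ordered,
    scores_1 < scores_2 < scores_3.  Chance level is 1/6."""
    rng = np.random.default_rng(seed)
    a = rng.choice(np.asarray(scores_1), n_samples)
    b = rng.choice(np.asarray(scores_2), n_samples)
    c = rng.choice(np.asarray(scores_3), n_samples)
    return float(np.mean((a < b) & (b < c)))
```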
Content-based image analysis: object extraction by data-mining on hierarchically decomposed medical images
Reliable automated analysis and examination of biomedical images requires reproducible and robust extraction of contained image objects. However, the necessary description of image content as visually relevant objects is context-dependent and determined by parameters such as resolution, orientation, and, of course, the clinical-diagnostic question. Therefore a computer-based approach has to model both examination context and image acquisition as expert knowledge. Generally, static solutions are not satisfying because a change of application will most likely require a redesign of the analysis process. In contrast, this paper describes a flexible approach, which allows medical examiners the context-sensitive extraction of sought objects from almost arbitrary medical images, without requiring technical knowledge of image analysis and processing. Since this methodology is applicable to any analysis task on large image sets, it works for general image series analysis as well as image retrieval. The new approach combines classical image analysis with the idea of data mining to close the gap between low abstraction on the technical level and high-level expert knowledge on image content and understanding.
Content-based image retrieval as a computer aid for the detection of mammographic masses
The purpose of the study was to develop and evaluate a content-based image retrieval (CBIR) approach as a computer aid for the detection of masses in screening mammograms. The study was based on the Digital Database for Screening Mammography (DDSM). Initially, a knowledge database of 1,009 mammographic regions was created. They were all 512x512 pixel ROIs with known pathology. Specifically, there were 340 ROIs depicting a biopsy-proven malignant mass, 341 ROIs with a benign mass, and the remaining 328 ROIs were normal. Subsequently, the CBIR algorithm was implemented using mutual information (MI) as the similarity metric for image retrieval. The CBIR algorithm formed the basis of a knowledge-based CAD system. The system operated as follows. Given a databank of mammographic regions with known pathology, a query suspicious mammographic region was evaluated. Based on their information content, all similar cases in the databank were retrieved. The matches were rank-ordered and a decision index was calculated using the query's best matches. Based on a leave-one-out sampling scheme, the CBIR-CAD system achieved an ROC area index Az = 0.87±0.01 and a partial ROC area index 0.90Az = 0.45±0.03 for the detection of masses in screening mammograms.
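A minimal sketch of the mutual-information similarity between a query region and a database region, estimated from their joint gray-level histogram; the bin count and the way the decision index would be built from the best matches are assumptions, not taken from the paper.

```python
import numpy as np

def mutual_information(roi_a, roi_b, bins=64):
    """Mutual information (in nats) between the gray values of two equally
    sized mammographic regions, estimated from their joint histogram."""
    hist, _, _ = np.histogram2d(roi_a.ravel(), roi_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```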
Hierarchical feature clustering for content-based retrieval in medical image databases
Christian Thies, Adam Malik, Daniel Keysers, et al.
In this paper we describe the construction of hierarchical feature clustering and show how to overcome general problems of region growing algorithms such as seed point selection and processing order. Access to medical knowledge inherent in medical image databases requires content-based descriptions to allow non-textual retrieval, e.g., for comparison, statistical inquiries, or education. Due to varying medical context and questions, data structures for image description must provide all visually perceivable regions and their topological relationships, which poses one of the major problems for content extraction. In medical applications the main criteria for segmenting images are local features such as texture, shape, intensity extrema, or gray values. For this new approach, these features are computed pixel-based and neighboring pixels are merged if the Euclidean distance of the corresponding feature vectors is below a threshold. Thus, the planar adjacency of clusters representing connected image partitions is preserved. A cluster hierarchy is obtained by iterating and recording the adjacency merging. The resulting inclusion and neighborhood relations of the regions form a hierarchical region adjacency graph. This graph represents a multiscale image decomposition and therefore an extensive content description. It is examined with respect to application in daily routine by testing invariance against transformation, run time behavior, and visual quality. For retrieval purposes, a graph can be matched with graphs of other images, where the quality of the matching describes the similarity of the images.
Segmentation II
Segmenting the posterior ribs in chest radiographs by iterated contextual pixel classification
Marco Loog, Bram van Ginneken, Max A. Viergever
The task of segmenting the posterior ribs within the lung fields is of great practical importance. For example, delineation of the ribs may lead to a decreased number of false positives in computerized detection of abnormalities, and hence analysis of radiographs for computer-aided diagnosis purposes will benefit from this. We use an iterative, pixel-based, statistical classification method---iterated contextual pixel classification (ICPC). It is suited for a complex segmentation task in which a global shape description is hard to provide. The method combines local gray level and contextual information to come to an overall image segmentation. Because of its generality, it is also useful for other segmentation tasks. In our case, the variable number of visible ribs in the lung fields complicates the use of a global model. Additional difficulties arise from the poor visibility of the lower and medial ribs. Using cross validation, the method is evaluated on 35 radiographs in which all posterior ribs were traced manually. ICPC obtains an accuracy of 83%, a sensitivity of 79%, and a specificity of 86% for segmenting the costal space. Further evaluation is done using five manual segmentations from a second observer, whose performance is compared with the five corresponding images from the first manual segmentation, yielding 83% accuracy, 84% sensitivity, and 83% specificity. On these five images, ICPC attains 82%, 78%, and 86%, respectively.
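A hedged sketch of a test-time loop in the spirit of ICPC, assuming a classifier pre-trained on feature vectors of gray value plus local label context taken from the manual tracings; the feature choices, neighborhood size, and stopping rule are illustrative and not the authors' design.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def icpc_predict(clf, image, n_iter=5, context_size=9):
    """Iteratively classify pixels from local gray value plus the mean of
    the previous iteration's rib-probability map in a small neighborhood.
    clf is any classifier exposing predict_proba, trained on the same
    two-element feature vectors."""
    prob = np.full(image.shape, 0.5)                  # uninformative start
    for _ in range(n_iter):
        context = uniform_filter(prob, size=context_size)
        feats = np.stack([image.ravel(), context.ravel()], axis=1)
        prob = clf.predict_proba(feats)[:, 1].reshape(image.shape)
    return prob > 0.5
```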
Multi-agent IVUS image segmentation
Ernst Bovenkamp, Jouke Dijkstra, Johan G. Bosch, et al.
A novel knowledge-based multi-agent image interpretation system has been developed which is markedly different from previous approaches, especially in its elaborate integration of high-level knowledge-based control with low-level image segmentation algorithms. Each agent in this system is responsible for one type of object and cooperates with other agents to come to a consistent overall image interpretation. Cooperation involves communicating hypotheses and resolving conflicts between individual interpretations. Agents have full control over the underlying segmentation algorithms, which they dynamically adapt to the image content given knowledge about global constraints, local information and personal beliefs. The system has been applied to IntraVascular Ultrasound (IVUS) images, which are segmented by cooperative agents specialized in lumen, vessel, calcified-plaque, shadow and sidebranch detection. IVUS image sequences from 7 patients were taken and vessel and lumen contours were detected fully automatically. These were compared with expert-corrected semi-automatic contours. Results show good correlation between agents and observer with r=0.84 for the lumen and r=0.92 for the vessel cross-sectional areas (n=1067). The paired difference between agents and observer was 0.13 ± 2.16 mm2 for vessel, and -0.14 ± 1.01 mm2 for lumen cross-sectional areas. These results compare very well with inter-observer variability as reported in the literature.
Hierarchical segmentation of vertebrae from x-ray images
The problem of vertebrae segmentation in digitized x-ray images is addressed with a hierarchical approach that combines three different methodologies. As a starting point, two customized active shape models are trained on data sets of cervical and lumbar images, respectively. Here, a methodology to include edge information in the gray-level modeling part of the active shape models is developed to increase the representativeness of the model and to improve the chances of finding vertebral boundaries. Active shape models' initialization shortcoming is then addressed by a customized implementation of the Generalized Hough Transform, which provides an estimate of the pose of the vertebrae within target images. Active shape models' shortcoming of lack of local deformation is addressed by a customized implementation of the technique of Deformable Models. In this implementation, an energy minimization approach is employed in which the external energy term is extracted from the training set of images and the internal energy terms control the shape of the template. Segmentation results on data sets of cervical and lumbar images show that the proposed hierarchical approach produces errors of less than 3mm in 75% of the cervical images and 6.4mm in 50% of the lumbar images.
IWT-interactive watershed transform: a hierarchical method for efficient interactive and automated segmentation of multidimensional grayscale images
Horst Karl Hahn, Heinz-Otto Peitgen
In this paper we present the Interactive Watershed Transform (IWT) for efficient segmentation of multidimensional grayscale images. The IWT builds upon a fast immersion-based watershed transform (WT) followed by a hierarchical organization of the resulting basins in a tree structure. Each local image minimum is represented as an atomic basin at the lowest hierarchy level. The fast WT consists of two steps. First, all image elements are sorted according to their image intensity using a Bucket Sort algorithm. Second, each element is processed exactly once with respect to its neighborhood (e.g., 4, 6, and 8 direct neighbors for 2d, 3d, and 4d transform, respectively) in the specified order. Sorting, processing, and tree generation are of order O(n). After computing the WT, one global parameter, the so-called preflooding height, and an arbitrary number of markers are evaluated in real-time to control tree partitioning and basin merging. The IWT has been successfully applied to a large variety of medical images, e.g., for segmentation and volumetry of neuroanatomic structures as well as bone segmentation, without making assumptions on the objects' shapes. The IWT combines automation and efficient interactive control in a coherent algorithm while completely avoiding oversegmentation, which is the major problem of the classical WT.
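The effect of a preflooding height can be illustrated with standard morphological tools: suppressing all minima shallower than that height before applying the watershed merges shallow adjacent basins and reduces over-segmentation. The sketch below uses scikit-image and is only an approximation of the idea, not the IWT implementation itself.

```python
import numpy as np
from skimage.morphology import reconstruction
from skimage.segmentation import watershed

def preflooded_watershed(image, preflooding_height, markers=None):
    """Watershed after suppressing all minima shallower than the given
    preflooding height (grayscale reconstruction by erosion of the image
    raised by that height), which merges shallow adjacent basins."""
    img = image.astype(float)
    seed = img + preflooding_height
    flooded = reconstruction(seed, img, method='erosion')
    return watershed(flooded, markers=markers)
```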
Region-based segmentation using a simulated charged fluid
A computer simulation of a Charged Fluid was developed to segment objects in medical images. A Charged Fluid conceptually consists of charged particles, each of which exerts a repelling electric force upon the others. In our approach, we treat the image gradients as potential wells where a simulated Charged Fluid flows through. The evolution of a Charged Fluid consists of two different procedures. One allows Charged Fluid elements to advance toward new positions in the simulation along the direction of their total effective forces. The other allows Charged Fluid elements to flow along the contour until an electrostatic equilibrium is achieved. The procedure is repeated until all Charged Fluid elements reside on the boundaries of objects being segmented. Preliminary segmentation results were obtained using this new technique to segment irregular objects in medical images.
Tomographic Reconstruction
Evaluation and empirical analysis of an exact FBP algorithm for spiral cone-beam CT
Recently one of the authors proposed a reconstruction algorithm which is theoretically exact and has a truly shift-invariant filtering and backprojection structure. Each voxel is reconstructed using the theoretically minimum section of the spiral, which is located between the endpoints of the PI segment of the voxel. Filtering is one-dimensional, performed along lines with variable tilt on the detector, and consists of five terms. We present an evaluation of the performance of the algorithm, and we discuss and illustrate empirically the contributions of the five filtering terms to the overall image. A thorough evaluation proved the validity of the algorithm. Excellent image results were achieved even for high pitch values. Overall image quality can be regarded as at least equivalent to the less efficient, exact, Radon-based methods. However, the new algorithm significantly increases efficiency. Thus, the method has the potential to be applied in clinical scanners of the future. The empirical analysis leads to a simple, intuitive understanding of the otherwise obscure terms of the algorithm. Identification and skipping of the practically irrelevant fifth term allows a significant speed-up of the algorithm due to uniform distance weighting.
Curve evolution methods for dynamic tomography with unknown dynamic models
Yonggang Shi, William Clement Karl, David A. Castanon
In this paper, we propose a variational framework for tomographic reconstruction of dynamic objects with unknown dynamic models. This is an extension of our previous work on dynamic tomography using curve evolution methods where the shape dynamics are known a priori. We assume the dynamic model of the shape is a parameterized affine transform and propose a variational framework that incorporates information from observed data, intensity dynamics, spatial smoothness prior, and the dynamical shape model. A coordinate descent algorithm based on a curve evolution method is then proposed for the joint estimation of the intensities, object boundary sequences, and the unknown dynamic model parameters. For implementation of the curve evolution and parameter estimation process, we use efficient level set methods.
Fast calculation of digitally reconstructed radiographs using light fields
Calculating digitally reconstructed radiographs (DRRs) is an important step in intensity-based fluoroscopy-to-CT image registration methods. Unfortunately, the standard techniques to generate DRRs involve ray casting and run in time O(n3), where we assume that n is approximately the size (in voxels) of one side of the DRR as well as one side of the CT volume. Because of this, generation of DRRs is typically the rate-limiting step in the execution time of intensity-based fluoroscopy-to-CT registration algorithms. We address this issue by extending light field rendering techniques from the computer graphics community to generate DRRs instead of conventional rendered images. Using light fields allows most of the computation to be performed in a preprocessing step; after this precomputation step, very accurate DRRs can be generated in time O(n2). Using a light field generated from 1,024 DRRs of resolution 256×256, we can create new DRRs that appear visually identical to ones generated by conventional ray casting. Importantly, the DRRs generated using the light field are computed over 300 times faster than DRRs generated using conventional ray casting (50 vs. 17,000 ms on a PC with a 2 GHz Intel Pentium 4 processor).
Generalized quasi-exact algorithms for image reconstruction in helical cone-beam CT
The quasi-exact algorithms developed by Kudo et al. can reconstruct accurate images from data acquired in helical cone-beam configuration. The current formulation of such algorithms, however, prevents their direct application to data acquired in a practical configuration that may be used in helical cone-beam computed tomography (CT) in which the longitudinal axis of the area detector always remains parallel to the longitudinal axis of helical CT system. Interpolation can be used to convert data acquired with the practical helical cone-beam configuration into the form required by the current quasi-exact algorithms. Such an interpolation can reduce spatial resolution in reconstructed images. In this work, we derive a new filtering function that can be used in the quasi-exact algorithms so that they can be used to reconstruct images directly from data acquired with a practical scanning configuration thereby avoiding interpolation. We also performed computer simulation studies, and numerical results from these studies confirm that accurate images can be reconstructed by use of these generalized quasi-exact algorithms. The practical implication of the generalized quasi-exact algorithms is that they can yield images with potentially enhanced spatial resolution by avoiding data interpolation.
Poster Session
Cardiac spiral imaging in computed tomography without ECG using complementary projections for motion detection
Herbert Bruder, Emilie Maguet, Karl Stierstorfer, et al.
We present a novel approach for cardiac motion detection called Cardiac Motion Detection with Complementary Projections (CMDC), which can complement or even supersede the information of the patient's ECG in multi-slice cardiac imaging. High image quality can be obtained using dedicated ECG-correlated cardiac reconstruction algorithms. These algorithms require information about the cardiac motion, usually the simultaneously recorded ECG, to correlate it to the reconstruction. We developed a method to estimate cardiac motion directly from the measured CT raw data. In contrast to the ECG, which is a global measurement of the heart's electrical excitation, CMDC computes a local measure of heart motion at any scanned z-position using complementary projections. Using a series of data processing steps, the synchronization information necessary for phase-correlated algorithms is extracted from the motion-correlated CMDC signal. Evaluations of both computer simulations and patient measurements using multi-slice scanners have shown a high correlation between the ECG and the CMDC signal. However, a significant phase shift between the CMDC and ECG signal during scanning of the atria region was observed. Both the ECG function and the CMDC signal were used for phase-correlated reconstruction. Only minor differences in image quality between both approaches were detected; hence the CMDC method might enhance the practicability of cardiac volume imaging in the future. CMDC-correlated reconstructions might also improve imaging of pericardial lung areas.
Tomographic Reconstruction
Convolution reconstruction algorithm for multislice helical CT
Jiang Hsieh, Brian Grekowicz, Piero Simoni, et al.
One of the most recent technological advancements in computed tomography (CT) is the introduction of multi-slice CT (MSCT). The state-of-the-art MSCT scanner contains 16 detector rows and is capable of acquiring 16 projections simultaneously. In this paper, we propose a reconstruction algorithm that makes use of nontraditional reconstruction planes and convolution weighting. To minimize the impact of interpolation on the slice-sensitivity profile (SSP), conjugate samples are used for the projection interpolation. We use multiple convex planes as the region of reconstruction. This allows the generated weighting function to be smooth and differentiable. In addition, we make use of the fact that projections collected from a subset of detector rows are sufficient to perform a complete reconstruction. A convolution function is applied to the weighting function of each subset to minimize the impact of cone beam effects. The convolution function is chosen so that an optimal balance is achieved between image artifact, slice-sensitivity profile (SSP), and noise. Extensive phantom and clinical studies have been conducted to validate our approach. Our study indicates that compared to other row-interpolation based reconstruction algorithms, a 30% SSP improvement can be achieved with the proposed approach. In addition, image artifact suppression achieved with the proposed approach is on par with or slightly better than the existing reconstruction algorithms. Extensive clinical studies have shown that the 16-slice scanner in conjunction with this algorithm produces nearly isotropic spatial resolution and allows much improved diagnostic image quality.
Sieve-regularized image reconstruction algorithm with pose search in transmission tomography
Ryan J. Murphy, Donald L. Snyder, David G. Politte, et al.
We have developed a model for transmission tomography that views the detected data as being Poisson-distributed photon counts. From this model, we derive an alternating minimization (AM) algorithm for the purpose of image reconstruction. This algorithm, which seeks to minimize an objective function (the I-divergence between the measured data and the estimated data), is particularly useful when high-density objects are present in soft tissue and standard image reconstruction algorithms fail. The approach incorporates inequality constraints on the pixel values and seeks to exploit known information about the high-density objects or other priors on the data. Because of the ill-posed nature of this problem, however, the noise and streaking artifacts in the images are not completely mitigated, even under the most ideal conditions, and some form of regularization is required. We describe a sieve-based approach, which constrains the image estimate to reside in a subset of the image space in which all images have been smoothed with a Gaussian kernel. The kernel is spatially varying and does not smooth across known boundaries in the image. Preliminary results show effective reduction of the noise and streak artifacts, but indicate that more work is needed to suppress edge overshoots.
Multiresolution/Multispectral Image Processing
Nonlinear multiresolution gradient adaptive filter for medical images
Dietmar Kunz, Kai Eck, Holger Fillbrandt, et al.
We present a novel method for intra-frame image processing which is applicable to a wide variety of medical imaging modalities, such as X-ray angiography, X-ray fluoroscopy, magnetic resonance, or ultrasound. The method allows noise to be reduced significantly - by about 4.5 dB and more - while preserving sharp image details. Moreover, selective amplification of image details is possible. The algorithm is based on a multi-resolution approach. Noise reduction is achieved by non-linear adaptive filtering of the individual band pass layers of the multi-resolution pyramid. The adaptivity is controlled by image gradients calculated from the next coarser layer of the multi-resolution pyramid representation, thus exploiting cross-scale dependencies. At sites with strong gradients, filtering is performed only perpendicular to the gradient, i.e. along edges or lines. The multi-resolution approach processes each detail on its appropriate scale, so that small filter kernels are applied even for low-frequency noise, thus limiting computational costs and allowing a real-time implementation on standard hardware. In addition, gradient norms are used to distinguish smoothly between “structure” and “noise only” areas, and to perform additional noise reduction and edge enhancement by selectively attenuating or amplifying the corresponding band pass coefficients.
Novel theory and methods for tensor scale: a local morphometric parameter
Scale is a widely used notion in image analysis that evolved in the form of scale-space theory, whose key idea is to represent and analyze an image at various resolutions. Recently, the notion of space-variant scale has drawn significant research interest. Previously, we introduced local morphometric scale using a spherical model, whose major limitation was that it ignored orientation and anisotropy, making it suboptimal in many biomedical imaging applications where structures are inherently anisotropic and have mixed orientations. Here, we introduce a new idea of local scale, called tensor scale, which, at any image location, is the parametric representation of the largest ellipse (in 2D) or ellipsoid (in 3D) centered at that location that is contained in the same homogeneous region. Tensor scale is useful in spatially adapting neighborhoods and controlling parameters in a space-variant and anisotropic fashion, complying with the orientation, anisotropy, and thickness of local structures. Results of the method on several 2D images are presented and a few experiments are conducted to examine its behavior under rotation, varying pixel size, background inhomogeneity, and noise and blurring. The similarity of tensor scale images computed from multi-protocol images is studied.
Three-dimensional reconstruction of the human spine from biplanar radiographs: using multiscale wavelets analysis and spline interpolators for semi-automation
Sylvain Deschenes, Benoit Godbout, Dominic Branchaud, et al.
We propose a new fast stereoradiographic 3D reconstruction method for the spine. User input is limited to a few points passing through the spine on two radiographs and two line segments representing the end plates of the limiting vertebrae. A 3D spline that hints at the positions of the vertebrae in space is then generated. We then use wavelet multi-scale analysis (WMSA) to automatically localize specific features in both lateral and frontal radiographs. The WMSA gives an elegant spectral investigation that leads to gradient generation and edge extraction. Analysis of the information contained at several scales leads to the detection of 1) two curves enclosing the vertebral bodies' walls and 2) the inter-vertebral spaces along the spine. From these data, we extract four points per vertebra per view, corresponding to the corners of the vertebral bodies. These points delimit a hexahedron in space in which we can match the vertebral body. This hexahedron is then passed through a 3D statistical database built using local and global information generated from a bank of normal and scoliotic spines. Finally, models of the vertebrae are positioned with respect to these landmarks, completing the 3D reconstruction.
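The initial user-guided spline could be obtained along the following lines, assuming the user-selected points are given as an (N, 3) array; the smoothing factor and sampling density are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.interpolate import splev, splprep

def spine_spline(points_3d, n_samples=100, smoothing=0.0):
    """Fit a parametric 3D spline through a few user-selected points along
    the spine and resample it densely.  points_3d: (N, 3) array of the
    picked positions; returns an (n_samples, 3) array of spline points."""
    tck, _ = splprep(points_3d.T, s=smoothing)
    u = np.linspace(0.0, 1.0, n_samples)
    return np.array(splev(u, tck)).T
```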
Multidimensional multistage wavelet footprints: a new tool for image segmentation and feature extraction in medical ultrasound
We present a new wavelet-based strategy for autonomous feature extraction and segmentation of cardiac structures in dynamic ultrasound images. Image sequences subjected to a multidimensional (2D plus time) wavelet transform yield a large number of individual subbands, each coding for partial structural and motion information of the ultrasound sequence. We exploited this fact to create a strategy for autonomous analysis of cardiac ultrasound that builds on shape- and motion-specific wavelet subband filters. Subband selection was performed automatically based on subband statistics. Such a collection of predefined subbands corresponds to the so-called footprint of the target structure and can be used as a multidimensional multiscale filter to detect and localize the target structure in the original ultrasound sequence. Autonomous, unequivocal localization is then done using a peak finding algorithm, allowing the findings to be compared with a reference standard. Image segmentation is then possible using standard region growing operations. To test the feasibility of this multiscale footprint algorithm, we tried to localize, enhance and segment the mitral valve autonomously in 182 non-selected clinical cardiac ultrasound sequences. Correct autonomous localization by the algorithm was feasible in 165 of 182 reconstructed ultrasound sequences, using the experienced echocardiographer as reference. This corresponds to a 91% accuracy of the proposed method on unselected clinical data. Thus, multidimensional multiscale wavelet footprints allow successful autonomous detection and segmentation of the mitral valve with good accuracy in dynamic cardiac ultrasound sequences, which are otherwise difficult to analyse due to their high noise level.
Interplay between intensity standardization and field inhomogeneity correction in MR image processing
Image intensity standardization is a recently developed post-processing method designed for correcting acquisition-to-acquisition signal intensity variations inherent in MR images. Inhomogeneity correction is a method used to suppress the low-frequency background non-uniformities over the image domain that exist in MR images. Both these procedures have important implications for MR image analysis. The effects of each of these post-processing operations, in isolation, on improvement of image quality have been well documented [1-11]. However, the combined effects of these two processes on MR images and how the processes influence each other have not been studied thus far. In this paper, we evaluate the effect of inhomogeneity correction followed by standardization on MR images, and vice versa, in order to determine the best sequence to follow for enhancing image quality. Our results indicate that improved standardization can be achieved by preceding it with inhomogeneity correction. From the perspective of inhomogeneity correction, there is no statistically significant difference in image quality between the results of standardization followed by correction and those of correction followed by standardization. The correction operation was found to bias the effect of standardization; we demonstrated this bias both qualitatively and quantitatively. Standardization, on the other hand, did not influence the correction operation. It was also found that longer sequences of repeated correction and standardization did not considerably improve image quality.
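To make the recommended ordering concrete, here is a loose sketch, assuming a hypothetical multiplicative bias field and a percentile-landmark standardization; neither the correction nor the standardization shown here is the specific method evaluated in the paper.

```python
import numpy as np

def correct_inhomogeneity(img, bias_estimate):
    """Placeholder correction: divide out a smooth multiplicative bias field.
    In practice the field would be estimated from the image itself."""
    return img / bias_estimate

def standardize(img, ref_landmarks, pcts=(1, 10, 50, 90, 99)):
    """Piecewise-linear intensity standardization: map the image's own
    percentile landmarks onto a fixed reference scale (values are assumed)."""
    src = np.percentile(img, pcts)
    return np.interp(img, src, ref_landmarks)

# Synthetic image with a smooth left-to-right bias.
img = np.random.rand(128, 128) * 800.0
bias = 1.0 + 0.2 * np.linspace(-1.0, 1.0, 128)[None, :]
acquired = img * bias

# Order suggested by the study: inhomogeneity correction first, then standardization.
ref_scale = np.array([0.0, 100.0, 500.0, 900.0, 1000.0])
out = standardize(correct_inhomogeneity(acquired, bias), ref_scale)
```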
Mapping of magnetic field inhomogeneity and removal of its artifact from MR images
Inhomogeneity of the static magnetic field, induced by object susceptibility, is unavoidable in magnetic resonance imaging (MRI). This inhomogeneity generates distortions in both image geometry and image intensity. Based on node magnetic voltage values, a fast Finite Difference Method (FDM) is developed for mapping susceptibility-induced magnetic field inhomogeneity and is applied to simulated MRI data. Its accuracy and speed of convergence are evaluated by comparison with the Finite Element Method (FEM), which had been validated experimentally. Effects of inhomogeneity on Spin Echo (SE) MRI are simulated using the proposed field calculation method. In addition, a pixel-based (direct) method as well as a grid-based (indirect) method for removing these effects are developed. The fast execution of the algorithm stems from the multi-resolution nature of the proposed method. The main advantage of the proposed method is that it does not need any data except the image itself. The efficiency of both correction methods in distortion removal is investigated.
Correction of multispectral MRI intensity non-uniformity via spatially regularized feature condensing
Uros Vovk, Franjo Pernus, Bostjan Likar
In MRI, image intensity non-uniformity is an adverse phenomenon that increases inter-tissue overlapping. The aim of this study was to provide a novel general framework, named regularized feature condensing (RFC), for condensing the distribution of image features, and to apply it to correct intensity non-uniformity via spatial regularization. The proposed RFC method is an iterative procedure which consists of four basic steps. First, creation of a feature space, which consists of multi-spectral image intensities and corresponding second derivatives. Second, estimation of the intensity condensing map in feature space, i.e. estimation of the increase of feature probability densities by a well-established mean shift procedure. Third, regularization of the intensity condensing map in image space, which yields the estimate of intensity non-uniformity. Fourth, application of the estimated non-uniformity correction to the input image. In this way, the intensity distributions of distinct tissues are gradually condensed via spatial regularization. The method was tested on simulated and real MR brain images for which gold standard segmentations were available. The results showed that the method did not induce additional intensity variations in simulated uniform images and efficiently removed intensity non-uniformity in real MR brain images. The proposed RFC method is a powerful, fully automated intensity non-uniformity correction method that makes no a priori assumptions on the image intensity distribution and provides non-parametric non-uniformity correction.
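The mean shift procedure referred to in the second step is standard; a minimal sketch, assuming a Gaussian kernel and synthetic feature vectors (not the paper's actual feature space or spatial regularization), is shown below.

```python
import numpy as np

def mean_shift_step(x, points, bandwidth):
    """One mean-shift update: move x toward the local density mode of `points`
    using a Gaussian kernel of the given bandwidth."""
    d2 = np.sum((points - x) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return (w[:, None] * points).sum(axis=0) / w.sum()

def condense(points, bandwidth, n_iter=20):
    """Shift every feature vector toward its mode; repeated shifts condense
    the feature distribution, analogous to the condensing map in the paper."""
    shifted = points.copy()
    for _ in range(n_iter):
        shifted = np.array([mean_shift_step(p, points, bandwidth) for p in shifted])
    return shifted

feats = np.random.randn(500, 3)   # e.g. multispectral intensities plus a second derivative
modes = condense(feats, bandwidth=0.8)
```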
Poster Session
Enhancement measurement of pulmonary nodules with multirow detector CT: precision assessment of a 3D algorithm compared to the standard procedure
Dag Wormanns, Ernst Klotz, Gerhard Kohl, et al.
Precise density measurement of pulmonary nodules with CT is an important prerequisite if the measurement of contrast enhancement is to be used to assess whether a nodule is benign or malignant. The precision of a volume-based 3D measurement method was compared to the standard 2D method currently used in clinical practice. Two consecutive low-dose CT scans (inter-scan delay of a few minutes) were obtained from 10 patients with 75 pulmonary nodules (size 5 - 32 mm). A four-slice CT was used (Siemens Somatom VZ, collimation 4 x 1 mm, normalized pitch 1.75, slice thickness 1.25 mm, reconstruction interval 0.8 mm). The mean density of each nodule was determined independently from both scans with two methods: 1) an automatic 3D segmentation method; 2) the standard 2D method as proposed in the literature and currently used in clinical practice (3 mm slice thickness, oval region of interest). ROC analysis was used to compare these methods for the detection of an enhancement of 10, 30 and 50 Hounsfield units (HU). The mean absolute measurement error (± standard deviation) was 9.9 HU (±14.4 HU) for the 3D method and 26.4 HU (±42.0 HU) for the 2D method. ROC analysis yielded AZ values of 0.723 / 0.932 / 0.982 for the 3D method and 0.609 / 0.773 / 0.850 for the 2D method for the detection of 10 / 30 / 50 HU enhancement, respectively. Volume-based density determination has a significantly higher reproducibility than the currently used 2D ROI approach and should preferentially be used for enhancement measurements in pulmonary nodules.
Automated selection of BI-RADS lesion descriptors for reporting calcifications in mammograms
We are developing an automated computer technique to describe calcifications in mammograms according to the BI-RADS lexicon. We evaluated this technique by its agreement with radiologists' description of the same lesions. Three expert mammographers reviewed our database of 90 cases of digitized mammograms containing clustered microcalcifications and described the calcifications according to BI-RADS. In our study, the radiologists used only 4 of the 5 calcification distribution descriptors and 5 of the 14 calcification morphology descriptors contained in BI-RADS. Our computer technique was therefore designed specifically for these 4 calcification distribution descriptors and 5 calcification morphology descriptors. For calcification distribution, 4 linear discriminant analysis (LDA) classifiers were developed using 5 computer-extracted features to produce scores of how well each descriptor describes a cluster. Similarly, for calcification morphology, 5 LDAs were designed using 10 computer-extracted features. We trained the LDAs using only the BI-RADS data reported by the first radiologist and compared the computer output to the descriptor data reported by all 3 radiologists (for the first radiologist, the leave-one-out method was used). The computer output consisted of the best calcification distribution descriptor and the best 2 calcification morphology descriptors. The results of the comparison with the data from each radiologist, respectively, were: for calcification distribution, percent agreement, 74%, 66%, and 73%, kappa value, 0.44, 0.36, and 0.46; for calcification morphology, percent agreement, 83%, 77%, and 57%, kappa value, 0.78, 0.70, and 0.44. These results indicate that the proposed computer technique can select BI-RADS descriptors in good agreement with radiologists.
Classification of mammographic masses: comparison between backpropagation neural network (BNN) and human readers
Lina Arbach, Darus L. Bennett, Joseph M. Reinhardt, et al.
PURPOSE: We compare mammographic mass classification performance between a backpropagation neural network (BNN), expert radiologists, and residents. Our goal is to reduce false negatives during routine reading of mammograms. METHODS: 160 cases from 3 different institutions were used. Each case contained at least one mass and had an accompanying biopsy result. Masses were extracted using region growing with seed locations identified by an expert radiologist. 10 texture and shape based features (area, perimeter, compactness, radial length, spiculation, mean/standard deviation of radial length, minimum/maximum axis, and boundary roughness) were used as inputs to a three-layer BNN. Shape features were computed on the boundary of the mass region; texture features were computed from the pixel values inside the mass. 140 cases were used for training the BNN and the remaining 20 cases were used for testing. The testing set was diagnosed by three expert radiologists, three residents, and the BNN. We evaluated the human readers and the BNN by computing the area under the ROC curve (AUC). RESULTS: The AUC was 0.923 for the BNN, 0.846 for the expert radiologists, and 0.648 for the residents. These results illustrate the promise of using BNN as a physician’s assistant for breast mass classification.
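A small sketch of this kind of experiment, using scikit-learn stand-ins (a single-hidden-layer MLP trained by backpropagation and ROC AUC for evaluation) and randomly generated placeholder features rather than the study's data, could look like:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Random placeholder stand-ins for the 10 shape/texture features and labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((140, 10)), rng.integers(0, 2, 140)
X_test, y_test = rng.random((20, 10)), np.tile([0, 1], 10)

# A three-layer network (input, one hidden layer, output) trained by backpropagation.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

scores = net.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
```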
CAD system for lung cancer screening using low-dose single-slice CT images
We have developed a computer-aided diagnosis (CAD) system for lung cancer detection from a low-dose single-slice CT scanner. The objective of this study is to solve three problems of the conventional CAD system: application to images obtained by other CT scanners, a diagnostic procedure for ground-glass shadows less than 5 mm in diameter, and a diagnostic procedure for nodules in contact with blood vessels. We analyzed the characteristics of the images from each CT scanner and the pattern of blood vessels. The structural analysis procedure using three-dimensional data is the newly added process. The diagnostic rules to detect nodules consist of four classes, which are divided by size and CT value. We applied the system to two lung cancer databases: 55 nodules from a TCT-900S scanner and 67 nodules from an Asteion scanner. The former database achieved a sensitivity of 94.5%, and the latter achieved a sensitivity of 90.0%. Most false-negative cases fell into two categories: nodules overlapped by blood vessels and nodules adjacent to the mediastinum.
Volumetric assessment of emphysema on low-dose screening CT scans
William J. Kostis, Simina C. Fluture, Ali O. Farooqi, et al.
A study was performed to test whether automated computer analysis of low-dose helical CT scans can accurately estimate the degree of emphysema. We characterized the severity of emphysema on low-dose high-resolution (2.5 mm slice thickness) CT scans into 4 categories (normal, mild, moderate, and severe) as determined by a thoracic radiologist. From our database we chose 80 cases (20 within each category) for analysis. Our analysis system segments the lung parenchyma from surrounding structures and computes an emphysema index as the volumetric percentage of emphysema for the entire lung volume using a dual thresholding technique. One-way analysis of variance was used to assess the emphysema index. For those cases classified as normal, the emphysema index was 11.74 ± 1.24 (mean ± sem), for mild it was 15.00 ± 1.31, for moderate it was 16.91 ± 1.72, and for severe it was 26.77 ± 1.73. The differences were statistically significant (p < 0.0001) and showed an increasing score with increasing severity of emphysema. Our system provides a useful index of the degree of emphysema present. Use of the system allows subjects undergoing lung cancer screening studies to have the extent of their emphysema quantified on a year-to-year basis.
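The emphysema index described above can be illustrated with a short sketch; the HU thresholds below are illustrative assumptions, not necessarily the ones used in the study.

```python
import numpy as np

def emphysema_index(ct_hu, lung_thresh=-400, emph_thresh=-910):
    """Dual-threshold emphysema index (thresholds in HU are illustrative):
    voxels below `lung_thresh` are treated as lung parenchyma, and the index
    is the percentage of those voxels falling below `emph_thresh`."""
    lung = ct_hu < lung_thresh
    emphysema = ct_hu < emph_thresh
    return 100.0 * emphysema.sum() / lung.sum()

# Synthetic lung-like HU values standing in for a segmented lung volume.
volume = np.random.normal(-750, 150, size=(60, 256, 256))
print(f"emphysema index: {emphysema_index(volume):.1f}%")
```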
Separation of malignant and benign masses using image and segmentation features
Lisa M. Kinnard, Shih-Chung Benedict Lo, Paul C. Wang, et al.
The purpose of this study is to investigate the efficacy of image features versus likelihood features of tumor boundaries for differentiating benign and malignant tumors, and to compare the effectiveness of two neural networks in the classification study: (1) a circular processing-based neural network and (2) a conventional Multilayer Perceptron (MLP). The segmentation method used is an adaptive region growing technique coupled with a fuzzy shadow approach and a maximum likelihood analyzer. Intensity, shape, texture, and likelihood features were calculated for the extracted Region of Interest (ROI). We performed three experiments: experiment 1 used image features as inputs and the MLP for classification, experiment 2 used image features as inputs and the neural network with circular processing for classification, and experiment 3 used likelihood values as inputs and the MLP for classification. The experiments were validated using an ROC methodology. We have tested these methods on 51 mammograms using a leave-one-case-out experiment (i.e., the Jackknife procedure). The Az values for the three experiments were as follows: 0.66 in experiment 1, 0.71 in experiment 2, and 0.84 in experiment 3.
Skeleton-based 3D computer-aided detection of colonic polyps
In this paper, we propose a new computer aided detection (CAD) technique to utilize both global and local shape information of the colon wall for detection of colonic polyps. Firstly, the whole colon wall is extracted by our mixture-based image segmentation method. This method uses partial volume percentages to represent the distribution of different materials in each voxel, so it provides the most accurate information on the colon wall, especially the mucosa layer. Local geometrical measure of the colon mucosa layer is defined by the curvature and gradient information extracted from the segmented colon-wall mixture data. Global shape information is provided by applying an improved linear integral convolution operation to the mixture data. The CAD technique was tested on twenty patient datasets. The local geometrical measure extracted from the mixture segmentation represents more accurately the polyp variation than that extracted from conventional label classification, leading to improved detection. The added global shape information further improves the polyp detection.
Computerized lung nodule detection: effect of image annotation schemes for conveying results to radiologists
We have developed a computerized method to automatically identify lung nodules in thoracic computed tomography (CT) scans. Since the ultimate goal of such a method is to improve human detection performance, the process through which computer results are conveyed to the radiologist must be considered. Detection results are presented through an interface that automatically places a circle around the detected structure in only one section in which that structure may appear. Consequently, an inappropriate choice of section could result in an actual nodule detected by the computer but not properly indicated to the radiologist, thus reducing the potential positive impact of that detection on the radiologist’s decision-making process. The automated detection method was applied to 38 diagnostic CT scans with an overall sensitivity of 71% and 0.5 false-positive detections per section; however, when these results were converted automatically to annotations on the output images for human visualization, 8.6% of the computer-detected nodules received annotations that failed to encompass a portion of the actual nodule. Thus, the "effective sensitivity" of the automated detection method (i.e., a performance paradigm that considers the eventual human interaction with system output) was reduced.
Computer-aided detection of polyps and masses for CT colonography
Janne J. Naeppi, Hans Frimmel, Abraham H. Dachman, et al.
We are developing a computer-aided scheme for the detection of colonic polyps and masses in CT colonography. The colon is extracted automatically from CT images by use of a knowledge-guided technique. The detection of polyps and masses is based on shape index and curvedness features. A feature-guided segmentation technique is used to extract the regions of detected polyps. A quadratic discriminant classifier is used for reducing false-positive detections and for determining the final output based on shape index, gradient concentration, and CT value features. To evaluate the technique, we performed CT colonography for 72 patients with cleansed colons and by use of a standard technique with helical CT scanning. Thirteen patients had a total of 20 polyps measuring 5-12 mm, and four patients had 4 masses measuring 25-40 mm in diameter. In a by-polyp (by-mass) leave-one-out evaluation, the CAD scheme detected 95% of the polyps (all masses) with an average of 1.5 (0.5) false-positive detections per patient. These preliminary results suggest that our CAD scheme is potentially a useful tool for providing rapid interpretation and high diagnostic accuracy for CT colonography.
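The shape index and curvedness features are standard differential-geometric quantities; a minimal sketch of how they could be computed from principal curvatures (the curvature estimation itself is omitted) is given below.

```python
import numpy as np

def shape_index(k1, k2, eps=1e-12):
    """Shape index in [0, 1] from principal curvatures k1 >= k2, as commonly
    used in CT colonography CAD; cups and caps map to the two extremes of the
    range (which end is which depends on the curvature sign convention)."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2 + eps)

def curvedness(k1, k2):
    """Curvedness: the overall magnitude of surface bending."""
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)

si = shape_index(np.array([0.8, 0.1]), np.array([0.6, -0.1]))
cv = curvedness(np.array([0.8, 0.1]), np.array([0.6, -0.1]))
```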
ROI extraction of chest CT images using adaptive opening filter
We have already developed a prototype computer-aided diagnosis (CAD) system that can automatically detect suspicious shadows in chest CT images. However, the CAD system cannot detect ground-glass attenuation reliably. In many cases this failure is due to inaccurate extraction of the region of interest (ROI) that the CAD system analyzes, so the extraction needs to be improved. In this paper, we propose a method for accurate extraction of the ROI and compare the proposed method with the ordinary method used in the CAD system. The proposed method consists of three steps. First, we extract the lung area using thresholding. Second, we remove the slowly varying bias field using a flexible opening filter; this opening filter is computed by combining the ordinary opening value with the distribution followed by the CT value and contrast. Finally, we extract the region of interest using fuzzy clustering. When we applied the proposed method to chest CT images, we obtained good results that the ordinary method cannot achieve. In this study we used helical CT images obtained under the following measurement conditions: 10 mm beam width, 20 mm/sec table speed, 120 kV tube voltage, 50 mA tube current, and 10 mm reconstruction interval.
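As a loose illustration of the background-flattening step with a gray-scale opening (using SciPy and synthetic data, not the authors' flexible opening filter), consider:

```python
import numpy as np
from scipy import ndimage

# Synthetic chest-CT-like slice with a slowly varying bias (illustrative only).
slice_hu = np.random.normal(-800, 60, size=(256, 256))
slice_hu += 80 * np.linspace(0, 1, 256)[None, :]           # slow bias across the image

lung_mask = slice_hu < -400                                 # step 1: threshold the lung area

background = ndimage.grey_opening(slice_hu, size=(25, 25))  # step 2: opening estimates the
flattened = slice_hu - background                           # slowly varying background

# Step 3 (fuzzy clustering of `flattened` within `lung_mask`) would follow here.
```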
Automated detection of polyps from multislice CT images using 3D morphologic matching algorithm: phantom study
A colon polyp phantom, 28 cm long and 5 cm in diameter, was constructed by inflating a latex ultrasound transducer cover. Four round pieces of ham (3, 6, 9, 12 mm diameter) were embedded in the outer membrane surface of the phantom and then tied by string at the base to simulate pedunculated polyps. Three more pieces of ham (3, 6, 9 mm) were impressed and taped on the outer surface to simulate sessile polyps. The circumference of the phantom was constricted by string at four evenly spaced locations to simulate haustral folds. The phantom was placed in a water bath and was modified by infusing water into the lumen or by partially deflating the lumen, and then rescanned. CT images were obtained on a multi-slice CT scanner (4 x 1 mm collimation, 0.5 s scan, 120 kVp, 90 mAs, 1 mm slice thickness). CT images were processed with our computer-aided detection program. First, the three-dimensional colonic boundary and inner structure were segmented. From this segmented region, soft-tissue structures were extracted and labeled to generate candidates. Shape features were evaluated along with geometric constraints. Three-dimensional region-growing and morphologic matching processes were applied to refine and classify the candidates. The detected polyps were compared with the true polyps in the phantom or known polyps in clinical cases to calculate the sensitivity and false positives.
Computer-aided classification of breast microcalcification clusters: merging of features from image processing and radiologists
We developed an ensemble classifier for the task of computer-aided diagnosis of breast microcalcification clusters, which are very challenging to characterize for radiologists and computer models alike. The purpose of this study is to help radiologists identify whether suspicious calcification clusters are benign or malignant, such that they may potentially recommend fewer unnecessary biopsies for actually benign lesions. The data consist of mammographic features extracted by automated image processing algorithms as well as features interpreted manually by radiologists according to a standardized lexicon. We used 292 cases from a publicly available mammography database. From each case, we extracted 22 image processing features pertaining to lesion morphology, 5 radiologist features also pertaining to morphology, and the patient age. Linear discriminant analysis (LDA) models were designed using each of the three data types. Each local model performed poorly; the best was one based upon image processing features, which yielded an ROC area index AZ of 0.59 ± 0.03 and a partial AZ above 90% sensitivity of 0.08 ± 0.03. We then developed ensemble models using different combinations of those data types, and these models all improved performance compared to the local models. The final ensemble model was based upon 5 features selected by stepwise LDA from all 28 available features. This ensemble performed with an AZ of 0.69 ± 0.03 and a partial AZ of 0.21 ± 0.04, which was statistically significantly better than the model based on the image processing features alone (p<0.001 and p=0.01 for full and partial AZ, respectively). This demonstrates the value of the radiologist-extracted features as a source of information for this task. It also suggests there is potential for improved performance using this ensemble classifier approach to combine different sources of currently available data.
CAD system for lung cancer CT screening
Yuya Takeda, Masaaki Tamaru, Yoshiki Kawata, et al.
Lung cancer is known as one of the most difficult cancers to cure. Detection of lung cancer at an early stage makes timely medical treatment possible. However, mass screening based on helical CT images produces a considerable number of images to diagnose; this time-consuming task makes it difficult to use in the clinic. To increase the efficiency of the mass screening process, we developed a computer-aided diagnosis (CAD) system that can detect nodules at high speed; it takes 17 seconds per case (35 images) to detect nodules. In this paper, we describe the development of this CAD system and its specifications.
Improving the predictive value of mammography using a specialized evolutionary programming hybrid and fitness functions
Mammography is an effective tool for the early detection of breast cancer; however, most women referred for biopsy based on mammographic findings do not have cancer. This study is part of an ongoing effort to reduce the number of benign cases referred for biopsy by developing tools to aid physicians in classifying suspicious lesions. Specifically, this study examines the use of an Evolutionary Programming (EP)/Adaptive Boosting (AB) hybrid, specifically modified to focus on improving the performance of computer-assisted diagnostic (CAD) tools at high sensitivity levels (missing few or no cancers). An EP/AB hybrid developed by the authors and used in previous studies was modified with two new fitness functions: 1) a function which favored networks with high PPV values at thresholds corresponding to high sensitivities, and 2) a function which favored networks with the highest partial ROC Az (normalized area above 90% sensitivity). The modified hybrid with specialized fitness functions was evaluated using k-fold cross-validation against two real-world mammogram data sets. Results indicate that the number of benign cases referred for biopsy might be reduced by over a third while missing no cancers. If sensitivity is allowed to decrease to 97% (missing 3% of the cancers), the number of spared biopsies could be raised to over half.
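One plausible reading of the second fitness function, a normalized partial ROC area above 90% sensitivity, is sketched below; the exact definition used by the authors may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_az_above_sensitivity(y_true, scores, min_sens=0.90):
    """Fitness: area under the ROC curve restricted to sensitivity >= min_sens,
    normalized by the maximum attainable area (1 - min_sens)."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    # Clip the curve at the minimum sensitivity and integrate over FPR.
    tpr_clipped = np.maximum(tpr, min_sens)
    area = np.trapz(tpr_clipped - min_sens, fpr)
    return area / (1.0 - min_sens)

y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.45, 0.7])
print(partial_az_above_sensitivity(y, s))
```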
Classification of nodules in mammogram image by using wavelet transform
This work presents a classifier for mammographic masses using the wavelet transform as a feature generator. It considers the BI-RADS classification, dividing masses according to their shapes: circular, nodular, and spiculated. We developed a two-step procedure: the first step involves a model applying one wavelet technique to perform contour analysis on simulated mass images; this procedure was used to choose the wavelet best able to generate the desired characteristics. The second step applies the chosen wavelet to masses from segmented images. Both steps have the three shape classes as outputs. A root-mean-square function is applied to obtain the energy measure for each level of wavelet decomposition. Thus the shape feature vectors are formed from the detail and approximation coefficients extracted via the energy of the wavelet decomposition levels. Linear Discriminant Analysis (LDA) using the Fisher discriminant was used to reduce the number of characteristics in the feature vector. The Mahalanobis distance was used by the classifier to verify the pertinence of the images to each of the previously given classes. To test actual images, the leave-one-out method was used for classifier training. The classifier registered good results compared to other reports in the corresponding literature.
Feature-based differences between mammograms
Walter F. Good, Xiao Hui Wang, Glenn S. Maitz
A novel technique for assessing local and global differences between mammographic images was developed. This method uses correlations between abstract features extracted from corresponding views to compare image properties without resorting to processes that depend on exact geometrical congruence, such as image subtraction, which have a tendency to produce excessive artifact. The method begins by normalizing both digitized mammograms, after which a series of global and local feature filters are applied to each image. Each filter calculates values characterizing a particular property of the given image, and these values, for each property of interest are arranged in a feature vector. Corresponding elements in the two feature vectors are combined to produce a difference vector that indicates the change in the particular properties between images. Features are selected which are expected to be relatively invariant with respect to breast compression.
Computer-aided detection (CAD) of breast cancer on full-field digital and screening film mammograms
Xuejun Sun, Wei Qian, Xiaoshan Song, et al.
Full-field digital mammography (FFDM), as a new breast imaging modality, has the potential to detect more breast cancers, or to detect them at smaller sizes and earlier stages, compared with screening film mammography (SFM). However, its performance needs verification, and it poses new problems for the development of CAD methods for breast cancer detection and diagnosis. In this study, performance evaluation of CAD systems was conducted on FFDM and SFM, respectively. First, an adaptive CAD system employing a series of advanced modules was developed for FFDM. Second, a standardization approach was developed to make the CAD system independent of the characteristics of the digitizer or imaging modality used for mammography. The CAD system developed previously for SFM and the one developed in this study for FFDM were evaluated on FFDM and SFM images without and with standardization, respectively, to examine the performance improvement of the CAD system developed in this study. Computerized free-response receiver operating characteristic (FROC) analysis was adopted as the performance evaluation method. Compared with the previous one, the CAD system developed in this study demonstrated significant performance improvements. However, the comparison results show that the performance of the final CAD system in this study is not significantly different on FFDM and on SFM after standardization. Further study is needed to assess CAD system performance on the FFDM and SFM modalities.
Classification of masses on mammograms using support vector machine
Mammography is the most effective method for early detection of breast cancer. However, the positive predictive value for classification of malignant and benign lesions from mammographic images is not very high. Clinical studies have shown that the rate of positive biopsies is very low, between 15% and 30%. It is important to increase diagnostic accuracy by improving the positive predictive value so as to reduce the number of unnecessary biopsies. In this paper, a new classification method is proposed to distinguish malignant from benign masses in mammography using the Support Vector Machine (SVM) method. Thirteen features were selected based on receiver operating characteristic (ROC) analysis of classification using each individual feature. These features include four shape features, two gradient features, and seven Laws features. With these features, an SVM was used to classify the masses into two categories, benign and malignant, using a Gaussian kernel and the sequential minimal optimization learning technique. The data set used in this study consists of 193 cases: 96 benign and 97 malignant. A leave-one-out evaluation of the SVM classifier was performed. The results show that the positive predictive value of the presented method is 81.6%, with a sensitivity of 83.7% and a false-positive rate of 30.2%. This demonstrates that the SVM-based classifier is effective in mass classification.
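A compact sketch of this evaluation protocol, using scikit-learn's RBF-kernel SVC (whose solver is SMO-based) and random placeholder features instead of the study's shape, gradient, and Laws features, might look like:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut

# Hypothetical stand-ins for the 13 features (4 shape, 2 gradient, 7 Laws) and labels.
X = np.random.rand(193, 13)
y = np.array([0] * 96 + [1] * 97)        # 0 = benign, 1 = malignant

# Gaussian (RBF) kernel SVM evaluated with leave-one-out cross-validation.
preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = SVC(kernel='rbf', C=1.0, gamma='scale')
    clf.fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

sensitivity = preds[y == 1].mean()       # detected malignant / all malignant
ppv = y[preds == 1].mean()               # true malignant / all called malignant
print(f"sensitivity={sensitivity:.3f}, PPV={ppv:.3f}")
```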
Disease characterization of active appearance model coefficients
We previously reported on 2D and 3D Active Appearance Models (AAM) for automated segmentation of cardiac MR. AAMs are shown to be useful for such segmentations because they exploit prior knowledge about cardiac shape and image appearance, yet segmentation of object borders might not be the only benefit of AAMs. An AAM represents objects as a linear combination of shape and texture variations applied to a mean object via Principal Component Analysis (PCA) to form an integrated model. This model captures enough shape, texture, and motion variation to accurately synthesize reconstructions of target objects from a finite set of parameters. Because of this, we hypothesize that AAM coefficients may be used for the classification of disease abnormalities. PCA is useful for reducing the dimensionality of vectors; however, it does not produce vectors optimal for the separation of classes needed for disease classification. Discriminant analysis techniques such as Linear Discriminant Analysis (LDA) and Kernel Discriminant Analysis (KDA) are dimension-reducing techniques with the added benefit of supervised learning for the purpose of classification. Once AAM segmentation is complete, disease probabilities are computed from the model coefficients via discriminant analysis. Preliminary results on model coefficients show a tendency toward disease separation for certain disease classes.
Mojette cryptomarking scheme for medical images
This paper describes a new kind of use for image watermarking. A stream watermarking method is presented, in which a key allows authorized users to recover the original image. Our algorithm exploits the redundancy properties of the Mojette Transform. This transform is based on a specific discrete version of the Radon transform with an exact inversion. Anyone who knows the watermark key will be able to decode the original image, whereas only a marked image can be decoded without this key. The presented algorithm is suitable for different applications where fragile and reversible watermarks are mandatory, such as medical image watermarking, and it could also be used for a data access scheme (cryptography). A multiscale watermark variation is presented and can be used when different user profile levels are encountered.
Deblurring using iterative multiplicative regularization technique
Aria Abubakar, Peter M. van den Berg, Tarek M. Habashy, et al.
In this work a new deblurring algorithm for a special deconvolution problem, where a parameter describes the degree of blurring, is considered. The algorithm is based on the Conjugate Gradient technique and uses the so-called weighted L2-norm regularizer to obtain a reasonable solution. In order to avoid the necessity of determining the appropriate regularization parameter for this regularizer, this regularizer is included as a multiplicative constraint. In this way, the appropriate regularization parameter will be controlled by the inversion process itself. Numerical testing shows that the proposed algorithm works very effectively.
Mammographic thickness compensation for image analysis and display enhancement
Dan Rico, Martin Joel Yaffe, Bindu J. Augustine, et al.
This paper proposes a novel algorithm for mammographic image enhancement, based on identifying the peripheral region of the breast and suppressing the large change in signal caused by the reduction of thickness there, while maintaining the local contrast information related to tissue composition. The thickness compensation algorithm consists of three processing steps. The first step is to generate a thickness map using two phantoms, one which simulates the shape of the breast in the cranio-caudal projection and a second one serving as a triangular attenuator. The second step is to warp the phantom thickness map in the peripheral region to that of the breast image. The third step is to equalize the signal values in the peripheral region relative to the signal in the uniform-thickness area using the warped thickness map data. Examples are presented to show the effectiveness of the proposed method in suppressing the large range of signal caused by thickness changes in the peripheral region, thereby facilitating image presentation and analysis. The performance of the proposed algorithm was also evaluated on clinical mammograms by computing volumetric breast density.
Method for intensity correction in CR mosaic by combined nonlinear and linear transformations
Guo-Qing Wei, JianZhong Qian, Helmuth F. Schramm, et al.
In this paper, we present a method to correct for intensity artifacts in mosaic composition of Computed Radiography (CR) images. The white band artifacts not only distort diagnostic information, but also cause visual disturbances in the examination by physicians. We propose a hybrid method to enhance the image intensity and to correct the brightness differences. A nonlinear transformation method is presented for enhancement, whereas a linear regression method is utilized to compensate for the intensity differences between the white band and normal exposure regions. A knowledge-based method is proposed which can autonomously decide whether the nonlinear enhancement step needs to be bypassed, since in some cases over-enhancement may result from the correction algorithm. Experimental results with different images are presented to show the effectiveness of the proposed method.
Correction for partial volume effects in brain perfusion ECT imaging
The accurate quantification of brain perfusion for emission computed tomography data (PET-SPECT) is limited by partial volume effects (PVE). This study presents a new approach to accurately estimate the true tissue tracer activity within the grey matter tissue compartment. The methodology is based on the availability of additional anatomical side information and on the assumption that the activity concentration within the white matter tissue compartment is constant. Starting from an initial estimate of the white matter and grey matter activity, the true tracer activity within the grey matter tissue compartment is estimated by an alternating ML-EM algorithm. During the updating step the constant activity concentration within the white matter compartment is modelled in the forward projection in order to reconstruct the true activity distribution within the grey matter tissue compartment, hence reducing partial volume averaging. Consequently, the estimate of the constant activity in the white matter tissue compartment is updated based on the newly estimated activity distribution in the grey matter tissue compartment. We have tested this methodology by means of computer simulations. A T1-weighted MR brain scan of a patient was segmented into white matter, grey matter and cerebrospinal fluid using the segmentation package of the SPM software (Statistical Parametric Mapping). The segmented grey and white matter were used to simulate a SPECT acquisition, modelling the noise and the distance-dependent detector response. Scatter and attenuation were ignored. Following the above-described strategy, simulations have shown that it is possible to reconstruct the true activity distribution for the grey matter tissue compartment (activity/tissue volume), assuming constant activity in the white matter tissue compartment.
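The ML-EM update underlying the alternating scheme is standard; a bare-bones sketch without the anatomical prior or the constant white matter constraint (those are the paper's contributions and are not modelled here) is:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Basic ML-EM reconstruction: A is the system matrix (projections x voxels),
    y the measured counts. Anatomical side information is not modelled here."""
    x = np.ones(A.shape[1])                   # initial uniform activity estimate
    sens = A.sum(axis=0)                      # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, 1e-12)   # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

A = np.random.rand(200, 100)                  # toy system matrix
truth = np.random.rand(100)
y = np.random.poisson(A @ truth * 50) / 50.0  # noisy simulated projections
x_hat = mlem(A, y)
```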
Inhomogeneity correction for magnetic resonance images with fuzzy C-mean algorithm
Segmentation of magnetic resonance (MR) images plays an important role in the quantitative analysis of brain tissue morphology and pathology. However, the inherent effect of image-intensity inhomogeneity poses a challenging problem and must be considered in any segmentation method. For example, the adaptive fuzzy c-means (AFCM) image segmentation algorithm proposed by Pham and Prince can provide very good results in the presence of the inhomogeneity effect under the condition of low noise levels, but its results deteriorate quickly as the noise level goes up. In this paper, we present a new fuzzy segmentation algorithm to improve the noise performance of the AFCM algorithm. It achieves accurate segmentation in the presence of the inhomogeneity effect and high noise levels by incorporating spatial neighborhood information into the objective function. This new algorithm was tested on both simulated and real clinical MR images. The results demonstrate the improved performance of this new algorithm over the AFCM in the clinical environment, where such inhomogeneity and noise levels are commonly encountered.
Impulse noise reduction in MR images using one rule-base merging method of fuzzy weighted mean filters
Mohammad Sabati, Maitham Sabati, S. Abdolkarim Hosseini Ravandi, et al.
Impulse noise contamination can affect the interpretability of magnetic resonance (MR) images. Nonlinear adaptive techniques are often computationally expensive when reducing the noise while retaining image details. Due to their lack of adaptability, median filters do not always perform well when the noise probability is relatively high. To provide simplicity and adaptability, we present here a fuzzy weighted mean (FWM) filter that uses both numerical data and linguistic information. The FWM filter determines the weight for each pixel in the neighborhood in response to the local features. In this study, the training data were calculated from twenty impulse-noise-free MR images obtained from different regions in humans. A 5 x 5 window was used to scan across the images. The fuzzy system was constructed using the learning-from-examples method and was then merged with a Takagi-Sugeno fuzzy system, based on information obtained from experts, using a one-rule-base merging method. Preliminary assessment of the method on twenty noisy images showed encouraging results in effectively reducing the error in the mean-square sense (compared to median filters) and preserving edges and small structures, although the appearance of the original images was not always faithfully recovered.
Scalable image enhancement for head CT scans with reduced iodine
Lijun Yin, Ahmad I. Zainal Abidin, Ja Kwei Chang
The use of iodinated contrast agent is common in CT scanning of the head. However, the use of the agent is relatively costly, and may result in an adverse reaction in some patients. It is of interest to investigate the possibility of reducing the amount of contrast material without sacrificing the diagnostic information. We developed an algorithm and software to simulate "intermediate" images based on two input images of head CT scans performed on the same patient, captured at the same angle. A non-linear relationship between CT image intensity and the amount of contrast agent necessitates a guideline curve derived from the image statistics to generate visually realistic simulated CT images. The study shows that the saturation time detected is equivalent to 60 - 70 % of total injected contrast agent. The development also includes adjustable enhancement to the lesion area and its edges. By using the iodine-contrast-agent guided curves, the lesion area can be enhanced adjustably. For flexibility and immediate display, the edge can be scalable by a certain factor to add on the original image for image enhancement. Enhancement on lesion area and its edges improves the visualization quality.
Interaction between noise suppression and inhomogeneity correction in MRI
Albert Montillo, Jayaram K. Udupa, Leon Axel, et al.
While cardiovascular disease is the leading cause of death in most developed countries, SPAMM-MRI can reduce morbidity by facilitating patient diagnosis. An image analysis method with a high degree of automation is essential for clinical adoption of SPAMM-MRI. The degree of this automation depends on the amount of thermal noise and surface-coil-induced intensity inhomogeneity that can be removed from the images. An ideal noise suppression algorithm removes thermal noise yet retains or enhances the strength of the edges of salient structures. In this paper, we quantitatively compare and rank several noise suppression algorithms in images from both normal and diseased subjects using measures of the residual noise and edge strength, together with the statistical significance levels and confidence intervals of these measures. We also investigate the interrelationship between inhomogeneity correction and noise suppression algorithms and compare the effect of the ordering of these algorithms. The variance of thermal noise does not tend to change with position; however, inhomogeneity correction increases noise variance in deep thoracic regions. We quantify the degree to which an inhomogeneity estimate can improve noise suppression and how well noise suppression can facilitate the identification of homogeneous tissue regions and thereby assist in inhomogeneity correction.
Combination of automatic non-rigid and landmark-based registration: the best of both worlds
Bernd Fischer, Jan Modersitzki
Automatic, parameter-free, and non-rigid registration schemes are known to be valuable tools in various (medical) image processing applications. Typically, these approaches aim to match intensity patterns in each scan by minimizing an appropriate distance measure. The outcome of an automatic registration procedure in general matches the target image quite well on average. However, it may be inaccurate at specific, important locations, such as anatomical landmarks. On the other hand, landmark-based registration techniques are designed to accurately match user-specified landmarks. A drawback of landmark-based registration is that the intensities of the images are completely neglected; consequently, the registration result away from the landmarks may be very poor. Here we propose a framework for novel registration techniques which are capable of combining automatic and landmark-driven approaches in order to benefit from the advantages of both strategies. We also propose a general mathematical treatment of this framework and a particular implementation. The procedure computes a displacement field which is guaranteed to produce a one-to-one match between given landmarks and at the same time minimizes an intensity-based measure over the remaining parts of the images. The properties of the new scheme are demonstrated for a variety of numerical examples. It is worth noting that we not only present a new approach, but propose a general framework for a variety of different approaches; the choice of the main building blocks, the distance measure and the smoothness constraint, is essentially free.
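A toy version of such a combined criterion, using a soft landmark penalty and an affine transform rather than the paper's exact-constraint non-rigid formulation, could be written as:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import affine_transform

def combined_cost(params, moving, fixed, lm_moving, lm_fixed, alpha=10.0):
    """Intensity distance (SSD) plus a soft landmark mismatch penalty.
    The paper enforces an exact landmark match; a penalty is used here for brevity.
    (M, t) maps fixed-image coordinates into the moving image, matching the
    pull-back convention of scipy.ndimage.affine_transform."""
    a, b, tx, c, d, ty = params
    M = np.array([[a, b], [c, d]])
    t = np.array([tx, ty])
    warped = affine_transform(moving, M, offset=t, order=1)   # resample moving onto fixed grid
    ssd = np.mean((warped - fixed) ** 2)
    mapped = lm_fixed @ M.T + t                               # fixed landmarks mapped into moving space
    lm_err = np.mean(np.sum((mapped - lm_moving) ** 2, axis=1))
    return ssd + alpha * lm_err

fixed = np.random.rand(64, 64)
moving = np.roll(fixed, 3, axis=0)
lm_fixed = np.array([[20.0, 20.0], [40.0, 45.0]])
lm_moving = lm_fixed + [3.0, 0.0]
x0 = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0]                           # identity transform
res = minimize(combined_cost, x0, args=(moving, fixed, lm_moving, lm_fixed), method='Powell')
```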
Multiresolution-based registration of a volume to a set of its projections
We have developed an algorithm for the rigid-body registration of a 3D CT to a set of C-arm images by matching them to computed cone-beam projections of the CT (DRRs). We precomputed rescaled versions (pyramid) of the CT volume and of the C-arm images. We perform the registration of the CT to the C-arm images starting from their coarsest resolution until we reach some finer resolution that offers a good compromise between time and accuracy. To achieve precision, we use a cubic-spline data model to compute the data pyramids, the DRRs, and the gradient and the Hessian of the cost function. We validate our algorithm on a 3D CT and on C-arm images of a cadaver spine using fiducial markers. When registering the CT to two C-arm images, our algorithm operates safely if the angle between the two image planes is larger than 10°. It achieves an accuracy with a mean and a standard deviation of approximately 2.0±1.0 mm.
Fast registration algorithm using a variational principle for mutual information
Murray E. Alexander, Randy Summers
A method is proposed for cross-modal image registration based on mutual information (MI) matching criteria. Both conventional and "normalized" MI are considered. MI may be expressed as a functional of a general image displacement field u. The variational principle for MI provides a field equation for u. The method employs a set of "registration points" consisting of a prescribed number of strongest edge points of the reference image, and minimizes an objective function D defined as the sum of the square residuals of the field equation for u at these points, where u is expressed as a sum over a set of basis functions (the affine model is presented here). D has a global minimum when the images are aligned, with a “basin of attraction” typically of width ~0.3 pixels. By pre-filtering with a low-pass filter, and using a multiresolution image pyramid, the basin may be significantly widened. The Levenberg-Marquardt algorithm is used to minimize D. Tests using randomly distributed misalignments of image pairs show that registration accuracy of 0.02 - 0.07 pixels is achieved, when using cubic B-splines for image representation, interpolation, and Parzen window estimation.
Non-rigid registration of medical image using a B-spline transformation
Songyuan Tang, Jianzhe Wang, Tianzi Jiang
A new nonrigid registration method is developed in this paper. In the proposed algorithm, we use an αB-spline transformation based on a free-form deformation model to register images. The αB-spline not only possesses many of the desirable geometrical and computational properties of the B-spline, but also enhances its shape-control capability. It uses the linear singular blending technique, which is based on blending parameters defined at the B-spline control vertices. The volume can be transformed by changing the positions of the control vertices and the values of the blending parameters. We first use an affine transformation to coarsely match the two images, then warp the image by altering the positions of the control points, and finally adjust the values of the blending parameters to refine the deformation. We combine the SSD similarity measure with the regularization of a Laplacian model as the cost function. Compared with the affine and B-spline transformations, the proposed method gives better results.
Normalized mutual information-based registration using K-means clustering-based histogram binning
Zeger F. Knops, J. B. Antoine Maintz, Max A. Viergever, et al.
A new method for the estimation of the intensity distributions of the images prior to normalized mutual information (NMI) based registration is presented. Our method is based on the K-means clustering algorithm, as opposed to the generally used equidistant binning method. K-means clustering is a binning method with a variable size for each bin, which is adjusted to achieve a natural clustering. Registering clinical MR-CT and MR-PET images with K-means clustering based intensity distribution estimation shows that a significant reduction in computational time, without loss of accuracy compared to standard equidistant binning based registration, is possible. Further inspection shows a reduction in the NMI variance and a reduction in local maxima for K-means clustering based NMI registration as opposed to equidistant binning based NMI registration.
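A minimal sketch of the binning idea, deriving variable-width bin edges from 1D k-means centres and plugging them into the usual NMI computation (synthetic images, illustrative parameters), is shown below.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_bin_edges(intensities, n_bins):
    """Variable-width bin edges from 1D k-means: edges lie halfway between
    sorted cluster centres, giving a 'natural' clustering of intensities."""
    km = KMeans(n_clusters=n_bins, n_init=10, random_state=0)
    centres = np.sort(km.fit(intensities.reshape(-1, 1)).cluster_centers_.ravel())
    inner = (centres[:-1] + centres[1:]) / 2.0
    return np.concatenate(([intensities.min()], inner, [intensities.max() + 1e-6]))

def normalized_mutual_information(a, b, edges_a, edges_b):
    """NMI = (H(A) + H(B)) / H(A, B) from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[edges_a, edges_b])
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return (h(pa) + h(pb)) / h(p.ravel())

img_a = np.random.rand(64, 64)
img_b = img_a + 0.05 * np.random.rand(64, 64)     # roughly aligned stand-in images
ea = kmeans_bin_edges(img_a.ravel(), n_bins=16)
eb = kmeans_bin_edges(img_b.ravel(), n_bins=16)
print(normalized_mutual_information(img_a, img_b, ea, eb))
```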
Registration and fusion of multimodal vascular images: a phantom study
Nicolas Boussion, Jacques A. de Guise, Gilles Soulez, et al.
The aim of this work was to compare the geometric accuracy of X-ray angiography, MRI, X-ray computed tomography (XCT), and ultrasound imaging (B-mode and IVUS) for measuring the lumen diameters of blood vessels. An image fusion method also was developed to improve these measurements. The images were acquired from a realistic phantom mimicking normal vessels of known internal diameters. After acquisition, the multimodal images were coregistered, by manual alignment of fiducial markers and then by automatic maximization of mutual information. The fusion method was performed by means of a fuzzy logic modeling approach followed by a combination process based on possibilistic logic. The data showed (i) the good geometric accuracy of XCT compared to the other methods for all studied diameters; and (ii) the good results of fused images compared to single modalities alone. For XCT, the error varied from 1.1% to 9.7%, depending on the vessel diameter that ranged from 0.93 to 6.24 mm. MRI-IVUS fusion allowed variability of measurements to be reduced up to 78%. To conclude, this work underlined both the usefulness of the vascular phantom as a validation tool and the utility of image fusion in the vascular context. Future work will consist of studying pathological vessel shapes, image artifacts and partial volume effect correction.
Similarity metrics based on nonadditive entropies for 2D-3D multimodal biomedical image registration
Information theoretic similarity metrics, including mutual information, have been widely and successfully employed in multimodal biomedical image registration. These metrics are generally based on the Shannon-Boltzmann-Gibbs definition of entropy. However, other entropy definitions exist, including generalized entropies, which are parameterized by a real number. New similarity metrics can be derived by exploiting the additivity and pseudoadditivity properties of these entropies. In many cases, use of these measures results in an increased percentage of correct registrations. Results suggest that generalized information theoretic similarity metrics, used in conjunction with other measures, including Shannon entropy metrics, can improve registration performance.
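The abstract does not name the generalized entropy used, but the pseudoadditivity property it mentions is characteristic of the Tsallis entropy; assuming that family, a short numerical check of the property looks like this:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis (nonadditive) entropy of a discrete distribution p, parameterized
    by the real number q; it reduces to Shannon entropy as q -> 1."""
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Pseudoadditivity for independent A and B:
# S_q(A, B) = S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B)
pa, pb, q = np.array([0.2, 0.8]), np.array([0.5, 0.5]), 0.8
joint = np.outer(pa, pb).ravel()
lhs = tsallis_entropy(joint, q)
rhs = (tsallis_entropy(pa, q) + tsallis_entropy(pb, q)
       + (1 - q) * tsallis_entropy(pa, q) * tsallis_entropy(pb, q))
print(np.isclose(lhs, rhs))   # True: the pseudoadditivity property holds
```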
Noninvasive MR to 3D rotational x-ray registration of vertebral bodies
Everine Brenda van de Kraats, Theo van Walsum, Jorrit-Jan Verlaan, et al.
3D Rotational X-ray (3DRX) imaging can be used to intraoperatively acquire 3D volumes depicting bone structures in the patient. Registration of 3DRX to MR images, containing soft tissue information, facilitates image guided surgery on both soft tissue and bone tissue information simultaneously. In this paper, automated noninvasive registration using maximization of mutual information is compared to conventional interactive and invasive point-based registration using the least squares fit of corresponding point sets. Both methods were evaluated on 3DRX images (with a resolution of 0.62x0.62x0.62 mm3) and MRI images (with resolutions of 2x2x2 mm3, 1.5x1.5x1.5 mm3 and 1x1x1 mm3) of seven defrosted spinal segments implanted with six or seven markers. The markers were used for the evaluation of the registration transformations found by both point- and maximization of mutual information based registration. The root-mean-squared-error on markers that were left out during registration was calculated after transforming the marker set with the computed registration transformation. The results show that the noninvasive registration method performs significantly better (p≤0.01) for all MRI resolutions than point-based registration using four or five markers, which is the number of markers conventionally used in image guided surgery systems.
Prospective registration of inter-examination MR images
Egbert Gedat, Ingolf Sack, Juergen Braun, et al.
The monitoring of the development of cerebral diseases such as stroke or brain tumors with MRI requires high-precision comparison of initial and follow-up images. Retrospective registration often produces artifacts, especially at boundaries between different tissue structures. However, by manipulating the gradients, MRI scanners offer the possibility of shifting and rotating image planes fast and without removing the patient. Two approaches for prospective registration were implemented and tested on phantoms and healthy volunteers. To speed up calculation, both registration algorithms used the three orthogonal two-dimensional localizer images that were acquired prior to each measurement. In the first approach, the image is projected onto one axis to determine the rotation between initial and follow-up examination. The second algorithm uses cross-correlation for rotational correction. Both algorithms maximize the cross-correlation for correction of the shifts. After 2-D registration in each orientation, the gradients of the tomograph are adapted according to the calculated transformation matrix. The results were evaluated with a 3-D rigid-body registration using Automated Image Registration. The cross-correlation method was found to be very robust, while the 1-D projection algorithm was sufficiently fast but registration results depended on the shape of the head.
A multiple-layer flexible mesh template matching method for non-rigid registration between a pelvis model and CT images
Jianhua Yao, Russell H. Taylor
A robust non-rigid registration method has been developed to deform a pelvis model to match anatomical structures in a CT image. A statistical volumetric model is constructed from a collection of training CT datasets. The model is represented as a hierarchical tetrahedral mesh. Prior information on both shape properties and density properties is incorporated in the model. The non-rigid registration process consists of three stages: affine transformation, global deformation, and local deformation. A multiple-layer flexible mesh template matching method is developed to adjust the location of each vertex on the model to achieve an optimal match with the anatomical structure. The mesh template is retrieved directly from the tetrahedral mesh structure, with a multiple-layer structure for different scales of anatomical features and a flexible searching sphere for robust template matching. An adaptive deformation focus strategy is adopted to gradually deform each vertex to its matched destination. Several constraints are applied to guarantee smoothness and continuity. A "leave-one-out" validation showed that the method can achieve about 94% volume overlap and 5.5% density error between the registered model and the ground truth model.
A method for fully automated measurement of neurological structures in MRI
Edward A. Ashton, Jonathan K. Riek, Larry Molinelli, et al.
A method for fully automating the measurement of various neurological structures in MRI is presented. This technique uses an atlas-based trained maximum likelihood classifier. The classifier requires a map of prior probabilities, which is obtained by registering a large number of previously classified data sets to the atlas and calculating the resulting probability that each represented tissue type or structure will appear at each voxel in the data set. Classification is then carried out using the standard maximum likelihood discriminant function, assuming normal statistics. The results of this classification process can be used in three ways, depending on the type of structure that is being detected or measured. In the most straightforward case, measurement of a normal neural sub-structure such as the hippocampus, the results of the classifier provide a localization point for the initiation of a deformable template model, which is then optimized with respect to the original data. The detection and measurement of abnormal structures, such as white matter lesions in multiple sclerosis patients, requires a slightly different approach. Lesions are detected through the application of a spectral matched filter to areas identified by the classifier as white matter. Finally, detection of unknown abnormalities can be accomplished through anomaly detection.
Alignment of multimodality, 2D, and 3D breast images
In a larger effort, we are studying methods to improve the specificity of the diagnosis of breast cancer by combining the complementary information available from multiple imaging modalities. Merging information is important for a number of reasons. For example, contrast uptake curves are an indication of malignancy. The determination of anatomical locations in corresponding images from various modalities is necessary to ascertain the extent of regions of tissue. To facilitate this fusion, registration becomes necessary. We describe in this paper a framework in which 2D and 3D breast images from MRI, PET, Ultrasound, and Digital Mammography can be registered to facilitate this goal. Briefly, prior to image acquisition, an alignment grid is drawn on the breast skin. Modality-specific markers are then placed at the indicated grid points. Images are then acquired by a specific modality with the modality-specific external markers in place, causing the markers to appear in the images. This is the first study that we are aware of that has undertaken the difficult task of registering 2D and 3D images of such a highly deformable organ (the breast) across such a wide variety of modalities. This paper reports some very preliminary results from this project.
Towards the automatic detection of large misregistrations
In many cases three-dimensional anatomical and functional images (SPECT, PET, MRI, CT) must be combined to determine the precise nature and extent of lesions in many parts of the body. The images must be adequately aligned prior to any addition, subtraction, or other combination; registration can be done by experienced radiologists via visual inspection, mental reorientation, and overlap of slices, or by an automated registration algorithm. To be useful clinically, the latter requires validation. The human capacity to evaluate registration results visually is limited and the process is time consuming. This paper describes an algorithmic procedure that distinguishes between badly misregistered pairs and those likely to be clinically useful. Our algorithm used brain and/or skin/air contours and a function based on the principal axes of the contour volumes. The results of the present study indicate that the measure based on the combination of brain and skin contours and a principal-axis function is a good first step to reduce the number of badly registered images reaching the clinician.
Clinical and quantitative assessment of multimodal retinal image fusion
Balaji Raman, Mark P. Wilson, Sheila Coyne Nemeth, et al.
The fusion of multi-modal medical images provides a new diagnostic tool with clinical applications. Over the years, image fusion has been used in a number of medical disciplines. However, little fusion work in ophthalmic imaging appears in the literature. With the advent of multi-modal digital information of the retina and advanced image registration programs, the possibility of displaying complementary information in one fused retinal image becomes visually and clinically exciting. The objective of this research was to demonstrate that through fusion of multi-modal retinal information one could increase the information content of retinal pathologies on a fused image. Two aspects of image fusion were addressed in this study: image registration and image fusion of two distinctly different modalities, Fluorescein Angiography (FA) videos and standard color photography. Quantitative analysis of the fusion results was performed using entropy and image noise index. Qualitative analysis was performed by simultaneous visual comparison of two modalities (FA and color) of all registered unfused modes and the fused modes.
Four-dimensional multimodality image registration applied to gated SPECT and gated MRI
Usaf E. Aladl, Gilbert A. Hurwitz, Damini Dey, et al.
An automatic registration technique for gated cardiac SPECT to gated MRI is presented. Prior to registration, the MRI data set is subjected to a preprocessing technique that automatically isolates the heart. During preprocessing, voxels in the MRI volume are designated as either dynamic or static based on their change in intensity over the course of the cardiac cycle. This allows the elimination of the external organs in the MRI dataset, leaving the heart as the main feature of the volume. To separate the left ventricle (LV) from the remainder of the heart, optimal thresholding is used. A mutual-information-based algorithm is used to register the two studies. The registration technique was tested with fourteen patient data sets, and the results were compared to those of manual registration by an expert. The pre-processing step significantly improved the accuracy of the registration when compared to automatic registration performed without pre-processing.
Marker orientation in fiducial registration
Fiducial markers are often employed in image-guided surgical procedures to provide positional information based on pre-operative images. In the standard technique, centroids of three or more markers are localized in both image space and physical space. The localized positions are used in a closed-form algorithm to determine the three-dimensional rigid-body transformation that will register the two spaces in the least-squares sense. In this work we present (1) a method for determining the orientation of the axis of symmetry of a cylindrical marker in a tomographic image and (2) an extension to the standard approach to rigid-body registration that utilizes the orientation of marker axes as an adjunct to the positions of their centroids. The extension is a closed-form, least-squares solution. Unlike the standard approach, the extension is capable of three-dimensional registration with only two markers. We evaluate the accuracy of the former method by means of CT and MR images of markers attached to a phantom and the accuracy of the latter method by means of computer simulations.
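For context, the standard centroid-based step that this work extends is a closed-form, least-squares rigid-body fit. A minimal sketch of that classical SVD solution (without the marker-orientation extension described in the abstract) might look as follows; the function name and example points are assumptions.

```python
# Sketch: closed-form, least-squares rigid-body fit between corresponding
# fiducial centroids localized in image space and physical space.
import numpy as np

def rigid_fit(image_pts, physical_pts):
    """Return rotation R and translation t such that physical ~= R @ image + t."""
    p, q = np.asarray(image_pts, float), np.asarray(physical_pts, float)
    p_c, q_c = p - p.mean(axis=0), q - q.mean(axis=0)
    u, _, vt = np.linalg.svd(p_c.T @ q_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))            # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q.mean(axis=0) - r @ p.mean(axis=0)
    return r, t

# Example with four synthetic markers under a known rotation and translation.
pts = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
true_r = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
moved = pts @ true_r.T + np.array([5.0, -2.0, 1.0])
r, t = rigid_fit(pts, moved)
print(np.allclose(pts @ r.T + t, moved))              # True
```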
A method for non-rigid registration of diffusion tensor magnetic resonance images
Jeffrey T. Duda, Mariano Rivera, Daniel C. Alexander, et al.
The aim of this study was to examine the registration of diffusion tensor magnetic resonance images. A method for estimating a smooth, continuous mapping between two tensor images is presented. This method includes a tensor-to-tensor measure of similarity as well as a neighborhood similarity measure intended to preserve the relative position of adjacent structures. Additionally, tensor reorientation is integrated into the algorithm in order to ensure that the structural information provided by the diffusion tensor is retained. This method was tested on a variety of synthetic data sets. Experiments indicate that the orientation similarity term plays an important role in both accuracy and speed. Additionally, an investigation of the effect of signal-to-noise ratio (SNR) was conducted to ensure the usefulness of the methods at clinically obtainable values. Qualitative examination of the results obtained with this method suggests its potential usefulness in the examination of in vivo human data, but some extension of the method as well as further testing will be necessary to fully understand its limitations for use on clinical data.
Assessing the skeletal age from a hand radiograph: automating the Tanner-Whitehouse method
Meindert Niemeijer, Bram van Ginneken, Casper A. Maas, et al.
The skeletal maturity of children is usually assessed from a standard radiograph of the left hand and wrist. An established clinical method to determine skeletal maturity is the Tanner-Whitehouse (TW2) method. This method divides the skeletal development into several stages (labelled A, B, ..., I). We are developing an automated system based on this method. In this work we focus on assigning a stage to one region of interest (ROI), the middle phalanx of the third finger. We classify each ROI as follows. A number of ROIs which have been assigned a certain stage by a radiologist are used to construct a mean image for that stage. For a new input ROI, landmarks are detected by using an Active Shape Model. These are used to align the mean images with the input image. Subsequently, the correlation between each transformed mean stage image and the input is calculated. The input ROI can be assigned to the stage with the highest correlation directly, or the values can be used as features in a classifier. The method was tested on 71 cases ranging from stage E to I. The ROI was staged correctly in 73.2% of all cases, and in 97.2% of the incorrectly staged cases the error was not more than one stage.
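A minimal sketch of the final stage-assignment step, assuming the mean stage images have already been warped to the input ROI (the Active Shape Model landmark detection and alignment are omitted): the ROI receives the stage whose aligned mean image correlates best with it.

```python
# Sketch: assign the TW2 stage whose aligned mean image has the highest
# normalized correlation with the input ROI.
import numpy as np

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def assign_stage(roi, aligned_mean_images):
    """aligned_mean_images: dict mapping stage label ('E'..'I') to a 2-D array."""
    scores = {stage: correlation(roi, mean_img)
              for stage, mean_img in aligned_mean_images.items()}
    # Best stage plus all correlations (the latter can feed a classifier instead).
    return max(scores, key=scores.get), scores
```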
Interactive shape models
Supervised segmentation methods, in which a model of the shape of an object and its gray-level appearance is used to segment new images, have become popular techniques in medical image segmentation. However, the results of these methods are not always accurate enough. We show how to extend one of these segmentation methods, active shape models (ASM), so that user interaction can be incorporated. In this interactive shape model (iASM), a user drags points to their correct position, thus guiding the segmentation process. Experiments for three medical segmentation tasks are presented: segmenting lung fields in chest radiographs, hand outlines in hand radiographs, and thrombus in abdominal aorta aneurysms from CTA data. By fixing only a small number of points, the proportion of sufficiently accurate segmentations can be increased from 20-70% without interaction to over 95%. We believe that iASM can be used in many clinical applications.
Karhunen-Loeve transform for analysis of cardiac function in myocardial gated SPECT
Oleg Blagosklonov, Remy Sabbah, Pascal Berthout, et al.
Myocardial gated SPECT (gSPECT) is widely used to evaluate different parameters of cardiac function. Semi-quantitative analysis of the images (a visual analysis with a simple scaling of function) can be performed after image segmentation using a 4- or 5-point scale. The purpose of this study was to compare two functional images (KL0 and KL1) obtained by the Karhunen-Loeve transform (KLT) with "clinical" gSPECT images and to test the feasibility of semi-quantitative analysis of perfusion and contraction by KLT. 99mTc-gSPECT studies were performed in 105 patients with suspected coronary artery disease. We performed visual and semi-quantitative analyses of gSPECT and KLT images (KLT was applied to images from central slices of the 3 axes). Our results showed that KL0 images match myocardial perfusion images, while KL1 images combine the data on the spatial and temporal evolution of each pixel and regroup pixels by families. We suggest that opposite parts of KL1 images characterize two components of myocardial contraction: wall motion and thickening. These preliminary results showed relations between KLT images and myocardial function: KL0 images and myocardial perfusion, KL1 images and cardiac mechanics. These findings demonstrate the potential of KLT for yielding additional useful clinical information.
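Since the KLT of a gated study amounts to projecting each pixel's time course onto the temporal eigenvectors, a minimal sketch is given below, assuming `frames` is a gates-by-rows-by-columns array; normalization conventions vary between implementations.

```python
# Sketch: Karhunen-Loeve transform of a gated slice; the first two projection
# images correspond to the KL0 and KL1 images discussed in the abstract.
import numpy as np

def klt_images(frames, n_components=2):
    n_gates, rows, cols = frames.shape
    x = frames.reshape(n_gates, -1).astype(float)            # gates x pixels
    x_centered = x - x.mean(axis=0, keepdims=True)           # remove temporal mean
    cov = x_centered @ x_centered.T / x_centered.shape[1]    # gates x gates
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                         # strongest first
    basis = eigvecs[:, order[:n_components]]                  # gates x components
    kl = basis.T @ x_centered                                 # components x pixels
    return kl.reshape(n_components, rows, cols)

# Example with a synthetic 8-gate study.
rng = np.random.default_rng(1)
kl0, kl1 = klt_images(rng.random((8, 64, 64)))
```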
Segmentation of 3D medical image data sets with a combination of region-based initial segmentation and active surfaces
Segmentation is an essential step in the analysis of medical images. For the segmentation of 3-D data sets in clinical practice, methods are needed that require little user interaction time and are highly flexible. For this purpose we propose a two-step segmentation approach. The first step produces a coarse segmentation using the Image Foresting Transformation. In the second step an active surface creates the final segmentation. Our segmentation method was tested on real CT images, and its performance was compared with manual segmentation. We found our method to be reliable.
Improving specificity to image features for segmentation with deformable models
Jens von Berg, Vladimir Pekar, Roel Truyen, et al.
The model of image features is critical to the robustness and accuracy of deformable models. Usually, an edge detector is used for this purpose, because the object boundary is expected to correspond with a strong directed gradient in the image. Two methods are presented to make a feature model more specific and suitable for a given object class for which this assumption is too weak. One aims at a better conformance of the model with the image features by a spatially varying parameterisation of clustered features that is learnt from a training set. The other discriminates the object surface from adjacent false attractors that have similar gradient properties by additional grey value properties. The clustered feature model was successfully applied in left ventricle segmentation to delineate the epicardium in cardiac MR images for which the image gradient reverses sign along the surface. The discriminating feature approach successfully prevented false attractions in CT bone segmentation to strong edges within other nearby bones (shown for femur head). In this case, the grey value beyond the attempted gradient position discriminated well the desired bone surface edges from these false edges.
Coupled deformable models with spatially varying features for quantitative assessment of left ventricular function from cardiac MRI
Cardiac MRI has improved the diagnosis of cardiovascular diseases by enabling the quantitative assessment of functional parameters. This requires an accurate identification of the myocardium of the left ventricle. This paper describes a novel segmentation technique for automated delineation of the myocardium. We propose to use prior knowledge by integrating a statistical shape model and a spatially varying feature model into a deformable mesh adaptation framework. Our shape model consists of a coupled, layered triangular mesh of the epi- and endocardium. It is adapted to the image by iteratively carrying out i) a surface detection and ii) a mesh reconfiguration by energy minimization. For surface detection a feature search is performed to find the point with the best feature combination. To accommodate the different tissue types the triangles of the mesh are labeled, resulting in a spatially varying feature model. The energy function consists of two terms: an external energy term, which attracts the triangles towards the features, and an internal energy term, which preserves the shape of the mesh. We applied our method to 40 cardiac MRI data sets (FFE-EPI) and compared the results to manual segmentations. A mean distance of about 3 mm with a standard deviation of 2 mm to the manual segmentations was achieved.
Automated landmark generation for constructing statistical shape models
In this paper, a novel method is provided for automatic generation of landmarks to construct statistical shape models. The method generates a sparse polygonal approximation for each shape example in the training set and then automatically aligns the shape polygons by minimizing the L2 distance of the turning functions of their polygonal approximations. The turning function measures the angle of counterclockwise tangent as a function of the arc-length and is especially suitable for shape alignment since it is piecewise constant for a polygon, and invariant under translation, rotation and scaling of the polygon. Based on the minimal L2 distance, a shape classifier is used to remove the shapes very different from the training set to prevent undesirable distortion of the mean shape. For some shapes with non-rigid deformation, such as hands, a local alignment is performed by using a visual part decomposition scheme and a partial match algorithm. Finally, a set of salient match pairs are detected and used to generate the landmarks. This method has been successfully applied to various anatomical structures. As expected, a large portion of shape variability is captured.
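A minimal sketch of the turning-function representation used for alignment is given below: the counterclockwise tangent angle is sampled as a piecewise-constant function of normalized arc length so that two polygons can be compared with a plain L2 distance. The optimal rotation and starting-point search, the shape classifier, and the part decomposition are omitted.

```python
# Sketch: turning function of a closed polygon and the L2 distance between two
# such functions (rotation and starting-point optimization not included).
import numpy as np

def turning_function(polygon, samples=256):
    """Cumulative tangent angle vs. normalized arc length, uniformly sampled."""
    pts = np.asarray(polygon, float)
    edges = np.roll(pts, -1, axis=0) - pts
    lengths = np.linalg.norm(edges, axis=1)
    arc = np.concatenate([[0.0], np.cumsum(lengths)]) / lengths.sum()
    angles = np.unwrap(np.arctan2(edges[:, 1], edges[:, 0]))
    grid = np.linspace(0.0, 1.0, samples, endpoint=False)
    # Piecewise constant: use the angle of the edge containing each sample.
    idx = np.searchsorted(arc, grid, side='right') - 1
    return angles[np.clip(idx, 0, len(angles) - 1)]

def l2_turning_distance(poly_a, poly_b):
    ta, tb = turning_function(poly_a), turning_function(poly_b)
    return float(np.sqrt(np.mean((ta - tb) ** 2)))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0.5, 0), (1, 0.5), (0.5, 1), (0, 0.5)]
print(l2_turning_distance(square, diamond))   # constant offset of pi/4 (a rotation)
```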
Pre-clinical evaluation of implicit deformable models for three-dimensional segmentation of brain aneurysms from CTA images
Monica Hernandez, Rosario Barrena, Gabriel Hernandez, et al.
Knowledge of brain aneurysm dimensions is essential during the planning stage of minimally invasive surgical interventions using Guglielmi Detachable Coils (GDC). These parameters are obtained in clinical routine using 2D Maximum Intensity Projection images from Computed Tomographic Angiography (CTA). Automated quantification of the three-dimensional structure of aneurysms directly from the 3D data set may be used to provide accurate and objective measurements of the clinically relevant parameters. The properties of Implicit Deformable Models make them suitable to accurately extract the three-dimensional structure of the aneurysm and its connected vessels. We have devised a two-stage segmentation algorithm for this purpose. In the first stage, a rough segmentation is obtained by means of the Fast Marching Method, combining a speed function based on vessel enhancement filtering with a freezing algorithm. In the second stage, this rough segmentation provides the initialization for Geodesic Active Contours driven by region-based information. The latter problem is solved using the Level Set algorithm. This work presents a comparative study between a clinical and a computerized protocol to derive three geometrical descriptors of aneurysm morphology that are standard in assessing the viability of surgical treatment with GDCs. The study was performed on a database of 40 brain aneurysms. The manual measurements were made by two neuroradiologists in two independent sessions. Both inter- and intra-observer variability and comparison with the automated method are presented. According to these results, Implicit Deformable Models are a suitable technique for this application.
Automatic detection of the view position of chest radiographs
Automatic identification of frontal (posteroanterior/anteroposterior) vs. lateral chest radiographs is an important preprocessing step in medical imaging. A recent approach by Amura et al. (Procs SPIE 2002; 4684: 308-315) is based on the manual selection and combination of about 500 radiographs to generate as many as 24 templates by pixel-wise summing of the references, and a correctness rate of 99.99% is reported. In order to design a fully automated procedure, 1,867 images were arbitrarily selected from clinical routine as references for this work: 1,266 in frontal and 601 in lateral view position. The size of the radiographs varies between 2,000 and 4,000 pixels in each direction. Automatic categorization is done in two steps. First, the image is reduced substantially in size. Regardless of the initial aspect ratio, a squared version is obtained, where the number h of pixels in each direction is a power of two. In the second step, the normalized cross-correlation function at the optimal displacement is used for 5-nearest-neighbor classification. Leave-one-out experiments were performed for h = 4, 8, 16, 32, and 64, resulting in mean correctness of 92.0%, 99.3%, 99.3%, 99.6%, and 99.4%, respectively. With respect to the approach of Amura et al., these results show that the determination of the view position of chest radiographs can be fully automated and substantially simplified if the correlation function is used directly for 5-NN classification.
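A minimal sketch of the two-step categorization, assuming radiographs are plain 2-D arrays: each image is reduced to a small h x h square, and a query is classified by 5-nearest-neighbor voting with the normalized cross-correlation (evaluated at zero displacement here, for brevity) as the similarity measure.

```python
# Sketch: size reduction followed by correlation-based 5-NN view classification.
import numpy as np

def downscale(img, h=32):
    """Crude nearest-neighbour reduction to an h x h square."""
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(h) * img.shape[1] // h
    return img[np.ix_(rows, cols)].astype(float)

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify_view(query, references, labels, k=5):
    """labels: 'frontal' or 'lateral' for each reference radiograph."""
    q = downscale(query)
    sims = [ncc(q, downscale(r)) for r in references]
    nearest = np.argsort(sims)[-k:]                  # k most similar references
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```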
Similarity measurement using polygon curve representation and Fourier descriptors for shape-based vertebral image retrieval
Shape-based retrieval of vertebral x-ray images is a challenging task because of the high similarity among vertebral shapes. Most techniques, such as global shape properties or scale-space filtering, lose or fail to detect local details. As a result of this shortfall, the number of retrieved images is so high that the retrieval result is sometimes meaningless. To retrieve a small number of best-matched images, shape representation and similarity measurement techniques must distinguish shapes with minor variations. The main challenge of shape-based retrieval is to define a shape representation method that is invariant with respect to rotation, translation, scaling, and shifts of the curve starting point. In this research, a polygon curve evolution technique was developed for smoothing polygon curves and reducing the number of data points while preserving the significant pathology of the shape. The x and y coordinates of the simplified boundary points were then converted into a bend angle versus normalized curvature length function to represent the curve. Finally, the Fourier descriptors of the shape representation were calculated for similarity measurement. This approach meets the invariance requirements and has proven to be efficient and accurate.
An improved fast marching method for detection of endocardial boundary in echocardiographic images
Jia-yong Yan, Tian-ge Zhuang
The fast marching method is an important approach for boundary detection in medical images. However, it is difficult to apply to echocardiographic images because of the inevitable noise and artifacts. This paper presents an improved fast marching approach for boundary detection in echocardiographic images and validates it by detecting and tracking the endocardial boundary. First, the traditional fast marching algorithm is applied to echocardiographic images and its shortcomings are identified. The algorithm is then improved by introducing the advancing front's average energy into the speed term, instead of determining the speed term from local image features alone. The experimental results show that the improved algorithm is very effective and robust.
Development of event-based motion correction technique for PET study using list-mode acquisition and optical motion tracking system
Sang-Keun Woo, Hiroshi Watabe, Yong Choi, et al.
Since recent Positron Emission Tomography (PET) scanners have high spatial resolution, head motion during a brain PET study can cause motion artifacts in the image, seriously degrading both image quality and quantitative accuracy. Several techniques have been proposed to correct head movement in PET images, for example the SPM and AIR software packages. However, these techniques are only applicable for correcting motion between two scans and assume no head movement during scanning. The aim of this study is to develop a technique to correct head motion on an event-by-event basis during a PET scan using list-mode data acquisition and an optical motion tracking system (POLARIS). This system uses a rebinning procedure whereby the lines of response (LOR) are geometrically transformed according to six-dimensional motion data detected by the POLARIS. A motion-corrected Michelogram was directly composed using the reoriented LOR. In the motion-corrected image, the blurring artifact due to the motion was reduced by the present technique. Since list-mode acquisition stores data on an event-by-event basis, the present technique makes it possible to correct head movement during PET scanning and has potential for real-time motion correction.
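The core of the event-based correction is a rigid transformation of each LOR's detector endpoints back to the reference head pose. A minimal sketch under stated assumptions is shown below; the function name is illustrative, and interpolation of tracker samples and Michelogram rebinning are omitted.

```python
# Sketch: map an LOR acquired after head motion back to the reference pose by
# applying the inverse of the tracked rigid motion to both endpoints.
import numpy as np

def correct_lor(endpoints, rotation, translation):
    """endpoints: (2, 3) array of detector positions; returns corrected endpoints."""
    p = np.asarray(endpoints, float)
    # Inverse of x' = R x + t is x = R^T (x' - t); row-vector form below.
    return (p - translation) @ rotation

# Example: 90-degree head rotation about z plus a 5 mm axial shift.
r = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([0.0, 0.0, 5.0])
lor = np.array([[100.0, 0.0, 10.0], [-100.0, 0.0, 10.0]])
print(correct_lor(lor, r, t))
```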
Wavelet-morphology for mass detection in digital mammogram images
Golshah A. Naghdy, Yue Li, Jian Wang
In this paper, a novel wavelet-morphology method for the detection of mass abnormalities in digital mammograms is presented. The new scheme utilizes the feature extraction capability of the wavelet transform followed by a novel recursive-enhancement morphology algorithm to detect the masses. A morphology-based segmentation algorithm is finally applied to the enhanced image to separate the mass from the normal breast tissues. This technique outlines the shape of the region of interest (the mass in mammograms). Test results have confirmed the efficacy of the technique in the automated detection of abnormalities in wavelet-based compressed mammograms.
The effect of image enhancement on the statistical analysis of functional neuroimages: wavelet-based denoising and Gaussian smoothing
Alle Meije Wink, Jos B. T. M. Roerdink
The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising schemes are extensions of WaveLab routines, using the symmetric orthogonal cubic spline wavelet basis. In a first study, activity in a time series is simulated by superimposing a time-dependent signal on a selected region. We add noise with a known signal-to-noise ratio (SNR) and spatial correlation. After denoising, the statistical analysis, performed with SPM, is evaluated. We compare the shapes of activations detected after applying the wavelet-based methods with the shapes of activations detected after Gaussian smoothing. In a second study, the denoising schemes are applied to a real functional MRI time series, where signal and noise cannot be separated. The denoised time series are analysed with SPM, while false discovery rate (FDR) control is used to correct for multiple testing. Wavelet-based denoising, combined with FDR control, yields reliable activation maps. While Gaussian smoothing and wavelet-based methods that produce smooth images work well at very low SNRs, wavelet-based methods that smooth less produce better results for time series of moderate quality.
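A minimal sketch of per-slice wavelet denoising by soft thresholding, as a stand-in for the WaveLab-based schemes in the paper: the symmetric orthogonal cubic spline basis is replaced here by a standard 'sym8' wavelet from PyWavelets, and the universal threshold is only one of several possible choices.

```python
# Sketch: 2-D wavelet decomposition, soft thresholding of the detail
# coefficients, and reconstruction.
import numpy as np
import pywt

def wavelet_denoise(slice2d, wavelet='sym8', level=3):
    coeffs = pywt.wavedec2(slice2d, wavelet, level=level)
    # Noise level estimated from the finest diagonal detail coefficients.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(slice2d.size))     # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode='soft') for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```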
A multiscale approach for the extraction of vessels
Benoit Tremblais, Bertrand Augereau, Michel Leard
In this communication we propose a new and automatic strategy for the multiscale detection of vessel centerlines. Our goal is to obtain a good representation of the vessels, that is, a precise characterization of their centerlines and diameters. The adopted solution requires the generation of an image scale-space in which the various levels of detail allow arteries of any diameter to be treated. The method proposed here is implemented using the formalism of Partial Differential Equations (PDEs) and differential geometry. Differential geometry, through the computation of a new valley measure, permits local characterization of vessel centerlines as the bottom lines of valleys of the image surface. The information given by the centerline and valley-measure scale spaces is used to obtain the 2D multiscale centerlines of the coronary arteries. For that purpose we construct a multiscale adjacency graph which keeps the K strongest detections (according to the valley measure). The obtained detection is then coded as an attributed graph, so the medical practitioner can intervene and choose the most interesting arteries for the future 3D reconstruction. Finally, we test our process on several digital coronary arteriograms and some retinal angiographies.
Incremental method for computing the intersection of discretely sampled m-dimensional images with n-dimensional boundaries
This paper describes an algorithm for clipping m-dimensional objects that intersect a compact n-dimensional rectangular area. The new algorithm is an extension of a method for line clipping in three dimensions. Motivated by the need for efficient algorithms, for example when comparing three-dimensional (3-D) images to each other, our method allows for the incremental computation of the subset of voxels in a discretely sampled image that are located inside a second image. Limited fields of view (rectangular regions of interest) in either image are easily supported. Application of our algorithm does not require the generation of an explicit geometrical description of the image intersection. Besides its generality with respect to the dimensions of the objects under consideration, our clipping method solves the problem of discriminating between points inside the clipping region and points on its edge, which is important when problems such as voxel intensity interpolation are only well-defined within the clipping area.
Effect of a small number of training cases on the performance of massive training artificial neural network (MTANN) for reduction of false positives in computerized detection of lung nodules in low-dose CT
In this study, we investigated a pattern-classification technique which can be trained with a small number of cases with a massive training artificial neural network (MTANN) for reduction of false positives in computerized detection of lung nodules in low-dose CT (LDCT). The MTANN consists of a modified multilayer artificial neural network (ANN), which is capable of operating on image data directly. The MTANN is trained by use of a large number of sub-regions extracted from input images together with the teacher images containing the distribution for the "likelihood of being a nodule." The output image is obtained by scanning of an input image with the MTANN. In the MTANN, the distinction between nodules and non-nodules is treated as an image-processing task, in other words, as a highly nonlinear filter that performs both nodule enhancement and non-nodule suppression. This allows us to train the MTANN not on a case basis, but on a sub-region basis. Therefore, the MTANN can be trained with a very small number of cases. Our database consisted of 101 LDCT scans acquired from 71 patients in a lung cancer screening program. The scans consisted of 2,822 sections, and contained 121 nodules including 104 nodules representing confirmed primary cancers. With our current CAD scheme, a sensitivity of 81.0% (98/121 nodules) with 0.99 false positives per section (2,804/2,822) was achieved. By use of the MTANN trained with a small number of training cases (n=10), i.e., five pairs of nodules and non-nodules, we were able to remove 55.8% of false positives without a reduction in the number of true positives, i.e., a classification sensitivity of 100%. Thus, the false-positive rate of our current CAD scheme was reduced from 0.99 to 0.44 false positive per section, while the current sensitivity (81.0%) was maintained.
Detection of multiple sclerosis lesions in MRIs with an SVM classifier
Annarita D'Addabbo, Nicola Ancona, Palma N. Blonda, et al.
The purpose of this paper is to test the effectiveness of a Support Vector Machine (SVM) classifier, with a Gaussian kernel function, in the automatic detection of small lesions from Magnetic Resonance Images (MRIs) of a patient affected by multiple sclerosis. The data set consists of Proton Density and T2 (spin-spin relaxation time) Spin-Echo images and a three-dimensional T1-weighted gradient echo sequence, called Magnetization-Prepared RApid Gradient Echo, that can be generated from contiguous and very thin sections, allowing detection of small lesions typically affected by partial volume effects and intersection gaps in T1-weighted Spin-Echo sequences. In this classification context, the SVM with Gaussian kernel function exhibited good classification accuracy, higher than the accuracies obtained, on the same data set, with a traditional RBF classifier, confirming its high generalization capability and its effectiveness when applied to low-dimensional multi-spectral images.
A visual data-mining approach using 3D thoracic CT images for classification between benign and malignant pulmonary nodules
Yoshiki Kawata, Noboru Niki, Hironobu Ohamatsu, et al.
This paper presents a visual data-mining approach to assist physicians in the classification between benign and malignant pulmonary nodules. The approach retrieves and displays nodules which exhibit morphological and internal profiles consistent with those of the nodule in question. It uses a three-dimensional (3-D) CT image database of pulmonary nodules for which the diagnosis is known. The central module of this approach performs analysis of the query nodule image and extraction of the features of interest: shape, surrounding structure, and internal structure of the nodules. The nodule shape is characterized by principal axes, while the surrounding and internal structure is represented by the distribution pattern of CT density and 3-D curvature indexes. The nodule representation is then applied to a similarity measure such as a correlation coefficient. For each query case, we sort all the nodules of the database from most to least similar. By applying the retrieval method to our database, we demonstrate its feasibility for retrieving similar 3-D nodule images.
Prediction of breast biopsy outcome using a likelihood ratio classifier and biopsy cases from two medical centers
Anna O. Bilska-Wolak, Carey E. Floyd Jr., Joseph Y. Lo
Potential malignancy of a mammographic lesion can be assessed using the mathematically optimal likelihood ratio (LR) from signal detection theory. We developed an LR classifier for the prediction of breast biopsy outcome of mammographic masses from BI-RADS findings. We used cases from Duke University Medical Center (645 total, 232 malignant) and the University of Pennsylvania (496, 200). The LR was trained and tested alternately on both subsets. Leave-one-out sampling was used when training and testing were performed on the same data set. When tested on the Duke set, the LR achieved a Receiver Operating Characteristic (ROC) area of 0.91 ± 0.01, regardless of whether the Duke or Pennsylvania set was used for training. The LR achieved a ROC area of 0.85 ± 0.02 for the Pennsylvania set, again regardless of which set was used for training. When using actual case data for training, the LR's procedure is equivalent to case-based reasoning and can explain the classifier's decisions in terms of similarity to other cases. These preliminary results suggest that the LR is a robust classifier for the prediction of biopsy outcome using biopsy cases from different medical centers.
Classification of mammographic masses using generalized dynamic fuzzy neural networks
Wei Keat Lim, Meng Joo Er
In this paper, computer-aided classification of mammographic masses using generalized dynamic fuzzy neural networks (GDFNN) is presented. The texture parameters, derived from the first-order gradient distribution and gray-level co-occurrence matrices (GCMs), were computed from the regions of interest (ROIs). A total of 77 images containing 38 benign cases and 39 malignant cases from the Digital Database for Screening Mammography (DDSM) were analyzed. A fast approach for automatically generating fuzzy rules from training samples was implemented to classify tumors. The novelty of this work is that it alleviates the problem of the conventional computer-aided diagnosis (CAD) system, which requires a designer to examine all the input-output relationships of a training database in order to obtain the most appropriate structure for the classifier. In this approach, not only can the connection weights be adjusted, but the structure can also be self-adaptive during the learning process. With the classifier automatically generated by the GDFNN learning algorithm, the area under the receiver-operating characteristic (ROC) curve, Az, reached 0.9289, which corresponded to a true-positive fraction of 94.9% at a false-positive fraction of 73.7%. The corresponding accuracy was 84.4%, the positive predictive value was 78.7%, and the negative predictive value was 93.3%.
Improved classification accuracy by feature extraction using genetic algorithms
Julia Patriarche, Armando Manduca, Bradley J. Erickson M.D.
A feature extraction algorithm has been developed for the purpose of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom and with images of multiple sclerosis patients, classification with feature-extractor-derived features yielded lower error rates than classification using standard pulse sequences or features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error for pure tissues.
Extension of 2D segmentation methods into 3D by means of Coons-patch interpolation
Ivo Wolf, Amir Eid, Marcus Vetter, et al.
In medical imaging, segmentation is an important step for many visualization tasks and image-guided procedures. Except for very rare cases, automatic segmentation methods cannot guarantee a correct segmentation. Therefore, for clinical usage, physicians insist on full control over the segmentation result, i.e., on the ability to verify and interactively correct the segmentation (if necessary). Display and interaction in 2D slices (original or multi-planar reformatted) are more precise than in 3D visualizations and therefore indispensable for segmentation, verification, and correction. The usage of slices in more than one orientation (multi-planar reformatted slices) helps to avoid inconsistencies between 2D segmentation results in neighboring slices. For the verification and correction of three-dimensional segmentations, as well as for generating a new 3D segmentation, it is therefore desirable to have a method that constructs a new or improved 3D segmentation from 2D segmentation results. The proposed method enables segmentations performed on intersecting slices of arbitrary orientation to be quickly extended to a three-dimensional surface model by means of interpolation with specialized Coons patches. It can be used as a segmentation tool in its own right as well as for making more sophisticated segmentation methods (that need an initialization close to the boundary to be detected) feasible for clinical routine.
Hybrid segmentation framework for 3D medical image analysis
Ting Chen, Dimitri N. Metaxas
Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based methods and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation that combines the Gibbs Prior model and the deformable model. First, Gibbs Prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. A deformable mesh is then created based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can then be used to update the parameters of the Gibbs Prior models. These methods work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumor, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and Ultrasound images. High-quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth. The results show that the hybrid segmentation may have further clinical use.
Nonparametric MRI segmentation using mean shift and edge confidence maps
In this paper, a nonparametric statistical segmentation procedure based on the computation of the mean shift within the joint space-range feature representation of brain MR images is presented. The mean shift is a simple, nonparametric estimator, which can be implemented in a data-driven approach. The number of classes and other initialization parameters are not needed to compute the mean shift. The procedure estimates the local modes of the probability density function in order to define the cluster centers on the feature space. Local segmentation quality is improved by including a measure of edge confidence among adjacent segmented regions. This measure drives the iterative application of transitive closure operations on the region adjacency graph until convergence to a stable set of regions. In this manner, edge detection and region segmentation techniques are combined for the extraction of weak but significant edges from brain images. With the proposed methodology, the modes of the classes' distribution can be robustly estimated and homogeneous regions defined, but also fine borders are preserved. The main contribution of this work is the combined use of mean shift estimation, together with a robust, edge-oriented region fusion technique to delineate structures in brain MRI.
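A minimal sketch of the joint space-range clustering step, assuming a 2-D slice: each pixel becomes a (row, column, intensity) feature vector whose density modes are located by mean shift. The edge-confidence map and the transitive-closure region fusion described in the abstract are omitted, and scikit-learn's MeanShift is used as a stand-in estimator.

```python
# Sketch: mean shift clustering in the joint space-range feature domain.
import numpy as np
from sklearn.cluster import MeanShift

def mean_shift_segment(slice2d, spatial_weight=1.0, range_weight=3.0, bandwidth=10.0):
    rows, cols = np.indices(slice2d.shape)
    features = np.column_stack([
        spatial_weight * rows.ravel(),
        spatial_weight * cols.ravel(),
        range_weight * slice2d.ravel().astype(float),   # intensity (range) term
    ])
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)
    return labels.reshape(slice2d.shape)
```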
Fuzzy region growing of fMRI activation areas
Rodrigo A. Vivanco, Nicolino J. Pizzi
Conventional analysis of fMRI responses in neuroimaging experiments is typically voxel-wise, i.e., independent of spatial neighbourhood information. However, valid responses are likely to be spatially clustered and connected in 3D space. Identifying spatial relations is commonly considered a pre-processing step, for example isotropic Gaussian filtering for noise reduction. Current post-processing methods consider spatial information but not temporal information; once an activation map is obtained, voxels that do not have a sufficient number of spatial neighbors are simply removed. This paper describes how we have successfully incorporated fuzzy region growing into EvIdent®, an fMRI data analysis application. The method uses spatial-temporal information to enhance spatially connected, temporally related activation regions.
White matter lesion segmentation using robust parameter estimation algorithms
White matter lesions are common brain abnormalities. In this paper, we introduce an automatic algorithm for the segmentation of white matter lesions from brain MRI images. The intensities of each tissue are assumed to be Gaussian distributed, with parameters (mean vector and covariance matrix) estimated using a tissue distribution model. A measure is then defined to indicate to what degree a voxel belongs to a lesion. Experimental results demonstrate that our algorithm works well.
A simple method for automated lung segmentation in x-ray CT images
Bin Zheng, J. Ken Leader III, Glenn S. Maitz, et al.
We developed and tested an automated scheme to segment lung areas depicted in CT images. The scheme includes a series of six steps: 1) filtering and removing pixels outside the scanned anatomic structures; 2) segmenting the potential lung areas using an adaptive threshold based on the pixel value distribution in each CT slice; 3) labeling all selected pixels into segmented regions and deleting isolated regions in non-lung areas; 4) labeling and filling interior cavities (e.g., pleural nodules, airway walls, and major blood vessels) inside lung areas; 5) detecting and deleting the main airways (e.g., trachea and central bronchi) connected to the segmented lung areas; and 6) detecting and separating possible anterior or posterior junctions between the lungs. Five lung CT cases (7-10 mm in slice thickness) with a variety of disease patterns were used to train or set up the classification rules in the scheme. Fifty examinations of emphysema patients were then used to test the scheme. The results were compared with the results generated from a semi-automated method with manual interaction by an expert observer. The experimental results showed that the average difference in estimated lung volumes between the automated scheme and the manually corrected approach was 2.91% ± 0.88%. Visual examination of the segmentation results indicated that the difference between the two methods was larger in the areas near the apices and the diaphragm. This preliminary study demonstrated that a simple multi-stage scheme has the potential to eliminate the need for manual interaction during lung segmentation. Hence, it can ultimately be integrated into computer schemes for quantitative analysis and diagnosis of lung diseases.
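A minimal sketch of step 2 only (the slice-wise adaptive threshold), using an iterative isodata-style threshold on the pixel value distribution as an illustrative stand-in for the rule set described in the abstract; region labeling, cavity filling, and airway removal (steps 3-6) are not shown.

```python
# Sketch: per-slice adaptive thresholding of low-attenuation lung candidates.
import numpy as np

def adaptive_lung_threshold(ct_slice, tol=0.5):
    """Return a binary mask of candidate (low-attenuation) lung pixels."""
    values = ct_slice.astype(float).ravel()
    threshold = values.mean()
    while True:
        low, high = values[values <= threshold], values[values > threshold]
        if low.size == 0 or high.size == 0:
            break
        new_threshold = 0.5 * (low.mean() + high.mean())
        if abs(new_threshold - threshold) < tol:
            break
        threshold = new_threshold
    return ct_slice <= threshold      # lungs are darker than surrounding tissue
```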
The iterative image foresting transform and its application to user-steered 3D segmentation
Segmentation and 3D visualization at interactive speeds are highly desirable for routine use in clinical settings. We address this problem in the framework of the image foresting transform (IFT) - a graph-based approach to the design of image processing operators. In this paper we introduce the iterative image foresting transform (IFT+), which computes sequences of IFTs in a differential way, present the general IFT+ algorithm, and instantiate it as a watershed transform. The IFT+-watershed transform is evaluated in the context of interactive segmentation, where the user makes corrections by adding/removing scene regions with mouse clicks. The IFT+-watershed requires time proportional to the number of voxels in the modified regions, while the conventional algorithm computes one watershed transform over the entire scene for each iteration. The IFT+-watershed is 5.75 times faster than the watershed and considerably reduces the user's waiting time in segmentation with 3D visualization, from 17.7 to 3.16 seconds. These results were obtained on a 1.5 GHz Pentium-IV PC over 10 MR scenes of the head, requiring 12 to 28 corrections to segment the cerebellum, pons-medulla, ventricle, and the rest of the brain simultaneously. These results indicate that the IFT+ is a significant contribution toward interactive segmentation and 3D visualization.
Knowledge-guided information fusion for segmentation of multiple sclerosis lesions in MRI images
In this work, T1-, T2- and PD-weighted MR images of multiple sclerosis (MS) patients, providing information on the properties of tissues from different aspects, are treated as three independent information sources for the detection and segmentation of MS lesions. Based on information fusion theory, a knowledge-guided information fusion framework is proposed to accomplish 3-D segmentation of MS lesions. This framework consists of three parts: (1) information extraction, (2) information fusion, and (3) decision. Information provided by the different spectral images is extracted and modeled separately in each spectrum using fuzzy sets, aiming at managing the uncertainty and ambiguity in the images due to noise and the partial volume effect. In the second part, a possible fuzzy map of MS lesions in each spectral image is constructed from the extracted information under the guidance of experts' knowledge, and the final fuzzy map of MS lesions is then constructed through the fusion of the fuzzy maps obtained from the different spectra. Finally, the 3-D segmentation of MS lesions is derived from the final fuzzy map. Experimental results show that this method is fast and accurate.
Lung lobe segmentation by anatomy-guided 3D watershed transform
Jan-Martin Kuhnigk, Horst Hahn, Milo Hindennach, et al.
Since the lobes are mostly independent anatomic compartments of the lungs, they play a major role in diagnosis and therapy of lung diseases. The exact localization of the lobe-separating fissures in CT images often represents a non-trivial task even for experts. Therefore, a lung lobe segmentation method suitable to work robustly under clinical conditions must take advantage of additional anatomic information. Due to the absence of larger blood vessels in the vicinity of the fissures, a distance transform performed on a previously generated vessel mask allows a reliable estimation of the boundaries even in cases where the fissures themselves are invisible. To make use of image regions with visible fissures, we linearly combine the original data with the distance map. The segmentation itself is performed on the combined image using an interactive 3D watershed algorithm which allows an iterative refinement of the results. The proposed method was successfully applied to CT scans of 24 patients. Preliminary intra- and inter-observer studies conducted for one of the datasets showed a volumetric variability of well below 1%. The achieved structural decomposition of the lungs not only assists in subsequent image processing steps but also allows a more accurate prediction of lobe-specific functional parameters.
3D vessel axis extraction using 2D calibrated x-ray projections for coronary modeling
Stewart Young, Babak Movassaghi, Juergen Weese, et al.
A new approach for 3D vessel centreline extraction using multiple, ECG-gated, calibrated X-ray angiographic projections of the coronary arteries is described. The proposed method performs direct extraction of 3D vessel centrelines, without the requirement to either first compute prior 2D centreline estimates, or perform a complete volume reconstruction. A front propagation-based algorithm, initialised with one or more 3D seed points, is used to explore a volume of interest centred on the projection geometry's isocentre. The expansion of a 3D region is controlled by forward projecting boundary points into all projection images to compute vessel response measurements, which are combined into a 3D propagation speed so that the front expands rapidly when all projection images yield high vessel responses. Vessel centrelines are obtained by reconstructing the paths of fastest propagation. Based on these axes, a volume model of the coronaries can be constructed by forward projecting axis points into the 2D images where the borders are detected. The accuracy of the method was demonstrated via a comparison of automatically extracted centrelines with 3D centrelines derived from manually segmented projection data.
Automatic classification of sulcal regions of the human brain cortex using pattern recognition
Parcellation of the cortex has received a great deal of attention in magnetic resonance (MR) image analysis, but its usefulness has been limited by time-consuming algorithms that require manual labeling. An automatic labeling scheme is necessary to accurately and consistently parcellate a large number of brains. The large variation of cortical folding patterns makes automatic labeling a challenging problem, which cannot be solved by deformable atlas registration alone. In this work, an automated classification scheme that consists of a mix of both atlas driven and data driven methods is proposed to label the sulcal regions, which are defined as the gray matter regions of the cortical surface surrounding each sulcus. The premise for this algorithm is that sulcal regions can be classified according to the pattern of anatomical features (e.g. supramarginal gyrus, cuneus, etc.) associated with each region. Using a nearest-neighbor approach, a sulcal region is classified as being in the same class as the sulcus from a set of training data which has the nearest pattern of anatomical features. Using just one subject as training data, the algorithm correctly labeled 83% of the regions that make up the main sulci of the cortex.
Segmentation of medical images based on homogram thresholding
A homogram, i.e., a histogram based on homogeneity, is employed in our algorithm. Histogram thresholding is a classical and efficient method for the segmentation of various images, especially CT images. However, MR images are difficult to segment with this method, as the gray levels of their pixels are too similar to distinguish. The regular histogram of an MR image is usually flat, so the peaks and valleys of the histogram are hard to find and locate precisely. We propose a new definition of homogeneity computed over a series of sub-images, so that both local and global information are taken into account. The image is then updated with a homogeneity-weighted combination of the original and average gray levels: the more homogeneous a pixel is, the closer its updated gray level is to the average. The new histogram, calculated from the updated image, is much steeper than the regular one, and peaks that are indiscernible in the regular histogram can be recognized easily. A simple but agile peak-finding approach can therefore determine the objects to segment and the corresponding thresholds exactly. Segmentation via thresholding thus becomes feasible even in MR images. Moreover, our algorithm remains fast while the accuracy of segmentation improves.
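A minimal sketch of the homogram idea under stated assumptions: local homogeneity is derived from the gray-level spread in a small (here 3 x 3) neighborhood, each pixel is replaced by a homogeneity-weighted blend of its original value and the local average, and the histogram of the updated image is returned. The sub-image scheme and the peak-finding step of the paper are not reproduced.

```python
# Sketch: homogeneity-weighted image update followed by histogram computation.
import numpy as np

def box_filter3(img):
    """Mean and standard deviation over a 3 x 3 neighborhood (edge-padded)."""
    padded = np.pad(img.astype(float), 1, mode='edge')
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    return stack.mean(axis=0), stack.std(axis=0)

def homogram(img, bins=256):
    local_mean, local_std = box_filter3(img)
    homogeneity = 1.0 - local_std / (local_std.max() + 1e-12)   # 1 = homogeneous
    updated = homogeneity * local_mean + (1.0 - homogeneity) * img
    hist, edges = np.histogram(updated, bins=bins)
    return hist, edges, updated
```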
Pulmonary nodule segmentation in thoracic 3D CT images integrating boundary and region information
Yoshiki Kawata, Noboru Niki, Hironobu Ohamatsu, et al.
Accurately segmenting and quantifying the structure of pulmonary nodules is a key issue in three-dimensional (3-D) computer-aided diagnosis (CAD) schemes. This paper presents a segmentation approach for pulmonary nodules in thoracic 3-D images. The approach consists of two processes: a pre-process that removes attached vessels and a surface deformation process. The pre-process is performed by 3-D gray-scale morphological operations. The surface deformation model used here integrates boundary and region information to deal with an inappropriate position or size of the initial surface. This approach is derived through a 3-D extension of the geodesic active region model developed by Paragios and Deriche. First, in order to measure differences between the nodule and other regions, a statistical analysis of the observed intensity is performed. Based on this analysis, the boundary and region information are represented by boundary and region likelihoods, respectively. Second, an objective function is defined by integrating boundary- and region-based segmentation modules. This integration aims at seeking surfaces that provide high boundary likelihood and high posterior segmentation probability. Finally, the deformable surface model is obtained by minimizing the objective function and is implemented with a level set approach. We demonstrate an advantage of the proposed segmentation approach in comparison with the conventional deformable surface model using a practical 3-D pulmonary image.
Automatic segmentation of brain infarction in diffusion-weighted MR images
It is important to detect the site and size of the infarction volume in stroke patients. An automatic method for segmenting brain infarction lesions from diffusion-weighted magnetic resonance (MR) images of patients has been developed. The method uses an integrated approach which employs image processing techniques based on anisotropic filters and atlas-based registration techniques. It is a multi-stage process, involving first image preprocessing, then global and local registration between the anatomical brain atlas and the patient, and finally segmentation of the infarction volume based on region splitting and merging and multi-scale adaptive statistical classification. The proposed multi-scale adaptive statistical classification model takes into account the spatial, intensity gradient, and contextual information of the anatomical brain atlas and the patient. The method was applied to diffusion-weighted imaging (DWI) scans of twenty patients with clinically determined infarction. It achieved satisfactory segmentation even in the presence of radio-frequency (RF) inhomogeneities. The results were compared with lesion delineations by human experts, showing accurate and reproducible identification of the infarction lesions.
Segmentation of burn images based on color and texture information
Carmen Serrano, Begona Acha, Jose Ignacio Acha
In this paper, a color image segmentation algorithm for application to burn wound images is proposed. It takes into account both color and texture information to perform the segmentation. We use the perceptually uniform CIE L*u*v* color space. Texture information is considered by extracting a small sample (trimming) from the part to be segmented. This mask is then slid across the image and a transformed image is calculated, where each pixel is the sum of Euclidean distances in the L*u*v* color coordinates between all the color values in the mask and the pixels under it. Afterwards, the transformed image is thresholded to obtain the segmented image. The threshold is automatically determined by a modification of Otsu's method. We have tested the algorithm on 30 images, obtaining very good results in most of them.
A probabilistic framework for the spatiotemporal segmentation of multiple sclerosis lesions in MR images of the brain
Hayit Greenspan, Arnaldo Mayer, Allon Shahar
In this paper we describe the application of a novel statistical image-sequence (video) modeling scheme to sequences of multiple sclerosis (MS) images taken over time. A unique key feature of the proposed framework is the analysis of the image-sequence input as a single entity as opposed to a sequence of separate frames. The extracted space-time regions allow for the detection and identification of disease events and processes, such as the appearance and progression of lesions. According to the proposed methodology, coherent space-time regions in the feature space, and corresponding coherent segments in the video content are extracted by unsupervised clustering via Gaussian mixture modeling (GMM). The parameters of the GMM are determined via the maximum likelihood principle and the Expectation-Maximization (EM) algorithm. The clustering of the image sequence yields a collection of regions (blobs) in a four-dimensional feature space (including intensity, position (x,y), and time). Regions corresponding to MS lesions are automatically identified based on criteria regarding the mean intensity and the size variability over time. The proposed methodology was applied to a registered sequence of 24 T2-weighted MR images acquired from an MS patient over a period of approximately a year. Examples of preliminary qualitative results are shown.
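A minimal sketch of the space-time feature clustering, assuming the registered sequence is a times-by-rows-by-columns array: every voxel of every frame becomes an (intensity, x, y, t) feature vector clustered by a Gaussian mixture fitted with EM; the lesion-identification criteria applied to the resulting blobs are not reproduced.

```python
# Sketch: unsupervised GMM clustering of (intensity, x, y, t) feature vectors.
import numpy as np
from sklearn.mixture import GaussianMixture

def space_time_blobs(sequence, n_blobs=20, random_state=0):
    t, x, y = np.indices(sequence.shape)
    features = np.column_stack([
        sequence.ravel().astype(float),   # intensity
        x.ravel(), y.ravel(),             # position
        t.ravel(),                        # time
    ])
    gmm = GaussianMixture(n_components=n_blobs, covariance_type='full',
                          random_state=random_state).fit(features)   # EM fitting
    labels = gmm.predict(features).reshape(sequence.shape)
    return labels, gmm
```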
Model-based segmentation of abdominal aortic aneurysms in CTA images
Segmentation of thrombus in abdominal aortic aneurysms is complicated by regions of low boundary contrast and by the presence of many neighboring structures in close proximity to the aneurysm wall. We present an automated method that is similar to the well-known Active Shape Models (ASM), combining a three-dimensional shape model with a one-dimensional boundary appearance model. Our contribution is twofold: we developed a non-parametric appearance modeling scheme that effectively deals with a highly varying background, and we propose a way of generalizing models of curvilinear structures from small training sets. In contrast with the conventional ASM approach, the new appearance model trains on both true and false examples of boundary profiles. The probability that a given image profile belongs to the boundary is obtained using k nearest neighbor (kNN) probability density estimation. The performance of this scheme is compared to that of original ASMs, which minimize the Mahalanobis distance to the average true profile in the training set. The generalizability of the shape model is improved by modeling the object's axis deformation independently of its cross-sectional deformation. A leave-one-out experiment was performed on 23 datasets. Segmentation using the kNN appearance model significantly outperformed the original ASM scheme; average volume errors were 5.9% and 46%, respectively.
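A minimal sketch of the kNN boundary-probability idea using scikit-learn; the profile length, the number of neighbors, and the construction of the true/false training sets are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_boundary_model(true_profiles, false_profiles, k=15):
    """Train a kNN model on intensity profiles sampled on (true) and off (false) the boundary."""
    X = np.vstack([true_profiles, false_profiles])
    y = np.concatenate([np.ones(len(true_profiles)), np.zeros(len(false_profiles))])
    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(X, y)
    return model

def boundary_probability(model, profile):
    """Probability that a given image profile lies on the true boundary."""
    return model.predict_proba(profile.reshape(1, -1))[0, 1]
```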
Membership-based multiprotocol MR brain image segmentation
A simple, non-iterative, membership-based method for multiprotocol brain magnetic resonance image segmentation has been developed. Intensity inhomogeneity correction and MR intensity standardization techniques are applied first so that the MR image intensities have a tissue-specific meaning. The mean intensity vector and covariance matrix of each brain tissue are then estimated and fixed. Vectorial scale-based fuzzy connectedness and certain morphological operations are utilized to generate the brain intracranial mask. The fuzzy membership value of each voxel for each brain tissue is then estimated within the intracranial mask via a multivariate Gaussian model. Finally, a maximum likelihood criterion that takes spatial constraints into account is utilized to classify all voxels in the intracranial mask into gray matter, white matter, and cerebrospinal fluid. This method has been tested on 10 clinical MR data sets. These tests, and a comparison with C-means and fuzzy C-means clustering, indicated the effectiveness of the method.
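A minimal sketch of the multivariate Gaussian membership step described above, using SciPy; the feature layout, the tissue statistics dictionary, and the omission of the spatial constraint are assumptions made to keep the example short.

```python
import numpy as np
from scipy.stats import multivariate_normal

def tissue_memberships(feature_vectors, tissue_stats):
    """Per-tissue Gaussian membership values for voxels inside an intracranial mask.

    feature_vectors: (n_voxels, n_protocols) array of standardized intensities.
    tissue_stats: dict mapping tissue name -> (mean vector, covariance matrix).
    """
    memberships = {
        name: multivariate_normal(mean=mu, cov=cov).pdf(feature_vectors)
        for name, (mu, cov) in tissue_stats.items()
    }
    # Maximum likelihood labeling (spatial constraints are omitted in this sketch).
    names = list(memberships)
    stacked = np.column_stack([memberships[n] for n in names])
    labels = np.array(names)[np.argmax(stacked, axis=1)]
    return memberships, labels
```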
Tensor scale-based fuzzy connectedness image segmentation
Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging-togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - into the affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of the previous approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer-generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.
Blind source separation in retinal videos
Eduardo S. Barriga, Paul W. Truitt, Marios S. Pattichis, et al.
An optical imaging device of retina function (OID-RF) has been developed to measure changes in blood oxygen saturation due to neural activity resulting from visual stimulation of the photoreceptors in the human retina. The video data that are collected represent a mixture of the functional signal in response to the retinal activation and other signals from undetermined physiological activity. Measured changes in reflectance in response to the visual stimulus are on the order of 0.1% to 1.0% of the total reflected intensity level, which makes the functional signal difficult to detect by standard methods since it is masked by the other signals that are present. In this paper, we apply principal component analysis (PCA), blind source separation (BSS) using Extended Spatial Decorrelation (ESD), and independent component analysis (ICA) using the FastICA algorithm to extract the functional signal from the retinal videos. The results revealed that the functional signal in a stimulated retina can be detected through the application of some of these techniques.
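A minimal sketch of applying FastICA to a retinal video using scikit-learn; treating each frame as one mixed observation (spatial ICA), the number of components, and the per-pixel mean removal are illustrative assumptions rather than the processing chain used with the OID-RF data.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_video_sources(video, n_components=5):
    """Separate spatially independent component maps from a video (T x H x W)."""
    t, h, w = video.shape
    frames = video.reshape(t, h * w).astype(float)   # each frame is one mixed observation
    frames -= frames.mean(axis=0)                     # remove the temporal mean of each pixel
    ica = FastICA(n_components=n_components, max_iter=1000)
    sources = ica.fit_transform(frames.T)             # (H*W) x n_components spatial maps
    mixing = ica.mixing_                               # T x n_components temporal courses
    return sources.reshape(h, w, n_components), mixing
```

A component whose temporal course follows the stimulation protocol would be a candidate for the functional signal.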
Semi-automated segmentation of cortical subvolumes via hierarchical mixture modeling
J. Tilak Ratnanather, Carey E. Priebe, Michael I. Miller
We propose a method which allows for the flexibility of a Gaussian mixture model - with model complexity selected adaptively from the data - for each tissue class. Our procedure involves modelling each class as a semiparametric mixture of Gaussians. The major difficulty associated with employing such semiparametric methods is overcome by solving the model selection problem dynamically. The crucial step of determining class-conditional mixture complexities for (unlabeled) test data in the unsupervised case is accomplished by matching models to a predefined database of hand-labelled experimental tissue samples. We model the class-conditional probability density functions via the "alternating kernel and mixture" (AKM) method, which involves (1) semi-parametric estimation of subject-specific class-conditional marginal densities for a set of training volumes, (2) nearest neighbor matching of the test data to the training models, providing semi-automated class-conditional mixture complexities, (3) parameter fitting of the selected training model to the test data, and (4) plug-in Bayes classification of unlabeled voxels. Compared with previous approaches using partial volume mixtures for ten cingulate gyri, the hierarchical mixture model methodology provides superior automatic segmentation results with a performance improvement that is statistically significant (p=0.03 for a paired one-sided t-test).
Computerized detection of pulmonary embolism in 3D computed tomographic (CT) images: vessel tracking and segmentation techniques
We are developing a computer-aided diagnosis (CAD) system for detection of pulmonary embolism (PE) in computed tomographic (CT) images. An adaptive 3D pixel clustering method was developed based on Bayesian estimation and Expectation-Maximization (EM) analysis to segment vessels from their surrounding tissues. After a “connected component analysis”, the vessel tree was reconstructed by tracking the vessel and its branches in 3D space based on their geometric characteristics such as the tracked vessel direction and skeleton. The node locations of splitting and merging of vessel branches were identified based on “bifurcation analysis”. 2D and 3D features of the tracked vessels and the surrounding tissues were used in a multi-dimensional feature space to distinguish PE from normal vessels. In this preliminary study, about 95% of the vessels could be segmented and tracked even when the vessels were partially obstructed by PE ranging from 5% to 90%. Our method could detect 58% of all the PEs with an average of 10.5 false positives per case (ranging from 8 to 15). 100% of the PEs could be detected if the average radius of the vessels was larger than 2 mm and the vessels were partially obstructed by PE ranging from 20% to 80%.
Bone segmentation using multiple communicating snakes
Lucia Ballerini, Leonardo Bocchi
Skeletal age assessment is a frequently performed procedure which requires high expertise and a considerable amount of time. Several methods are being developed to assist radiologists in this task by automating the various steps of the process. In this work we describe a method to perform the segmentation step by means of a modified active contour approach. A set of separate active contours models each bone in a portion of the radiogram. Due to the complexity of the contour, and to the presence of multiple adjacent contours, we add to the commonly used energy terms a first-order derivative energy term, which allows the direction of the contour to be taken into account. Moreover, anatomical relationships among bones are modeled as additional internal elastic forces which couple the contours together. The contour energy is optimized using a genetic algorithm. Chromosomes are used to encode the positions of snake points, using a polar representation. The genetic optimization overcomes the difficulties related to local minima and to the initialization criterion, and conveniently allows the addition of new energy terms. Experimental results show that the method achieves an accurate segmentation of the bone complexes in the region of interest.
Automatic aortic vessel tree extraction and thrombus detection in multislice CT
Krishna Subramanyan, Melinda Steinmiller, Diana Sifri, et al.
The abdominal aorta is the most common site for an aneurysm, which may lead to hemorrhage and death, to develop. The aim of this study was to develop a semi-automated method to delineate the blood flow and thrombus region and subsequently detect the centerline of these vessels to make measurements necessary for stent design from computed tomograms. We developed a robust method of tracking the aortic vessel tree from a user-selected seed point using a series of image processing steps: the fast marching method to delineate the blood flow, morphological and distance transform methods to extract centerlines, and finally reinitialization of the fast marching in a CT volume from which the blood-filled region has been subtracted to obtain the thrombus borders. Fifteen patients were scanned with contrast on an Mx8000 CT scanner (Philips Medical Systems) with 1.3 mm slice thickness and 1.0 mm slice spacing, and 512x512x380 volume data sets were reconstructed. The automated image processing took approximately 30 to 90 seconds to compute the centerline and borders of the aortic vessel tree. We compared our results with manual and 3D volume rendering methods and found the automatic method to be superior in accuracy of spatial localization (0.94-0.97 ANOVA K) and accuracy of diameter determination (0.88-0.98).
Image segmentation using information theoretic criteria
Image segmentations based on maximum likelihood (ML) or maximum a posteriori (MAP) analyses of object textures, edges, and shape often assume stationary Gaussian distributions for these features. For real images, neither Gaussianity nor stationarity may be realistic, so model-free inference methods would have advantages over those that are model-dependent. Relative entropy provides model-free inference, and a generalization--the Jensen-Renyi divergence (JRD)--computes optimal n-way decisions. We apply these results to patient anatomy contouring in X-ray computed tomography (CT) for radiotherapy treatment planning.
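For reference, one common form of the Rényi entropy and the Jensen-Rényi divergence of n distributions with weights ω_i is sketched below; the exact weighting and parameter choices used in the paper may differ.

```latex
% Rényi entropy of order \alpha (\alpha > 0, \alpha \neq 1)
H_\alpha(p) = \frac{1}{1-\alpha}\,\log \sum_{k} p(k)^{\alpha}

% Jensen-Rényi divergence of n distributions with weights \omega_i, \sum_i \omega_i = 1
JR_\alpha^{\omega}(p_1,\dots,p_n) = H_\alpha\!\Big(\sum_{i=1}^{n} \omega_i\, p_i\Big)
  - \sum_{i=1}^{n} \omega_i\, H_\alpha(p_i)
```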
Model-based 3D segmentation of the bones of the ankle and subtalar joints in MR images
Our ongoing project on the kinematic analysis of the joints of the foot and ankle via magnetic resonance (MR) imaging requires segmentation of bones in images acquired at different positions of the joint. This segmentation requires an extensive amount of operator time, especially for the current study involving 300 scenes and 4 bones to be segmented in each scene. A 3D model-based segmentation approach is developed wherein the model is generated from the segmentation of a specific bone from one scene and is used for segmenting the same bone in all other scenes. This method works in two sequential steps. In the first step, the patient- and bone-specific model is generated by segmenting the target bone from one scene using the live wire method. In the second step, the segmentation of the same bone for the same patient in a scene corresponding to another ankle position is obtained by finding an optimum rigid body transformation to minimize its fitting energy. The fitting energy utilized captures both boundary- and region-based information in a unified manner. This method has produced satisfactory results for the 30 pairs of images used for evaluation. The model-based segmentation method will significantly reduce the operator time required by our ongoing study.
Magnetic resonance segmentation with the bubble wave algorithm
Harvey E. Cline, Siegwalt Ludke
A new bubble wave algorithm provides automatic segmentation of three-dimensional magnetic resonance images of both the peripheral vasculature and the brain. Simple connectivity algorithms are not reliable in these medical applications because there are unwanted connections through background noise. The bubble wave algorithm restricts connectivity using curvature by testing spherical regions on a propagating active contour to eliminate noise bridges. After the user places seeds in both the selected regions and in the regions that are not desired, the method provides the critical threshold for segmentation using binary search. Today, peripheral vascular disease is diagnosed using magnetic resonance imaging with a timed contrast bolus. A new blood pool contrast agent MS-325 (Epix Medical) binds to albumin in the blood and provides high-resolution three-dimensional images of both arteries and veins. The bubble wave algorithm provides a means to automatically suppress the veins that obscure the arteries in magnetic resonance angiography. Monitoring brain atrophy is needed for trials of drugs that retard the progression of dementia. The brain volume is measured by placing seeds in both the brain and scalp to find the critical threshold that prevents connections between the brain volume and the scalp. Examples from both three-dimensional magnetic resonance brain and contrast enhanced vascular images were segmented with minimal user intervention.
Vascular MR segmentation: wall and plaque
Fuxing Yang, Gerhard Holzapfel, Christian Schulze-Bauer, et al.
Cardiovascular events frequently result from local rupture of vulnerable atherosclerotic plaque. Non-invasive assessment of plaque vulnerability is needed to allow institution of preventive measures before a heart attack or stroke occurs. A computerized method for segmentation of arterial wall layers and plaque from high-resolution volumetric MR images is reported. The method uses dynamic programming to detect optimal borders in each MRI frame. The accuracy of the results was tested on 62 T1-weighted MR images from 6 vessel specimens in comparison to borders manually determined by an expert observer. The mean signed border positioning errors for the lumen, internal elastic lamina, and external elastic lamina borders were -0.12±0.14 mm, 0.04±0.12 mm, and -0.15±0.13 mm, respectively. The presented wall layer segmentation approach is one of the first steps towards non-invasive assessment of plaque vulnerability in atherosclerotic subjects.
Feature extraction and segmentation in medical images by statistical optimization and point operation approaches
Shuyu Yang, Philip King, Enrique Corona, et al.
Feature extraction is a critical preprocessing step which influences the outcome of the entire process of developing significant metrics for medical image evaluation. The purpose of this paper is first to compare an optimized statistical feature extraction methodology with a well-designed combination of point operations for feature extraction at the preprocessing stage of retinal images, for developing useful diagnostic metrics for retinal diseases such as glaucoma and diabetic retinopathy. Segmentation of the extracted features allows us to investigate the effect of occlusion induced by these features on generating stereo disparity mapping and 3-D visualization of the optic cup/disc. Segmentation of blood vessels in the retina also has significant application in generating precise vessel diameter metrics for monitoring the progression of vascular diseases such as hypertension and diabetic retinopathy.
The watershed and skeletonization of angiography
Peter J. Yim, Desok Kim, Peter L. Choyke
The Ordered Region Growing (ORG) algorithm has been proposed as a method for delineation of vessel paths from magnetic resonance angiography (MRA). In this paper we demonstrate that the ORG algorithm is a fundamental method for ridge detection that is analogous to watershed segmentation. First, we characterize the segmentation boundaries produced by the watershed as optimal paths. Watershed lines between two points satisfy the criterion that the minimum intensity of the line is maximal over all possible connected paths between the two points. This is referred to as the greatest-minima criterion. This criterion is guaranteed to provide a unique solution when points in the image are unique-valued. We observe that detection of watershed boundaries from the 2D gradient magnitude image is a problem similar to detection of line-like objects in 3D images, including small vessels in 3D angiography. The ORG algorithm generates an acyclic graph that represents unique paths between any two given points in an image. We prove that paths within the acyclic graph generated by the ORG algorithm conform to the greatest-minima criterion and are thus fundamentally analogous to watershed segmentation boundaries.
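A minimal sketch of a greatest-minima ("max-min" or widest-path) search on a 2D image, written as a Dijkstra-style propagation in Python; the 4-connectivity and the priority-queue formulation are assumptions for illustration and do not reproduce the ORG algorithm itself.

```python
import heapq
import numpy as np

def max_min_path_value(image, start, goal):
    """Best achievable minimum intensity over all 4-connected paths from start to goal."""
    h, w = image.shape
    best = np.full((h, w), -np.inf)
    best[start] = image[start]
    heap = [(-image[start], start)]          # max-heap on the path bottleneck value
    while heap:
        neg_bottleneck, (r, c) = heapq.heappop(heap)
        bottleneck = -neg_bottleneck
        if (r, c) == goal:
            return bottleneck
        if bottleneck < best[r, c]:          # stale entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nb = min(bottleneck, image[nr, nc])
                if nb > best[nr, nc]:
                    best[nr, nc] = nb
                    heapq.heappush(heap, (-nb, (nr, nc)))
    return best[goal]
```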
Analysis of the trade-offs between manual and computer-based stereology/classification
Estimation of the volume or area of tissue types in an image requires both mensuration and classification. The former is achieved through stereology -- a set of techniques that estimate such parameters as area, volume, surface area, length, and number. Classification is achieved by extracting features that capture the discriminating information about tissue type. Both stereology and classification can be performed either manually or by computer. Manual techniques for the combination are based on coarse point counting (low resolution) but assume perfect pixel classification. Computer-based methods, on the other hand, rely on very fine point counting but in general suffer from imperfect pixel classification. This paper examines the interaction between manual and image processing-based approaches; in particular, we present a measure that combines the classification and measurement errors. Estimation of the variance is used to define the conditions under which each method is and is not advantageous despite its underlying error. This allows the user to choose a method that optimizes overall performance, given the human and machine capabilities available. Illustrations are given of cases in which each method can be preferable, as measured by the variance of the performance estimate inferred from the measurement.
Theoretical assessment of image analysis: statistical vs structural approaches
Statistical and structural methods are two major approaches commonly used in image analysis and have demonstrated considerable success. The former is based on statistical properties and stochastic models of the image and the latter utilizes geometric and topological models. In this study, Markov random field (MRF) theory/model based image segmentation and Fuzzy Connectedness (FC) theory/fuzzy connected object delineation are chosen as the representatives of these two approaches, respectively. The comparative study is focused on their theoretical foundations and main operative procedures. The MRF is defined on a lattice and the associated neighborhood system and is based on the Markov property. The FC method is defined on a fuzzy digital space and is based on fuzzy relations. Locally, MRF is characterized by potentials of cliques, and FC is described by fuzzy adjacency and affinity relations. Globally, MRF is characterized by the Gibbs distribution, and FC is described by fuzzy connectedness. The task of MRF model based image segmentation is to seek a realization of the embedded MRF through a two-level operation: partitioning and labeling. The task of FC object delineation is to extract a fuzzy object from a given scene through a two-step operation: recognition and delineation. The theoretical foundations that underlie the statistical and structural approaches, and the principles of the main operative procedures in image segmentation by these two approaches, demonstrate more similarities than differences between them. The two approaches can also complement each other, particularly in seed selection, scale formation, affinity and object membership function design for FC and neighbor set selection and clique potential design for MRF.
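For reference, the two global characterizations mentioned above can be written compactly as below; the exact neighborhood systems and affinity definitions used in the comparison are not reproduced here.

```latex
% Gibbs distribution of an MRF realization x, with clique potentials V_c over cliques c
% and partition function Z
P(x) = \frac{1}{Z}\,\exp\!\Big(-\sum_{c \in \mathcal{C}} V_c(x)\Big)

% Fuzzy connectedness between voxels u and v: the strongest path, where a path's
% strength is its weakest pairwise affinity \kappa
\mu_K(u,v) = \max_{\pi = \langle u = w_0,\dots,w_m = v\rangle}\;
             \min_{0 \le i < m} \kappa(w_i, w_{i+1})
```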
Image analysis: recovering the true scene from noisy images
Inspired by the principle of image restoration, a strategy for restoring the true scene from a given image has been developed. Once the embedded model of a type of image is derived, the true scene can be recovered by seeking an image which best fits this model. Gaussian mixture behavior and asymptotic independence of pixel intensities in x-ray CT and MR images have been proved, which validates the use of independent Finite Normal Mixture (FNM) and locally dependent Markov random field (MRF) models. The FNM is further shown to be a degenerate Hidden MRF (HMRF). A two-level image analysis method is developed for recovering the true scene from a given x-ray CT or MR image. At the low level, it is a pixel-based intensity processing method and utilizes the FNM model and an Expectation-Maximization algorithm, known as the FNM-EM operation. At the high level, it is a region-based context processing method and utilizes an MRF and an Iterated Conditional Modes algorithm, known as the MRF-ICM operation. The results obtained by applying this method to simulated and real phantom images demonstrated considerable success: the restored true scene is the ground truth of the objects in the given image. The results also demonstrated that best fitting the given image and best fitting the embedded model may lead to two different scenes, and only in the case of high SNR are these two scenes close.
A detection method of ground glass opacities in chest x-ray CT images using automatic clustering techniques
Mitsuhiro Tanino, Hotaka Takizawa, Shinji Yamamoto, et al.
In this paper, we describe an algorithm for automatic detection of Ground Glass Opacities (GGO) in X-ray CT images. In this algorithm, suspicious shadows are first extracted by our Variable N-Quoit (VNQ) filter, which is a type of mathematical morphology filter. This filter can detect abnormal shadows with high sensitivity. Next, the suspicious shadows are classified into a certain number of classes using feature values calculated from the suspicious shadows. In our traditional clustering method, a medical doctor has to manually classify the suspicious shadows into 5 clusters. This manual classification is very hard for the doctor. Thus, in this paper, we propose a new automatic clustering method based on principal component (PC) analysis. In this method, the detected shadows are first classified into two sub-clusters according to their sizes. Each sub-cluster is then further classified into two sub-sub-clusters according to PC scores (PCS) calculated from the feature values of the shadows in the sub-cluster. In this PCS-based classification, we use a threshold which maximizes the distance between the two sub-sub-clusters. The PCS-based classification is iterated recursively. Using discriminant functions based on the Mahalanobis distance, the suspicious shadows are determined to be normal or abnormal. This method was evaluated on many samples of chest CT images (including GGO shadows) and proved to be very effective.
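A minimal sketch of one possible recursive PC-score splitting step, using scikit-learn; the use of only the first principal component, the gap-based choice of cut point, and the absence of a stopping rule are illustrative assumptions rather than the procedure used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def split_by_pc_score(features):
    """Split a cluster of feature vectors into two sub-clusters along the first PC score,
    choosing the cut that maximizes the separation between the two resulting means."""
    scores = PCA(n_components=1).fit_transform(features).ravel()
    order = np.argsort(scores)
    best_gap, best_cut = -np.inf, 1
    for k in range(1, len(scores)):
        left, right = scores[order[:k]], scores[order[k:]]
        gap = right.mean() - left.mean()
        if gap > best_gap:
            best_gap, best_cut = gap, k
    threshold = scores[order[best_cut - 1]]
    return scores <= threshold, scores > threshold
```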
Analysis of elastography methods using mathematical and ex vivo data
Brett C. Byram, Michael R. Wahl, David Richard Holmes III, et al.
Intravascular ultrasound (IVUS) currently has a limited ability to characterize endovascular anatomic properties. IVUS elastography enhances the ability to characterize the biomechanical properties of arterial walls. A mathematical phantom generator was developed based on the characteristics of 30 MHz, 64-element IVUS catheter images from excised canine femoral arteries. The difference between high- and low-pressure intra-arterial images was modeled using phase shifts. The increase in phase shift occurred randomly, generally at every three pixels in our images. Using the mathematical phantoms, different methods for calculating elastograms were quantitatively analyzed. Specifically, the effect of standard cross correlation versus cross correlation of the integral of the inflection characteristics for a given set of data, and the effect of an algorithm utilizing a non-constant kernel, were assessed. The methods found to be most accurate on the mathematical phantom data were then applied to ex vivo canine data from a scarred and a healthy artery. The algorithm detected significant differences between these two sets of arterial data. It will be necessary to obtain and analyze several more sets of canine arterial data in order to determine the accuracy and reproducibility of the algorithm.
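A minimal sketch of estimating local shifts between low- and high-pressure RF lines by windowed normalized cross correlation in Python; the window length, step, and search range are illustrative assumptions and this plain correlation search is not one of the specific estimators compared in the paper.

```python
import numpy as np

def windowed_shift_estimate(pre, post, window=64, step=32, max_lag=8):
    """Estimate integer-sample shifts between two 1D RF lines via normalized cross correlation."""
    shifts = []
    for start in range(0, len(pre) - window, step):
        ref = pre[start:start + window]
        best_corr, best_lag = -np.inf, 0
        for lag in range(-max_lag, max_lag + 1):
            lo = start + lag
            if lo < 0 or lo + window > len(post):
                continue
            c = np.corrcoef(ref, post[lo:lo + window])[0, 1]
            if c > best_corr:
                best_corr, best_lag = c, lag
        shifts.append(best_lag)
    return np.array(shifts)
```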
Analyzing and selecting texture measures for quantifying trabecular bone structures using surrogates
The method of surrogates is applied to three-dimensional MR images to assess and select texture measures for the quantitative characterization of trabecular bone structures for patients with and without osteoporotic bone fractures. Using methods borrowed from the analysis of nonlinear time series, it is possible to generate, for a given 3D image, surrogate images which have the same linear correlations and the same intensity distribution as the original one, whereas all higher-order correlations are wiped out. For bone data (distal radius) from osteoporotic and healthy patients, surrogates are generated using iterative techniques. In order to test for the presence of nonlinear correlations, we calculate the spectrum of weighted scaling indices as a nonlinear texture measure sensitive to morphological image features. It is shown that a significant discrimination between original and surrogate data is possible by comparing the respective spectra. This proves that nonlinear correlations are relevant for bone images and must be taken into account in texture analysis. The use of nonlinear measures becomes mandatory for an effective description of the image content. Generally, it turns out that the method of surrogate data is a vital tool for assessing the quality of texture measures for any kind of medical image.
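A minimal sketch of an iterative amplitude-adjusted Fourier-transform (IAAFT-style) surrogate, using NumPy; the number of iterations and the use of a 2D FFT on a single slice are assumptions for illustration (the paper works with 3D data), and this is one standard surrogate scheme rather than necessarily the exact one used by the authors.

```python
import numpy as np

def iaaft_surrogate(image, n_iter=50, seed=0):
    """Surrogate with (approximately) the original amplitude spectrum and intensity
    histogram, but randomized phases (higher-order correlations destroyed)."""
    rng = np.random.default_rng(seed)
    sorted_vals = np.sort(image.ravel())
    target_amp = np.abs(np.fft.fft2(image))
    surrogate = rng.permutation(image.ravel()).reshape(image.shape)
    for _ in range(n_iter):
        # Impose the original amplitude spectrum while keeping the current phases...
        spec = np.fft.fft2(surrogate)
        spec = target_amp * np.exp(1j * np.angle(spec))
        surrogate = np.real(np.fft.ifft2(spec))
        # ...then restore the original intensity distribution by rank ordering.
        ranks = np.argsort(np.argsort(surrogate.ravel()))
        surrogate = sorted_vals[ranks].reshape(image.shape)
    return surrogate
```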
Morphological and texture features for cancer tissues microscopic images
Accurate and reliable decision making in cancer prognosis can help in the planning of appropriate surgery and therapy and, in general, optimize patient management through the different stages of the disease. In this paper, we present a novel fractal geometry algorithm as a potential method for classifying colorectal histopathological images. 102 microscopic samples of colon tissue were examined in order to identify abnormalities using a morphological feature approach based on segmenting the image into different classes derived from fractal dimension. The obtained mean fractal dimension (FD) for normal tissue was 1.797 ± 0.0381 (n = 44) compared with 1.866 ± 0.0262 for malignant samples (n = 58). In brief, this study demonstrates the value of a fractal-dimension-based morphological approach in the analysis of microscopic colon cancer images. Although the obtained results show a strongly significant separation between normal and malignant colorectal images, further analyses are essential before this methodology can be incorporated into routine clinical practice to support the pathologist's decision.
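A minimal sketch of a box-counting fractal dimension estimate on a binary (segmented) image, using NumPy; box counting is one common FD estimator and is not necessarily the estimator used in the paper, and the chosen box sizes are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(binary_image):
    """Estimate the fractal dimension of a binary 2D image by box counting."""
    img = np.asarray(binary_image, dtype=bool)
    box_sizes = [s for s in (2, 4, 8, 16, 32, 64) if s < max(img.shape)]
    counts = []
    for s in box_sizes:
        n = 0
        for i in range(0, img.shape[0], s):
            for j in range(0, img.shape[1], s):
                if img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # Slope of log(count) versus log(1 / box size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope
```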
Automatic bone age assessment: a registration approach
Miguel-Angel Martin-Fernandez, Marcos Martin-Fernandez, Carlos Alberola-Lopez
In this paper we describe a method for registering human hand radiographs (templates) onto a target radiograph for automatic bone age assessment. The method itself constitutes a complete methodology for bone age determination, since it performs tasks similar to the well-known Greulich-Pyle medical technique. In addition, this method is a first step towards a segmentation-by-registration procedure which aims to carry out a detailed shape analysis of the bones of the hand with the same purpose of age determination. The method consists of two registration stages: the first is a landmark-based procedure, with landmarks located in relevant areas of the human hand. This first stage consists of several affine transformations applied both to the whole hand and to each particular finger. The second stage is an intensity-based method which uses mutual information to correct for the differences in finger widths between the template images and the target image.
Scaling index method: a novel nonlinear technique for the analysis of high-resolution MRI of human bones
The scaling index method (SIM) is a novel non-linear technique to extract structural information from arbitrary data sets. The tomographic images of a three-dimensional object can be interpreted as a pixel distribution in a four-dimensional space. The SIM provides a distribution of pointwise dimensions which characterizes the structural information of images. The SIM is applied to high-resolution magnetic resonance images of human spinal and femoral bone specimens in vitro in order to derive a 3D non-linear texture measure, which is compared to standard 2D morphometric parameters and bone mineral density in the prediction of the biomechanical strength of trabecular bone. Our results show that structural non-linear parameters associated with the trabecular substructure of the bone can effectively predict the mechanical properties of trabecular bone in vitro. This indicates that the trabecular architecture contributes substantially to the biomechanical properties of the bone.
Computer-assisted radiographic characterization of alloimplant materials used as bone substitutes in dentistry
Naglaa Abdel-Wahed, Abdel-Wahab S. Ahmed, Adel Zein Elabedeen, et al.
We develop a computerized system for evaluation of alloimplant procedures in dentistry from x-ray images. The goal of this system is to help clinicians make a more accurate evaluation of their surgical procedures as well as to guide them in selecting the most appropriate alloimplant material in an objective manner. A study was conducted whereby three types of alloimplant materials were inserted into surgical defects in the tibia of dogs. Each animal had four such defects: one for each of the three different materials, in addition to a control defect that was intentionally left empty. The defect locations were imaged using x-rays at periodic intervals starting immediately after the operation. The animals were sacrificed at different times after the surgical operation. The acquired images were paired with their correct diagnosis and split into two sets representing the learning and testing data for our computerized system. The plain x-ray films were scanned using a standard film digitizer and standardized in size and intensity using a step wedge that was imaged beside the region of interest. A set of first- and second-order textural and radiometric parameters was extracted from each alloimplant location outlined by the radiographer to describe its clinical status in a quantitative manner.
Filtered back-projection reconstruction technique for coherent-scatter computed tomography
Udo van Stevendaal, Jens-Peter Schlomka, Michael Grass
For the first time, a reconstruction technique based on filtered back-projection (FBP) using curved 3D back-projection lines is applied to 2D coherent-scatter computed tomography (CSCT) projection data. It has been demonstrated that CSCT yields information about the molecular structure of an object. So far, the acquired projection data have been reconstructed with the help of algebraic reconstruction techniques. Due to the computational complexity of iterative reconstruction, these methods lead to relatively long reconstruction times. In this contribution, a reconstruction algorithm based on 3D FBP is introduced and tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. With FBP reconstruction, at least comparable image quality is achieved in a fraction of the computation time. In addition, it has the advantage that, in contrast to iterative reconstruction schemes, sub-field-of-view reconstruction becomes feasible. This allows a selective reconstruction of the scatter function for a region of interest. The method is based on a row-by-row high-pass filtering of the scatter data, with or without fan-to-parallel beam rebinning. The 3D back-projection is performed along curved lines through a volume defined by the in-plane spatial coordinates and the wave-vector transfer.
Cardiac image reconstruction on a 16-slice CT scanner using a retrospectively ECG-gated multicycle 3D back-projection algorithm
Fast 16-slice spiral CT delivers superior cardiac visualization in comparison to older-generation 2- to 8-slice scanners due to the combination of high temporal resolution, isotropic spatial resolution, and large coverage. The large beam opening of such scanners necessitates the use of adequate algorithms to avoid cone beam artifacts. We have developed a multi-cycle, phase-selective 3D back-projection reconstruction algorithm that provides excellent temporal and spatial resolution for 16-slice CT cardiac images free of cone beam artifacts.
Image reconstruction from sensitivity encoded MRI data using extrapolated iterations of parallel projections onto convex sets
Alexei A. Samsonov, Eugene G. Kholmovski, Christopher R. Johnson
Parallel imaging techniques for MRI use differences in the spatial sensitivity of multiple receiver coils to achieve an additional encoding effect and significantly reduce data acquisition time. Recently, a projection onto convex sets (POCS) based method for reconstruction from sensitivity-encoded data (POCSENSE) has been proposed. The main advantage of POCSENSE in comparison with other iterative reconstruction techniques is that it offers a straightforward and computationally efficient way to incorporate non-linear constraints into the reconstruction, which can lead to improved image quality and/or reliable reconstruction for underdetermined problems. However, the POCSENSE algorithm demonstrates slow convergence for badly conditioned problems. In this work, we propose a novel method for image reconstruction from sensitivity-encoded MRI data that overcomes this limitation of the original POCSENSE technique. In the proposed method, a convex combination of projections onto convex sets is used to obtain an updated estimate of the solution via relaxation. The new method converges very efficiently due to the use of an iteration-dependent relaxation parameter that may extend far beyond the theoretical limits of POCS. The developed method was validated with phantom and volunteer MRI data and was demonstrated to have a much higher convergence rate than that of the original POCSENSE technique.
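A minimal sketch of a relaxed projection-onto-convex-sets iteration in Python; the two example constraint sets (data consistency on a sampled k-space subset and nonnegativity) and the fixed relaxation parameter are illustrative assumptions and do not reproduce the coil-sensitivity operators or the iteration-dependent relaxation of POCSENSE.

```python
import numpy as np

def relaxed_pocs(measured_kspace, sample_mask, n_iter=50, relaxation=1.5):
    """Reconstruct an image from undersampled k-space by relaxed alternating projections."""
    x = np.zeros_like(measured_kspace, dtype=complex)
    for _ in range(n_iter):
        # Projection 1: enforce consistency with the measured k-space samples.
        k = np.fft.fft2(x)
        k[sample_mask] = measured_kspace[sample_mask]
        p1 = np.fft.ifft2(k)
        # Projection 2: enforce a simple image-domain constraint (nonnegative real part).
        p2 = np.clip(p1.real, 0, None).astype(complex)
        # Relaxed update: step beyond the plain projection by the relaxation factor.
        x = x + relaxation * (p2 - x)
    return x.real
```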
Tomographic Reconstruction
Dynamic cardiac volume imaging using area detectors
Herbert Bruder, Arne Hoelzel, Karl Stierstorfer, et al.
We present a reconstruction scheme for dynamic cardiac volume imaging using area detector computed tomography (CT), named Multi-Sector Cardiac Volume Reconstruction (MCVR), which is based on a Feldkamp-type 3D back-projection. It is intended for circular scanning using area detectors covering the whole heart volume, but the method can easily be extended to cardiac spiral imaging using multi-slice CT. In cardiac imaging with multi-slice CT, continuous data acquisition combined with the parallel recording of the patient's ECG enables retrospective gating of data segments for image reconstruction. Using consecutive heart cycles, MCVR identifies complementary and time-consistent projection data segments ≤ π using the temporal information of the ECG. After a row-by-row parallel rebinning and temporal rebinning, the projection data are filtered using conventional convolution kernels and finally reconstructed to image space using a 3D back-projection. A dynamic anthropomorphic computer model of the human heart was developed in order to validate the MCVR approach. A 256-slice detector system with 0.5 mm slice collimation was simulated operating in a circular scanning mode at a gantry rotation time of 330 ms and compared to state-of-the-art 16-slice technology. At end-diastole the coronary anatomy can be visualized with excellent image quality. Although an area detector with a large cone angle covering the entire heart volume was used, no cone artifacts were observed. Using a 2-sector approach, a nearly motion-free 3D visualization of the heart chambers was obtained even at end-systole.
Poster Session
Efficient and accurate likelihood for iterative image reconstruction in x-ray computed tomography
We report a novel approach for statistical image reconstruction in X-ray CT. Statistical image reconstruction depends on maximizing a likelihood derived from a statistical model for the measurements. Traditionally, the measurements are assumed to be statistically Poisson, but more recent work has argued that CT measurements actually follow a compound Poisson distribution due to the polyenergetic nature of the X-ray source. Unlike the Poisson distribution, compound Poisson statistics have a complicated likelihood that impedes direct use of statistical reconstruction. Using a generalization of the saddle-point integration method, we derive an approximate likelihood for use with iterative algorithms. In its most realistic form, the approximate likelihood we derive accounts for polyenergetic X-rays and Poisson light statistics in the detector scintillator, and can be extended to account for additive electronic noise. The approximate likelihood is closer to the exact likelihood than the conventional Poisson likelihood is, and carries the promise of more accurate reconstruction, especially in low X-ray dose situations.
CT scout z-resolution improvement with image restoration methods
Yufeng Zheng, Xiaohui Cui, Mark P. Wachowiak, et al.
New applications increasingly demand the use of CT scout images for diagnostic purposes. However, many CT scout images cannot be used diagnostically due to their poor resolution, particularly in the direction of table movement. Spatial resolution can generally be improved with image restoration techniques. Based on the principles of Wiener filtering and inverse filtering, this paper presents a modified Wiener filtering approach in the frequency domain. The concept of an equivalent target point spread function is introduced, which makes the restoration process steerable. Consequently, balancing resolution improvement against noise suppression is facilitated. Experiments compare the resulting image quality with traditional inverse filtering and Wiener filtering. The modified Wiener filtering method has been shown to restore the scout image with higher resolution and lower noise.
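A minimal sketch of classical frequency-domain Wiener deconvolution in Python; the known PSF, the scalar noise-to-signal constant, and the zero-padding scheme are illustrative assumptions, not the modified, steerable filter proposed in the paper.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=0.01):
    """Restore an image degraded by a known PSF using a classical Wiener filter."""
    # Zero-pad the PSF to the image size and center it at the origin for the FFT.
    pad = np.zeros_like(blurred, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + noise-to-signal ratio).
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```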
Analytical solution to 3D SPECT reconstruction with nonuniform attenuation, scatter, and spatially variant resolution variation for variable focal-length fan-beam collimators
In the past decades, analytical (non-iterative) methods have been extensively investigated and developed for the reconstruction of three-dimensional (3D) single-photon emission computed tomography (SPECT). However, exact reconstruction became possible only recently, when the exact analytic non-uniform attenuation reconstruction algorithm was derived. Based on the explicit inversion formula for the attenuated Radon transform discovered by Novikov (2000), we extended previous research on inverting the attenuated Radon transform for parallel-beam collimation geometry to fan-beam and variable focal-length fan-beam (VFF) collimators, and propose an efficient, analytical solution to 3D SPECT reconstruction with VFF collimators which compensates simultaneously for non-uniform attenuation, scatter, and spatially-variant or distance-dependent resolution variation (DDRV), as well as suppressing signal-dependent non-stationary Poisson noise. In this procedure, to avoid corruption of the reconstructed images in the presence of severe noise, we apply a Karhunen-Loève (K-L) domain adaptive Wiener filter, which accurately treats the non-stationary Poisson noise. The scatter is then removed by our scatter estimation method, which is based on the energy spectrum and modified from the triple-energy-window acquisition protocol. For the correction of DDRV, a distance-dependent deconvolution is adopted to provide a solution that realistically characterizes the resolution kernel in a real SPECT system. Finally, the image is reconstructed using our VFF non-uniform attenuation inversion formula.
Optimization of view weighting in tilted-plane-based reconstruction algorithms to minimize helical artifacts in multislice helical CT
In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone beam artifacts by tilting a reconstruction plane to fit a helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation about the third axis. As a result, the capability of suppressing helical and cone beam artifacts in the multi-tilted-plane-based reconstruction algorithm is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy is used to optimize both the capability of suppressing helical and cone beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating the artifact index and noise characteristics, showing that matched view weighting improves both helical artifact suppression and noise characteristics or dose efficiency significantly in comparison to the case in which non-matched view weighting is applied. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan beam filtered backprojection and incurs no extra computational cost.
Adaptive interpolation approach for multislice helical CT reconstruction
Helical interpolation or weighting functions used in most multi-slice computed tomography (MSCT) systems today are object independent. Although these algorithms have been shown to perform satisfactorily in most clinical settings, recent investigations have revealed significant image artifacts under special conditions. These artifacts are generated mainly when scanning objects with large variations at high helical pitches. In this paper, we present an adaptive interpolation approach that produces effective artifact reduction while keeping the impact on the slice sensitivity profile (SSP) to a minimum. In the proposed scheme, two interpolations are performed for each projection sample. The first interpolation has the property of producing an excellent SSP, while the second interpolation has the property of suppressing image artifacts. The projection samples generated by the two interpolation processes are then compared and a differential signal is produced. The final interpolated projection is the weighted sum of the two interpolations, with the weight being derived from a scaling function. Extensive phantom and patient studies were conducted. A thin-disc phantom experiment shows that the proposed scheme produces an SSP identical to that of the first interpolation. Experiments with phantoms and clinical studies also show that the adaptive interpolation approach produces significantly reduced image artifacts compared to existing algorithms.
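A minimal sketch of blending two interpolation results with a weight driven by their differential signal, in Python; the linear ramp scaling function and its thresholds are illustrative assumptions, not the scaling function used in the paper.

```python
import numpy as np

def adaptive_interpolation(interp_sharp, interp_smooth, low=10.0, high=100.0):
    """Blend a sharp (good SSP) and a smooth (artifact-suppressing) interpolation.

    The blending weight grows with the differential signal between the two results,
    so the smooth interpolation dominates only where the object varies strongly."""
    diff = np.abs(interp_sharp - interp_smooth)
    weight = np.clip((diff - low) / (high - low), 0.0, 1.0)  # 0 -> sharp, 1 -> smooth
    return (1.0 - weight) * interp_sharp + weight * interp_smooth
```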
A Wiener filtering approach over the Euclidean motion group for radon transform inversion
The problem of Radon transform inversion arises in fields as diverse as medical imaging, synthetic aperture radar, and radio astronomy. In this paper, we model the Radon transform as a convolution integral over the Euclidean motion group and provide a novel deconvolution method for its inversion. The deconvolution method presented here is a special case of the Wiener filtering framework in abstract harmonic analysis that was recently developed by the author. The proposed deconvolution method provides a fundamentally new statistical formulation for the inversion of the Radon transform that can operate in nonstationary noise and signal fields. It can be utilized for radiation treatment planning, inverse source problems, and 3D and 4D computed tomography. Furthermore, it is directly applicable to many computer vision and pattern recognition problems, as well as to problems in robotics and polymer science. Here, we present an algorithm for the discrete implementation of the Wiener filter and provide a comparison of the proposed image reconstruction method with filtered back-projection algorithms.
High-resolution reconstruction for 3D SPECT
In this work, we have developed a new method for SPECT (single photon emission computed tomography) image reconstruction which has shown the potential to provide higher-resolution results than conventional methods using the same projection data. Unlike the conventional FBP- (filtered backprojection) and EM- (expectation maximization) type algorithms, we utilize as much system response information as we can during the reconstruction process. This information can be pre-measured during the calibration process and stored in the computer. By selecting different sampling schemes for the point response measurement, different system kernel matrices are obtained. Reconstruction utilizing these kernels generates a set of reconstructed images of the same source. Based on these reconstructed images and their corresponding sampling schemes, we are able to achieve a high-resolution final image that best represents the object. Because uniform attenuation, resolution variation, and some other effects are included during the formation of the system kernel matrices, the reconstruction from the acquired projection data also compensates for all these effects correctly.
Tilted helical Feldkamp cone-beam reconstruction algorithm for multislice CT
Ilmar A. Hein, Katsuyuki Taguchi, Issei Mori, et al.
In many clinical applications, it is necessary to tilt the gantry of an X-ray CT system with respect to the patient. Tilting the gantry introduces no complications for single-slice fan-beam systems; however, most systems today are helical multislice systems with up to 16 slices (and this number is sure to increase in the future). The image reconstruction algorithms used in multislice helical CT systems must be modified to compensate for the tilt. If they are not, the quality of the reconstructed images will be poor, with significant artifacts produced by the tilt. Practical helical multislice algorithms currently incorporated in today's systems include helical fan-beam, ASSR (advanced single-slice rebinning), and Feldkamp algorithms. This paper presents the modifications necessary to compensate for gantry tilt in the helical cone-beam Feldkamp algorithm implemented by Toshiba (referred to as TCOT, for true cone-beam tomography). Unlike some of the other algorithms, gantry tilt compensation is simple and straightforward to implement with no significant increase in computational complexity. It will be shown that the effect of the gantry tilt is to introduce a lateral shift in the isocenter of the reconstructed slice of interest, which is a function of the tilt, couch speed, and view angle. This lateral shift is easily calculated and incorporated into the backprojection algorithm. The tilt-compensated algorithm is called T-TCOT. Experimental tilted-gantry data have been obtained with 8- and 16-slice Toshiba Aquilion systems, and examples of uncompensated and tilt-compensated images are presented.
A new approach for CT image reconstruction with asymmetric configuration
Lifeng Yu, Xiaochuan Pan, Charles A. Pelizzari, et al.
We developed a novel algorithm for image reconstruction from fan-beam data acquired with asymmetric flat-panel detectors. This new algorithm can improve the noise properties of the widely used fan-beam filtered-backprojection (FFBP) algorithm by eliminating the spatially-variant weighting factor while retaining FFBP's favorable resolution properties. Quantitative results verify the theoretical prediction of the improved noise properties. These improved noise properties can be translated into a reduction of radiation dose. The new algorithm is particularly robust and useful when applied to CT systems with large field of measurement (FOM) and/or relatively small focal lengths.
Windmill artifact in multislice helical CT
Multi-slice helical CT systems suffer from windmill artifacts: black/white patterns that spin off of features with high longitudinal gradients. The number of black/white pairs matches the number of slices (detector rows) in the multi-slice detector. The period of the spin is the same as the helical pitch. We investigate the cause of the pattern by following the traces of selected voxels through the multi-slice detector array as a function of view position. This forms an "extracted sinogram" which represents the data used to reconstruct the specific voxel. We can then determine the cause of the artifact by correlating the windmill streak in the image with the extracted data. The investigation shows that inadequate sampling along the longitudinal direction causes the artifact.