Clinical relevance of model-based computer-assisted diagnosis and therapy
Author(s):
Andrea Schenk;
Stephan Zidowitz;
Holger Bourquain;
Milo Hindennach;
Christian Hansen;
Horst K. Hahn;
Heinz-Otto Peitgen
The ability to acquire and store radiological images digitally has made this data available to mathematical and scientific
methods. With the step from subjective interpretation to reproducible measurements and knowledge, it is also possible to
develop and apply models that give additional information which is not directly visible in the data. In this context, it is
important to know the characteristics and limitations of each model. Four characteristics ensure the clinical relevance of
models for computer-assisted diagnosis and therapy: the ability to adapt to the individual patient, the treatment of errors and
uncertainty, dynamic behavior, and in-depth evaluation. We demonstrate the development and clinical application of a
model in the context of liver surgery. Here, a model of the intrahepatic vascular structures is combined with patient-individual
anatomical information from radiological images, which is limited in the degree of vascular detail it captures. As a result, the model allows
for a dedicated risk analysis and preoperative planning of oncologic resections as well as for living donor liver
transplantations. The clinical relevance of the method was confirmed in several evaluation studies by our medical partners,
and more than 2900 complex surgical cases have been analyzed since 2002.
Feature selection for computer-aided detection: comparing different selection criteria
Author(s):
Rianne Hupse;
Nico Karssemeijer
In this study we investigated different feature selection methods for use in computer-aided mass detection. The
data set we used (1357 malignant mass regions and 58444 normal regions) was much larger than used in previous
research where feature selection did not directly improve the performance compared to using the entire feature set.
We introduced a new performance measure to be used during feature selection, defined as the mean sensitivity
in an interval of the free response operating characteristic (FROC) curve computed on a logarithmic scale. This
measure is similar to the final validation performance measure we were optimizing. Therefore it was expected
to give better results than more general feature selection criteria. We compared the performance of feature
sets selected using the mean sensitivity of the FROC curve to sets selected using the Wilks' lambda statistic
and investigated the effect of reducing the skewness in the distribution of the feature values before performing
feature selection. In the case of Wilks' lambda, we found that reducing skewness had a clear positive effect,
yielding performance similar to or exceeding that obtained when the entire feature set was used. Our
results indicate that a general measure like Wilks' lambda selects better performing feature sets than the mean
sensitivity of the FROC curve.
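As a rough illustration of one of the criteria compared above, a minimal numpy sketch of a two-class Wilks' lambda and a simple log-based skewness reduction might look like this (an illustration under our own assumptions, not the authors' implementation):

```python
import numpy as np

def wilks_lambda(X, y):
    """Wilks' lambda for a feature subset: det(W) / det(T), where W is
    the pooled within-class scatter and T the total scatter.
    Smaller values indicate better class separation."""
    Xc = X - X.mean(axis=0)
    T = Xc.T @ Xc                      # total scatter
    W = np.zeros_like(T)
    for c in np.unique(y):
        Xk = X[y == c]
        Xk = Xk - Xk.mean(axis=0)
        W += Xk.T @ Xk                 # within-class scatter
    return np.linalg.det(W) / np.linalg.det(T)

def reduce_skew(X):
    """Simple skewness reduction: log-transform after shifting each
    feature to be non-negative (one of several possible choices)."""
    return np.log1p(X - X.min(axis=0))
```

Well-separated classes drive the criterion toward zero, while uninformative features leave it near one, so feature search minimizes it.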
Hybrid linear classifier for jointly normal data: theory
Author(s):
Weijie Chen;
Charles E. Metz;
Maryellen L. Giger
Classifier design for a given classification task needs to take into consideration both the complexity of the classifier and the size of the data set that is available for training the classifier. With limited training data, as often is the situation in computer-aided diagnosis of medical images, a classifier with a simple structure (e.g., a linear classifier) is more robust and therefore preferred. We consider the two-class classification problem in which the feature data arise from two multivariate normal distributions. A linear function is used to combine the multi-dimensional feature vector onto a scalar variable. This scalar variable, however, is generally not an ideal decision variable unless the covariance matrices of the two classes are equal. We propose using the likelihood ratio of this scalar variable as a decision variable, thus generalizing the traditional classification paradigm to a hybrid two-stage procedure: a linear combination of the feature vector elements to form a scalar variable, followed by a nonlinear, nonmonotonic transformation that maps the scalar variable onto its likelihood ratio (i.e., the ideal decision variable, given the scalar variable). We show that the traditional Fisher's linear discriminant function is generally not the optimal linear function for the first stage in this two-stage paradigm. We further show that the optimal linear function can be obtained with a numerical optimization procedure using the area under the "proper" ROC curve as the objective function.
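The two-stage idea can be sketched in a few lines of pure Python; the specific weight vector and diagonal covariances below are illustrative assumptions, not the paper's optimal linear stage:

```python
from math import exp, sqrt, pi

def gauss_pdf(t, mu, var):
    """Density of N(mu, var) at t."""
    return exp(-(t - mu) ** 2 / (2.0 * var)) / sqrt(2.0 * pi * var)

def scalar_likelihood_ratio(w, mu1, cov1, mu2, cov2):
    """Second stage of the hybrid classifier: the scalar t = w'x is
    N(w'mu_k, w'cov_k w) under class k, so its likelihood ratio is a
    ratio of two 1-D Gaussian densities -- nonmonotonic in t whenever
    the two projected variances differ."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    quad = lambda M: dot([dot(row, w) for row in M], w)
    m1, v1 = dot(w, mu1), quad(cov1)
    m2, v2 = dot(w, mu2), quad(cov2)
    return lambda t: gauss_pdf(t, m1, v1) / gauss_pdf(t, m2, v2)
```

With equal projected means but unequal projected variances, the ratio peaks in the middle and falls off on both sides, which is exactly why a monotonic threshold on the raw scalar is suboptimal.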
Computer-aided detection of breast masses in tomosynthesis reconstructed volumes using information-theoretic similarity measures
Author(s):
Swatee Singh;
Georgia D. Tourassi;
Amarpreet S. Chawla;
Robert S. Saunders;
Ehsan Samei;
Joseph Y. Lo
The purpose of this project is to study two Computer Aided Detection (CADe) systems for breast masses for
digital tomosynthesis using reconstructed slices. This study used eighty human subject cases collected as part
of ongoing clinical trials at Duke University. Raw projection images were used to identify suspicious regions
in the algorithm's high sensitivity, low specificity stage using a Difference of Gaussian filter. The filtered
images were thresholded to yield initial CADe hits that were then shifted and added to yield a 3D distribution
of suspicious regions. The initial system performance was 95% sensitivity at 10 false positives per breast
volume. Two CADe systems were developed. In system A, the central slice located at the centroid depth was
used to extract a 256 × 256 region-of-interest (ROI) database centered at the lesion coordinates. For system B,
5 slices centered at the lesion coordinates were summed before the extraction of 256 × 256 ROIs. To avoid
issues associated with feature extraction, selection, and merging, information theory principles were used to
reduce false positives for both systems, resulting in classifier performances of 0.81 and 0.865 area under the
ROC curve (AUC) with leave-one-case-out sampling. This gave an overall system performance of 87%
sensitivity at 6.1 FPs/volume and 85% sensitivity at 3.8 FPs/volume for systems A and B, respectively.
This system therefore has the potential to detect breast masses in tomosynthesis data sets.
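The high-sensitivity candidate stage described above can be sketched generically with a Difference-of-Gaussians band-pass followed by a threshold; the sigmas and threshold below are made-up illustrative values, not those of the study:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur (same-size output)."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_candidates(img, s1=2.0, s2=4.0, thresh=0.1):
    """Difference-of-Gaussians band-pass, then threshold to obtain a
    high-sensitivity, low-specificity candidate mask."""
    dog = blur(img, s1) - blur(img, s2)
    return dog > thresh
```

Blob-like structures near the filter's band survive the subtraction, while smooth background is suppressed; the per-image masks can then be shifted and added into a 3D distribution of suspicious regions.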
Digital tomosynthesis mammography: comparison of mass classification using 3D slices and 2D projection views
Author(s):
Heang-Ping Chan;
Yi-Ta Wu;
Berkman Sahiner;
Yiheng Zhang;
Jun Wei;
Richard H. Moore;
Daniel B. Kopans;
Mark A. Helvie;
Lubomir Hadjiiski;
Ted Way
We are developing computer-aided diagnosis (CADx) methods for classification of masses on digital breast
tomosynthesis mammograms (DBTs). A DBT data set containing 107 masses (56 malignant and 51 benign) collected at
the Massachusetts General Hospital was used. The DBTs were obtained with a GE prototype system which acquired 11
projection views (PVs) over a 50-degree arc. We reconstructed the DBTs at 1-mm slice interval using a simultaneous
algebraic reconstruction technique. The regions of interest (ROIs) containing the masses in the DBT volume and the
corresponding ROIs on the PVs were identified. The mass on each slice or each PV was segmented by an active contour
model. Spiculation measures, texture features, and morphological features were extracted from the segmented mass.
Four feature spaces were formed: (1) features from the central DBT slice, (2) average features from 5 DBT slices
centered at the central slice, (3) features from the central PV, and (4) average features from all 11 PVs. In each feature
space, a linear discriminant analysis classifier with stepwise feature selection was trained and tested using a two-loop
leave-one-case-out procedure. The test Az of 0.91±0.03 from the 5-DBT-slice feature space was significantly (p=0.003)
higher than that of 0.84±0.04 from the 1-DBT-slice feature space. The test Az of 0.83±0.04 from the 11-PV feature
space was not significantly different (p=0.18) from that of 0.79±0.04 from the 1-PV feature space. The classification
accuracy in the 5-DBT-slice feature space was significantly better (p=0.006) than that in the 11-PV feature space. The
results demonstrate that the features of breast lesions extracted from the DBT slices may provide higher classification
accuracy than those from the PV images.
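The leave-one-case-out evaluation used above rests on holding out all samples of one case together, so that slices or views of the same mass never appear in both training and testing. A minimal sketch of such an outer split (our own illustration, not the authors' code):

```python
def leave_one_case_out(case_ids):
    """Yield (train_idx, test_idx) splits where all samples from one
    case (e.g., several slices or views of the same mass) are held out
    together, avoiding optimistic bias from splitting a case across
    the train and test sides."""
    cases = sorted(set(case_ids))
    for held_out in cases:
        train = [i for i, c in enumerate(case_ids) if c != held_out]
        test = [i for i, c in enumerate(case_ids) if c == held_out]
        yield train, test
```

An inner loop of the same form over the training cases can then drive stepwise feature selection without touching the held-out case.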
Applying a 2D based CAD scheme for detecting micro-calcification clusters using digital breast tomosynthesis images: an assessment
Author(s):
Sang Cheol Park;
Bin Zheng;
Xiao-Hui Wang;
David Gur
Digital breast tomosynthesis (DBT) has emerged as a promising imaging modality for screening
mammography. However, visually detecting micro-calcification clusters depicted on DBT images is a difficult task.
Computer-aided detection (CAD) schemes for detecting micro-calcification clusters depicted on mammograms can
achieve high performance and the use of CAD results can assist radiologists in detecting subtle micro-calcification
clusters. In this study, we compared the performance of an available 2D based CAD scheme with one that includes a new
grouping and scoring method when applied to both projection and reconstructed DBT images. We selected a dataset
involving 96 DBT examinations acquired on 45 women. Each DBT image set included 11 low dose projection images
and a varying number of reconstructed image slices ranging from 18 to 87. In this dataset 20 true-positive micro-calcification
clusters were visually detected on the projection images and 40 were visually detected on the reconstructed
images, respectively. We first applied the CAD scheme that was previously developed in our laboratory to the DBT
dataset. We then tested a new grouping method that defines an independent cluster by grouping the same cluster detected
on different projection or reconstructed images. We then compared four scoring methods to assess the CAD
performance. The maximum sensitivity levels observed for the different grouping and scoring methods were 70% and
88% for the projection and reconstructed images with a maximum false-positive rate of 4.0 and 15.9 per examination,
respectively. This preliminary study demonstrates that (1) among the maximum, the minimum or the average CAD
generated scores, using the maximum score of the grouped cluster regions achieved the highest performance level, (2)
the histogram based scoring method is reasonably effective in reducing false-positive detections on the projection images
but the overall CAD sensitivity is lower due to the lower signal-to-noise ratio, and (3) CAD achieved both higher sensitivity and a higher false-positive rate (per examination) on the reconstructed images. We concluded that, without changing the detection threshold or applying pre-filtering to increase detection sensitivity, current CAD schemes developed and optimized for 2D mammograms perform relatively poorly on DBT examinations; they need to be re-optimized on DBT datasets, and new grouping and scoring methods need to be incorporated into the schemes.
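A minimal sketch of a grouping step with the max-score rule can make the idea concrete; the greedy strategy and the distance tolerance below are illustrative assumptions, not the paper's grouping criterion:

```python
def group_and_score(detections, dist_tol=10.0):
    """Greedily group per-image cluster detections into one combined
    cluster per lesion. Each detection is ((x, y), score); the grouped
    score is the maximum member score, the scoring rule the study
    found to perform best among max, min, and average."""
    groups = []
    for (x, y), s in detections:
        for g in groups:
            gx, gy = g["center"]
            if abs(x - gx) <= dist_tol and abs(y - gy) <= dist_tol:
                g["members"].append(((x, y), s))
                g["score"] = max(g["score"], s)
                break
        else:
            # no nearby group: start a new independent cluster
            groups.append({"center": (x, y),
                           "members": [((x, y), s)],
                           "score": s})
    return groups
```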
Classification of breast masses and normal tissues in digital tomosynthesis mammography
Author(s):
Jun Wei;
Heang-Ping Chan;
Yiheng Zhang;
Berkman Sahiner;
Chuan Zhou;
Jun Ge;
Yi-Ta Wu;
Lubomir M. Hadjiiski
Digital tomosynthesis mammography (DTM) can provide quasi-3D structural information of the breast by
reconstructing the breast volume from projection views (PV) acquired in a limited angular range. Our purpose is to
design an effective classifier to distinguish breast masses from normal tissues in DTMs. A data set of 100 DTM cases
collected with a GE first generation prototype DTM system at the Massachusetts General Hospital was used. We
reconstructed the DTMs using a simultaneous algebraic reconstruction technique (SART). Mass candidates were
identified by 3D gradient field analysis. Three approaches to distinguish breast masses from normal tissues were
evaluated. In the 3D approach, we extracted morphological and run-length statistics texture features from DTM slices as
input to a linear discriminant analysis (LDA) classifier. In the 2D approach, the raw input PVs were first preprocessed
with a Laplacian pyramid multi-resolution enhancement scheme. A mass candidate was then forward-projected to the
preprocessed PVs in order to determine the corresponding regions of interest (ROIs). Spatial gray-level dependence
(SGLD) texture features were extracted from each ROI and averaged over 11 PVs. An LDA classifier was designed to
distinguish the masses from normal tissues. In the combined approach, the LDA scores from the 3D and 2D approaches
were averaged to generate a mass likelihood score for each candidate. The Az values were 0.87±0.02, 0.86±0.02, and
0.91±0.02 for the 3D, 2D, and combined approaches, respectively. The difference between the Az values of the 3D and
2D approaches did not achieve statistical significance. The performance of the combined approach was significantly
(p<0.05) better than either the 3D or 2D approach alone. The combined classifier will be useful for false-positive
reduction in computerized mass detection in DTM.
Masses classification using fuzzy active contours and fuzzy decision trees
Author(s):
G. J. Palma;
G. Peters;
S. Muller;
I. Bloch
In this paper we propose a method to classify masses in digital breast tomosynthesis (DBT) datasets. First,
markers of potential lesions are extracted and matched over the different projections. Then two level-set models
are applied on each finding corresponding to spiculated and circumscribed mass assumptions respectively. The
formulation of the active contours within this framework leads to several candidate contours for each finding. In
addition, a membership value in the contour class is derived from the energy of the segmentation model, which
makes it possible to associate several fuzzy contours from different projections with each set of markers corresponding to a
lesion. Fuzzy attributes are computed for each fuzzy contour. Then the attributes corresponding to fuzzy contours
associated to each set of markers are aggregated. Finally, these cumulated fuzzy attributes are processed by two
distinct fuzzy decision trees in order to validate/invalidate the spiculated or circumscribed mass assumptions.
The classification was validated on a database of 23 real lesions using the leave-one-out method. A classification
error rate of 9% was obtained on these data, which confirms the promise of the proposed approach.
Texture in digital breast tomosynthesis: a comparison between mammographic and tomographic characterization of parenchymal properties
Author(s):
Despina Kontos;
Predrag R. Bakic;
Andrew D. A. Maidment
Studies have demonstrated a relationship between mammographic texture and breast cancer risk. To date, texture
analysis has been limited by tissue superimposition in mammography. Digital Breast Tomosynthesis (DBT) is a novel
x-ray imaging modality in which 3D images of the breast are reconstructed from a limited number of source projections.
Tomosynthesis alleviates the effect of tissue superimposition and offers the ability to perform tomographic texture
analysis, with the potential to ultimately yield more accurate measures of risk. In this study, we analyzed texture in
DBT and digital mammography (DM). Our goal was to compare tomographic versus mammographic texture
characterization and evaluate the robustness of texture descriptors in reflecting characteristic parenchymal properties.
We analyzed DBT and DM images from 40 women with recently detected abnormalities and/or previously diagnosed
breast cancer. Texture features, previously shown to correlate with risk, were computed from the retroareolar region.
We computed the texture correlation (i) between DBT and DM, and (ii) between the contralateral and ipsilateral breasts.
The effect of the gray-level quantization on the observed correlations was investigated. Low correlation was detected
between DBT and DM features. The correlation between contralateral and ipsilateral breasts was significant for both
modalities, and overall stronger for DBT. We observed that the selection of the gray-level quantization algorithm affects
the detected correlations. The strong correlation between contralateral and ipsilateral breasts supports the hypothesis
that parenchymal properties appear to be inherent in an individual woman; the texture of the unaffected breast could
potentially be used as a marker of risk.
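The sensitivity of texture features to the quantization choice can be illustrated with two common gray-level quantization schemes; this is a generic sketch and does not reproduce the paper's specific algorithms:

```python
import numpy as np

def quantize_equal_width(img, levels):
    """Equal-width quantization: the intensity range is split into
    'levels' bins of identical width."""
    lo, hi = img.min(), img.max()
    q = np.floor((img - lo) / (hi - lo + 1e-12) * levels).astype(int)
    return np.clip(q, 0, levels - 1)

def quantize_equal_freq(img, levels):
    """Equal-frequency (histogram-equalized) quantization: each bin
    receives (approximately) the same number of pixels."""
    ranks = np.argsort(np.argsort(img.ravel()))
    q = (ranks * levels) // img.size
    return q.reshape(img.shape)
```

Gray-level co-occurrence statistics computed on the two outputs can differ noticeably for the same region, which is one way the quantization algorithm can affect observed texture correlations.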
Automated matching of supine and prone colonic polyps based on PCA and SVMs
Author(s):
Shijun Wang;
Robert L. Van Uitert;
Ronald M. Summers M.D.
Computed tomographic colonography (CTC) is a feasible and minimally invasive method for the detection of colorectal polyps and for cancer screening. In current practice, a patient is scanned twice during the CTC examination, once supine and once prone. To assist radiologists in evaluating colon polyp candidates in both scans, we expect the computer-aided detection (CAD) system to provide not only the locations of suspicious polyps, but also the possible matched pairs of polyps in the two scans. In this paper, we propose a new automated matching method based on features extracted from the polyps, using principal component analysis (PCA) and support vector machines (SVMs). Our dataset comprises 104 CT scans of 52 patients imaged in the supine and prone positions, collected from three medical centers. From it we constructed two groups of matched polyp candidates according to the size of the true polyps: group A contains 12 true polyp pairs (> 9 mm) and 454 false pairs; group B contains 24 true polyp pairs (6-9 mm) and 514 false pairs. Using PCA, we reduced the dimensionality of the original data (157 attributes) to 30 dimensions. We performed a leave-one-patient-out test on the two groups of data. ROC analysis shows that bigger polyps are easier to match than smaller ones. On the group A data, at a false-alarm probability of 0.18, the SVM achieves a sensitivity of 0.83, which shows that automated matching of polyp candidates is practicable for clinical applications.
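The PCA step (157 attributes reduced to 30 dimensions) can be sketched with a plain SVD; this shows only the dimensionality-reduction stage, not the SVM matching, and the data below are synthetic:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X (candidate pairs x features) onto the top
    principal components, computed via SVD of the centered data.
    Returns the reduced coordinates and the component matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, components
```

The reduced coordinates would then be fed to a classifier (here an SVM) that scores candidate supine-prone pairs as matched or not.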
DMLLE: a large-scale dimensionality reduction method for detection of polyps in CT colonography
Author(s):
Shijun Wang;
Jianhua Yao;
Ronald M. Summers M.D.
Computer-aided diagnosis systems have been shown to be feasible for polyp detection on computed tomography (CT) scans. After 3-D image segmentation and feature extraction, the dataset of colonic polyp candidates is both large in scale and high in dimension. In this paper, we propose a large-scale dimensionality reduction method based on Diffusion Map and Locally Linear Embedding for the detection of polyps in CT colonography. By selecting a subset of the data as landmarks, we first map the landmarks into a low-dimensional embedding space using Diffusion Map. Then, using the Locally Linear Embedding algorithm, non-landmark samples are embedded into the same low-dimensional space according to their nearest landmark samples. The local geometry of the samples is preserved in both the original space and the embedding space. We applied the proposed method, called DMLLE, to a colonic polyp dataset that contains 58336 candidates (including 85 6-9 mm true polyps) with 155 features. Visual inspection shows that true polyps with similar shapes are mapped to close vicinity in the low-dimensional space. FROC analysis shows that an SVM with DMLLE achieves higher sensitivity with fewer false positives per patient than an SVM using all features. At 8 false positives per patient, the SVM with DMLLE improves the average sensitivity from 64% to 75% for polyps whose sizes are in the range from 6 mm to 9 mm (p < 0.05). This higher sensitivity is comparable to unaided readings by trained radiologists.
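The landmark embedding step can be illustrated with a bare-bones diffusion map; the kernel width and the synthetic data are our own illustrative choices, and the LLE extension of non-landmark samples is not shown:

```python
import numpy as np

def diffusion_map(X, n_dims=1, eps=1.0):
    """Diffusion-map embedding of landmark samples: Gaussian affinities
    are row-normalized into a Markov matrix whose top non-trivial
    eigenvectors give the low-dimensional coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / eps)
    P = P / P.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    # skip the trivial constant eigenvector (eigenvalue 1)
    order = np.argsort(-vals.real)[1:n_dims + 1]
    return (vecs[:, order] * vals[order]).real
```

Samples connected by many short diffusion paths land close together in the embedding, which is the sense in which local geometry is preserved.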
Mosaic decomposition method for detection and removal of inhomogeneously tagged regions in electronic cleansing for CT colonography
Author(s):
Wenli Cai;
Michael Zalis M.D.;
Hiroyuki Yoshida
Electronic cleansing (EC) is a method that segments fecal material tagged by an X-ray-opaque oral contrast agent in CT
colonography (CTC) images, and effectively removes the material for digitally cleansing the colon. In this study, we
developed a novel EC method, called mosaic decomposition, for reduction of the artifacts due to incomplete cleansing of
heterogeneously tagged fecal material in reduced- or non-cathartic fecal-tagging CTC examinations. In our approach, a
segmented colonic lumen, including the tagged regions, was first partitioned into a set of local homogeneous regions by
application of a watershed transform to the gradient of the CTC images. Then, each of the local homogeneous regions
was classified into five different material classes, including air, soft tissue, tagged feces, air bubbles, and foodstuff, based
on texture features of each tile. A single index, called the soft-tissue index, was formulated for differentiating these
materials from submerged solid soft-tissue structures such as polyps and folds; a larger value of the index
indicates a higher likelihood of soft tissue. EC was then performed by initializing the level-set front with the
classified tagged regions and evolving the front with a speed function designed, based on the soft-tissue
index, to preserve the submerged soft-tissue structures while suppressing air bubbles and foodstuff. Visual assessment
and application of our computer-aided detection (CAD) of polyps showed that the use of our new EC method
substantially improved the detection performance of CAD, indicating the effectiveness of our EC method in reducing
incomplete cleansing artifacts.
Simultaneous feature selection and classification based on genetic algorithms: an application to colonic polyp detection
Author(s):
Yalin Zheng;
Xiaoyun Yang;
Musib Siddique;
Gareth Beddoe
Selecting a set of relevant features is a crucial step in the process of building robust classifiers. Searching all
possible subsets of features is computationally impractical for a large number of features. Generally, classifiers are
used for the evaluation of the separability of a certain feature subset. The performance of these classifiers depends
on some predefined parameters. However, the choice of these parameters for a given classifier is influenced by
the given feature subset and vice versa. The computational cost of feature selection would be greatly increased
by including the selection of optimal parameters for the classifier (for each subset). This paper attempts to
tackle the problem by introducing genetic algorithms (GAs) to combine the processes. The proposed approach
can choose the most relevant features from a feature set whilst simultaneously optimising the parameters of the
classifier. Its performance was tested on a colon polyp database from a cohort study using a weighted support
vector machine (SVM) classifier. As a general approach, other classifiers such as artificial neural networks (ANN)
and decision trees could be used. This approach could also be applied to other classification problems such as
other computer-aided detection/diagnosis applications.
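The joint encoding at the heart of this approach can be sketched with a tiny genetic algorithm whose chromosome carries both a feature mask and a classifier parameter; the single parameter `C`, the operators, and the population settings below are illustrative assumptions, not the paper's configuration:

```python
import random

def random_chromosome(n):
    """Chromosome = binary feature mask + a classifier parameter
    (a hypothetical SVM-like regularization constant C)."""
    return {"mask": [random.random() < 0.5 for _ in range(n)],
            "C": 10 ** random.uniform(-2, 2)}

def crossover(a, b):
    """Single-point crossover on the mask; C inherited from a parent."""
    cut = random.randrange(1, len(a["mask"]))
    return {"mask": a["mask"][:cut] + b["mask"][cut:],
            "C": random.choice([a["C"], b["C"]])}

def mutate(c, rate=0.1):
    """Flip mask bits and jitter C on a log scale."""
    c["mask"] = [(not m) if random.random() < rate else m
                 for m in c["mask"]]
    if random.random() < rate:
        c["C"] *= 10 ** random.uniform(-0.5, 0.5)
    return c

def evolve(fitness, n_features, pop=20, gens=30, seed=0):
    """Elitist GA: fitness scores a (mask, C) pair, e.g., by
    cross-validated classifier performance on the selected features."""
    random.seed(seed)
    P = [random_chromosome(n_features) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]                     # keep the best half
        P = elite + [mutate(crossover(*random.sample(elite, 2)))
                     for _ in range(pop - len(elite))]
    return max(P, key=fitness)
```

Because the mask and the parameter evolve in the same chromosome, each candidate parameter setting is always evaluated against the feature subset it accompanies.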
An MTANN CAD for detection of polyps in false-negative CT colonography cases in a large multicenter clinical trial: preliminary results
Author(s):
Kenji Suzuki;
Ivan Sheu;
Mark Epstein;
Ryan Kohlbrenner;
Antonella Lostumbo;
Don C. Rockey;
Abraham H. Dachman M.D.
A major challenge in computer-aided detection (CAD) of polyps in CT colonography (CTC) is the detection of
"difficult" polyps which radiologists are likely to miss. Our purpose was to develop a CAD scheme incorporating
massive-training artificial neural networks (MTANNs) and to evaluate its performance on false-negative (FN) cases in a
large multicenter clinical trial. We developed an initial polyp-detection scheme consisting of colon segmentation based
on CT value-based analysis, detection of polyp candidates based on morphologic analysis, and quadratic discriminant
analysis based on 3D pattern features for classification. For reduction of false-positive (FP) detections, we developed
multiple expert 3D MTANNs designed to differentiate between polyps and seven types of non-polyps. Our independent
database was obtained from CTC scans of 155 patients with polyps from a multicenter trial in which 15 medical
institutions participated nationwide. Among them, about 45% of the patients received FN interpretations in CTC. For testing
our CAD, 14 cases with 14 polyps/masses were randomly selected from the FN cases. Lesion sizes ranged from 6 to 35
mm, with an average of 10 mm. The initial CAD scheme detected 71.4% (10/14) of "missed" polyps, including sessile
polyps and polyps on folds, with 18.9 (264/14) FPs per case. The MTANNs removed 75% (197/264) of the FPs without
loss of any true positives; thus, the performance of our CAD scheme was improved to 4.8 (67/14) FPs per case. With our
CAD scheme incorporating MTANNs, 71.4% of polyps "missed" by radiologists in the trial were detected correctly,
with a reasonable number of FPs.
Computerized self-assessment of automated lesion segmentation in breast ultrasound: implication for CADx applied to findings in the axilla
Author(s):
K. Drukker;
M. L. Giger
We developed a self-assessment method in which the CADx system provided a confidence level for its lesion
segmentations. The self-assessment was performed by a fuzzy-inference system based on 4 computer-extracted features
of the computer-segmented lesions in a leave-one-case-out evaluation protocol. In instances where the initial
segmentation received a low assessment rating, lesions were re-segmented using the same segmentation method but
based on a user-defined region-of-interest. A total of 542 cases with 1133 lesions were collected in this study, and we
focused here on the 97 normal lymph nodes in this dataset since these pose challenges for automated segmentation due
to their inhomogeneous appearance. The percentage of all lesions with satisfactory segmentation (i.e., normalized
overlap with the radiologist-delineated lesion >=0.3) was 85%. For normal lymph nodes, however, this percentage was
only 36%. Of the lymph nodes, 53 received a low confidence rating (<0.3) for their initial segmentation. When those
lymph nodes were re-segmented, the percentage with a satisfactory segmentation improved to 80.0%. Computer-assessed
confidence levels demonstrated potential to 1) help radiologists decide whether to use or disregard CADx
output, and 2) provide a guide for improvement of lesion segmentation.
Design and evaluation of a new automated method for the segmentation and characterization of masses on ultrasound images
Author(s):
Jing Cui;
Berkman Sahiner;
Heang-Ping Chan;
Alexis Nees;
Chintana Paramagul;
Lubomir M. Hadjiiski;
Chuan Zhou;
Jiazheng Shi
Segmentation of masses is the first step in most computer-aided diagnosis (CAD) systems for characterization of breast
masses as malignant or benign. In this study, we designed an automated method for segmentation of masses on
ultrasound (US) images. The method automatically estimated an initial contour based on a manually-identified point
approximately at the mass center. A two-stage active contour (AC) method iteratively refined the initial contour and
performed self-examination and correction on the segmentation result. To evaluate our method, we compared it with
manual segmentation by an experienced radiologist (R1) on a data set of 226 US images containing biopsy-proven
masses from 121 patients (44 malignant and 77 benign). Four performance measures were used to evaluate the
segmentation accuracy; two measures were related to the overlap between the computer and radiologist segmentation,
and two were related to the area difference between the two segmentation results. To compare the difference between the
segmentation results by the computer and R1 to inter-observer variation, a second radiologist (R2) also manually
segmented all masses. The two overlap measures between the segmentation results by the computer and R1 were
0.87±0.16 and 0.73±0.17, respectively, indicating high agreement. However, the segmentation results between the two
radiologists were more consistent. To evaluate the effect of the segmentation method on classification accuracy, three
feature spaces were formed by extracting texture, width-to-height, and posterior shadowing features using the computer
segmentation, R1's manual segmentation, and R2's manual segmentation. A linear discriminant analysis classifier using
stepwise feature selection was trained and tested by a leave-one-case-out method to characterize the masses as malignant
or benign. For case-based classification, the area Az under the test receiver operating characteristic (ROC) curve was
0.90±0.03, 0.87±0.03 and 0.87±0.03 for the feature sets based on computer segmentation, R1's manual segmentation,
and R2's manual segmentation, respectively.
Computer-aided diagnosis of breast color elastography
Author(s):
Ruey-Feng Chang;
Wei-Chih Shen;
Min-Chun Yang;
Woo Kyung Moon;
Etsuo Takada M.D.;
Yu-Chun Ho;
Michiko Nakajima;
Masayuki Kobayashi
Ultrasound has been an important imaging technique for detecting breast tumors. In contrast to the conventional B-mode
image, ultrasound elastography is a newer technique that images elasticity and is applied to assess the stiffness
of tissues. In color elastography, red regions indicate soft tissue and blue regions indicate hard tissue, and
harder tissue is usually classified as malignant. In this paper, we propose a CAD system for elastography and
evaluate whether it can classify tumors as benign or malignant effectively and accurately. According to the
features of elasticity, the color elastography image was transformed into HSV color space and meaningful features
were extracted from the hue image. A neural network was then applied to the multiple features to distinguish the
tumors. In this experiment, 180 pathology-proven cases, including 113 benign and 67 malignant cases, were used to
evaluate the classification. The proposed system achieved an accuracy of 83.89%, a sensitivity of 85.07%, and a
specificity of 83.19%. Compared with the physician's diagnosis, with an accuracy of 78.33%, a sensitivity of 53.73%,
and a specificity of 92.92%, the proposed CAD system performed better. Moreover, the agreement between the
proposed CAD system and the physician's diagnosis was calculated by kappa statistics; a kappa of 0.54 indicates a
fair agreement between the observers.
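The observer-agreement figure can be computed with the standard two-rater Cohen's kappa; the following is a generic sketch of the formula for binary labels, not tied to this paper's data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary (0/1) labels: observed
    agreement corrected for the agreement expected by chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    pa1 = sum(a) / n                                 # rater A's rate of 1s
    pb1 = sum(b) / n                                 # rater B's rate of 1s
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)           # chance agreement
    return (po - pe) / (1 - pe)
```

A kappa of 1 means perfect agreement, 0 means chance-level agreement; values around 0.5 are commonly read as moderate agreement.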
Computer-aided classification of lesions by means of their kinetic signatures in dynamic contrast-enhanced MR images
Author(s):
Thorsten Twellmann;
Bart ter Haar Romeny
The kinetic characteristics of tissue in dynamic contrast-enhanced magnetic resonance imaging data are an important
source of information for the differentiation of benign and malignant lesions. Kinetic curves measured for each lesion
voxel make it possible to infer information about the state of the local tissue. As a whole, they reflect the heterogeneity of the vascular
structure within a lesion, an important criterion for the preoperative classification of lesions. Current clinical practice in
analysis of tissue kinetics, however, is mainly based on the evaluation of the "most-suspect curve", which is related only
to a small, manually or semi-automatically selected region of interest within a lesion and does not reflect any information
about tissue heterogeneity.
We propose a new method which exploits the full range of kinetic information for the automatic classification of
lesions. Instead of breaking down the large amount of kinetic information to a single curve, each lesion is considered as a
probability distribution in a space of kinetic features, efficiently represented by its kinetic signature obtained by adaptive
vector quantization of the corresponding kinetic curves. Dissimilarity of two signatures can be objectively measured using
the Mallows distance, which is a metric defined on probability distributions. The embedding of this metric in a suitable
kernel function enables us to employ modern kernel-based machine learning techniques for the classification of signatures.
In a study considering 81 breast lesions, the proposed method yielded an Az value of 0.89±0.01 for the discrimination of
benign and malignant lesions in a nested leave-one-lesion-out evaluation setting.
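For two 1-D empirical distributions with equal sample counts and uniform weights, the Mallows (p-Wasserstein) distance has a closed form: the optimal transport plan simply matches the sorted samples. A sketch of this special case (the paper's signatures are weighted prototype sets, for which a general transportation solver is needed):

```python
def mallows_distance(xs, ys, p=2):
    """Mallows (p-Wasserstein) distance between two equal-size 1-D
    empirical distributions: optimal transport reduces to pairing
    the sorted samples."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return (sum(abs(x - y) ** p for x, y in zip(xs, ys)) / len(xs)) ** (1.0 / p)
```

It is a metric on distributions, so it can be plugged into a distance-substitution kernel as the abstract describes.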
Expanded pharmacokinetic model for population studies in breast MRI
Author(s):
Vandana Mohan;
Yoshihisa Shinagawa;
Bing Jian;
Gerardo Hermosillo
We propose a new model for pharmacokinetic analysis based on the one proposed by Tofts. Our model
both eliminates the need for estimating the Arterial Input Function (AIF) and normalizes analysis so
that comparisons across patients can be performed. Previous methods have attempted to circumvent
the AIF estimation by using the pharmacokinetic parameters of multiple reference regions (RR). Viewing
anatomical structures as filters, pharmacokinetic analysis tells us that 'similar' structures will act as similar
filters. By cascading the inverse filter at an RR with the filter at the voxel being analyzed, we obtain a
transfer function relating the concentration of a voxel to that of the RR. We show that this transfer function
simplifies into a five-parameter nonlinear model with no reference to the AIF. These five parameters are
combinations of the three parameters of the original model at the RR and the region of interest. Contrary
to existing methods, ours does not require explicit estimation of the pharmacokinetic parameters of the
RR. Also, cascading filters in the frequency domain allows us to manipulate more complex models, such as
accounting for the vascular tracer component. We believe that our model can improve analysis across MR
parameters because the analyzed and reference enhancement series are from the same image. Initial results
are promising with the proposed model parameters exhibiting values that are more consistent across lesions
in multiple patients. Additionally, our model can be applied to multiple voxels to estimate the original
pharmacokinetic parameters as well as the AIF.
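For reference, the standard Tofts model that the authors build on expresses tissue concentration as the arterial input convolved with an exponential residue, C_t(t) = Ktrans ∫ C_p(τ) e^(−k_ep(t−τ)) dτ. A direct discretization (a sketch; the variable names are ours, not the paper's):

```python
import math

def tofts_concentration(cp, ktrans, kep, dt):
    """Discretized standard Tofts model: tissue concentration is the
    plasma curve cp convolved with Ktrans * exp(-kep * t)."""
    ct = []
    for i in range(len(cp)):
        s = 0.0
        for j in range(i + 1):
            s += cp[j] * math.exp(-kep * (i - j) * dt) * dt
        ct.append(ktrans * s)
    return ct
```

With a constant plasma concentration, the tissue curve rises toward the steady-state value Ktrans/kep, which is a handy sanity check for any implementation.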
A knowledge-based approach to CADx of mammographic masses
Author(s):
Matthias Elter;
Erik Haßlmeyer
Today, mammography is recognized as the most effective technique for breast cancer screening. Unfortunately,
the low positive predictive value of breast biopsy examinations resulting from mammogram interpretation leads
to many unnecessary biopsies performed on benign lesions. In the last years, several computer assisted diagnosis
(CADx) systems have been proposed with the goal to assist the radiologist in the discrimination of benign and
malignant breast lesions and thus to reduce the high number of unnecessary biopsies. In this paper we present
a novel, knowledge-based approach to the computer aided discrimination of mammographic mass lesions that
uses computer-extracted attributes of mammographic masses and clinical data as input attributes to a case-based
reasoning system. Our approach emphasizes a transparent reasoning process which is important for the
acceptance of a CADx system in clinical practice. We evaluate the performance of the proposed system on a
large publicly available mammography database using receiver operating characteristic curve analysis. Our results
indicate that the proposed CADx system has the potential to significantly reduce the number of unnecessary
breast biopsies in clinical practice.
Computerized assessment of coronary calcified plaques in CT images of a dynamic cardiac phantom
Author(s):
Zachary B. Rodgers;
Martin King;
Maryellen L. Giger;
Michael Vannier;
Dianna M. E. Bardo;
Kenji Suzuki;
Li Lan
Motion artifacts in cardiac CT are an obstacle to obtaining diagnostically usable images. Although phase-specific
reconstruction can produce images with improved assessability (image quality), this requires that the radiologist spend
time and effort evaluating multiple image sets from reconstructions at different phases. In this study, ordinal logistic
regression (OLR) and artificial neural network (ANN) models were used to automatically assign assessability to images
of coronary calcified plaques obtained using a physical, dynamic cardiac phantom. 350 plaque images of 7 plaques from
five data sets (heart rates 60, 60, 70, 80, 90) and ten phases of reconstruction were obtained using standard cardiac CT
scanning parameters on a Philips Brilliance 64-channel clinical CT scanner. Six features of the plaques (velocity,
acceleration, edge-based volume, threshold-based volume, sphericity, and standard deviation of intensity) as well as
mean feature values and heart rate were used for training the OLR and ANN in a round-robin re-sampling scheme based
on training and testing groups with independent plaques. For each image, an ordinal assessability index rating on a 1-5
scale was assigned by a cardiac radiologist (D.B.) for use as a "truth" in training the OLR and ANN. The mean
difference between the assessability index truth and model-predicted assessability index values was +0.111 with
SD=0.942 for the OLR and +0.143 with SD=0.916 for the ANN. Comparing images from the repeat 60 bpm scans gave
concordance correlation coefficients (CCCs) of 0.794 [0.743, 0.837] (value, 95% CI) for the radiologist assigned values,
0.894 [0.856, 0.922] for the OLR, and 0.861 [0.818, 0.895] for the ANN. Thus, the variability of the OLR and ANN
assessability index values appears to lie within the variability of the radiologist-assigned values.
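The concordance correlation coefficient used above to compare repeat scans penalizes both poor correlation and systematic shift between two rating series. A sketch of Lin's CCC (not the authors' implementation):

```python
def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two
    equal-length rating series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # The (mx - my)^2 term penalizes any systematic offset
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, a constant offset between the two series lowers the CCC even when the correlation is perfect.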
Hotspot quantification of myocardial focal tracer uptake from molecular targeted SPECT/CT images: experimental validation
Author(s):
Yi-Hwa Liu;
Zakir Sahul;
Christopher A. Weyman;
William J. Ryder;
Donald P. Dione;
Lawrence W. Dobrucki;
Choukri Mekkaoui;
Matthew P. Brennan;
Xiaoyue Hu;
Christi Hawley;
Albert J. Sinusas
We have developed a new single photon emission computerized tomography (SPECT) hotspot quantification
method incorporating extra cardiac activity correction and hotspot normal limit estimation. The method was validated
for estimation accuracy of myocardial tracer focal uptake in a chronic canine model of myocardial infarction (MI). Dogs
(n = 4) at 2 weeks post MI were injected with Tl-201 and a Tc-99m-labeled hotspot tracer targeted at matrix
metalloproteinases (MMPs). An external point source filled with Tc-99m was used as a reference for absolute
radioactivity. Dual-isotope (Tc-99m/Tl-201) SPECT images were acquired simultaneously followed by an X-ray CT
acquisition. Dogs were sacrificed after imaging for myocardial gamma well counting. Images were reconstructed with
CT-based attenuation correction (AC) and without AC (NAC) and were quantified using our quantification method.
Normal limits for myocardial hotspot uptake were estimated based on 3 different schemes: maximum entropy,
mean-squared-error minimization (MSEM) and global minimization. Absolute myocardial hotspot uptake was quantified from
SPECT images using the normal limits and compared with well-counted radioactivity on a segment-by-segment basis (n = 12 segments/dog). Radioactivity was expressed as % injected dose (%ID). There was an excellent correlation (r = 0.78-0.92) between the estimated activity (%ID) derived using the SPECT quantitative approach and
well-counting, independent of AC. However, SPECT quantification without AC resulted in the significant underestimation of radioactivity. Quantification using SPECT with AC and the MSEM normal limit yielded the best results compared with well-counting. In conclusion, focal myocardial "hotspot" uptake of a targeted radiotracer can be accurately quantified in
vivo using a method that incorporates SPECT imaging with AC, an external reference, background scatter compensation,
and a suitable normal limit. This hybrid SPECT/CT approach allows for the serial non-invasive quantitative evaluation
of molecular targeted tracers in the heart.
Automated segmentation and tracking of coronary arteries in ECG-gated cardiac CT scans
Author(s):
Chuan Zhou;
Heang-Ping Chan;
Aamer Chughtai;
Smita Patel;
Prachi Agarwal;
Lubomir M. Hadjiiski;
Berkman Sahiner;
Jun Wei;
Jun Ge;
Ella A. Kazerooni
Cardiac CT has been reported to be an effective means for clinical diagnosis of coronary artery plaque disease.
We are investigating the feasibility of developing a computer-assisted image analysis (CAA) system to assist
radiologists in the detection of coronary artery plaque disease in ECG-gated cardiac CT scans. The heart region was
first extracted using morphological operations and an adaptive EM thresholding method. Vascular structures in
the heart volume were enhanced by 3D multi-scale filtering and analysis of the eigenvalues of Hessian matrices
using a vessel enhancement response function specially designed for coronary arteries. The enhanced vascular
structures were then segmented by an EM estimation method. Finally, our newly developed 3D rolling balloon
vessel tracking method (RBVT) was used to track the segmented coronary arteries. Starting at two manually
identified points located at the origins of left and right coronary artery (LCA and RCA), the RBVT method moved
a sphere of adaptive diameter along the vessels, tracking the vessels and identifying their branches automatically to
generate the left and right coronary arterial trees. Ten cardiac CT scans that contained various degrees of coronary
artery diseases were used as test data set for our vessel segmentation and tracking method. Two experienced
thoracic radiologists visually examined the computer tracked coronary arteries on a graphical interface to count
untracked false-negative (FN) branches (segments). A total of 27 artery segments were identified to be FNs in the
10 cases, ranging from 0 to 6 FN segments in each case. No FN artery segment was found in 2 cases.
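The vessel enhancement step above follows the common Hessian eigenvalue analysis: for a bright tubular structure, one eigenvalue is strongly negative and the other near zero. A generic Frangi-style 2-D response function (the paper uses a response specially designed for coronary arteries, whose exact form is not given here; the parameters `beta` and `c` are illustrative):

```python
import math

def vesselness_2d(l1, l2, beta=0.5, c=15.0):
    """Frangi-style 2-D vesselness from Hessian eigenvalues, ordered so
    |l1| <= |l2|. Bright tubular structures give l2 << 0 and l1 ~ 0."""
    if abs(l1) > abs(l2):
        l1, l2 = l2, l1
    if l2 >= 0:                      # not a bright ridge
        return 0.0
    rb = abs(l1) / abs(l2)           # blob-vs-line discriminator
    s = math.hypot(l1, l2)           # second-order structureness
    return (math.exp(-rb * rb / (2 * beta * beta))
            * (1 - math.exp(-s * s / (2 * c * c))))
```

A line-like point (one large negative eigenvalue) scores near 1, while a blob-like point (two comparable eigenvalues) is suppressed.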
Computer-aided prognosis of neuroblastoma: classification of stromal development on whole-slide images
Author(s):
Olcay Sertel;
Jun Kong;
Hiroyuki Shimada;
Umit Catalyurek;
Joel H. Saltz;
Metin Gurcan
Neuroblastoma is a cancer of the nervous system and one of the most common tumors in children. In clinical practice,
pathologists examine the haematoxylin and eosin (H&E) stained tissue slides under the microscope for the diagnosis.
According to the International Neuroblastoma Classification System, neuroblastoma tumors are categorized into
favorable and unfavorable histologies. The subsequent treatment planning is based on this classification. However, this
qualitative evaluation is time consuming, prone to error and subject to inter- and intra-reader variations and sampling
bias. To overcome these shortcomings, we are developing a computerized system for the quantitative analysis of
neuroblastoma slides. In this study, we present a novel image analysis system to determine the degree of stromal
development from digitized whole-slide neuroblastoma samples. The developed method uses a multi-resolution
approach that works similarly to how pathologists examine slides. Due to their very large size, the whole-slide
images are divided into non-overlapping image tiles and the proposed image analysis steps are applied to each image tile
using a parallel computation infrastructure developed earlier by our group. The computerized system classifies image
tiles as stroma-poor or stroma-rich subtypes using texture characteristics. The developed method has been independently
tested on 20 whole-slide neuroblastoma slides and it has achieved 95% classification accuracy.
Automatic classification and detection of clinically relevant images for diabetic retinopathy
Author(s):
Xinyu Xu;
Baoxin Li
We proposed a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of
clinically-relevant DR images from a database. Given a query image, our approach first classifies the image into one of
the three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that
are clinically-relevant to the query image from an archival image database. In the classification stage, the query DR
images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as
bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is
characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns
a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-
Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a
point in a new multi-class bag feature space. Finally a multi-class Support Vector Machine is trained in the multi-class
bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the
query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag
feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically-relevant
images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also
improves the efficiency and accuracy of DR lesion diagnosis and assessment.
Learning from imbalanced data: a comparative study for colon CAD
Author(s):
Xiaoyun Yang;
Yalin Zheng;
Musib Siddique;
Gareth Beddoe
Classification plays an important role in the reduction of false positives in many computer aided detection and
diagnosis methods. The difficulty of classifying polyps lies in the variation of possible polyp shapes and sizes and
the imbalance between the number of polyp and non-polyp regions available in the training data. CAD schemes
for medical applications demand high levels of sensitivity even at the expense of keeping a certain number of
false positives. In this paper, we investigate some state-of-the-art solutions to the imbalanced data problem:
Synthetic Minority Over-sampling Technique (SMOTE) and weighted Support Vector Machines (SVM). We
tested these methods using a diverse database of CT colonography, which included a wide spectrum of cases in
which polyps are difficult to detect. We performed several experiments with different combinations of over-sampling techniques
on training data. The results demonstrated that SVMs achieved much better performance than C4.5 with
different over-sampling techniques. Also, the results show that weighted SVM without over-sampling can achieve
comparable performance in terms of sensitivity and specificity to conventional SVM combined with the over-sampling
approach.
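SMOTE balances the training data by synthesizing minority-class samples along line segments between minority points and their nearest minority neighbors. A minimal pure-Python sketch of the original algorithm (not the authors' code):

```python
import random

def smote(minority, k=3, n_new=10, rng=None):
    """Minimal SMOTE sketch: each synthetic sample is a random
    interpolation between a minority point and one of its k nearest
    minority-class neighbors."""
    rng = rng or random.Random(0)

    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbors = sorted((m for m in minority if m is not x),
                           key=lambda m: dist2(x, m))[:k]
        nb = rng.choice(neighbors)
        t = rng.random()
        # Interpolate feature-wise between x and the chosen neighbor
        synthetic.append(tuple(u + t * (v - u) for u, v in zip(x, nb)))
    return synthetic
```

Because the synthetic points lie inside the convex hull of the minority neighborhood, the classifier's decision region around polyps broadens without simply duplicating samples.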
Reduction of false positives by extracting fuzzy rules from data for polyp detection in CTC scans
Author(s):
Musib M. Siddique;
Yalin Zheng;
Xiaoyun Yang;
Gareth Beddoe
This paper presents an adaptive neural network based Fuzzy Inference System (ANFIS) to reduce the false positive (FP)
rate of detected colonic polyps in Computed Tomography Colonography (CTC) scans. Extracted fuzzy rules establish
linguistically interpretable relationships in the data that are easy to understand, validate, and extend. The system takes
several features identified from regions extracted by a segmentation algorithm and decides whether the regions are true
polyps. In the training phase, subtractive clustering is used to down-sample the negative regions in order to get balanced
data. The rule extraction method is based on estimating clusters in the data using the subtractive clustering algorithm;
each cluster obtained corresponds to a fuzzy rule that maps a region in the input space to an output class. After the
number of rules and initial rule parameters are obtained by cluster estimation, the rule parameters are optimized using a
hybrid learning algorithm which is a combination of least-squares estimation with back propagation. The evolved
Sugeno-type FIS was tested on a total of 129 scans containing 99 polyps of 5-15 mm, as identified by experienced radiologists.
The results indicate that for 93% detection sensitivity (on polyps), the evolved FIS method is able to remove 88% of FPs
generated by the segmentation algorithm, leaving 7.5 FPs per scan. The high sensitivity of our results shows the
promise of neuro-fuzzy classifiers as an aid for interpreting CTC examinations.
Computer aided detection of polyps in virtual colonoscopy with sameday faecal tagging
Author(s):
Silvia Delsanto;
Lia Morra;
Silvano Agliozzo;
Riccardo Baggio;
Delia Campanella M.D.;
Vincenzo Tartaglia M.D.;
Francesca Cerri M.D.;
Franco Iafrate;
Emanuele Neri M.D.;
Andrea Laghi;
Daniele Regge M.D.
One of the key factors which may lead to greater patient compliance in virtual colonoscopy is a well-tolerated bowel preparation; there is evidence that this may be obtained by faecal tagging techniques. In so-called "same-day faecal tagging" (SDFT) preparations, iodine contrast material is administered on the day of the exam in a hospital ward. The administration of oral contrast on the day of the procedure, however, often results in less homogeneous marking of faecal residues, and computer-aided detection (CAD) systems must be able to handle these kinds of preparations. The aim of this work is to present a CAD scheme capable of achieving good performance on CT datasets obtained using SDFT, in terms of both sensitivity and specificity. The electronically cleansed datasets are processed by a scheme composed of three steps: colon surface extraction, polyp candidate segmentation through curvature-based features, and discrimination between true polyps and false alarms. The system was evaluated on a dataset including 102 patients from three different centers. A rate of 8.2 false positives per scan was obtained at 100% sensitivity for polyps larger than 10 mm. In conclusion, CAD schemes for SDFT may be designed to obtain high performance.
A consensus embedding approach for segmentation of high resolution in vivo prostate magnetic resonance imagery
Author(s):
Satish Viswanath;
Mark Rosen;
Anant Madabhushi
Current techniques for localization of prostatic adenocarcinoma (CaP) via blinded trans-rectal ultrasound biopsy
are associated with a high false negative detection rate. While high resolution endorectal in vivo Magnetic
Resonance (MR) prostate imaging has been shown to have improved contrast and resolution for CaP detection
over ultrasound, similarity in intensity characteristics between benign and cancerous regions on MR images
contribute to a high false positive detection rate. In this paper, we present a novel unsupervised segmentation
method that employs manifold learning via consensus schemes for detection of cancerous regions from high
resolution 1.5 Tesla (T) endorectal in vivo prostate MRI. A significant contribution of this paper is a method to
combine multiple weak, lower-dimensional representations of high dimensional feature data in a way analogous
to classifier ensemble schemes, and hence create a stable and accurate reduced dimensional representation. After
correcting for MR image intensity artifacts, such as bias field inhomogeneity and intensity non-standardness,
our algorithm extracts over 350 3D texture features at every spatial location in the MR scene at multiple scales
and orientations. Non-linear dimensionality reduction schemes such as Locally Linear Embedding (LLE) and
Graph Embedding (GE) are employed to create multiple low dimensional data representations of this high
dimensional texture feature space. Our novel consensus embedding method is used to average object adjacencies
from within the multiple low dimensional projections so that class relationships are preserved. Unsupervised
consensus clustering is then used to partition the objects in this consensus embedding space into distinct classes.
Quantitative evaluation on 18 1.5 T prostate MR datasets against corresponding histology obtained from the multi-site
ACRIN trials shows a sensitivity of 92.65% and a specificity of 82.06%, which suggests that our method is
successfully able to detect suspicious regions in the prostate.
Improving supervised classification accuracy using non-rigid multimodal image registration: detecting prostate cancer
Author(s):
Jonathan Chappelow;
Satish Viswanath;
James Monaco;
Mark Rosen;
John Tomaszewski;
Michael Feldman;
Anant Madabhushi
Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling
of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial
extent of prostate cancer (CaP); the most common source for these labels is expert segmentations. When
ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer
ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual
segmentation is error prone, time consuming and not reproducible. Therefore, we present the use of multimodal
image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment
of the two modalities, in order to improve the quality of training data and hence classifier performance. We
quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to
the manual CaP annotation of expert radiologists. Five supervised CAD classifiers were trained using the labels
for CaP extent on MRI obtained by the expert and 4 different registration techniques. Two of the registration
methods were affine schemes: one based on maximization of mutual information (MI) and the other a method
we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates
high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by
succeeding the two affine registration methods with an elastic deformation step using thin-plate splines (TPS).
In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7
ground truth surrogates obtained by different combinations of the expert and registration segmentations. For
26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver
operating characteristic curve compared to that obtained from expert annotation. These results suggest that in
the presence of additional multimodal image information one can obtain more accurate object annotations than
achievable via expert delineation despite vast differences between modalities that hinder image registration.
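The mutual-information criterion driving both affine registration schemes above can be estimated from the joint intensity histogram of the overlapping voxels. A small sketch with equal-width binning (real registration packages use more sophisticated estimators):

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Histogram-based mutual information between two equal-length
    intensity sequences (e.g. the overlapping voxels of two images)."""
    lo_a, hi_a = min(a), max(a)
    lo_b, hi_b = min(b), max(b)

    def binned(v, lo, hi):
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)

    pairs = [(binned(x, lo_a, hi_a), binned(y, lo_b, hi_b))
             for x, y in zip(a, b)]
    n = len(pairs)
    pxy = Counter(pairs)                     # joint histogram
    px = Counter(i for i, _ in pairs)        # marginals
    py = Counter(j for _, j in pairs)
    return sum(c / n * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())
```

Registration maximizes this quantity over transform parameters: the better the alignment, the more one image's intensities predict the other's.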
Combining T2-weighted with dynamic MR images for computerized classification of prostate lesions
Author(s):
Pieter C. Vos;
Thomas Hambrock M.D.;
Jelle O. Barentsz M.D.;
Henkjan J. Huisman
In this study, we investigate the diagnostic performance of our CAD system when discriminating prostate cancer
from benign lesions and normal peripheral zone using registered multi-modal images. We have developed a
method that automatically extracts quantitative T2 values out of acquired T2-w images and evaluated its additional
value to the discriminating performance of our CAD system. This study addresses 2 issues when using
both T2-w and dynamic MR images for the characterization of prostate lesions. Firstly, T2-w images do not
provide quantitative values, and secondly, images can be misaligned due to patient movements. To compensate,
a mutual information registration strategy is performed after which T2 values are estimated using the acquired
proton density images. From the resulted quantitative T2 maps as well as the dynamic images relevant features
were extracted for training a support vector machine as classifier. The output of the classifier was used as a
measure of likelihood of malignancy. General performance of the scheme was evaluated using the area under the
ROC curve.
We conclude that it is feasible to automatically extract diagnostic T2 values out of acquired T2-w images.
Furthermore, a discriminating performance of 0.75 (0.66-0.85) was obtained when using only T2 values as features.
Combining the T2 values with pharmacokinetic parameters did not increase diagnostic performance in a pilot
study.
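In its simplest form, the quantitative T2 extraction described above is a two-point fit of the mono-exponential decay S(TE) = S0·e^(−TE/T2) to the proton-density-weighted (short TE) and T2-weighted (long TE) signals. A sketch of this textbook formula (the paper's actual estimation procedure may differ; parameter names are ours):

```python
import math

def estimate_t2(s_pd, s_t2w, te_pd, te_t2w):
    """Two-point T2 estimate from a proton-density-weighted and a
    T2-weighted acquisition, assuming mono-exponential decay
    S(TE) = S0 * exp(-TE / T2). Dividing the two signals cancels S0."""
    return (te_t2w - te_pd) / math.log(s_pd / s_t2w)
```

Because the ratio of the two signals cancels the unknown proton density S0, the result is a quantitative T2 value comparable across patients, which is exactly why the abstract prefers T2 maps over raw T2-weighted intensities.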
Automated detection of nodules attached to the pleural and mediastinal surface in low-dose CT scans
Author(s):
Bram van Ginneken;
Andre Tan;
Keelin Murphy;
Bart-Jan de Hoop;
Mathias Prokop
This paper presents a new computer-aided detection scheme for lung nodules attached to the pleural or mediastinal
surface in low dose CT scans. First the lungs are automatically segmented and smoothed. Any connected
set of voxels attached to the wall - with each voxel above minus 500 HU and the total object within a specified
volume range - was considered a candidate finding. For each candidate, a refined segmentation was computed
using morphological operators to remove attached structures. For each candidate, 35 features were defined,
based on their position in the lung and relative to other structures, and the shape and density within and around
each candidate. In a training procedure an optimal set of 15 features was determined with a k-nearest-neighbor
classifier and sequential floating forward feature selection.
The algorithm was trained with a data set of 708 scans from a lung cancer screening study containing 224
pleural nodules and tested on an independent test set of 226 scans from the same program with 58 pleural
nodules. The algorithm achieved a sensitivity of 52% with an average of 0.76 false positives per scan. At 2.5
false positive marks per scan, the sensitivity increased to 80%.
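Sequential floating forward selection, as used above with the k-NN classifier, alternates greedy inclusion with conditional exclusion steps. A simplified sketch with a pluggable criterion function (the paper would use cross-validated k-NN performance as `score`; this skeleton omits the bookkeeping the full SFFS algorithm uses to guarantee termination):

```python
def sffs(features, score, k):
    """Simplified sequential floating forward selection.
    `score` maps a frozenset of features to a criterion value."""
    selected = []
    while len(selected) < k:
        # Forward step: include the feature that maximizes the criterion.
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(frozenset(selected + [f])))
        selected.append(best)
        # Floating step: conditionally exclude an earlier feature
        # (never the one just added) if removal improves the criterion.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in selected[:-1]:
                reduced = [g for g in selected if g != f]
                if score(frozenset(reduced)) > score(frozenset(selected)):
                    selected = reduced
                    improved = True
                    break
    return selected
```

The floating (backward) step is what distinguishes SFFS from plain forward selection: a feature chosen early can be discarded later once better companions are found.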
Performance levels for computerized detection of nodules in different size and pattern groups on thin-slice CT
Author(s):
Qiang Li;
Feng Li;
Kunio Doi
We developed a computer-aided diagnostic (CAD) scheme for detection of lung nodules in CT, and investigated its
performance levels for nodules in different size and pattern groups. Our database consisted of 117 thin-slice CT scans
with 153 nodules. There were 68 (44.4%) small, 52 (34.0%) medium-sized, and 33 (21.6%) large nodules; 101 (66.0%)
solid and 52 (34.0%) nodules with ground glass opacity (GGO) in the database. Our CAD scheme consisted of lung
segmentation, selective nodule enhancement, initial nodule detection, accurate nodule segmentation, and feature
extraction and analysis techniques. We employed a case-based four-fold cross-validation method to evaluate the
performance levels of our CAD scheme. We detected 87% of nodules (small: 74%, medium-sized: 98%, large: 94%;
solid: 85%, GGO: 90%) with 6.5 false positives per scan; 82% of nodules (small: 68%, medium-sized: 94%, large: 91%;
solid: 78%, GGO: 89%) with 2.8 false positives per scan; and 77% of nodules (small: 63%, medium-sized: 90%, large:
89%; solid: 71%, GGO: 89%) with 1.5 false positives per scan. Our CAD scheme achieved a higher sensitivity for GGO
nodules than for solid nodules, because most of the small nodules were solid. In conclusion, our CAD scheme achieved a
low false positive rate and a relatively high detection rate for nodules with a large variation in size and pattern.
A novel method of partitioning regions in lungs and their usage in feature extraction for reducing false positives
Author(s):
Mausumi Acharyya;
Dinesh M. Siddu;
Alexandra Manevitch;
Jonathan Stoeckel
Chest X-ray (CXR) data is a 2D projection image. The main drawback of such an image is that each pixel
represents a volumetric integration. This poses a challenge in the detection of nodules and the estimation of
their characteristics. Due to human anatomy, many lung structures can be falsely identified as
nodules in projection data. Detection of nodules with a large number of false positives (FPs) adds more work
for the radiologists.
With the help of CAD algorithms we aim to identify regions which cause higher FP readings or provide
additional information for nodule detection based on the human anatomy.
Different lung regions have different image characteristics; we take advantage of this and propose an automatic
partitioning of the lung into vessel, apical, basal, and exterior pulmonary regions. Anatomical landmarks such as the
aortic arch and the end of the cardiac notch, along with inter- and intra-rib widths and their shape characteristics,
were used for this partitioning. The likelihood of FPs is higher in the vessel, apical, and exterior pulmonary regions
due to rib crossings,
overlap of vessel with rib and vessel branching. For each of these three cases, special features were designed
based on the histogram of rib slopes and the structural properties of rib segments. These features were
assigned different weights based on the partitioning.
An experiment was carried out using a prototype CAD system: 150 routine CXR studies were acquired from
three institutions (24 negative, the rest with one or more nodules). Our algorithm provided a sensitivity of 70.4%
with 5 FP/image for cross-validation without partition. Inclusion of the proposed techniques increases the
sensitivity to 78.1% with 4.1 FP/image.
Comparison of computer versus manual determination of pulmonary nodule volumes in CT scans
Author(s):
Alberto M. Biancardi;
Anthony P. Reeves;
Artit C. Jirapatnakul;
Tatiyana Apanasovitch;
David Yankelevitz;
Claudia I. Henschke
Accurate nodule volume estimation is necessary in order to estimate the clinically relevant growth rate or change
in size over time. An automated nodule volume-measuring algorithm was applied to a set of pulmonary nodules
that were documented by the Lung Image Database Consortium (LIDC). The LIDC process model specifies that
each scan is assessed by four experienced thoracic radiologists and that boundaries are to be marked around
the visible extent of the nodules for nodules 3 mm and larger. Nodules were selected from the LIDC database
with the following inclusion criteria: (a) they must have a solid component on a minimum of three CT image
slices and (b) they must be marked by all four LIDC radiologists. A total of 113 nodules met the selection
criterion with diameters ranging from 3.59 mm to 32.68 mm (mean 9.37 mm, median 7.67 mm). The centroid
of each marked nodule was used as the seed point for the automated algorithm. 95 nodules (84.1%) were
correctly segmented, although one of these was judged by the automated method not to meet the first selection
criterion. Of the remaining nodules, eight (7.1%) were structurally too complex or extensively attached, and 10 (8.8%)
were considered not properly segmented after a simple visual inspection by a radiologist. Since the LIDC specifications,
as aforementioned, instruct radiologists to include both solid and sub-solid parts, the automated method core
capability of segmenting solid tissues was augmented to take into account also the nodule sub-solid parts. We
ranked the distances of the automated method estimates and the radiologist-based estimates from the median
of the radiologist-based values. In 76.6% of the cases, the automated estimate was closer to the median than at
least one of the values derived from the manual markings, indicating very good agreement with the
radiologists' markings.
Repeatability and noise robustness of spicularity features for computer aided characterization of pulmonary nodules in CT
Author(s):
Rafael Wiemker;
Roland Opfer;
Thomas Bülow;
Sven Kabus;
Ekta Dharaiya
Computer aided characterization aims to support the differential diagnosis of indeterminate pulmonary nodules. A
number of published studies have correlated automatically computed features from image processing with clinical
diagnoses of malignancy vs. benignity. Often, however, a high number of features was trained on a relatively small
number of diagnosed nodules, raising a certain skepticism as to how salient and numerically robust the various features
really are. On the way towards computer aided diagnosis which is trusted in clinical practice, the credibility of the
individual numerical features has to be carefully established.
Nodule volume is the most crucial parameter for nodule characterization, and a number of studies are testing its
repeatability. Apart from functional parameters (such as dynamic CT enhancement and PET uptake values), the next
most widely used parameter is the surface characteristic (vascularization, spicularity, lobulation, smoothness). In this
study, we test the repeatability of two simple surface smoothness features which can discriminate between smoothly
delineated nodules and those with a high degree of surface irregularity.
Robustness of the completely automatically computed features was tested with respect to the following aspects: (a)
repeated CT scan of the same patient with equal dose, (b) repeated CT scan with much lower dose and much higher
noise, (c) repeated automatic segmentation of the nodules using varying segmentation parameters, resulting in differing
nodule surfaces. The 81 tested nodules were all solid or partially solid and included a high number of sub- and juxtapleural
nodules. We found that both tested surface characterization features correlated reasonably well with each other
(80%), and that in particular the mean-surface-shape-index showed an excellent repeatability: 98% correlation between
equal dose CT scans, 93% between standard-dose and low-dose scan (without systematic shift), and 97% between
varying HU-threshold of the automatic segmentation, which makes it a reliable feature to be used in computer aided
diagnosis.
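The mean-surface-shape-index feature is not defined in detail here; a common formulation is Koenderink's shape index, computed from the principal surface curvatures and averaged over the nodule surface. A minimal sketch, assuming that standard definition:

```python
import math

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] from principal curvatures k1 >= k2.
    Values near +/-1 indicate cap/cup-like (smooth) patches; values near 0
    indicate saddle-like (irregular) patches."""
    if k1 == 0.0 and k2 == 0.0:
        return 0.0  # flat patch: shape index undefined; pick a convention
    return (2.0 / math.pi) * math.atan2(k1 + k2, k1 - k2)

def mean_shape_index(curvature_pairs):
    """Average shape index over sampled surface points (k1, k2 pairs)."""
    return sum(shape_index(a, b) for a, b in curvature_pairs) / len(curvature_pairs)
```

Averaging over the segmented surface makes the feature insensitive to local noise, which is consistent with the high repeatability reported above.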
Volume analysis of treatment response of head and neck lesions using 3D level set segmentation
Author(s):
Lubomir Hadjiiski;
Ethan Street;
Berkman Sahiner;
Sachin Gujar;
Mohannad Ibrahim;
Heang-Ping Chan;
Suresh K. Mukherji
A computerized system for segmenting lesions in head and neck CT scans was developed to assist radiologists in
estimation of the response to treatment of malignant lesions. The system performs 3D segmentations based on a level set
model and uses as input an approximate bounding box for the lesion of interest. In this preliminary study, CT scans from
a pre-treatment exam and a post one-cycle chemotherapy exam of 13 patients containing head and neck neoplasms were
used. A radiologist marked 35 temporal pairs of lesions. 13 pairs were primary site cancers and 22 pairs were metastatic
lymph nodes. For all lesions, a radiologist outlined a contour on the best slice on both the pre- and post treatment scans.
For the 13 primary lesion pairs, full 3D contours were also extracted by a radiologist. The average pre- and post-treatment
areas on the best slices for all lesions were 4.5 and 2.1 cm2, respectively. For the 13 primary site pairs the
average pre- and post-treatment primary lesions volumes were 15.4 and 6.7 cm3 respectively. The correlation between
the automatic and manual estimates for the pre-to-post-treatment change in area for all 35 pairs was r=0.97, while the
correlation for the percent change in area was r=0.80. The correlation for the change in volume for the 13 primary site
pairs was r=0.89, while the correlation for the percent change in volume was r=0.79. The average signed percent error
between the automatic and manual areas for all 70 lesions was 11.0±20.6%. The average signed percent error between
the automatic and manual volumes for all 26 primary lesions was 37.8±42.1%. The preliminary results indicate that the
automated segmentation system can reliably estimate tumor size change in response to treatment relative to radiologist's
hand segmentation.
Automatic lesion tracking for a PET/CT based computer aided cancer therapy monitoring system
Author(s):
Roland Opfer;
Winfried Brenner;
Ingwer Carlsen;
Steffen Renisch;
Jörg Sabczynski;
Rafael Wiemker
Response assessment of cancer therapy is a crucial component towards a more effective and patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. However, dealing simultaneously with several PET/CT scans poses a serious workflow problem. It can be a difficult and tedious task to extract response criteria based upon an integrated analysis of PET and CT images and to track these criteria over time. In order to improve the workflow for serial analysis of PET/CT scans, we introduce in this paper a fast lesion tracking algorithm. We combine a global multi-resolution rigid registration algorithm with a local block matching and a local region growing algorithm. Whenever the user clicks on a lesion in the baseline PET scan, the course of standardized uptake values (SUV) is automatically identified and shown to the user as a graph plot. We validated our method on data from 7 patients, each of whom underwent two or three PET/CT scans during the course of a cancer therapy. An experienced nuclear medicine physician manually measured the courses of the maximum SUVs for altogether 18 lesions. The automatic detection of corresponding lesions yielded SUV measurements nearly identical to the manually measured ones: across the 38 maximum SUVs derived from manually and automatically detected lesions, we observed a correlation of 0.9994 and an average error of 0.4 SUV units.
Unsupervised classification of cirrhotic livers using MRI data
Author(s):
Gobert Lee;
Masayuki Kanematsu;
Hiroki Kato;
Hiroshi Kondo;
Xiangrong Zhou;
Takeshi Hara;
Hiroshi Fujita;
Hiroaki Hoshi
Cirrhosis of the liver is a chronic disease. It is characterized by the presence of widespread nodules and fibrosis in
the liver which results in characteristic texture patterns. Computerized analysis of hepatic texture patterns is usually
based on regions-of-interest (ROIs). However, not all ROIs are typical representatives of the disease stage of the
liver from which the ROIs originated. This leads to uncertainties in the ROI labels (diseased or non-diseased). On
the other hand, supervised classifiers are commonly used in determining the assignment rule. This presents a
problem as the training of a supervised classifier requires the correct labels of the ROIs. The main purpose of this
paper is to investigate the use of an unsupervised classifier, the k-means clustering, in classifying ROI based data.
In addition, a procedure for generating a receiver operating characteristic (ROC) curve depicting the classification
performance of k-means clustering is also reported. Hepatic MRI images of 44 patients (16 cirrhotic; 28 non-cirrhotic)
are used in this study. The MRI data are derived from gadolinium-enhanced equilibrium phase images.
For each patient, 10 ROIs selected by an experienced radiologist and 7 texture features measured on each ROI are
included in the MRI data. Results of the k-means classifier are depicted using an ROC curve. The area under the
curve (AUC) has a value of 0.704. This is slightly lower than, but comparable to, the AUCs of the LDA and ANN
classifiers, which are 0.781 and 0.801, respectively. Methods for constructing an ROC curve for k-means clustering
have not previously been reported in the literature.
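One way to obtain an ROC curve from k-means clustering, sketched under the assumption that each ROI is scored by the difference of its squared distances to the two cluster centroids and a threshold on that score is swept (the paper's exact construction may differ):

```python
import random

def kmeans2(points, iters=50, seed=0):
    """Minimal k-means with k = 2 on lists of feature vectors."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, 2)
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d = [sum((a - c) ** 2 for a, c in zip(p, ctr)) for ctr in centers]
            groups[d.index(min(d))].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def cluster_scores(points, centers):
    """Continuous score per ROI: (squared distance to centroid 0) minus
    (squared distance to centroid 1). Sweeping a threshold over this score
    traces out an ROC curve."""
    return [sum((a - c) ** 2 for a, c in zip(p, centers[0])) -
            sum((a - c) ** 2 for a, c in zip(p, centers[1])) for p in points]

def roc_auc(scores, labels):
    """AUC via the rank (Mann-Whitney) formulation; labels are 0/1."""
    pairs = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(pairs) if y == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)
```

Because k-means is unsupervised, which centroid corresponds to the cirrhotic class is unknown a priori; in practice the orientation of the score (and hence of the ROC curve) is resolved afterwards.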
An information theoretic view of the scheduling problem in whole-body CAD
Author(s):
Yiqiang Zhan;
Xiang Sean Zhou;
Arun Krishnan
Emerging whole-body imaging technologies push computer-aided detection/diagnosis (CAD) to scale up to the
whole-body level, which involves multiple organs or anatomical structures. This paper exploits the fact that the
various tasks in whole-body CAD are often highly dependent (e.g., the localization of the femur
heads strongly predicts the position of the iliac bifurcation of the aorta). One way to effectively employ task
dependency is to schedule the tasks such that outputs of some tasks are used to guide the others. In this sense,
optimal task scheduling is key to improve overall performance of a whole-body CAD system. In this paper,
we propose a method for task scheduling that is optimal in an information-theoretic sense. The central idea
is to schedule tasks in such an order that each operation achieves maximum expected information gain over
all the tasks. The formulation embeds two intuitive principles: (1) a task with higher confidence tends to be
scheduled earlier; (2) a task with higher predictive power for other tasks tends to be scheduled earlier. More
specifically, task dependency is modeled by conditional probability; the outcome of each task is assumed to be
probabilistic as well; and the objective function is based on the reduction of the summed conditional entropy
over all tasks. The validation is carried out on a challenging CAD problem, multi-organ localization in whole-body
CT. Compared to unscheduled and ad hoc scheduled organ detection/localization, our scheduled execution
achieves higher accuracy with much less computation time.
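A toy sketch of the scheduling idea, assuming task dependency is summarized by pairwise conditional entropies H(t|s): at each step the greedy rule runs the task whose execution maximizes the total reduction of the summed entropy over all tasks, which embeds both principles above (high confidence, i.e. low own entropy, and high predictive power). The pairwise approximation and all names are illustrative:

```python
def greedy_schedule(H, Hcond):
    """Greedy task ordering: at each step run the task with the largest
    expected information gain over all remaining tasks.
    H:     dict task -> marginal entropy H(T) (uncertainty before any task runs)
    Hcond: dict (t, s) -> conditional entropy H(t | s); a missing pair means
           s has no predictive power for t (H(t | s) = H(t))."""
    remaining = set(H)
    cur = dict(H)                 # current uncertainty about each task
    order = []
    while remaining:
        def gain(s):
            # running s removes its own uncertainty and tightens the others'
            return cur[s] + sum(
                max(0.0, cur[t] - Hcond.get((t, s), cur[t]))
                for t in remaining if t != s)
        best = max(sorted(remaining), key=gain)   # sorted() for a stable tie-break
        order.append(best)
        remaining.remove(best)
        for t in remaining:       # propagate the entropy reduction
            cur[t] = min(cur[t], Hcond.get((t, best), cur[t]))
    return order
```

In this sketch a confident task ('A' below) that strongly predicts another is scheduled first even though its own entropy is small.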
Multiparametric tissue abnormality characterization using manifold regularization
Author(s):
Kayhan Batmanghelich;
Xiaoying Wu;
Evangelia Zacharaki;
Clyde E. Markowitz;
Christos Davatzikos;
Ragini Verma
Tissue abnormality characterization is a generalized segmentation problem which aims at determining a continuous
score that can be assigned to the tissue which characterizes the extent of tissue deterioration, with completely
healthy tissue being one end of the spectrum and fully abnormal tissue such as lesions, being on the other end.
Our method is based on the assumptions that there is some tissue that is neither fully healthy nor completely
abnormal but lies in between the two in terms of abnormality, and that the voxel-wise score of tissue abnormality
lies on a spatially and temporally smooth manifold of abnormality. Unlike in a pure classification problem
which associates an independent label with each voxel without considering correlation with neighbors, or an
absolute clustering problem which does not consider a priori knowledge of tissue type, we assume that diseased
and healthy tissue lie on a manifold that encompasses the healthy tissue and diseased tissue, stretching from
one to the other. We propose a semi-supervised method for determining such an abnormality manifold, using
multi-parametric features incorporated into a support vector machine framework in combination with manifold
regularization. We apply the framework towards the characterization of tissue abnormality to brains of multiple
sclerosis patients.
Automated detection of breast vascular calcification on full-field digital mammograms
Author(s):
Jun Ge;
Heang-Ping Chan;
Berkman Sahiner;
Chuan Zhou;
Mark A. Helvie;
Jun Wei;
Lubomir M. Hadjiiski;
Yiheng Zhang;
Yi-Ta Wu;
Jiazheng Shi
Breast vascular calcifications (BVCs) are calcifications that line the blood vessel walls in the breast and appear
as parallel or tubular tracks on mammograms. BVC is one of the major causes of the false positive (FP) marks from
computer-aided detection (CADe) systems for screening mammography. With the detection of BVCs and the calcified
vessels identified, these FP clusters can be excluded. Moreover, recent studies have reported increasing interest in the
correlation between mammographically visible BVCs and the risk of coronary artery disease. In this study, we
developed an automated BVC detection method based on microcalcification prescreening and a new k-segments
clustering algorithm. The mammogram is first processed with a difference-image filtering technique designed to
enhance calcifications. The calcification candidates are selected by an iterative process that combines global
thresholding and local thresholding. A new k-segments clustering algorithm is then used to find a set of line segments
that may be caused by the presence of calcified vessels. A linear discriminant analysis (LDA) classifier was designed to
reduce false segments that are not associated with BVCs. Four features for each segment selected with stepwise feature
selection were used for this LDA classification. Finally, the neighboring segments were linked and dilated with
morphological dilation to cover the regions of calcified vessels. A data set of 16 FFDM cases with vascular
calcifications was collected for this preliminary study. Our preliminary result demonstrated that breast vascular
calcifications can be accurately detected and the calcified vessels identified. It was found that the automated method can
achieve a detection sensitivity of 65%, 70%, and 75% at 6.1 mm, 8.4 mm, and 12.6 mm of FP segments per image, respectively,
without any true clustered microcalcifications being falsely marked. Further work is underway to improve this method
and to incorporate it into our FFDM CADe system.
Human airway measurement from CT images
Author(s):
Jaesung Lee;
Anthony P. Reeves;
Sergei Fotin;
Tatiyana Apanasovich;
David Yankelevitz M.D.
A wide range of pulmonary diseases, including common ones such as COPD, affect the airways. If the dimensions
of the airways can be measured with high confidence, clinicians will be able to better diagnose diseases as well
as monitor progression and response to treatment. In this paper, we introduce a method to assess the airway
dimensions from CT scans, including the airway segments that are not oriented axially. First, the airway lumen
is segmented and skeletonized, and subsequently each airway segment is identified. We then represent each
airway segment using a segment-centric generalized cylinder model and assess airway lumen diameter (LD)
and wall thickness (WT) for each segment by determining inner and outer wall boundaries. The method was
evaluated on 14 healthy patients from a Weill Cornell database who had two scans within a 2 month interval.
The corresponding airway segments were located in two scans and measured using the automated method. The
total number of segments identified in both scans was 131. When 131 segments were considered altogether, the
average absolute change over two scans was 0.31 mm for LD and 0.12 mm for WT, with 95% limits of agreement
of [-0.85, 0.83] for LD and [-0.32, 0.26] for WT. The results were also analyzed on a per-patient basis; the
average absolute change was then 0.19 mm for LD and 0.05 mm for WT, with 95% limits of agreement for per-patient
changes of [-0.57, 0.47] for LD and [-0.16, 0.10] for WT.
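The 95% limits of agreement quoted above follow the standard Bland-Altman construction (mean difference plus or minus 1.96 standard deviations of the paired differences), which can be computed as:

```python
import statistics

def limits_of_agreement(x1, x2):
    """Bland-Altman 95% limits of agreement between paired repeated
    measurements: mean difference +/- 1.96 * SD of the differences."""
    diffs = [a - b for a, b in zip(x1, x2)]
    m = statistics.mean(diffs)
    s = statistics.stdev(diffs)   # sample standard deviation
    return m - 1.96 * s, m + 1.96 * s
```

Here `x1` and `x2` would hold the airway measurements (LD or WT) of the matched segments from the two scans.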
Computer aided detection of endobronchial valves
Author(s):
Robert A. Ochs;
Jonathan G. Goldin;
Fereidoun Abtin;
Raffi Ghurabi;
Ajay Rao;
Shama Ahmad;
Irene da Costa;
Matthew Brown
The ability to automatically detect and monitor implanted devices may serve an important role in patient care and
the evaluation of device and treatment efficacy. The purpose of this research was to develop a system for the
automated detection of one-way endobronchial valves implanted as part of a clinical trial for less invasive lung
volume reduction. Volumetric thin section CT data was obtained for 275 subjects; 95 subjects implanted with 246
devices were used for system development and 180 subjects implanted with 354 devices were reserved for testing.
The detection process consisted of pre-processing, pattern-recognition based detection, and a final device selection.
Following the pre-processing, a set of classifiers were trained using AdaBoost to discriminate true devices from
false positives (such as calcium deposits). The classifiers in the cascade used simple features (the mean or max
attenuation) computed near control points relative to a template model of the valve. Visual confirmation of the
system output served as the gold standard. FROC analysis was performed for the evaluation; the system could be set
so the mean sensitivity was 96.5% with a mean of 0.18 false positives per subject. These generic device modeling
and detection techniques may be applicable to other devices and useful for monitoring the placement and function of
implanted devices.
Computerized scheme for detection of diffuse lung diseases on CR chest images
Author(s):
Roberto R. Pereira Jr.;
Junji Shiraishi;
Feng Li;
Qiang Li;
Kunio Doi
We have developed a new computer-aided diagnostic (CAD) scheme for detection of diffuse lung disease in computed
radiographic (CR) chest images. One hundred ninety-four chest images (56 normals and 138 abnormals with diffuse
lung diseases) were used. The 138 abnormal cases were classified into three levels of severity (34 mild, 60 moderate,
and 44 severe) by an experienced chest radiologist with use of five different patterns, i.e., reticular, reticulonodular,
nodular, air-space opacity, and emphysema. In our computerized scheme, the first moment of the power spectrum, the
root-mean-square variation, and the average pixel value were determined for each region of interest (ROI), which was
selected automatically in the lung fields. The average pixel value and its dependence on the location of the ROI were
employed for identifying abnormal patterns due to air-space opacity or emphysema. A rule-based method was used
to assign each ROI one of four abnormality levels (0: normal, 1: mild, 2: moderate, and 3: severe). The distinction
between normal lungs and abnormal lungs with diffuse lung disease was determined based on the fractional number of
abnormal ROIs by taking into account the severity of abnormalities. Preliminary results indicated that the area under the
ROC curve was 0.889 for the 44 severe cases, 0.825 for the 104 severe and moderate cases, and 0.794 for all cases. We
have identified a number of causes of false positives on normal cases and of false negatives on abnormal cases, and
we have discussed potential approaches for improvement of our CAD scheme. In
conclusion, the CAD scheme for detection of diffuse lung diseases based on texture features extracted from CR chest
images has the potential to assist radiologists in their interpretation of diffuse lung diseases.
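The three per-ROI measures named above can be sketched as follows for a square ROI; the radial-frequency definition of the power-spectrum first moment and the mean-subtraction step are assumptions, since the abstract does not spell them out:

```python
import numpy as np

def roi_texture_features(roi):
    """Three ROI measures: average pixel value, RMS variation about the mean,
    and the first moment (radial centroid) of the 2D power spectrum.
    roi: square, non-constant 2D array of pixel values."""
    avg = roi.mean()
    centered = roi - avg
    rms = np.sqrt((centered ** 2).mean())
    spec = np.abs(np.fft.fftshift(np.fft.fft2(centered))) ** 2
    n = roi.shape[0]
    yy, xx = np.mgrid[:n, :n]
    freq = np.hypot(yy - n // 2, xx - n // 2)        # radial spatial frequency
    first_moment = (freq * spec).sum() / spec.sum()  # spectrum centroid
    return avg, rms, first_moment
```

A low-frequency pattern (e.g. a broad opacity) yields a small first moment, while fine reticular or nodular texture shifts spectral energy outward and raises it.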
Extraction and visualization of the central chest lymph-node stations
Author(s):
Kongkuo Lu;
Scott A. Merritt;
William E. Higgins
Lung cancer remains the leading cause of cancer death in the United States and is expected to account for nearly 30% of
all cancer deaths in 2007. Central to the lung-cancer diagnosis and staging process is the assessment of the central chest
lymph nodes. This assessment typically requires two major stages: (1) location of the lymph nodes in a three-dimensional
(3D) high-resolution volumetric multi-detector computed-tomography (MDCT) image of the chest; (2) subsequent nodal
sampling using transbronchial needle aspiration (TBNA). We describe a computer-based system for automatically locating
the central chest lymph-node stations in a 3D MDCT image. Automated analysis methods are first run that extract the
airway tree, airway-tree centerlines, aorta, pulmonary artery, lungs, key skeletal structures, and major-airway labels. This
information provides geometrical and anatomical cues for localizing the major nodal stations. Our system demarcates these
stations, conforming to criteria outlined for the Mountain and Wang standard classification systems. Visualization tools
within the system then enable the user to interact with these stations to locate visible lymph nodes. Results derived from
a set of human 3D MDCT chest images illustrate the usage and efficacy of the system.
Reduction of lymph tissue false positives in pulmonary embolism detection
Author(s):
Bernard Ghanem;
Jianming Liang;
Jinbo Bi;
Marcos Salganicoff;
Arun Krishnan
Pulmonary embolism (PE) is a serious medical condition, characterized by the partial/complete blockage of an
artery within the lungs. We have previously developed a fast yet effective approach for computer aided detection
of PE in computed tomographic pulmonary angiography (CTPA),1 which is capable of detecting both acute and
chronic PEs, achieving a benchmark performance of 78% sensitivity at 4 false positives (FPs) per volume. By
reviewing the FPs generated by this system, we found the most dominant type of FP, roughly one third of all
FPs, to be lymph/connective tissue. In this paper, we propose a novel approach that specifically aims at reducing
this FP type. Our idea is to explicitly exploit the anatomical context configuration of PE and lymph tissue in the
lungs: a lymph FP connects to the airway and is located outside the artery, while a true PE should not connect
to the airway and must be inside the artery. To realize this idea, given a detected candidate (i.e. a cluster of
suspicious voxels), we compute a set of contextual features, including its distance to the airway based on local
distance transform and its relative position to the artery based on fast tensor voting and Hessian "vesselness"
scores. Our tests on unseen cases show that these features can reduce the lymph FPs by 59%, while improving
the overall sensitivity by 3.4%.
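The distance-to-airway feature can be illustrated with a multi-source breadth-first distance transform computed from the airway segmentation; this 2D, city-block-metric version is a simplified stand-in for the local 3D distance transform mentioned above, and the function name is illustrative:

```python
from collections import deque

def manhattan_distance_map(mask):
    """Multi-source BFS distance transform: distance (in voxels, city-block
    metric via 4-connectivity) from every voxel to the nearest airway voxel.
    mask: 2D list of 0/1, where 1 marks segmented airway (a 2D stand-in for
    the 3D case)."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist
```

A candidate whose voxels have small values in this map touches the airway and is therefore more likely a lymph-tissue false positive than a true PE.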
Characterization of pulmonary nodules: effects of size and feature type on reported performance
Author(s):
Artit C. Jirapatnakul;
Anthony P. Reeves;
Tatiyana V. Apanasovich;
Alberto M. Biancardi;
David F. Yankelevitz M.D.;
Claudia I. Henschke
Differences in the size distribution of malignant and benign pulmonary nodules in databases used for training and testing characterization systems have a significant impact on the measured performance. The magnitude of this effect and methods to provide more relevant performance results are explored in this paper. Two- and three-dimensional features, both including and excluding size, and two classifiers, logistic regression and distance-weighted nearest-neighbors (dwNN), were evaluated on a database of 178 pulmonary nodules. For the full database, the area under the ROC curve (AUC) of the logistic regression classifier for 2D features with and without size was 0.721 and 0.614, respectively, and for 3D features with and without size, 0.773 and 0.737, respectively. In comparison, the performance of a simple size-threshold classifier was 0.675. In the second part of the study, performance was measured on a subset of 46 nodules, selected from the full database so that the malignant and benign nodules had similar size distributions. For this subset, the performance of the size-threshold classifier was 0.504. For logistic regression, the performances for 2D features, with and without size, were 0.578 and 0.478, and for 3D features, with and without size, 0.671 and 0.767. For both the full database and the subset, logistic regression exhibited better performance using 3D features than 2D features. This study suggests that in systems for nodule classification, size is responsible for a large part of the reported performance. To address this, system performance should be reported with respect to the performance of a size-threshold classifier.
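The size-threshold baseline corresponds to ranking nodules by diameter alone; its AUC is then the probability that a randomly chosen malignant nodule is larger than a randomly chosen benign one (the Mann-Whitney formulation). A minimal sketch, with a hypothetical helper name:

```python
def size_threshold_auc(diameters, labels):
    """AUC of the size-only baseline: score each nodule by its diameter
    (label 1 = malignant). Equals the probability that a random malignant
    nodule is larger than a random benign one, with ties counted as 1/2."""
    pos = [d for d, y in zip(diameters, labels) if y == 1]
    neg = [d for d, y in zip(diameters, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On a size-matched subset such as the 46-nodule set above, this baseline falls to roughly 0.5, exposing how much of a feature-based classifier's performance is attributable to size alone.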
Use of a random process-based fractal measure for characterization of nodules and suspicious regions in lung
Author(s):
Mausumi Acharyya;
Sumit Chakravarty;
Jonathan Stoeckel
A chest X-ray (CXR) is a projection image in which each pixel represents a volumetric integration. Consequently,
identification of nodules and their characteristics is a difficult task in such images.
Using a novel application of a random-process-based fractal image processing technique, we extract features for
nodule characterization. The uniqueness of the proposed technique lies in the fact that, instead of relying on
a priori information from the user as other random-process-inspired measures do, we translate the random walk
process into a feature based on its realization values. The Normalized Fractional Brownian Motion (NFBM) model
is derived from the random walk process. Using neighborhood region information in an incremental manner, we
can characterize the smoothness or roughness of a surface. The NFBM system gives a measure of
roughness of a surface which in our case is a suspicious region (probable nodule). A classification procedure uses
this measure to categorize nodule and non-nodule structures in the lung.
The NFBM feature set is integrated in a prototype CAD system for nodule detection in CXR. Our algorithm
provided a sensitivity of 75.9% with 3.1 FP/image on an independent test set of 50 CXR studies.
The impact of pulmonary nodule size estimation accuracy on the measured performance of automated nodule detection systems
Author(s):
Sergei V. Fotin;
Anthony P. Reeves;
David F. Yankelevitz;
Claudia I. Henschke
The performance of automated pulmonary nodule detection systems is typically qualified with respect to some
minimum size of nodule to be detected. Also, an evaluation dataset is typically constructed by expert radiologists
with all nodules larger than the minimum size being designated as true positives while all other smaller detected "nodules" are considered to be false positives. In this paper, we consider the negative impact that size estimation
error, either in the establishment of ground truth for the evaluation dataset or by the automated detection
method for the size estimate of nodule candidates, has on the measured performance of the detection system.
Furthermore, we propose a modified evaluation procedure that addresses the size estimation error issue.
The impact of the size measurement error was estimated for a documented research image database consisting
of whole-lung CT scans for 509 cases in which 690 nodules have been documented. We compute FROC curves
both with and without size error compensation and we found that for a minimum size limit of 4 mm the
performance of the system is underestimated by a sensitivity reduction of 5% and a false positive rate increase
of 0.25 per case. Therefore, error in nodule size estimation should be considered in the evaluation of automated
detection systems.
Computer-aided diagnosis: a 3D segmentation method for lung nodules in CT images by use of a spiral-scanning technique
Author(s):
Jiahui Wang;
Roger Engelmann;
Qiang Li
Lung nodule segmentation in computed tomography (CT) plays an important role in computer-aided detection, diagnosis,
and quantification systems for lung cancer. In this study, we developed a simple but accurate nodule segmentation
method in three-dimensional (3D) CT. First, a volume of interest (VOI) was determined at the location of a nodule. We
then transformed the VOI into a two-dimensional (2D) image by use of a "spiral-scanning" technique, in which a radial
line originating from the center of the VOI spirally scanned the VOI. The voxels scanned by the radial line were
arranged sequentially to form a transformed 2D image. Because the surface of a nodule in the 3D image became a curve in
the transformed 2D image, the spiral-scanning technique considerably simplified our segmentation method and enabled
us to obtain accurate segmentation results. We employed a dynamic programming technique to delineate the "optimal"
outline of a nodule in the 2D image, which was transformed back into the 3D image space to provide the interior of the
nodule. The proposed segmentation method was trained on the first and was tested on the second Lung Image Database
Consortium (LIDC) datasets. An overlap between nodule regions provided by computer and by the radiologists was
employed as a performance metric. The experimental results on the LIDC database demonstrated that our segmentation
method provided relatively robust and accurate segmentation results with mean overlap values of 66% and 64% for the
nodules in the first and second LIDC datasets, respectively, and would be useful for the quantification, detection, and
diagnosis of lung cancer.
Comparison of computer-aided diagnosis performance and radiologist readings on the LIDC pulmonary nodule dataset
Author(s):
Luyin Zhao;
Michael C. Lee;
Lilla Boroczky;
Victor Vloemans;
Roland Opfer
One challenge facing radiologists is the characterization of whether a pulmonary nodule detected in a CT scan is likely to be benign or malignant. We have developed an image processing and machine learning based computer-aided diagnosis (CADx) method to support such decisions by estimating the likelihood of malignancy of pulmonary nodules. The system computes 192 image features, which are combined with patient age to comprise the feature pool. We constructed an ensemble of 1000 linear discriminant classifiers using 1000 feature subsets selected from the feature pool with a random subspace method. The classifiers were trained on a dataset of 125 pulmonary nodules. The individual classifier results were combined using a majority voting method to form an ensemble estimate of the likelihood of malignancy. Validation was performed on nodules in the Lung Image Database Consortium (LIDC) dataset for which radiologist interpretations were available. We performed calibration to reduce the differences in the internal operating points and spacing between the radiologist ratings and the CADx algorithm. Comparing radiologists with the CADx system in assigning nodules to four malignancy categories, fair agreement was observed (κ=0.381), while binary rating yielded an agreement of κ=0.475, suggesting that CADx can be a promising second reader in a clinical setting.
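The random-subspace ensemble can be sketched as follows, with each member simplified to a nearest-class-mean linear rule (the actual system uses linear discriminant classifiers) and the majority-vote fraction serving as the malignancy estimate; all names are illustrative:

```python
import random

def train_subspace_ensemble(X, y, n_members=25, subset_size=5, seed=0):
    """Random-subspace ensemble: each member is a nearest-class-mean rule
    trained on a randomly chosen feature subset."""
    rnd = random.Random(seed)
    d = len(X[0])
    members = []
    for _ in range(n_members):
        feats = rnd.sample(range(d), min(subset_size, d))
        means = {}
        for cls in (0, 1):
            rows = [[x[f] for f in feats] for x, t in zip(X, y) if t == cls]
            means[cls] = [sum(col) / len(col) for col in zip(*rows)]
        members.append((feats, means))
    return members

def ensemble_malignancy(members, x):
    """Fraction of members voting 'malignant' (class 1) -- the majority-vote
    ensemble estimate of the likelihood of malignancy."""
    votes = 0
    for feats, means in members:
        v = [x[f] for f in feats]
        d0 = sum((a - b) ** 2 for a, b in zip(v, means[0]))
        d1 = sum((a - b) ** 2 for a, b in zip(v, means[1]))
        votes += d1 < d0
    return votes / len(members)
```

Training each member on a different feature subset decorrelates their errors, which is what lets a vote over many weak linear rules outperform any single classifier trained on the full feature pool.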
Characteristics of suspicious features in CT lung-cancer screening images
Author(s):
Philip F. Judy;
Yoshiko Kanasaki;
Francine L. Jacobson;
Chiara Del Frate
The high frequency of suspicious non-malignant image features limits the use of CT lung-cancer screening of
asymptomatic individuals. A reference database of 6 radiologists' localizations of suspicious image features was created.
The frequency, sizes, shapes, margins, and degree of calcification of the image features were determined. The
radiologist exam reports identified 50% of the findings reported on 5 or 6 occasions, while the CAD system identified 40%. The
radiologist exam reports missed 50% to 80% of the suspicious features retrospectively identified in CT lung-cancer images. Many
were less than 4 mm. Radiologists can use a lenient criterion in experiments with previously read screening cases.
Database decomposition of a knowledge-based CAD system in mammography: an ensemble approach to improve detection
Author(s):
Maciej A. Mazurowski;
Jacek M. Zurada;
Georgia D. Tourassi
Although ensemble techniques have been investigated in supervised machine learning, their potential
with knowledge-based systems is unexplored. The purpose of this study is to investigate the ensemble
approach with a knowledge-based (KB) CAD system for the detection of masses in screening mammograms.
The system is designed to determine the presence of a mass in a query mammographic region
of interest (ROI) based on its similarity with previously acquired examples of mass and normal cases.
Similarity between images is assessed using normalized mutual information. Two different approaches
of knowledge database decomposition were investigated to create the ensemble. The first approach was
random division of the knowledge database into a pre-specified number of equal size, separate groups.
The second approach was based on k-means clustering of the knowledge cases according to common
texture features extracted from the ROIs. The ensemble components were fused using a linear classifier.
Based on a database of 1820 ROIs (901 masses and 919 normal cases) and the leave-one-out cross-validation scheme,
the ensemble techniques improved the performance of the original KB-CAD system (Az = 0.86±0.01).
Specifically, random division resulted in ROC area index of Az = 0.90 ± 0.01 while k-means clustering
provided further improvement (Az = 0.91 ± 0.01). Although marginally better, the improvement
was statistically significant. The superiority of the k-means clustering scheme was robust regardless
of the number of clusters. This study supports the idea of incorporation of ensemble techniques with
knowledge-based systems in mammography.
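The similarity step above can be sketched in a few lines. The sketch below computes normalized mutual information between two equally sized ROIs; the bin count and the particular normalization NMI = (H(A) + H(B)) / H(A, B) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def normalized_mutual_information(roi_a, roi_b, bins=32):
    """Normalized mutual information NMI = (H(A) + H(B)) / H(A, B)
    between two equally sized image regions.  The bin count and this
    normalization are illustrative assumptions."""
    joint, _, _ = np.histogram2d(roi_a.ravel(), roi_b.ravel(), bins=bins)
    pxy = joint / joint.sum()

    def entropy(p):
        p = p[p > 0]          # drop empty bins; 0*log(0) -> 0
        return -np.sum(p * np.log(p))

    h_a = entropy(pxy.sum(axis=1))
    h_b = entropy(pxy.sum(axis=0))
    h_ab = entropy(pxy.ravel())
    return (h_a + h_b) / h_ab

# A query ROI is maximally similar to itself (NMI = 2, since the joint
# entropy then equals the marginal entropy) and less similar to an
# unrelated region (NMI closer to 1).
rng = np.random.default_rng(0)
query = rng.random((64, 64))
other = rng.random((64, 64))
nmi_same = normalized_mutual_information(query, query)
nmi_diff = normalized_mutual_information(query, other)
```

In a knowledge-based CAD of this kind, the query ROI would be scored against every case in the (sub-)database and the most similar cases would drive the decision.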
Correlative feature analysis of FFDM images
Author(s):
Yading Yuan;
Maryellen L. Giger;
Hui Li;
Charlene Sennett
Show Abstract
Identifying the corresponding image pair of a lesion is an essential
step for combining information from different views of the lesion
to improve the diagnostic ability of both radiologists and CAD systems. Because of the non-rigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this study, we present a computerized framework that differentiates the corresponding images
from different views of a lesion from non-corresponding ones. A dual-stage segmentation method, which employs an initial radial gradient index (RGI) based segmentation and an active contour model, was first
applied to extract mass lesions from the surrounding tissues. Then
various lesion features were automatically extracted from each of
the two views of each lesion to quantify the characteristics of margin,
shape, size, texture and context of the lesion, as well as its distance
to nipple. We employed a two-step method to select an effective subset
of features, and combined it with a BANN to obtain a discriminant
score, which yielded an estimate of the probability that the two images
are of the same physical lesion. ROC analysis was used to evaluate
the performance of the individual features and the selected feature
subset in the task of distinguishing between corresponding and non-corresponding
pairs. By using a FFDM database with 124 corresponding image pairs
and 35 non-corresponding pairs, the distance feature yielded an AUC
(area under the ROC curve) of 0.8 with leave-one-out evaluation
by lesion, and the feature subset, which includes distance feature,
lesion size and lesion contrast, yielded an AUC of 0.86. The improvement
from using multiple features was statistically significant compared
to single-feature performance (p < 0.001).
Matching mammographic regions in mediolateral oblique and cranio caudal views: a probabilistic approach
Author(s):
Maurice Samulski;
Nico Karssemeijer
Show Abstract
Most of the current CAD systems detect suspicious mass regions independently in single views. In this paper
we present a method to match corresponding regions in mediolateral oblique (MLO) and craniocaudal (CC)
mammographic views of the breast. For every possible combination of mass regions in the MLO view and CC
view, a number of features are computed, such as the difference in distance of a region to the nipple, a texture
similarity measure, the gray scale correlation and the likelihood of malignancy of both regions computed by single-view
analysis. In previous research, Linear Discriminant Analysis was used to discriminate between correct and
incorrect links. In this paper we investigate if the performance can be improved by employing a statistical method
in which four classes are distinguished. These four classes are defined by the combinations of view (MLO/CC)
and pathology (TP/FP) labels. We use distance-weighted k-Nearest Neighbor density estimation to estimate the
likelihood of a region combination. Next, a correspondence score is calculated as the likelihood that the region
combination is a TP-TP link. The method was tested on 412 cases with a malignant lesion visible in at least
one of the views. In 82.4% of the cases a correct link could be established between the TP detections in both
views. In future work, we will use the framework presented here to develop a context dependent region matching
scheme, which takes the number and likelihood of possible alternatives into account. It is expected that more
accurate determination of matching probabilities will lead to improved CAD performance.
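The distance-weighted k-NN step described above can be sketched as follows. This is a minimal illustration assuming a Euclidean feature space, inverse-distance weighting, and k = 5; the paper's exact kernel, feature scaling, and class structure are not reproduced.

```python
import numpy as np

def knn_class_posterior(x, train_X, train_y, k=5, eps=1e-6):
    """Distance-weighted k-nearest-neighbour estimate of P(class | x).
    Inverse-distance weights and the default k are assumptions."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)   # closer neighbours count more
    classes = np.unique(train_y)
    scores = np.array([weights[train_y[nearest] == c].sum() for c in classes])
    scores /= scores.sum()
    return dict(zip(classes, scores))

# Toy link features: "TP-TP" link combinations clustered near the
# origin, all other link types clustered around (5, 5).
train_X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                    [5.0, 5.1], [5.2, 5.0], [5.1, 5.2]])
train_y = np.array(["TP-TP", "TP-TP", "TP-TP", "other", "other", "other"])
posterior = knn_class_posterior(np.array([0.1, 0.1]), train_X, train_y, k=3)
```

The correspondence score of a region combination would then be the estimated posterior of the TP-TP class.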
Concordance of computer-extracted image features with BI-RADS descriptors for mammographic mass margin
Author(s):
Berkman Sahiner;
Lubomir M. Hadjiiski;
Heang-Ping Chan;
Chintana Paramagul;
Alexis Nees;
Mark Helvie;
Jiazheng Shi
Show Abstract
The purpose of this study was to develop and evaluate computer-extracted features for characterizing mammographic
mass margins according to BI-RADS spiculated and circumscribed categories. The mass was automatically segmented
using an active contour model. A spiculation measure for a pixel on the mass boundary was defined by using the angular
difference between the image gradient vector and the normal to the mass, averaged over pixels in a spiculation search
region. For the circumscribed margin feature, the angular difference between the principal eigenvector of the Hessian
matrix and the normal to the mass was estimated in a band of pixels centered at each point on the boundary, and the
feature was extracted from the resulting profile along the boundary. Three MQSA radiologists provided BI-RADS
margin ratings for a data set of 198 regions of interest containing breast masses. The features were evaluated with
respect to the individual radiologists' characterization using receiver operating characteristic (ROC) analysis, as well as
with respect to that from the majority rule, in which a mass was labeled as spiculated (circumscribed) if it was
characterized as such by 2 or 3 radiologists, and non-spiculated (non-circumscribed) otherwise. We also investigated the
performance of the features for consensus masses, defined as those labeled as spiculated (circumscribed) or non-spiculated
(non-circumscribed) by all three radiologists. When masses were labeled according to radiologists R1, R2,
and R3 individually, the spiculation feature had an area Az under the ROC curve of 0.90±0.04, 0.90±0.03, 0.88±0.03,
respectively, while the circumscribed margin feature had an Az value of 0.77±0.04, 0.74±0.04, and 0.80±0.03,
respectively. When masses were labeled according to the majority rule, the Az values for the spiculation and the
circumscribed margin features were 0.92±0.03 and 0.80±0.03, respectively. When only the consensus masses were
considered, the Az values for the spiculation and the circumscribed margin features were 0.96±0.04 and 0.87±0.04,
respectively. We conclude that the newly developed features had high accuracy for characterizing mass margins
according to BI-RADS descriptors.
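The spiculation feature above is built from the angle between the image gradient and the mass-boundary normal. A minimal sketch of that angular-difference computation, with a square search region and unit normals as simplifying assumptions:

```python
import numpy as np

def mean_gradient_normal_angle(image, boundary_pts, normals, radius=5):
    """Mean angular difference (radians) between the image gradient and
    the outward boundary normal, averaged over pixels in a square
    search region around each boundary point.  The square region and
    the averaging are simplifying assumptions; spiculated margins tend
    to produce gradients nearly perpendicular to the normal, while
    smooth circumscribed margins give gradients nearly parallel to it."""
    gy, gx = np.gradient(image.astype(float))
    h, w = image.shape
    angles = []
    for (py, px), (ny, nx) in zip(boundary_pts, normals):
        y0, y1 = max(py - radius, 0), min(py + radius + 1, h)
        x0, x1 = max(px - radius, 0), min(px + radius + 1, w)
        gys, gxs = gy[y0:y1, x0:x1].ravel(), gx[y0:y1, x0:x1].ravel()
        mag = np.hypot(gys, gxs)
        keep = mag > 1e-9
        if not keep.any():
            continue
        cos_a = (ny * gys[keep] + nx * gxs[keep]) / mag[keep]
        angles.append(np.mean(np.arccos(np.clip(np.abs(cos_a), 0.0, 1.0))))
    return float(np.mean(angles))

# On a horizontal intensity ramp the gradient points along x, so a
# normal along x gives angle ~0 and a normal along y gives ~pi/2.
img = np.tile(np.arange(32, dtype=float), (32, 1))
angle_parallel = mean_gradient_normal_angle(img, [(16, 16)], [(0.0, 1.0)])
angle_perp = mean_gradient_normal_angle(img, [(16, 16)], [(1.0, 0.0)])
```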
The effect of training with SFM images in a FFDM CAD system
Author(s):
Michiel Kallenberg;
Nico Karssemeijer
Show Abstract
The development of CAD systems that can handle Full Field Digital Mammography (FFDM) images is needed, as
FFDM is becoming more widespread. A large database of training samples is of major importance for developing a
CAD system. However, as FFDM is not yet as widely used as Screen Film Mammography (SFM), it is difficult to
collect a sufficient number of exams with malignant abnormalities. It would therefore be of great value if the
available databases of SFM images could be used to train an FFDM CAD system. In this paper we investigate this
possibility.
Because we trained our system with SFM images, we developed a method that converts the FFDM test images into an
SFM-like representation. The key point in this conversion method is the characteristic curve,
which describes the relationship between exposure and optical density for an SFM image. As exposure values
can be extracted from the raw FFDM images, the SFM-like representation can be obtained by applying a fitted
characteristic curve. Parameters of the curve were computed by simulating the Automatic Exposure Control
procedure as implemented in clinical practice.
We found that our FFDM CAD system, aimed at the detection and classification of masses as normal or malignant,
achieved case-based sensitivities of 70%, 80%, and 90% at 0.06, 0.20, and 0.60 FP/image when using SFM training
with 552 abnormal and 810 normal cases, compared to 0.06, 0.17, and 0.72 FP/image with FFDM training with 80
abnormal and 131 normal cases. These results demonstrate that digitized film databases can still be used as part
of a FFDM CAD system.
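The conversion hinges on a characteristic (H&D) curve mapping exposure to optical density. A sketch, assuming a logistic curve shape and made-up parameter values (in the paper, the parameters are fitted by simulating the Automatic Exposure Control):

```python
import numpy as np

def characteristic_curve(exposure, d_min=0.2, d_max=3.5, gamma=3.0, log_e0=1.0):
    """Film optical density as a logistic function of log10 exposure.
    The logistic form and every parameter value here are illustrative
    assumptions; the paper fits the curve by simulating the AEC."""
    log_e = np.log10(np.maximum(exposure, 1e-12))
    return d_min + (d_max - d_min) / (1.0 + np.exp(-gamma * (log_e - log_e0)))

# Converting raw FFDM exposure values to an SFM-like representation is
# then a pointwise application of the fitted curve.
exposures = np.logspace(-1, 3, 9)
densities = characteristic_curve(exposures)
```

The density output is monotonic in exposure and saturates at the base-plus-fog and shoulder densities, mimicking film response.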
Computer-aided diagnostic method for classification of Alzheimer's disease with atrophic image features on MR images
Author(s):
Hidetaka Arimura;
Takashi Yoshiura M.D.;
Seiji Kumazawa;
Kazuhiro Tanaka;
Hiroshi Koga;
Futoshi Mihara;
Hiroshi Honda;
Shuji Sakai;
Fukai Toyofuku;
Yoshiharu Higashida
Show Abstract
Our goal in this study was to develop a computer-aided diagnostic (CAD) method for the classification of Alzheimer's disease (AD) using atrophic image features derived from specific anatomical regions in three-dimensional (3-D) T1-weighted magnetic resonance (MR) images. In this study, the specific regions related to the cerebral atrophy of AD were the white matter, gray matter, and CSF regions. Cerebral cortical gray matter regions were determined by extracting the brain and white matter regions with a level-set-based method whose speed function depended on gradient vectors in the original image and pixel values in grown regions. The CSF regions in the cerebral sulci and lateral ventricles were extracted by wrapping the brain tightly with a zero level set determined from a level set function. The volumes of the specific regions and the cortical thickness were used as atrophic image features. Average cortical thickness was calculated in 32 subregions, which were obtained by dividing each brain region. Finally, AD patients were classified using a support vector machine trained on the image features of AD and non-AD cases. We applied our CAD method to MR images of whole brains obtained from 29 clinically diagnosed AD cases and 25 non-AD cases. The area under the receiver operating characteristic (ROC) curve obtained by our computerized method was 0.901, based on a leave-one-out test, in the identification of AD cases among 54 cases including 8 AD patients at early stages. The accuracy for discrimination between the 29 AD patients and 25 non-AD subjects was 0.840, determined at the point where the sensitivity equaled the specificity on the ROC curve. These results suggest that our CAD method based on atrophic image features may be promising for detecting AD patients in 3-D MR images.
Computerized detection of unruptured aneurysms in MRA images: reduction of false positives using anatomical location features
Author(s):
Yoshikazu Uchiyama;
Xin Gao;
Takeshi Hara;
Hiroshi Fujita;
Hiromichi Ando;
Hiroyasu Yamakawa;
Takahiko Asano;
Hiroki Kato;
Toru Iwama;
Masayuki Kanematsu;
Hiroaki Hoshi
Show Abstract
The detection of unruptured aneurysms is a major subject in magnetic resonance angiography (MRA). However, their accurate detection is often difficult because of the overlapping between the aneurysm and the adjacent vessels on maximum intensity projection images. The purpose of this study is to develop a computerized method for the detection of unruptured aneurysms in order to assist radiologists in image interpretation. The vessel regions were first segmented using gray-level thresholding and a region growing technique. The gradient concentration (GC) filter was then employed for the enhancement of the aneurysms. The initial candidates were identified in the GC image using a gray-level threshold. For the elimination of false positives (FPs), we determined shape features and an anatomical location feature. Finally, rule-based schemes and quadratic discriminant analysis were employed along with these features for distinguishing between the aneurysms and the FPs. The sensitivity for the detection of unruptured aneurysms was 90.0% with 1.52 FPs per patient. Our computerized scheme can be useful in assisting the radiologists in the detection of unruptured aneurysms in MRA images.
Coil compaction and aneurysm growth: image-based quantification using non-rigid registration
Author(s):
Mathieu De Craene;
José María Pozo;
Maria Cruz Villa;
Elio Vivas;
Teresa Sola;
Leopoldo Guimaraens;
Jordi Blasco;
Juan Macho;
Alejandro Frangi
Show Abstract
Endovascular treatment of intracranial aneurysms is a minimally invasive technique recognized as a valid alternative
to surgical clipping. However, endovascular treatment can be associated with aneurysm recurrence, due either
to coil compaction or to aneurysm growth. The quantification of coil compaction or aneurysm growth is usually
performed by manual measurements or visual inspection of images from consecutive follow-ups. Manual measurements
can detect large global deformations but may lack the accuracy needed to detect subtle or
more local changes between images. Image inspection can reveal a residual neck in the aneurysm but does
not differentiate aneurysm growth from coil compaction. In this paper, we propose to quantify coil
compaction and aneurysm growth independently using non-rigid image registration. Local changes of volume between images
at successive time points are identified using the Jacobian of the non-rigid transformation.
Two different non-rigid registration strategies were applied in order to explore the sensitivity of Jacobian-based
volume changes to the registration method: FFD registration based on mutual information, and the Demons algorithm.
This volume-variation measure was applied to four patients for whom a series of 3D Rotational Angiography
(3DRA) images, obtained at follow-up controls separated by two months to two years, was available. The
evolution of the coil and aneurysm volumes over this period was obtained separately, which allows distinguishing
between coil compaction and aneurysm growth. In the four cases studied in this paper, aneurysm recurrence
was always associated with aneurysm growth rather than strict coil compaction.
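The Jacobian-based volume quantification above can be sketched directly: for a displacement field u, the local volume ratio is det(I + ∇u), with values above 1 indicating local expansion (growth) and below 1 local shrinkage (compaction). Unit voxel spacing is an assumption of the sketch.

```python
import numpy as np

def jacobian_determinant(disp):
    """Local volume ratio det(I + grad u) of a dense 3-D displacement
    field disp with shape (Z, Y, X, 3); unit voxel spacing assumed."""
    grads = [np.gradient(disp[..., i]) for i in range(3)]  # grads[i][j] = du_i/dx_j
    jac = np.zeros(disp.shape[:3] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)

# A uniform 10% expansion along every axis gives det = 1.1**3 everywhere.
z, y, x = np.meshgrid(np.arange(8.0), np.arange(8.0), np.arange(8.0), indexing="ij")
disp = 0.1 * np.stack([z, y, x], axis=-1)
vol_ratio = jacobian_determinant(disp)
```

Integrating the Jacobian over the coil or aneurysm mask of the baseline image then yields the per-structure volume change between follow-ups.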
Automatic segmentation of different-sized leukoaraiosis regions in brain MR images
Author(s):
Yoshikazu Uchiyama;
Takuya Kunieda;
Takeshi Hara;
Hiroshi Fujita;
Hiromichi Ando;
Hiroyasu Yamakawa;
Takahiko Asano;
Hiroki Kato;
Toru Iwama;
Masayuki Kanematsu;
Hiroaki Hoshi
Show Abstract
Cerebrovascular diseases are the third leading cause of death in Japan. Therefore, a screening system for the early detection of asymptomatic brain diseases is widely used. In this screening system, leukoaraiosis is often detected in magnetic resonance (MR) images. The quantitative analysis of leukoaraiosis is important because its presence and extent are associated with an increased risk of severe stroke. However, thus far, the diagnosis of leukoaraiosis has generally been limited to subjective judgments by radiologists. Therefore, the purpose of this study was to develop a computerized method for the segmentation of leukoaraiosis and provide an objective measurement of lesion volume. Our database comprised T1- and T2-weighted images obtained from 73 patients. The locations of the leukoaraiosis regions were determined by an experienced neuroradiologist. We first segmented the cerebral parenchymal regions in the T1-weighted images using a region growing technique. To determine the initial candidate regions for leukoaraiosis, k-means clustering of pixel values in the T1- and T2-weighted images was applied to the segmented cerebral region. For the elimination of false positives (FPs), we determined features such as the location, size, and circularity of each initial candidate. Finally, rule-based schemes and a quadratic discriminant analysis with these features were employed to distinguish between leukoaraiosis regions and FPs. The results indicated that the sensitivity for the detection of leukoaraiosis was 100% with 5.84 FPs per image. Our computerized scheme can be useful in assisting radiologists in the quantitative analysis of leukoaraiosis in T1- and T2-weighted images.
A multi-resolution image analysis system for computer-assisted grading of neuroblastoma differentiation
Author(s):
Jun Kong;
Olcay Sertel;
Hiroyuki Shimada;
Kim L. Boyer;
Joel H. Saltz;
Metin N. Gurcan
Show Abstract
Neuroblastic Tumor (NT) is one of the most commonly occurring tumors in children. Of all types of NTs, neuroblastoma
is the most malignant tumor that can be further categorized into undifferentiated (UD), poorly-differentiated (PD) and
differentiating (D) types, in terms of the grade of pathological differentiation. Currently, pathologists determine the
grade of differentiation by visual examinations of tissue samples under the microscope. However, this process is
subjective and, hence, may lead to intra- and inter-reader variability. In this paper, we propose a multi-resolution image
analysis system that helps pathologists classify tissue samples according to their grades of differentiation. The inputs to
this system are color images of haematoxylin and eosin (H&E) stained tissue samples. The complete image analysis
system has five stages: segmentation, feature construction, feature extraction, classification and confidence evaluation.
Due to the large number of input images, both parallel processing and multi-resolution analysis were carried out to
reduce the execution time of the algorithm. Our training dataset consists of 387 image tiles of size 512x512 pixels
from three whole-slide images. We tested the developed system with an independent set of 24 whole-slide images, eight
from each grade. The developed system has an accuracy of 83.3% in correctly identifying the grade of differentiation,
and it takes about two hours, on average, to process each whole slide image.
Quantitative assessment of multiple sclerosis lesion load using CAD and expert input
Author(s):
Arkadiusz Gertych;
Alexis Wong M.D.;
Alan Sangnil;
Brent J. Liu
Show Abstract
Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting
the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL) performed on
magnetic resonance (MR) images is clinically useful and provides information about the development and change
reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and
inter-observer variability and are more time-consuming than computerized automatic (CAD) techniques. At present it
seems that methods based on human lesion identification preceded by non-interactive outlining by CAD are the best LL
quantification strategies.
We have developed a CAD system that automatically quantifies MS lesions, displays a 3-D lesion map, and appends
radiological findings to the original images according to the current DICOM standard. The system is also capable of
displaying and tracking changes and of comparing a patient's separate MRI studies to determine disease progression.
The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and
standardized archiving of manual contours is also implemented. Similarity coefficients calculated from the LL
quantities in the collected exams show good correlation between CAD-derived results and the expert's readings.
Combining the CAD approach with expert interaction may impact the diagnostic work-up of MS patients because of
improved reproducibility in LL assessment and reduced reading time for single MR or comparative exams. Inclusion
of CAD-generated outlines as DICOM-compliant overlays in the image data can serve as a better reference for tracking
MS progression.
Neurodegenerative changes in Alzheimer's disease: a comparative study of manual, semi-automated, and fully automated assessment using MRI
Author(s):
Klaus H. Fritzsche;
Frederik L. Giesel;
Tobias Heimann;
Philipp A. Thomann;
Horst K. Hahn;
Johannes Pantel;
Johannes Schröder;
Marco Essig;
Hans-Peter Meinzer
Show Abstract
Objective quantification of disease specific neurodegenerative changes can facilitate diagnosis and therapeutic
monitoring in several neuropsychiatric disorders. Reproducibility and easy-to-perform assessment are essential
to ensure applicability in clinical environments. Aim of this comparative study is the evaluation of a fully
automated approach that assesses atrophic changes in Alzheimer's disease (AD) and Mild Cognitive Impairment
(MCI).
21 healthy volunteers (mean age 66.2), 21 patients with MCI (66.6), and 10 patients with AD (65.1) were
enrolled. Subjects underwent extensive neuropsychological testing and MRI was conducted on a 1.5 Tesla clinical
scanner. Atrophic changes were measured automatically by a series of image processing steps including state of
the art brain mapping techniques. Results were compared with two reference approaches: a manual segmentation
of the hippocampal formation and a semi-automated estimation of temporal horn volume, which is based upon
interactive selection of two to six landmarks in the ventricular system.
All approaches separated controls and AD patients significantly (10^-5 < p < 10^-4) and showed a slight but
not significant increase of neurodegeneration for subjects with MCI compared to volunteers. The automated
approach correlated significantly with the manual (r = -0.65, p < 10^-6) and semi-automated (r = -0.83,
p < 10^-13) measurements. It showed high accuracy and at the same time maximized observer independence and
time reduction, and thus its usefulness for clinical routine.
Computer aided detection of tumor and edema in brain FLAIR magnetic resonance image using ANN
Author(s):
Nandita Pradhan;
A. K. Sinha
Show Abstract
This paper presents an efficient region based segmentation technique for detecting pathological tissues (Tumor &
Edema) of brain using fluid attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. This work
segments FLAIR brain images for normal and pathological tissues based on statistical features and wavelet transform
coefficients using k-means algorithm. The image is divided into small blocks of 4×4 pixels. The k-means algorithm is
used to cluster the image based on the feature vectors of blocks forming different classes representing different regions in
the whole image. With knowledge of the feature vectors of the different segmented regions, a supervised technique is used
to train an artificial neural network using the fuzzy backpropagation algorithm (FBPA). Segmentation for detecting healthy
tissues and tumors has been reported by several researchers using conventional MRI sequences such as T1-, T2-, and PD-weighted
sequences. This work successfully presents the segmentation of healthy and pathological tissues (both tumors and
edema) using FLAIR images. Finally, pseudo-coloring of the segmented and classified regions is performed for better human
visualization.
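The block-wise clustering step can be sketched as follows, using only the block mean and standard deviation as features (the paper additionally uses wavelet-transform coefficients) and a plain Lloyd's-algorithm k-means with naive first-k initialization; both simplifications are assumptions.

```python
import numpy as np

def block_features(img, bs=4):
    """Feature vector (mean, std) for each non-overlapping bs x bs block.
    The paper additionally uses wavelet coefficients; omitted here."""
    h, w = (img.shape[0] // bs) * bs, (img.shape[1] // bs) * bs
    blocks = img[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    flat = blocks.reshape(-1, bs * bs)
    return np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm with deterministic first-k initialization."""
    centers = X[:k].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels, centers

# Toy image: left half dark, right half bright -> two block classes.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels, _ = kmeans(block_features(img), k=2)
```

Each cluster of blocks then corresponds to a candidate tissue region, whose labeled feature vectors can serve as training data for the supervised classifier.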
Influence of signal-to-noise ratio and temporal stability on computer-aided detection of mammographic microcalcifications in digitized screen-film and full-field digital mammography
Author(s):
Laura M. Yarusso;
Robert M. Nishikawa
Show Abstract
Most computer-aided detection (CADe) schemes were developed for digitized screen-film mammography (dSFM) and
are being transitioned to full-field digital mammography (FFDM). In this research, phantoms were used to relate image
quality differences to the performance of the multiple components of our microcalcification CADe scheme, and to
identify to what extent, if any, each CADe component is likely to require modification for FFDM. We compared
multiple image quality metrics for a dSFM imaging chain (GE DMR, MinR-2000 and Lumisiys digitizer) and an FFDM
system (GE Senographe 2000D) and related them to CADe performance for images of 1) contrast-detail phantom disks
and 2) microcalcification phantoms (bone fragments and cadaver breasts). Higher object signal-to-noise ratio (SNR) in
FFDM compared with dSFM (p<0.05 for 62% of disks, and p>0.05 for 32% of disks) led to superior CADe signal and
cluster detection FROC performance. Signal segmentation was comparable (p>0.05 for 74% of disks) in dSFM and
FFDM and superior in FFDM (p<0.05) for 19% of disks. Better FFDM temporal stability led to more reproducible
CADe performance. For microcalcification phantoms, seven of eight computer-calculated features performed better or
comparably (p<0.05) at classifying true- and false-positive detections in FFDM. In conclusion, the image quality
improvements offered by FFDM compared to dSFM led to comparable or improved performance of the multiple stages
of our CADe scheme for microcalcification detection.
Toward a standard reference database for computer-aided mammography
Author(s):
Júlia E. E. Oliveira;
Mark O. Gueld;
Arnaldo de A. Araújo;
Bastian Ott;
Thomas M. Deserno
Show Abstract
Because of the lack of mammography databases with a large amount of codified images and identified characteristics
like pathology, type of breast tissue, and abnormality, there is a problem for the development of robust systems for
computer-aided diagnosis. Integrated to the Image Retrieval in Medical Applications (IRMA) project, we present an
available mammography database developed from the union of: The Mammographic Image Analysis Society Digital
Mammogram Database (MIAS), The Digital Database for Screening Mammography (DDSM), the Lawrence Livermore
National Laboratory (LLNL), and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH)
Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed
according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data
system (BI-RADS). The import was done automatically using scripts for image download, file format conversion, file
name, web page, and information file browsing. Disregarding resolution, this resulted in a total of 10,509 reference
images, of which 6,767 are associated with an IRMA contour information feature file. In accordance with the respective
license agreements, the database will be made freely available for research purposes, and may be used for image based
evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be
extended easily with further cases imported from a picture archiving and communication system (PACS).
A graph matching based automatic regional registration method for sequential mammogram analysis
Author(s):
Fei Ma;
Mariusz Bajger;
Murk J. Bottema
Show Abstract
This paper presents a method for associating regions of sequential
mammograms automatically using graph matching. The graph matching
utilises relative spatial relationships between the regions of a
mammogram to establish regional correspondences between two
mammograms. As a first step of the method, the mammogram is
segmented into separate regions using an adaptive pyramid
segmentation algorithm. This process produces both segmented
regions of the mammogram and a graph. The nodes of the graph
represent the segmented regions, and the lines represent the
relationships between the regions. The regions are then filtered
to remove undesired regions. To express the spatial relations
between the regions, we use a fuzzy logic expression, which takes
into account the characteristics of each region including the
shape, size and orientation. The spatial relations between regions
are utilised as weights of the graph. The backtrack algorithm is
then used to find the common subgraph between two graphs. The
proposed method is applied to 95 temporal pairs of mammograms. For
each temporal mammogram pair, an average of 13.2 regions are
matched. All region matches are classified as "good", "average",
"poor" and "unknown" by one of the authors (FM) based on visual
perception. 63.5% of region matches are identified as "good",
and 23.6% as "average". The percentages of "poor" and
"unknown" are 10.9% and 2% respectively. These results
indicate that our registration method may be useful for
establishing regional correspondence between sequential
mammograms.
Comparison of mammographic parenchymal patterns of normal subjects and breast cancer patients
Author(s):
Yi-Ta Wu;
Berkman Sahiner;
Heang-Ping Chan;
Jun Wei;
Lubomir M. Hadjiiski;
Mark A. Helvie;
Yiheng Zhang;
Jiazheng Shi;
Chuan Zhou;
Jun Ge;
Jing Cui
Show Abstract
In this study, we compared the texture features of mammographic parenchymal patterns (MPPs) of normal subjects and
breast cancer patients and evaluated whether a texture classifier can differentiate their MPPs. The breast image was first
segmented from the surrounding image background by boundary detection. Regions of interest (ROIs) were extracted
from the segmented breast area in the retroareolar region on the cranio-caudal (CC) view mammograms. A mass set
(MS) of ROIs was extracted from the mammograms with cancer, but ROIs overlapping with the mass were excluded. A
contralateral set (CS) of ROIs was extracted from the contralateral mammograms. A normal set (NS) of ROIs was
extracted from one CC view mammogram of the normal subjects. Each data set was randomly separated into two
independent subsets for 2-fold cross-validation training and testing. Texture features from run-length statistics (RLS) and
newly developed region-size statistics (RSS) were extracted to characterize the MPP of the breast. Linear discriminant
analysis (LDA) was performed to compare the MPP difference in each of the three pairs: MS-vs-NS, CS-vs-NS, and MS-vs-CS. The Az values for the three pairs were 0.79, 0.73, and 0.56, respectively. These results indicate that the MPPs of
the contralateral breast of breast cancer patients exhibit textures comparable to those of the affected breast and that the
MPPs of cancer patients are different from those of normal subjects.
Characterization of posterior acoustic features of breast masses on ultrasound images using artificial neural network
Author(s):
Jing Cui;
Berkman Sahiner;
Heang-Ping Chan;
Chintana Paramagul;
Alexis Nees;
Lubomir M. Hadjiiski;
Yi-Ta Wu
Show Abstract
Posterior acoustic enhancement and shadowing on ultrasound (US) images are important features used by radiologists
for characterization of breast masses. We are developing new feature extraction and classification methods for
computerized characterization of posterior acoustic patterns of breast masses into shadowing, no pattern, or enhancement
categories. The sonographic mass was segmented using an automated active contour segmentation method. Three
adjacent rectangular regions of interest (ROIs) of identical sizes were automatically defined at the same depth
immediately behind the mass. Three features related to enhancement, shadowing, and no posterior pattern were designed
by comparing the image intensities within these ROIs. Artificial neural network (ANN) classifiers were trained using a
leave-one-case-out resampling method. Two radiologists provided posterior acoustic descriptors for each mass. Posterior
acoustic patterns of masses for which both radiologists were in agreement were used as the ground truth, and the
agreement of the ANN scores with the radiologists' assessment was used as the performance measure. On a data set of
339 US images containing masses, the overall agreement between the computer and the radiologists was between 86%
and 87% depending on the ANN architecture. The output score of the designed ANN classifiers may be useful in
computer-aided breast mass characterization and content-based image retrieval systems.
Application of the Minkowski-functionals for automated pattern classification of breast parenchyma depicted by digital mammography
Author(s):
Holger F. Boehm;
Tanja Fischer;
Dororthea Riosk;
Stefanie Britsch;
Maximilian Reiser
Show Abstract
With an estimated lifetime risk of about 10%, breast cancer is the most common cancer among women in western societies. Extensive mammography screening programs have been implemented for diagnosis of the disease at an early stage. Several algorithms for computer-aided detection (CAD) have been proposed to help radiologists manage the increasing volume of mammographic image data and identify new cases of cancer. However, a major issue with most CAD solutions is that their performance strongly depends on the structure and density of the breast tissue. Prior information about the global tissue quality of a patient would be helpful for selecting the most effective CAD approach in order to increase the sensitivity of lesion detection. In our study, we propose an automated method for the textural evaluation of digital mammograms using the Minkowski functionals in 2D. 80 mammograms were consensus-classified by two experienced readers as fibrosis, involution/atrophy, or normal. For each case, the topology of the graylevel distribution is evaluated within a retromamillary image section of 512 x 512 pixels. In addition, we obtain parameters from the graylevel histogram (20th percentile, median, and mean graylevel intensity). As a result, correct classification of the mammograms based on the densitometric parameters is achieved in between 38% and 48% of cases, whereas topological analysis increases the rate to 83%. The findings demonstrate the effectiveness of the proposed algorithm. Compared to features obtained from graylevel histograms and comparable studies, we conclude that the presented method performs equally well or better. Our future work will focus on the characterization of mammographic tissue according to the Breast Imaging Reporting and Data System (BI-RADS). Moreover, other databases will be tested for an in-depth evaluation of the efficiency of our proposal.
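The three 2-D Minkowski functionals (area, perimeter, Euler characteristic) of a thresholded graylevel image can be computed by a simple counting scheme. Below, foreground pixels are treated as closed unit squares; this convention is one common choice and an assumption here, and the paper's exact estimator may differ.

```python
import numpy as np

def minkowski_functionals_2d(binary):
    """Area, perimeter, and Euler characteristic of a binary image,
    treating each foreground pixel as a closed unit square (an
    assumption; the paper's estimator may differ)."""
    ys, xs = np.nonzero(binary)
    area = len(ys)
    verts, edges = set(), set()
    for y, x in zip(ys, xs):
        verts.update({(y, x), (y, x + 1), (y + 1, x), (y + 1, x + 1)})
        edges.update({("h", y, x), ("h", y + 1, x), ("v", y, x), ("v", y, x + 1)})
    # Perimeter = number of foreground/background transitions (4-neighbourhood).
    padded = np.pad(binary.astype(int), 1)
    perim = int(np.abs(np.diff(padded, axis=0)).sum()
                + np.abs(np.diff(padded, axis=1)).sum())
    # Euler characteristic of the union of closed squares: V - E + F.
    euler = len(verts) - len(edges) + area
    return area, perim, euler

# A single pixel is one simply connected component; a 3x3 ring has a hole.
single = np.zeros((3, 3), dtype=int)
single[1, 1] = 1
ring = np.ones((3, 3), dtype=int)
ring[1, 1] = 0
```

Evaluating the three functionals over a range of graylevel thresholds produces the topological feature curves used for tissue classification.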
Improving mass detection performance by use of 3D difference filter in a whole breast ultrasonography screening system
Author(s):
Yuji Ikedo;
Daisuke Fukuoka;
Takeshi Hara;
Hiroshi Fujita;
Etsuo Takada M.D.;
Tokiko Endo M.D.;
Takako Morita
Show Abstract
Ultrasonography is one of the most important methods for breast cancer screening in Japan. Several mechanical
whole breast ultrasound (US) scanners have been developed for mass screening. We have reported a computer-aided
detection (CAD) scheme for the detection of masses in whole breast US images. In this study, the method
of detecting mass candidates and the method of reducing false positives (FPs) were improved in order to enhance
the performance of this scheme. A 3D difference (3DD) filter was newly developed to extract low-intensity regions.
The 3DD filter is defined as the difference of pixel values between the current pixel value and the mean pixel value
of 17 neighboring pixels. Low-intensity regions were efficiently extracted by use of 3DD filter values, and FPs were
reduced using a FP reduction method employing the rule-based technique and quadratic discriminant analysis
with the filter values. The performance of our previous and improved CAD schemes indicated a sensitivity of
80.0% with 16.8 FPs and 9.5 FPs per breast, respectively. The FPs of the improved scheme were reduced by
44% as compared to the previous scheme. The 3DD filter was useful for the detection of masses in whole breast
US images.
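The 3DD filter described above is a center-minus-neighborhood-mean operation. A minimal sketch follows; since the abstract does not specify the geometry of the 17-voxel neighborhood, the offsets are left as a parameter:

```python
import numpy as np

def difference_filter_3d(volume, offsets):
    """Difference between each voxel and the mean of its neighbors at the
    given (z, y, x) offsets (all within +/-1 here); borders are handled by
    edge replication."""
    vol = np.asarray(volume, dtype=float)
    pad = np.pad(vol, 1, mode='edge')
    acc = np.zeros_like(vol)
    nz, ny, nx = vol.shape
    for dz, dy, dx in offsets:
        acc += pad[1 + dz:1 + dz + nz, 1 + dy:1 + dy + ny, 1 + dx:1 + dx + nx]
    return vol - acc / len(offsets)
```

Low-intensity (mass-like) voxels produce strongly negative responses, which is what the candidate detection step can threshold on.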
Semiautomatic segmentation for the computer aided diagnosis of clustered microcalcifications
Author(s):
Matthias Elter;
Christian Held
Show Abstract
Screening mammography is recognized as the most effective tool for early breast cancer detection. However, its application in clinical practice has revealed some weaknesses. While clustered microcalcifications are often an
early sign of breast cancer, the discrimination of benign from malignant clusters based on their appearance in
mammograms is a very difficult task. Hence, it is not surprising that typically only 15% to 30% of breast biopsies
performed on calcifications will be positive for malignancy. As this low positive predictive value of mammography
regarding the diagnosis of calcification clusters results in many unnecessary biopsies performed on benign
calcifications, we propose a novel computer aided diagnosis (CADx) approach with the goal to improve the reliability
of microcalcification classification. As effective automatic classification of microcalcification clusters relies
on good segmentations of the individual calcification particles, many approaches to the automatic segmentation
of individual particles have been proposed in the past. Because none of the fully automatic approaches seem to
result in optimal segmentations, we propose a novel semiautomatic approach that has automatic components but
also allows some interaction of the radiologist. Based on the resulting segmentations we extract a broad range
of features that characterize the morphology and distribution of calcification particles. Using regions of interest
containing either benign or malignant clusters extracted from the digital database for screening mammography
we evaluate the performance of our approach using a support vector machine and ROC analysis. The resulting
ROC performance is very promising and we show that the performance of our semiautomatic segmentation is
significantly higher than that of a comparable fully automatic approach.
Rib detection for whole breast ultrasound image
Author(s):
Ruey-Feng Chang;
Yi-Wei Shen;
Jiayu Chen;
Yi-Hong Chou;
Chiun-Sheng Huang
Show Abstract
Recently, whole breast ultrasound (US) has emerged as an advanced screening technique for detecting breast abnormalities. Because many images are acquired for each case, a computer-aided system is needed to help physicians reduce diagnosis time. In automatic whole breast US, the ribs are a pivotal landmark, much like the pectoral muscle in mammography. In this paper, we develop an automatic rib detection method for whole breast ultrasound. The ribs can help define the screening area of a CAD system to reduce tumor detection time and can be used to register different passes of a case. In the proposed rib detection system, the whole breast images are first subsampled to reduce the computation of rib detection without reducing detection performance. Because the shadowing that occurs under the ribs in whole breast ultrasound images forms a sheet-like structure, Hessian analysis and a sheetness function are adopted to enhance sheet-like structures. Then, orientation thresholding is adopted to segment the sheet-like structures. In order to remove non-rib components from the segmented sheet-like structures, several features of ribs in whole breast ultrasound are used: connected component labeling is applied, and characteristics such as orientation, length, and radius are calculated. Finally, several criteria are applied to remove the non-rib components. In our experiments, 62 of the 65 ribs in 15 test cases were detected by the proposed system, a detection ratio of 95.38%. The ratio of position differences under 5 mm is 87.10% and the ratio of length differences under 10 mm is 85.48%. The results show that the proposed system can detect almost all ribs in breast US images with good accuracy.
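Rib shadows are plate-like structures in 3D, which is what the Hessian-based enhancement exploits. A simplified sheetness sketch with a finite-difference Hessian follows; the response function here is an assumption, as the paper's exact sheetness formula is not given in the abstract:

```python
import numpy as np

def sheetness(volume):
    """Crude Hessian-based sheetness: large where one eigenvalue dominates
    the other two in magnitude (a plate-like intensity structure)."""
    v = np.asarray(volume, dtype=float)
    grads = np.gradient(v)
    H = np.empty(v.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])  # second derivatives along each axis
        for j in range(3):
            H[..., i, j] = second[j]
    lam = np.linalg.eigvalsh(H)
    # reorder eigenvalues by absolute value so that |l1| >= |l2|
    order = np.argsort(np.abs(lam), axis=-1)
    lam = np.take_along_axis(lam, order, axis=-1)
    l1, l2 = np.abs(lam[..., 2]), np.abs(lam[..., 1])
    return l1 * np.exp(-(l2 / (l1 + 1e-12))**2)
```

On a synthetic volume containing a single bright plane, the response peaks on the plane and vanishes in the uniform background.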
Automatic categorization of mammographic masses using BI-RADS as a guidance
Author(s):
Yimo Tao;
Shih-Chung B. Lo;
Matthew T. Freedman;
Erini Makariou;
Jianhua Xuan
Show Abstract
In this study, we present a clinically guided technical method for content-based categorization of mammographic masses.
Our work is motivated by the continuing effort in content-based image annotation and retrieval to extract and model the
semantic content of images. Specifically, we classified the shape and margin of mammographic mass into different
categories, which are designated by radiologists according to descriptors from Breast Imaging Reporting and Data
System Atlas (BI-RADS). Experiments were conducted within subsets selected from datasets consisting of 346 masses.
In the experiments that categorize lesion shape, we obtained a precision of 70% with three classes and 87.4% with two
classes. In the experiments that categorize margin, we obtained precisions of 69.4% and 74.7% for the use of four and
three classes, respectively. In this study, we intend to demonstrate that this classification based method is applicable in
extracting the semantic characteristics of mass appearances, and thus has the potential to be used for automatic
categorization and retrieval tasks in clinical applications.
Effect of ROI size on the performance of an information-theoretic CAD system in mammography: multi-size fusion analysis
Author(s):
Robert C. Ike III;
Swatee Singh;
Brian Harrawood;
Georgia D. Tourassi
Show Abstract
Featureless, knowledge-based CAD systems are an attractive alternative to feature-based CAD because they require no
to minimal image preprocessing. Such systems compare images directly using the raw image pixel values rather than
relying on low-level image features. Specifically, information-theoretic (IT) measures such as mutual information (MI)
have been shown to be an effective, featureless, similarity measure for image comparisons. MI captures the statistical
relationship between the gray level values of corresponding image pixels. In a CAD system developed at our laboratory,
the above concept has been applied for location-specific detection of mammographic masses. The system is designed to
operate on a fixed size region of interest (ROI) extracted around a suspicious mammographic location. Since mass sizes
vary substantially, there is a potential drawback. When two ROIs are compared, it is unclear how much the parenchymal
background contributes in the calculated MI. This uncertainty could deteriorate CAD performance in the extreme cases,
namely when a small mass is present in the ROI or when a large mass extends beyond the fixed size ROI. The present
study evaluates the effect of ROI size on the overall CAD performance and proposes multisize analysis for possible
improvement. Based on two datasets of ROIs extracted from DDSM mammograms, there was a statistically significant
decline of the CAD performance as the ROI size increased. The best size ranged between 512x512 and 256x256 pixels.
Multisize fusion analysis using a linear model achieved further improvement in CAD performance for both datasets.
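The MI similarity between two ROIs can be estimated directly from their joint gray-level histogram, with no feature extraction, which is the sense in which the approach is featureless. A sketch, in which the 32-bin histogram is an assumption rather than the authors' choice:

```python
import numpy as np

def mutual_information(roi_a, roi_b, bins=32):
    """Mutual information between the gray levels of two equally sized ROIs,
    estimated from their joint histogram (natural-log units)."""
    a = np.asarray(roi_a, dtype=float).ravel()
    b = np.asarray(roi_b, dtype=float).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An ROI compared with itself gives a high MI, while two statistically independent ROIs give a value near zero, which is what makes MI usable as a similarity score.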
Optimized acquisition scheme for multi-projection correlation imaging of breast cancer
Author(s):
Amarpreet S. Chawla;
Ehsan Samei;
Robert S. Saunders;
Joseph Y. Lo;
Swatee Singh
Show Abstract
We are reporting the optimized acquisition scheme of multi-projection breast Correlation Imaging (CI)
technique, which was pioneered in our lab at Duke University. CI is similar to tomosynthesis in its image
acquisition scheme. However, instead of analyzing the reconstructed images, the projection images are directly
analyzed for pathology. Earlier, we presented an optimized data acquisition scheme for CI using mathematical
observer model. In this article, we are presenting a Computer Aided Detection (CADe)-based optimization
methodology. Towards that end, images from 106 subjects recruited for an ongoing clinical trial for
tomosynthesis were employed. For each patient, 25 angular projections of each breast were acquired. Projection
images were supplemented with a simulated 3 mm 3D lesion. Each projection was first processed by a
traditional CADe algorithm at high sensitivity, followed by a reduction of false positives by combining
geometrical correlation information available from the multiple images. Performance of the CI system was
determined in terms of free-response receiver operating characteristics (FROC) curves and the area under ROC
curves. For optimization, the components of acquisition such as the number of projections, and their angular
span were systematically changed to investigate which one of the many possible combinations maximized the
sensitivity and specificity. Results indicated that the performance of the CI system may be maximized with 7-11
projections spanning an angular arc of 44.8°, confirming our earlier findings using observer models. These
results indicate that an optimized CI system may potentially be an important diagnostic tool for improved breast
cancer detection.
Detection of architectural distortion in mammograms acquired prior to the detection of breast cancer using texture and fractal analysis
Author(s):
Shormistha Prajna;
Rangaraj M. Rangayyan;
Fábio J. Ayres;
J. E. Leo Desautels M.D.
Show Abstract
Mammography is a widely used screening tool for the early detection of breast cancer. One of the commonly
missed signs of breast cancer is architectural distortion. The purpose of this study is to explore the application
of fractal analysis and texture measures for the detection of architectural distortion in screening mammograms
taken prior to the detection of breast cancer. A method based on Gabor filters and phase portrait analysis was
used to detect initial candidates of sites of architectural distortion. A total of 386 regions of interest (ROIs) were
automatically obtained from 14 "prior mammograms", including 21 ROIs related to architectural distortion.
The fractal dimension of the ROIs was calculated using the circular average power spectrum technique. The
average fractal dimension of the normal (false-positive) ROIs was higher than that of the ROIs with architectural
distortion. For the "prior mammograms", the best receiver operating characteristics (ROC) performance achieved
was 0.74 with the fractal dimension and 0.70 with fourteen texture features, in terms of the area under the ROC
curve.
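The circular average power spectrum technique estimates the fractal dimension from the radial decay of the image power spectrum: for an image modeled as fractional Brownian motion, P(f) is proportional to f^(-beta) and D = (8 - beta) / 2. A sketch under that model (not the authors' code):

```python
import numpy as np

def fractal_dimension_psd(image):
    """Fractal dimension from the slope of the circularly averaged power
    spectrum, assuming P(f) ~ f^(-beta) and D = (8 - beta) / 2."""
    img = np.asarray(image, dtype=float)
    img = img - img.mean()
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
    cy, cx = psd.shape[0] // 2, psd.shape[1] // 2
    y, x = np.indices(psd.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # circular (radial) average of the power spectrum
    radial = np.bincount(r.ravel(), weights=psd.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, min(cy, cx))          # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(f), np.log(radial[f] + 1e-12), 1)
    return (8.0 + slope) / 2.0             # fitted slope is -beta
```

A smooth, slowly varying texture yields a steeper spectral decay and thus a lower estimated dimension than white noise, consistent with the abstract's observation that the measure separates the two ROI classes.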
Breast mass segmentation on dynamic contrast-enhanced magnetic resonance scans using the level set method
Author(s):
Jiazheng Shi;
Berkman Sahiner;
Heang-Ping Chan;
Chintana Paramagul;
Lubomir M. Hadjiiski;
Mark Helvie;
Yi-Ta Wu;
Jun Ge;
Yiheng Zhang;
Chuan Zhou;
Jun Wei
Show Abstract
The goal of this study was to develop an automated method to segment breast masses on dynamic contrast-enhanced
(DCE) magnetic resonance (MR) scans that were performed to monitor breast cancer response to neoadjuvant
chemotherapy. A radiologist experienced in interpreting breast MR scans defined the mass using a cuboid volume of
interest (VOI). Our method then used the K-means clustering algorithm followed by morphological operations for initial
mass segmentation on the VOI. The initial segmentation was then refined by a three-dimensional level set (LS) method.
The velocity field of the LS method was formulated in terms of the mean curvature which guaranteed the smoothness of
the surface and the Sobel edge information which attracted the zero LS to the desired mass margin. We also designed a
method to reduce segmentation leak by adapting a region growing technique. Our method was evaluated on twenty
DCE-MR scans of ten patients who underwent neoadjuvant chemotherapy. Each patient had pre- and post-chemotherapy
DCE-MR scans on a 1.5 Tesla magnet. Computer segmentation was applied to coronal T1-weighted images. The in-plane
pixel size ranged from 0.546 to 0.703 mm and the slice thickness ranged from 2.5 to 4.0 mm. The flip angle was
15 degrees, repetition time ranged from 5.98 to 6.7 ms, and echo time ranged from 1.2 to 1.3 ms. The computer
segmentation results were compared to the radiologist's manual segmentation in terms of the overlap measure defined as
the ratio of the intersection of the computer and the radiologist's segmentations to the radiologist's segmentation. Pre-
and post-chemotherapy masses had overlap measures of 0.81±0.11 (mean±s.d.) and 0.70±0.21, respectively.
A study of mammographic mass retrieval based on shape and texture descriptors
Author(s):
Zhengdong Zhou;
Fengmei Zou;
Kwabena Agyepong
Show Abstract
Content-based mass image retrieval technology, utilizing both shape and texture features, is investigated in this paper.
In order to retrieve similar mass patterns that help improve clinical diagnosis, the performance of mass retrieval using
curvature scale space descriptors (CSSDs) and R-transform descriptors was mainly studied. The mass contours in the
DDSM database (Univ. of South Florida) were preprocessed to eliminate curl cases, which is very important for the
extraction of features. The peak extraction method from a CSS contour map by circular shift and CSSDs matching
method were introduced. Preliminary experiments show that the performance of CSSDs and R-transform descriptors
outperform other features such as moment invariants, normalized Fourier descriptors (NFDs), and the combined texture
feature. By combining CSSDs with R-transform descriptors and the texture features based on Gray-level Co-occurrence
Matrices (GLCMs), the experiments show that the hybrid method gives a better performance in mass image retrieval
than CSSDs or R-transform descriptors.
Novel kinetic texture features for breast lesion classification on dynamic contrast enhanced (DCE) MRI
Author(s):
Shannon C. Agner;
Salil Soman;
Edward Libfeld;
Margie McDonald R.N.;
Mark A. Rosen;
Mitchell D. Schnall;
Deanna Chin;
John Nosher;
Anant Madabhushi
Show Abstract
Dynamic contrast enhanced (DCE) MRI has emerged as a promising new imaging modality for breast cancer
screening. Currently, radiologists evaluate breast lesions based on qualitative description of lesion morphology
and contrast uptake profiles. However, the subjectivity associated with qualitative description of breast lesions
on DCE-MRI introduces a high degree of inter-observer variability. In addition, the high sensitivity of MRI
results in poor specificity and thus a high rate of biopsies on benign lesions. Computer aided diagnosis (CAD)
methods have been previously proposed for breast MRI, but research in the field is far from comprehensive. Most
previous work has focused on either quantifying morphological attributes used by radiologists, characterizing
lesion intensity profiles which reflect uptake of contrast dye, or characterizing lesion texture. While there has
been much debate on the relative importance of the different classes of features (e.g., morphological, textural,
and kinetic), comprehensive quantitative comparisons between the different lesion attributes have been rare.
In addition, although kinetic signal enhancement curves may give insight into the underlying physiology of the
lesion, signal intensity is susceptible to MRI acquisition artifacts such as bias field and intensity non-standardness.
In this paper, we introduce a novel lesion feature that we call the kinetic texture feature, which we demonstrate
to be superior compared to the lesion intensity profile dynamics. Our hypothesis is that since lesion intensity is
susceptible to artifacts, lesion texture changes better reflect lesion class (benign or malignant). In this paper,
we quantitatively demonstrate the superiority of kinetic texture features for lesion classification on 18 breast
DCE-MRI studies compared to over 500 different morphological, kinetic intensity, and lesion texture features.
In conjunction with linear and non-linear dimensionality reduction methods, a support vector machine (SVM)
classifier yielded classification accuracy and positive predictive values of 78% and 86% with kinetic texture
features compared to 78% and 73% with morphological features and 72% and 83% with textural features,
respectively.
Tumor classification using perfusion volume fractions in breast DCE-MRI
Author(s):
Sang Ho Lee;
Jong Hyo Kim;
Jeong Seon Park;
Sang Joon Park;
Yun Sub Jung;
Jung Joo Song;
Woo Kyung Moon
Show Abstract
This study was designed to classify contrast enhancement curves using both the three-time-point (3TP) method and a clustering approach at full time points, and to introduce a novel evaluation method using perfusion volume fractions for
differentiation of malignant and benign lesions. DCE-MRI was applied to 24 lesions (12 malignant, 12 benign). After
region growing segmentation for each lesion, hole-filling and 3D morphological erosion and dilation were performed for
extracting final lesion volume. 3TP method and k-means clustering at full-time points were applied for classifying
kinetic curves into six classes. Intratumoral volume fraction for each class was calculated. ROC and linear discriminant
analyses were performed with distributions of the volume fractions for each class, pairwise and whole classes,
respectively. The best performance in each class showed accuracy (ACC), 84.7% (sensitivity (SE), 100%; specificity
(SP), 66.7% to a single class) to 3TP method, whereas ACC, 73.6% (SE, 41.7%; SP, 100% to a single class) to k-means
clustering. The best performance in pairwise classes showed ACC, 75% (SE, 83.3%; SP, 66.7% to four class pairs and
SE, 58.3%; SP, 91.7% to a single class pair) to 3TP method and ACC, 75% (SE, 75%; SP, 75% to a single class pair and
SE, 66.7%; SP, 83.3% to three class pairs) to k-means clustering. The performance in whole classes showed ACC, 75%
(SE, 83.3%; SP, 66.7%) to 3TP method and ACC, 75% (SE, 91.7%; SP, 58.3%) to k-means clustering. The results indicate
that tumor classification using perfusion volume fractions is helpful in selecting meaningful kinetic patterns for
differentiation of malignant and benign lesions, and that two different classification methods are complementary to each
other.
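The full-time-points step amounts to k-means clustering of per-voxel enhancement curves followed by per-class volume fractions. A minimal sketch with plain Lloyd iterations; the initialization, distance, and iteration count are assumptions (k = 6 classes as in the abstract):

```python
import numpy as np

def kinetic_volume_fractions(curves, k=6, iters=50, seed=0):
    """Cluster voxel enhancement curves (n_voxels x n_timepoints) with plain
    k-means and return the fraction of lesion voxels in each class."""
    x = np.asarray(curves, dtype=float)
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]  # random init
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # assign each curve to the nearest center (Euclidean distance)
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            sel = labels == j
            if sel.any():
                centers[j] = x[sel].mean(axis=0)  # update non-empty clusters
    return np.bincount(labels, minlength=k) / len(x)
```

The returned fractions sum to one and play the role of the intratumoral volume fractions fed into the ROC and discriminant analyses.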
Cell-based image partition and edge grouping: a nearly automatic ultrasound image segmentation algorithm for breast cancer computer aided diagnosis
Author(s):
Jie-Zhi Cheng;
Kuei-Wu Chen;
Yi-Hong Chou;
Chung-Ming Chen
Show Abstract
This study proposes a nearly automatic ultrasound image segmentation algorithm for computer-aided diagnosis of breast cancer. The method is realized in two phases, a partition phase and an edge grouping phase, both implemented on a cell tessellation generated by a two-pass watershed transformation. With this unique integration of the three ingredients (the partition phase, the grouping phase, and the cell tessellation), we show that breast lesion boundaries can be detected effectively and efficiently, even when the lesion shape is very uneven. The proposed algorithm can serve as the kernel of a CAD system for breast ultrasound to improve automation and performance.
Spatio-temporal registration in multiplane MRI acquisitions for 3D colon motility analysis
Author(s):
Oliver Kutter;
Sonja Kirchhoff M.D.;
Marina Berkovich;
Maximilian Reiser M.D.;
Nassir Navab
Show Abstract
In this paper we present a novel method for analyzing and visualizing dynamic peristaltic motion of the colon in 3D from two series of differently oriented 2D MRI images. To this end, we have defined an MRI examination protocol, and introduced methods for spatio-temporal alignment of the two MRI image series into a common reference. This represents the main contribution of this paper, which enables the 3D analysis of peristaltic motion. The objective is to provide a detailed insight into this complex motion, aiding in the diagnosis and characterization of colon motion disorders. We have applied the proposed spatio-temporal method on Cine MRI data sets of healthy volunteers. The results have been inspected and validated by an expert radiologist. Segmentation and cylindrical approximation of the colon results in a 4D visualization of the peristaltic motion.
Digital bowel cleansing free detection method of colonic polyp from fecal tagging CT images
Author(s):
Masahiro Oda;
Takayuki Kitasaka;
Kensaku Mori;
Yasuhito Suenaga;
Tetsuji Takayama;
Hirotsugu Takabatake;
Masaki Mori;
Hiroshi Natori;
Shigeru Nawano
Show Abstract
This paper presents a digital bowel cleansing (DBC) free detection method of colonic polyp from fecal tagging
CT images. Virtual colonoscopy (VC) or CT colonography is a new colon diagnostic method to examine the
inside of the colon. However, since the colon has many haustra and its shape is long and convoluted, there is a risk of overlooking lesions in blind areas caused by the haustra. Automated polyp detection from colonic CT images will reduce this risk. Although many methods for polyp detection have been proposed, they required DBC to detect polyps surrounded by tagged fecal material (TFM). However, DBC may change the shapes of polyps or haustra while removing the TFM, which adversely affects polyp detection. We propose a colonic polyp detection method that detects polyps surrounded by either air or TFM simultaneously, without any DBC processes. CT values inside polyps surrounded by air and polyps
surrounded by the TFM regions tend to gradually increase (blob structure) and decrease (inverse-blob structure)
from outward to inward, respectively. We thus employ blob and inverse-blob structure enhancement filters based
on the eigenvalues of the Hessian matrix to detect polyps using intensity characteristic of polyps. False positive
elimination is performed using three feature values: the volume, maximum value of the filter outputs, and the
standard deviation of CT values inside polyp candidate regions. We applied the proposed method to 104 cases
of abdominal CT images. Sensitivity for polyps ≥ 6 mm was 91.2% with 7.8 FPs/case.
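The blob and inverse-blob filters respond where all three Hessian eigenvalues share one sign: negative for the bright (air-surrounded) polyps, positive for the dark (TFM-surrounded) ones. A simplified finite-difference sketch; the response function here is an assumption, since the paper's filters are not fully specified in the abstract:

```python
import numpy as np

def blobness(volume, inverse=False):
    """Responds where all three Hessian eigenvalues are negative (a bright
    blob); with inverse=True, where all are positive (a dark, inverse blob)."""
    v = np.asarray(volume, dtype=float)
    grads = np.gradient(v)
    H = np.empty(v.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])     # finite-difference Hessian
        for j in range(3):
            H[..., i, j] = second[j]
    lam = np.linalg.eigvalsh(H)            # ascending eigenvalues
    if inverse:
        lam = -lam[..., ::-1]              # dark blobs: reuse the bright logic
    all_neg = np.all(lam < 0, axis=-1)
    return np.where(all_neg, np.prod(-lam, axis=-1), 0.0)
```

On a synthetic bright Gaussian blob the response peaks at the center and is zero far away; inverting the volume and setting inverse=True recovers the same peak, mirroring the air/TFM duality described above.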
Variation of quantitative emphysema measurements from CT scans
Author(s):
Brad M. Keller;
Anthony P. Reeves;
Claudia I. Henschke;
R. Graham Barr M.D.;
David F. Yankelevitz M.D.
Show Abstract
Emphysema is a lung disease characterized by destruction of the alveolar air sacs and is associated with long-term
respiratory dysfunction. CT scans allow for imaging of the anatomical basis of emphysema, and several measures have
been introduced for the quantification of the extent of disease. In this paper we compare these measures for repeatability
over time. The measures of interest in this study are emphysema index, mean lung density, histogram percentile, and the
fractal dimension. To allow for direct comparisons, the measures were normalized to a 0-100 scale. These measures have
been computed for a set of 2,027 scan pairs in which the mean interval between scans was 1.15 years (σ: 93 days). These
independent pairs were considered with respect to three different scanning conditions (a) 223 pairs where both were
scanned with a 5 mm slice thickness protocol, (b) 695 with the first scanned with the 5 mm protocol and the second with
a 1.25 mm protocol, and (c) 1109 pairs scanned both times using a 1.25 mm protocol. We found that average normalized
emphysema index and histogram percentiles scores increased by 5.9 and 11 points respectively, while the fractal
dimension showed stability with a mean difference of 1.2. We also found a 7-point bias introduced for the emphysema index under condition (b), and that the fractal dimension measure is least affected by scanner parameter changes.
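Two of the compared measures, the emphysema index and the histogram percentile, are simple densitometric statistics. A sketch follows; the -950 HU cutoff and the 15th percentile are common choices from the literature, not values stated in the abstract:

```python
import numpy as np

def emphysema_index(lung_hu, threshold=-950):
    """Emphysema index: percentage of lung voxels with CT attenuation
    below a threshold (commonly -950 HU for thin-slice CT)."""
    hu = np.asarray(lung_hu, dtype=float)
    return 100.0 * np.mean(hu < threshold)

def histogram_percentile(lung_hu, pct=15):
    """HU value below which `pct` percent of lung voxels fall (e.g. Perc15)."""
    return np.percentile(np.asarray(lung_hu, dtype=float), pct)
```

Both operate on the histogram of segmented lung voxels only, which is why slice thickness and reconstruction changes between scan pairs shift them directly.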
Computer-aided interpretation of ICU portable chest images: automated detection of endotracheal tubes
Author(s):
Zhimin Huo;
Simon Li;
Minjie Chen;
John Wandtke M.D.
Show Abstract
In intensive care units (ICU), endotracheal (ET) tubes are inserted to assist patients who may have difficulty breathing.
A malpositioned ET tube could lead to a collapsed lung, which is life threatening. The purpose of this study is to
develop a new method that automatically detects the positioning of ET tubes on portable chest X-ray images. The
method determines a region of interest (ROI) in the image and processes the raw image to provide edge enhancement for
further analysis. The search of ET tubes is performed within the ROI. The ROI is determined based upon the analysis of
the positions of the detected lung area and the spine in the image. Two feature images are generated: a Haar-like image
and an edge image. The Haar-like image is generated by applying a Haar-like template to the raw ROI or the enhanced
version of the raw ROI. The edge image is generated by applying a direction-specific edge detector. Both templates are
designed to represent the characteristics of the ET tubes. Thresholds are applied to the Haar-like image and the edge
image to detect initial tube candidates. Region growing, combined with curve fitting of the initial detected candidates, is
performed to detect the entire ET tube. The region growing or "tube growing" is guided by the fitted curve of the initial
candidates. Merging of the detected tubes after tube growing is performed to combine the detected broken tubes. Tubes
within a predefined space can be merged if they meet a set of criteria. Features, such as width, length of the detected
tubes, tube positions relative to the lung and spine, and the statistics from the analysis of the detected tube lines, are
extracted to remove the false-positive detections in the images. The method is trained and evaluated on two different
databases. Preliminary results show that computer-aided detection of tubes in portable chest X-ray images is promising.
It is expected that automated detection of ET tubes could lead to timely detection of malpositioned tubes, thus improving overall patient care.
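The Haar-like template step can be illustrated with a simple bright-line template, where each pixel's response is its intensity minus the mean of two flanking bands; the authors' actual template geometry is not given in the abstract:

```python
import numpy as np

def haar_line_response(image, half_width=2):
    """Response of a vertical bright-line Haar-like template: center value
    minus the mean of the pixels `half_width` columns to the left and right
    (edge-replicated at the borders)."""
    img = np.asarray(image, dtype=float)
    w = half_width
    pad = np.pad(img, ((0, 0), (w, w)), mode='edge')
    left = pad[:, :img.shape[1]]          # columns shifted by -w
    right = pad[:, 2 * w:]                # columns shifted by +w
    return img - (left + right) / 2.0
```

A radiopaque, tube-like vertical line yields a strong positive response along its length, which thresholding can then pick up as initial tube candidates.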
Automatic segmentation of lung parenchyma based on curvature of ribs using HRCT images in scleroderma studies
Author(s):
M. N. Prasad;
M. S. Brown;
S. Ahmad;
F. Abtin;
J. Allen;
I. da Costa;
H. J. Kim;
M. F. McNitt-Gray;
J. G. Goldin
Show Abstract
Segmentation of lungs in the setting of scleroderma is a major challenge in medical image analysis.
Threshold based techniques tend to leave out lung regions that have increased attenuation, for example in
the presence of interstitial lung disease or in noisy low dose CT scans. The purpose of this work is to
perform segmentation of the lungs using a technique that selects an optimal threshold for a given
scleroderma patient by comparing the curvature of the lung boundary to that of the ribs. Our approach is
based on adaptive thresholding and it tries to exploit the fact that the curvature of the ribs and the curvature
of the lung boundary are closely matched. At first, the ribs are segmented and a polynomial is used to
represent the ribs' curvature. A threshold value to segment the lungs is selected iteratively such that the
deviation of the lung boundary from the polynomial is minimized. A Naive Bayes classifier is used to build
the model for selection of the best fitting lung boundary. The performance of the new technique was
compared against a standard approach using a simple fixed threshold of -400 HU followed by region growing. The two techniques were evaluated against manual reference segmentations using a volumetric overlap fraction (VOF), and the adaptive threshold technique was found to be significantly better than the fixed threshold technique.
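The core idea, picking the threshold whose lung boundary best matches the rib curvature polynomial, can be sketched as a brute-force search. This is a hypothetical simplification: the boundary extraction here is crude (topmost below-threshold pixel per column), and the actual method selects the best-fitting boundary with a Naive Bayes model:

```python
import numpy as np

def select_threshold(slice_hu, rib_poly_coeffs, thresholds=range(-700, -300, 50)):
    """Return the candidate threshold whose lung boundary deviates least,
    on average, from the fitted rib polynomial (rows as a function of column)."""
    img = np.asarray(slice_hu, dtype=float)
    cols = np.arange(img.shape[1])
    ref = np.polyval(rib_poly_coeffs, cols)   # rib curvature reference
    best_t, best_err = None, np.inf
    for t in thresholds:
        mask = img < t
        err, n = 0.0, 0
        for c in cols:
            rows = np.nonzero(mask[:, c])[0]
            if rows.size:                      # topmost lung pixel per column
                err += abs(rows[0] - ref[c]); n += 1
        if n and err / n < best_err:
            best_err, best_t = err / n, t
    return best_t
```

Because the score is computed per patient, thresholds adapt to increased attenuation from interstitial disease instead of being fixed globally.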
Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images
Author(s):
Shinsuke Saita;
Mitsuru Kubo;
Yoshiki Kawata;
Noboru Niki;
Yasutaka Nakano;
Hironobu Ohmatsu;
Keigo Tominaga;
Kenji Eguchi;
Noriyuki Moriyama
Show Abstract
The number of emphysema patients tends to increase due to aging and smoking. Emphysematous disease destroys the alveoli and cannot be repaired; thus early detection is essential. The CT value of lung tissue decreases with the destruction of lung structure, falling below that of normal lung; such a low-density absorption region is referred to as a Low Attenuation Area (LAA). So far, the conventional way of extracting LAA by simple thresholding has been proposed. However, the CT values of a CT image fluctuate with the measurement conditions, with various bias components such as inspiration, expiration, and congestion. It is therefore necessary to consider these bias components in the extraction of LAA. We propose an LAA extraction algorithm that removes these bias components. The algorithm was applied to a phantom image. Then, using low-dose CT (normal: 30 cases, obstructive lung disease: 26 cases), we extracted early-stage LAA and quantitatively analyzed the lung lobes using lung structure.
An evaluation of automated broncho-arterial ratios for reliable assessment of bronchiectasis
Author(s):
Benjamin L. Odry;
Atilla P. Kiraly;
Carol L. Novak;
David P. Naidich;
Jean-Francois Lerallut
Show Abstract
Bronchiectasis, the permanent dilatation of the airways, is frequently evaluated by computed tomography (CT) in order
to determine disease progression and response to treatment. Normal airways have diameters of approximately the same
size as their accompanying artery, and most scoring systems for quantifying bronchiectasis severity ask physicians to
estimate the broncho-arterial ratio. However, the lack of standardization coupled with inter-observer variability limits
diagnostic sensitivity and the ability to make reliable comparisons with follow-up CT studies. We have developed a
Computer Aided Diagnosis method to detect airway disease by locating abnormal broncho-arterial ratios. Our approach
is based on computing a tree model of the airways followed by automated measurements of broncho-arterial ratios at
peripheral airway locations. The artery accompanying a given bronchus is automatically determined by correlation of its
orientation and proximity to the airway, while the diameter measurements are based on the full-width half maximum
method. This method was previously evaluated subjectively; in this work we quantitatively evaluate the airway and
vessel measurements on 9 CT studies and compare the results with three independent readers. The automatically selected
artery location was in agreement with the readers in 75.3% of the cases compared with 65.6% agreement of the readers
with each other. The reader-computer variability in lumen diameters (7%) was slightly lower than that of the readers
with respect to each other (9%), whereas the reader-computer variability in artery diameter (18%) was twice that of the
readers (8%), but still acceptable for detecting disease. We conclude that the automatic system has comparable accuracy
to that of readers, while providing greater speed and consistency.
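The full-width half-maximum diameter measurement mentioned above can be sketched on a 1D intensity profile taken across an airway lumen. This is an illustrative reading of the general technique, not the authors' implementation; the function name and pixel spacing are assumptions.

```python
import numpy as np

def fwhm_diameter(profile, spacing_mm=1.0):
    """Estimate a structure's diameter from a 1D intensity profile using
    the full-width half-maximum (FWHM) criterion."""
    profile = np.asarray(profile, dtype=float)
    baseline = profile.min()
    half = baseline + (profile.max() - baseline) / 2.0
    above = np.where(profile >= half)[0]
    if above.size == 0:
        return 0.0
    # width between first and last samples at or above the half-maximum level
    return (above[-1] - above[0]) * spacing_mm

# synthetic airway lumen profile: dark wall, bright lumen, 0.5 mm sampling
profile = [0, 0, 1, 5, 9, 10, 9, 5, 1, 0, 0]
print(fwhm_diameter(profile, spacing_mm=0.5))  # 2.0
```

In practice the profile would be resampled perpendicular to the airway axis; here a hand-made profile stands in.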
CT-guided automated detection of lung tumors on PET images
Author(s):
Yunfeng Cui;
Binsheng Zhao;
Timothy J. Akhurst;
Jiayong Yan;
Lawrence H. Schwartz
Show Abstract
The calculation of standardized uptake values (SUVs) in tumors on serial [18F]2-fluoro-2-deoxy-D-glucose (18F-FDG)
positron emission tomography (PET) images is often used for the assessment of therapy response. We present a
computerized method that automatically detects lung tumors on 18F-FDG PET/Computed Tomography (CT) images
using both anatomic and metabolic information. First, on CT images, relevant organs, including lung, bone, liver and
spleen, are automatically identified and segmented based on their locations and intensity distributions. Hot spots (SUV
>= 1.5) on 18F-FDG PET images are then labeled using connected component analysis. The resultant "hot objects"
(geometrically connected hot spots in three dimensions) that fall into, reside at the edges of, or lie in the vicinity of
the lungs are considered tumor candidates. To determine true lesions, further analyses are conducted, including reduction
of tumor candidates by the masking out of hot objects within CT-determined normal organs, and analysis of candidate
tumors' locations, intensity distributions and shapes on both CT and PET. The method was applied to 18F-FDG-PET/CT
scans from 9 patients, on which 31 target lesions had been identified by a nuclear medicine radiologist during a Phase II
lung cancer clinical trial. Out of 31 target lesions, 30 (97%) were detected by the computer method. However,
sensitivity and specificity were not estimated because not all lesions had been marked up in the clinical trial. The
method effectively excluded the hot spots caused by mediastinum, liver, spleen, skeletal muscle and bone metastasis.
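The "hot object" labeling step described above (thresholding at SUV >= 1.5 and grouping geometrically connected voxels in three dimensions) can be illustrated with a minimal 6-connected component labeler; a sketch of the general technique, not the authors' code:

```python
import numpy as np
from collections import deque

def label_hot_objects(suv, threshold=1.5):
    """Label 6-connected 3D components of voxels with SUV >= threshold
    ("hot objects" in the abstract's terminology)."""
    mask = np.asarray(suv) >= threshold
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1                     # start a new hot object
        labels[start] = current
        queue = deque([start])
        while queue:                     # breadth-first flood fill
            z, y, x = queue.popleft()
            for dz, dy, dx in neighbors:
                n = (z + dz, y + dy, x + dx)
                if (all(0 <= n[i] < mask.shape[i] for i in range(3))
                        and mask[n] and not labels[n]):
                    labels[n] = current
                    queue.append(n)
    return labels, current

suv = np.zeros((3, 4, 4))
suv[0, 0, 0] = 2.0      # one isolated hot voxel
suv[2, 2, 2:4] = 3.0    # a second, two-voxel hot object
labels, n = label_hot_objects(suv)
print(n)  # 2
```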
Classifying pulmonary nodules using dynamic enhanced CT images based on CT number histogram
Author(s):
Kazuhiro Minami;
Yoshiki Kawata;
Noboru Niki;
Hironobu Ohmatsu;
Kiyoshi Mori;
Kouzou Yamada;
Kenji Eguchi;
Masahiro Kaneko;
Noriyuki Moriyama
Show Abstract
Pulmonary nodule evaluation based on analyses of contrast-enhanced CT images becomes useful for differentiating
malignant and benign nodules. Nodules vary with respect to internal density (such as solid, mixed GGO,
and pure GGO) and size. This paper presents relationships between contrast-enhancement characteristics and nodule
types. Thin-section, contrast-enhanced CT (pre-contrast, and post-contrast series acquired at 2 and 4 minutes) was
performed on 86 patients with pulmonary nodules (42 benign and 44 malignant). Nodule regions were segmented from
an isotropic volume reconstructed from each image series. In this study, the contrast-enhancement characteristics of
nodules were quantified by using CT number histograms. The CT number histograms inside the segmented nodules were
computed on pre-contrast and post-contrast series. A feature characterizing variation between two histograms was
computed by subtracting the histogram of post-contrast series from that of pre-contrast series, and dividing the
summation of subtracted frequency of each bin by the volume of the segmented nodule on pre-contrast series. The
nodules were classified into five types (α, β, γ, δ, and ε) on the basis of internal features extracted from CT number
histogram on pre-contrast series. The nodule data set was categorized into subsets by nodule type and size, and
the performance of the feature in classifying malignant from benign nodules was evaluated for each subset.
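One plausible reading of the histogram-variation feature (per-bin subtraction of the post-contrast histogram from the pre-contrast one, summed and normalised by nodule volume) can be sketched as follows; the bin count, HU range, and use of absolute differences are assumptions, not the paper's exact parameters:

```python
import numpy as np

def enhancement_feature(pre_hu, post_hu, bins=64, hu_range=(-1000, 400)):
    """Histogram-variation feature: per-bin frequency change between the
    pre- and post-contrast series, summed in absolute value and divided
    by the nodule volume in voxels."""
    h_pre, _ = np.histogram(pre_hu, bins=bins, range=hu_range)
    h_post, _ = np.histogram(post_hu, bins=bins, range=hu_range)
    return np.abs(h_pre - h_post).sum() / len(pre_hu)

# a nodule whose voxels all enhance by 100 HU moves every voxel to a new bin
pre = [-600.0] * 100
post = [-500.0] * 100
print(enhancement_feature(pre, post))  # 2.0
```

A non-enhancing nodule yields 0; a fully enhancing one yields 2 (every voxel leaves one bin and enters another).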
Volume error analysis for lung nodules attached to pulmonary vessels in an anthropomorphic thoracic phantom
Author(s):
Lisa M. Kinnard;
Marios A. Gavrielides;
Kyle J. Myers;
Rongping Zeng;
Jennifer Peregoy;
William Pritchard;
John W. Karanian;
Nicholas Petrick
Show Abstract
With high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope
that such methods will be more accurate and consistent than currently used planar measures of size. However,
the error associated with volume estimation methods still needs to be quantified. Volume estimation error is
multi-faceted in the sense that it is impacted by characteristics of the patient, the software tool and the CT
system. The overall goal of this research is to quantify the various sources of measurement error and, when
possible, minimize their effects. In the current study, we estimated nodule volume from ten repeat scans of an
anthropomorphic phantom containing two synthetic spherical lung nodules (diameters: 5 and 10 mm; density:
-630 HU), using a 16-slice Philips CT with 20, 50, 100 and 200 mAs exposures and 0.8 and 3.0 mm slice
thicknesses. True volume was estimated from an average of diameter measurements, made using digital calipers.
We report variance and bias results for volume measurements as a function of slice thickness, nodule diameter,
and X-ray exposure.
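The variance and bias reported as a function of acquisition settings amount to simple statistics over the repeat-scan volume estimates; a minimal sketch (the numeric estimates below are hypothetical, not the study's data):

```python
import numpy as np

def volume_error_stats(estimates_mm3, true_mm3):
    """Percent bias and sample variance of repeat-scan volume estimates
    relative to a caliper-derived reference volume."""
    est = np.asarray(estimates_mm3, dtype=float)
    bias_pct = 100.0 * (est.mean() - true_mm3) / true_mm3
    return bias_pct, est.var(ddof=1)

# reference volume of a 10 mm sphere and hypothetical repeat-scan estimates
true_vol = 4.0 / 3.0 * np.pi * 5.0 ** 3   # ~523.6 mm^3
estimates = [520.0, 530.0, 525.0, 535.0]
bias, var = volume_error_stats(estimates, true_vol)
```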
A novel software assistant for the clinical analysis of MR spectroscopy with MeVisLab
Author(s):
Bernd Merkel;
Markus T. Harz;
Olaf Konrad;
Horst K. Hahn;
Heinz-Otto Peitgen
Show Abstract
We present a novel software assistant for the analysis of multi-voxel 2D or 3D in-vivo-spectroscopy signals based on the
rapid-prototyping platform MeVisLab. Magnetic Resonance Spectroscopy (MRS) is a valuable in-vivo metabolic
window into tissue regions of interest, such as the brain, breast or prostate. With this method, the metabolic state can be
investigated non-invasively. Different pathologies evoke characteristically different MRS signals, e.g., in prostate cancer,
choline levels increase while citrate levels decrease compared to benign tissue. For the majority of processing
steps, available MRS tools lack speed. Our goal is to support clinicians in a fast and robust
interpretation of MRS signals and to enable them to interactively work with large volumetric data sets. These data sets
consist of 3D spatially resolved measurements of metabolite signals. The software assistant provides standard analysis
methods for MRS data including data import and filtering, spatio-temporal Fourier transformation, and basic calculation
of peak areas and spectroscopic metabolic maps. Visualization relies on the facilities of MeVisLab, a platform for
developing clinically applicable software assistants. It is augmented by special-purpose viewing extensions and offers
synchronized 1D, 2D, and 3D views of spectra and metabolic maps. A novelty in MRS processing tools is the side-by-side
viewing ability of standard FT processed spectra with the results of time-domain frequency analysis algorithms like
Linear Prediction and the Matrix Pencil Method. This enables research into the optimal toolset and workflow required to
avoid misinterpretation and misapplication.
Bruise chromophore concentrations over time
Author(s):
Mark G. Duckworth;
Jayme J. Caspall;
Rudolph L. Mappus IV;
Linghua Kong;
Dingrong Yi;
Stephen H. Sprigle
Show Abstract
During investigations of potential child and elder abuse, clinicians and forensic practitioners are often
asked to offer opinions about the age of a bruise. A commonality between existing methods of bruise aging
is analysis of bruise color or estimation of chromophore concentration. Relative chromophore concentration
is an underlying factor that determines bruise color. We investigate a method of chromophore concentration
estimation that can be employed in a handheld imaging spectrometer with a small number of wavelengths.
The method, based on absorbance properties defined by Beer-Lambert's law, allows estimation of
differential chromophore concentration between bruised and normal skin. Absorption coefficient data for
each chromophore are required to make the estimation. Two different sources of this data are used in the
analysis: one generated using Independent Component Analysis and one taken from published values. Differential
concentration values over time, generated using both sources, show correlation to published models of
bruise color change over time and total chromophore concentration over time.
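The Beer-Lambert-based estimation reduces, per pixel, to a small linear least-squares problem: differential absorbance at each wavelength equals the chromophore absorption coefficients times the differential concentrations. The coefficient values below are placeholders, not the ICA-derived or published values used in the study:

```python
import numpy as np

# Hypothetical absorption coefficients (rows: wavelengths, cols: chromophores,
# e.g. hemoglobin and bilirubin); real values would come from ICA or literature.
E = np.array([[0.9, 0.1],
              [0.5, 0.6],
              [0.2, 1.0]])

def delta_concentration(delta_absorbance, coeffs=E):
    """Solve Beer-Lambert in the least-squares sense: dA = E @ dc
    (unit optical path length assumed)."""
    dc, *_ = np.linalg.lstsq(coeffs, np.asarray(delta_absorbance, float),
                             rcond=None)
    return dc

# differential absorbance (bruise vs. normal skin) at the three wavelengths
true_dc = np.array([0.8, 0.3])
dA = E @ true_dc
print(delta_concentration(dA))
```

With more wavelengths than chromophores the system is overdetermined, which is what makes a handheld spectrometer with a small number of bands workable.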
Efficient SVM classifier based on color and texture region features for wound tissue images
Author(s):
Hazem Wannous;
Yves Lucas;
Sylvie Treuillet
Show Abstract
This work is part of the ESCALE project dedicated to the design of a complete 3D and color wound assessment tool
using a simple hand held digital camera. The first part was concerned with the computation of a 3D model for wound
measurements using uncalibrated vision techniques. This article presents the second part, which deals with color
classification of wound tissues, a prior step before combining shape and color analysis in a single tool for real tissue
surface measurements. We have adopted an original approach based on unsupervised segmentation prior to
classification, to improve the robustness of the labelling stage. A database of different tissue types is first built; a simple
but efficient color correction method is applied to reduce color shifts due to uncontrolled lighting conditions. A ground
truth is provided by the fusion of several clinicians' manual labellings. Then, color and texture tissue descriptors are
extracted from tissue regions of the image database for the learning stage of an SVM region classifier, trained with the
aid of this ground truth. The output of this classifier provides a prediction model, later used to label the segmented
regions of the database. Finally, we apply unsupervised color region segmentation on wound images and classify the
tissue regions. Compared to the ground truth, the result of automatic segmentation driven classification provides an
overlap score (66% to 88%) of tissue regions higher than that obtained by clinicians.
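As an illustration of the descriptor stage, a segmented region can be summarised by simple colour and texture statistics before classification; this stand-in uses mean RGB and per-channel standard deviation, far simpler than the descriptors in the paper:

```python
import numpy as np

def region_descriptor(pixels_rgb):
    """Mean colour plus a crude texture proxy (per-channel std) for a
    segmented region; a stand-in for the paper's richer descriptors."""
    p = np.asarray(pixels_rgb, dtype=float).reshape(-1, 3)
    return np.concatenate([p.mean(axis=0), p.std(axis=0)])

# two-pixel toy region
print(region_descriptor([[10, 20, 30], [30, 20, 10]]))
```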
Automated detection of ureteral wall thickening on multi-detector row CT urography
Author(s):
Lubomir Hadjiiski;
Berkman Sahiner;
Elaine M. Caoili;
Richard H. Cohan;
Chuan Zhou;
Heang-Ping Chan
Show Abstract
We are developing a computer-aided detection (CAD) system for automated detection of ureteral wall thickening
on multi-detector row CT urography, which potentially can assist radiologists in detecting ureter cancer. In the first stage
of our CAD system, given a starting point, the ureter is tracked based on the CT values of the contrast-filled lumen. In
the second stage, the ureter wall is segmented and the ureter wall thickness is estimated based on polar transformation,
separation of the ureter wall from the background and measuring the wall thickness. In this pilot study, a limited data set
of 20 patients with 22 abnormal ureters was used. Fourteen patients had a total of 16 ureters with malignant ureteral wall
thickening. Two of the patients had malignant wall thickening in both the left and right ureters. The other six patients
had 6 ureters with benign ureteral wall thickening. All malignant wall thickenings were biopsy-proven. The benign
thickenings were determined by biopsy or by 2-year follow-up. In addition 3 normal ureters were used to determine the
false positive (FP) detection rate of the CAD system. The tracking program successfully tracked the 25 ureters (22
abnormal and 3 normal) and detected 90% (20/22) of the ureters having wall thickening with 2.3 (7/3) FPs per ureter.
93% (15/16) of the ureters with malignant wall thickening and 83% (5/6) of the ureters with benign wall thickening were
detected. The missed ureteral wall thickenings had developed asymmetrically around the part of the ureter filled with
contrast, and the detection criteria in our current CAD system were not able to identify them reliably. The preliminary
results show that our detection system can track the ureter and can detect ureteral wall thickening.
True-false lumen segmentation of aortic dissection using multi-scale wavelet analysis and generative-discriminative model matching
Author(s):
Noah Lee;
Huseyin Tek;
Andrew F. Laine
Show Abstract
Computer aided diagnosis in the medical image domain requires sophisticated probabilistic models to formulate
quantitative behavior in image space. In the diagnostic process detailed knowledge of model performance with respect to
accuracy, variability, and uncertainty is crucial. This challenge has led to the fusion of two successful learning schools,
namely generative and discriminative learning. In this paper, we propose a generative-discriminative learning approach
to predict object boundaries in medical image datasets. In our approach, we perform probabilistic model matching of
both modeling domains to fuse into the prediction step appearance and structural information of the object of interest
while exploiting the strengths of both learning paradigms. In particular, we apply our method to the task of true-false
lumen segmentation of aortic dissection, an acute disease that requires automated quantification for assisted medical
diagnosis. We report empirical results for true-false lumen discrimination in aortic dissection segmentation, showing
superior behavior of the hybrid generative-discriminative approach over its non-hybrid generative counterpart.
A tool for computer-aided diagnosis of retinopathy of prematurity
Author(s):
Zheen Zhao;
David K. Wallace;
Sharon F. Freedman M.D.;
Stephen R. Aylward
Show Abstract
In this paper we present improvements to a software application, named ROPtool, that aids in the timely and
accurate detection and diagnosis of retinopathy of prematurity (ROP).
ROP occurs in 68% of infants weighing less than 1251 grams at birth, and it is a leading cause of blindness for
prematurely born infants. The standard of care for its diagnosis is the subjective assessment of retinal vessel
dilation and tortuosity. There is significant inter-observer variation in those assessments.
ROPtool analyzes retinal images, extracts user-selected blood vessels from those images, and quantifies the
tortuosity of those vessels. The presence of ROP is then gauged by comparing the tortuosity of an infant's retinal
vessels with measures made from a clinical-standard image of severely tortuous retinal vessels. The presence of
such tortuous retinal vessels is referred to as 'plus disease'.
In this paper, a novel metric of tortuosity is proposed. From the ophthalmologist's point of view, the new
metric is an improvement over our previously published algorithm, since it uses smooth curves instead of straight
lines to simulate 'normal vessels'.
Another advantage of the new ROPtool is that minimal user interaction is required. ROPtool utilizes
a ridge traversal algorithm to extract retinal vessels. The algorithm reconstructs connectivity along a vessel
automatically.
This paper supports its claims by reporting ROC curves from a pilot study involving 20 retinal images. The
areas under two ROC curves, from two experts in ROP, using the new metric to diagnose 'tortuosity sufficient
for plus disease', varied from 0.86 to 0.91.
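For context, the classic tortuosity index that the new metric improves on (vessel arc length divided by the straight-line chord between its endpoints) can be computed from a sampled centerline; an illustrative sketch, not ROPtool's code:

```python
import numpy as np

def tortuosity_index(points):
    """Classic tortuosity: arc length over end-to-end chord length --
    the 'straight line' baseline the new ROPtool metric refines with
    smooth curves."""
    p = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    chord = np.linalg.norm(p[-1] - p[0])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]
zigzag = [(0, 0), (1, 1), (2, 0)]
print(tortuosity_index(straight))  # 1.0
print(tortuosity_index(zigzag))    # ~1.414
```

A perfectly straight vessel scores 1.0; anything higher indicates tortuosity.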
Cancer treatment outcome prediction by assessing temporal change: application to cervical cancer
Author(s):
Jeffrey W. Prescott;
Dongqing Zhang;
Jian Z. Wang;
Nina A. Mayr M.D.;
William T. C. Yuh M.D.;
Joel Saltz;
Metin Gurcan
Show Abstract
In this paper a novel framework is proposed for the classification of cervical tumors as susceptible or resistant to radiation therapy. The classification is based on both small- and large-scale temporal changes in the tumors' magnetic resonance imaging (MRI) response. The dataset consists of 11 patients who underwent radiation therapy for advanced cervical cancer. Each patient had dynamic contrast-enhanced (DCE)-MRI studies before treatment and early into treatment, approximately 2 weeks apart. For each study, a T1-weighted scan was performed before injection of contrast agent and again 75 seconds after injection. Using the two studies and the two series from each study, a set of tumor region of interest (ROI) features were calculated. These features were then exhaustively searched for the most separable set of three features based on a treatment outcome of local control or local recurrence. The dimensionality of the three-feature set was then reduced to two dimensions using principal components analysis (PCA). Finally, the classification performance was tested using three different classification procedures: support vector machines (SVM), linear discriminant analysis (LDA), and k-nearest neighbor (KNN). The most discriminatory features were those of volume, standard deviation, skewness, kurtosis, and fractal dimension. Combinations of these features resulted in 100%
classification accuracy using each of the three classifiers.
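The PCA step that reduces the selected three-feature set to two dimensions can be sketched with a plain eigendecomposition of the covariance matrix; illustrative only, not the study's pipeline:

```python
import numpy as np

def pca_2d(features):
    """Project a (samples x features) matrix onto its first two principal
    components, as used to reduce the three-feature set before
    classification."""
    X = np.asarray(features, dtype=float)
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)       # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]       # sort by decreasing variance
    return Xc @ vecs[:, order[:2]]

# toy 3-feature data; the third feature is constant, so two components
# capture all of the variance
X = [[1, 0, 5], [0, 1, 5], [-1, 0, 5], [0, -1, 5]]
Y = pca_2d(X)
print(Y.shape)  # (4, 2)
```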
A new method to efficiently reduce histogram dimensionality
Author(s):
Pedro H. Bugatti;
Agma J. M. Traina;
Joaquim C. Felipe;
Caetano Traina Jr.
Show Abstract
A challenge in Computer-Aided Diagnosis based on image exams is to provide a timely answer that complies with the specialist's expectation. In many situations, when a specialist gets a new image to analyze, having information and knowledge from similar cases can be very helpful. For example, when a radiologist evaluates a new image, it is common to recall similar cases from the past. However, when performing similarity queries to retrieve similar cases, the approach frequently adopted is to extract meaningful features from the images and to search the database based on such features. One of the most popular image features is the gray-level histogram, because it is simple and fast to obtain, providing the global gray-level distribution of the image. Moreover, normalized histograms are also invariant to affine transformations of the image. Although widely used, gray-level histograms generate a large number of features, increasing the complexity of indexing and searching operations. Therefore, the high dimensionality of histograms degrades the efficiency of processing similarity queries. In this paper we propose a new and efficient method that associates the Shannon entropy with the gray-level histogram to considerably reduce the dimensionality of the feature vectors generated by histograms. The proposed method was evaluated using a real dataset, and the results showed impressive reductions of up to 99% in feature vector size while providing a gain in precision of up to 125% in comparison with the traditional gray-level histogram.
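The core quantity the method builds on, the Shannon entropy of a normalised gray-level histogram, can be computed as follows; how the method then uses entropy to prune the histogram's dimensionality is not reproduced here:

```python
import numpy as np

def histogram_entropy(gray_levels, bins=256):
    """Shannon entropy (in bits) of a normalised gray-level histogram."""
    hist, _ = np.histogram(gray_levels, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# a two-level image: entropy of a fair binary source is exactly 1 bit
pixels = [0] * 50 + [128] * 50
print(histogram_entropy(pixels))  # 1.0
```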
A simple and robust method to screen cataracts using specular reflection appearance
Author(s):
Retno Supriyanti;
Hitoshi Habe;
Masatsugu Kidode;
Satoru Nagata
Show Abstract
The high prevalence of cataracts is still a serious public health problem as a leading cause of blindness, especially in
developing countries with limited health facilities. In this paper we propose a new screening method for cataract
diagnosis by easy-to-use and low cost imaging equipment such as commercially available digital cameras. The
difficulties in using this sort of digital camera equipment are seen in the observed images, the quality of which is not
sufficiently controlled; there is no control of illumination, for example. A sign of cataracts is a whitish color in the pupil
which usually is black, but it is difficult to automatically analyze color information under uncontrolled illumination
conditions. To cope with this problem, we analyze specular reflection in the pupil region. When illumination light
hits the pupil, it produces a specular reflection on the frontal surface of the lens in the pupil area. The light also
passes through to the rear side of the lens and may be reflected again. Specular reflection always appears brighter than
the surrounding area and is independent of the illumination condition, so this characteristic enables us to screen for
serious cataracts robustly by analyzing reflections observed in the eye image. In this paper, we demonstrate the validity
of our method through theoretical discussion and experimental results. By following the simple guidelines shown in this
paper, anyone would be able to screen for cataracts.
Assessment of the relationship between lesion segmentation accuracy and computer-aided diagnosis scheme performance
Author(s):
Bin Zheng;
Jiantao Pu;
Sang Cheol Park;
Margarita Zuley;
David Gur
Show Abstract
In this study we randomly select 250 malignant and 250 benign mass regions as a training dataset. The
boundary contours of these regions were manually identified and marked. Twelve image features were computed for
each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we
applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an
initial pool of testing regions. All processed regions are sorted based on a size difference ratio between manual and
automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger
size difference ratios. Using the area under ROC curve (AZ value) as performance index we investigated the
relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD)
scheme. CAD performance degrades as the size difference ratio increases. Then, we developed and tested a hybrid
region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid
algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active
contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the
growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated
once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios
between the automatically and manually segmented areas to less than ±15% for all testing regions, and the testing AZ
value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of
mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.
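The termination rule of the hybrid region growth (stop as soon as the per-iteration CAD likelihood score reaches its first maximum) can be sketched as a scan for the first local peak; the score values below are hypothetical:

```python
def grow_until_first_peak(scores):
    """Return the iteration at which region growth stops: the first local
    maximum of the CAD likelihood score computed at each growth step."""
    for i in range(1, len(scores)):
        if scores[i] < scores[i - 1]:
            return i - 1          # the previous iteration was the first peak
    return len(scores) - 1        # score never dropped; stop at the end

# hypothetical per-iteration CAD scores: growth stops at iteration 2,
# ignoring the later (second) maximum
print(grow_until_first_peak([0.2, 0.5, 0.7, 0.6, 0.9]))  # 2
```

Stopping at the first maximum rather than the global one is what keeps the contour from leaking past the true lesion boundary.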
Automated discovery of meniscal tears on MR imaging: a novel high-performance computer-aided detection application for radiologists
Author(s):
Bharath Ramakrishna;
Nabile Safdar;
Khan Siddiqui;
Woojin Kim M.D.;
Weimin Liu;
Ganesh Saiprasad;
Chein-I Chang;
Eliot Siegel M.D.
Show Abstract
Knee-related injuries including meniscal tears are common in both young athletes and the aging population and require
accurate diagnosis and surgical intervention when appropriate. With proper techniques and radiologists' experienced
skills, confidence in detection of meniscal tears can be quite high. However, for radiologists without musculoskeletal
training, diagnosis of meniscal tears can be challenging. This paper develops a novel computer-aided detection (CAD)
diagnostic system for automatic detection of meniscal tears in the knee. Evaluation of this CAD system using an
archived database of images from 40 individuals with suspected knee injuries indicates that the sensitivity and
specificity of the proposed CAD system are 83.87% and 75.19%, respectively, compared to the mean sensitivity and
specificity of 77.41% and 81.39%, respectively, obtained by experienced radiologists in routine diagnosis without using
the CAD. The experimental results suggest that the developed CAD system has great potential and promise in automatic
detection of both simple and complex meniscal tears of knees.
Computer-aided diagnosis for classification of focal liver lesions on contrast-enhanced ultrasonography: feature extraction and characterization of vascularity patterns
Author(s):
Junji Shiraishi;
Katsutoshi Sugimoto;
Naohisa Kamiyama;
Fuminori Moriyasu;
Kunio Doi
Show Abstract
We have developed a computer-aided diagnostic (CAD) scheme for classifying focal liver lesions (FLLs) into
hepatocellular carcinoma (HCC), liver metastasis, and hemangioma, by use of B-mode and micro flow imaging (MFI) of
contrast-enhanced ultrasonography. We used 98 cases in this study, in which 104 FLLs consisted of 68 HCCs, 21
metastases, and 15 hemangiomas. MFI was obtained with contrast-enhanced low-mechanical-index (MI) pulse
subtraction imaging at a fixed plane which included a distinctive cross section of the FLL. In the MFI, the inflow high
signals in the plane, which were due to the vascular patterns and the contrast agent, were accumulated following flash
scanning with a high-MI ultrasound exposure. In this study, in addition to the existing 29 image features extracted from
MFI images, such as replenishment time, the average and the standard deviation of pixel values in a FLL, and the
average thickness of vessel-like patterns, four types of image features were extracted from MFI, temporal subtraction and
B-mode images based on small square regions of interest (ROIs: 4x4 matrix size) placed to cover a whole region of the
FLL. The four features were 1) uniformity of average pixel values for all ROIs, 2) peak pixel values in a histogram of
average pixel values of ROIs, 3) fraction of hypoechoic regions within an FLL, and 4) cross-correlation of pixel values
within an FLL between B-mode and MFI images. Overall classification accuracies performed by this CAD scheme
were 87.5% for all 104 liver lesions.
The edge-driven dual-bootstrap iterative closest point algorithm for multimodal retinal image registration
Author(s):
Chia-Ling Tsai;
Chun-Yi Li;
Gehua Yang
Show Abstract
Red-free (RF) fundus retinal images and fluorescein angiogram (FA) sequence are often captured from an eye for
diagnosis and treatment of abnormalities of the retina. With the aid of multimodal image registration, physicians can
combine information to make accurate surgical planning and quantitative judgment of the progression of a disease. The
goal of our work is to jointly align the RF images with the FA sequence of the same eye in a common reference space.
Our work is inspired by Generalized Dual-Bootstrap Iterative Closest Point (GDB-ICP), which is a fully-automatic,
feature-based method using structural similarity. GDB-ICP rank-orders Lowe keypoint matches and refines the
transformation computed from each keypoint match in succession. Although GDB-ICP has been shown to be robust to
image pairs with illumination differences, its performance is not satisfactory for multimodal and some FA pairs that exhibit
substantial non-linear illumination changes. Our algorithm, named Edge-Driven DBICP, modifies generation of
keypoint matches for initialization by extracting the Lowe keypoints from the gradient magnitude image, and enriching
the keypoint descriptor with global-shape context using the edge points. Our dataset consists of 61 randomly selected
pathological sequences, each on average having two RF and 13 FA images. There are a total of 4985 image pairs, out of
which 1323 are multimodal pairs. Edge-Driven DBICP successfully registered 93% of all pairs, and 82% multimodal
pairs, whereas GDB-ICP registered 80% and 40%, respectively. Regarding registration of the whole image sequence in
a common reference space, Edge-Driven DBICP succeeded in 60 sequences, which is 26% improvement over GDB-ICP.
Automated scoring system of standard uptake value for torso FDG-PET images
Author(s):
Takeshi Hara;
Tatsunori Kobayashi;
Kazunao Kawai;
Xiangrong Zhou;
Satoshi Itoh;
Tetsuro Katafuchi;
Hiroshi Fujita
Show Abstract
The purpose of this work was to develop an automated method to calculate the score of SUV for the torso region on FDG-PET scans. The three-dimensional distributions of the mean and the standard deviation of SUV were stored in each volume to score the SUV at the corresponding pixel position within unknown scans. The modeling method is based on the SPM approach, using the correction technique of the Euler characteristic and Resel (resolution element). We employed 197 normal cases (male: 143, female: 54) to assemble the normal metabolism distribution of FDG. The physiques were registered to each other in a rectangular parallelepiped shape using affine transformation and the thin-plate-spline technique. The regions of the three organs were determined by a semi-automated procedure. Seventy-three abnormal spots were used to estimate the effectiveness of the scoring method. As a result, the score images correctly showed that the scores for normal cases were between zero and plus/minus 2 SD. Most of the scores of abnormal spots associated with cancer were larger than the upper bound of the SUV interval of normal organs.
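Scoring an unknown scan against the stored voxel-wise normal model reduces to a z-score in SD units; a minimal sketch with hypothetical mean and SD maps, not the paper's registered atlas:

```python
import numpy as np

def suv_score_map(suv, mean_map, sd_map):
    """Voxel-wise SUV score in SD units against a normal-case model:
    normal voxels land near zero, hot cancer spots score high."""
    return (np.asarray(suv, dtype=float) - mean_map) / sd_map

# one normal voxel (SUV at the model mean) and one abnormal hot spot
scores = suv_score_map([2.0, 5.0],
                       mean_map=np.array([2.0, 2.0]),
                       sd_map=np.array([0.5, 0.5]))
print(scores)  # [0. 6.]
```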
Computerized microscopic image analysis of follicular lymphoma
Author(s):
Olcay Sertel;
Jun Kong;
Gerard Lozanski;
Umit Catalyurek;
Joel H. Saltz;
Metin N. Gurcan
Show Abstract
Follicular Lymphoma (FL) is a cancer arising from the lymphatic system. Originating from follicle center B cells, FL is
mainly comprised of centrocytes (usually middle-to-small sized cells) and centroblasts (relatively large malignant cells).
According to the World Health Organization's recommendations, there are three histological grades of FL characterized
by the number of centroblasts per high-power field (hpf) of area 0.159 mm2. In current practice, these cells are manually
counted from ten representative fields of follicles after visual examination of hematoxylin and eosin (H&E) stained
slides by pathologists. Several studies clearly demonstrate the poor reproducibility of this grading system with very low
inter-reader agreement. In this study, we are developing a computerized system to assist pathologists with this process. A
hybrid approach that combines information from several slides with different stains has been developed. Thus, follicles
are first detected from digitized microscopy images with immunohistochemistry (IHC) stains, (i.e., CD10 and CD20).
The average sensitivity and specificity of the follicle detection tested on 30 images at 2×, 4× and 8× magnifications are
85.5±9.8% and 92.5±4.0%, respectively. Since the centroblasts detection is carried out in the H&E-stained slides, the
follicles in the IHC-stained images are mapped to H&E-stained counterparts. To evaluate the centroblast differentiation
capabilities of the system, 11 hpf images have been marked by an experienced pathologist who identified 41 centroblast
cells and 53 non-centroblast cells. A non-supervised clustering process differentiates the centroblast cells from
non-centroblast cells, resulting in 92.68% sensitivity and 90.57% specificity.
Image based grading of nuclear cataract by SVM regression
Author(s):
Huiqi Li;
Joo Hwee Lim;
Jiang Liu;
Tien Yin Wong;
Ava Tan;
Jie Jin Wang;
Paul Mitchell
Show Abstract
Cataract is one of the leading causes of blindness worldwide. A computer-aided approach to assess nuclear cataract
automatically and objectively is proposed in this paper. An enhanced Active Shape Model (ASM) is investigated to
extract robust lens contour from slit-lamp images. The mean intensity in the lens area, the color information on the
central posterior subcapsular reflex, and the profile on the visual axis are selected as the features for grading. A Support
Vector Machine (SVM) scheme is proposed to grade nuclear cataract automatically. The proposed approach has been
tested using the lens images from Singapore National Eye Centre. The mean error between the automatic grading and
grader's decimal grading is 0.38. Statistical analysis shows that 97.8% of the automatic grades are within one grade
difference to human grader's integer grades. Experimental results indicate that the proposed automatic grading approach
is promising in facilitating nuclear cataract diagnosis.
Design of a benchmark dataset, similarity metrics, and tools for liver segmentation
Author(s):
Suryaprakash Kompalli;
Mohammed Alam;
Raja S. Alomari;
Stanley T. Lau;
Vipin Chaudhary
Show Abstract
Reliable segmentation of the liver has been acknowledged as a significant step in several computational and
diagnostic processes. While several methods have been designed for liver segmentation, comparative analysis
of reported methods is limited by the unavailability of annotated datasets of the abdominal area. Currently
available generic datasets constitute a small sample set, and most academic work utilizes closed datasets. We
have collected a dataset containing abdominal CT scans of 50 patients, with coordinates for the liver boundary.
The dataset will be publicly distributed free of cost with software to provide similarity metrics, and a liver
segmentation technique that uses Markov Random Fields and Active Contours. In this paper we discuss our
data collection methodology, implementation of similarity metrics, and the liver segmentation algorithm.
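Typical overlap-based similarity metrics for segmentation benchmarks, such as the Dice coefficient and Jaccard index, can be sketched as follows. The paper does not specify which metrics its tools implement, so this is a generic illustration operating on sets of voxel coordinates.

```python
def dice(a, b):
    """Dice similarity coefficient between two sets of voxel coordinates."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index (volume overlap) between two sets of voxel coordinates."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```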
Joint detection and localization of multiple anatomical landmarks through learning
Author(s):
Mert Dikmen;
Yiqiang Zhan;
Xiang Sean Zhou
Show Abstract
Reliable landmark detection in medical images provides the essential groundwork for successful automation of
various open problems such as localization, segmentation, and registration of anatomical structures. In this paper,
we present a learning-based system to jointly detect (is it there?) and localize (where?) multiple anatomical
landmarks in medical images. The contributions of this work are twofold. First, the method takes advantage of a
learning framework that automatically extracts the most distinctive features for multi-landmark detection and is
therefore easily adapted to detect arbitrary landmarks in various imaging modalities, e.g., CT, MRI, and PET. Second,
the use of a multi-class/cascaded classifier architecture in different phases of the detection stage, combined with
robust features that are highly efficient to compute, enables near real-time performance with very high localization
accuracy.
This method is validated on CT scans of different body sections, e.g., whole-body scans, chest scans, and abdominal
scans. Aside from improved robustness (due to the exploitation of spatial correlations), it gains run-time efficiency in
landmark detection. It also shows good scalability under an increasing number of landmarks.
Robust vessel segmentation
Author(s):
Susanne Bock;
Caroline Kühnel;
Tobias Boskamp;
Heinz-Otto Peitgen
Show Abstract
In the context of cardiac applications, the primary goal of coronary vessel analysis is often to support the diagnosis
of vessel wall anomalies, such as coronary plaque and stenosis. Therefore, a fast and robust segmentation of the coronary
tree is a very important but challenging task.
We propose a new approach for coronary artery segmentation. Our method is based on an earlier proposed progressive
region growing. A new growth front monitoring technique controls the segmentation and corrects local leakage by retrospective
detection and removal of leakage artifacts. While progressively reducing the region growing threshold for the
whole image, the growing process is locally analyzed using criteria based on the assumption of tubular, gradually narrowing
vessels. If a voxel volume limit or a certain shape constraint is exceeded, the growing process is interrupted. Voxels
affected by a failed segmentation are detected and deleted from the result. To avoid further processing at these positions, a
large neighborhood is blocked for growing.
Compared to a global region growing without local correction, our new local growth control and the adapted correction
can deal with contrast decrease even in very small coronary arteries. Furthermore, our algorithm can efficiently handle
noise artifacts and partial volume effects near the myocardium. The enhanced segmentation of more distal vessel parts was
tested on 150 CT datasets. Furthermore, a comparison between the pure progressive region growing and our new approach
was conducted.
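The growth-front monitoring idea, stripped to a 2-D sketch, might look like the following. The actual method works on CT volumes with tubularity and narrowing criteria; this toy version uses only a hypothetical `step_limit` on the front size, so it is illustrative rather than a reimplementation.

```python
def monitored_region_grow(image, seed, threshold, step_limit=2):
    """Threshold-based region growing with a simple growth-front monitor.
    Pixels >= threshold are added breadth-first from the seed; if one growth
    step adds more pixels than `step_limit`, that step is treated as leakage:
    its pixels are discarded and blocked from further growing."""
    h, w = len(image), len(image[0])
    region, blocked = {seed}, set()
    front = [seed]
    while front:
        new_front = []
        for x, y in front:
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                p = (nx, ny)
                if (0 <= nx < h and 0 <= ny < w and p not in region
                        and p not in blocked and image[nx][ny] >= threshold):
                    new_front.append(p)
        new_front = list(dict.fromkeys(new_front))  # deduplicate, keep order
        if len(new_front) > step_limit:  # front exploded: likely leakage
            blocked.update(new_front)
            break
        region.update(new_front)
        front = new_front
    return region
```

A thin bright tube grows to completion, while a seed inside a uniformly bright blob is stopped after one step.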
Border preserving skin lesion segmentation
Author(s):
Mostafa Kamali;
Golnoosh Samei
Show Abstract
Melanoma is a fatal cancer with a growing incidence rate; however, it can be cured if diagnosed in its early stages. The
first step in detecting melanoma is the separation of skin lesion from healthy skin. There are particular features
associated with a malignant lesion whose successful detection relies upon accurately extracted borders. We propose a
two-step approach. First, we apply the K-means clustering method (in 3D RGB space), which extracts relatively accurate
borders. In the second step we perform an extra refining step for detecting the fading area around some lesions as
accurately as possible. Our method has a number of novelties. Firstly as the clustering method is directly applied to the
3D color space, we do not overlook the dependencies between different color channels. In addition, it is capable of
extracting fine lesion borders up to pixel level in spite of the difficulties associated with fading areas around the lesion.
Performing clustering in different color spaces reveals that 3D RGB color space is preferred. The application of the
proposed algorithm to an extensive data-base of skin lesions shows that its performance is superior to that of existing
methods both in terms of accuracy and computational complexity.
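A minimal K-means in 3-D RGB space, the core of the first step above, could be sketched as below. The deterministic initialization on the first k pixels and the fixed iteration count are assumptions for illustration, not the paper's configuration.

```python
def kmeans_rgb(pixels, k=2, iters=20):
    """Tiny K-means on 3-D RGB triples. Each pixel is assigned to the nearest
    center (squared Euclidean distance over all three channels jointly), then
    centers are recomputed as cluster means."""
    centers = [tuple(p) for p in pixels[:k]]
    labels = [0] * len(pixels)
    for _ in range(iters):
        for i, p in enumerate(pixels):
            labels[i] = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])),
            )
        for j in range(k):
            members = [p for p, lab in zip(pixels, labels) if lab == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centers, labels
```

Because distances are computed over the full RGB triple, dependencies between channels are not discarded, which is the point made in the abstract.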
AutoEDES: a model-based Bayesian framework for automatic end-diastolic and end-systolic frame selection in angiographic image sequence
Author(s):
Wei Qu;
Sukhveer Singh;
Mike Keller
Show Abstract
This paper presents a novel approach to automatically detect the end-diastolic (ED) and end-systolic (ES) frames from
an X-ray left ventricular angiographic image sequence. ED and ES image detection is the first step of the widely used
left ventricular analysis in the catheterization lab. However, due to the inherent difficulties of X-ray angiographic images,
automatic ED and ES frame selection is a challenging task and still remains unsolved. The current clinical practice
uses manual selection, which is not only time consuming but also subject to inter- and intra-observer variability. In
this paper, we propose to model the X-ray angiogram with a dynamic graphical model. The posterior density of
the left ventricular state is then estimated using Bayesian probability density propagation and adaptive background modeling.
Preliminary experimental results have demonstrated the superior performance of the proposed algorithm on clinical data.
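Once per-frame left-ventricle size estimates are available (the output of the Bayesian density propagation, which is not reproduced here), the ED/ES selection itself reduces to locating the extrema of ventricular area over the cycle, as in this sketch:

```python
def select_ed_es(ventricle_areas):
    """Given per-frame estimates of left-ventricle area over one cardiac
    cycle, return (ED, ES) frame indices: the end-diastolic frame has the
    largest ventricle, the end-systolic frame the smallest."""
    ed = max(range(len(ventricle_areas)), key=ventricle_areas.__getitem__)
    es = min(range(len(ventricle_areas)), key=ventricle_areas.__getitem__)
    return ed, es
```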
Multifractal modeling, segmentation, prediction, and statistical validation of posterior fossa tumors
Author(s):
Atiq Islam;
Khan M. Iftekharuddin;
Robert J. Ogg;
Fred H. Laningham M.D.;
Bhuvaneswari Sivakumar
Show Abstract
In this paper, we characterize the tumor texture in pediatric brain magnetic resonance images (MRIs) and exploit
these features for automatic segmentation of posterior fossa (PF) tumors. We focus on PF tumors because of the
prevalence of such tumors in pediatric patients. Due to their varying appearance in MRI, we propose to model the
tumor texture with a multi-fractal process, such as a multi-fractional Brownian motion (mBm). In mBm, the
time-varying Holder exponent provides flexibility in modeling irregular tumor texture. We develop a detailed
mathematical framework for mBm in two dimensions and propose a novel algorithm to estimate the multi-fractal
structure of tissue texture in brain MRI based on wavelet coefficients. This wavelet-based multi-fractal feature, along
with MR image intensity and a regular fractal feature obtained using our existing piecewise-triangular-prism-surface-area
(PTPSA) method, is fused in segmenting PF tumor and non-tumor regions in brain T1, T2, and FLAIR MR images,
respectively. We also demonstrate a non-patient-specific automated tumor prediction scheme based on these image
features. We experimentally show the discriminating power of our novel multi-fractal texture feature, along with
intensity and fractal features, in automated tumor segmentation and statistical prediction. To evaluate the performance
of our tumor prediction scheme, we obtain ROC curves and demonstrate how sharply the curves reach a specificity of
1.0 while sacrificing minimal sensitivity. Experimental results show the effectiveness of our proposed techniques in the
automatic detection of PF tumors in pediatric MRIs.
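As a generic illustration of a fractal texture feature (not the authors' PTPSA surface-area method or the wavelet-based mBm estimator), a box-counting dimension for a binary 2-D point pattern can be sketched as:

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary 2-D point pattern by box
    counting: the least-squares slope of log(box count) versus log(1/box
    size). Returns ~2 for a filled region and ~1 for a line."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(len({(x // s, y // s) for x, y in points})) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```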
A meta-classifier for detecting prostate cancer by quantitative integration of in vivo magnetic resonance spectroscopy and magnetic resonance imaging
Author(s):
Satish Viswanath;
Pallavi Tiwari;
Mark Rosen;
Anant Madabhushi
Show Abstract
Recently, in vivo Magnetic Resonance Imaging (MRI) and Magnetic Resonance Spectroscopy (MRS) have
emerged as promising new modalities to aid in prostate cancer (CaP) detection. MRI provides anatomic and
structural information of the prostate while MRS provides functional data pertaining to biochemical concentrations
of metabolites such as creatine, choline and citrate. We have previously presented a hierarchical clustering
scheme for CaP detection on in vivo prostate MRS and have recently developed a computer-aided method for
CaP detection on in vivo prostate MRI. In this paper we present a novel scheme to develop a meta-classifier
to detect CaP in vivo via quantitative integration of multimodal prostate MRS and MRI by use of non-linear
dimensionality reduction (NLDR) methods including spectral clustering and locally linear embedding (LLE).
Quantitative integration of multimodal image data (MRI and PET) involves the concatenation of image intensities
following image registration. However, multimodal data integration is non-trivial when the individual
modalities include spectral and image intensity data. We propose a data combination solution wherein we project
the feature spaces (image intensities and spectral data) associated with each of the modalities into a lower dimensional
embedding space via NLDR. NLDR methods preserve the relationships between the objects in the
original high dimensional space when projecting them into the reduced low dimensional space. Since the original
spectral and image intensity data are divorced from their original physical meaning in the reduced dimensional
space, data at the same spatial location can be integrated by concatenating the respective embedding vectors.
Unsupervised consensus clustering is then used to partition objects into different classes in the combined MRS
and MRI embedding space. Quantitative results of our multimodal computer-aided diagnosis scheme on 16 sets
of patient data obtained from the ACRIN trial, for which corresponding histological ground truth for spatial
extent of CaP is known, show a marginally higher sensitivity, specificity, and positive predictive value compared
to corresponding CAD results with the individual modalities.
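The data-combination step described above, concatenating the per-voxel embedding vectors of the two modalities after NLDR, can be sketched as follows. The dictionary interface (voxel location mapped to embedding tuple) is an assumption for illustration; the NLDR step producing the embeddings is taken as already done.

```python
def fuse_embeddings(mri_embedding, mrs_embedding):
    """Concatenate low-dimensional embedding vectors from two modalities at
    each spatial location. Both inputs map a voxel location to a tuple of
    embedding coordinates; the fused vector is their concatenation."""
    if mri_embedding.keys() != mrs_embedding.keys():
        raise ValueError("modalities must cover the same spatial locations")
    return {loc: mri_embedding[loc] + mrs_embedding[loc] for loc in mri_embedding}
```

Concatenation at the same spatial location is what makes the integration meaningful after the modalities lose their original physical units in the embedding space.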
Improvement of automatic hemorrhage detection methods using brightness correction on fundus images
Author(s):
Yuji Hatanaka;
Toshiaki Nakagawa;
Yoshinori Hayashi;
Masakatsu Kakogawa;
Akira Sawada;
Kazuhide Kawase;
Takeshi Hara;
Hiroshi Fujita
Show Abstract
We have been developing several automated methods for detecting abnormalities in fundus images. The purpose of this
study is to improve our automated hemorrhage detection method to help diagnose diabetic retinopathy. We propose a
new method for preprocessing and false positive elimination in the present study. The brightness of the fundus image
was changed by a nonlinear curve applied to the brightness values of the hue-saturation-value (HSV) space. In order to
emphasize brown regions, gamma correction was performed on each red, green, and blue-bit image. Subsequently, the
histograms of each red, green, and blue-bit image were extended. After that, the hemorrhage candidates were detected.
The brown regions indicated hemorrhages and blood vessels, and their candidates were detected using density analysis.
We removed the large candidates such as blood vessels. Finally, false positives were removed by using a 45-feature
analysis. To evaluate the new method for the detection of hemorrhages, we examined 125 fundus images, including 35
images with hemorrhages and 90 normal images. The sensitivity and specificity for the detection of abnormal cases
were 80% and 88%, respectively. These results indicate that the new method may effectively improve the performance
of our computer-aided diagnosis system for hemorrhages.
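The per-channel gamma correction and histogram extension used in the preprocessing above could be sketched like this for one 8-bit channel; the exact gamma values and curve shapes used in the paper are not given, so parameter choices here are assumptions.

```python
def gamma_correct(channel, gamma):
    """Apply gamma correction to one 8-bit channel (list of 0-255 values)."""
    return [round(255 * (v / 255) ** gamma) for v in channel]

def stretch_histogram(channel):
    """Linearly extend the channel's histogram to the full 0-255 range."""
    lo, hi = min(channel), max(channel)
    if hi == lo:
        return list(channel)
    return [round(255 * (v - lo) / (hi - lo)) for v in channel]
```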
Quantitative evaluation of humeral head defects by comparing left and right feature
Author(s):
Shogo Kawasaki;
Toshiya Nakaguchi;
Nobuyasu Ochiai;
Norimichi Tsumura;
Yoichi Miyake
Show Abstract
Humeral head (top of the arm bone) defects have been diagnosed subjectively by physicians; therefore, a quantitative
evaluation method for these defects is needed. In this paper, we propose a quantitative diagnostic method for
evaluating humeral head defects by comparing the shapes of the left and right humeral heads. The proposed method is
composed of three steps. In the first step, the contour of humerus is extracted from a set of multi-slice CT images by
using thresholding technique and active contour model. In the second step, the three-dimensional (3-D) surface model of
humerus is reconstructed from the extracted contours. In the third step, the reconstructed 3-D shapes of the left and
right humerus are superimposed on each other, and the non-overlapping part is recognized as the defect. This is based
on the assumption that human bone structure is bilaterally symmetric. Finally, the shape of the visualized defect is
analyzed by principal component analysis, and we consider the obtained principal components and their contributions
to represent the features of the defect. In this study, seven sets of shoulder multi-slice CT images were analyzed and
evaluated.
A concurrent computer aided detection (CAD) tool for articular cartilage disease of the knee on MR imaging using active shape models
Author(s):
Bharath Ramakrishna;
Ganesh Saiprasad;
Nabile Safdar;
Khan Siddiqui;
Chein-I Chang;
Eliot Siegel M.D.
Show Abstract
Osteoarthritis (OA) is the most common form of arthritis and a major cause of morbidity affecting millions of adults in
the US and world wide. In the knee, OA begins with the degeneration of joint articular cartilage, eventually resulting in
the femur and tibia coming in contact, and leading to severe pain and stiffness. There has been extensive research
examining 3D MR imaging sequences and automatic/semi-automatic techniques for 2D/3D articular cartilage
extraction. However, in routine clinical practice the most popular technique still remains radiographic examination
and qualitative assessment of the joint space. This may be in large part because of a lack of tools that can provide
clinically relevant diagnoses in near real time alongside the radiologist, serve the needs of radiologists, and reduce
inter-observer variation. Our work aims to fill this void by developing a CAD application that
can generate clinically relevant diagnosis of the articular cartilage damage in near real time fashion. The algorithm
features a 2D Active Shape Model (ASM) for modeling the bone-cartilage interface on all the slices of a Double Echo
Steady State (DESS) MR sequence, followed by measurement of the cartilage thickness from the surface of the bone,
and finally by the identification of regions of abnormal thinness and focal/degenerative lesions. A preliminary
evaluation of CAD tool was carried out on 10 cases taken from the Osteoarthritis Initiative (OAI) database. When
compared with 2 board-certified musculoskeletal radiologists, the automatic CAD application was able to get
segmentation/thickness maps in little over 60 seconds for all of the cases. This observation poses interesting
possibilities for increasing radiologist productivity and confidence, improving patient outcomes, and applying more
sophisticated CAD algorithms to routine orthopedic imaging tasks.
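Cartilage thickness measurement from the bone surface can be approximated as a nearest-point distance, as in this sketch. The actual CAD tool measures thickness from ASM-derived contours on DESS slices, so the plain point-set interface here is an assumption for illustration.

```python
import math

def thickness_map(cartilage_surface, bone_surface):
    """Distance from each cartilage surface point to the nearest bone surface
    point, as a simple stand-in for slice-wise thickness measurement."""
    return [min(math.dist(c, b) for b in bone_surface) for c in cartilage_surface]

def thin_regions(thicknesses, minimum):
    """Indices of surface points where cartilage is abnormally thin."""
    return [i for i, t in enumerate(thicknesses) if t < minimum]
```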
Comparison of two algorithms in the automatic segmentation of blood vessels in fundus images
Author(s):
Robert LeAnder;
Myneni Sushma Chowdary;
Swapnasri Mokkapati;
Scott E. Umbaugh
Show Abstract
Effective timing and treatment are critical to saving the sight of patients with diabetes. A lack of screening, as well as
a shortage of ophthalmologists, contributes to approximately 8,000 cases per year of people losing their sight to
diabetic retinopathy, the leading cause of new cases of blindness [1][2]. Timely treatment for diabetic retinopathy
prevents severe vision loss in over 50% of eyes tested [1]. Fundus images can provide information for detecting and
monitoring eye-related diseases, like diabetic retinopathy, which if detected early, may help prevent vision loss.
Damaged blood vessels can indicate the presence of diabetic retinopathy [9]. So, early detection of damaged vessels in
retinal images can provide valuable information about the presence of disease, thereby helping to prevent vision loss.
Purpose: The purpose of this study was to compare the effectiveness of two blood vessel segmentation algorithms.
Methods: Fifteen fundus images from the STARE database were used to develop two algorithms using the CVIPtools
software environment. Another set of fifteen images was derived from the first fifteen and contained
ophthalmologists' hand-drawn tracings over the retinal vessels. The ophthalmologists' tracings were used as the "gold
standard" for perfect segmentation and compared with the segmented images that were output by the two algorithms.
Comparisons between the segmented and the hand-drawn images were made using Pratt's Figure of Merit (FOM),
Signal-to-Noise Ratio (SNR) and Root Mean Square (RMS) Error. Results: Algorithm 2 has an FOM that is 10%
higher than Algorithm 1. Algorithm 2 has a 6%-higher SNR than Algorithm 1. Algorithm 2 has only 1.3% more RMS
error than Algorithm 1. Conclusions: Algorithm 1 extracted most of the blood vessels with some missing intersections
and bifurcations. Algorithm 2 extracted all the major blood vessels, but eradicated some vessels as well. Algorithm 2
outperformed Algorithm 1 in terms of visual clarity, FOM and SNR. The performances of these algorithms show that
they have an appreciable amount of potential in helping ophthalmologists detect the severity of eye-related diseases
and prevent vision loss.
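Pratt's Figure of Merit, one of the comparison measures used above, can be sketched for two sets of edge-pixel coordinates; the scaling constant alpha = 1/9 is the conventional choice and is assumed here.

```python
def pratt_fom(detected, ideal, alpha=1 / 9):
    """Pratt's Figure of Merit between detected edge pixels and ideal
    (hand-traced) edge pixels. Each detected pixel contributes
    1 / (1 + alpha * d^2), where d is its distance to the nearest ideal
    pixel; 1.0 means perfect agreement."""
    if not detected or not ideal:
        return 0.0
    total = sum(
        1.0 / (1.0 + alpha * min((dx - ix) ** 2 + (dy - iy) ** 2 for ix, iy in ideal))
        for dx, dy in detected
    )
    return total / max(len(detected), len(ideal))
```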
A new registration method with voxel-matching technique for temporal subtraction images
Author(s):
Yoshinori Itai;
Hyoungseop Kim;
Seiji Ishikawa;
Shigehiko Katsuragawa;
Kunio Doi
Show Abstract
A temporal subtraction image, which is obtained by subtracting a previous image from a current one, can be used for
enhancing interval changes on medical images by removing most normal structures. One of the important problems in
temporal subtraction is that subtraction images commonly include artifacts created by slight differences in the size, shape,
and/or location of anatomical structures. In this paper, we developed a new registration method with voxel-matching
technique for substantially removing the subtraction artifacts on the temporal subtraction image obtained from multiple-detector
computed tomography (MDCT). With this technique, the voxel value in a warped (or non-warped) previous
image is replaced by a voxel value within a kernel, such as a small cube centered at a given location, which would be
closest (identical or nearly equal) to the voxel value in the corresponding location in the current image. Our new method
was examined on 16 clinical cases with MDCT images. Preliminary results indicated that interval changes on the
subtraction images were enhanced considerably, with a substantial reduction of misregistration artifacts. The temporal
subtraction images obtained by use of the voxel-matching technique would be very useful for radiologists in the
detection of interval changes on MDCT images.
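The voxel-matching rule described above, replacing a previous-image voxel with the kernel value closest to the current image's value at that location, can be sketched as follows; the sparse dictionary representation of the volume is an assumption for illustration.

```python
def voxel_match(previous, current_value, center, radius=1):
    """Within a (2r+1)^3 kernel of the (warped) previous image around
    `center`, return the voxel value closest to the current image's value at
    that location. `previous` maps (x, y, z) -> intensity."""
    cx, cy, cz = center
    best = None
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                v = previous.get((cx + dx, cy + dy, cz + dz))
                if v is not None and (
                        best is None or abs(v - current_value) < abs(best - current_value)):
                    best = v
    return best
```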
Image-based retrieval system and computer-aided diagnosis system for renal cortical scintigraphy images
Author(s):
Erkan Mumcuoğlu;
Fatih Nar;
Omer Uğur;
M. Fani Bozkurt;
Mehmet Aslan
Show Abstract
Cortical renal (kidney) scintigraphy images are 2D images (256x256) acquired in three projection angles (posterior,
right-posterior-oblique and left-posterior-oblique). These images are used by nuclear medicine specialists to examine
the functional morphology of kidney parenchyma. The main visual features examined in reading the images are: size,
location, shape, and activity distribution (pixel intensity distribution within the boundary of each kidney). Among the
above features, activity distribution (used in finding scars, if any) was found to have the lowest interobserver
reproducibility. Therefore, in this study, we developed an image-based retrieval (IBR) and a computer-aided diagnosis
(CAD) system,
focused on this feature in particular. The developed IBR and CAD algorithms start with automatic segmentation,
boundary and landmark detection. Then, shape and activity distribution features are computed. The activity
distribution feature is obtained using the acquired image and the image-set statistics of normal patients. The Active
Shape Model (ASM) technique is used for more accurate kidney segmentation. In the training step of ASM, normal
patient images are used.
Retrieval performance is evaluated by calculating precision and recall. CAD performance is evaluated by specificity and
sensitivity. To our knowledge, this paper is the first IBR or CAD system reported in the literature on renal cortical
scintigraphy images.
Handheld erythema and bruise detector
Author(s):
Linghua Kong;
Stephen Sprigle;
Mark G. Duckworth;
Dingrong Yi;
Jayme J. Caspall;
Jiwu Wang;
Futing Zhao
Show Abstract
Visual inspection of intact skin is commonly used when assessing persons for pressure ulcers and bruises. Melanin
masks skin discoloration, hindering visual inspection in people with darkly pigmented skin. The objective of this
project is to develop a point-of-care technology capable of detecting erythema and bruises in persons with darkly
pigmented skin. Two significant hardware components, a color filter array and an illumination system, have been
developed and tested. The
color filter array targets four defined wavelengths and has been designed to fit onto a CMOS sensor. The crafting process
generates a multilayer film on a glass substrate using vacuum ion beam splitter and lithographic techniques. The
illumination system is based upon LEDs and targets these same pre-defined wavelengths. Together, these components
are being used to create a small, handheld multispectral imaging device. Compared to other multispectral technologies
(multiple prisms, acousto-optic crystals, and others), the design provides simple, low-cost instrumentation that has
many potential multispectral imaging applications requiring a handheld detector.
Glaucoma diagnosis by mapping macula with Fourier domain optical coherence tomography
Author(s):
Ou Tan;
Ake Lu;
Vik Chopra;
Rohit Varma;
Ishikawa Hiroshi;
Joel Schuman M.D.;
David Huang
Show Abstract
A new image segmentation method was developed to detect macular retinal sub-layers boundary on newly-developed
Fourier-Domain Optical Coherence Tomography (FD-OCT) with macular grid scan pattern. The segmentation results
were used to create thickness map of macular ganglion cell complex (GCC), which contains the ganglion cell dendrites,
cell bodies and axons. Overall average and several pattern analysis parameters were defined on the GCC thickness map
and compared for the diagnosis of glaucoma. Intraclass correlation (ICC) was used to compare the reproducibility of
the parameters. The area under the receiver operating characteristic curve (AROC) was calculated to compare the
diagnostic power. The results were also compared to the output of clinical time-domain OCT (TD-OCT). We found
that GCC-based
parameters had good repeatability and comparable diagnostic power with circumpapillary nerve fiber layer (cpNFL)
thickness. Parameters based on pattern analysis can increase the diagnostic power of GCC macular mapping.
Linear structure verification for medical imaging applications
Author(s):
Shoupu Chen;
Yong Chu;
Yang Zheng
Show Abstract
This paper proposes a method for linear-structure (LS) verification in mammography computer-aided detection (CAD)
systems that aims at reducing post-classification microcalcification (MCC) false-positives (FPs). It is an MCC cluster-driven
method that verifies linear structures with a small rotatable band that is centered on a given MCC cluster
candidate. The classification status of an MCC cluster candidate is changed if its association with a linear structure is
confirmed through LS verification. Four main identifiable features are extracted from the rotatable band
in the gradient-magnitude and Hough parameter spaces. The LS verification process applies cascade rules to the
extracted features to determine if an MCC cluster candidate resides in a linear structure area. The efficiency and efficacy
of the proposed method are demonstrated with results obtained by applying the LS verification method to over one
hundred cancer cases and over one thousand normal cases.
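Gathering linear-structure evidence inside a band via a Hough accumulator can be sketched as below. The bin sizes and the vote-count decision rule are assumptions, and the actual method combines Hough-space features with gradient-magnitude features and cascade rules not reproduced here.

```python
import math

def dominant_line(points, n_theta=36):
    """Hough-transform sketch: accumulate votes over (theta, rho) bins for
    the given pixel coordinates and return the best line parameters with
    their vote count. A vote count close to len(points) suggests the points
    lie on a linear structure."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    (t, rho), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t / n_theta, rho, votes
```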