Resampling method for balancing training data in video analysis
Author(s):
Balathasan Giritharan;
Xiaohui Yuan
Reviewing videos from medical procedures is tedious work that requires concentration for extended hours and
usually involves screening thousands of frames to find only a few positive cases that indicate the probable presence of disease.
Computational classification algorithms are sought to automate the reviewing process. The class imbalance
problem becomes challenging when the learning process is driven by relatively few minority class samples.
Learning algorithms using imbalanced data sets generally produce a large number of false negatives. In this article,
we present an efficient rebalancing method for finding video frames that contain bleeding lesions. The majority
class generally contains clusters of data. Here we cluster the majority class and under-sample each
cluster based on its variance so that useful examples are not lost during the under-sampling process. The
balance of bleeding to non-bleeding frames is restored by the proposed cluster-based under-sampling and by over-sampling
using the Synthetic Minority Over-sampling Technique (SMOTE). Experiments were conducted using
synthetic data and videos manually annotated by medical specialists for obscure bleeding detection. Our method
achieved a high average sensitivity and specificity.
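As an illustration of the rebalancing idea described above, the following is a minimal Python sketch combining variance-weighted under-sampling of majority-class clusters with classic SMOTE over-sampling. It assumes NumPy feature matrices; the cluster count, retention rate, and neighborhood size are illustrative choices, not the authors' settings.

```python
# Minimal sketch of cluster-based under-sampling plus SMOTE over-sampling.
# X_maj / X_min are (n_samples, n_features) arrays; parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def cluster_undersample(X_maj, n_clusters=5, keep_rate=0.5, seed=0):
    """Under-sample each majority-class cluster, keeping more samples from
    high-variance clusters so that informative examples are not lost."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_maj)
    var = np.array([X_maj[labels == c].var() for c in range(n_clusters)])
    weights = var / var.sum()
    kept = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        n_keep = min(len(idx), max(1, int(keep_rate * len(X_maj) * weights[c])))
        kept.append(rng.choice(idx, size=n_keep, replace=False))
    return X_maj[np.concatenate(kept)]

def smote(X_min, n_synthetic, k=5, seed=0):
    """Classic SMOTE: interpolate between minority samples and their neighbors."""
    rng = np.random.default_rng(seed)
    _, nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X_min).kneighbors(X_min)
    base = rng.integers(0, len(X_min), n_synthetic)
    pick = nbrs[base, rng.integers(1, k + 1, n_synthetic)]  # skip self (col 0)
    gap = rng.random((n_synthetic, 1))
    return X_min[base] + gap * (X_min[pick] - X_min[base])
```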
Training variability in the evaluation of automated classifiers
Author(s):
Weijie Chen;
Brandon D. Gallas
The evaluation of automated classifiers in computer-aided diagnosis of medical images often involves a training dataset for classifier design and a test dataset for performance estimation in terms of, e.g., area under the receiver operating characteristic (ROC) curve, or AUC. The traditional approach to assess the uncertainty of the estimated AUC only considers the finite testing set as the source of variability. However, a finite training set is also a random sample and the AUC varies with varying training sets. We categorize the assessment of classifiers into three levels and provide analytical expressions for the variance of the estimated AUC at each level: (1) training treated as a fixed effect, the estimated performance generalizable only to the population of testing sets; (2) training treated as a random effect, the estimated performance generalizable to both the population of training sets and the population of testing sets; (3) training treated as a random effect, performance averaged over training sets generalizable to both the population of training sets and the population of testing sets. The two sources of variability - training and testing - in automated classifiers are analogous to readers and cases in the multi-reader multi-case (MRMC) ROC paradigm in reader studies. We show the one-to-one analogy between the automated classifiers and human readers at these three levels as well as the practical difference in estimating their performance, especially regarding variance.
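A small Monte Carlo simulation can make the three levels concrete. The sketch below is an illustration with synthetic Gaussian data, not the authors' analytical variance expressions: it trains a classifier on many random training sets, evaluates each on many random test sets, and reports an empirical variability for each level.

```python
# Empirical illustration of the three variance levels for a trained classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def draw(n):
    # two-class Gaussian data with unit mean separation
    X = np.r_[rng.normal(0, 1, (n, 2)), rng.normal(1, 1, (n, 2))]
    y = np.r_[np.zeros(n), np.ones(n)]
    return X, y

n_train_sets, n_test_sets = 30, 30
auc = np.empty((n_train_sets, n_test_sets))
for i in range(n_train_sets):
    clf = LogisticRegression().fit(*draw(50))
    for j in range(n_test_sets):
        X_test, y_test = draw(100)
        auc[i, j] = roc_auc_score(y_test, clf.decision_function(X_test))

print("level 1:", auc[0].var())           # fixed training, test sets vary
print("level 2:", auc.var())              # training and test sets both vary
print("level 3:", auc.mean(axis=0).var()) # training-averaged AUC across test sets
```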
Database-guided breast tumor detection and segmentation in 2D ultrasound images
Author(s):
Jingdan Zhang;
Shaohua Kevin Zhou;
Shelby Brunke;
Carol Lowery;
Dorin Comaniciu
Ultrasonography is a valuable technique for diagnosing breast cancer. Computer-aided tumor detection and
segmentation in ultrasound images can reduce labor cost and streamline clinic workflows. In this paper, we
propose a fully automatic system to detect and segment breast tumors in 2D ultrasound images. Our system,
based on database-guided techniques, learns the knowledge of breast tumor appearance exemplified by expert
annotations. For tumor detection, we train a classifier to discriminate between tumors and their background.
For tumor segmentation, we propose a discriminative graph cut approach, where both the data fidelity and
compatibility functions are learned discriminatively. The performance of the proposed algorithms is demonstrated
on a large set of 347 images, achieving a mean contour-to-contour error of 3.75 pixels in about 4.33 seconds.
Perception-driven IT-CADe analysis for the detection of masses in screening mammography: initial investigation
Author(s):
Georgia D. Tourassi;
Maciej A. Mazurowski;
Elizabeth A. Krupinski
We have previously reported an interactive information-theoretic CADe (IT-CADe) system for the detection of masses
in screening mammograms. The system operates in either traditional static mode or in interactive mode whenever the
user requests a second opinion. In this study we report preliminary investigation of a new paradigm of clinical
integration, guided by the user's eye-gazing and reporting patterns. An observer study was conducted in which 6
radiologists evaluated 20 mammographic cases while wearing a head-mounted eye-tracking device. For each radiologist-reported
location, eye-gazing data were collected. Image locations that attracted prolonged dwelling (>1000 msec) but
were not reported were also recorded. Fixed-size regions of interest (ROIs) were extracted around all of the above locations and
analyzed using the IT-CADe system. Preliminary analysis showed that IT-CADe correctly confirmed 100% of reported
true mass locations while eliminating 12.5% of the reported false positive locations. For unreported locations that
attracted long dwelling, IT-CADe identified 4/6 false negative errors (i.e., errors of decision) while overcalling 8/84 TN
decisions. Finally, for missed true masses that attracted short (i.e., errors of recognition) or no dwelling at all (i.e., errors
of search), IT-CADe detected 5/8 of them. These results suggest that IT-CADe customization to the user's eye-gazing
and reporting pattern could potentially help delineate the various sources of diagnostic error (search, recognition,
decision) for each individual user and provide targeted decision support, thus improving the human-CAD synergy.
Joint segmentation and spiculation detection for ill-defined and spiculated mammographic masses
Author(s):
Yimo Tao;
Shih-Chung B. Lo;
Matthew T. Freedman;
Jianhua Xuan
A learning-based approach integrating the use of pixel level statistical modeling and spiculation detection is
presented for the segmentation of mammographic masses with ill-defined margins and spiculations. The algorithm
involves a multi-phase pixel-level classification, using a comprehensive group of regional features, to generate a
pixel-level mass-conditional probability map (PM). Then, the mass candidate along with background clutters is
extracted from the PM by integrating the prior knowledge of shape and location of masses. A multi-scale steerable
ridge detection algorithm is employed to detect spiculations. Finally, all the object level findings, including mass
candidate, detected spiculations, and clutters, along with the PM are integrated by graph cuts to generate the
final segmentation mask. The method was tested on 54 masses (51 malignant and 3 benign), all with ill-defined
margins and irregular shape or spiculations. The ground truth delineations were provided by five experienced
radiologists. Area overlap ratios of 0.766 (±0.144) and 0.642 (±0.173) were obtained for segmenting the whole mass
and only the margin portion, respectively. Williams index values of area- and contour-based measurements indicated
that the segmentation results of the algorithm agreed well with the radiologists' delineations. Most importantly, the
proposed approach is capable of including the mass margin and its extension, which are considered key features
for breast lesion analyses.
Detection of architectural distortion in prior mammograms using fractal analysis and angular spread of power
Author(s):
Shantanu Banik;
Rangaraj M. Rangayyan;
J. E. L. Desautels
This paper presents methods for the detection of architectural distortion in mammograms of interval-cancer cases
taken prior to the diagnosis of breast cancer, using Gabor filters, phase portrait analysis, fractal dimension (FD),
and analysis of the angular spread of power in the Fourier spectrum. In the estimation of FD using the Fourier
power spectrum, only the distribution of power over radial frequency is considered; the information regarding
the angular spread of power is ignored. In this study, the angular spread of power in the Fourier spectrum is
used to generate features for the detection of spiculated patterns related to architectural distortion. Using Gabor
filters and phase portrait analysis, a total of 4224 regions of interest (ROIs) were automatically obtained from
106 prior mammograms of 56 interval-cancer cases, including 301 ROIs related to architectural distortion, and
from 52 mammograms of 13 normal cases. For each ROI, the FD and measures of the angular spread of power
were computed. Feature selection was performed using stepwise logistic regression. The best result achieved,
in terms of the area under the receiver operating characteristic curve, is 0.75 ± 0.02 with an artificial neural
network including radial basis functions. Analysis of the performance of the methods with free-response receiver
operating characteristics indicated a sensitivity of 0.82 at 7.7 false positives per image.
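The two spectral quantities the abstract combines can be sketched directly: a fractal dimension estimated from the radial decay of the Fourier power spectrum, and a measure of how power is spread over orientation. The array name, bin count, and the fBm-based FD relation below are assumptions for illustration, not the authors' exact implementation.

```python
# Sketch of FD from the radial power-spectrum decay plus an angular-spread
# feature for a 2D ROI `roi`; parameters are illustrative placeholders.
import numpy as np

def spectral_features(roi, n_angle_bins=18):
    F = np.fft.fftshift(np.fft.fft2(roi - roi.mean()))
    P = np.abs(F) ** 2
    h, w = roi.shape
    y, x = np.indices((h, w))
    cy, cx = h // 2, w // 2
    r = np.hypot(y - cy, x - cx)
    theta = np.arctan2(y - cy, x - cx) % np.pi   # fold orientation to [0, pi)

    # Radial decay: for fractional Brownian surfaces P(f) ~ f^(-beta) with
    # FD = (8 - beta) / 2; the fitted log-log slope equals -beta.
    mask = (r > 1) & (r < min(cy, cx))
    slope, _ = np.polyfit(np.log(r[mask]), np.log(P[mask] + 1e-12), 1)
    fd = (8 + slope) / 2

    # Angular spread: entropy of power over orientation bins; spiculated
    # patterns spread power over many orientations.
    hist, _ = np.histogram(theta[mask], bins=n_angle_bins, weights=P[mask])
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return fd, entropy
```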
A comparative study of volumetric breast density estimation in digital mammography and magnetic resonance imaging: results from a high-risk population
Author(s):
Despina Kontos;
Ye Xing;
Predrag R. Bakic;
Emily F. Conant;
Andrew D. A. Maidment
We performed a study to compare methods for volumetric breast density estimation in digital mammography (DM) and
magnetic resonance imaging (MRI) for a high-risk population of women. DM and MRI images of the unaffected breast
from 32 women with recently detected abnormalities and/or previously diagnosed breast cancer (age range 31-78 yrs,
mean 50.3 yrs) were retrospectively analyzed. DM images were analyzed using Quantra™ (Hologic Inc.). The MRI
images were analyzed using a fuzzy-C-means segmentation algorithm on the T1 map. Both methods were compared to
Cumulus (Univ. Toronto). Volumetric breast density estimates from DM and MRI are highly correlated (r=0.90,
p≤0.001). The correlation between the volumetric and the area-based density measures is lower and depends on the
training background of the Cumulus software user (r=0.73-0.84, p≤0.001). In terms of absolute values, MRI provides the
lowest volumetric estimates (mean=14.63%), followed by the DM volumetric (mean=22.72%) and area-based measures
(mean=29.35%). The MRI estimates of the fibroglandular volume are statistically significantly lower than the DM
estimates for women with very low-density breasts (p≤0.001). We attribute these differences to potential partial volume
effects in MRI and differences in the computational aspects of the image analysis methods in MRI and DM. The good
correlation between the volumetric and the area-based measures, shown to correlate with breast cancer risk, suggests
that both DM and MRI volumetric breast density measures can aid in breast cancer risk assessment. Further work is
underway to fully investigate the association between volumetric breast density measures and breast cancer risk.
Association of a mammographic parenchymal pattern (MPP) descriptor with breast cancer risk: a case-control study
Author(s):
Jun Wei;
Heang-Ping Chan;
Chuan Zhou;
Mark A. Helvie;
Lubomir M. Hadjiiski;
Berkman Sahiner
We are investigating the feasibility of improving breast cancer risk prediction by computerized mammographic
parenchymal pattern (MPP) analysis. A case-control study was conducted to investigate the association of the MPP
measures with breast cancer risk. The case group included 168 contralateral CC-view mammograms of breast cancer
patients dated at least one year prior to cancer diagnosis, and the control group included 522 CC-view mammograms
from one breast of normal subjects. We extracted and compared four types of statistical texture feature spaces that
included run length statistics and region size statistics (RLS/RSS) features, spatial gray level dependence (SGLD)
features, gray level difference statistics (GLDS) features, and the feature space combining these three types of texture
features. A linear discriminant analysis (LDA) classifier with stepwise feature selection was trained and tested with
leave-one-case-out resampling to evaluate whether the breast parenchyma of future cancer patients could be
distinguished from those of normal subjects in each feature space. The areas under ROC curves (Az) were 0.71, 0.72,
0.71 and 0.76 for the four feature spaces, respectively. The Az obtained from the combined feature space was
significantly (p<0.05) higher than those from the individual feature spaces. Odds ratios (ORs) were used to assess the
association between breast cancer risk and four categories of MPP measures: <0.1 (C1), 0.1-0.15 (C2), 0.15-0.2 (C3),
and >0.2 (C4), while patient age was treated as a confounding factor. The adjusted ORs of breast cancer for C2, C3 and
C4 were 3.23, 7.77 and 25.43, respectively. The preliminary result indicated that our proposed computerized MPP
measures were strongly associated with breast cancer risk.
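The leave-one-case-out evaluation above maps onto a short scikit-learn loop. A minimal sketch assuming one feature row per case (so leave-one-out equals leave-one-case-out); the stepwise feature selection, which would need to be repeated inside each fold, is omitted.

```python
# Sketch of LDA with leave-one-case-out resampling; X is (n_cases, n_features),
# y holds binary case/control labels. Per-fold feature selection is omitted.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

def loo_lda_az(X, y):
    scores = np.empty(len(y))
    for train, test in LeaveOneOut().split(X):
        lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
        scores[test] = lda.decision_function(X[test])
    return roc_auc_score(y, scores)  # Az from the pooled left-out scores
```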
Projection-based features for reducing false positives in computer-aided detection of colonic polyps in CT colonography
Author(s):
Hongbin Zhu;
Matthew Barish;
Perry Pickhardt;
Yi Fan;
Erica Posniak;
Robert Richards;
Zhengrong Liang
A large number of false positives (FPs) generated by computer-aided detection schemes is likely to distract radiologists'
attention and decrease their interpretation efficiency. Therefore, it is desirable to reduce as many FPs as possible to
increase the detection specificity while maintaining the high detection sensitivity. In this paper, several features are
extracted from the projected images of each initial polyp candidate to differentiate FPs from true positives. These
features demonstrate the potential to exclude different types of FPs, like haustral folds, rectal tubes and residue stool by
an evaluation using a database of 325 patient studies (from two different institutions) which includes 556 scans at supine
and/or prone positions with 347 polyps and masses sized from 5 to 60 mm. For comparison purposes, several well-established
features are used to generate a baseline reference. At the by-polyp detection sensitivity level of 96% (no loss
of detection sensitivity), the number of FPs per scan is 7.8 by the baseline and 3.75 if the new projection features are
added, which is a reduction of 51.9% FPs from the baseline.
Dual-energy electronic cleansing for non-cathartic CT colonography: a phantom study
Author(s):
Wenli Cai;
Bob Liu;
Hiroyuki Yoshida
Partial volume effect and inhomogeneity are two major causes of artifacts in electronic cleansing (EC) for non-cathartic
CT colonography (CTC). Our purpose was to develop a novel method of EC for non-cathartic dual-energy CTC (DE-CTC)
using a subvoxel multi-spectral material classifier and a regional material decomposition method for
differentiation of residual fecal materials from colonic soft-tissue structures. In this study, an anthropomorphic colon
phantom, which was filled with a mixture of aqueous fiber (psyllium), ground foodstuff (cereal), and non-ionic iodinated
agent (Omnipaque iohexol, GE Healthcare, Milwaukee, WI), was scanned by a dual-energy CT scanner (SOMATON,
Siemens) with two photon energies: 80 kVp and 140 kVp. The DE-CTC images were subjected to a dual-energy EC
(DE-EC) scheme, in which a multi-spectral material classifier was used to compute the fraction of each material within
one voxel by an expectation-maximization (EM) algorithm. This was followed by a regional material segmentation
method for identifying homogeneous sub-regions (tiles) as fecal materials distinct from other tissue types. The results were
compared with the structural-analysis cleansing (SA-EC) method based upon the CTC images of the native phantom without
fillings. The mean cleansing ratio of the DE-EC scheme was 96.57±1.21%, compared to 76.3±5.56% for the SA-EC
scheme. The soft-tissue preservation ratio of the DE-EC scheme was 97.05±0.64%, compared to 99.25±0.77% for the
SA-EC scheme.
Prediction of polyp histology on CT colonography using content-based image retrieval
Author(s):
Javed M. Aman;
Jianhua Yao;
Ronald M. Summers
Predicting the malignancy of colonic polyps is a difficult problem and in general requires an invasive polypectomy
procedure. We present a less-invasive and automated method to predict the histology of colonic polyps under computed
tomographic colonography (CTC) using the content-based image retrieval (CBIR) paradigm. For the purpose of
simplification, polyps annotated as hyperplastic or "other benign" were classified as benign polyps (BP) and the rest
(adenomas and cancers) were classified as malignant polyps (MP). The CBIR uses numerical feature vectors generated
from our CTC computer aided detection (CTC-CAD) system to describe the polyps. These features relate to physical and
visual characteristics of the polyp. A representative database of CTC-CAD polyp images is created. Query polyps are
matched with those in the database and the results are ranked based on the similarity to the query. Polyps with a majority
of representative MPs in their result set are predicted to be malignant and similarly those with a majority of BPs in their
results are benign. For evaluation, the system is compared to the typical optical colonoscopy (OC) size-based
classification. Using receiver operating characteristic (ROC) curve analysis, we show that our system performs
significantly better than the OC size method.
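The retrieve-and-vote step reduces to a nearest-neighbor query followed by a majority decision. Below is a minimal sketch assuming precomputed CAD feature vectors and binary labels; the choice of k and the Euclidean metric are placeholders.

```python
# Sketch of CBIR-based histology prediction; db_features/db_labels are
# placeholder arrays of CAD feature vectors and labels (1 = malignant).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def predict_histology(query, db_features, db_labels, k=9):
    nn = NearestNeighbors(n_neighbors=k).fit(db_features)
    _, idx = nn.kneighbors(query.reshape(1, -1))
    return int(db_labels[idx[0]].mean() > 0.5)  # majority vote over matches
```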
Matching colonic polyps using correlation optimized warping
Author(s):
Shijun Wang;
Jianhua Yao;
Nicholas Petrick;
Ronald M. Summers
Computed tomographic colonography (CTC) combined with a computer aided detection system has the potential for
improving colonic polyp detection and increasing the use of CTC for colon cancer screening. In the clinical use of CTC,
a true colonic polyp will be confirmed with high confidence if a radiologist can find it on both the supine and prone
scans. To assist radiologists in CTC reading, we propose a new method for matching polyp findings on the supine and
prone scans. The method performs a colon registration using four automatically identified anatomical salient points and
correlation optimized warping (COW) of colon centerline features. We first exclude false positive detections using
prediction information from a support vector machine (SVM) classifier committee to reduce initial false positive pairs.
Then each remaining CAD detection is mapped to the other scan using the COW technique applied to the distance along the
centerline in each colon. In the last step, a new SVM classifier is applied to the candidate pair dataset to find true polyp
pairs between supine and prone scans. Experimental results show that our method can improve the sensitivity to 0.87 at 4
false positive pairs per patient compared with 0.72 for a competing method that uses the normalized distance along the
colon centerline (p<0.01).
Automated segmentation of reference tissue for prostate cancer localization in dynamic contrast enhanced MRI
Author(s):
Pieter C. Vos;
Thomas Hambrock M.D.;
Jelle O. Barentsz M.D.;
Henkjan J. Huisman
For pharmacokinetic (PK) analysis of Dynamic Contrast Enhanced (DCE) MRI the arterial input function
needs to be estimated. Previously, we demonstrated that PK parameters have a significantly better discriminative
performance when per-patient reference tissue was used, but this required manual annotation of reference tissue. In
this study we propose a fully automated reference tissue segmentation method that tackles this limitation. The
method was tested with our Computer Aided Diagnosis (CADx) system to study the effect on the discriminating
performance for differentiating prostate cancer from benign areas in the peripheral zone (PZ).
The proposed method automatically segments normal PZ tissue from DCE derived data. First, the bladder
is segmented in the start-to-enhance map using the Otsu histogram threshold selection method. Second, the
prostate is detected by applying a multi-scale Hessian filter to the relative enhancement map. Third, normal
PZ tissue was segmented by thresholding and morphological operators. The resulting segmentation was used as
reference tissue to estimate the PK parameters. In 39 consecutive patients, carcinoma, benign, and normal tissue
were annotated on MR images by a radiologist and a researcher using whole mount step-section histopathology
as reference. PK parameters were computed for each ROI. Features were extracted from the set of ROIs using
percentiles to train a support vector machine that was used as classifier. Prospective performance was estimated
by means of leave-one-patient-out cross validation. A bootstrap resampling approach with 10,000 iterations was
used for estimating the bootstrap mean AUCs and 95% confidence intervals.
In total 42 malignant, 29 benign and 37 normal regions were annotated. For all patients, normal PZ was
successfully segmented. Differentiating malignant from benign lesions using a conventional general patient plasma
profile yielded a diagnostic accuracy of 0.64 (0.53-0.74). Using the automated per-patient calibration method, the
diagnostic performance improved significantly to 0.76 (0.67-0.86, p=0.017), whereas the manual per-patient
calibration showed a diagnostic performance of 0.79 (0.70-0.89, p=0.01).
In conclusion, the results show that an automated per-patient reference tissue PK model is feasible. A
significantly better discriminating performance compared to the conventional general calibration was obtained
and the diagnostic accuracy is similar to using manual per-patient calibration.
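As a rough illustration of the thresholding steps in the pipeline (Otsu selection followed by morphological clean-up), the sketch below segments the brightest structure in a 3D map. The input array, opening size, and largest-component heuristic are assumptions, not the authors' exact operators.

```python
# Rough sketch of Otsu thresholding plus morphological clean-up on a 3D map.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_bright_region(volume, opening_iterations=2):
    mask = volume > threshold_otsu(volume)          # Otsu histogram threshold
    mask = ndimage.binary_opening(mask, iterations=opening_iterations)
    labels, n = ndimage.label(mask)                 # keep largest component
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))
```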
Automatic classification of pathological myopia in retinal fundus images using PAMELA
Author(s):
Jiang Liu;
Damon W. K. Wong;
Ngan Meng Tan;
Zhuo Zhang;
Shijian Lu;
Joo Hwee Lim;
Huiqi Li;
Seang Mei Saw;
Louis Tong;
Tien Yin Wong
Pathological myopia is the seventh leading cause of blindness. We introduce a framework based on PAMELA
(PAthological Myopia dEtection through peripapilLary Atrophy) for the detection of pathological myopia from fundus
images. The framework consists of a pre-processing stage which extracts a region of interest centered on the optic disc.
Subsequently, three analysis modules focus on detecting specific visual indicators. The optic disc tilt ratio module gives
a measure of the axial elongation of the eye through inference from the deformation of the optic disc. In the texture-based
ROI assessment module, contextual knowledge is used to demarcate the ROI into four distinct, clinically-relevant
zones in which information from an entropy transform of the ROI is analyzed and metrics generated. In particular, the
preferential appearance of peripapillary atrophy (PPA) in the temporal zone compared to the nasal zone is utilized by
calculating ratios of the metrics. The PPA detection module obtains an outer boundary through a level-set method, and
subtracts this region against the optic disc boundary. Temporal and nasal zones are obtained from the remnants to
generate associated hue and color values. The outputs of the three modules are used as inputs to an SVM model to determine the
presence of pathological myopia in a retinal fundus image. Using images from the Singapore Eye Research Institute, the
proposed framework reported an optimized accuracy of 90% and a sensitivity and specificity of 0.85 and 0.95
respectively, indicating promise for the use of the proposed system as a screening tool for pathological myopia.
Effects of image compression and degradation on an automatic diabetic retinopathy screening algorithm
Author(s):
C. Agurto;
S. Barriga;
V. Murray;
M. Pattichis;
P. Soliz
Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic
methods for detection of the disease have been developed in recent years, most of them addressing the
segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does not
approach the problem through the segmentation of lesions. The algorithm discriminates non-diseased retinal
images from those with pathology based on textural features obtained using multiscale Amplitude
Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features
that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of
280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image
compression and degradation, which will be present in most actual clinical or screening environments.
Results show that the algorithm is insensitive to illumination variations, but high rates of compression and
large blurring effects degrade its performance.
Automatic determination of the artery-vein ratio in retinal images
Author(s):
Meindert Niemeijer;
Bram van Ginneken;
Michael D. Abràmoff
A lower ratio between the width of the arteries and veins (Arteriolar-to-Venular diameter Ratio, AVR) on the
retina is well established to be predictive of stroke and other cardiovascular events in adults, as well as of an
increased risk of retinopathy of prematurity in premature infants. This work presents an automatic method that
detects the location of the optic disc, determines the appropriate region of interest (ROI), classifies the vessels
in the ROI into arteries and veins, measures their widths and calculates the AVR. After vessel segmentation
and vessel width determination, the optic disc is located and the system eliminates all vessels outside the AVR
measurement ROI. The remaining vessels are thinned, and vessel crossing and bifurcation points are removed, leaving
a set of vessel segments containing centerline pixels. Features extracted from each centerline pixel are used to
assign it a soft label indicating the likelihood that the pixel is part of a vein. As all centerline pixels
in a connected segment should be the same type, the median soft label is assigned to each centerline pixel in
the segment. Next, artery-vein pairs are matched using an iterative algorithm, and the widths of the vessels are
used to calculate the AVR. We train and test the algorithm using a set of 25 high-resolution digital color fundus
photographs and a reference standard that indicates, for the major vessels in the images, whether they are an artery or
a vein. We compared the AVR values produced by our system with those determined using a computer assisted
method in 15 high resolution digital color fundus photographs and obtained a correlation coefficient of 0.881.
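Two of the steps above are simple enough to sketch directly: propagating the median soft label to every centerline pixel of a connected segment, and forming the AVR from matched vessel widths. Array names are placeholders, and the mean-width ratio is a simplification of formal CRAE/CRVE-style formulas.

```python
# Sketch of per-segment median soft labels and a simple AVR. `soft` holds a
# vein likelihood per centerline pixel, `segment_id` its connected segment.
import numpy as np

def median_relabel(soft, segment_id):
    out = soft.copy()
    for s in np.unique(segment_id):
        out[segment_id == s] = np.median(soft[segment_id == s])
    return out

def avr(artery_widths, vein_widths):
    # ratio of summary artery to vein calibers for the matched pairs
    return np.mean(artery_widths) / np.mean(vein_widths)
```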
Automated detection and classification of major retinal vessels for determination of diameter ratio of arteries and veins
Author(s):
Chisako Muramatsu;
Yuji Hatanaka;
Tatsuhiko Iwase;
Takeshi Hara;
Hiroshi Fujita
Abnormalities of the retinal vasculature can indicate health conditions in the body, such as high blood pressure and
diabetes. Providing an automatically determined width ratio of arteries and veins (A/V ratio) on retinal fundus images may
help physicians in the diagnosis of hypertensive retinopathy, which may cause blindness. The purpose of this study was
to detect major retinal vessels and classify them into arteries and veins for the determination of A/V ratio. Images used in
this study were obtained from the DRIVE database, which consists of 20 cases each for training and testing vessel detection
algorithms. Starting with the reference standard of vasculature segmentation provided in the database, major arteries and
veins each in the upper and lower temporal regions were manually selected for establishing the gold standard. We
applied the black top-hat transformation and double-ring filter to detect retinal blood vessels. From the extracted vessels,
large vessels extending from the optic disc to temporal regions were selected as target vessels for calculation of A/V
ratio. Image features were extracted from the vessel segments from quarter-disc to one disc diameter from the edge of
optic discs. The target segments in the training cases were classified into arteries and veins by using the linear
discriminant analysis, and the selected parameters were applied to those in the test cases. Out of 40 pairs, 30 pairs (75%)
of arteries and veins in the 20 test cases were correctly classified. The result can be used for the automated calculation of
A/V ratio.
Use of a twin dataset to identify AMD-related visual patterns controlled by genetic factors
Author(s):
Gwénolé Quellec;
Michael D. Abràmoff;
Stephen R. Russell
The mapping of genotype to the phenotype of age-related macular degeneration (AMD) is expected to improve the
diagnosis and treatment of the disease in the near future. In this study, we focused on the first step to discovering this
mapping: we identified visual patterns related to AMD which seem to be controlled by genetic factors, without explicitly
relating them to the genes. For this purpose, we used a dataset of eye fundus photographs from 74 twin pairs, either
monozygotic twins, who have the same genotype, or dizygotic twins, whose genes responsible for AMD are less likely to
be identical. If we are able to differentiate monozygotic twins from dizygotic twins, based on a given visual pattern, then
this pattern is likely to be controlled by genetic factors. The main visible consequence of AMD is the appearance of
drusen between the retinal pigment epithelium and Bruch's membrane. We developed two automated drusen detectors
based on the wavelet transform: a shape-based detector for hard drusen, and a texture- and color- based detector for soft
drusen. Forty visual features were evaluated at the location of the automatically detected drusen. These features
characterize the texture, the shape, the color, the spatial distribution, or the amount of drusen. A distance measure
between twin pairs was defined for each visual feature; a smaller distance should be measured between monozygotic
twins for visual features controlled by genetic factors. The predictions of several visual features (75.7% accuracy) are
comparable to or better than the predictions of human experts.
Auto-biometric for M-mode echocardiography
Author(s):
Wei Zhang;
Jinhyong Park;
S. Kevin Zhou
In this paper we present a system for fast and accurate detection of anatomical structures (calipers) in M-mode images.
The task is challenging because of dramatic variations in their appearances. We propose to solve the problem in a
progressive manner, which ensures both robustness and efficiency. The system first obtains a rough caliper localization
using the intensity profile image, and then runs a constrained search for accurate caliper positions. Markov Random Field (MRF) and
warping image detectors are used for jointly considering appearance information and the geometric relationship between
calipers. Extensive experiments show that our system achieves more accurate results and uses less time in comparison
with previously reported work.
Automatic coronary calcium scoring in low-dose non-ECG-synchronized thoracic CT scans
Author(s):
Ivana Isgum;
Mathias Prokop;
Peter C. Jacobs;
Martijn J. Gondrie;
Willem P. Th. M. Mali;
Max A. Viergever;
Bram van Ginneken
This work presents a system for automatic coronary calcium scoring and cardiovascular risk stratification in
thoracic CT scans.
Data were collected from a Dutch-Belgian lung cancer screening trial. In 121 low-dose, non-ECG-synchronized,
non-contrast-enhanced thoracic CT scans, an expert scored coronary calcifications manually. A key element of
the proposed algorithm is that the approximate position of the coronary arteries was inferred with a probabilistic
coronary calcium atlas. This atlas was created with atlas-based segmentation from 51 scans and their manually
identified calcifications, and was registered to each unseen test scan. In the test scans all objects with density
above 130 HU were considered candidates that could represent coronary calcifications. A statistical pattern
recognition system was designed to classify these candidates using features that encode their spatial position
relative to the inferred position of the coronaries obtained from the atlas registration. In addition, size and
texture features were computed for all candidates. Two consecutive classifiers were used to label each candidate.
The system was trained with 35 and tested with another 35 scans. The detected calcifications were quantified
and cardiovascular risk was determined for each subject.
The system detected 71% of coronary calcifications with an average of 0.9 false positive objects per scan.
Cardiovascular risk category was correctly assigned to 29 out of 35 subjects (83%). Five scans (14%) were one
category off, and only one scan (3%) was two categories off.
We conclude that automatic assessment of the cardiovascular risk from low-dose, non-ECG synchronized
thoracic CT scans appears feasible.
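The candidate-generation stage can be sketched compactly: threshold at 130 HU, label connected objects, and attach size, density, and atlas-distance features to each. The distance map from the registered atlas is assumed to be precomputed; all names are illustrative.

```python
# Sketch of calcium candidate generation from a HU volume. The distance map
# `dist_to_coronaries` is an assumed product of the atlas registration.
import numpy as np
from scipy import ndimage

def calcium_candidates(ct_hu, dist_to_coronaries, threshold=130):
    labels, n = ndimage.label(ct_hu > threshold)
    candidates = []
    for obj in range(1, n + 1):
        m = labels == obj
        candidates.append({
            "volume_vox": int(m.sum()),                         # size feature
            "mean_hu": float(ct_hu[m].mean()),                  # density feature
            "atlas_dist": float(dist_to_coronaries[m].mean()),  # spatial feature
        })
    return labels, candidates
```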
A hybrid CPU-GPU framework for quantitative follow-up of abdominal aortic aneurysm volume by CT angiography
Author(s):
Claude Kauffmann;
An Tang;
Eric Therasse;
Gilles Soulez
We developed a hybrid CPU-GPU framework enabling semi-automated segmentation of abdominal aortic aneurysm
(AAA) on Computed Tomography Angiography (CTA) examinations. AAA maximal diameter (D-max) and volume
measurements and their progression between 2 examinations can be generated by this software, improving patient follow-up.
In order to improve workflow efficiency, some segmentation tasks were implemented and executed on the
graphics processing unit (GPU). A GPU-based algorithm is used to automatically segment the lumen of the aneurysm
within short computing time. In a second step, the user interacted with the software to validate the boundaries of the
intra-luminal thrombus (ILT) on GPU-based curved image reformation. Automatic computation of D-max and volume
were performed on the 3D AAA model. Clinical validation was conducted on 34 patients having 2 consecutive MDCT
examinations within a minimum interval of 6 months. The AAA segmentation was performed twice by an experienced
radiologist (reference standard) and once by 3 unsupervised technologists on all 68 MDCT examinations. The ICC for intra-observer
reproducibility was 0.992 (≥0.987) for D-max and 0.998 (≥0.994) for volume measurement. The ICC for inter-observer
reproducibility was 0.985 (0.977-0.990) for D-max and 0.998 (0.996-0.999) for volume measurement. Semi-automated
AAA segmentation for volume follow-up was more than twice as sensitive as D-max follow-up, while providing
equivalent reproducibility.
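The ICC values reported above can be reproduced with a generic two-way random-effects, absolute-agreement formula (Shrout-Fleiss ICC(2,1)); the sketch below is that textbook formula, not the specific statistics software used in the study.

```python
# Generic Shrout-Fleiss ICC(2,1); Y is an (n subjects x k raters) matrix.
import numpy as np

def icc_2_1(Y):
    n, k = Y.shape
    grand = Y.mean()
    msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
    sse = ((Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```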
Automated segmentation and tracking of coronary arteries in cardiac CT scans: comparison of performance with a clinically used commercial software
Author(s):
Chuan Zhou;
Heang-Ping Chan;
Aamer Chughtai;
Smita Patel;
Lubomir M. Hadjiiski;
Berkman Sahiner;
Jun Wei;
Ella A. Kazerooni
Coronary CT angiography (cCTA) has been reported to be an effective means for diagnosis of coronary artery disease.
We are investigating the feasibility of developing a computer-aided detection (CADe) system to assist radiologists in
detection of non-calcified plaques in coronary arteries in ECG-gated cCTA scans. In this study, we developed a
prototype vessel segmentation and tracking method to extract the coronary arterial trees which will define the search
space for plaque detection. Vascular structures are first enhanced by 3D multi-scale filtering and analysis of the
eigenvalues of Hessian matrices using a vessel enhancement response function specifically designed for coronary
arteries. The enhanced vascular structures are then segmented by an EM estimation method. The segmented coronary
arteries are tracked using a 3D dynamic balloon tracking (DBT) method. For this preliminary study, two starting seed
points were manually identified at the origins of the left and right coronary artery (LCA and RCA). The DBT method
automatically moves along the vessel a sphere whose diameter is adjusted dynamically based on the local vessel size,
tracks the vessels, and identifies their branches to generate the left and right coronary arterial trees. The algorithm was
applied to 20 cCTA scans that contained various degrees of coronary artery diseases. To evaluate the performance of
vessel segmentation and tracking, the rendered volume of coronary arteries tracked by our algorithm was displayed on
a PC, placed next to a GE Advantage workstation on which the coronary arterial trees tracked by the GE software and
the original cCTA scan were displayed. Two experienced thoracic radiologists visually examined the coronary arteries
on the cCTA scan and the segmented vessels to count untracked false-negative (FN) segments and false positives
(FPs). The comparison was made by radiologists' visual judgment because the digital files for the segmented vessels
were not accessible on the commercial system. A total of 19 and 38 artery segments were identified as FNs, and 23
and 20 FPs were found, in the coronary trees tracked by our algorithm and the GE software, respectively. The
preliminary results demonstrated the feasibility of our approach.
MACD: an imaging marker for cardiovascular disease
Author(s):
Melanie Ganz;
Marleen de Bruijne;
Mads Nielsen
Despite general acceptance that a healthy lifestyle and the treatment of risk factors can prevent the development
of cardiovascular diseases (CVD), CVD are the most common cause of death in Europe and the United States.
It has been shown that abdominal aortic calcifications (AAC) correlate strongly with coronary artery calcifications.
Hence an early detection of aortic calcified plaques helps to predict the risk of related coronary diseases.
Also, since two thirds of adverse events have no prior symptoms, the ability to screen for risk with low-cost
imaging is important. To this end, the Morphological Atherosclerotic Calcification Distribution (MACD) index
was developed.
In the following, several potential severity scores relating to the geometrical outline of the calcified deposits
in the lumbar aortic region are introduced. Their individual as well as their combined predictive power is examined,
and a combined marker, MACD, is constructed. This is done using Cox regression analysis, also known as
survival analysis. Furthermore, we show that the Cox regression yields MACD as the most efficient marker. We
also demonstrate that MACD has larger individual predictive power than any of the other individual imaging
markers described. Finally, we show that the MACD index predicts cardiovascular death with a hazard ratio
of approximately four.
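A hazard ratio of this kind comes straight out of a fitted Cox proportional-hazards model. The sketch below uses the lifelines package on synthetic data; the column names and the data-generating choices are illustrative assumptions, not the study's cohort.

```python
# Hedged sketch of a Cox proportional-hazards fit with lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
marker = rng.normal(size=n)                          # stand-in for a MACD-like score
event_time = rng.exponential(scale=np.exp(-marker))  # hazard grows with marker
censor_time = rng.exponential(scale=2.0, size=n)
df = pd.DataFrame({
    "time": np.minimum(event_time, censor_time),
    "event": (event_time <= censor_time).astype(int),
    "marker": marker,
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)   # exp(coefficient) per covariate
```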
Automated classification of lymph nodes in USPIO-enhanced MR-images: a comparison of three segmentation methods
Author(s):
Oscar A. Debats;
Nico Karssemeijer;
Jelle O. Barentsz;
Henkjan Huisman
Computer assisted detection (CAD) of lymph node metastases may help reduce reading time and improve
interpretation of the large amount of image data in an MR-lymphography exam. We compared the influence
of using different segmentation methods on the performance of a CAD system for classification of normal and
metastasized lymph nodes. Our database consisted of T1 and T2*-weighted pelvic MR images of 603 lymph
nodes, enhanced by USPIO contrast medium. For each lymph node, one seed point was manually defined.
Three automated segmentation methods were compared: 1. Confidence Connected segmentation, extended with
automated Bandwidth Factor selection; 2. Conventional Graph Cut segmentation; 3. Pseudo-segmentation by
selecting a sphere around the seed point. All lymph nodes were also manually segmented by a radiologist. The
resulting segmentations were used to calculate 2 features (mean T1 and T2* signal intensity). Linear discriminant
analysis was used for classification. The diagnostic accuracy (AUC at ROC analysis) was 0.95 (Confidence
Connected), 0.95 (Graph Cut), 0.85 (spheres), and 0.95 (manual segmentations). The CAD performance of both
the Confidence Connected and Graph Cut methods was as good as the manual segmentation. The substantially
lower performance of the sphere segmentations demonstrates the need for accurate segmentations, even in USPIO-enhanced
images.
CT liver volumetry using geodesic active contour segmentation with a level-set algorithm
Author(s):
Kenji Suzuki;
Mark L. Epstein;
Ryan Kohlbrenner;
Ademola Obajuluwa;
Jianwu Xu;
Masatoshi Hori M.D.;
Richard Baron M.D.
Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar
density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We
developed an automated volumetry scheme for the liver in CT based on a 5-step schema. First, an anisotropic smoothing
filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an
edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching
algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour
segmentation algorithm coupled with level-set contour-evolution refined the initial surface so as to more precisely fit the
liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen
prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated
liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean
liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean
absolute difference of 104 cc (7.0%). CT liver volumetrics based on the automated scheme agreed excellently with "gold-standard"
manual volumetrics (intra-class correlation coefficient of 0.95) with no statistically significant difference
(p(F≤f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate
way of measuring liver volumes.
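For readers who want to try the refinement step, scikit-image ships a morphological variant of geodesic active contours. The 2D sketch below (the paper works in 3D with a fast-marching initialization) refines a circular seed toward the organ boundary; the seed and parameter values are placeholders.

```python
# 2D illustration of level-set refinement with scikit-image's morphological
# geodesic active contour, a variant of the scheme described above.
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def refine_contour(slice_img, center, radius, iterations=200):
    g = inverse_gaussian_gradient(slice_img)   # edge-stopping speed image
    rows, cols = np.indices(slice_img.shape)
    init = np.hypot(rows - center[0], cols - center[1]) < radius
    return morphological_geodesic_active_contour(
        g, iterations, init_level_set=init, smoothing=2, balloon=1)
```

The volume estimate then reduces to counting the voxels of the refined mask and multiplying by the voxel volume.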
Multi-class SVM model for fMRI-based classification and grading of liver fibrosis
Author(s):
M. Freiman;
Y. Sela;
Y. Edrei;
O. Pappo;
L. Joskowicz;
R. Abramovitch
We present a novel non-invasive automatic method for the classification and grading of liver fibrosis from fMRI
maps based on hepatic hemodynamic changes. This method automatically creates a model for liver fibrosis
grading based on training datasets. Our supervised learning method evaluates hepatic hemodynamics from an
anatomical MRI image and three T2*-W fMRI signal intensity time-course scans acquired during the breathing
of air, air-carbon dioxide, and carbogen. It constructs a statistical model of liver fibrosis from these fMRI scans
using a binary-based one-against-all multi-class Support Vector Machine (SVM) classifier. We evaluated the
resulting classification model with the leave-one-out technique and compared it to both full multi-class SVM
and K-Nearest Neighbor (KNN) classifications. Our experimental study analyzed 57 slice sets from 13 mice, and
yielded a 98.2% separation accuracy between healthy and low grade fibrotic subjects, and an overall accuracy
of 84.2% for fibrosis grading. These results are better than those of existing image-based methods, which can only
discriminate between healthy and high-grade fibrosis subjects. With appropriate extensions, our method may
be used for non-invasive classification and progression monitoring of liver fibrosis in human patients instead of
more invasive approaches, such as biopsy or contrast-enhanced imaging.
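The classification setup maps directly onto scikit-learn's one-vs-rest wrapper. A minimal sketch, assuming per-slice feature vectors X and integer fibrosis grades y; the kernel choice and the plain leave-one-out split (the study leaves out whole slice sets) are simplifications.

```python
# One-against-all multi-class SVM with leave-one-out evaluation (sketch).
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def loo_accuracy(X, y):
    correct = 0
    for train, test in LeaveOneOut().split(X):
        clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X[train], y[train])
        correct += int(clf.predict(X[test])[0] == y[test][0])
    return correct / len(y)
```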
Semi-automatic central-chest lymph-node definition from 3D MDCT images
Author(s):
Kongkuo Lu;
William E. Higgins
Central-chest lymph nodes play a vital role in lung-cancer staging. The three-dimensional (3D) definition of
lymph nodes from multidetector computed-tomography (MDCT) images, however, remains an open problem.
This is because of the limitations in the MDCT imaging of soft-tissue structures and the complicated phenomena
that influence the appearance of a lymph node in an MDCT image. In the past, we have made significant efforts
toward developing (1) live-wire-based segmentation methods for defining 2D and 3D chest structures and (2)
a computer-based system for automatic definition and interactive visualization of the Mountain central-chest
lymph-node stations. Based on these works, we propose new single-click and single-section live-wire methods
for segmenting central-chest lymph nodes. The single-click live wire only requires the user to select an object
pixel on one 2D MDCT section and is designed for typical lymph nodes. The single-section live wire requires
the user to process one selected 2D section using standard 2D live wire, but it is more robust. We applied
these methods to the segmentation of 20 lymph nodes from two human MDCT chest scans (10 per scan) drawn
from our ground-truth database. The single-click live wire segmented 75% of the selected nodes successfully
and reproducibly, while the success rate for the single-section live wire was 85%. We are able to segment the
remaining nodes using our previously derived (but more interaction-intensive) 2D live-wire method incorporated
in our lymph-node analysis system. Both proposed methods are reliable and applicable to a wide range of
pulmonary lymph nodes.
Computer-aided lymph node detection in abdominal CT images
Author(s):
Jiamin Liu;
Jacob M. White;
Ronald M. Summers
Many malignant processes cause abdominal lymphadenopathy, and computed tomography (CT) has become the primary
modality for its detection. A lymph node is considered enlarged (swollen) if it is more than 1 centimeter in diameter.
Which lymph nodes are swollen depends on the type of disease and the body parts involved. Identifying their locations is
very important to determine the possible cause. In the current clinical workflow, the detection and diagnosis of enlarged
lymph nodes is usually performed manually by examining all slices of CT images, which can be error-prone and time
consuming. A 3D blob enhancement filter is a common approach for computer-aided node detection. We propose a new 3D blob
detector for automatic lymph node detection in contrast-enhanced abdominal CT images. Since lymph nodes are
usually next to blood vessels, abdominal blood vessels were first segmented as a reference to set the search region for
lymph nodes. Then a new detection response measure, blobness, is defined based on eigenvalues of the Hessian matrix
and the object scale in our new blob detector. Voxels with higher blobness were clustered as lymph node candidates.
Finally some prior anatomical knowledge was utilized for false positive reduction. We applied our method to 5 patients
and compared the results with the performance of the original blobness definition. Both methods achieved a sensitivity of
83.3%, but the false positive rates per patient were 14 and 26 for our method and the original method, respectively. Our
results indicated that computer-aided lymph node detection with this new blob detector may yield a high sensitivity and
a relatively low FP rate in abdominal CT.
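A single-scale Hessian-eigenvalue blobness measure can be written with SciPy's Gaussian derivatives, as below. The response function used here (geometric mean of eigenvalue magnitudes for bright blobs) is a generic choice for illustration; the paper's definition additionally incorporates object scale.

```python
# Single-scale Hessian blobness for an isotropic 3D volume `img` (sketch).
import numpy as np
from scipy import ndimage

def blobness(img, sigma=2.0):
    H = np.empty(img.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            if j < i:
                H[..., i, j] = H[..., j, i]   # Hessian is symmetric
                continue
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = ndimage.gaussian_filter(img, sigma, order=order)
    lam = np.linalg.eigvalsh(H)               # ascending eigenvalues per voxel
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]
    response = np.abs(l1 * l2 * l3) ** (1.0 / 3.0)
    # bright blobs: all three second derivatives strongly negative
    return np.where((l1 < 0) & (l2 < 0) & (l3 < 0), response, 0.0)
```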
Automated liver lesion characterization using fast kVp switching dual energy computed tomography imaging
Author(s):
Alberto Santamaria-Pang;
Sandeep Dutta;
Sokratis Makrogiannis;
Amy Hara M.D.;
William Pavlicek;
Alvin Silva M.D.;
Brian Thomsen;
Scott Robertson;
Darin Okerlund;
David A. Langan;
Rahul Bhotika
Hypodense metastases are not always completely distinguishable from benign cysts in the liver using conventional
Computed Tomography (CT) imaging, since the two lesion types present with overlapping intensity distributions
due to similar composition as well as other factors including beam hardening and patient motion. This problem
is extremely challenging for small lesions with diameter less than 1 cm. To accurately characterize such lesions,
multiple follow-up CT scans or additional Positron Emission Tomography or Magnetic Resonance Imaging exams
are often conducted, and in some cases a biopsy may be required after the initial CT finding. Gemstone
Spectral Imaging (GSI) with fast kVp switching enables projection-based material decomposition, offering the
opportunity to discriminate tissue types based on their energy-sensitive material attenuation and density. GSI
can be used to obtain monochromatic images where beam hardening is reduced or eliminated and the images
come inherently pre-registered due to the fast kVp switching acquisition. We present a supervised learning
method for discriminating between cysts and hypodense liver metastases using these monochromatic images.
Intensity-based statistical features extracted from voxels inside the lesion are used to train optimal linear and
nonlinear classifiers. Our algorithm only requires a region of interest within the lesion in order to compute
relevant features and perform classification, thus eliminating the need for an accurate segmentation of the lesion.
We report classifier performance using M-fold cross-validation on a large lesion database with radiologist-provided
lesion location and labels as the reference standard. Our results demonstrate that (a) classification using a single
projection-based spectral CT image, i.e., a monochromatic image at a specified keV, outperforms classification
using an image-based dual energy CT pair, i.e., low and high kVp images derived from the same fast kVp
acquisition and (b) classification using monochromatic images can achieve very high accuracy in separating
benign liver cysts and metastases, especially for small lesions.
What catches a radiologist's eye? A comprehensive comparison of feature types for saliency prediction
Author(s):
Mohammad Alzubaidi;
Vineeth Balasubramanian;
Ameet Patel;
Sethuraman Panchanathan;
John A. Black Jr.
Experienced radiologists are in short supply, and are sometimes called upon to read many images in a short amount of
time. This leaves them with a limited amount of time per image, and can lead to fatigue and stress, which can be
sources of error, as they overlook subtle abnormalities that they otherwise would not miss. Another factor in error rates
is called satisfaction of search, where a radiologist misses a second (typically subtle) abnormality after finding the first.
These types of errors are due primarily to a lack of attention to an important region of the image during the search. In
this paper we discuss the use of eye tracker technology, in combination with image analysis and machine learning
techniques, to learn what types of features catch the eyes of experienced radiologists when reading chest x-rays for
diagnostic purposes, and to then use that information to produce saliency maps that predict what regions of each image
might be most interesting to radiologists. We found that, out of 13 popular feature types that are widely extracted to
characterize images, 4 are particularly useful for this task: (1) Localized Edge Orientation Histograms, (2) Haar
Wavelets, (3) Gabor Filters, and (4) Steerable Filters.
Interactive annotation of textures in thoracic CT scans
Author(s):
Thessa T. J. P. Kockelkorn;
Pim A. de Jong;
Hester A. Gietema;
Jan C. Grutters;
Mathias Prokop;
Bram van Ginneken
This study describes a system for interactive annotation of thoracic CT scans. Lung volumes in these scans are
segmented and subdivided into roughly spherical volumes of interest (VOIs) with homogeneous texture using a
clustering procedure. For each 3D VOI, 72 features are calculated. The observer inspects the scan to determine
which textures are present and annotates, with mouse clicks, several VOIs of each texture. Based on these
annotations, a k-nearest-neighbor classifier is trained, which classifies all remaining VOIs in the scan. The
algorithm then presents a slice with suggested annotations to the user, in which the user can correct mistakes.
The classifier is retrained, taking into account these new annotations, and the user is presented another slice
for correction. This process continues until at least 50% of all lung voxels in the scan have been classified. The
remaining VOIs are classified automatically. In this way, the entire lung volume is annotated. The system has
been applied to scans of patients with usual and non-specific interstitial pneumonia. The results of interactive
annotation are compared to a setup in which the user annotates all predefined VOIs manually. The interactive
system is 3.7 times as fast as complete manual annotation of VOIs, and differences between the methods are similar
to interobserver variability. This is a first step towards precise volumetric quantitation of texture patterns in
thoracic CT in clinical research and in clinical practice.
Rib suppression in chest radiographs to improve classification of textural abnormalities
Author(s):
Laurens E. Hogeweg;
Christian Mol;
Pim A. de Jong;
Bram van Ginneken
The computer-aided diagnosis (CAD) of abnormalities on chest radiographs is difficult due to the presence of overlapping normal anatomy. Suppression of the normal anatomy is expected to improve the performance of a CAD system, but such a method has not yet been applied to the computer detection of interstitial abnormalities such as those that occur in tuberculosis (TB). The aim of this research is to evaluate the effect of rib suppression on a CAD system for TB. Profiles of pixel intensities sampled perpendicular to segmented ribs were used to create a local PCA-based shape model of the rib. The model was normalized to the local background intensity and corrected for gradients perpendicular to the rib. Subsequently, rib-suppressed images were created by subtracting the models for each rib from the original image. The effect of rib suppression was evaluated using a CAD system for TB detection. Small square image patches were sampled randomly from 15 normal and 35 TB-affected images containing textural abnormalities. Abnormalities were outlined by a radiologist and were given a subtlety rating from 1 to 5. Features based on moments of intensity distributions of Gaussian-derivative-filtered images were extracted. A supervised learning approach was used to discriminate between normal and diseased image patches. The use of rib-suppressed images increased the overall performance of the system, as measured by the area under the receiver operating characteristic (ROC) curve, from 0.75 to 0.78. For the more subtly rated patches (rated 1-3), the performance increased from 0.62 to 0.70.
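The subtraction step has a compact linear-algebra core: project background-normalized rib profiles onto a PCA shape model and subtract the reconstruction. The sketch below shows only that core (in practice the model is trained on example ribs and the residual profiles are written back into the image); names and the component count are illustrative.

```python
# Core of profile-based rib suppression: PCA model fit and subtraction.
import numpy as np
from sklearn.decomposition import PCA

def suppress_profiles(profiles, n_components=5):
    pca = PCA(n_components=n_components).fit(profiles)
    rib_model = pca.inverse_transform(pca.transform(profiles))
    return profiles - rib_model   # residual with the modeled rib removed
```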
Combinational feature optimization for classification of lung tissue images
Author(s):
Ravi K. Samala;
Tatyana Zhukov;
Jianying Zhang;
Melvyn Tockman;
Wei Qian
A novel approach to feature optimization for classification of lung carcinoma using tissue images is presented. The
methodology uses a combination of three characteristics of computational features: the F-measure, which represents
the contribution of each feature to classification; the inter-correlation between features; and pathology-based information. The metadata
provided from pathological parameters is used for mapping between computational features and biological information.
Multiple regression analysis maps each category of features based on how pathology information is correlated with the
size and location of cancer. The computational features represented the tumor size relatively better than the location of
the cancer. Based on the three criteria associated with the features, three sets of feature subsets with individual validation
are evaluated to select the optimum feature subset. Based on the results from the three stages, the knowledge base
produces the best subset of features. An improvement of 5.5% was observed for normal vs. all abnormal cases, with an Az
value of 0.731 and 74/114 correctly classified. The best Az value of 0.804, with 66/84 correctly classified and an
improvement of 21.6%, was observed for normal vs. adenocarcinoma.
Classification of interstitial lung disease patterns with topological texture features
Author(s):
Markus B. Huber;
Mahesh Nagarajan;
Gerda Leinsinger;
Lawrence A. Ray;
Axel Wismüller
Topological texture features were compared in their ability to classify morphological patterns known as 'honeycombing'
that are considered indicative of the presence of fibrotic interstitial lung diseases in high-resolution
computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70
axial, lung-kernel-reconstructed images was acquired from HRCT chest exams. A set of 241 regions of interest
of both healthy and pathological (89) lung tissue were identified by an experienced radiologist. Texture features
were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions
(MDs), and three Minkowski Functionals (MFs, e.g. MF.euler). A k-nearest-neighbor (k-NN) classifier
and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each
texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure
of automated tissue characterization. A Wilcoxon signed-rank test was used to compare two accuracy distributions
and the significance thresholds were adjusted for multiple comparisons by the Bonferroni correction.
The best classification results were obtained by the MF features, which performed significantly better than all
the standard GLCM and MD features (p < 0.005) for both classifiers. The highest accuracy was found for
MF.euler (97.5%, 96.6%; for the k-NN and RBFN classifier, respectively). The best standard texture features
were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate
that advanced topological texture features can provide superior classification performance in computer-assisted
diagnosis of interstitial lung diseases when compared to standard texture analysis methods.
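As a schematic stand-in for the Minkowski Functional features (under the assumption that an MF descriptor is built from the thresholded ROI, as is standard for gray-scale MFs; this is not the authors' exact feature code), the Euler characteristic can be traced over a sweep of gray-level thresholds with scikit-image:

```python
import numpy as np
from skimage.measure import euler_number

def mf_euler_curve(roi, thresholds):
    """Euler characteristic of the binarized ROI at each gray-level
    threshold; the resulting curve is one simple topological
    texture descriptor in the spirit of MF.euler."""
    return np.array([euler_number(roi >= t) for t in thresholds])

# Toy usage on a synthetic 64x64 patch:
rng = np.random.default_rng(0)
patch = rng.normal(size=(64, 64))
features = mf_euler_curve(patch, thresholds=np.linspace(-2.0, 2.0, 9))
```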
Content-based image retrieval applied to bone age assessment
Author(s):
Benedikt Fischer;
André Brosig;
Petra Welter;
Christoph Grouls;
Rolf W. Günther;
Thomas M. Deserno
Show Abstract
Radiological bone age assessment is based on local image regions of interest (ROI), such as the epiphysis or the area of
carpal bones. These are compared to a standardized reference and scores determining the skeletal maturity are calculated.
For computer-aided diagnosis, automatic ROI extraction and analysis is done so far mainly by heuristic approaches. Due
to high variations in the imaged biological material and differences in age, gender and ethnic origin, automatic analysis is
difficult and frequently requires manual interactions. On the contrary, epiphyseal regions (eROIs) can be compared to
previous cases with known age by content-based image retrieval (CBIR). This requires a sufficient number of cases with
reliable positioning of the eROI centers. In this first approach to bone age assessment by CBIR, we conduct leaving-oneout
experiments on 1,102 left hand radiographs and 15,428 metacarpal and phalangeal eROIs from the USC hand atlas.
The similarity of the eROIs is assessed by cross-correlation of 16x16 scaled eROIs. The effects of the number of eROIs,
two age computation methods, as well as the number of considered CBIR references, are analyzed. The best results yield a mean error of 1.16 years with a standard deviation of 0.85 years. As the appearance of the hand varies naturally by up to
two years, these results clearly demonstrate the applicability of the CBIR approach for bone age estimation.
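A minimal sketch of the retrieval step, assuming the eROIs have already been scaled to a common 16x16 size and the reference ages are known (the function names are illustrative, not from the paper):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized eROIs."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def estimate_age(query_eroi, reference_erois, reference_ages, k=5):
    """Estimate age as the mean age of the k most similar references."""
    scores = np.array([ncc(query_eroi, r) for r in reference_erois])
    nearest = np.argsort(scores)[::-1][:k]
    return float(np.mean(np.asarray(reference_ages)[nearest]))
```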
Standard moments based vessel bifurcation filter for computer-aided detection of pulmonary nodules
Author(s):
Sergei V. Fotin;
Anthony P. Reeves;
Alberto M. Biancardi;
David F. Yankelevitz;
Claudia I. Henschke
Show Abstract
This work describes a method that can discriminate between a solid pulmonary nodule and a pulmonary vessel
bifurcation point at a given candidate location on a CT scan using the method of standard moments. The
algorithm starts with the estimation of a spherical window around a nodule candidate center that best captures
the local shape properties of the region. Then, given this window, the standard set of moments, invariant to rotation and scale, is computed over the geometric representation of the region. Finally, a feature vector composed of the moment values is classified as either a nodule or a vessel bifurcation point.
The performance of this technique was evaluated on a dataset containing 276 intraparenchymal nodules and 276 selected vessel bifurcation points. The method resulted in 99% sensitivity and 80% specificity in identifying nodules, which makes this technique an efficient filter for false positive reduction. Its efficiency was further
evaluated on the dataset of 656 low-dose chest CT scans. Inclusion of this filter into a design of an experimental
detection system resulted in up to a 69% decrease in false positive rate in detection of intraparenchymal nodules
with less than 1% loss in sensitivity.
Micro CT based truth estimation of nodule volume
Author(s):
L. M. Kinnard;
M. A. Gavrielides;
K. J. Myers;
R. Zeng;
B. Whiting;
S. Lin-Gibson;
N. Petrick
Show Abstract
With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced,
with the hope that such methods will be more accurate and consistent than currently used planar measures of size.
However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is
multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A
primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible,
minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this
work we investigate the reliability of micro CT to determine the "true" volume of synthetic nodules. The advantage of
micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single
measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical,
spiculated, and lobulated nodules with diameters from 5 to 40 mm and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume range from -21.7% to -0.6% (mean = -11.9%) for the -630 HU nodules and from -0.9% to 3.0% (mean = 1.7%) for the +100 HU nodules.
Approximations of noise structures in helical multi-detector CT scans: application to lung nodule volume estimation
Author(s):
Rongping Zeng;
Nicholas Petrick;
Marios A. Gavrielides;
Kyle J. Myers
Show Abstract
We have previously presented a matched filter (MF) approach for estimating lung nodule size from helical multi-detector CT (MDCT) images [1], in which we minimized the sum of squared differences (SSD) between simulated CT templates and the actual nodule CT images. The previous study showed the potential of this approach for reducing the bias and variance in nodule size estimation. However, minimizing the SSD is not statistically optimal because the noise in 3D helical CT images is correlated. The goal of this work is to
investigate the noise properties and explore several approximate descriptions of the three-dimensional (3D)
noise covariance for more accurate estimates. The approximations include: variance only, noise power
spectrum (NPS), axial correlation, two-dimensional (2D) in-plane correlation and fully 3D correlation. We
examine the effectiveness of these second-order noise approximations by applying them to our volume
estimation approach in a simulation study. Our simulations show that variance-based and axial pre-whitening perform very similarly to the non-prewhitening case, with accuracy (measured as RMSE) differing by less than 1%; NPS-based pre-whitening performs slightly better, with a 4% decrease in RMSE; and in-plane and fully 3D pre-whitening perform best, with about a 10% decrease in RMSE over the non-prewhitening case. The simulation results suggest that the NPS, 2D in-plane, and fully
3D prewhitening can be beneficial for lung nodule size estimation, albeit with greater computational costs in
determining these noise characterizations.
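The difference between the SSD criterion and a prewhitened one can be sketched as follows, where `templates` is a hypothetical mapping from candidate nodule sizes to flattened simulated templates and `cov` is an estimated noise covariance matrix over the same voxels; this illustrates the estimation principle, not the paper's simulation code:

```python
import numpy as np

def best_size(y, templates, cov=None):
    """Choose the template (nodule size) minimizing the squared error.

    With cov=None this is the plain SSD (non-prewhitening) criterion;
    otherwise the residual is prewhitened by the noise covariance,
    i.e. the cost becomes (y - t)^T C^{-1} (y - t).
    """
    best, best_cost = None, np.inf
    for size, t in templates.items():
        r = y - t
        cost = float(r @ r) if cov is None else float(r @ np.linalg.solve(cov, r))
        if cost < best_cost:
            best, best_cost = size, cost
    return best
```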
A shape-dependent variability metric for evaluating panel segmentations with a case study on LIDC
Author(s):
Stephen Siena;
Olga Zinoveva;
Daniela Raicu;
Jacob Furst;
Samuel Armato III
Show Abstract
The segmentation of medical images is challenging because a ground truth is often not available. Computer-Aided
Detection (CAD) systems are dependent on ground truth as a means of comparison; however, in many cases the
ground truth is derived from only experts' opinions. When the experts disagree, it becomes impossible to discern
one ground truth. In this paper, we propose an algorithm to measure the disagreement among radiologists' delineated boundaries. The algorithm accounts for both the overlap and the shape of the boundaries in determining
the variability of a panel segmentation. After calculating the variability of 3788 thoracic computed tomography
(CT) slices in the Lung Image Database Consortium (LIDC), we found that the radiologists have a high consensus
in a majority of lung nodule segmentations. However, our algorithm identified a number of segmentations that
the radiologists significantly disagreed on. Our proposed method of measuring disagreement can assist others
in determining the reliability of panel segmentations. We also demonstrate that it is superior to simply using
overlap, which is currently one of the most common ways of measuring segmentation agreement. The variability
metric presented has applications to panel segmentations, and also has potential uses in CAD systems.
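For comparison, the overlap-only baseline that the proposed metric improves upon can be computed as the mean pairwise Jaccard index over the panel's binary masks; note that this sketch deliberately omits the shape term that distinguishes the authors' metric:

```python
import numpy as np
from itertools import combinations

def mean_pairwise_jaccard(masks):
    """Average Jaccard overlap over all pairs of readers' binary masks
    for one nodule on one slice (overlap only, no shape term)."""
    scores = []
    for a, b in combinations(masks, 2):
        union = np.logical_or(a, b).sum()
        inter = np.logical_and(a, b).sum()
        scores.append(inter / union if union else 1.0)
    return float(np.mean(scores))
```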
FDA phantom CT database: a resource for the assessment of lung nodule size estimation methodologies and software development
Author(s):
Marios A. Gavrielides;
Lisa M. Kinnard;
Kyle J. Myers;
Rongping Zeng;
Nicholas Petrick
Show Abstract
As part of a more general effort to probe the interrelated factors impacting the accuracy
and precision of lung nodule size estimation, we have been conducting phantom CT
studies with an anthropomorphic thoracic phantom containing a vasculature insert on
which synthetic nodules were inserted or attached. The utilization of synthetic nodules
with known truth regarding size and location allows for bias and variance analysis,
enabled by the acquisition of repeat CT scans. Using a factorial approach to probe
imaging parameters (acquisition and reconstruction) and nodule characteristics (size,
density, shape, location), ten repeat scans have been collected for each protocol and
nodule layout. The resulting database of CT scans is incrementally becoming available to
the public via the National Biomedical Imaging Archive to facilitate the assessment of
lung nodule size estimation methodologies and the development of image analysis
software among other possible applications. This manuscript describes the phantom CT
scan database and associated information including image acquisition and reconstruction
protocols, nodule layouts and nodule truth.
Computer aided segmentation of kidneys using locally shape constrained deformable models on CT images
Author(s):
Marius Erdt;
Georgios Sakas
Show Abstract
This work presents a novel approach for model based segmentation of the kidney in images acquired by Computed
Tomography (CT). The developed computer aided segmentation system is expected to support computer aided
diagnosis and operation planning. We have developed a deformable-model approach built on local shape constraints that prevents the model from deforming into neighboring structures while allowing the global shape to
adapt freely to the data. Those local constraints are derived from the anatomical structure of the kidney and the
presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation
logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our workflow consists of two steps: 1) user-guided positioning and 2) automatic model adaptation using affine
and free form deformation in order to robustly extract the kidney. In cases which show pronounced pathologies,
the system also offers real time mesh editing tools for a quick refinement of the segmentation result. Evaluation
results based on 30 clinical cases using CT data sets show an average Dice similarity coefficient of 93% compared to the ground truth. The results are therefore in most cases comparable to manual delineation. Computation
times of the automatic adaptation step are lower than 6 seconds which makes the proposed system suitable for
an application in clinical practice.
Automatic diagnosis of lumbar disc herniation with shape and appearance features from MRI
Author(s):
Raja' S. Alomari;
Jason J. Corso;
Vipin Chaudhary;
Gurmeet Dhillon
Show Abstract
Intervertebral disc herniation is a major reason for lower back pain (LBP), which is the second most common
neurological ailment in the United States. Automation of herniated disc diagnosis reduces the large burden
on radiologists who have to diagnose hundreds of cases each day using clinical MRI. We present a method for automatic diagnosis of lumbar disc herniation using appearance and shape features. We jointly use the intensity signal for modeling the appearance of the herniated disc and an active shape model for modeling its shape. We utilize a Gibbs distribution for classification of discs using appearance and shape
features. We use 33 clinical MRI cases of the lumbar area for training and testing both appearance and shape
models. We achieve over 91% accuracy in detection of herniation in a cross-validation experiment with specificity
of 91% and sensitivity of 94%.
Feature selection for computer-aided polyp detection using MRMR
Author(s):
Xiaoyun Yang;
Boray Tek;
Gareth Beddoe;
Greg Slabaugh
Show Abstract
In building robust classifiers for computer-aided detection (CAD) of lesions, selection of relevant features is of
fundamental importance. Typically one is interested in determining which, of a large number of potentially
redundant or noisy features, are most discriminative for classification. Searching all possible subsets of features
is impractical computationally. This paper proposes a feature selection scheme combining AdaBoost with the
Minimum Redundancy Maximum Relevance (MRMR) to focus on the most discriminative features. A fitness
function is designed to determine the optimal number of features in a forward wrapper search. Bagging is
applied to reduce the variance of the classifier and make a reliable selection. Experiments demonstrate that by
selecting just 11 percent of the total features, the classifier can achieve better prediction on independent test
data compared to the 70 percent of the total features selected by AdaBoost.
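A rough sketch of greedy MRMR-style selection, not the authors' exact AdaBoost-wrapped scheme: relevance is estimated with mutual information against the class label, and redundancy is approximated here by the mean absolute Pearson correlation with already selected features (that proxy, and the use of scikit-learn, are assumptions):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_select(X, y, n_select):
    """Greedy MRMR: repeatedly add the feature maximizing
    relevance minus redundancy. Assumes non-constant features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        rest = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in rest]
        selected.append(rest[int(np.argmax(scores))])
    return selected
```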
Computer-aided diagnosis of lumbar stenosis conditions
Author(s):
Soontharee Koompairojn;
Kathleen Hua;
Kien A. Hua;
Jintavaree Srisomboon
Show Abstract
Computer-aided diagnosis (CAD) systems are indispensable tools for patients' healthcare in modern medicine.
Nevertheless, the only fully automatic CAD system available for lumbar stenosis today works on X-ray images, and its performance is constrained by limitations intrinsic to that modality. In this paper, we present a system for
magnetic resonance images. It employs a machine learning classification technique to automatically recognize
lumbar spine components. Features can then be extracted from these spinal components. Finally, diagnosis is done
by applying a Multilayer Perceptron. This classification framework can learn the features of different spinal
conditions from the training images. The trained Perceptron can then be applied to diagnose new cases for various
spinal conditions. Our experimental studies based on 62 subjects indicate that the proposed system is reliable and
significantly better than our older system for X-ray images.
Digital breast tomosynthesis: computerized detection of microcalcifications in reconstructed breast volume using a 3D approach
Author(s):
Heang-Ping Chan;
Berkman Sahiner;
Jun Wei;
Lubomir M. Hadjiiski;
Chuan Zhou;
Mark A. Helvie
Show Abstract
We are developing a computer-aided detection (CAD) system for clustered microcalcifications in digital breast
tomosynthesis (DBT). In this preliminary study, we investigated the approach of detecting microcalcifications in the
tomosynthesized volume. The DBT volume is first enhanced by 3D multi-scale filtering and analysis of the eigenvalues
of Hessian matrices with a calcification response function and signal-to-noise ratio enhancement filtering. Potential
signal sites are identified in the enhanced volume and local analysis is performed to further characterize each object. A
3D dynamic clustering procedure is designed to locate potential clusters using hierarchical criteria. We collected a pilot
data set of two-view DBT mammograms of 39 breasts containing microcalcification clusters (17 malignant, 22 benign)
with IRB approval. A total of 74 clusters were identified by an experienced radiologist in the 78 DBT views. Our
prototype CAD system achieved view-based sensitivity of 90% and 80% at an average FP rate of 7.3 and 2.0 clusters per
volume, respectively. At the same levels of case-based sensitivity, the FP rates were 3.6 and 1.3 clusters per volume,
respectively. For the subset of malignant clusters, the view-based detection sensitivity was 94% and 82% at an average
FP rate of 6.0 and 1.5 FP clusters per volume, respectively. At the same levels of case-based sensitivity, the FP rates
were 1.2 and 0.9 clusters per volume, respectively. This study demonstrated that computerized microcalcification
detection in 3D is a promising approach to the development of a CAD system for DBT. A study is underway to further improve the computer-vision methods and to optimize the processing parameters using a larger data set.
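The Hessian-eigenvalue enhancement step can be illustrated at a single scale as follows; the response used here (the geometric mean of the eigenvalue magnitudes where all three are negative, i.e., bright blob-like voxels) is a generic choice standing in for the paper's calcification response function, and `sigma` is an illustrative smoothing scale in voxels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bright_blob_response(volume, sigma=1.0):
    """Single-scale Hessian analysis: second Gaussian derivatives form
    the Hessian at every voxel; voxels whose three eigenvalues are all
    negative (bright blobs, e.g. calcifications) get a positive score."""
    h = np.empty(volume.shape + (3, 3))
    orders = {(0, 0): (2, 0, 0), (1, 1): (0, 2, 0), (2, 2): (0, 0, 2),
              (0, 1): (1, 1, 0), (0, 2): (1, 0, 1), (1, 2): (0, 1, 1)}
    for (i, j), order in orders.items():
        h[..., i, j] = h[..., j, i] = gaussian_filter(volume, sigma, order=order)
    lam = np.linalg.eigvalsh(h)                  # ascending eigenvalues
    blob = np.all(lam < 0, axis=-1)
    return np.where(blob, np.abs(lam).prod(axis=-1) ** (1.0 / 3.0), 0.0)
```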
The reconstruction of microcalcification clusters in digital breast tomosynthesis
Author(s):
Candy P. S. Ho;
Chris E. Tromans;
Julia A. Schnabel;
Sir Michael Brady
Show Abstract
We present a novel method for the detection and reconstruction in 3D of microcalcifications in digital breast
tomosynthesis (DBT) image sets. From a list of microcalcification candidate regions (that is, real microcalcification
points or noise points) found in each DBT projection, our method: (1) finds the set of corresponding points of a
microcalcification in all the other projections; (2) locates its 3D position in the breast; (3) highlights noise points; and (4)
identifies the failure of microcalcification detection in one or more projections, in which case the method predicts the
image locations of the microcalcification in the images in which they are missed.
From the geometry of the DBT acquisition system, an "epipolar curve" is derived for the 2D positions of a microcalcification in the projections generated at different angular positions. Each epipolar curve represents a single
microcalcification point in the breast. By examining the n projections of m microcalcifications in DBT, one expects
ideally m epipolar curves each comprising n points. Since each microcalcification point is at a different 3D position,
each epipolar curve will be at a different position in the same 2D coordinate system. By plotting all the
microcalcification candidates in the same 2D plane simultaneously, one can easily extract a representation of the number
of microcalcification points in the breast (number of epipolar curves) and their 3D positions, the noise points detected
(isolated points not forming any epipolar curve) and microcalcification points missed in some projections (epipolar curves with fewer than n points).
Digital breast tomosynthesis: feasibility of automated detection of microcalcification clusters on projection views
Author(s):
Lubomir M. Hadjiiski;
Heang-Ping Chan;
Jun Wei;
Berkman Sahiner;
Chuan Zhou;
Mark A. Helvie
Show Abstract
We are developing a computer-aided detection (CAD) system to assist radiologists in detecting microcalcification
clusters in digital breast tomosynthesis (DBT). The purpose of this study is to investigate the feasibility of a 2D approach
using the projection-view (PV) images as input. In the first stage, automated detection of the microcalcification clusters
on the PVs is performed. In the second stage, the detected cluster candidates or the individual microcalcifications on the
PVs are back-projected to the 3D volume. The true clusters or microcalcifications will therefore converge at their focal
planes and ideally will result in higher cluster or microcalcification scores than the FPs. In the final step an analysis of
the back-projected cluster or microcalcification candidates is performed to differentiate the true and false clusters. In this
pilot study, a limited data set of 39 cases with biopsy proven microcalcification clusters (17 malignant, 22 benign) was
used. The DBT scans were obtained in both CC and MLO views using a GE GEN2 prototype system which acquires 21
PVs over a 60º arc in 3º increments. In the 78 DBT volumes, a total of 74 clusters (33 malignant clusters in 34 breasts
and 41 benign clusters in 44 breasts) were identified by an experienced radiologist. The computer detected 61%
(956/1554) of the clusters on the PVs from the 74 scans. After back-projection of the microcalcification candidates detected on the individual PVs, and excluding the first few PVs that had higher noise in the back-projection stage, 84% (62/74) of the true clusters were detected in the 3D volume. A study is underway to develop methods to reduce FPs and to compare this 2D
approach with 3D or combined 2D and 3D approaches.
Analysis of breast lesions on contrast-enhanced magnetic resonance images using high-dimensional texture features
Author(s):
Mahesh B. Nagarajan;
Markus B. Huber;
Thomas Schlossbauer;
Gerda Leinsinger;
Axel Wismueller
Show Abstract
Haralick texture features derived from gray-level co-occurrence matrices (GLCM) were used to classify the character of
suspicious breast lesions as benign or malignant on dynamic contrast-enhanced MRI studies. Lesions were identified and
annotated by an experienced radiologist on 54 MRI exams of female patients where histopathological reports were
available prior to this investigation. GLCMs were then extracted from these 2D regions of interest (ROI) for four
principal directions (0°, 45°, 90° & 135°) and used to compute Haralick texture features. A fuzzy k-nearest neighbor (k-
NN) classifier was optimized in ten-fold cross-validation for each texture feature, and the classification performance was calculated on an independent test set in terms of the area under the ROC curve. The lesion ROIs were characterized by
texture feature vectors containing the Haralick feature values computed from each directional-GLCM; and the classifier
results obtained were compared to a previously used approach where the directional-GLCMs were summed to a nondirectional
GLCM which could further yield a set of texture feature values. The impact of varying the inter-pixel
distance used when generating the GLCMs on the classifier's performance was also investigated. The classifier's AUC increased significantly when the high-dimensional texture feature vector approach was pursued and when features derived from GLCMs generated with different inter-pixel distances were incorporated into the classification task. These results indicate that lesion character classification accuracy can be improved by retaining the texture features derived from the different directional GLCMs rather than combining them into a set of scalar feature values.
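A sketch of the directional variant, assuming `roi` is an 8-bit grayscale patch and using scikit-image's GLCM utilities in place of the authors' own feature code (the property subset and parameters are illustrative):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def directional_haralick(roi, distances=(1,)):
    """Concatenate GLCM properties computed separately for the four
    principal directions (0, 45, 90, 135 degrees), rather than
    summing the directional GLCMs into one non-directional matrix."""
    angles = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(roi, distances=list(distances), angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # graycoprops returns an (n_distances, n_angles) array per property.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```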
Heterogeneity of kinetic curve parameters as indicator for the malignancy of breast lesions in DCE MRI
Author(s):
Thomas Buelow;
Axel Saalbach;
Martin Bergtholdt;
Rafael Wiemker;
Hans Buurman;
Lina Arbash Meinel;
Gillian Newstead
Show Abstract
Dynamic contrast-enhanced breast MRI (DCE BMRI) has emerged as a powerful tool in the diagnostic work-up of breast cancer. While DCE BMRI is very sensitive, its specificity remains an issue. Consequently, there is a need for features
that support the classification of enhancing lesions into benign and malignant lesions. Traditional features include the
morphology and the texture of a lesion, as well as the kinetic parameters of the time-intensity curves, i.e., the temporal
change of image intensity at a given location. The kinetic parameters include initial contrast uptake of a lesion and the
type of the kinetic curve. The curve type is usually assigned to one of three classes: persistent enhancement (Type I),
plateau (Type II), and washout (Type III). While these curve types show a correlation with the tumor type (benign or
malignant), only a small sub-volume of the lesion is taken into consideration and the curve type will depend on the
location of the ROI that was used to generate the kinetic curve. Furthermore, it has been shown that the curve type
significantly depends on which MR scanner was used as well as on the scan parameters.
Recently, it was shown that the heterogeneity of a given lesion with respect to the spatial variation of the kinetic curve type is a clinically significant indicator of malignancy. In this work we compare four quantitative measures of the degree of heterogeneity of the signal enhancement ratio in a tumor and evaluate their ability to predict whether a tumor is benign or malignant. All features are shown to have an area under the ROC curve between 0.63 and 0.78 (for a single feature).
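A toy version of the three-way curve typing, given a lesion-averaged intensity-time series; the plateau tolerance here is an illustrative threshold, not a clinical criterion:

```python
import numpy as np

def curve_type(signal, plateau_tol=0.10):
    """Classify a kinetic curve by the relative change between the
    early post-contrast maximum and the final time point."""
    s = np.asarray(signal, dtype=float)
    early_peak = s[: max(2, len(s) // 2)].max()
    late_change = (s[-1] - early_peak) / early_peak
    if late_change > plateau_tol:
        return "Type I (persistent enhancement)"
    if late_change < -plateau_tol:
        return "Type III (washout)"
    return "Type II (plateau)"
```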
Optimization of a fuzzy C-means approach to determining probability of lesion malignancy and quantifying lesion enhancement heterogeneity in breast DCE-MRI
Author(s):
Jeremy Bancroft Brown;
Maryellen L. Giger;
Neha Bhooshan;
Gillian Newstead;
Sanaz Jansen
Show Abstract
Previous research has shown that a fuzzy C-means (FCM) approach to computerized lesion analysis has the potential to aid radiologists in the interpretation of dynamic contrast-enhanced MRI (DCE-MRI) breast exams [1, 2]. Our purpose in this study was to optimize the performance of the FCM approach with respect to binary (benign/malignant) breast lesion classification in DCE-MRI. We used both raw (calculated from kinetic data points) and empirically fitted [3] kinetic features for this study. FCM was used to automatically
select a characteristic kinetic curve (CKC) based on intensity-time point data of voxels within each lesion,
using four different kinetic criteria: (1) maximum initial enhancement, (2) minimum shape index, (3) maximum
washout, and (4) minimum time to peak. We extracted kinetic features from these CKCs, which were
merged using linear discriminant analysis (LDA), and evaluated with receiver operating characteristic (ROC)
analysis. There was comparable performance for methods 1, 2, and 4, while method 3 was inferior. Next,
we modified use of the FCM method by calculating a feature vector for every voxel in each lesion and using
FCM to select a characteristic feature vector (CFV) for each lesion. Using this method, we achieved performance
similar to the four CKC methods. Finally, we generated lesion color maps using FCM membership
matrices, which facilitated the visualization of enhancing voxels in a given lesion.
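A compact, generic FCM implementation (not the optimized pipeline of the paper) that returns the fuzzy membership matrix used both for CKC/CFV selection and for the color maps; `X` would hold one row per voxel (an intensity-time curve or a feature vector):

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means: alternate between center and membership
    updates; returns centers and the (n_samples, n_clusters) matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

A characteristic kinetic curve could then be taken as the center of the cluster maximizing, for instance, initial enhancement, mirroring criterion (1) above.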
Computer-aided classification of patients with dementia of Alzheimer's type based on cerebral blood flow determined with arterial spin labeling technique
Author(s):
Yasuo Yamashita;
Hidetaka Arimura;
Takashi Yoshiura M.D.;
Chiaki Tokunaga;
Taiki Magome;
Akira Monji;
Tomoyuki Noguchi;
Fukai Toyofuku;
Masafumi Oki;
Yasuhiko Nakamura;
Hiroshi Honda
Show Abstract
Arterial spin labeling (ASL) is a promising non-invasive magnetic resonance (MR) imaging technique for the diagnosis of Alzheimer's disease (AD) through measurement of cerebral blood flow (CBF). The aim of this study was to develop
a computer-aided classification system for AD patients based on CBFs measured by the ASL technique. The average
CBFs in cortical regions were determined as functional image features based on the CBF map image, which was
non-linearly transformed to a Talairach brain atlas by using a free-form deformation. An artificial neural network
(ANN) was trained with the CBF functional features in 10 cortical regions, and was employed for distinguishing patients
with AD from control subjects. For evaluation, we applied the proposed method to 20 cases, including ten AD patients and ten control subjects, who were scanned on a 3.0-Tesla MR unit. The area under the receiver operating characteristic curve obtained by the proposed method was 0.893, based on a leave-one-out-by-case test for identification of the AD cases among the 20 cases. These results suggest that the proposed method could be feasible for classification of patients with AD.
Predictive modeling of neuroanatomic structures for brain atrophy detection
Author(s):
Xintao Hu;
Lei Guo;
Jingxin Nie;
Kaiming Li;
Tianming Liu
Show Abstract
In this paper, we present an approach to predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling to atrophy detection is that brain atrophy is defined as a significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and regions of interest (ROIs) on it are predicted from other, reliable anatomies. The pair-wise distance between a predicted vertex and the true one within the abnormal region is expected to be larger than that for vertices in normal brain regions. The change of the white matter/gray matter ratio
within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain
atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method
has been evaluated by using both simulated atrophies and MRI images of Alzheimer's disease.
Spatial prior in SVM-based classification of brain images
Author(s):
Rémi Cuingnet;
Marie Chupin;
Habib Benali;
Olivier Colliot
Show Abstract
This paper introduces a general framework for spatial prior in SVM-based classification of brain images based on
Laplacian regularization. Most existing methods include spatial prior by adding a feature aggregation step before
the SVM classification. The problem with the aggregation step is that the individual information of each feature is lost. Our framework avoids this shortcoming by including the spatial prior directly in the SVM.
We demonstrate that this framework can be used to derive embedded regularization corresponding to existing
methods for classification of brain images and propose an efficient way to implement them. This framework is
illustrated on the classification of MR images from 55 patients with Alzheimer's disease and 82 elderly controls
selected from the ADNI database. The results demonstrate that the proposed algorithm enables the introduction of a straightforward and anatomically consistent spatial prior into the classifier.
Model-free functional MRI analysis for detecting low-frequency functional connectivity in the human brain
Author(s):
Axel Wismueller;
Oliver Lange;
Dorothee Auer;
Gerda Leinsinger
Show Abstract
Slowly varying temporally correlated activity fluctuations between functionally related brain areas have been identified
by functional magnetic resonance imaging (fMRI) research in recent years. These low-frequency oscillations of less than
0.08 Hz appear to play a major role in various dynamic functional brain networks, such as the so-called 'default mode'
network. They also have been observed as a property of symmetric cortices, and they are known to be present in the motor
cortex among others. These low-frequency data are difficult to detect and quantify in fMRI. Traditionally, user-based
regions of interests (ROI) or 'seed clusters' have been the primary analysis method. In this paper, we propose unsupervised
clustering algorithms based on various distance measures to detect functional connectivity in resting state fMRI. The
achieved results are evaluated quantitatively for different distance measures. The Euclidean metric implemented by standard unsupervised clustering approaches is compared with a non-metric topographic mapping of proximities based on the mutual prediction error between pixel-specific signal dynamics time series. It is shown that functional connectivity in the
motor cortex of the human brain can be detected based on such model-free analysis methods for resting state fMRI.
Supervised method to build an atlas database for multi-atlas segmentation-propagation
Author(s):
Kaikai Shen;
Pierrick Bourgeat;
Jurgen Fripp;
Fabrice Mériaudeau;
David Ames;
Kathryn A. Ellis;
Colin L. Masters;
Victor L. Villemagne;
Christopher C. Rowe;
Olivier Salvado
Show Abstract
Multi-atlas based segmentation-propagation approaches have been shown to obtain accurate parcellation of brain structures. However, this approach requires a large number of manually delineated atlases, which are often not available. We propose a supervised method to build a population-specific atlas database, using the publicly
available Internet Brain Segmentation Repository (IBSR). The set of atlases grows iteratively as new atlases
are added, so that its segmentation capability may be enhanced in the multiatlas based approach. Using a
dataset of 210 MR images of elderly subjects (170 elderly control, 40 Alzheimer's disease) from the Australian
Imaging, Biomarkers and Lifestyle (AIBL) study, 40 MR images were segmented to build a population specific
atlas database for the purpose of multiatlas segmentation-propagation. The population specific atlases were used
to segment the elderly population of 210 MR images, and were evaluated in terms of the agreement among the
propagated labels. The agreement was measured using the entropy H of the probability image produced when fused by the voting rule and the partial moment μ2 of the histogram. Compared with using the IBSR atlases, the
population specific atlases obtained a higher agreement when dealing with images of elderly subjects.
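The entropy measure can be sketched for a simplified binary, voting-rule case, assuming each atlas contributes one binary label volume; this illustrates the agreement criterion, not the study's evaluation code:

```python
import numpy as np

def vote_entropy(label_volumes):
    """Mean Shannon entropy of the voxel-wise voting probability,
    computed over voxels where the propagated labels disagree;
    lower values indicate higher agreement among atlases."""
    p = np.stack(label_volumes).mean(axis=0)     # fraction voting "1"
    uncertain = (p > 0) & (p < 1)
    q = p[uncertain]
    h = -(q * np.log2(q) + (1 - q) * np.log2(1 - q))
    return float(h.mean()) if h.size else 0.0
```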
Reproducibility of airway wall thickness measurements
Author(s):
Michael Schmidt;
Jan-Martin Kuhnigk;
Stefan Krass;
Michael Owsijewitsch M.D.;
Bartjan de Hoop;
Heinz-Otto Peitgen
Show Abstract
Airway remodeling and accompanying changes in wall thickness are known to be a major symptom of chronic obstructive pulmonary disease (COPD), associated with reduced lung function in diseased individuals. Further investigation of this disease, as well as monitoring of disease progression and treatment effect, calls for accurate and reproducible assessment of airway wall thickness in CT datasets. With wall thicknesses in the sub-millimeter range, this task remains challenging even with today's high-resolution CT datasets. To provide accurate measurements, taking partial volume effects into account is mandatory. The Full-Width-at-Half-Maximum (FWHM) method has been shown to be inappropriate for small airways [1, 2], and several improved algorithms for objective quantification of airway wall thickness have been proposed [1-8]. In this paper, we describe an algorithm based on a closed-form solution proposed by Weinheimer et al. [7]. We locally estimate the lung density parameter required for the closed-form solution to account for possible variations of parenchyma density between different lung regions, inspiration states, and contrast agent concentrations. The general accuracy of the algorithm is evaluated using basic tubular software and hardware phantoms. Furthermore, we present results on the reproducibility of the algorithm with respect to clinical CT scans, varying reconstruction kernels, and repeated acquisitions, which is crucial for longitudinal observations.
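For reference, the FWHM baseline criticized above can be written in a few lines; this sketch assumes a 1D intensity profile cast from the lumen outward through the wall and uses the smaller endpoint as the background estimate, ignoring exactly the partial volume effects that bias FWHM for thin walls:

```python
import numpy as np

def fwhm_wall_thickness(profile, spacing_mm):
    """Wall thickness as the distance between the half-maximum
    crossings on either side of the wall peak (no partial-volume
    correction)."""
    p = np.asarray(profile, dtype=float)
    peak = int(np.argmax(p))
    half = 0.5 * (p[peak] + min(p[0], p[-1]))
    inner = np.where(p[:peak] < half)[0]          # last crossing before peak
    outer = np.where(p[peak:] < half)[0]          # first crossing after peak
    if inner.size == 0 or outer.size == 0:
        return np.nan
    return (peak + outer[0] - inner[-1]) * spacing_mm
```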
Automated volumetric segmentation method for computerized diagnosis of pure nodular ground-glass opacity in high-resolution CT
Author(s):
Wooram Son;
Sang Joon Park;
Chang Min Park;
Jin Mo Goo;
Jong Hyo Kim
Show Abstract
While accurate diagnosis of pure nodular ground-glass opacity (PNGGO) is important in order to reduce the number of unnecessary biopsies, computer-aided diagnosis of PNGGO is less studied than that of other types of pulmonary nodules (e.g., solid nodules). Difficulty in segmenting GGO nodules is one of the technical bottlenecks in the development of CAD for GGO nodules. In this study, we propose an automated volumetric segmentation method for PNGGO that models the ROI histogram with a Gaussian mixture. Our proposed method segments the lungs and applies noise filtering in a pre-processing step. The histogram of the selected ROI is then modeled as a mixture of two Gaussians representing lung parenchyma and GGO tissue. The GGO nodule is then segmented by a region-growing technique that employs the histogram model as the probability of each pixel belonging to the GGO nodule, followed by the elimination of vessel-like structures around the nodule using morphological image operations. Our results using a database of 26 cases indicate that the automated segmentation method has promising potential.
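A hedged sketch of the histogram-modeling step, using scikit-learn's GaussianMixture as a stand-in for the paper's own fitting procedure; the posterior of the higher-mean component then serves as the per-pixel probability driving region growing:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_roi_mixture(roi_values):
    """Fit two Gaussians to the ROI intensity histogram: the lower-mean
    component models lung parenchyma, the higher-mean one GGO tissue."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(np.asarray(roi_values, dtype=float).reshape(-1, 1))
    ggo_idx = int(np.argmax(gmm.means_.ravel()))
    return gmm, ggo_idx

def ggo_probability(gmm, ggo_idx, values):
    """Posterior probability that each voxel belongs to the GGO
    component, usable as the inclusion criterion in region growing."""
    post = gmm.predict_proba(np.asarray(values, dtype=float).reshape(-1, 1))
    return post[:, ggo_idx]
```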
Automated quantification of pulmonary emphysema from computed tomography scans: comparison of variation and correlation of common measures in a large cohort
Author(s):
Brad M. Keller;
Anthony P. Reeves;
David F. Yankelevitz;
Claudia I. Henschke
Show Abstract
The purpose of this work was to retrospectively investigate the variation of standard indices of pulmonary emphysema from helical computed tomographic (CT) scans as related to inspiration differences over a 1-year interval, and to determine the strength of the relationship between these measures in a large cohort. 626 patients who had 2 scans taken at an interval of 9 to 15 months (μ: 381 days, σ: 31 days) were selected for this work. All scans were acquired at a 1.25 mm slice thickness using a low-dose protocol. For each scan, the emphysema index (EI), fractal dimension (FD), mean lung density (MLD), and 15th percentile of the histogram (HIST) were computed. The absolute and relative changes for each measure were computed, and the empirical 95% confidence interval was reported on both non-normalized and normalized scales. Spearman correlation coefficients were computed between the relative change in each measure and the relative change in inspiration between each scan pair, as well as between each pair-wise combination of the four measures. EI varied over a range of -10.5 to 10.5 on the non-normalized scale and -15 to 15 on the normalized scale, with FD and MLD showing slightly larger but comparable spreads, and HIST having a much larger variation. MLD was found to show the strongest correlation to inspiration change (r=0.85, p<0.001), and EI, FD, and HIST had moderately strong correlations (r = 0.61-0.74, p<0.001). Finally, HIST showed very strong correlation to EI (r = 0.92, p<0.001), while FD showed the weakest relationship to EI (r = 0.82, p<0.001). This work shows that the emphysema index and fractal dimension have the least variability overall of the commonly used measures of emphysema and that they offer the most unique quantification of emphysema relative to each other.
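Three of the four measures are simple functionals of the lung voxels' HU histogram and can be sketched directly (the fractal dimension is omitted, and the -950 HU emphysema cutoff is a common convention rather than necessarily the one used in this study):

```python
import numpy as np

def densitometric_measures(lung_hu, ei_threshold=-950.0):
    """Emphysema index (EI, % of lung voxels below the threshold),
    mean lung density (MLD), and the 15th percentile of the
    histogram (HIST) from a flat array of lung HU values."""
    hu = np.asarray(lung_hu, dtype=float)
    ei = 100.0 * float(np.mean(hu < ei_threshold))
    mld = float(hu.mean())
    hist15 = float(np.percentile(hu, 15))
    return ei, mld, hist15
```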
Semi-automated method to measure pneumonia severity in mice through computed tomography (CT) scan analysis
Author(s):
Ansh Johri;
Daniel Schimel;
Audrey Noguchi;
Lewis L. Hsu
Show Abstract
Imaging is a crucial clinical tool for diagnosis and assessment of pneumonia, but quantitative methods are
lacking. Micro-computed tomography (micro CT), designed for lab animals, provides opportunities for non-invasive
radiographic endpoints for pneumonia studies.
HYPOTHESIS: In vivo micro CT scans of mice with early bacterial pneumonia can be scored quantitatively by semiautomated
imaging methods, with good reproducibility and correlation with bacterial dose inoculated, pneumonia
survival outcome, and radiologists' scores.
METHODS: Healthy mice had intratracheal inoculation of E. coli bacteria (n=24) or saline control (n=11). In vivo
micro CT scans were performed 24 hours later with microCAT II (Siemens). Two independent radiologists scored the
extent of airspace abnormality on a scale of 0 (normal) to 24 (completely abnormal). Using the Amira 5.2 software (Mercury Computer Systems), a histogram of voxel counts within the Hounsfield range of -510 to 0 was created and analyzed, and a segmentation procedure was devised.
RESULTS: A t-test was performed to determine whether there was a significant difference in the mean voxel value of
each mouse in the three experimental groups: Saline Survivors, Pneumonia Survivors, and Pneumonia Non-survivors. It
was found that the voxel count method was able to statistically distinguish the Saline Survivors from the Pneumonia Survivors and the Saline Survivors from the Pneumonia Non-survivors, but not the Pneumonia Survivors from the Pneumonia Non-survivors. The segmentation method, however, successfully distinguished the two Pneumonia groups.
CONCLUSION: We have pilot-tested an evaluation of early pneumonia in mice using micro CT and a semi-automated
method for lung segmentation and scoring system. Statistical analysis indicates that the system is reliable and merits
further evaluation.
Quantitative analysis of airway abnormalities in CT
Author(s):
Jens Petersen;
Pechin Lo;
Mads Nielsen;
Goutham Edula;
Haseem Ashraf;
Asger Dirksen;
Marleen de Bruijne
Show Abstract
A coupled surface graph cut algorithm for airway wall segmentation from Computed Tomography (CT) images
is presented. Using cost functions that highlight both inner and outer wall borders, the method combines the
search for both borders into one graph cut.
The proposed method is evaluated on 173 manually segmented images extracted from 15 different subjects
and shown to give accurate results, with 37% fewer errors than the Full Width at Half Maximum (FWHM) algorithm and 62% fewer than a similar graph cut method without coupled surfaces. Common measures of airway wall thickness such as the Interior Area (IA) and Wall Area percentage (WA%) were measured by the proposed method on a total of 723 CT scans from a lung cancer screening study. These measures were significantly different
for participants with Chronic Obstructive Pulmonary Disease (COPD) compared to asymptomatic participants.
Furthermore, reproducibility was good as confirmed by repeat scans and the measures correlated well with the
outcomes of pulmonary function tests, demonstrating the use of the algorithm as a COPD diagnostic tool.
Additionally, a new measure of airway wall thickness is proposed, Normalized Wall Intensity Sum (NWIS).
NWIS is shown to correlate better with lung function test values and to be more reproducible than previous
measures IA, WA% and airway wall thickness at a lumen perimeter of 10 mm (PI10).
Towards automatic determination of total tumor burden from PET images
Author(s):
Steffen Renisch;
Roland Opfer;
Rafael Wiemker
Show Abstract
Quantification of potentially cancerous lesions from imaging modalities, most prominently from CT or PET images, plays a crucial role both in the diagnosis and staging of cancer and in the assessment of the response of a cancer to therapy, e.g., for lymphoma or lung cancer. For PET imaging, several quantifications that might
bear great discriminating potential (e.g. total tumor burden or total tumor glycolysis) involve the segmentation
of the entirety of all of the cancerous lesions. However, this particular task of segmenting the entirety of all
cancerous lesions might be very tedious if it has to be done manually, in particular if the disease is scattered or
metastasized and thus consists of numerous foci; this is one of the reasons why only few clinical studies on those
quantifications are available. In this work, we investigate a way to aid the easy determination of the entirety of
cancerous lesions in a PET image of a human. The approach is designed to detect all hot spots within a PET
image and rank their probability of being a cancerous lesion. The basis of this component is a modified watershed
algorithm; the ranking is performed on a combination of several, primarily morphological measures derived from
the individual basins. This component is embedded in a software suite to assess response to a therapy based on
PET images. As a preprocessing step, potential lesions are segmented and indicated to the user, who can select
the foci which constitute the tumor and discard the false positives. This procedure substantially simplifies the
segmentation of the entire tumor burden of a patient. This approach of semi-automatic hot spot detection is
evaluated on 17 clinical datasets.
Development of CAD prototype system for Crohn's disease
Author(s):
Masahiro Oda;
Takayuki Kitasaka;
Kazuhiro Furukawa;
Osamu Watanabe;
Takafumi Ando;
Hidemi Goto;
Kensaku Mori
Show Abstract
The purpose of this paper is to present a CAD prototype system for Crohn's disease. Crohn's disease causes
inflammation or ulcers of the gastrointestinal tract. The number of patients of Crohn's disease is increasing
in Japan. Symptoms of Crohn's disease include intestinal stenosis, longitudinal ulcers, and fistulae. An optical endoscope cannot pass through an intestinal stenosis in some cases. We propose a new CAD system using abdominal
fecal tagging CT images for efficient diagnosis of Crohn's disease. The system displays virtual unfolded (VU),
virtual endoscopic, curved planar reconstruction, multi planar reconstruction, and outside views of both small
and large intestines. To generate the VU views, we employ a small and large intestines extraction method followed
by a simple electronic cleansing method. The intestine extraction is based on a region-growing process, which uses the characteristic that tagged fluid neighbors air in the intestine. The electronic cleansing enables observation
of intestinal wall under tagged fluid. We change the height of the VU views according to the perimeter of the
intestine. In addition, we developed a method to enhance the longitudinal ulcer on views of the system. We
enhance concave parts on the intestinal wall, which are caused by the longitudinal ulcer, based on local intensity
structure analysis. We examined the small and the large intestines of eleven CT images by the proposed system.
The VU views enabled efficient observation of the intestinal wall. The height change of the VU views helps in finding intestinal stenoses on the VU views. The concave region enhancement made longitudinal ulcers clear on
the views.
Eigenvalue-weighting and feature selection for computer-aided polyp detection in CT colonography
Author(s):
Hongbin Zhu;
Su Wang;
Yi Fan;
Hongbing Lu;
Zhengrong Liang
Show Abstract
With the development of computer-aided polyp detection towards virtual colonoscopy screening, the trade-off between
detection sensitivity and specificity has gained increasing attention. An optimum detection, with least number of false
positives and highest true positive rate, is desirable and involves interdisciplinary knowledge, such as feature extraction,
feature selection as well as machine learning. Toward that goal, various geometrical and textural features, associated
with each suspicious polyp candidate, have been individually extracted and stacked together as a feature vector.
However, directly inputting these high-dimensional feature vectors into a learning machine, e.g., neural network, for
polyp detection may introduce redundant information due to feature correlation and induce the curse of dimensionality.
In this paper, we explored an indispensable building block of computer-aided polyp detection, i.e., principal component
analysis (PCA)-weighted feature selection for neural network classifier of true and false positives. The major concepts
proposed in this paper include (1) the use of PCA to reduce the feature correlation, (2) the scheme of adaptively
weighting each principal component (PC) by the associated eigenvalue, and (3) the selection of feature combinations via
the genetic algorithm. As such, the eigenvalue is also taken as part of the characterizing feature, and the necessary
number of features can be exposed to mitigate the curse of dimensionality. Trained and tested with a radial basis neural network, the proposed computer-aided polyp detection achieved 95% sensitivity at a cost of 2.99 false positives per polyp on average.
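One plausible reading of the eigenvalue-weighting scheme, offered as an illustration rather than the authors' exact formulation, is to scale each principal-component projection by its eigenvalue before feeding the result to the classifier (the genetic-algorithm selection stage is not shown):

```python
import numpy as np

def eigenweighted_pcs(X):
    """Standardize the features, project onto the principal
    components, and weight each projection by its eigenvalue so
    that stronger components contribute more to classification."""
    Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
    order = np.argsort(eigvals)[::-1]            # descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return (Xs @ eigvecs) * eigvals
```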
Segmentation of polycystic kidneys from MR images
Author(s):
Dimitri Racimora;
Pierre-Hugues Vivier;
Hersh Chandarana;
Henry Rusinek
Show Abstract
Polycystic kidney disease (PKD) is a disorder characterized by the growth of numerous fluid filled cysts in the kidneys.
Measuring cystic kidney volume is thus crucial to monitoring the evolution of the disease. While T2-weighted MRI
delineates the organ, automatic segmentation is very difficult due to highly variable shape and image contrast. The
interactive stereology methods used currently involve a compromise between segmentation accuracy and time. We have
investigated semi-automated methods: active contours and a sub-voxel morphology based algorithm. Coronal T2-
weighted images of 17 patients were acquired in four breath-holds using the HASTE sequence on a 1.5 Tesla MRI unit.
The segmentation results were compared to ground truth kidney masks obtained as a consensus of experts. The automatic active contour algorithm yielded an average 22% ± 8.6% volume error. A recently developed method (Bridge Burner)
based on thresholding and constrained morphology failed to separate PKD from the spleen, yielding 37.4% ± 8.7%
volume error. Manual post-editing reduced the volume error to 3.2% ± 0.8% for active contours and 3.2% ± 0.6% for
Bridge Burner. The total time (automated algorithm plus editing) was 15 min ± 5 min for active contours and 19 min ±
11 min for Bridge Burner. The average volume errors for the stereology method were 5.9%, 6.2%, and 5.4% for mesh sizes of 6.6, 11, and 16.5 mm, with average processing times of 17, 7, and 4 min, respectively. These results show that a nearly two-fold improvement in PKD segmentation accuracy over the stereology technique can be achieved with a combination of active contours and post-editing.
A model based method for recognizing psoas major muscles in torso CT images
Author(s):
Naoki Kamiya;
Xiangrong Zhou;
Huayue Chen;
Takeshi Hara;
Ryujiro Yokoyama;
Masayuki Kanematsu;
Hiroaki Hoshi;
Hiroshi Fujita
Show Abstract
In aging societies, it is important to analyze age-related hypokinesia. The psoas major muscle has many important functional capabilities, such as balance and posture control. These functions can be measured by its cross-sectional area (CSA), volume, and thickness. However, these values are currently calculated manually in the clinical setting. The
purpose of our study is to propose an automated recognition method of psoas major muscles in X-ray torso CT images.
The proposed recognition process involves three steps: 1) determination of anatomical points such as the origin and
insertion of the psoas major muscle, 2) generation of a shape model for the psoas major muscle, and 3) recognition of the
psoas major muscles by use of the shape model. The model was built using a quadratic function and was fit to the anatomical center line of the psoas major muscle. The shape model was generated using 20 CT cases and tested on 20 other CT cases. The database consisted of 12 male and 8 female cases ranging in age from the 40s to the 80s. The average Jaccard similarity coefficient (JSC) employed in the evaluation was 0.7. Our experimental results indicated that the proposed method was effective for volumetric analysis and could be used for quantitative measurement of the psoas major muscles in CT images.
Accurate motion parameter estimation for colonoscopy tracking using a regression method
Author(s):
Jianfei Liu;
Kalpathi R. Subramanian;
Terry S. Yoo
Show Abstract
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information
during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to
compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding
patient anatomy. In the original algorithm, motion parameters were estimated using the traditional least sum of squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the
improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method for motion
parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution
of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking
results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates
better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments
demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the
ascending colon, and 410 to 1316 in the transverse colon.
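A generic LMS estimator, illustrating the robust-regression idea rather than reproducing the authors' implementation, solves the motion system on random minimal subsets and keeps the parameters with the smallest median squared residual, which tolerates close to half of the flow vectors being outliers:

```python
import numpy as np

def lms_fit(A, b, n_trials=500, seed=0):
    """Least Median of Squares fit of A x = b: sample minimal subsets,
    solve exactly, and keep the solution minimizing the median of the
    squared residuals over all equations."""
    rng = np.random.default_rng(seed)
    n, p = A.shape
    best_x, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            x = np.linalg.solve(A[idx], b[idx])
        except np.linalg.LinAlgError:
            continue                              # degenerate subset
        med = np.median((A @ x - b) ** 2)
        if med < best_med:
            best_x, best_med = x, med
    return best_x
```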
Segmentation of liver portal veins by global optimization
Author(s):
Pieter Bruyninckx;
Dirk Loeckx;
Dirk Vandermeulen;
Paul Suetens
Show Abstract
We present an algorithm for the segmentation of the liver portal veins from an arterial phase CT. The developed
segmentation algorithm incorporates a physiological model that states that the vasculature pattern is organized
such that the whole organ is perfused using minimal mechanical energy. This model is, amongst others, applicable
to the lungs, the liver, and the kidneys. The algorithm first locally detects probable candidate vessel segments in
the image. The subset of these segments that generates the most probable vessel tree according to the image and the physiological model is then sought by a global optimization method. The algorithm has already been
applied successfully to segment heavily simplified lung vessel trees from CT images. Now the general feasibility
of this approach is evaluated by applying it to the segmentation of the liver portal veins from an arterial phase
CT scan. This is more challenging, because the intensity difference between the vessels and the parenchyma
is small. To cope with the low contrast a support vector machines approach with a robust feature vector is
used to locally detect vessels. This approach has been applied to a set of five images, for which a ground truth
segmentation is available. This algorithm is a first step towards an automatic segmentation of all of the liver
vasculature.
Haustral fold registration in CT colonography and its application to registration of virtual stretched view of the colon
Author(s):
Eiichiro Fukano;
Masahiro Oda;
Takayuki Kitasaka;
Yasuhito Suenaga;
Tetsuji Takayama M.D.;
Hirotsugu Takabatake M.D.;
Masaki Mori M.D.;
Hiroshi Natori M.D.;
Shigeru Nawano M.D.;
Kensaku Mori
Show Abstract
This paper proposes a method for establishing correspondence between the supine and prone positions of the colon in CT volumes. In CT colonography, two CT volumes, in the supine and prone positions, are often taken so that the whole colonic wall can be observed by comparing them. However, the colonic wall is soft and changes its shape when a patient changes position. Therefore, physicians need to take the positional relations into account when comparing the two CT volumes. Calculating the positional relations between the two positions of the colon can reduce the load on physicians. A large number of haustral folds exists in the colon, and their order does not change even when a patient changes position. Therefore, haustral folds are suitable for registering the supine and prone positions of the colon. We also find sharply bending points of the centerline of the colon as landmarks for a coarse registration. The precise registration is then performed by finding the positional correspondence of the haustral folds in the supine and prone positions. In the correspondence search, we first find the correspondence among large haustral folds, followed by small haustral folds. In an experiment using six pairs of 3D abdominal CT volumes, 65.1% of the correspondences of large haustral folds were correct, 25.6% were incorrect, and 9.3% could not be judged. For the small haustral folds, 13.3% of the correspondences were correct, 42.9% were incorrect, and 32.7% could not be judged.
An open source implementation of colon CAD in 3D slicer
Author(s):
Haiyong Xu;
H. Donald Gage;
Pete Santago
Show Abstract
Most colon CAD (computer aided detection) software products, especially commercial products, are designed for use by
radiologists in a clinical environment. Therefore, those features that effectively assist radiologists in finding polyps are
emphasized in those tools. However, colon CAD researchers, many of whom are engineers or computer scientists, are
working with CT studies in which polyps have already been identified using CT Colonography (CTC) and/or optical
colonoscopy (OC). Their goal is to utilize that data to design a computer system that will identify all true polyps with no
false positive detections. Therefore, they are more concerned with how to reduce false positives and to understand the
behavior of the system than how to find polyps. Thus, colon CAD researchers have different requirements for tools not
found in current CAD software. We have implemented a module in 3D Slicer to assist these researchers. As with clinical
colon CAD implementations, the ability to promptly locate a polyp candidate in a 2D slice image and on a 3D colon
surface is essential for researchers. Our software provides this capability, and uniquely, for each polyp candidate, the
prediction value from a classifier is shown next to the 3D view of the polyp candidate, as well as its CTC/OC finding.
This capability makes it easier to study each false positive detection and identify its causes. We describe features in our
colon CAD system that meet researchers' specific requirements. Our system is implemented as an open source 3D Slicer module, and the software is available to the public for use and for extension (http://www2.wfubmc.edu/ctc/download/).
Prostate cancer region prediction using MALDI mass spectra
Author(s):
Ayyappa Vadlamudi;
Shao-Hui Chuang;
Xiaoyan Sun;
Lisa Cazares;
Julius Nyalwidhe;
Dean Troyer;
O. John Semmes;
Jiang Li;
Frederic D. McKenzie
Show Abstract
For the early detection of prostate cancer, the analysis of the Prostate-specific antigen (PSA) in serum is currently the
most popular approach. However, previous studies show that 15% of men have prostate cancer even though their PSA
concentrations are low. MALDI Mass Spectrometry (MS) proves to be a better technology to discover molecular tools
for early cancer detection. The molecular tools or peptides are termed as biomarkers. Using MALDI MS data from
prostate tissue samples, prostate cancer biomarkers can be identified by searching for molecules or molecular combinations that can differentiate cancer tissue regions from normal ones. Cancer tissue regions are usually identified by
pathologists after examining H&E stained histological microscopy images. Unfortunately, histopathological examination
is currently done on an adjacent slice because the H&E staining process changes the tissue's protein structure and would degrade the MALDI analysis if the same tissue were used, while the MALDI imaging process destroys the tissue slice so
that it is no longer available for histopathological exam. For this reason, only the most confident cancer region resulting
from the histopathological examination on an adjacent slice will be used to guide the biomarker identification. It is
obvious that a better cancer boundary delimitation on the MALDI imaging slice would be beneficial. In this paper, we
propose methods to predict the true cancer boundary, using the MALDI MS data, from the most confident cancer region
given by pathologists on an adjacent slice.
Automated scheme for measuring polyp volume in CT colonography using Hessian matrix-based shape extraction and 3D volume growing
Author(s):
Kenji Suzuki;
Mark L. Epstein;
Jianwu Xu;
Piotr Obara M.D.;
Don C. Rockey M.D.;
Abraham H. Dachman M.D.
Show Abstract
Current measurement of the single longest dimension of a polyp is subjective and has variations among radiologists. Our
purpose was to develop an automated measurement of polyp volume in CT colonography (CTC). We developed a
computerized segmentation scheme for measuring polyp volume in CTC, which consisted of extraction of a highly
polyp-like seed region based on the Hessian matrix, segmentation of polyps by use of a 3D volume-growing technique,
and sub-voxel refinement to reduce segmentation bias. Our database consisted of 30 polyp views (15 polyps) in CTC scans from 13 patients. To obtain a "gold standard," a radiologist outlined polyps in each slice and calculated volumes by summation of areas. The measurement study was repeated three times at least one week apart to minimize memory-effect bias. We used the mean volume of the three studies as the "gold standard." Our measurement scheme yielded a mean polyp volume of 0.38 cc (range: 0.15-1.24 cc), whereas the mean "gold standard" manual volume was 0.40
cc (range: 0.15-1.08 cc). The mean absolute difference between automated and manual volumes was 0.11 cc with
standard deviation of 0.14 cc. The two volumetrics reached excellent agreement (intra-class correlation coefficient was
0.80) with no statistically significant difference (p(F≤f) = 0.42). Thus, our automated scheme efficiently provides
accurate polyp volumes for radiologists.
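The two core steps of the scheme, a Hessian-based blob seed followed by 3D volume growing, can be sketched as follows; the toy volume, blobness score, and growing threshold are illustrative assumptions, and the sub-voxel refinement step is omitted.

```python
# Sketch: (1) pick a polyp-like seed where the Hessian indicates a bright
# blob (all three eigenvalues negative), then (2) grow a 3D region from it.
import numpy as np
from collections import deque

def hessian_eigvals(vol):
    gz, gy, gx = np.gradient(vol)
    H = np.empty(vol.shape + (3, 3))
    for i, g in enumerate((gz, gy, gx)):
        H[..., i, 0], H[..., i, 1], H[..., i, 2] = np.gradient(g)
    return np.linalg.eigvalsh(H)               # sorted eigenvalues per voxel

def grow(vol, seed, lo):
    region = np.zeros(vol.shape, bool)
    region[seed] = True
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
               and not region[n] and vol[n] >= lo:
                region[n] = True
                q.append(n)
    return region

z, y, x = np.mgrid[0:20, 0:20, 0:20]
vol = np.exp(-((z - 10)**2 + (y - 10)**2 + (x - 10)**2) / 18.0)  # toy "polyp"
ev = hessian_eigvals(vol)
blobness = -ev.sum(axis=-1) * (ev < 0).all(axis=-1)   # bright-blob score
seed = np.unravel_index(np.argmax(blobness), vol.shape)
mask = grow(vol, seed, lo=0.5)
print("polyp volume (voxels):", int(mask.sum()))
```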
Computerized evaluation method of white matter hyperintensities related to subcortical vascular dementia in brain MR images
Author(s):
Hidetaka Arimura;
Yasuo Kawata;
Yasuo Yamashita;
Taiki Magome;
Masafumi Ohki;
Fukai Toyofuku;
Yoshiharu Higashida;
Kazuhiro Tsuchiya
Show Abstract
We have developed a computerized evaluation method of white matter hyperintensity (WMH) regions for the diagnosis
of vascular dementia (VaD) based on magnetic resonance (MR) images, and implemented the proposed method as a
graphical interface program. The WMH regions were segmented using either a region growing technique or a level set
method, one of which was selected by using a support vector machine. We applied the proposed method to MR images
acquired from 10 patients with a diagnosis of VaD. The mean similarity index between WMH regions determined by a
manual method and the proposed method was 78.2±11.0%. The proposed method could effectively assist
neuroradiologists in evaluating WMH regions.
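The similarity index reported above can be illustrated with a small sketch; Dice overlap is assumed here as the index, which is the usual choice for comparing a manual and an automated segmentation.

```python
# Dice overlap of a manual and an automated binary mask, in percent;
# the masks below are toy stand-ins for WMH segmentations.
import numpy as np

def similarity_index(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 200.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

manual = np.zeros((64, 64), bool); manual[20:40, 20:40] = True
auto = np.zeros((64, 64), bool);   auto[22:42, 22:42] = True
print(f"similarity index: {similarity_index(manual, auto):.1f}%")
```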
Prediction of brain tumor progression using a machine learning technique
Author(s):
Yuzhong Shen;
Debrup Banerjee;
Jiang Li;
Adam Chandler;
Yufei Shen;
Frederic D. McKenzie;
Jihong Wang
Show Abstract
A machine learning technique is presented for assessing brain tumor progression by exploring six patients' complete
MRI records scanned during their visits in the past two years. There are ten MRI series, including diffusion tensor image
(DTI), for each visit. After registering all series to the corresponding DTI scan at the first visit, annotated normal and
tumor regions were overlaid. The intensity value of each pixel inside the annotated regions was then extracted across all of the ten MRI series to compose a 10-dimensional vector. Each feature vector falls into one of three categories: normal,
tumor, and normal but progressed to tumor at a later time. In this preliminary study, we focused on the trend of brain
tumor progression during three consecutive visits, i.e., visit A, B, and C. A machine learning algorithm was trained using
the data containing information from visit A to visit B, and the trained model was used to predict tumor progression from
visit A to visit C. Preliminary results showed that prediction for brain tumor progression is feasible. An average of
80.9% pixel-wise accuracy was achieved for tumor progression prediction at visit C.
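A compact sketch of the prediction setup: 10-dimensional per-pixel vectors with labels from the A-to-B interval train a classifier that is then applied to visit-C pixels. The classifier choice and synthetic data are assumptions, not the authors' configuration.

```python
# Sketch of pixel-wise progression prediction from 10-D MRI feature vectors.
# Synthetic stand-in data; labels: 0 normal, 1 tumor, 2 later progressed.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X_ab = rng.normal(size=(500, 10))     # pixel vectors labeled from visits A->B
y_ab = rng.integers(0, 3, size=500)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1)
model.fit(X_ab, y_ab)

X_c = rng.normal(size=(100, 10))      # pixel vectors at visit C
pred_c = model.predict(X_c)
print("predicted class counts at visit C:", np.bincount(pred_c))
```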
Parkinson's disease prediction using diffusion-based atlas approach
Author(s):
Roxana Oana Teodorescu;
Daniel Racoceanu;
Nicolas Smit;
Vladimir Ioan Cretu;
Eng King Tan;
Ling Ling Chan
Show Abstract
We study Parkinson's disease (PD) using an automatic specialized diffusion-based atlas. A total of 47 subjects,
among whom 22 were patients clinically diagnosed with PD and 25 were control cases, underwent DTI imaging. The EPIs
have lower resolution but provide essential anisotropy information for the fiber tracking process. The two
volumes of interest (VOI) represented by the Substantia Nigra and the Putamen are detected on the EPI and FA
respectively. We use the VOIs for the geometry-based registration. We fuse the anatomical detail detected on FA
image for the putamen volume with the EPI. After 3D fiber growing on the two volumes, we compute the fiber
density (FD) and the fiber volume (FV). Furthermore, we compare patients based on the extracted fibers and
evaluate them according to the Hoehn & Yahr (H&Y) scale. This paper introduces the method used for automatic
volume detection and evaluates the fiber growing method on these volumes. Our approach is important from the
clinical standpoint, providing a new tool for the neurologists to evaluate and predict PD evolution. From the
technical point of view, the fusion approach deals with the tensor based information (EPI) and the extraction
of the anatomical detail (FA and EPI).
TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury
Author(s):
Shimiao Li;
Tianxia Gong;
Jie Wang;
Ruizhe Liu;
Chew Lim Tan;
Tze Yun Leong;
Boon Chuan Pang;
C. C. Tchoyoson Lim;
Cheng Kiang Lee;
Qi Tian;
Zhuo Zhang
Show Abstract
Traumatic brain injury (TBI) is a major cause of death and disability. Computed Tomography (CT) scan is
widely used in the diagnosis of TBI. Nowadays, a large amount of TBI CT data is stored in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to search for cases relevant to the current study case. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system which works on TBI CT images. In this web-based system, users query by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary
feature vectors. We use the Jaccard-Needham measure as the similarity measurement. Based on these, we
propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is
used to evaluate the system performance, which shows the system produces satisfactory retrieval results. The
system is expected to improve the current hospital data management in TBI and to give better support for the
clinical decision-making process. It may also contribute to the computer-aided education in TBI.
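A small sketch of the retrieval score: per-slice bin-based binary vectors compared with the Jaccard measure, aggregated over slices into a series-level similarity. The aggregation rule used here (mean of the best per-slice matches) is an assumption, not necessarily the paper's 3D measure.

```python
# Jaccard similarity between binary feature vectors, lifted to a simple
# series-level score over two stacks of CT slices. Data are synthetic.
import numpy as np

def jaccard(u, v):
    inter = np.logical_and(u, v).sum()
    union = np.logical_or(u, v).sum()
    return inter / union if union else 1.0

def series_similarity(q_slices, t_slices):
    # each argument: (n_slices, n_bins) binary array
    return np.mean([max(jaccard(q, t) for t in t_slices) for q in q_slices])

rng = np.random.default_rng(2)
query = rng.integers(0, 2, size=(5, 32)).astype(bool)
target = rng.integers(0, 2, size=(7, 32)).astype(bool)
print(f"3D similarity score: {series_similarity(query, target):.3f}")
```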
Shape similarity analysis of regions of interest in medical images
Author(s):
Qiang Wang;
Amalia Charisi;
Longin Jan Latecki;
James Gee;
Vasilis Megalooikonomou
Show Abstract
In this work, we introduce a new representation technique of 2D contour shapes and a sequence similarity measure to
characterize 2D regions of interest in medical images. First, we define a distance function on contour points in order to
map the shape of a given contour to a sequence of real numbers. Thus, the computation of shape similarity is reduced to
the matching of the obtained sequences. Since both a query and a target sequence may be noisy, i.e., contain some outlier
elements, it is desirable to exclude the outliers in order to obtain a robust matching performance. For the computation of
shape similarity, we propose the use of an algorithm which performs elastic matching of two sequences. The contribution
of our approach is that, unlike previous works that require images to be warped according to a template image for
measuring their similarity, it obviates this need; therefore, it can estimate image similarity for any type of medical image
in a fast and efficient manner. To demonstrate our method's applicability, we analyzed a brain image dataset consisting
of corpus callosum shapes, and we investigated the structural differences between children with chromosome 22q11.2
deletion syndrome and controls. Our findings indicate that our method is quite effective and can be easily applied to medical diagnosis in all cases in which shape difference is an important clue.
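The contour-to-sequence mapping can be sketched as below, using distance to the centroid as a plausible instance of the distance function on contour points (the paper's exact function may differ).

```python
# Map an ordered 2D contour to a sequence of real numbers via distance to
# the centroid, normalized for scale; an ellipse serves as a toy contour.
import numpy as np

def contour_to_sequence(points):
    """points: (n, 2) array of contour coordinates, in order."""
    centroid = points.mean(axis=0)
    d = np.linalg.norm(points - centroid, axis=1)
    return d / d.max()                      # scale-invariant sequence

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ellipse = np.c_[2.0 * np.cos(theta), 1.0 * np.sin(theta)]
print(contour_to_sequence(ellipse)[:5])
```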
Population analysis of the cingulum bundle using the tubular surface model for schizophrenia detection
Author(s):
Vandana Mohan;
Ganesh Sundaramoorthi;
Marek Kubicki;
Douglas Terry;
Allen Tannenbaum
Show Abstract
We propose a novel framework for population analysis of DW-MRI data using the Tubular Surface Model. We
focus on the Cingulum Bundle (CB) - a major tract for the Limbic System and the main connection of the Cingulate
Gyrus, which has been associated with several aspects of Schizophrenia symptomatology. The Tubular Surface
Model represents a tubular surface as a center-line with an associated radius function. It provides a natural way
to sample statistics along the length of the fiber bundle and reduces the registration of fiber bundle surfaces to that
of 4D curves. We apply our framework to a population of 20 subjects (10 normal, 10 schizophrenic) and obtain
excellent results with neural network based classification (90% sensitivity, 95% specificity) as well as unsupervised
clustering (k-means). Further, we apply statistical analysis to the feature data and characterize the discrimination
ability of local regions of the CB, as a step towards localizing CB regions most relevant to Schizophrenia.
Robustness of interactive intensity thresholding based breast density assessment in MR-mammography
Author(s):
Sa. Reed;
G. Ertas;
S. Doran;
R. M. Warren;
M. O. Leach
Show Abstract
The efficiency of breast density assessment using interactive intensity thresholding applied to intensity uniformity corrected
T1-weighted MR images is investigated for 20 healthy women who attended the UK multi-centre study of MRI screening for
breast cancer. Mammographic density is estimated on the medial-lateral oblique X-ray mammograms using CUMULUS. MR
density assessment is performed using both high and low-resolution T1-weighted images. The left and the right breast
regions anterior to the pectoral muscle were segmented on these images using active contouring. For each region, intensity
uniformities were corrected using proton density images and a user selected uniformity factor. An interactively selected
threshold is applied to the corrected images to detect fibroglandular tissue. The breast density is calculated as the ratio of the
classified fibroglandular tissue to the segmented breast volume.
There is no systematic difference, good consistency, and a high correlation between the left and the right breast densities estimated from X-ray mammograms and from the high and low-resolution MR images. The correlation is highest and the consistency best for the low-resolution MR measurements (r=0.976, mean absolute difference = 2.12%). Mean breast
densities calculated over the left and the right breasts on high and low-resolution MR images are highly correlated with
mammographic density (r=0.923 and 0.903, respectively) but are approximately 50% lower.
Interactive intensity thresholding of T1-weighted MR images provides an easy, reproducible and reliable way to assess breast
density. High and low-resolution measurements are both highly correlated with the mammographic density but the latter
requires less processing and acquisition time.
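The final density computation reduces to a simple ratio after thresholding; a minimal sketch follows, assuming fibroglandular tissue appears dark on the corrected T1-weighted images. The mask, image, and threshold are toy values.

```python
# Breast density as the ratio of thresholded fibroglandular voxels to the
# segmented breast volume; the "<=" comparison assumes fibroglandular
# tissue is dark on corrected T1-weighted images.
import numpy as np

def breast_density(corrected, breast_mask, threshold):
    fibroglandular = (corrected <= threshold) & breast_mask
    return 100.0 * fibroglandular.sum() / breast_mask.sum()

rng = np.random.default_rng(3)
img = rng.uniform(0, 1, size=(64, 64, 32))   # toy corrected T1 volume
mask = np.ones_like(img, dtype=bool)         # toy breast segmentation
print(f"breast density: {breast_density(img, mask, threshold=0.35):.1f}%")
```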
Repeatability and classifier bias in computer-aided diagnosis for breast ultrasound
Author(s):
K. Drukker;
L. L. Pesce;
M. L. Giger
Show Abstract
The purpose was to investigate the repeatability and bias of the output of two classifiers commonly used in computer-aided diagnosis for the task of distinguishing benign from malignant lesions. Classifier training and testing were
performed within a bootstrap approach using a dataset of 125 sonographic breast lesions (54 malignant, 71 benign). The
classifiers investigated were linear discriminant analysis (LDA) and a Bayesian Neural Net (BNN) with 5 hidden units.
Both used the same 4 input lesion features. The bootstrap .632plus area under the ROC curve (AUC) was used as a
summary performance metric. On an individual case basis, the variability of the classifier output was used in a detailed
performance evaluation of repeatability and bias. The LDA obtained an AUC value of 0.87 with 95% confidence interval
[0.81; 0.92]. For the BNN, those values were 0.86 and [0.76; 0.93], respectively. The classifier outputs for individual cases displayed better repeatability (less variability) for the LDA than for the BNN; for the LDA the maximum repeatability (lowest variability) lay in the middle of the range of possible outputs, while the BNN was least repeatable (highest variability) in this region. There was, however, a small but significant systematic bias in the LDA output, while for the BNN the bias appeared to be weak. In summary, while ROC analysis suggested similar classifier performance,
there were substantial differences in classifier behavior on a by-case basis. Knowledge of this behavior is crucial for
successful translation and implementation of computer-aided diagnosis in clinical decision making.
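The repeatability experiment can be mimicked in a few lines: retrain on bootstrap samples and record the spread of each case's output. LDA is shown since it is one of the two classifiers studied; the data and the number of bootstrap rounds are stand-ins.

```python
# Per-case output variability of a classifier under bootstrap retraining;
# synthetic 4-feature data mirroring the 71 benign / 54 malignant split.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (71, 4)), rng.normal(1, 1, (54, 4))])
y = np.r_[np.zeros(71), np.ones(54)]

outputs = []
for _ in range(200):                            # bootstrap training sets
    idx = rng.integers(0, len(y), len(y))
    clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
    outputs.append(clf.predict_proba(X)[:, 1])  # output for every case
per_case_sd = np.std(outputs, axis=0)           # repeatability per case
print("median per-case output SD:", round(float(np.median(per_case_sd)), 3))
```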
Effect of variable gain on computerized texture analysis on digitized mammograms
Author(s):
Hui Li;
Maryellen L. Giger;
Li Lan;
Yading Yuan;
Neha Bhooshan;
Olufunmilayo I. Olopade
Show Abstract
Computerized texture analysis of mammographic images has emerged as a means to characterize breast parenchyma
and estimate breast percentage density, and thus, to ultimately assess the risk of developing breast cancer. However,
during the digitization process, mammographic images may be modified and optimized for viewing purposes, or
mammograms may be digitized with different scanners. It is important to demonstrate how computerized texture
analysis will be affected by differences in the digital image acquisition. In this study, mammograms from 172
subjects, 30 women with the BRCA1/2 gene-mutation and 142 low-risk women, were retrospectively collected and
digitized. Contrast enhancement based on a look-up table that simulates the histogram of a mixed-density breast
was applied on very dense and very fatty breasts. Computerized texture analysis was performed on these
transformed images, and the effect of variable gain on computerized texture analysis on mammograms was
investigated. Area under the receiver operating characteristic curve (AUC) was used as a figure of merit to assess
the individual texture feature performance in the task of distinguishing between the high-risk and the low-risk
women for developing breast cancer. For those features based on coarseness measures and fractal measures, the
histogram transformation (contrast enhancement) showed little effect on the classification performance of these
features. However, as expected, for those features based on gray-scale histogram analysis, such as balance and skewness, and on contrast measures, large variations in AUC values were observed.
Understanding this effect will allow us to better assess breast cancer risk using computerized texture analysis.
Breast MRI intensity non-uniformity correction using mean-shift
Author(s):
Aliaksei Makarau;
Henkjan Huisman;
Roel Mus;
Miranda Zijp;
Nico Karssemeijer
Show Abstract
In breast MRI, intensity inhomogeneity due to coil profile hampers development of robust segmentation and
automated processing methods. The purpose of this paper is to evaluate the performance in breast MRI of a
number of existing non-uniformity correction methods, mostly developed for brain imaging, and a novel correction
method first presented here. Ten breast MRI exams, which were manually segmented into background and five
tissue classes, were used for performance assessment. Results show that the relatively simple and fast bias field
correction method presented in this paper outperforms the other methods in a number of aspects.
Improving performance and reliability of interactive CAD schemes
Author(s):
Xiao-Hui Wang;
Sang Cheol Park;
Jun Tan;
Joseph K. Leader;
Bin Zheng
Show Abstract
An interactive computer-aided detection or diagnosis (ICAD) scheme allows observers to query suspicious
abnormalities (lesions) depicted on medical images. Once a suspicious region is queried, ICAD segments the abnormal
region, computes a set of image features, searches for and identifies the reference regions depicted on the verified lesions
that are similar to the queried one. Based on the distribution of the selected similar regions, ICAD generates a detection
(or classification) score of the queried region depicting true-positive disease. In this study, we assessed the performance and reliability of an ICAD scheme using a database including a total of 1500 positive images depicting verified breast masses and 1500 negative images depicting ICAD-cued false-positive regions, as well as the leave-one-out testing method.
We conducted two experiments. In the first experiment, we tested the relationship between ICAD performance and the
size of reference database by systematically increasing the size of reference database from 200 to 3000 images. In the
second experiment, we tested the relationship between ICAD performance and the similarity level between the queried
image and the retrieved similar references by applying a set of thresholds to systematically remove the queried images
whose similarity level to their most "similar" reference images is lower than the threshold. The performance was compared based on the areas under ROC curves (AUC). The results showed that (1) as the reference database grew, the AUC value monotonically increased from 0.636±0.041 to 0.854±0.004, and (2) as the similarity threshold increased, the AUC value also monotonically increased from 0.854±0.004 to 0.932±0.016. The increase of AUC values and the
decrease of their standard deviations indicate the improvement of both CAD performance and reliability. The study
suggested that (1) assembling large and diverse reference databases and (2) assessing and reporting the reliability of
ICAD-generated results based on the similarity measurement are important in development and application of the ICAD
schemes.
Automated estimation of breast density on mammogram using combined information of histogram statistics and boundary gradients
Author(s):
Youngwoo Kim;
Changwon Kim;
Jong-Hyo Kim
Show Abstract
This paper presents an automated scheme for breast density estimation on mammogram using statistical and boundary
information. Breast density is regarded as a meaningful indicator for breast cancer risk, but measurement of breast
density still relies on the qualitative judgment of radiologists. Therefore, we attempted to develop an automated system
achieving objective and quantitative measurement. For preprocessing, we first segmented the breast region, performed
contrast stretching, and applied median filtering. Then, two features were extracted: statistical information, including the standard deviation of the fat and dense regions in the breast area, and boundary information, which is the edge magnitude of the set of pixels with the same intensity. These features were calculated for each intensity level. By combining these features,
the optimal threshold was determined which best divided the fat and dense regions. For evaluation purposes, 80 cases of
Full-Field Digital Mammography (FFDM) taken in our institution were utilized. Two observers conducted the
performance evaluation. The average correlation coefficients between the human observers and the automated estimation were 0.9580 for the threshold and 0.9869 for the density percentage. These results suggest that the combination of statistical and boundary information is a promising method for automated breast density estimation.
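A sketch of the threshold selection idea follows: for each candidate intensity level, a histogram-based statistic and the gradient magnitude on the corresponding iso-intensity boundary are combined into a score, and the level maximizing the score is kept. The specific scoring rule here is an assumed stand-in for the paper's combination.

```python
# Combined histogram/boundary threshold search on a toy two-class image.
import numpy as np

def select_threshold(img, mask, levels=64):
    vals = img[mask]
    gy, gx = np.gradient(img)
    edge = np.hypot(gy, gx)                       # boundary gradient magnitude
    step = (vals.max() - vals.min()) / levels
    best_t, best_score = None, -np.inf
    for t in np.linspace(vals.min(), vals.max(), levels)[1:-1]:
        dense, fat = vals[vals > t], vals[vals <= t]
        if dense.size < 10 or fat.size < 10:
            continue
        # histogram statistic: class separation relative to spread
        stat = abs(dense.mean() - fat.mean()) / (dense.std() + fat.std() + 1e-6)
        boundary = edge[mask & (np.abs(img - t) < step)]
        score = stat + (boundary.mean() if boundary.size else 0.0)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

rng = np.random.default_rng(5)
toy = np.where(rng.uniform(size=(128, 128)) > 0.6,
               rng.normal(0.7, 0.05, (128, 128)),    # "dense" pixels
               rng.normal(0.3, 0.05, (128, 128)))    # "fat" pixels
print("selected threshold:",
      round(float(select_threshold(toy, np.ones_like(toy, dtype=bool))), 3))
```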
Similarity based false-positive reduction for breast cancer using radiographic and pathologic imaging features
Author(s):
Akshay Pai;
Ravi K. Samala;
Jianying Zhang;
Wei Qian
Show Abstract
Mammography reading by radiologists and breast tissue image interpretation by pathologists often leads to high False
Positive (FP) Rates. Similarly, current Computer Aided Diagnosis (CADx) methods tend to concentrate more on
sensitivity, thus increasing the FP rates. A novel method is introduced here which employs a similarity-based approach to
decrease the FP rate in the diagnosis of microcalcifications. This method employs the Principal Component Analysis
(PCA) and the similarity metrics in order to achieve the proposed goal. The training and testing set is divided into
generalized (Normal and Abnormal) and more specific (Abnormal, Normal, Benign) classes. The performance of this
method as a standalone classification system is evaluated in both the cases (general and specific). In another approach
the probability of each case belonging to a particular class is calculated. If the probabilities are too close to classify, the
augmented CADx system can be instructed to perform a detailed analysis of such cases. For normal cases with high probability, no further processing is necessary, thus reducing the computation time. Hence, this novel method can be employed in cascade with CADx to reduce the FP rate and also avoid unnecessary computational time. Using this methodology, false positive rates of 8% and 11% are achieved for mammography and cellular images, respectively.
Classification of mammographic masses: influence of regions used for feature extraction on the classification performance
Author(s):
Florian Wagner;
Thomas Wittenberg;
Matthias Elter
Show Abstract
Computer-assisted diagnosis (CADx) for the characterization of mammographic masses as benign or malignant has a very high potential to help radiologists during the critical process of diagnostic decision making.
By default, the characterization of mammographic masses is performed by extracting features from a region of interest (ROI) depicting the mass.
To investigate the influence of the region on the classification performance, textural, morphological, frequency- as well as moment-based features are calculated in subregions of the ROI, which has been delineated manually by an expert.
The investigated subregions are
(a) the semi-automatically segmented area which includes only the core of the mass,
(b) the outer border region of the mass, and
(c) the combination of the outer and the inner border region, referred to as mass margin.
To extract the border region and the margin of a mass an extended version of the rubber band straightening transform (RBST) was developed. Furthermore, the effectiveness of the features extracted from the RBST transformed border region and mass margin is compared to the effectiveness of the same features extracted from the untransformed regions.
After the feature extraction process a preferably optimal feature subset is selected for each feature extractor. Classification is done using a k-NN classifier.
The classification performance was evaluated using the area Az under the receiver operating characteristic curve.
A publicly available mammography database was used as the data set. Results showed that the manually drawn ROI led to superior classification performance for the morphological feature extractors, and that the transformed outer border region and the mass margin are not suitable for moment-based features but yield promising results for textural and frequency-based features.
Beyond that, the mass margin, which combines the inner and the outer border region, leads to better classification performance than the outer border region on its own.
An improved method for segmentation of mammographic masses
Author(s):
Matthias Elter;
Christian Held
Show Abstract
Computer aided diagnosis (CADx) systems can support the radiologist in the complex task of discriminating benign and malignant mammographic lesions. Automatic segmentation of mammographic lesions in regions of interest (ROIs) is a core module of many CADx systems. Previously, we have proposed a novel method for segmentation of mammographic masses. The approach was based on the observation that the optical density of a mass is usually high near its core and decreases towards its boundary. In the work at hand, we improve this approach by integration of a pre-processing module for the correction of inhomogeneous background tissue and by improved selection of the optimal mass contour from a list of candidates based on a cost function. We evaluate the performance of the proposed approach using ten-fold cross-validation on a database of mass lesions and ground-truth segmentations. Furthermore, we compare the improved segmentation approach with the previously proposed approach and with implementations of two state of the art approaches. The results of our study indicate that the proposed approach outperforms both the original method and the two state of the art methods.
Computer-aided diagnosis of digital mammography images using unsupervised clustering and biclustering techniques
Author(s):
Mohamed A. Al-Olfe;
Fadhl M. Al-Akwaa;
Wael A. Mohamed;
Yasser M. Kadah
Show Abstract
A new methodology for computer aided diagnosis in digital mammography using unsupervised classification and class-dependent feature selection is presented. This technique considers unlabeled data and provides unsupervised classes that
give a better insight into classes and their interrelationships, thus improving the overall effectiveness of the diagnosis.
This technique is also extended to utilize biclustering methods, which allow for definition of unsupervised clusters of
both pathologies and features. This has potential to provide more flexibility, and hence better diagnostic accuracy, than
the commonly used feature selection strategies. The developed methods are applied to diagnose digital mammographic
images from the Mammographic Image Analysis Society (MIAS) database and the results confirm the potential for
improving the current diagnostic rates.
Multi-agent method for masses classification in mammogram
Author(s):
Fangqing Peng;
Lihua Li;
Weidong Xu;
Wei Liu
Show Abstract
In this paper a new approach to mass classification based on multi-agent (MA) method is proposed for CAD in
mammography. Multi-agent method is used here as a method that fuses the classification information from multiple
classifiers in order to obtain a better decision result. Each agent receives the measurement value of individual classifier
as its initial value in classifying a sample and sends a message to a decision center. The decision center responds to this message with an analysis of the correlation among these classifiers and their own decision information. If the analysis result conforms to a given standard, the center provides a final result; otherwise the agents' messages are modified iteratively. 128 ROIs, including 64 benign masses and 64 malignant masses, from the DDSM, were used in the
mass classification experiment. In comparison with the majority voting based fusion method, we evaluated the
performance of proposed multi-agent fusion approach in distinguishing malignant and benign masses. The results
demonstrated that the multi-agent method outperforms the majority voting method. Multi-agent fusion method yielded
an accuracy of 95.47%, while the majority voting method had an accuracy of 92.23%. In addition, a preliminary study of
MA method for mass classification under the bi-view model is reported. All of these experiments showed that the
multi-agent method can play a significant role in multiple classifier fusion to improve mass classification in
mammography.
Computer aided breast calcification auto-detection in cone beam breast CT
Author(s):
Xiaohua Zhang;
Ruola Ning;
Jiangkun Liu
Show Abstract
In Cone Beam Breast CT (CBBCT), breast calcifications have higher intensities than the surrounding tissues. Without
the superposition of breast structures, the three-dimensional distribution of the calcifications can be revealed. In this
research, based on the fact that calcifications have higher contrast, a local thresholding and a histogram thresholding
were used to select candidate calcification areas. Six features were extracted from each candidate calcification: average
foreground CT number value, foreground CT number standard deviation, average background CT number value,
background CT number standard deviation, foreground-background contrast, and average edge gradient. To reduce the
false positive candidate calcifications, a feed-forward back-propagation artificial neural network was designed. The artificial neural network was trained with the radiologist-confirmed calcifications and used as a classifier in the
calcification auto-detection task. In the preliminary experiments, 90% of the calcifications in the testing data sets were
detected correctly with an average of 10 false positives per data set.
Evaluation of a 3D lesion segmentation algorithm on DBT and breast CT images
Author(s):
I. Reiser;
S. P. Joseph;
R. M. Nishikawa;
M. L. Giger;
J. Boone;
K. Lindfors;
A. Edwards;
N. Packard;
R. H. Moore;
D. B. Kopans
Show Abstract
Recently, digital breast tomosynthesis (DBT) and breast CT (BCT) have been developed for breast imaging. Since each modality
produces a fundamentally different representation of the breast volume, our goal was to investigate whether
a 3D segmentation algorithm for breast masses could be applied to both DBT and BCT images. A
secondary goal of this study was to investigate a simplified method for comparing manual outlines to a computer
segmentation.
The seeded mass lesion segmentation algorithm is based on maximizing the radial gradient index (RGI) along
a constrained region contour. In DBT, the constraint function was a prolate spheroidal Gaussian, with a larger FWHM along the depth direction where the resolution is low, while it was a spherical Gaussian for BCT. For
DBT, manual lesion outlines were obtained in the in-focus plane of the lesion, which was used to compute the
overlap ratio with the computer segmentation. For BCT, lesions were manually outlined in three orthogonal
planes, and the average overlap ratio from the three planes was computed.
In DBT, 81% of all lesions were segmented at an overlap ratio of 0.4 or higher, based on manual outlines in
one slice through the lesion center. In BCT, 93% of all segmentations achieved an average overlap ratio of 0.4,
based on the manual outlines in three orthogonal planes.
Our results indicate that mass lesions in both BCT and DBT images can be segmented with the proposed 3D segmentation algorithm, by selecting an appropriate set of parameters and after the images have undergone specific pre-processing.
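The RGI criterion itself is easy to illustrate: among candidate contours around a seed (plain circles below, rather than Gaussian-constrained contours), keep the one whose image gradients align best with the outward radial direction. The absolute projection is used so that bright and dark lesions are treated alike; everything here is a simplified stand-in.

```python
# Radial-gradient-index style contour selection on a toy 2D lesion.
import numpy as np

def rgi_best_radius(img, seed, radii, n_pts=90):
    gy, gx = np.gradient(img)
    ang = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
    best_r, best_rgi = None, -np.inf
    for r in radii:
        ys = np.clip((seed[0] + r * np.sin(ang)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip((seed[1] + r * np.cos(ang)).astype(int), 0, img.shape[1] - 1)
        g_rad = gy[ys, xs] * np.sin(ang) + gx[ys, xs] * np.cos(ang)
        g_mag = np.hypot(gy[ys, xs], gx[ys, xs])
        rgi = np.abs(g_rad).sum() / (g_mag.sum() + 1e-9)  # radial alignment
        if rgi > best_rgi:
            best_r, best_rgi = r, rgi
    return best_r, best_rgi

yy, xx = np.mgrid[0:64, 0:64]
lesion = (np.hypot(yy - 32, xx - 32) < 10).astype(float)  # toy bright mass
r, score = rgi_best_radius(lesion, (32, 32), radii=range(4, 20))
print("best radius:", r, "RGI:", round(score, 3))
```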
Automatic detection of plaques with severe stenosis in coronary vessels of CT angiography
Author(s):
M. S. Dinesh;
Pandu Devarakota;
Jitendra Kumar
Show Abstract
Coronary artery disease is the end result of the accumulation of atheromatous plaques within the walls of coronary
arteries and is the leading cause of death worldwide. Computed tomography angiography (CTA) has been proved to be
very useful for accurate noninvasive diagnosis and quantification of plaques. However, the existing methods to measure
the stenosis in the plaques are not accurate enough in mid and distal segments where the vessels become narrower. To
alleviate this, we propose a method that consists of three stages, namely: automatic extraction of coronary vessels; vessel straightening; and lumen extraction with stenosis evaluation.
In the first stage, the coronary vessels are segmented using a parametric approach based on circular vessel model at each
point on the centerline. It is assumed that centerline information is available in advance. Vessel straightening in the
second stage performs multi-planar reformat (MPR) to straighten the curved vessels. MPR view of a vessel helps to
visualize and measure the plaques better. On the straightened vessel, lumen and vessel wall are segregated using a
nearest neighbor classification. To detect the plaques with severe stenosis in the vessel lumen, we propose a "Diameter
Luminal Stenosis" method for analyzing the smaller segments of the vessel. Proposed measurement technique identifies
the segments that have plaques and reports the top three severely stenosed segments. Proposed algorithm is applied on 24
coronary vessels belonging to multiple cases acquired from Sensation 64 - slice CT and initial results are promising.
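A minimal sketch of a diameter-based stenosis measurement along a straightened vessel: percent narrowing of each local lumen diameter relative to a reference diameter, with the worst segments reported. The median-as-reference rule and the data are assumptions.

```python
# Percent diameter stenosis along a straightened vessel centerline.
import numpy as np

def diameter_stenosis(diameters_mm, top_k=3):
    d = np.asarray(diameters_mm, float)
    ref = np.median(d)                           # assumed reference diameter
    stenosis = 100.0 * (1.0 - d / ref)           # percent narrowing per point
    worst = np.argsort(stenosis)[::-1][:top_k]   # most stenosed positions
    return [(int(i), round(float(stenosis[i]), 1)) for i in worst]

lumen = [3.1, 3.0, 2.9, 1.4, 2.8, 3.0, 2.2, 3.1, 3.0]  # mm along centerline
print(diameter_stenosis(lumen))
```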
Automatic lumen segmentation from intravascular OCT images
Author(s):
Rafik Bourezak;
Guy Lamouche;
Farida Cheriet
Show Abstract
In the last decade, intravascular optical coherence tomography has seen tremendous progress. Its high resolution (5-
10μm) allows coronary plaque characterization, vulnerable plaque assessment, and the guidance of intravascular
interventions. However, one intravascular OCT sequence contains hundreds of frames, and their interpretation requires a
lot of time and energy. Therefore, there is a strong need for automated segmentation algorithms to process this large
amount of data. In this article, we present an automated algorithm to extract lumen contours from images obtained with
intravascular Optical Coherence Tomography (OCT). Unlike existing methods, our algorithm requires no post- or pre-processing of the image. First, a sliding window is passed over every A-scan to locate the artery tissue, this location being
determined from the largest distribution of the grey level values. Once all the tissue is extracted from the image, every
segmented A-scan is binarized separately. For a single A-scan, the level of amplitude often varies strongly across the
tissue. A global threshold would cause low amplitude parts of the tissue to be considered as belonging to the background.
Our solution is to determine local thresholds for every A-scan. That is, instead of having a single global threshold, we
allow the threshold itself to smoothly vary across the image. Subsequently, on the binarized image the Prewitt mask is
moved from the detected tissue position toward the probe to segment the lumen. The proposed method has been
validated qualitatively on images acquired under different conditions without changing any parameter of the algorithm.
Experimental results show that the proposed method is accurate and robust to extract lumen borders.
Automated myocardial perfusion from coronary x-ray angiography
Author(s):
Corstiaan J. Storm;
Cornelis H. Slump
Show Abstract
The purpose of our study is the evaluation of an algorithm to determine the physiological relevance of a coronary
lesion as seen in a coronary angiogram. The aim is to extract as much as possible information from a standard
coronary angiogram to decide if an abnormality, percentage of stenosis, as seen in the angiogram, results in
physiological impairment of the blood supply of the region nourished by the coronary artery. Coronary angiography, still the gold standard, is used to determine the cause of angina pectoris based on the demonstration of a significant stenosis in a coronary artery. Dimensions of a lesion such as length and percentage of narrowing
can at present easily be calculated by using an automatic computer algorithm such as Quantitative Coronary
Angiography (QCA) techniques resulting in just anatomical information ignoring the physiological relevance of
the lesion. In our study we analyze myocardial perfusion images in standard coronary angiograms at rest and in artificially induced hyperemic phases, using a drug, e.g., intracoronary papaverine. Setting a Region of Interest (ROI) in
the angiogram without overlying major vessels makes it possible to calculate contrast differences as a function of
time, so-called time-density curves, in the basal and hyperemic phases. To minimize motion artifacts, end-diastolic images are selected based on the ECG in the basal and hyperemic phases, in an identical ROI in the same angiographic projection. The development of new algorithms for calculating differences in blood supply in the selected region is presented together with the results of a small clinical case study using the standard angiographic procedure.
An adaptive 3D region growing algorithm to automatically segment and identify thoracic aorta and its centerline using computed tomography angiography scans
Author(s):
F. Ferreira;
J. Dehmeshki;
H. Amin;
M. E. Dehkordi;
A. Belli;
A. Jouannic;
S. Qanadli
Show Abstract
Thoracic Aortic Aneurysm (TAA) is a localized swelling of the thoracic aorta. The progressive growth of an aneurysm
may eventually cause a rupture if not diagnosed or treated. This necessitates an accurate measurement, which in turn calls for accurate segmentation of the aneurysm regions. Computer Aided Detection (CAD) is a tool to automatically detect and segment the TAA in Computed Tomography Angiography (CTA) images. The fundamental first step in developing such a system is a robust method for detecting the main vessel and measuring its diameters. In this paper we propose a novel adaptive method to simultaneously segment the thoracic aorta and identify its centerline. For this purpose, an adaptive parametric 3D region growing is proposed in which the seed is automatically selected through detection of the celiac artery, and the parameters of the method are re-estimated while the region grows through the aorta. At each phase of region growing, the initial centerline of the aorta is also identified and refined. Thus the proposed method simultaneously detects the aorta and identifies its centerline. The method has been applied to CTA images from 20 patients, with good agreement with the visual assessment of two radiologists.
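The adaptive growing idea can be sketched as follows: a 3D region grows from a seed and periodically re-estimates its intensity statistics, so the acceptance interval adapts as the region advances along the aorta. The update schedule, tolerance, and toy volume are illustrative assumptions; celiac-artery seed detection is omitted.

```python
# Adaptive 3D region growing with on-the-fly parameter re-estimation.
import numpy as np
from collections import deque

def adaptive_grow(vol, seed, tol=2.5, update_every=500):
    region = np.zeros(vol.shape, bool)
    region[seed] = True
    accepted = [vol[seed]]
    mu, sd = vol[seed], 30.0                    # initial parameter guesses
    q, n = deque([seed]), 0
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= p[i] < vol.shape[i] for i in range(3)) and not region[p] \
               and abs(vol[p] - mu) <= tol * sd:
                region[p] = True
                q.append(p)
                accepted.append(vol[p])
                n += 1
                if n % update_every == 0:       # re-estimate while growing
                    mu, sd = np.mean(accepted), max(np.std(accepted), 1.0)
    return region

vol = np.full((40, 40, 40), 50.0)
vol[:, 15:25, 15:25] = 300.0                    # toy contrast-filled aorta
mask = adaptive_grow(vol, seed=(20, 20, 20))
print("segmented voxels:", int(mask.sum()))
```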
Filter learning and evaluation of the computer aided visualization and analysis (CAVA) paradigm for pulmonary nodules using the LIDC-IDRI database
Author(s):
Rafael Wiemker;
Ekta Dharaiya;
Amnon Steinberg;
Thomas Buelow;
Axel Saalbach;
Torbjörn Vik
Show Abstract
We present a simple rendering scheme for thoracic CT datasets which yields a color coding based on local differential
geometry features rather than Hounsfield densities. The local curvatures are computed on several resolution scales and
mapped onto different colors, thereby enhancing nodular and tubular structures. The rendering can be used as a
navigation device to quickly access points of possible chest anomalies, in particular lung nodules and lymph nodes. The
underlying principle is to use the nodule enhancing overview as a possible alternative to classical CAD approaches by
avoiding explicit graphical markers. For performance evaluation we have used the LIDC-IDRI lung nodule database.
Our results indicate that the nodule-enhancing overview correlates well with the projection images produced from the
IDRI expert annotations, and that we can use this measure to optimize the combination of differential geometry filters.
Modeling uncertainty in classification design of a computer-aided detection system
Author(s):
Rahil Hosseini;
Jamshid Dehmeshki;
Sarah Barman;
Mahdi Mazinani;
Salah Qanadli
Show Abstract
A computerized image analysis technology suffers from imperfection, imprecision, and vagueness of the input data, and from the propagation of these effects through all individual components of the technology, including image enhancement, segmentation, and pattern
recognition. Furthermore, a Computerized Medical Image Analysis System (CMIAS) such as computer aided detection
(CAD) technology deals with another source of uncertainty that is inherent in image-based practice of medicine. While
there are several technology-oriented studies reported in developing CAD applications, no attempt has been made to
address, model and integrate these types of uncertainty in the design of the system components, even though uncertainty
issues directly affect the performance and its accuracy. In this paper, the main uncertainty paradigms associated with
CAD technologies are addressed. The influence of the vagueness and imprecision in the classification of the CAD, as a
second reader, on the validity of ROC analysis results is defined. In order to tackle the problem of uncertainty in the
classification design of the CAD, two fuzzy methods are applied and evaluated for a lung nodule CAD application.
Type-1 fuzzy logic system (T1FLS) and an extension of it, interval type-2 fuzzy logic system (IT2FLS) are employed as
methods with high potential for managing uncertainty issues. The novelty of the proposed classification methods is to
address and handle all sources of uncertainty associated with a CAD system. The results reveal that IT2FLS is superior
to T1FLS for tackling all sources of uncertainty and, significantly, the problem of inter- and intra-operator observer variability.
Usefulness of texture features for segmentation of lungs with severe diffuse interstitial lung disease
Author(s):
Jiahui Wang;
Feng Li;
Qiang Li
Show Abstract
We developed an automated method for the segmentation of lungs with severe diffuse interstitial lung disease (DILD) in
multi-detector CT. In this study, we compare the performance levels of this method and a thresholding-based segmentation method for normal lungs, moderately abnormal lungs, severely abnormal lungs, and all lungs in our
database. Our database includes 31 normal cases and 45 abnormal cases with severe DILD. The outlines of lungs were
manually delineated by a medical physicist and confirmed by an experienced chest radiologist. These outlines were used
as reference standards for the evaluation of the segmentation results. We first employed a thresholding technique for CT
value to obtain initial lungs, which contain normal and mildly abnormal lung parenchyma. We then used texture-feature
images derived from co-occurrence matrix to further segment lung regions with severe DILD. The segmented lung
regions with severe DILD were combined with the initial lungs to generate the final segmentation results. We also
identified and removed the airways to improve the accuracy of the segmentation results. We used three metrics, i.e.,
overlap, volume agreement, and mean absolute distance (MAD) between automatically segmented lung and reference
lung to evaluate the performance of our segmentation method and the thresholding-based segmentation method. Our
segmentation method achieved a mean overlap of 96.1%, a mean volume agreement of 98.1%, and a mean MAD of 0.96
mm for the 45 abnormal cases. On the other hand, the thresholding-based segmentation method achieved a mean overlap of 94.2%, a mean volume agreement of 95.8%, and a mean MAD of 1.51 mm for the 45 abnormal cases. Our new method thus obtained a higher performance level than the thresholding-based segmentation method.
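The texture step rests on co-occurrence matrices; the sketch below computes a few standard GLCM features for a small ROI of the kind that could flag severely diseased parenchyma missed by CT-value thresholding. The feature choice is illustrative (scikit-image >= 0.19 is assumed for the graycomatrix API).

```python
# Co-occurrence-matrix texture features for a toy lung ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

rng = np.random.default_rng(7)
roi = (rng.uniform(0, 1, size=(32, 32)) * 31).astype(np.uint8)  # toy ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)
features = {p: float(graycoprops(glcm, p).mean())
            for p in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```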
Realistic simulated lung nodule dataset for testing CAD detection and sizing
Author(s):
Robert D. Ambrosini;
Walter G. O'Dell
Show Abstract
The development of computer-aided diagnosis (CAD) methods for the processing of CT lung scans continues to become
increasingly popular due to the potential of these algorithms to reduce image reading time, errors caused by user fatigue,
and user subjectivity when screening for the presence of malignant lesions. This study seeks to address the critical need
for a realistic simulated lung nodule CT image dataset based on real tumor morphologies that can be used for the
quantitative evaluation and comparison of these CAD algorithms. The manual contouring of 17 different lung
metastases was performed, and reconstruction of the full 3-D surface of each tumor was achieved through the use of an analytical equation composed of a spherical harmonics series. 2-D nodule slice representations were then
computed based on these analytical equations to produce realistic simulated nodules that can be inserted into CT datasets
with well-circumscribed, vascularized, or juxtapleural borders and also be scaled to represent nodule growth. The 3-D
shape and intensity profile of each simulated nodule created from the spherical harmonics reconstruction was compared
to the real patient CT lung metastasis from which its contour points were derived through the calculation of a 3-D
correlation coefficient, producing an average value of 0.8897 (±0.0609). This database of realistic simulated nodules can
fulfill the need for a reproducible and reliable gold standard for CAD algorithms with regards to nodule detection and
sizing, especially given its virtually unlimited capacity for expansion to other nodule shape variants, organ systems, and
imaging modalities.
Predicting LIDC diagnostic characteristics by combining spatial and diagnostic opinions
Author(s):
William H. Horsthemke;
Daniela S. Raicu;
Jacob D. Furst
Show Abstract
Computer-aided diagnostic characterization (CADc) aims to support medical imaging decision making by objectively
rating the radiologists' subjective, perceptual opinions of visual diagnostic characteristics of suspicious lesions. This
research uses the publicly available Lung Image Database Consortium (LIDC) collection of radiologists' outlines of
nodules and ratings of boundary and shape characteristics: spiculation, margin, lobulation, and sphericity. The approach
attempts to reduce the observed disagreement between radiologists on the extent of nodules by combining their spatial
opinion using probability maps to create regions of interest (ROIs). From these ROIs, image features are extracted and
combined using machine learning models to predict a combined opinion, the median rating and a thresholded, binary
version of their diagnostic characteristics. The results show slight to fair agreement (linearly weighted kappa) between the CADc models and the median radiologist opinion for the full-scale five-level rating, and fair to moderate agreement using a binary version of the median radiologist opinion.
Improving CAD performance in pulmonary embolism detection: preliminary investigation
Author(s):
Sang Cheol Park;
Brian Chapman;
Christopher Deible;
Sean Lee;
Bin Zheng
Show Abstract
In this preliminary study, a new computer-aided detection (CAD) scheme for pulmonary embolism (PE)
detection was developed and tested. The scheme applies multiple steps including lung segmentation, candidate
extraction using an intensity mask and the tobogganing method, feature extraction, and false positive reduction using a multi-feature-based artificial neural network (ANN) and a k-nearest neighbor (KNN) classifier to detect and classify suspicious
PE lesions. In particular, a new method to define the surrounding background regions of interest (ROI) depicting PE
candidates was proposed and tested in an attempt to reduce the detection of false positive regions. In this study, the
authors also investigated the following methods to improve CAD performance: a grouping and scoring method, feature selection using a genetic algorithm, and a limit on the number of suspicious lesions allowed to be cued in one
examination. To test the scheme performance, a set of 20 chest CT examinations was selected. Among them, 18 were positive cases depicting 44 verified PE lesions and the remaining 2 were negative cases. The dataset was also divided into a training subset (9 examinations) and a testing subset (11 examinations). The experimental results showed that, when applied to the testing dataset, the CAD scheme using the tobogganing method alone achieved a 2D region-based sensitivity of 72.1% (220/305) and a 3D lesion-based sensitivity of 83.3% (20/24) with a total of 19,653 2D false-positive (FP) PE regions (1,786.6 per case, or approximately 6.3 per CT slice). Applying the proposed new method to improve lung region
segmentation and better define the surrounding background ROI, the scheme reduced the region-based sensitivity by
6.5% to 65.6% or lesion-based sensitivity by 4.1% to 79.2% while reducing the FP rate by 65.6% to 6,752 regions (or
613.8 per case). After applying the methods of grouping, the maximum scoring, a genetic algorithm (GA) to delete
"redundant" features, and limiting the maximum number of cued-lesions in one examination, CAD scheme further
reduced FP rate to 50 per case. Based on the FROC curve, an operating threshold was set up in which the CAD scheme
could ultimately achieve 63.2% detection sensitivity with 18.4 FP regions per case when applying to the testing dataset.
This study investigated the feasibility of several methods applied to the CAD scheme for detecting PE lesions and
demonstrated that CAD performance could depend on many factors including better defining candidate ROI and its
background, optimizing the 2D region grouping and scoring methods, selecting the optimal feature set, and limiting the
number of allowed cueing lesions per examination.
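Tobogganing, used above for candidate extraction, is simple to sketch: every pixel slides to its lowest-valued neighbor until it reaches a local minimum, and pixels that reach the same minimum form one candidate cluster. A 2D toy version follows; a production version would run in 3D on preprocessed CT intensities.

```python
# Toboggan clustering: steepest-descent pointers plus pointer jumping.
import numpy as np

def toboggan(img):
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    slide = np.empty((h, w), int)
    # each pixel points at its lowest 8-neighbor (or itself at a minimum)
    for y in range(h):
        for x in range(w):
            best = (y, x)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and img[ny, nx] < img[best]:
                        best = (ny, nx)
            slide[y, x] = idx[best]
    flat = slide.ravel()
    for _ in range(h + w):                     # pointer-jump to the sink
        flat = flat[flat]
    _, labels = np.unique(flat, return_inverse=True)
    return labels.reshape(h, w)

toy = np.array([[5, 5, 5, 5],
                [5, 1, 5, 2],
                [5, 5, 5, 5]], float)
print(toboggan(toy))                           # two candidate clusters
```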
Selective reduction of CAD false-positive findings
Author(s):
N. Camarlinghi;
I. Gori;
A. Retico;
F. Bagagli
Show Abstract
Computer-Aided Detection (CAD) systems are becoming widespread supporting tools to radiologists' diagnosis,
especially in screening contexts. However, a large number of false positive (FP) alarms would inevitably lead
both to an undesired possible increase in time for diagnosis, and to a reduction in radiologists' confidence in
CAD as a useful tool. Most CAD systems implement as final step of the analysis a classifier which assigns a
score to each entry of a list of findings; by thresholding this score it is possible to define the system performance
on an annotated validation dataset in terms of a FROC curve (sensitivity vs. FP per scan). To use a CAD as
a supportive tool for most clinical activities, an operative point has to be chosen on the system FROC curve,
according to the obvious criterion of keeping the sensitivity as high as possible, while maintaining the number
of FP alarms still acceptable. The strategy proposed in this study is to choose an operative point with high
sensitivity on the CAD FROC curve, then to implement in cascade a further classification step, constituted by
a smarter classifier. The key issue of this approach is that the smarter classifier is actually a meta-classifier of
more than one decision system, each specialized in rejecting a particular type of FP finding generated by the
CAD.
The application of this approach to a dataset of 16 lung CT scans previously processed by the VBNACAD
system is presented. The VBNACAD performance of 87.1% sensitivity to juxtapleural nodules is improved from 18.5 down to 10.1 FP per scan while maintaining the same sensitivity. This work has
been carried out in the framework of the MAGIC-V collaboration.
A model for the relationship between semantic and content based similarity using LIDC
Author(s):
Grace M. Dasovich;
Robert Kim;
Daniela S. Raicu;
Jacob D. Furst
Show Abstract
There is considerable research in the field of content-based image retrieval (CBIR); however, few of the current
systems incorporate radiologists' visual impression of image similarity. Our objective is to bridge the semantic
gap between radiologists' ratings and image features. We have been developing a conceptual-based similarity
model derived from content-based similarity to improve CBIR. Previous work in our lab reduced the Lung Image
Database Consortium (LIDC) data set to a selection of 149 images of unique nodules, each containing nine
semantic ratings by four radiologists and 64 computed image features. After evaluating the similarity measures
for both content-based and semantic-based features, we selected 116 nodule pairs with a high correlation between
both similarities. These pairs were used to generate a linear regression model that predicts semantic similarity
with content similarity input with an R2 value of 0.871. The characteristics and features of nodules that were
used for the model were also investigated.
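The regression itself is a one-liner; the sketch below fits predicted semantic similarity as a linear function of content-based similarity over nodule pairs, with synthetic stand-ins for the 116 selected pairs.

```python
# Linear model: semantic similarity predicted from content-based similarity.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
content_sim = rng.uniform(0, 1, size=(116, 1))           # feature similarity
semantic_sim = 0.8 * content_sim[:, 0] + rng.normal(0, 0.05, 116)

model = LinearRegression().fit(content_sim, semantic_sim)
print("R^2:", round(model.score(content_sim, semantic_sim), 3))
```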
Variation compensation and analysis on diaphragm curvature analysis for emphysema quantification on whole lung CT scans
Author(s):
Brad M. Keller;
Anthony P. Reeves;
R. Graham Barr;
David F. Yankelevitz;
Claudia I. Henschke
Show Abstract
CT scans allow for the quantitative evaluation of the anatomical bases of emphysema. Recently, a non-density-based geometric measurement of lung diaphragm curvature has been proposed as a method for the quantification of emphysema from CT. This work analyzes the variability of diaphragm curvature and evaluates the effectiveness of a compensation methodology for the reduction of this variability, as compared to the emphysema index. Using a dataset of 43 scan-pairs with less than a 100-day time interval between scans, we find that diaphragm curvature had a trend towards lower overall variability than the emphysema index (95% CI: -9.7 to +14.7 vs. -15.8 to +12.0), and that the variation of both measures was reduced after compensation. We conclude that the variation of the new measure can be considered comparable to that of the established measure and that the compensation can successfully reduce the apparent variation of quantitative measures.
Adjacent slice prostate cancer prediction to inform MALDI imaging biomarker analysis
Author(s):
Shao-Hui Chuang;
Xiaoyan Sun;
Lisa Cazares;
Julius Nyalwidhe;
Dean Troyer;
O. John Semmes;
Jiang Li;
Frederic D. McKenzie
Show Abstract
Prostate cancer is the second most common type of cancer among men in the US [1]. Traditionally, prostate cancer
diagnosis is made by the analysis of prostate-specific antigen (PSA) levels and histopathological images of biopsy
samples under microscopes. Proteomic biomarkers can improve upon these methods. MALDI molecular spectra imaging
is used to visualize protein/peptide concentrations across biopsy samples to search for biomarker candidates.
Unfortunately, traditional processing methods require histopathological examination on one slice of a biopsy sample
while the adjacent slice is subjected to the tissue destroying desorption and ionization processes of MALDI. The highest
confidence tumor regions gained from the histopathological analysis are then mapped to the MALDI spectra data to
estimate the regions for biomarker identification from the MALDI imaging. This paper describes a process that provides a significantly better estimate of the tumor region to be mapped onto the MALDI imaging spectra coordinates, using the high-confidence region to predict the true extent of the tumor on the adjacent MALDI-imaged slice.
Automatic recognition of abnormal cells in cytological tests using multispectral imaging
Author(s):
A. Gertych;
G. Galliano M.D.;
S. Bose M.D.;
D. L. Farkas
Show Abstract
Cervical cancer is the leading cause of gynecologic disease-related death worldwide, but is almost completely
preventable with regular screening, for which cytological testing is a method of choice. Although such testing has
radically lowered the death rate from cervical cancer, it is plagued by low sensitivity and inter-observer variability.
Moreover, its effectiveness is still restricted because the recognition of shape and morphology of nuclei is compromised
by overlapping and clumped cells. Multispectral imaging can aid enhanced morphological characterization of cytological
specimens. Features including spectral intensity and texture, reflecting relevant morphological differences between
normal and abnormal cells, can be derived from cytopathology images and utilized in a detection/classification scheme.
Our automated processing of multispectral image cubes yields nuclear objects which are subjected to classification
facilitated by a library of spectral signatures obtained from normal and abnormal cells, as marked by experts. Clumps are processed separately with a reduced set of signatures. Implementation of this method yields a high rate of successful detection and classification of nuclei into predefined malignant and premalignant types, and the results correlate well with those obtained by an expert. Our multispectral approach may have an impact on the diagnostic workflow of cytological tests. Abnormal cells can be automatically highlighted and quantified, improving the objectivity and performance of the reading in a way that is currently unavailable in a clinical setting.
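As an illustration of signature-based classification, the sketch below matches a nuclear object's mean spectrum to the closest library entry. The spectral-angle similarity is an assumed matching rule, since the abstract does not specify one.

```python
# Sketch of matching a nuclear object's mean spectrum against a library of
# expert-labeled signatures. The spectral-angle rule is an assumption.
import numpy as np

def spectral_angle(s, t):
    """Angle between two spectra; smaller means more similar."""
    c = np.dot(s, t) / (np.linalg.norm(s) * np.linalg.norm(t))
    return np.arccos(np.clip(c, -1.0, 1.0))

def classify_nucleus(mean_spectrum, library):
    """library: dict mapping class label -> reference spectrum (1-D array)."""
    return min(library, key=lambda label: spectral_angle(mean_spectrum, library[label]))
```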
Segmentation of follicular regions on H&E slides using a matching filter and active contour model
Author(s):
Kamel Belkacem-Boussaid;
Jeffrey Prescott;
Gerard Lozanski M.D.;
Metin N. Gurcan
Show Abstract
Follicular Lymphoma (FL) accounts for 20-25% of non-Hodgkin lymphomas in the United States. The first step in
follicular lymphoma grading is the identification of follicles. The goal of this paper is to develop a technique to segment
follicular regions in H&E stained images. The method is based on a robust active contour model, which is initialized by a
seed point selected inside the follicle manually by the user. The novel aspect of this method is the introduction of a
matched filter for the flattening of background in the L channel of the Lab color space. The performance of the algorithm
was tested by comparing it against the manual segmentations of trained readers using the Zijdenbos similarity index. The
mean accuracy of the final segmentation compared to the manual ground truth was 0.71 with a standard deviation of
0.12.
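The Zijdenbos similarity index is equivalent to the Dice coefficient on binary masks, ZSI = 2|A ∩ B| / (|A| + |B|), as in the short helper below.

```python
# Zijdenbos similarity index (ZSI) between two binary segmentation masks.
import numpy as np

def zijdenbos_index(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```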
Classification of left and right eye retinal images
Author(s):
Ngan Meng Tan;
Jiang Liu;
Damon W. K. Wong;
Zhuo Zhang;
Shijian Lu;
Joo Hwee Lim;
Huiqi Li;
Tien Yin Wong M.D.
Show Abstract
Retinal image analysis is used by clinicians to identify and diagnose any pathologies present in a patient's eye. The
developments and applications of computer-aided diagnosis (CAD) systems in medical imaging have been rapidly
increasing over the years. In this paper, we propose a system to classify left and right eye retinal images automatically.
This paper describes our two-pronged approach to classify left and right retinal images by using the position of the
central retinal vessel within the optic disc, and by the location of the macula with respect to the optic nerve head. We
present a framework to automatically identify the locations of the key anatomical structures of the eye: the macula, optic disc, central retinal vessels within the optic disc, and the ISNT regions. An SVM model for left and right eye retinal image classification is trained based on the features from the detection and segmentation. An advantage of this is that other image processing algorithms can be focused on regions where diseases or pathologies are more likely to occur, thereby increasing the efficiency and accuracy of retinal CAD systems and pathology detection.
We have tested our system on 102 retinal images, consisting of 51 left and 51 right images, and achieved an accuracy of 94.1176%. The high experimental accuracy and robustness of this system demonstrate its potential to be integrated with other retinal CAD systems, such as ARGALI, to provide a priori information in automatic mass screening and diagnosis of retinal diseases.
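A minimal sketch of the classification stage is given below, assuming hypothetical anatomical features such as the macula offset relative to the optic disc; the feature names and data are placeholders, not the paper's actual feature set.

```python
# SVM classification of left vs. right eye from anatomical position features.
# X rows are hypothetical, e.g. [macula_dx, macula_dy, vessel_dx_in_disc].
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X_train = np.random.randn(102, 3)           # placeholder feature vectors
y_train = np.random.randint(0, 2, 102)      # 0 = left eye, 1 = right eye

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(clf.predict(X_train[:5]))             # predicted eye side per image
```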
Enhancement of optic cup detection through an improved vessel kink detection framework
Author(s):
Damon W. K. Wong;
Jiang Liu;
Ngan Meng Tan;
Zhuo Zhang;
Shijian Lu;
Joo Hwee Lim;
Huiqi Li;
Tien Yin Wong M.D.
Show Abstract
Glaucoma is a leading cause of blindness. The presence and extent of progression of glaucoma can be determined if the
optic cup can be accurately segmented from retinal images. In this paper, we present a framework which improves the
detection of the optic cup. First, a region of interest is obtained from the retinal fundus image, and a pallor-based
preliminary cup contour estimate is determined. Patches are then extracted from the ROI along this contour. To improve
the usability of the patches, adaptive methods are introduced to ensure the patches are within the optic disc and to
minimize redundant information. The patches are then analyzed for vessels by an edge transform which generates pixel
segments of likely vessel candidates. Wavelet, color and gradient information are used as input features for a SVM
model to classify the candidates as vessel or non-vessel. Subsequently, a rigorous non-parametric method is adopted in which a bi-stage multi-resolution approach is used to probe for and localize kinks along the vessels. Finally, contextual information is used to fuse pallor and kink information to obtain an enhanced optic cup segmentation. Using a batch of 21 images obtained from the Singapore Eye Research Institute, the new method results in a 12.64% reduction in the average overlap error against a pallor-only cup, indicating viable improvements in the segmentation and supporting the use of kinks for optic cup detection.
Automated measurement of retinal blood vessel tortuosity
Author(s):
Vinayak Joshi;
Joseph M. Reinhardt;
Michael D. Abramoff
Show Abstract
Abnormalities in the vascular pattern of the retina are associated with retinal diseases and are also risk factors for systemic diseases, especially cardiovascular diseases. The three-dimensional retinal vascular pattern is mostly formed congenitally, but is then modified over life in response to aging, vessel wall dystrophies, and long-term changes in blood flow and pressure. A characteristic of the vascular pattern that is appreciated by clinicians is vascular tortuosity, i.e., how curved or kinked a blood vessel, either vein or artery, appears along its course. We developed a new quantitative metric for vascular tortuosity based on the vessel's angle of curvature, the length of the curved vessel over its chord length (arc-to-chord ratio), and the number of curvature sign changes, and combined these into a unidimensional metric, the Tortuosity Index (TI). In comparison to other published methods, this method can estimate an appropriate TI both for vessels with constant curvature sign and for vessels with equal arc-to-chord ratios. We applied this method to a dataset of 15 digital fundus images of 8 patients with Facioscapulohumeral muscular dystrophy (FSHD), and to a publicly available dataset of 60 fundus images of normal cases and patients with hypertensive retinopathy, for which the arterial and venous tortuosities had also been graded by masked experts (ophthalmologists). The method produced exactly the same rank-ordered list of vessel tortuosity (TI) values as obtained by averaging the tortuosity grading given by 3 ophthalmologists for the FSHD dataset, and a list of TI values with high rank correlation with the ophthalmologists' grading for the other dataset. Our results show that TI has potential to detect and evaluate abnormal retinal vascular structure in early diagnosis and prognosis of retinopathies.
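Two of the TI ingredients, the arc-to-chord ratio and the count of curvature sign changes, can be sketched directly on a sampled centerline; how the paper weights them into the final TI is not reproduced here.

```python
# Ingredients of a tortuosity measure on a sampled 2-D vessel centerline.
import numpy as np

def arc_to_chord(points):
    """points: (N, 2) array of centerline coordinates."""
    points = np.asarray(points, dtype=float)
    seg = np.diff(points, axis=0)
    arc = np.linalg.norm(seg, axis=1).sum()       # total path length
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

def curvature_sign_changes(points):
    """Count sign flips of the turning direction along the centerline."""
    points = np.asarray(points, dtype=float)
    seg = np.diff(points, axis=0)
    # z-component of the 2-D cross product gives the turning direction
    cross = seg[:-1, 0] * seg[1:, 1] - seg[:-1, 1] * seg[1:, 0]
    signs = np.sign(cross[np.abs(cross) > 1e-9])
    return int(np.sum(signs[1:] != signs[:-1]))
```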
New algorithm for detecting smaller retinal blood vessels in fundus images
Author(s):
Robert LeAnder;
Praveen I. Bidari;
Tauseef A. Mohammed;
Moumita Das;
Scott E. Umbaugh
Show Abstract
About 4.1 million Americans suffer from diabetic retinopathy. To help automatically diagnose various stages of the
disease, a new blood-vessel-segmentation algorithm based on spatial high-pass filtering was developed to
automatically segment blood vessels, including the smaller ones, with low noise. Methods: Image database: Forty 584 x 565-pixel images were collected from the DRIVE image database. Preprocessing: Green-band extraction was
used to obtain better contrast, which facilitated better visualization of retinal blood vessels. A spatial highpass filter
of mask-size 11 was applied. A histogram stretch was performed to enhance contrast. A median filter was applied to
mitigate noise. At this point, the gray-scale image was converted to a binary image using a binary thresholding
operation. Then, a NOT operation was performed by gray-level value inversion between 0 and 255. Postprocessing:
The resulting image was AND-ed with its corresponding ring mask to remove the outer-ring (lens-edge) artifact. At
this point, the above algorithm steps had extracted most of the major and minor vessels, with some intersections and
bifurcations missing. Vessel segments were reintegrated using the Hough transform. Results: After applying the
Hough transform, both the average peak SNR and the RMS error improved by 10%. Pratt's Figure of Merit (PFM)
was decreased by 6%. Those averages were better than [1] by 10-30%. Conclusions: The new algorithm successfully
preserved the details of smaller blood vessels and should prove successful as a segmentation step for automatically
identifying diseases that affect retinal blood vessels.
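The preprocessing chain maps closely onto standard filtering primitives. The sketch below follows the stated steps (green band, 11x11 high-pass, histogram stretch, median filter, threshold, inversion, ring mask); the threshold value is an assumption, and the Hough-based reintegration of missing bifurcations is omitted.

```python
# Step-by-step sketch of the described vessel-segmentation preprocessing.
import numpy as np
from scipy import ndimage

def segment_vessels(rgb, ring_mask, thresh=128):
    green = rgb[:, :, 1].astype(float)              # green-band extraction
    # 11x11 spatial high-pass: original minus local mean
    highpass = green - ndimage.uniform_filter(green, size=11)
    # full-range histogram stretch to [0, 255]
    stretched = 255 * (highpass - highpass.min()) / (np.ptp(highpass) or 1)
    smoothed = ndimage.median_filter(stretched, size=3)   # noise mitigation
    binary = smoothed > thresh                      # binary thresholding
    vessels = ~binary                               # NOT (gray-level inversion)
    return vessels & ring_mask                      # remove lens-edge artifact
```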
Vertical cup-to-disc ratio measurement for diagnosis of glaucoma on fundus images
Author(s):
Yuji Hatanaka;
Atsushi Noudo;
Chisako Muramatsu;
Akira Sawada;
Takeshi Hara;
Tetsuya Yamamoto;
Hiroshi Fujita
Show Abstract
Glaucoma is a leading cause of permanent blindness. Retinal fundus image examination is useful for early detection of
glaucoma. In order to evaluate the presence of glaucoma, ophthalmologists determine the cup and disc areas and diagnose glaucoma using the vertical cup-to-disc ratio. However, determination of the cup area is very difficult, so we propose a method to measure the cup-to-disc ratio using a vertical profile on the optic disc. First, the blood vessels were erased from the image, and the edge of the optic disc was then detected using a Canny edge detection filter. Twenty profiles were then obtained around the center of the optic disc in the vertical direction on the blue channel of the color image, and the profile was smoothed by averaging these profiles. After that, the edge of the cup area on the vertical profile was determined by a thresholding technique. Lastly, the vertical cup-to-disc ratio was calculated. Using seventy-nine images, including twenty-five glaucoma images, a sensitivity of 80% and a specificity of 85% were achieved with this method. These results indicate that this method can be useful for the analysis of the optic disc in glaucoma examinations.
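A minimal sketch of the vertical-profile measurement follows, assuming a simple mid-range threshold on the averaged blue-channel profile; the paper's actual threshold rule may differ.

```python
# Vertical cup extent from an averaged blue-channel profile through the disc.
import numpy as np

def vertical_cup_to_disc(blue, cx, disc_top, disc_bottom, n_profiles=20):
    half = n_profiles // 2
    # average n_profiles vertical profiles around the disc center column cx
    profile = blue[disc_top:disc_bottom, cx - half:cx + half].mean(axis=1)
    t = profile.min() + 0.5 * (profile.max() - profile.min())  # assumed threshold
    bright = np.where(profile > t)[0]               # pale (cup) region of profile
    cup_height = bright.max() - bright.min() if bright.size else 0
    return cup_height / (disc_bottom - disc_top)    # vertical cup-to-disc ratio
```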
3D reconstruction of the optic nerve head using stereo fundus images for computer-aided diagnosis of glaucoma
Author(s):
Li Tang;
Young H. Kwon;
Wallace L. M. Alward;
Emily C. Greenlee;
Kyungmoo Lee;
Mona K. Garvin;
Michael D. Abràmoff
Show Abstract
The shape of the optic nerve head (ONH) is reconstructed automatically using stereo fundus color images by a robust
stereo matching algorithm, which is needed for a quantitative estimate of the amount of nerve fiber loss for patients with
glaucoma. Compared to natural scene stereo, fundus images are noisy because of the limits on illumination conditions
and imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper,
multi-scale pixel feature vectors which are robust to noise are formulated using a combination of both pixel intensity and
gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity
based matching score. The deep structures of the optic disc are reconstructed with a stack of disparity estimates in scale
space. Optical coherence tomography (OCT) data was collected at the same time, and depth information from 3D
segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In
experiments, the proposed algorithm produces estimates for the shape of the ONH that are close to the OCT-based shape,
and it shows great potential to help computer-aided diagnosis of glaucoma and other related retinal diseases.
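A sketch of one way to build such multi-scale intensity-plus-gradient feature vectors with Gaussian scale space is shown below; the paper's exact feature set and matching score are not reproduced.

```python
# Multi-scale pixel features: smoothed intensity and gradient magnitude
# at several Gaussian scales, stacked into a per-pixel feature vector.
import numpy as np
from scipy import ndimage

def pixel_features(image, scales=(1, 2, 4, 8)):
    feats = []
    for s in scales:
        smooth = ndimage.gaussian_filter(image.astype(float), sigma=s)
        gy, gx = np.gradient(smooth)
        feats.append(smooth)                 # intensity at scale s
        feats.append(np.hypot(gx, gy))       # gradient magnitude at scale s
    return np.stack(feats, axis=-1)          # (H, W, 2 * len(scales))
```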
Fundus image registration for vestibularis research
Author(s):
Vamsi K. Ithapu;
Armin Fritsche;
Ariane Oppelt;
Martin Westhofen M.D.;
Thomas M. Deserno
Show Abstract
In research on vestibular nerve disorders, fundus images of both the left and right eyes are acquired systematically to precisely assess the rotation of the eyeball that is induced by rotation of the entire head. The measurement is
still carried out manually. Although various methods have been proposed for medical image registration, robust detection of rotation, especially in images of varied quality in terms of illumination, aberrations, blur, and noise, is still challenging. This paper evaluates registration algorithms operating on different levels of semantics: (i)
data-based using the Fourier transform and log-polar maps; (ii) point-based using the scale-invariant feature transform (SIFT); (iii) edge-based using Canny edge maps; (iv) object-based using matched filters for vessel detection; (v) scene-based detecting the papilla and macula automatically; and (vi) manual registration by two independent medical experts.
For evaluation, a database of 22 patients is used, where left and right eye images are each captured in an upright head position and at a lateral head tilt of ±20°. For 66 pairs of images (132 in total), the results are compared with ground truth, and the performance measures are tabulated. A best correctness of 89.3% was obtained using the pixel-based method, allowing 2.5° deviation from the manual measures. However, the evaluation shows that for applications in computer-aided diagnosis involving a large set of images of varied quality, as in vestibularis research, registration methods based on a single level of semantics are not sufficiently robust. A multi-level semantics approach would improve the results, since failures occur on different images.
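The data-based level can be illustrated with the classic log-polar/phase-correlation recipe: a rotation about the image center becomes a translation along the angular axis after a (log-)polar warp. This assumes an approximately pure rotation about a known center.

```python
# Rotation estimate via log-polar warp + phase correlation (data-based level).
from skimage.transform import warp_polar
from skimage.registration import phase_cross_correlation

def rotation_deg(fixed, moving, radius=400):
    fp = warp_polar(fixed, radius=radius, scaling="log")
    mp = warp_polar(moving, radius=radius, scaling="log")
    shift, _, _ = phase_cross_correlation(fp, mp)
    # rows of the polar image span 0..360 degrees of angle
    return shift[0] * 360.0 / fp.shape[0]
```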
Toward automatic phenotyping of retinal images from genetically determined mono- and dizygotic twins using amplitude modulation-frequency modulation methods
Author(s):
P. Soliz;
B. Davis;
V. Murray;
M. Pattichis;
S. Barriga;
S. Russell
Show Abstract
This paper presents an image processing technique for automatically categorizing age-related macular degeneration
(AMD) phenotypes from retinal images. Ultimately, an automated approach will be much more precise and consistent in
phenotyping of retinal diseases, such as AMD. We have applied the automated phenotyping to retina images from a
cohort of mono- and dizygotic twins. The application of this technology will allow one to perform more quantitative
studies that will lead to a better understanding of the genetic and environmental factors associated with diseases such as
AMD. A method for classifying retinal images based on features derived from the application of amplitude-modulation
frequency-modulation (AM-FM) methods is presented. Retinal images from identical and fraternal twins who presented
with AMD were processed to determine whether AM-FM could be used to differentiate between the two types of twins.
Results of the automatic classifier agreed with the findings of other researchers in explaining the variation of the disease
between the related twins. AM-FM features classified 72% of the twins correctly. Visual grading found that genetics
could explain between 46% and 71% of the variance.
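A one-dimensional illustration of the AM-FM principle via the analytic signal is given below; the paper's multi-scale 2-D decomposition is more elaborate, so this is only a sketch of the idea.

```python
# 1-D AM-FM demodulation of an image row via the analytic (Hilbert) signal.
import numpy as np
from scipy.signal import hilbert

def am_fm_row(row):
    z = hilbert(row - row.mean())        # analytic signal of one image row
    amplitude = np.abs(z)                # instantaneous amplitude (AM)
    phase = np.unwrap(np.angle(z))
    frequency = np.diff(phase)           # instantaneous frequency (FM), rad/pixel
    return amplitude, frequency
```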
Interobserver variability effects on computerized volume analysis of treatment response of head and neck lesions in CT
Author(s):
Lubomir Hadjiiski;
Heang-Ping Chan;
Mohannad Ibrahim;
Berkman Sahiner;
Sachin Gujar;
Suresh K. Mukherji
Show Abstract
A computerized system for segmenting lesions in head and neck CT scans was developed to assist radiologists in
estimation of the response to treatment of malignant lesions. The system performs 3D segmentation based on a level set
model and uses as input an approximate bounding box for the lesion of interest. We investigated the effect of the
interobserver variability of radiologists' marking of the bounding box on the automatic segmentation performance. In
this preliminary study, CT scans from a pre-treatment exam and a post one-cycle chemotherapy exam of 34 patients with
primary site head and neck neoplasms were used. For each tumor, an experienced radiologist marked the lesion with a
bounding box and provided a reference standard by outlining the full 3D contour on both the pre- and post treatment
scans. A second radiologist independently marked each tumor again with another bounding box. The correlation between
the automatic and manual estimates for both the pre-to-post-treatment volume change and the percent volume change
was r=0.95. Based on the bounding boxes by the second radiologist, the correlation between the automatic and manual
estimate for the pre-to-post-treatment volume change was r=0.89 and for the percent volume change was r=0.91. The
correlation for the automatic estimates obtained from the bounding boxes by the two radiologists was as follows: (1) pre-treatment volume r=0.92, (2) post-treatment volume r=0.88, (3) pre-to-post-treatment change r=0.89, and (4) percent pre-to-post-treatment change r=0.90. The difference between the automatic estimates based on the two sets of bounding boxes did not achieve statistical significance for any of the estimates (p>0.29). The preliminary results indicate that the automated segmentation system can reliably estimate tumor size change in response to treatment relative to the radiologist's hand segmentation as reference standard, and that the performance was robust against inter-observer variability in marking the input bounding boxes.
Source separation on hyperspectral cube applied to dermatology
Author(s):
J. Mitra;
R. Jolivot;
P. Vabres;
F. S. Marzani
Show Abstract
This paper proposes a method for quantifying the components underlying human skin that are assumed to be responsible for the effective reflectance spectrum of the skin over the visible wavelength range. The method is based on independent component analysis, assuming that the epidermal melanin and the dermal haemoglobin absorbance spectra are independent of each other. The method extracts the source spectra that correspond to the ideal absorbance spectra of melanin and haemoglobin. The noisy melanin spectrum is corrected using a polynomial fit, and the quantifications associated with it are re-estimated. The results produce feasible quantifications of each source component in the examined skin patch.
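A minimal sketch of the separation step using FastICA over per-pixel absorbance spectra, assuming two sources (melanin and haemoglobin); array shapes are illustrative.

```python
# ICA source separation on a hyperspectral absorbance cube.
import numpy as np
from sklearn.decomposition import FastICA

def separate_chromophores(cube):
    """cube: (H, W, L) absorbance cube with L wavelength bands."""
    h, w, l = cube.shape
    X = cube.reshape(-1, l)                 # one absorbance spectrum per pixel
    ica = FastICA(n_components=2, random_state=0)
    weights = ica.fit_transform(X)          # per-pixel source quantifications
    spectra = ica.mixing_.T                 # (2, L) estimated source spectra
    return weights.reshape(h, w, 2), spectra
```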
Segmentation of individual ribs from low-dose chest CT
Author(s):
Jaesung Lee;
Anthony P. Reeves
Show Abstract
Segmentation of individual ribs and other bone structures in chest CT images is important for anatomical
analysis, as the segmented ribs may be used as a baseline reference for locating organs within a chest as well
as for identification and measurement of any geometric abnormalities in the bone. In this paper we present a
fully automated algorithm to segment the individual ribs from low-dose chest CT scans. The proposed algorithm
consists of four main stages. First, all the high-intensity bone structures present in the scan are segmented. Second,
the centerline of the spinal canal is identified using a distance transform of the bone segmentation. Then, the
seed region for every rib is detected based on the identified centerline, and each rib is grown from the seed region
and separated from the corresponding vertebra. This algorithm was evaluated using 115 low-dose chest CT scans
from public databases with various slice thicknesses. The algorithm parameters were determined using 5 scans,
and the remaining 110 scans were used to evaluate the performance of the segmentation algorithm. The outcome of
the algorithm was inspected by an author for the correctness of the segmentation. The results indicate that over
98% of the individual ribs were correctly segmented with the proposed algorithm.
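The spinal-canal stage can be sketched with a per-slice distance transform, assuming an externally supplied search region around the spine; the paper's own seeding and tracking logic is simplified away here.

```python
# Approximate spinal-canal centerline from a bone mask via distance transform.
import numpy as np
from scipy import ndimage

def canal_centerline(bone_mask, roi):
    """bone_mask, roi: (Z, Y, X) boolean arrays; roi (an assumption of this
    sketch) restricts the search to the neighborhood of the spine."""
    centers = []
    for z in range(bone_mask.shape[0]):
        # distance to the nearest bone voxel peaks near the canal center
        dist = ndimage.distance_transform_edt(~bone_mask[z]) * roi[z]
        centers.append(np.unravel_index(np.argmax(dist), dist.shape))
    return np.array(centers)                # (Z, 2) of (y, x) per slice
```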
A comparison of basic deinterlacing approaches for a computer assisted diagnosis approach of videoscope images
Author(s):
Andreas Kage;
Marcia Canto;
Emmanuel Gorospe;
Antonio Almario;
Christian Münzenmayer
Show Abstract
In the near future, Computer Assisted Diagnosis (CAD), which is well known in the area of mammography, might be used to support clinical experts in the diagnosis of images derived from imaging modalities such as endoscopy. In the recent past, a few initial approaches for computer-assisted endoscopy have already been presented. These systems use as input a video signal that is provided by the endoscope's video processor. Despite the advent of high-definition systems, most standard endoscopy systems today still provide only analog video signals. These signals consist of interlaced images that cannot be used in a CAD approach without deinterlacing. Many different deinterlacing approaches are known today, but most of them are specializations of a few basic approaches. In this paper we present
four basic deinterlacing approaches. We have used a database of non-interlaced images which have been
degraded by artificial interlacing and afterwards processed by these approaches. The database contains
regions of interest (ROI) of clinical relevance for the diagnosis of abnormalities in the esophagus. We
compared the classification rates on these ROIs on the original images and after the deinterlacing. The
results show that the deinterlacing has an impact on the classification rates. The Bobbing approach
and the Motion Compensation approach achieved the best classification results in most cases.
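Two of the basic approaches are easy to state in a few lines: "weave" keeps both fields as captured, while "bob" (the Bobbing approach above) rebuilds each frame from a single field by vertical interpolation. The sketch below is a generic illustration, not the paper's implementation.

```python
# Minimal numpy sketch of two basic deinterlacing approaches.
import numpy as np

def bob(frame, field=0):
    """Rebuild missing lines from one field (field=0: keep even rows)."""
    out = frame.astype(float).copy()
    missing = np.arange(1 - field, frame.shape[0], 2)
    for r in missing:
        above = out[r - 1] if r > 0 else out[r + 1]
        below = out[r + 1] if r + 1 < frame.shape[0] else out[r - 1]
        out[r] = (above + below) / 2.0      # average the neighboring field lines
    return out

def weave(frame):
    return frame                            # both fields kept as captured
```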
Segmentation and classification of dermatological lesions
Author(s):
Aurora Sáez;
Begoña Acha;
Carmen Serrano
Show Abstract
Certain skin diseases are chronic, inflammatory, and without cure. However, there are many treatment options that can clear them for a period of time. Measuring their severity and assessing their extent is fundamental to determining the efficacy of the treatment under test. Two of the most important severity-assessment parameters are erythema (redness) and scaliness. Physicians classify these parameters into several grades by a visual grading method. In this paper, a color image segmentation and classification algorithm is developed to obtain an assessment of the erythema and scaliness of dermatological lesions. Color digital photographs taken under an acquisition protocol form the database. The difference between the green and blue bands of images in RGB color space shows two modes (healthy skin and lesion) with clear separation. Otsu's method is applied to this difference in order to isolate the lesion. After the skin disease is segmented, some color and texture features are calculated, and these are the inputs to a Fuzzy-ARTMAP neural network. The neural network classifies them into the five grades of erythema and the five grades of scaliness. The method has been tested on 31 images, with a success rate of 83.87% for erythema classification and 77.42% for scaliness classification.
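The segmentation step reduces to a one-line threshold on the green-minus-blue difference, which the abstract reports as clearly bimodal; a minimal sketch (whether the lesion falls in the high or low mode depends on the data):

```python
# Otsu thresholding of the green-minus-blue difference to isolate the lesion.
import numpy as np
from skimage.filters import threshold_otsu

def segment_lesion(rgb):
    diff = rgb[:, :, 1].astype(float) - rgb[:, :, 2].astype(float)
    t = threshold_otsu(diff)
    return diff > t   # lesion polarity (above/below t) depends on the data
```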
Pathology detection on medical images based on oriented active appearance models
Author(s):
Xinjian Chen;
Jayaram K. Udupa;
Abass Alavi;
Drew A. Torigian
Show Abstract
In this paper, we propose a novel, general paradigm based on creating a statistical geographic model of shape and
appearance of normal body regions. Any deviations in a given patient image from this captured normality information are highlighted and expressed as a fuzzy pathology image. We study the feasibility of this idea in 2D images via Oriented
Active Appearance Models (OAAM). The OAAM synergistically combines AAM and live-wire concepts. The approach
consists of three main stages: model building, segmentation, and pathology detection. The model is built on image data
from normal subjects. The model currently includes shape and texture information. A variety of other information
(functional, morphometric) can be added in the future. For segmentation, a novel automatic object recognition method is
proposed which strategically combines the AAM with the live-wire method. A two-level dynamic programming method
is used to do the finer delineation. During the process of segmentation, a multi-object strategy is used for improving
recognition and delineation accuracy. For pathology detection, the model is first fit to the given image as well as possible via recognition and delineation of the objects included in the model. Subsequently, a fuzzy pathology image is generated that expresses deviations in appearance of the given image from the texture information contained in the model. The
proposed method was tested on two clinical CT medical image datasets each consisting of 40 images. Our preliminary
results indicate high segmentation accuracy (TPVF>97%, FPVF<0.5%) for delineating objects by the multi-object
strategy with good pathology detection results suggesting the feasibility of the proposed system.
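One plausible reading of the fuzzy pathology image is a per-pixel deviation map normalized by the model's variability; the sketch below is an assumption of this kind, as the abstract does not give the exact measure.

```python
# Assumed form of a fuzzy pathology image: standardized per-pixel deviation
# of the observed appearance from the fitted model's texture statistics.
import numpy as np

def fuzzy_pathology(image, model_mean, model_std, eps=1e-6):
    """All arrays share the image shape after the model is fit to the image."""
    z = np.abs(image - model_mean) / (model_std + eps)  # standardized deviation
    m = z.max()
    return z / m if m > 0 else z                        # membership in [0, 1]
```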
Automated segmentation of mucosal change in rhinosinusitis patients
Author(s):
William F. Sensakovic;
Jayant M. Pinto;
Fuad M. Baroody;
Adam Starkey;
Samuel G. Armato III
Show Abstract
Rhinosinusitis is a sinonasal disease affecting 16% of the population. Volumetric segmentation can provide objective
data that is useful when determining stage and therapeutic response. An automated volumetric segmentation method was
developed and tested. Four patients underwent baseline and follow-up CT scans. For each patient, five sections were
outlined by two otolaryngologists and the automated method. The median Dice coefficient between otolaryngologists
was 0.74. The otolaryngologist and automated segmentations demonstrated acceptable agreement with a median Dice
coefficient of 0.61. This automated method represents the first step in the creation of a computerized system for the
quantitative 3D analysis of rhinosinusitis.
Diagnosis of disc herniation based on classifiers and features generated from spine MR images
Author(s):
Jaehan Koh;
Vipin Chaudhary;
Gurmeet Dhillon
Show Abstract
In recent years the demand for an automated method for diagnosis of disc abnormalities has grown as more
patients suffer from lumbar disorders and radiologists have to treat more patients reliably in a limited amount of
time. In this paper, we propose and compare several classifiers that diagnose disc herniation, one of the common
problems of the lumbar spine, based on lumbar MR images. Experimental results on a limited data set of 68
clinical cases with 340 lumbar discs show that our classifiers can diagnose disc herniation with 97% accuracy.
Navigated tracking of skin lesion progression with optical spectroscopy
Author(s):
Alexandru Duliu;
Tobias Lasser;
Thomas Wendler;
Asad Safi;
Sibylle Ziegler;
Nassir Navab
Show Abstract
Cutaneous T-Cell Lymphoma (CTCL) is a cancer type externally characterized by alterations in the coloring of skin.
Optical spectroscopy has been proposed for the quantification of minimal changes in skin, making it an interesting tool for real-time monitoring of CTCL. However, in order to be used in a valid way, measurements on the lesions have to be taken at the same position and with the same orientation in each session. By combining hand-held optical spectroscopy devices with tracking, synchronously acquiring spectral information together with position and orientation, we introduce a novel computer-assisted scheme for valid spectral quantification of disease progression. We further present an implementation of an augmented reality guidance system that makes it possible to relocate a previously analyzed point with an accuracy of 0.8 mm and 5.0 deg (vs. 1.6 mm and 6.6 deg without guidance). The intuitive guidance, as well as the preliminary results, shows that the presented approach has great potential for innovative computer-assistance methods for the quantification of disease progression.
Computer aided diagnosis of osteoporosis using multi-slice CT images
Author(s):
Eiji Takahashi;
Shinsuke Saita;
Yoshiki Kawata;
Noboru Niki;
Masako Ito;
Hiromu Nishitani;
Noriyuki Moriyama
Show Abstract
Osteoporosis affects about 11 million people in Japan, and it is one of the problems of an aging society. In order to prevent osteoporosis, early detection and treatment are necessary. The development of multi-slice CT technology has made it possible to perform three-dimensional (3-D) image analysis with higher body-axis resolution and shorter scan time. 3-D image analysis of the thoracic vertebrae in multi-slice CT images can support the diagnosis of osteoporosis and can at the same time be used for lung cancer screening, which may lead to its early detection. We develop an automatic vertebra extraction algorithm and a vertebral body analysis algorithm, based on shape analysis and a bone density measurement, for the computer-aided diagnosis of osteoporosis.