- Front Matter: Volume 6514
- Mammogram Analysis
- CT Colon
- Pathology Imaging
- Databases and Pattern Recognition
- Thoracic CT
- MRI Applications
- CT Lung Nodules
- Breast Tomosynthesis
- Cardiac/New Applications
- Breast Imaging
- Thoracic/Skeletal Imaging
- Poster Session: Breast Imaging
- Poster Session: CAD Issues
- Poster Session: Cardiac/Vasculature/Brain Imaging
- Poster Session: Colonography
- Poster Session: New Applications
- Poster Session: Thoracic Imaging
Front Matter: Volume 6514
Front Matter: Volume 6514
This PDF file contains the front matter associated with SPIE Proceedings Volume 6514, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Mammogram Analysis
Mass detection with digitized screening mammograms by using Gabor features
Breast cancer is the leading cancer among American women. The current lifetime risk of developing breast cancer is 13.4% (one in seven). Mammography is the most effective technology presently available for breast cancer screening, and with digital mammograms computer-aided detection (CAD) has proven to be a useful tool for radiologists. In this paper, we focus on mass detection; masses are a common category of breast cancer findings relative to calcifications and architectural distortion. We propose a new mass detection algorithm utilizing Gabor filters, termed "Gabor Mass Detection" (GMD). The GMD algorithm has three steps: (1) preprocessing, (2) generating alarms, and (3) classification (reducing false alarms). Down-sampling, quantization, denoising and enhancement are performed in the preprocessing step. A total of 30 Gabor filtered images (6 bands by 5 orientations) are then produced. Alarm segments are generated by thresholding four Gabor images of full orientations (Stage-I classification) with image-dependent thresholds computed via histogram analysis. Next, a set of edge histogram descriptors (EHD) is extracted from 24 Gabor images (6 by 4) for use in Stage-II classification. After clustering the EHD features with the fuzzy C-means method, a k-nearest neighbor classifier is used to reduce the number of false alarms. We analyzed 431 digitized mammograms (159 normal images vs. 272 cancerous images, from the DDSM project, University of South Florida) with the proposed GMD algorithm, using ten-fold cross validation for testing on the available data. The GMD performance is as follows: sensitivity (true positive rate) = 0.88 at 1.25 false positives per image (FPI), and area under the ROC curve = 0.83. The overall performance of the GMD algorithm is satisfactory and the accuracy of locating masses (highlighting the boundaries of suspicious areas) is relatively high. Furthermore, the GMD algorithm can successfully detect early-stage malignant masses (those with small Assessment and low Subtlety values). In addition, Gabor filtered images are used in both stages of classification, which greatly simplifies the GMD algorithm.
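The Gabor filter bank at the core of the GMD algorithm can be sketched as follows; the kernel size, wavelengths, and sigmas below are illustrative assumptions, not the authors' parameters:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel: a Gaussian envelope multiplied by a
    cosine carrier oriented at angle theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            envelope = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

# 6 frequency bands by 5 orientations -> the 30 filtered images of the abstract
bank = [gabor_kernel(15, 4.0 * 2 ** band, k * math.pi / 5, 2.0 * 2 ** band)
        for band in range(6) for k in range(5)]
print(len(bank))  # 30
```

Convolving the preprocessed mammogram with each kernel yields the 30 Gabor images from which alarms and EHD features are derived.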
Incorporation of a multiscale texture-based approach to mutual information matching for improved knowledge-based detection of masses in screening mammograms
Mutual information is a popular intensity-based image similarity measure mainly used in image registration. This measure has also been very successful as the similarity metric in our knowledge-based computer-assisted detection (CADe) system for the detection of masses in screening mammograms. Our CADe system is designed to assess a new, query case based on its similarity with known cases stored in the knowledge database. However, intensity-based mutual information captures only relationships between the gray level values of corresponding pixels. This study presents a novel advancement of our CADe system by incorporating neighborhood textural information when estimating the mutual information of two images. Specifically, an entropy filter is applied to the images, effectively replacing each image pixel value with its neighborhood entropy. This pixel-based entropy is a localized measure of image texture. Then, the information-theoretic CAD system is asked to make a decision regarding the query case using the texture-based mutual information similarity metric. The entropy-based image enhancement and MI-based decision making processes are repeated at different neighborhood scales. Finally, an artificial neural network (ANN) merges intensity-based and texture-based decisions to investigate possible improvements in mass detection performance. Given a database of 1,820 regions of interest (ROIs) extracted from screening mammograms (901 depicting a biopsy-proven mass and 919 depicting normal parenchyma) and a leave-one-out sampling scheme, the study showed that our CADe system achieves an ROC area of 0.87±0.01 using the intensity-based metric. The ROC performance for the texture-based CADe system ranges from 0.69±0.01 to 0.83±0.01 depending on the scale of analysis. The synergistic approach of the ANN using both intensity-based and texture-based information resulted in statistically significantly better performance with an ROC area index of 0.93±0.01.
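The neighborhood entropy filtering described above can be illustrated with a minimal sketch; quantized gray levels are assumed, and the window radius is the scale parameter the abstract varies:

```python
import math
from collections import Counter

def entropy_filter(img, radius=1):
    """Replace each pixel with the Shannon entropy (bits) of the gray
    levels in its (2*radius+1)-square neighborhood, clipped at borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - radius), min(h, i + radius + 1))
                    for jj in range(max(0, j - radius), min(w, j + radius + 1))]
            n = len(vals)
            out[i][j] = -sum((c / n) * math.log2(c / n)
                             for c in Counter(vals).values())
    return out

flat = [[5] * 4 for _ in range(4)]                            # uniform texture
board = [[(i + j) % 2 for j in range(4)] for i in range(4)]   # busy texture
print(entropy_filter(flat)[1][1], entropy_filter(board)[2][2])
```

A uniform region maps to zero entropy, while textured regions map to high values, so mutual information computed on the filtered images compares texture rather than raw intensity.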
Contribution of Haar wavelets and MPEG-7 textural features for false positive reduction in a CAD system for the detection of masses in mammograms
The study investigates the significance of wavelet-based and MPEG-7 homogeneous textural features in an attempt to improve the specificity of an in-house CAD system for the detection of masses in screening mammograms. The detection scheme has been presented before; it relies on the concept of morphologic concentric layer (MCL) analysis to identify suspicious locations in a mammogram. Locations are deemed suspicious due to their morphology, specifically an increased activity of iso-intensity layers around them. On a set of 270 mammographic images, the MCL detection scheme achieved a 93% (131/141) mass detection rate with 4.8 FPs/image (1,296/270). In the present study, the textural signature of each detected location is analyzed for possible false positive reduction. For texture analysis, Haar wavelet and MPEG-7 homogeneous texture descriptor (HTD) features were extracted. In addition, the contribution of directional neighborhood (DN) features was studied as well. The extracted features were combined with a back-propagation artificial neural network (BPANN) to discriminate true masses from false positives. Using a database of 1,427 suspicious seeds (131 true masses and 1,296 FPs) and a 5-fold cross-validation sampling scheme, the ROC area indices of the BPANN using the different sets of features were as follows: Az(Haar)=0.87±0.01, Az(HTD)=0.91±0.02, Az(DN)=0.84±0.01. Averaging the scores of the three BPANNs resulted in statistically significantly better performance, Az(ALL)=0.94±0.01. At 95% sensitivity, the FP rate was reduced by 77.5%. The overall performance of the system after incorporation of textural and directional features was 87.9% sensitivity for malignant masses at 1.1 FPs/image.
Computer-aided detection of breast masses on prior mammograms
An important purpose of a CAD system is to serve as a second reader, alerting radiologists to subtle cancers that
may be overlooked. In this study, we are developing new computer vision techniques to improve the detection
performance for subtle masses on prior mammograms. A data set of 159 patients containing 318 current mammograms
and 402 prior mammograms was collected. A new technique combining gradient field analysis with Hessian analysis
was developed to prescreen for mass candidates. A suspicious structure in each identified location was initially
segmented by seed-based region growing and then refined by using an active contour method. Morphological, gray
level histogram and run-length statistics features were extracted. Rule-based and LDA classifiers were trained to
differentiate masses from normal tissues. We randomly divided the data set into two independent sets; one set of 78
cases for training and the other set of 81 cases for testing. With our previous CAD system, the case-based sensitivities
on prior mammograms were 63%, 48% and 32% at 2, 1 and 0.5 FPs/image, respectively. With the new CAD system,
the case-based sensitivities were improved to 74%, 56% and 35%, respectively, at the same FP rates. The difference in
the FROC curves was statistically significant (p<0.05 by AFROC analysis). The performances of the two systems for
detection of masses on current mammograms were comparable. The results indicated that the new CAD system can
improve the detection performance for subtle masses without a trade-off in detection of average masses.
CT Colon
Computer-aided detection of colonic polyps using volume rendering
This work utilizes a novel pipeline for the computer-aided detection (CAD) of colonic polyps, assisting radiologists in locating polyps when using a virtual colonoscopy system. Our CAD pipeline automatically detects polyps while reducing the number of false positives (FPs). It integrates volume rendering and conformal colon flattening with texture and shape analysis. The colon is first digitally cleansed, segmented, and extracted from the CT dataset of the abdomen. The colon surface is then mapped to a 2D rectangle using conformal mapping. Using this colon flattening method, the CAD problem is converted from 3D into 2D. The flattened image is rendered using a direct volume rendering of the 3D colon dataset with a translucent transfer function. Suspicious polyps are detected by applying a clustering method on the 2D volume rendered image. The FPs are reduced by analyzing shape and texture features of the suspicious areas detected by the clustering step. Compared with shape-based methods, ours is much faster and much more efficient as it avoids computing curvature and other shape parameters for the whole colon wall. We tested our method with 178 datasets and found it to be 100% sensitive to adenomatous polyps with a low rate of FPs. The CAD results are seamlessly integrated into a virtual colonoscopy system, providing the radiologists with visual cues and likelihood indicators of areas likely to contain polyps, and allowing them to quickly inspect the suspicious areas and further exploit the flattened colon view for easy navigation and bookmark placement.
Using Pareto fronts to evaluate polyp detection algorithms for CT colonography
We evaluate and improve an existing curvature-based region growing algorithm for colonic polyp detection for our CT colonography (CTC) computer-aided detection (CAD) system by using Pareto fronts. The performance of a polyp detection algorithm involves two conflicting objectives, minimizing both false negative (FN) and false positive (FP) detection rates. This problem does not produce a single optimal solution but a set of solutions known as a Pareto front. Any solution in a Pareto front can only outperform other solutions in one of the two competing objectives. Using evolutionary algorithms to find the Pareto fronts for multi-objective optimization problems has been common practice for years. However, they are rarely investigated in any CTC CAD system because the computation cost is inherently expensive. To circumvent this problem, we have developed a parallel program implemented on a Linux cluster environment. A data set of 56 CTC colon surfaces with 87 proven positive detections of polyps sized 4 to 60 mm is used to evaluate an existing one-step, and derive a new two-step region growing algorithm. We use a popular algorithm, the Strength Pareto Evolutionary Algorithm (SPEA2), to find the Pareto fronts. The performance differences are evaluated using a statistical approach. The new algorithm outperforms the old one in 81.6% of the sampled Pareto fronts from 20 simulations. When operated at a suitable sensitivity level such as 90.8% (79/87) or 88.5% (77/87), the FP rate is decreased by 24.4% or 45.8% respectively.
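The Pareto-front idea can be illustrated with a minimal non-dominated filter over operating points, jointly minimizing the two competing error rates; the operating points below are made-up numbers:

```python
def pareto_front(points):
    """Keep the non-dominated operating points: p survives unless some q
    is at least as good in both objectives and strictly better in one
    (both objectives minimized, e.g. FN rate and FPs per scan)."""
    front = []
    for p in points:
        dominated = any(q != p
                        and q[0] <= p[0] and q[1] <= p[1]
                        and (q[0] < p[0] or q[1] < p[1])
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# (FN rate, FPs per scan) for five hypothetical parameter settings
ops = [(0.10, 4.8), (0.08, 6.0), (0.10, 5.5), (0.05, 9.0), (0.12, 4.0)]
front = pareto_front(ops)
print(front)
```

SPEA2, used in the paper, evolves a population toward this front instead of enumerating parameter settings exhaustively.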
Efficient detection of polyps in CT colonography
Colon cancer is a widespread disease; according to the American Cancer Society, it is estimated that in 2006
more than 55,000 people will die of colon cancer in the US. However, early detection of colorectal polyps helps
to drastically reduce mortality. Computer-aided detection (CAD) of colorectal polyps is a tool that could help
physicians find such lesions in CT scans of the colon.
In this paper, we present the first phase, candidate generation (CG), of our technique for the detection of
colonic polyp candidate locations in CT colonography. Since polyps typically appear as protrusions on the surface
of the colon, our cutting-plane algorithm identifies all those areas that can be "cut-off" using a plane. The key
observation is that for any protruding lesion there is at least one plane that cuts a fragment off. Furthermore,
the intersection between the plane and the polyp will typically be small and circular. On the other hand, a
plane cannot cut a small circular cross-section from a wall or a fold, due to their concave or elongated paraboloid
morphology, because these structures yield cross-sections that are much larger or non-circular.
The algorithm has been incorporated as part of a prototype CAD system. An analysis on a test set of
more than 400 patients yielded high per-patient sensitivities of 95% and 90% in clean and tagged preparations,
respectively, for polyps ranging from 6 mm to 20 mm in size.
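The "small and circular cross-section" test at the heart of the cutting-plane idea can be illustrated with the standard isoperimetric circularity measure; this is a sketch, and the authors' actual criterion may differ:

```python
import math

def circularity(polygon):
    """Isoperimetric quotient 4*pi*A/P**2: 1.0 for a circle, much smaller
    for the elongated cross-sections that planes cut from folds or walls."""
    n = len(polygon)
    area = abs(sum(polygon[i][0] * polygon[(i + 1) % n][1]
                   - polygon[(i + 1) % n][0] * polygon[i][1]
                   for i in range(n))) / 2.0
    perimeter = sum(math.dist(polygon[i], polygon[(i + 1) % n])
                    for i in range(n))
    return 4.0 * math.pi * area / perimeter ** 2

circle = [(math.cos(2 * math.pi * t / 64), math.sin(2 * math.pi * t / 64))
          for t in range(64)]              # polyp-like: nearly circular cut
slab = [(0, 0), (10, 0), (10, 1), (0, 1)]  # fold-like: elongated cut
print(circularity(circle), circularity(slab))
```

A plane-polyp intersection scores near 1.0, while the elongated cross-sections of walls and folds score far lower, which is why they survive the cut test.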
Delineation of tagged region by use of local iso-surface roughness in electronic cleansing for CT colonography
Electronic cleansing (EC) is an emerging method for segmentation of fecal materials, which are tagged by an X-ray-opaque oral contrast agent in CT colonography (CTC) images, effectively removing them for digital cleansing of the colon. Due to the partial volume effect, voxels at the boundary between the lumen and tagged materials, called the L-T boundary, not only have CT values that are close to those of soft-tissue structures, but also have gradient values that are similar to a thin soft-tissue structure that is sandwiched between the tagged region and the lumen, which we call a tagging-tissue-air layer. Degradation of thin colonic wall and folds, as well as creation of pseudo soft-tissue structures at the periphery of tagged regions, are main artifacts in existing EC approaches, which tend to use a gradient-based method to delineate tagged regions. In this study, we developed a novel delineation method of tagged regions by applying local iso-surface roughness. The local iso-surface roughness is defined by the sum of differences between the local curvedness at adjacent scales over all scales. In our approach, the roughness values around the periphery of the tagged regions are integrated into the speed function of a level-set segmentation method for delineation of the tagged regions. As a result, L-T boundaries are subtracted along with tagged regions, whereas the thin soft-tissue structures within the tagging-tissue-air layers are preserved. Application of our computer-aided detection (CAD) scheme showed that the use of the new EC method substantially reduced the number of false-positive detections compared with that of our previous gradient-based method.
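The abstract's definition of local iso-surface roughness, the sum of differences between local curvedness at adjacent scales, reduces to a one-liner once per-vertex curvedness values across scales are assumed given:

```python
def iso_surface_roughness(curvedness_by_scale):
    """Sum of differences between local curvedness at adjacent scales,
    per the definition above; inputs are assumed precomputed per vertex."""
    return sum(abs(b - a) for a, b in
               zip(curvedness_by_scale, curvedness_by_scale[1:]))

# A smooth vertex barely changes across scales; a rough one fluctuates.
smooth = [0.50, 0.51, 0.52, 0.52]
rough = [0.50, 0.80, 0.40, 0.90]
print(iso_surface_roughness(smooth), iso_surface_roughness(rough))
```

High roughness flags the periphery of tagged regions, and that value feeds the speed function of the level-set delineation.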
Pseudo-enhancement correction for computer-aided detection in fecal-tagging CT colonography
Fecal-tagging CT colonography (CTC) presents an opportunity to minimize colon cleansing while maintaining high diagnostic accuracy for the detection of colorectal lesions. However, the pseudo-enhancement introduced by tagging agents presents several problems for the application of computer-aided detection (CAD). We developed a correction method that minimizes pseudo-enhancement in CTC data by modeling of the pseudo-enhancement as a cumulative Gaussian energy distribution. The method was optimized by use of an anthropomorphic colon phantom, and its effect on our fully automated CAD scheme was tested by use of leave-one-patient-out evaluation on 23 clinical CTC cases with reduced colon cleansing based upon dietary fecal tagging. There were 28 colonoscopy-confirmed polyps ≥6 mm. Visual evaluation indicated that the method reduced CT attenuation of pseudo-enhanced polyps to standard soft-tissue Hounsfield unit (HU) range without affecting untagged regions. At a 90% detection sensitivity for polyps ≥6 mm, CAD yielded 8.5 false-positive (FP) detections and 3.9 FP detections per volumetric scan without and with the application of the pseudo-enhancement correction method. These results indicate that the pseudo-enhancement correction method is a potentially useful pre-processing step for automated detection of polyps in fecal-tagging CTC, and that CAD can yield a high detection sensitivity with a relatively low FP rate in CTC with patient-friendly reduced colon preparation.
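As a toy 1D illustration of the general idea, each tagged voxel below is assumed to have spilled a fraction of its excess attenuation onto its neighbors with Gaussian weights, and that estimate is subtracted; the threshold, sigma, and fraction k are invented, and the paper's cumulative Gaussian energy model is more elaborate:

```python
import math

def correct_pseudo_enhancement(profile, tag_hu=200.0, sigma=2.0, k=0.05):
    """Toy 1D correction: every tagged voxel (above tag_hu) is assumed to
    have spilled k * excess HU onto its neighbors with Gaussian weights,
    and that estimated spill is subtracted from them."""
    corrected = list(profile)
    for i, v in enumerate(profile):
        excess = v - tag_hu
        if excess <= 0:
            continue
        for j in range(len(profile)):
            if j != i:
                d = j - i
                corrected[j] -= k * excess * math.exp(-d * d / (2 * sigma ** 2))
    return corrected

# A soft-tissue profile (about 0-60 HU) next to a tagged pool (800 HU)
profile = [0.0, 50.0, 60.0, 800.0, 60.0, 50.0, 0.0]
print(correct_pseudo_enhancement(profile))
```

The soft-tissue voxels adjacent to the tagged pool are pulled back toward their true HU range while the tagged voxel itself is untouched, mirroring the behavior reported in the visual evaluation.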
Towards a computer-aided diagnosis system for colon motility dysfunctions
Colon motility disorders are a very common problem, yet a precise diagnosis with current methods is almost unachievable. This makes it extremely difficult for clinical experts to decide on the right intervention, such as colon resection. The use of cine MRI for visualizing colon motility is a very promising technique; if image segmentation and qualitative motion analysis provide the necessary tools, it could offer the appropriate diagnostic solution. In this work we define the necessary steps in the image processing workflow to obtain valuable measurements for computer-aided diagnosis of colon motility disorders. For each step, we developed methods to deal with the dynamic image data. Breathing motion must be compensated for, since no respiratory gating could be used. We segment the colon using a graph-cuts approach in 2D and 3D for further analysis and visualization. The motility of the large bowel is analyzed by tracking the extension of the colon during a propagating peristaltic wave. The main objective of this work is to extract a motion model that defines a clinical index usable in the diagnosis of large bowel motility dysfunction. We aim at the classification and localization of such pathologies.
Intra-patient colon surface registration based on tæniæ coli
CT colonography, a prevalent tool for diagnosing colon cancer in its early stages,
is often limited by poor distention or retained fluids, which
leave segments of the colon impossible to process with CAD tools.
By scanning patients in both prone and supine positions, collapsed segments
and retained fluids will not be in the same place in both images, increasing
the length of the colon that can be processed correctly. To fully use
these two scans, they must be registered, so that a lesion identified on one of
them can be mapped to the other, thus increasing the sensitivity and specificity of
CAD tools.
The surface of the colon is, however, large (more than half a million vertices
in our images) and has no canonical shape, which makes atlases and other
widely used registration algorithms suboptimal. We present in this paper a
fast method to register the colon surface between prone and supine scans using
landmarks present on the colon, the teniae coli. Our method is composed
of three steps. First, we register the body, based on manually placed
landmarks. Then we register the three teniae coli and, from this
registration, compute a deformation field for each vertex of the colon
surface.
We tested our method on 5 cases by measuring the RMS error after body
registration, quantifying the intrinsic movement of the colon, and after colon
surface registration. The RMS error was reduced from 1.8 cm to 0.49 cm, a
reduction of 71%.
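The RMS registration error used for evaluation is simply the root-mean-square distance between corresponding landmarks:

```python
import math

def rms_error(landmarks_a, landmarks_b):
    """Root-mean-square distance between corresponding 3D landmarks."""
    assert len(landmarks_a) == len(landmarks_b)
    squared = [sum((p - q) ** 2 for p, q in zip(a, b))
               for a, b in zip(landmarks_a, landmarks_b)]
    return math.sqrt(sum(squared) / len(squared))

before = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
after = [(0.0, 0.0, 1.0), (1.0, 1.0, 0.0)]
print(rms_error(before, after))  # 1.0
```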
Pathology Imaging
Computer aided classification of cell nuclei in the gastrointestinal tract by volume and principal axis
Normal function of the gastrointestinal tract involves the coordinated activity of several cell types. Human disorders of motor function of the gastrointestinal tract are often associated with changes in the number of these cells. For example, in diabetic patients, abnormalities in gastrointestinal transit are associated with changes in nerves and interstitial cells of Cajal (ICC), two key cells that generate and regulate motility. ICC are cells of mesenchymal origin that function as pacemakers and amplify neuronal signals in the gastrointestinal tract. Quantifying the changes in number of specific cell types in tissues from patients with motility disorders is challenging and requires immunolabeling for specific antigens. The shape of nuclei differs between the cell types in the wall of the gastrointestinal tract. Therefore the objective of this study was to determine whether cell nuclei can be classified by analyzing the 3D morphology of the nuclei. Furthermore, the orientation of the long axis of nuclei changes within and between the muscle layers. These features can be used to classify and differentially label the nuclei in confocal volume images of the tissue by computing the principal axis of the coordinates of the set of voxels forming each nucleus and thereby to identify cells by their nuclear morphology. Using this approach, we were able to separate and quantify nuclei in the smooth muscle layers of the tissue. Therefore we conclude that computer-aided classification of cell nuclei can be used to identify changes in the cell types expressed in gastrointestinal smooth muscle.
Challenges in automated detection of cervical intraepithelial neoplasia
Cervical Intraepithelial Neoplasia (CIN) is a precursor to invasive cervical cancer, which annually accounts for about 3700 deaths in the United States and about 274,000 worldwide. Early detection of CIN is important to reduce the fatalities due to cervical cancer. While the Pap smear is the most common screening procedure for CIN, it has been proven to have a low sensitivity, requiring multiple tests to confirm an abnormality and making its implementation impractical in resource-poor regions. Colposcopy and cervicography are two diagnostic procedures available to trained physicians for non-invasive detection of CIN. However, many regions suffer from lack of skilled personnel who can precisely diagnose the bio-markers due to CIN. Automatic detection of CIN deals with the precise, objective and non-invasive identification and isolation of these bio-markers, such as the Acetowhite (AW) region, mosaicism and punctations, due to CIN. In this paper, we study and compare three different approaches, based on Mathematical Morphology (MM), Deterministic Annealing (DA) and Gaussian Mixture Models (GMM), respectively, to segment the AW region of the cervix. The techniques are compared with respect to their complexity and execution times. The paper also presents an adaptive approach to detect and remove Specular Reflections (SR). Finally, algorithms based on MM and matched filtering are presented for the precise segmentation of mosaicism and punctations from AW regions containing the respective abnormalities.
Computer-aided cytological cancer diagnosis: cell type classification as a step towards fully automatic cancer diagnostics on cytopathological specimens of serous effusions
Compared to histopathological methods, cytopathological methods allow cancer to be detected earlier, and specimens can be obtained more easily and with less discomfort for the patient. Their downside is the time needed by an expert to find and select the cells to be analyzed on a specimen. To increase the use of cytopathological diagnostics, the cytopathologist has to be supported in this task. DNA image cytometry (DNA-ICM) is one important cytopathological method; it measures the DNA content of cells based on the absorption of light within Feulgen-stained cells. The decision whether or not the patient has cancer is based on the histogram of the DNA values. To support the cytopathologist, it is desirable to replace manual screening of the specimens with an automatic selection of relevant cells for DNA-ICM. This includes automated acquisition and segmentation of focused cells, recognition of cell types, and selection of the cells to be measured. As a step towards automated cell type detection, we show the discrimination of cell types in serous effusions on a selection of about 3,100 manually classified cells. We present a set of 112 features and the results of feature selection with ranking and a floating-search method combined with different objective functions. The validation of the best feature sets with a k-nearest neighbor and a fuzzy k-nearest neighbor classifier on a disjoint set of cells resulted in classification rates of 96% for lymphocytes and 96.8% for the diagnostically relevant cells (mesothelial+ cells), which include benign and malignant mesothelial cells and metastatic cancer cells.
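A plain k-nearest-neighbor classifier of the kind validated above can be sketched in a few lines; the two-dimensional feature vectors and labels are invented examples, not the paper's 112-feature data:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; returns the majority
    label among the k nearest neighbors in Euclidean distance."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Invented 2-D feature vectors for two of the cell classes
train = [((0.0, 0.0), "lymphocyte"), ((0.1, 0.2), "lymphocyte"),
         ((0.9, 1.0), "mesothelial+"), ((1.0, 0.8), "mesothelial+"),
         ((1.1, 1.1), "mesothelial+")]
print(knn_classify(train, (1.0, 1.0)))  # mesothelial+
```

The fuzzy variant used in the paper additionally weights each neighbor's vote by its distance, yielding soft class memberships instead of a hard majority.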
Databases and Pattern Recognition
Training a CAD classifier with correlated data
Most methods for classifier design assume that the training samples
are drawn independently and identically from an unknown data
generating distribution (i.i.d.), although this assumption is violated in several real life problems. Relaxing this i.i.d. assumption, we
develop training algorithms for the more realistic situation where
batches or sub-groups of training samples may have internal
correlations, although the samples from different batches may be
considered to be uncorrelated; we also consider the extension to
cases with hierarchical--i.e. higher order--correlation structure
between batches of training samples. After describing efficient
algorithms that scale well to large datasets, we provide some
theoretical analysis to establish their validity. Experimental
results from real-life Computer Aided Detection (CAD) problems
indicate that relaxing the i.i.d. assumption leads to statistically
significant improvements in the accuracy of the learned classifier.
Mixture of expert artificial neural networks with ensemble training for reduction of various sources of false positives in CAD
Our purpose was to reduce false-positive (FP) detections generated by a computerized lesion detection scheme by using a "mixture of expert" massive-training artificial neural networks (MTANNs). Multiple MTANNs were trained with "ensemble training" for reduction of diverse types of non-lesions. We started from a seed MTANN trained with lesions and non-lesions of a seed type. We applied the trained seed MTANN to lesions and various types of non-lesions to analyze the weaknesses of the seed MTANN. We arranged the output scores of the MTANN for lesions and non-lesions in ascending order to form a score scale representing the degree of difficulty in distinguishing between lesions and non-lesions by the seed MTANN. The score scale was divided into several segments, and ten non-lesions were sampled from the center of each segment so that the sets of non-lesion samples covered diverse difficulties. We trained several MTANNs with these sets of non-lesions so that each MTANN became an expert for the non-lesions at a certain level of difficulty. We then combined the expert MTANNs with a mixing ANN to form a "mixture of expert" MTANNs. Our database consisted of CT colonography datasets acquired from 100 patients, including 26 polyps. We applied our initial CAD scheme to this CTC database. FP sources included haustral folds, stool, colonic walls, the ileocecal valves, and rectal tubes. The mixture of expert MTANNs distinguished all polyps correctly from more than 50% of the non-polyps. Thus, the mixture of expert MTANNs was able to reduce the FPs generated by a computerized polyp detection scheme by one half while the original sensitivity was maintained. We compared the effectiveness of ensemble training with that of training with manually selected cases. The performance of the MTANNs with ensemble training was superior to that of the MTANNs trained with manually selected cases.
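The ensemble-training sampling step, sorting non-lesions by MTANN output score, splitting the scale into segments, and drawing samples from each segment's center, might look like the following; the segment count and samples per segment are illustrative, not the paper's values:

```python
def sample_by_difficulty(scored_nonlesions, n_segments=5, per_segment=2):
    """scored_nonlesions: (id, MTANN score) pairs. Sort by score, split
    the scale into equal segments, and draw samples from the center of
    each segment so that training covers all levels of difficulty."""
    ordered = sorted(scored_nonlesions, key=lambda s: s[1])
    seg = len(ordered) // n_segments
    picks = []
    for i in range(n_segments):
        center = i * seg + seg // 2
        start = max(0, center - per_segment // 2)
        picks.extend(ordered[start:start + per_segment])
    return picks

nonlesions = [("n%d" % i, i / 19.0) for i in range(20)]
picks = sample_by_difficulty(nonlesions)
print(len(picks))  # 10
```

Each expert MTANN is then trained on one segment's samples, and a mixing ANN combines the experts' outputs.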
The Lung Image Database Consortium (LIDC): pulmonary nodule measurements, the variation, and the difference between different size metrics
Size is an important metric for pulmonary nodule
characterization. Furthermore, it is an important parameter in
measuring the performance of computer aided detection systems since
they are always qualified with respect to a given size range of
nodules. The first 120 whole-lung CT scans documented by the Lung Image
Database Consortium using their protocol for nodule evaluation
were used in this study. For documentation, each inspected lesion was
reviewed independently by four expert radiologists and, when a lesion
was considered to be a nodule larger than 3mm, the radiologist
provided boundary markings in each image in which the nodule was
contained. Three size metrics were considered: a uni-dimensional and
a bi-dimensional measure on a single image slice and a volumetric
measurement based on all the image slices. In this study we analyzed
the boundary markings of these nodules in the context of these three
size metrics to characterize the inter-radiologist variation and to
examine the difference between these metrics. A data set of 63 nodules
each having four observations was analyzed for inter-observer
variation and an extended set of 252 nodules each having at least one
observation was analyzed for the difference between the metrics. A very
high inter-observer variation was observed for all of these metrics, as
was a very large difference among the metrics themselves.
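Toy versions of the three size metrics on a binary nodule mask; axis-aligned extents stand in for the clinical diameter measurements, and the voxel spacings are invented:

```python
def size_metrics(mask_slices, spacing=(0.7, 0.7, 1.25)):
    """Toy versions of the three metrics (voxel spacings in mm are made
    up): uni = longest axis-aligned in-slice extent, bi = largest product
    of the two in-slice extents, vol = voxel count * voxel volume."""
    dx, dy, dz = spacing
    uni = bi = 0.0
    n_voxels = 0
    for sl in mask_slices:
        pts = [(x, y) for y, row in enumerate(sl)
               for x, v in enumerate(row) if v]
        n_voxels += len(pts)
        if not pts:
            continue
        ex = (max(x for x, _ in pts) - min(x for x, _ in pts) + 1) * dx
        ey = (max(y for _, y in pts) - min(y for _, y in pts) + 1) * dy
        uni = max(uni, ex, ey)
        bi = max(bi, ex * ey)
    return uni, bi, n_voxels * dx * dy * dz

mask = [[[1, 1, 1], [1, 1, 1]],      # a 3x2 blob on one slice...
        [[1, 0, 0], [0, 0, 0]]]      # ...and a single voxel on the next
print(size_metrics(mask))
```

Even on this toy mask the three numbers are not interchangeable, which is the abstract's point: nodule "size" depends strongly on the metric chosen.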
The Lung Image Database Consortium (LIDC) data collection process for nodule detection and annotation
The LIDC is developing a publicly available database of thoracic computed tomography (CT) scans as a medical imaging research resource. A unique multi-center data collection process and communication system were developed to share image data and to capture the location and spatial extent of lung nodules as marked by expert radiologists. A two-phase data collection process was designed to allow multiple radiologists at different centers to asynchronously review and annotate each CT image series. Four radiologists reviewed each case using this process. In the first or "blinded" phase, each radiologist reviewed the CT series independently. In the second or "unblinded" review phase, the results from all four blinded reviews were compiled and presented to each radiologist for a second review. This allowed each radiologist to review their own annotations along with those of the other radiologists. The results from each radiologist's unblinded review were compiled to form the final unblinded review. There is no forced consensus in this process. An XML-based message system was developed to communicate the results of each reading. This two-phase data collection process was designed, tested and implemented across the LIDC. It has been used for more than 130 CT cases that have been read and annotated by four expert readers and are publicly available at http://ncia.nci.nih.gov. A data collection process was developed, tested and implemented that allowed multiple readers to review each case multiple times and that allowed each reader to observe the annotations of other readers.
The effect of nodule segmentation on the accuracy of computerized lung nodule detection on CT scans: comparison on a data set annotated by multiple radiologists
In computerized nodule detection systems on CT scans, many features that are useful for classifying whether a nodule candidate identified by prescreening is a true positive depend on the shape of the segmented object. We designed two segmentation algorithms for detailed delineation of the boundaries of nodule candidates. The first segmentation technique was a three-dimensional (3D) region-growing (RG) method which grew the object across multiple CT sections. The second technique was based on a 3D active contour (AC) model. A training set of 94 CT scans was used for algorithm design. An independent set of 62 scans, each read by multiple radiologists, was used for testing. Thirty-three scans were collected from patient files at the University of Michigan and 29 scans from the Lung Image Database Consortium (LIDC). In this study, we concentrated on the detection of internal lung nodules having a size ≥3 mm that were not pure ground-glass opacities. Of the lesions marked by one or multiple radiologists, 124 nodules satisfied these criteria and were considered true nodules. The performances of the detection system in the AC feature space, the RG feature space, and the combined feature space were compared using free-response receiver operating characteristic (FROC) curves. The FROC curve using the combined feature space was significantly higher than that using the RG feature space or the AC feature space alone (p=0.02 and 0.03, respectively). At a sensitivity of 70% for internal non-GGO nodules, the FP rates were 2.2, 2.2, and 1.5 per scan, respectively, for the RG, AC, and combined methods. Our results indicate that the 3D AC algorithm can provide useful features to improve nodule detection on CT scans.
Particle swarm optimization of neural network CAD systems with clinically relevant objectives
Show abstract
Neural networks (NN) are typically developed to minimize the squared difference between the network's output and the target value for a set of training patterns, namely the mean squared error (MSE). However, lower MSE does not necessarily translate into a clinically more useful decision model. The purpose of this study was to investigate the particle swarm optimization (PSO) algorithm as an alternative way of NN optimization with clinically relevant objective functions (e.g., ROC and partial ROC area indices). The PSO algorithm was evaluated with respect to a NN-based CAD system developed to discriminate mammographic regions of interest (ROIs) that contained masses from normal regions based on 8 computer-extracted morphology-oriented features. Neural networks were represented as points (particle locations) in a D-dimensional search/optimization space where each dimension corresponded to one adaptable NN parameter. The study database of 1,337 ROIs (681 with masses, 656 normal) was split into two subsets to implement a two-fold cross-validation sampling scheme. Neural networks were optimized with the PSO algorithm and the following objective functions: (1) MSE, (2) the ROC area index AUC, and (3) the partial ROC area index AUC_TPF with TPF=0.90 and TPF=0.98. For comparison, the performance of neural networks of the same architecture trained with the traditional backpropagation algorithm was also evaluated. Overall, the study showed that when the PSO algorithm optimized network parameters using a particular training objective, the NN test performance was superior with respect to the corresponding performance index. This was particularly true for the partial ROC area indices, where statistically significant improvements were observed.
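The central idea above, replacing the MSE objective with a rank-based AUC objective inside a particle swarm search, can be sketched in a few lines. The sketch below is illustrative only: it optimizes a toy linear scorer rather than the authors' neural network, and all constants (swarm size, inertia, acceleration coefficients) are generic PSO defaults, not values from the paper.

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def pso_maximize_auc(X, y, dim, n_particles=10, iters=40, seed=0):
    """Particle swarm search over linear-model weights; fitness = AUC."""
    rng = random.Random(seed)

    def score(w):
        return auc([sum(wi * xi for wi, xi in zip(w, x)) for x in X], y)

    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [score(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = score(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

Because AUC depends only on the ranking of the scores, it is non-differentiable, which is exactly why a derivative-free optimizer such as PSO is attractive here.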
Thoracic CT
Automated characterization of normal and pathologic lung tissue by topological texture analysis of multidetector CT
Show abstract
Reliable and accurate methods for objective quantitative assessment of parenchymal alterations in the lung are necessary for diagnosis, treatment and follow-up of pulmonary diseases. Two major types of alterations are pulmonary emphysema and fibrosis, emphysema being characterized by abnormal enlargement of the air spaces distal to the terminal, nonrespiratory bronchiole, accompanied by destructive changes of the alveolar walls. The main characteristic of fibrosis is coarsening of the interstitial fibers and compaction of the pulmonary tissue. With its ability to display anatomy free from superimposed structures and with greater visual clarity, multi-detector CT has been shown to be more sensitive than the chest radiograph in identifying alterations of lung parenchyma.
In automated evaluation of pulmonary CT scans, quantitative image processing techniques are applied for objective evaluation of the data. A number of methods have been proposed in the past, most of which utilize simple densitometric tissue features based on the mean X-ray attenuation coefficients expressed in terms of Hounsfield units (HU). Due to partial volume effects, most of the density-based methodologies tend to fail, particularly in cases where emphysema and fibrosis occur within narrow spatial limits.
In this study, we propose a methodology based upon the topological assessment of the gray-level distribution in the 3D image data of lung tissue, which provides a way of improving quantitative CT evaluation. Results are compared to the more established density-based methods.
Toward computer-aided emphysema quantification on ultralow-dose CT: reproducibility of ventrodorsal gravity effect measurement and correction
Show abstract
Computer-aided quantification of emphysema in high-resolution CT data is based on identifying low attenuation areas below clinically determined Hounsfield thresholds. However, emphysema quantification is prone to error, since a gravity effect can influence the mean attenuation of healthy lung parenchyma by up to ±50 HU between ventral and dorsal lung areas. Comparing ultra-low-dose (7 mAs) and standard-dose (70 mAs) CT scans of each patient, we show that the ventrodorsal gravity effect is patient-specific but reproducible. It can be measured and corrected in an unsupervised way using robust fitting of a linear function.
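The unsupervised correction step described above, robust linear fitting of the ventrodorsal attenuation trend, can be illustrated with a Theil-Sen estimator. This is one common robust line-fitting choice; the abstract does not specify which robust fitter was used, so the choice below is an assumption, as is the step of re-adding the overall mean after detrending.

```python
import statistics

def theil_sen(xs, ys):
    """Robust line fit: median of pairwise slopes, then median intercept."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    slope = statistics.median(slopes)
    intercept = statistics.median(y - slope * x for x, y in zip(xs, ys))
    return slope, intercept

def correct_gravity_trend(depths, mean_hu):
    """Remove the linear ventrodorsal HU trend while keeping the overall mean.

    `depths` are ventrodorsal positions (e.g. slice rows), `mean_hu` the mean
    parenchymal attenuation measured at each position.
    """
    slope, intercept = theil_sen(depths, mean_hu)
    overall = statistics.mean(mean_hu)
    return [hu - (slope * d + intercept) + overall
            for d, hu in zip(depths, mean_hu)]
```

The median-of-slopes fit tolerates outlier positions (e.g. regions containing actual emphysema) that would bias an ordinary least-squares line.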
Automatic detection of rib metastasis in chest CT volume data
Show abstract
We describe a system for the automatic detection of rib metastases in thoracic CT volumes. Rib metastases manifest themselves as alterations of bone intensity or shape, and the detection of these alterations is the goal of the algorithm. Owing to the tubular shape of the rib structures, detection is based on the construction of 2D cross-sectional planes along the full length of each individual rib. The set of planes is orthogonal to the rib centerline, which is extracted by a previously developed segmentation algorithm based on recursive tracing. On each of these planes, a 2D image is constructed by interpolation in the region of interest around the intersection of the centerline and the plane. From this image the cortical and trabecular bones are segmented separately. The appearance and geometric properties of the bone structures are analyzed and categorized according to a set of rules that summarize the possible variation types due to metastasis. The features extracted from the cross-sections along a short length of the centerline are jointly evaluated. A positive detection is accepted only if the alteration of shape and appearance is consistent across a number of consecutive cross-sections along the rib centerline.
Efficient detection of diffuse lung disease
Show abstract
Automated methods of detecting lung disease typically involve the following: 1) Subdividing the lung into small
regions of interest (ROIs). 2) Calculating the features of these small ROIs. 3) Applying a machine-learned classifier
to determine the class of each ROI. When the number of features that need to be calculated is large, as in the
case of filter bank methods or in methods calculating a large range of textural properties, the classification
can run quite slowly. This is even more noticeable when a number of disease patterns are considered. In this
paper, we investigate the possibility of using a cascade of classifiers to concentrate the processing power on
promising regions. In particular, we focused on the detection of the honeycombing disease pattern. We used
knowledge of the appearance and the distribution of honeycombing to selectively classify ROIs. This avoids the
need to explicitly classify all ROIs in the lung, making the detection process more efficient. We evaluated the
performance of the system over 42 HRCT slices from 8 different patients and show that the system performs
the task of detecting honeycombing with a high degree of accuracy (accuracy = 86.2%, sensitivity = 90.0%,
specificity = 82.2%).
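The cascade idea described above, spending the expensive feature computation and classification only on ROIs that pass a cheap first test, can be sketched generically. The `cheap_stage` and `full_stage` callables below are hypothetical stand-ins for the honeycombing-specific stages in the paper.

```python
def cascade_classify(rois, cheap_stage, full_stage, threshold=0.5):
    """Two-stage cascade: a cheap test rejects most ROIs; the expensive
    classifier runs only on the survivors. Returns the labels plus the
    number of expensive evaluations actually performed."""
    labels, expensive_calls = [], 0
    for roi in rois:
        if cheap_stage(roi) < threshold:      # quick rejection
            labels.append(0)
        else:                                 # promising region
            expensive_calls += 1
            labels.append(1 if full_stage(roi) >= threshold else 0)
    return labels, expensive_calls
```

The saving grows with the fraction of ROIs the cheap stage can reject, which is why exploiting knowledge of where honeycombing appears pays off.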
Pulmonary nodule registration in serial CT scans using rib anatomy and nodule template matching
Show abstract
The goal of this study was to develop an automated method to identify corresponding nodules in serial CT scans for
interval change analysis. The method uses the rib centerlines as the reference for initial nodule registration. From an
automatically-identified starting point near the spine, each rib is locally tracked and segmented by expectation-maximization.
The ribs are automatically labeled, and the centerlines are estimated using skeletonization. A 3D rigid affine
transformation is used to register the individual ribs in the reference and target scans. For a given nodule in the
reference scan, a search volume of interest (VOI) in the target scan is defined by using the registered ribs. Template
matching guided by the normalized cross-correlation between the nodule template and target locations within the search
VOI is used for refining the registration. The method was evaluated on 48 CT scans from 20 patients. The slice thickness
ranged from 0.625 to 7 mm, and the in-plane pixel size from 0.556 to 0.82 mm. Experienced radiologists identified 101
pairs of nodules. Two metrics were used for performance evaluation: 1) the Euclidean distance between the nodule
centers identified by the radiologist and the computer registration, and 2) a volume overlap measure defined as the
intersection of the VOIs identified by the radiologist and the computer registration relative to the radiologist's VOI. The
average Euclidean distance error was 2.7 ± 3.3 mm. Only 2 pairs had an error >10 mm. The average volume overlap
measure was 0.71 ± 0.24. Eighty-three out of 101 pairs had overlap ratios > 0.5 and only 2 pairs had no overlap.
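The refinement step above relies on normalized cross-correlation (NCC) between the nodule template and candidate locations. A minimal 1D sketch of NCC-based template matching is shown below; the actual system searches a 3D VOI, but the arithmetic per candidate offset is the same.

```python
def ncc(a, b):
    """Normalized cross-correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0

def best_match_offset(signal, template):
    """Slide the template over the signal; return the offset with highest NCC."""
    best, best_score = 0, -2.0
    for off in range(len(signal) - len(template) + 1):
        s = ncc(signal[off:off + len(template)], template)
        if s > best_score:
            best, best_score = off, s
    return best, best_score
```

Because NCC subtracts the local means and normalizes by the local standard deviations, the match score is insensitive to the global intensity differences that can occur between serial scans.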
MRI Applications
Classification of brain tumors using MRI and MRS data
Show abstract
We study the problem of classifying brain tumors as benign or malignant using information from magnetic resonance (MR) imaging and magnetic resonance spectroscopy (MRS) to assist in clinical diagnosis. The proposed approach consists of several steps including segmentation, feature extraction, feature selection, and classification model construction. Using an automated segmentation technique based on fuzzy connectedness we accurately outline the tumor mass boundaries in the MR images so that further analysis concentrates on these regions of interest (ROIs). We then apply a concentric circle technique on the ROIs to extract features that are utilized by the classification algorithms. To remove redundant features, we perform feature selection where only those features with discriminatory information (among classes) are used in the model building process. The involvement of MRS features further improves the classification accuracy of the model. Experimental results demonstrate the effectiveness of the proposed approach in classifying brain tumors in MR images.
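The concentric circle technique mentioned above can be illustrated as averaging intensities within successive rings around the ROI center. The ring radii, and the exact features derived from them in the paper, are not specified in the abstract, so the sketch below uses hypothetical ring boundaries.

```python
import math

def concentric_ring_means(pixels, cx, cy, radii):
    """Mean intensity inside successive concentric rings around (cx, cy).

    `pixels` is a 2D list indexed [y][x]; `radii` is a list of (lo, hi)
    radius intervals, one per ring.
    """
    sums = [0.0] * len(radii)
    counts = [0] * len(radii)
    for y, row in enumerate(pixels):
        for x, v in enumerate(row):
            r = math.hypot(x - cx, y - cy)
            for k, (lo, hi) in enumerate(radii):
                if lo <= r < hi:
                    sums[k] += v
                    counts[k] += 1
                    break
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

The resulting ring profile is rotation-invariant, which is what makes features of this form attractive for characterizing roughly circular tumor ROIs.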
Segmentation of suspicious lesions in dynamic contrast-enhanced breast MR images
Show abstract
Dynamic contrast enhanced breast MRI (DCE BMRI) is an emerging tool for breast cancer diagnosis. There is
a clear clinical demand for computer-aided diagnosis (CADx) tools to support radiologists in the diagnostic
reading process of DCE BMRI studies. A crucial step in a CADx system is the segmentation of tumors,
which allows for accurate assessment of the 3D lesion size and morphology. In this paper we propose a semiautomatic
segmentation procedure for suspicious breast lesions. The proposed methodology consists of four steps:
(1) Robust seed point selection. This interaction mode ensures robustness of the segmentation result against
variations in seed-point placement. (2) Automatic intensity threshold estimation in the subtraction image.
(3) Connected component analysis based on the estimated threshold. (4) A post-processing step that includes
non-enhancing portions of the lesion into the segmented area and removes attached vessels. The proposed
methodology was applied to DCE BMRI data acquired at different institutions using different protocols.
Effect of calibration on computerized analysis of prostate lesions using quantitative dynamic contrast-enhanced magnetic resonance imaging
Show abstract
In this study, we investigated the effect of different patient calibration methods on the performance of our CAD
system when discriminating prostate cancer from non-malignant suspicious enhancing areas in the peripheral
zone and the normal peripheral zone.
Our database consisted of 34 consecutive patients with histologically proven adenocarcinoma of the prostate.
Both carcinoma and normal tissue were annotated on MR images by a radiologist and a researcher using whole
mount step-section histopathology as standard of reference. The annotated regions were used as regions of interest
in the contrast-enhanced MRI images. A feature set comprising pharmacokinetic parameters was extracted from
the ROIs to train a support vector machine as classifier. The output of the classifier was used as a measure of
likelihood of malignancy. General performance of the scheme was evaluated using the area under the ROC curve.
The diagnostic accuracy obtained for differentiating normal peripheral zone and non-malignant suspicious
enhancing areas from malignant lesions was 0.88 (0.81-0.95) when per-patient calibration was performed, whereas
fixed calibration resulted in a diagnostic accuracy of 0.77 (0.69-0.85). These preliminary results indicate that
per-patient calibration improves performance with statistical significance (p=0.026).
Multispectral brain tumor segmentation based on histogram model adaptation
Show abstract
Brain tumor segmentation and quantification from MR images is a challenging task. The boundary of a tumor
and its volume are important parameters that can have direct impact on surgical treatment, radiation therapy,
or on quantitative measurements of tumor regression rates. Although a wide range of different methods has
already been proposed, a commonly accepted approach is not yet established. Today, the gold standard at many
institutions still consists of a manual tumor outlining, which is potentially subjective, and a time consuming and
tedious process.
We propose a new method that allows for fast multispectral segmentation of brain tumors. An efficient initialization
of the segmentation is obtained using a novel probabilistic intensity model, followed by an iterative
refinement of the initial segmentation. A progressive region growing that combines probability and distance
information provides a new, flexible tumor segmentation. In order to derive a robust model for brain tumors
that can be easily applied to a new dataset, we retain information not on the anatomical, but on the global
cross-subject intensity variability. Therefore, a set of multispectral histograms from different patient datasets
is registered onto a reference histogram using global affine and non-rigid registration methods. The probability
model is then generated from manual expert segmentations that are transferred to the histogram feature domain.
A forward and backward transformation of a manual segmentation between histogram and image domain allows
for a statistical analysis of the accuracy and robustness of the selected features. Experiments are carried out on
patient datasets with different tumor shapes, sizes, locations, and internal texture.
Automatic detection of pelvic lymph nodes using multiple MR sequences
Show abstract
A system for automatic detection of pelvic lymph nodes is developed by incorporating complementary information
extracted from multiple MR sequences. A single MR sequence lacks sufficient diagnostic information for lymph
node localization and staging. Correct diagnosis often requires input from multiple complementary sequences
which makes manual detection of lymph nodes very labor intensive. Small lymph nodes are often missed even by
highly-trained radiologists. The proposed system is aimed at assisting radiologists in finding lymph nodes faster
and more accurately. To the best of our knowledge, this is the first such system reported in the literature. A
3-dimensional (3D) MR angiography (MRA) image is employed for extracting blood vessels that serve as a guide
in searching for pelvic lymph nodes. Segmentation, shape and location analysis of potential lymph nodes are then
performed using a high-resolution 3D T1-weighted VIBE (T1-vibe) MR sequence acquired on a Siemens 3T scanner.
An optional contrast-agent enhanced MR image, such as post ferumoxtran-10 T2*-weighted MEDIC sequence, can also be incorporated to further improve detection accuracy of malignant nodes. The system outputs a list of potential lymph node locations that are overlaid onto the corresponding MR sequences and presents them
to users with associated confidence levels as well as their sizes and lengths in each axis. Preliminary studies
demonstrate the feasibility of automatic lymph node detection and suggest scenarios in which this system may be used
to assist radiologists in diagnosis and reporting.
Computer-aided differential diagnosis in movement disorders using MRI morphometry
Show abstract
Background: Reported error rates for initial clinical diagnosis in parkinsonian disorders can reach up to 35%. Reducing this initial error rate is an important research goal. The objective of this work is to evaluate the ability of an automated MR-based classification technique in the differential diagnosis of Parkinson's disease (PD), multiple systems atrophy (MSA) and progressive supranuclear palsy (PSP).
Methods: A total of 172 subjects were included in this study: 152 healthy subjects, 10 probable PD patients and 10 age-matched patients with a diagnosis of either probable MSA or PSP. T1-weighted (T1w) MR images were acquired and subsequently corrected, scaled, resampled and aligned within a common reference space. Tissue transformation and deformation features were then automatically extracted. Classification of patients was performed using forward, stepwise linear discriminant analysis within a multidimensional transformation/deformation feature space built from healthy subjects' data. Leave-one-out classification was used to avoid over-determination.
Findings: There was no age difference between the groups. The highest accuracy (agreement with long-term clinical follow-up), 85%, was achieved using a single MR-based deformation feature.
Interpretation: These preliminary results demonstrate that a classification approach based on quantitative parameters of 3D brainstem morphology extracted automatically from T1w MRI has the potential to perform differential diagnosis of PD versus MSA/PSP with high accuracy.
CT Lung Nodules
Automated volumetric segmentation method for growth consistency of nonsolid pulmonary nodules in high-resolution CT
Show abstract
There is widespread clinical interest in the study of pulmonary nodules for early diagnosis of lung cancer. These nodules can be broadly classified into one of three types: solid, nonsolid, and part-solid. Solid nodules have been extensively studied, while little research has focused on the characterization of nonsolid and part-solid nodules. Nonsolid nodules appear in high-resolution CT as voxels only slightly more dense than the surrounding lung parenchyma. For solid nodules, robust techniques are available to estimate growth rate, which is commonly used to distinguish benign from malignant lesions. For the nonsolid types, these techniques are less well developed. In this research, we propose an automated volumetric segmentation method for nonsolid nodules that accurately determines a nonsolid nodule's growth rate. Our method starts with an initial noise-filtering stage in the parenchyma region. Each voxel is then classified into one of three tissue types: lung parenchyma, nonsolid, and solid. Removal of vessel attachments to the lesion is achieved with a filter that focuses on vessel characteristics. Our results indicate that the automated method is more consistent than the radiologist, with a median growth consistency of 1.87 compared to 3.12 for the radiologist on a database of 25 cases.
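The three-way voxel classification step described above can be sketched as simple Hounsfield-unit thresholding. The cut-off values below are illustrative assumptions, not thresholds taken from the paper.

```python
# Illustrative HU cut-offs -- hypothetical values, not those used in the paper.
PARENCHYMA_MAX = -750   # below this: normal lung parenchyma
NONSOLID_MAX = -350     # between the two: nonsolid (ground-glass); above: solid

def classify_voxel(hu):
    """Map a voxel's Hounsfield value to one of three tissue classes."""
    if hu < PARENCHYMA_MAX:
        return "parenchyma"
    if hu < NONSOLID_MAX:
        return "nonsolid"
    return "solid"

def classify_volume(voxels):
    """Label every voxel in a (flattened) volume."""
    return [classify_voxel(v) for v in voxels]
```

In practice such thresholds would follow the noise-filtering stage the abstract describes, since nonsolid intensities sit only slightly above the parenchymal background.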
Simulating solid lung nodules in MDCT images for CAD evaluation: modeling, validation, and applications
Show abstract
A new lung nodule simulation model was designed to create and insert synthetic solid lung nodules, with shapes
and density similar to real nodules, into normal MDCT chest exams. Nodule shapes were modeled using linearly
deformed superquadrics with added randomly generated high dimensional deformations. Nodule density statistics
and attenuation profiles were extracted from a group of real nodule samples by dissecting each real nodule
digitally layer by layer from the border to the core. A nodule created with modeled shape and density was
inserted into real CT images by creating volume average layers using weighted averaging between nodule density
and background density for each voxel. The nodule simulation model was validated both subjectively by human
experts and quantitatively by comparing density attenuation profiles of simulated nodules with real nodules.
These validation studies demonstrated a high level of similarity between the synthetic nodules and real nodules.
This nodule simulation model was used to create objective test databases for use in evaluating a CAD system. The
evaluation study showed that the CAD system was accurate in detection and volume measurement for isolated
nodules, and also performed relatively well for juxta-vascular nodules. The CAD system also demonstrated
stable performance across different dose levels.
Automated detection of pulmonary nodules from low-dose computed tomography scans using a two-stage classification system based on local image features
Show abstract
The automated detection of lung nodules in CT scans is an important problem in computer-aided diagnosis. In this paper an approach to nodule candidate detection is presented which utilises the local image features of shape index and curvedness. False-positive candidates are removed by means of a two-step approach using kNN classification. The kNN classifiers are trained using features of the image intensity gradients and grey-values, in addition to further measures of shape index and curvedness profiles in the candidate regions. The training set consisted of data from 698 scans, while the independent test set comprised a further 142 images. At 84% sensitivity, an average of 8.2 false-positive detections per scan was observed.
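Shape index and curvedness, the two local image features named above, are computed from the principal curvatures of the local image intensity surface. A minimal sketch following the common Koenderink-style definitions is below; note that sign conventions for the shape index vary between papers, so the orientation used here is an assumption.

```python
import math

def shape_index(k1, k2):
    """Shape index from principal curvatures (k1 >= k2).

    Ranges over [-1, 1]: +1 for a spherical cap (nodule-like blob),
    0.5 for a ridge (vessel-like), -1 for a spherical cup.
    """
    if k1 == k2:
        return 1.0 if k1 > 0 else (-1.0 if k1 < 0 else 0.0)
    return (2.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Magnitude of curvature, independent of the shape type."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)
```

The appeal of this pair of features is that shape index separates blob-like nodules from ridge-like vessels, while curvedness separates both from flat background.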
Computer-aided diagnosis for interval change analysis of lung nodule features in serial CT examinations
Show abstract
A CAD system was developed to extract and analyze features from corresponding malignant and benign lung nodules on temporal pairs of CT scans. The lung nodules on the current and prior CT scans were automatically segmented using a 3-dimensional (3D) active contour model. Three-dimensional run length statistics (RLS) texture features, 3D morphological and gray-level features were extracted from each nodule. In addition, 3D nodule profile features (PROF) that describe the gray level variation inside and outside the nodule surface were extracted by estimating the gradient magnitude values along the radial vectors from the nodule centroid to a band of voxels surrounding the nodule surface. Interval change features were calculated as the difference between the corresponding features extracted from the prior and the current scans of the same nodule. Stepwise feature selection with simplex optimization was used to select the best feature subset from the feature space that combined both the interval change features and features from the single current exam. A linear discriminant classifier was used to merge the selected features for classification of malignant and benign nodules. In this preliminary study, a data set of 103 nodule temporal pairs (39 malignant and 64 benign) was used. A leave-one-case-out resampling scheme was used for feature selection and classification. An average of 5 features was selected from the training subsets. The most frequently selected features included a difference PROF feature and 4 RLS features. The classifier achieved a test Az of 0.85±0.04. In comparison a classifier using features extracted from the current CT scans alone achieved a test Az of 0.78±0.05. This study indicates that our CAD system using interval change information is useful for classification of lung nodules on CT scans.
Statistical image quantification toward optimal scan fusion and change quantification
Show abstract
Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators in supporting more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued by increased levels of statistical error. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice.
In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error, and that there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at the feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we achieve a lower variance than naïve averaging. Simulated experiments are used to validate the theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
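The optimal linear fusion argument above is the classical inverse-variance weighting rule: giving each scan a weight proportional to the reciprocal of its measurement variance yields a fused estimate whose variance is never larger than that of the naïve average. A minimal sketch, with hypothetical per-scan variances standing in for the anisotropy-dependent errors discussed in the paper:

```python
def fuse_measurements(values, variances):
    """Minimum-variance unbiased linear fusion of independent measurements.

    Weights are proportional to 1/variance; the fused variance is
    1 / sum(1/variance_i).
    """
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    fused = sum(w * x for w, x in zip(inv, values)) / total
    fused_var = 1.0 / total
    return fused, fused_var

def naive_average_variance(variances):
    """Variance of the plain average of the same independent measurements."""
    n = len(variances)
    return sum(variances) / (n * n)
```

A scan whose voxel anisotropy aligns with the lesion's elongation measures the lesion with smaller variance, so under this rule it automatically receives a larger weight, exactly the conclusion stated in the abstract.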
Computer-aided characterization of solitary pulmonary nodules (SPNs) using structural 3D, texture, and functional dynamic contrast features
Show abstract
The purpose of this paper was to investigate the effect of integrating nodule 3D morphological features, texture features, and functional dynamic contrast-enhanced features in differentiating between benign and malignant solitary pulmonary nodules (SPNs). In this study, 42 cases with solitary lung nodules were examined. The dynamic helical CT scans were acquired at five time points: prior to contrast injection (baseline) and then at 45, 90, 180, and 300 seconds after administering the contrast agent. The nodule boundaries were contoured by radiologists on all series. Using these boundaries, several types of nodule features were computed: 3D morphology and shape index (SI) of the nodule contrast-intensity surface, dynamic contrast-related features, and 3D texture features. AdaBoost was used to select the best features. Logistic regression analysis (LRA) and AdaBoost were used to analyze the diagnostic accuracy of the features in each category, and the performance when integrating all feature types was also evaluated. For the 42 patients, when using only six SI and 3D structural features, the accuracy of AdaBoost was 81.4%, while the accuracies of AdaBoost using functional contrast-related features (8 features) and texture features (18 features) were 65.1% and 69.1%, respectively. After combining all feature types, the overall accuracy improved to over 88%. In conclusion, combining 3D structural, textural, and functional contrast features can provide a more comprehensive examination of SPNs by coupling dynamic CT scan techniques with image processing to quantify multiple properties related to tumor geometry and tumor angiogenesis. This integration may assist radiologists in characterizing SPNs more accurately.
Breast Tomosynthesis
Feasibility study of breast tomosynthesis CAD system
Show abstract
The purpose of this study was to investigate the feasibility of computer-aided detection of masses and calcification clusters in breast tomosynthesis images and to obtain reliable estimates of sensitivity and false-positive rate on an independent test set. Automatic mass and calcification detection algorithms developed for film and digital mammography images were applied without any adaptation or retraining to tomosynthesis projection images. The test set contained 36 patients, including 16 patients with 20 known malignant lesions, 4 of which were missed by the radiologists in conventional mammography images and found only in retrospect in tomosynthesis. A median filter was applied to the tomosynthesis projection images. The detection algorithms yielded 80% sensitivity at 5.3 false positives per breast for the calcification and mass detection algorithms combined. Of the 4 masses missed by radiologists in conventional mammography images, 2 were found by the mass detection algorithm in tomosynthesis images.
Breast mass detection in tomosynthesis projection images using information-theoretic similarity measures
Show abstract
The purpose of this project is to study Computer Aided Detection (CADe) of breast masses for digital
tomosynthesis. It is believed that tomosynthesis will show improvement over conventional mammography in
detection and characterization of breast masses by removing overlapping dense fibroglandular tissue. This study
used the 60 human subject cases collected as part of on-going clinical trials at Duke University. Raw projection
images were used to identify suspicious regions in the algorithm's high-sensitivity, low-specificity stage using a
Difference of Gaussian (DoG) filter. The filtered images were thresholded to yield initial CADe hits that were then
shifted and added to yield a 3D distribution of suspicious regions. These were further summed in the depth direction
to yield a flattened probability map of suspicious hits for ease of scoring. To reduce false positives, we developed an
algorithm based on information theory where similarity metrics were calculated using knowledge databases
consisting of tomosynthesis regions of interest (ROIs) obtained from projection images. We evaluated 5 similarity
metrics to test the false positive reduction performance of our algorithm, specifically joint entropy, mutual
information, Jensen difference divergence, symmetric Kullback-Leibler divergence, and conditional entropy. The
best performance was achieved using the joint entropy similarity metric, resulting in ROC Az of 0.87 ± 0.01. As a
whole, the CADe system can detect breast masses in this data set with 79% sensitivity and 6.8 false positives per
scan. In comparison, the original radiologists performed with only 65% sensitivity when using mammography alone,
and 91% sensitivity when using tomosynthesis alone.
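Joint entropy, the best-performing similarity metric above, can be computed from the joint gray-level histogram of two ROIs; two ROIs with tightly coupled gray levels yield low joint entropy. A minimal sketch with a hypothetical quantization (8 bins over a fixed intensity range; the paper's binning is not given in the abstract):

```python
import math
from collections import Counter

def joint_entropy(a, b, bins=8, lo=0.0, hi=1.0):
    """Joint entropy (in bits) of two equally sized ROIs, flattened to lists,
    over a shared gray-level quantization."""
    def q(x):  # quantize an intensity into one of `bins` levels
        i = int((x - lo) / (hi - lo) * bins)
        return min(max(i, 0), bins - 1)

    pairs = Counter((q(x), q(y)) for x, y in zip(a, b))
    n = len(a)
    return -sum(c / n * math.log2(c / n) for c in pairs.values())
```

When the candidate ROI resembles an entry in the knowledge database, the joint histogram concentrates in few cells and the joint entropy drops, which is what makes it usable as a similarity score for false-positive reduction.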
Computer-aided detection of masses in digital tomosynthesis mammography: combination of 3D and 2D detection information
Show abstract
We are developing a computer-aided detection (CAD) system for masses on digital breast tomosynthesis mammograms (DBTs). The CAD system includes two parallel processes. In the first process, mass detection and feature analysis are performed in the reconstructed 3D DBT volume. A mass likelihood score is estimated for each mass candidate using a linear discriminant analysis (LDA) classifier. In the second process, mass detection and feature analysis are applied to the individual projection view (PV) images. A mass likelihood score is estimated for each mass candidate using another LDA classifier. The mass likelihood images derived from the PVs are back-projected to the breast volume to estimate the 3D spatial distribution of the mass likelihood scores. The mass likelihood scores estimated by the two processes at the corresponding 3D location are then merged and evaluated using FROC analysis. In this preliminary study, a data set of 52 DBT cases acquired with a GE prototype system at the Massachusetts General Hospital was used. The LDA classifiers with stepwise feature selection were designed with leave-one-case-out resampling. In FROC analysis, the CAD system for detection in the DBT volume alone achieved test sensitivities of 80% and 90% at average FP rates of 1.6 and 3.0 per breast, respectively. In comparison, the average FP rates of the combined system were 1.2 and 2.3 per breast, respectively, at the same sensitivities. The combined system is a promising approach to improving mass detection on DBTs.
Analysis of parenchymal texture properties in breast tomosynthesis images
Show abstract
We have analyzed breast parenchymal texture in tomosynthesis images. Tomosynthesis is a novel x-ray imaging
modality in which 3D images of the breast are reconstructed from multiple 2D x-ray source projection images acquired
by varying the angle of the x-ray tube. Our ultimate goal is to examine the correlation between tomosynthesis texture
descriptors and breast cancer risk. As a first step, we investigated the effect of tomosynthesis acquisition parameters on
texture in the source projection images; this avoids the influence of the reconstruction algorithm. We computed
statistical texture descriptors which have been shown in the literature to be highly indicative of breast cancer risk. We
compared skewness, coarseness, and contrast computed from the central source projection images and the corresponding
mammograms. Our analysis showed that differences exist between mammographic and tomosynthetic texture in
projection images. Retroareolar ROIs in tomosynthesis images appeared to be less skewed with lower coarseness and
higher contrast measures compared to mammograms; however, corresponding texture descriptors for tomosynthesis and
mammography are correlated. Examination of the ROIs demonstrates that the texture in tomosynthesis source
projections visually differs from the x-ray mammograms. We attribute this observation to acquisition differences,
including radiation dose, compression force, and x-ray scatter. As with mammography, tomosynthesis parenchymal
texture is related to the Gail-model cancer risk. Although preliminary, we believe that texture analysis of 3D breast
tomosynthesis images will ultimately yield more accurate and precise measures of risk.
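The statistical descriptors compared above can be computed directly from ROI gray levels. A minimal sketch of two of them, skewness and a simple standard-deviation contrast (the NGTDM-style coarseness measure is omitted for brevity, and the toy pixel list is illustrative only):

```python
import math

def skewness(pixels):
    """Third standardized moment of the gray-level distribution."""
    n = len(pixels)
    mean = sum(pixels) / n
    sd = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    if sd == 0:
        return 0.0
    return sum(((p - mean) / sd) ** 3 for p in pixels) / n

def contrast(pixels):
    """A simple contrast measure: standard deviation of gray levels
    (one of several definitions used in the texture literature)."""
    n = len(pixels)
    mean = sum(pixels) / n
    return math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)

roi = [10, 12, 11, 40, 13, 11, 12, 10]   # toy retroareolar ROI values
```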
Cardiac/ New Applications
Segmentation of coronary arteries from CT angiography images
Show abstract
We present an automated method for delineation of coronary arteries from Cardiac CT Angiography (CTA) images. Coronary arteries are narrow blood vessels and when imaged using CTA, appear as thin cylindrical structures of varying curvature. This appearance is often affected by heart motion and image reconstruction artifacts. Moreover, when an artery is diseased, it may appear as a non-continuous structure of widely varying width and image intensity. Defining the boundaries of the coronary arteries is an important and necessary step for further analysis and diagnosis of coronary disease. For this purpose, we developed a method using cylindrical structure modeling. For each vessel segment a best fitting cylindrical template is found. By applying this technique sequentially along the vessel, its entire volume can be reconstructed. The algorithm is seeded with a manually specified starting point at the most distal discernible portion of an artery and then it proceeds iteratively toward the aorta. The algorithm makes necessary corrections to account for CTA image artifacts and is able to perform in diseased arteries. It stops when it identifies the vessels junction with the aorta. Five cardiac 3D CT angiography studies were used for algorithm validation. For each study, the four longest visually discernible branches of the major coronary arteries were evaluated. Central axes obtained from our automated method were compared with ground truth markings made by an experienced radiologist. In 75% of the cases, our algorithm was able to extract the entire length of the artery from single initialization.
Anatomically constrained maximum likelihood estimation for estimating retinal thickness from scanning laser ophthalmoscope data
Show abstract
A multistage algorithm is presented, whose components are based upon maximum likelihood estimation (MLE). From
3D scanning laser ophthalmoscope (SLO) image data, the algorithm finds the positions of the two anatomical boundaries
of the eye's fundus that define the retina, which are the internal limiting membrane (ILM) and the retinal pigment
epithelium (RPE). The retinal thickness is then calculated by subtraction. Retinal thickness is useful for indicating,
assessing risk of, and following several diseases, including various forms of macular edema and cysts.
Computer-aided septal defect diagnosis and detection
Show abstract
To facilitate clinical diagnosis, surgical planning, and postoperative follow-up, a computer-aided septal defect diagnosis and detection framework is proposed. The framework consists of four steps: image registration, flow balance measurement, heart wall tracking, and septal defect detection. First, a globally smooth, constrained localized registration method is employed to register the images. Then, flow balance measurement is used to detect an imbalance between inflowing and outflowing blood, which usually indicates a septal defect in the heart. Next, the heart wall is tracked using the same framework as in the registration step, improving efficiency and accuracy. Finally, defects along the septum are detected using Bayesian information fusion, which analyzes profile lines from the registered image, the difference image, and the original gray-level image over the whole sequence (3D+T). The proposed method was tested on gated cardiac MRI, a well-established clinical modality for septal defect detection. Experimental results show that the framework successfully detects septal defects and provides visual assistance to the radiologist for further diagnosis. The proposed method can be used in clinical practice, surgical planning, and postoperative follow-up. To the best of our knowledge, this work is the first such effort.
Computer-aided assessment of cardiac computed tomographic images
Show abstract
The accurate interpretation of cardiac CT images is commonly hindered by the presence of motion artifacts. Since motion
artifacts commonly can obscure the presence of coronary lesions, physicians must spend much effort analyzing images
at multiple cardiac phases in order to determine which coronary structures are assessable for potential lesions. In this
study, an artificial neural network (ANN) classifier was designed to assign assessability indices to calcified plaques in
individual region-of-interest (ROI) images reconstructed at multiple cardiac phases from two cardiac scans obtained at
heart rates of 66 bpm and 90 bpm. Six individual features (volume, circularity, mean intensity, margin gradient, velocity,
and acceleration) were used for analyzing images. Visually-assigned assessability indices were used as a continuous truth,
and jack-knife analysis with four testing sets was used to evaluate the performance of the ANN classifier. In a study in
which all six features were input into the ANN classifier, correlation coefficients of 0.962 ± 0.006 and 0.935 ± 0.023
between true and ANN-assigned assessability indices were obtained for databases corresponding to 66 bpm and 90 bpm,
respectively.
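The reported agreement between the visually assigned and ANN-assigned assessability indices is a Pearson correlation coefficient, which can be computed as follows (the index values here are toy stand-ins for real data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences,
    as used to compare true and classifier-assigned assessability indices."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

true_idx = [0.9, 0.7, 0.4, 0.2]    # toy visually assigned indices
ann_idx = [0.85, 0.75, 0.35, 0.25] # toy ANN-assigned indices
r = pearson_r(true_idx, ann_idx)
```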
A method for extracting multi-organ from four-phase contrasted CT images based on CT value distribution estimation using EM-algorithm
Show abstract
This paper presents a method for extracting multi-organs from four-phase contrasted CT images taken at different
contrast timings (non-contrast, early, portal, and late phases). First, we apply a median filter to each CT image
and align four-phase CT images by performing non-rigid volumetric image registration. Then, a three-dimensional
joint histogram of CT values is computed from three-phase (early-, portal-, and late-) CT images. We assume
that this histogram is a mixture of normal distributions corresponding to the liver, spleen, kidney, vein, artery,
muscle, and bone regions. The EM algorithm is employed to estimate each normal distribution. Organ labels
are assigned to each voxel using the Mahalanobis distance measure. Connected component analysis is applied
to correct the shape of each organ region. After that, the pancreas region is extracted from non-contrasted CT
images in which other extracted organs and vessel regions are excluded. The EM algorithm is also employed for
estimating the distribution of CT values inside the pancreas. We applied this method to seven cases of four-phase
CT images. Extraction results show that the proposed method extracted multi-organs satisfactorily.
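The estimation step can be illustrated in one dimension: a toy EM fit of a two-component Gaussian mixture followed by Mahalanobis-distance labeling. The paper works with a three-dimensional joint histogram and more components, so this is a simplified sketch, not the actual implementation.

```python
import math

def em_1d(data, mus, sigmas, pis, iters=50):
    """EM for a 1-D Gaussian mixture: a toy stand-in for the paper's
    joint-histogram model of CT values across contrast phases."""
    def pdf(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    K = len(mus)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in data:
            w = [pis[k] * pdf(x, mus[k], sigmas[k]) for k in range(K)]
            tot = sum(w)
            resp.append([wk / tot for wk in w])
        # M-step: re-estimate means, variances and mixing weights
        for k in range(K):
            nk = sum(r[k] for r in resp)
            mus[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigmas[k] = max(math.sqrt(var), 1e-6)
            pis[k] = nk / len(data)
    return mus, sigmas, pis

def label(x, mus, sigmas):
    """Assign the component with the smallest (1-D) Mahalanobis distance."""
    return min(range(len(mus)), key=lambda k: abs(x - mus[k]) / sigmas[k])

# Two well-separated clusters of toy 'CT values'
data = [48, 50, 52, 148, 150, 152]
mus, sigmas, pis = em_1d(data, [40.0, 160.0], [10.0, 10.0], [0.5, 0.5])
```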
Automatic polyp region segmentation for colonoscopy images using watershed algorithm and ellipse segmentation
Show abstract
In the US, colorectal cancer is the second leading cause of all cancer deaths behind lung cancer. Colorectal polyps are the precursor lesions of colorectal cancer. Therefore, early detection of polyps and at the same time removal of these precancerous lesions is one of the most important goals of colonoscopy. To objectively document detection and removal of colorectal polyps for quality purposes, and to facilitate real-time detection of polyps in the future, we have initiated a computer-based research program that analyzes video files created during colonoscopy. For computer-based detection of polyps, texture based techniques have been proposed. A major limitation of the existing texture-based analytical methods is that they depend on a fixed-size analytical window. Such a fixed-sized window may work for still images, but is not efficient for analysis of colonoscopy video files, where a single polyp can have different relative sizes and color features, depending on the viewing position and distance of the camera. In addition, the existing methods do not consider shape features. To overcome these problems, we here propose a novel polyp region segmentation method primarily based on the elliptical shape that nearly all small polyps and many larger polyps possess. Experimental results indicate that our proposed polyp detection method achieves a sensitivity and specificity of 93% and 98%, respectively.
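The elliptical-shape cue can be quantified, for example, from a candidate region's second-order central moments. The moment-based elongation measure below is a simplified, hypothetical stand-in for the watershed-plus-ellipse segmentation described above:

```python
import math

def elongation(pixels):
    """Minor-to-major axis ratio of the best-fit ellipse, from
    second-order central moments of a pixel region: values near 1
    indicate a round, polyp-like blob; values near 0 a stretched one."""
    n = len(pixels)
    cy = sum(p[0] for p in pixels) / n
    cx = sum(p[1] for p in pixels) / n
    mu20 = sum((p[1] - cx) ** 2 for p in pixels) / n
    mu02 = sum((p[0] - cy) ** 2 for p in pixels) / n
    mu11 = sum((p[1] - cx) * (p[0] - cy) for p in pixels) / n
    d = math.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    lam_max = (mu20 + mu02 + d) / 2
    lam_min = max((mu20 + mu02 - d) / 2, 0.0)  # clamp float round-off
    return math.sqrt(lam_min / lam_max) if lam_max > 0 else 1.0

# A 3x3 square blob is round; a 1x9 line is maximally stretched
square = [(r, c) for r in range(3) for c in range(3)]
line = [(0, c) for c in range(9)]
```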
Breast Imaging
The effect of image quality on the appearance of lesions on breast ultrasound: implications for CADx
Show abstract
With the emergence of recent technology in breast ultrasound, sonographic image
quality has changed profoundly. Most notably, the technique of real-time
spatial compounding impacts the appearance of lesions and parenchyma. During
image acquisition, spatial compounding can be turned on or off at the
discretion of the radiologist, but this information is not stored along with
the image data.
The ability to distinguish between lesions imaged with and without spatial
compounding, using either single image features or a Bayesian neural net (BNN), was assessed using ROC analysis. Our database consisted of consecutively
collected HDI5000 images of 129 lesions imaged without spatial compounding
(357 images, cancer prevalence of 18%) and 370 lesions imaged with spatial
compounding (965 images, cancer prevalence 15%). These were used in automated
feature selection and BNN training. An additional 33 lesions were imaged for
which identical views with and without spatial compounding were available (70
images, cancer prevalence 15%). These served as an independent test dataset.
Lesions were outlined by a radiologist and image features, mathematically
describing lesion characteristics, were calculated.
In feature selection, the 4 best performing features were related to gradient
strength and entropy. The average gradient strength within a lesion obtained an
area under the ROC curve (AUC) of 0.78 in the task of distinguishing lesions
imaged with and without spatial compounding. The BNN, using 4 features,
achieved an AUC on the independent test dataset of 0.98 in this task.
The sonographic appearance of breast lesions is affected by spatial compound
imaging and lesion features may be used to automatically separate images as
obtained with or without this technique. In computer-aided diagnosis (CADx),
it will likely be beneficial
to separate images as such before using separate classifiers for assessment of
malignancy.
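The AUC values quoted above can be estimated nonparametrically from classifier (or single-feature) scores via the Mann-Whitney statistic; a minimal sketch with toy scores:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive-class case scores
    higher than a randomly chosen negative one (ties count one half)."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# e.g. 'with compounding' vs 'without', scored by one gradient feature
compounded = [0.9, 0.8, 0.6]
plain = [0.5, 0.4, 0.7]
```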
Neural network vector quantization improves the diagnostic quality of computer-aided diagnosis in dynamic breast MRI
Show abstract
We quantitatively evaluate a novel neural network pattern recognition approach for characterization of diagnostically challenging breast lesions in contrast-enhanced dynamic breast MRI. Eighty-two women with 84 indeterminate mammographic lesions (BIRADS III-IV, 38/46 benign/malignant lesions confirmed by histopathology and follow-up, median lesion diameter 12mm) were examined by dynamic contrast-enhanced breast MRI. The temporal signal dynamics results in an intensity time-series for each voxel represented by a 6-dimensional feature vector. These vectors were clustered by minimal-free-energy Vector Quantization (VQ), which identifies groups of pixels with similar enhancement kinetics as prototypical time-series, so-called codebook vectors. For comparison, conventional analysis based on lesion-specific averaged signal-intensity time-courses was performed according to a standardized semi-quantitative evaluation score. For quantitative assessment of diagnostic accuracy, areas under ROC curves (AUC) were computed for both VQ and standard classification methods. VQ increased the diagnostic accuracy for classification between benign and malignant lesions, as confirmed by quantitative ROC analysis: VQ results (AUC=0.760) clearly outperformed the conventional evaluation of lesion-specific averaged time-series (AUC=0.693). Thus, the diagnostic benefit of neural network VQ for MR mammography analysis is quantitatively documented by ROC evaluation in a large data base of diagnostically challenging small focal breast lesions. VQ outperforms the conventional method w.r.t. diagnostic accuracy.
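The clustering step can be illustrated with plain k-means, a simplified stand-in for minimal-free-energy VQ (which additionally anneals a temperature-like parameter); toy six-point enhancement curves stand in for real voxel time-series:

```python
def kmeans_vq(vectors, codebook, iters=20):
    """Plain k-means as a simplified stand-in for minimal-free-energy VQ:
    each voxel's signal-intensity time-series is assigned to its nearest
    codebook vector, and codebook vectors are updated to cluster means."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    for _ in range(iters):
        groups = [[] for _ in codebook]
        for v in vectors:
            k = min(range(len(codebook)), key=lambda i: d2(v, codebook[i]))
            groups[k].append(v)
        for k, g in enumerate(groups):
            if g:
                codebook[k] = [sum(col) / len(g) for col in zip(*g)]
    return codebook

# Toy 6-point enhancement curves: 'washout' vs 'persistent' kinetics
washout = [[0, 8, 10, 9, 7, 5], [0, 7, 9, 8, 6, 4]]
persist = [[0, 2, 3, 4, 5, 6], [0, 1, 2, 3, 4, 5]]
codebook = kmeans_vq(washout + persist,
                     [[0, 0, 0, 0, 0, 0], [0, 9, 9, 9, 9, 9]])
```

The two resulting codebook vectors are prototypical time-series: one persistent-enhancement curve and one washout curve.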
Joint feature selection and classification using a Bayesian neural network with automatic relevance determination priors: potential use in CAD of medical imaging
Show abstract
Bayesian neural network (BNN) with automatic relevance determination (ARD) priors has the ability to assess the relevance of each input feature during network training. Our purpose is to investigate the potential use of BNN-with-ARD-priors for joint feature selection and classification in computer-aided diagnosis (CAD) of medical imaging. With ARD priors, each group of weights that connect an input feature to the hidden units is associated with a hyperparameter controlling the magnitudes of the weights. The hyperparameters and the weights are updated simultaneously during neural network training. A smaller hyperparameter will likely result in larger weight values and the corresponding feature will likely be more relevant to the output, and thus, to the classification task. For our study, a multivariate normal feature space is designed to include one feature with high classification performance in terms of both ideal observer and linear observer, two features with high ideal observer performance but low linear observer performance and 7 useless features. An exclusive-OR (XOR) feature space is designed to include 2 XOR features and 8 useless features. Our simulation results show that the ARD-BNN approach has the ability to select the optimal subset of features on the designed nonlinear feature spaces on which the linear approach fails. ARD-BNN has the ability to recognize features that have high ideal observer performance. Stepwise linear discriminant analysis (SWLDA) has the ability to select features that have high linear observer performance but fails to select features that have high ideal observer performance and low linear observer performance. The cross-validation results on clinical breast MRI data show that ARD-BNN yields statistically significant better performance than does the SWLDA-LDA approach. We believe that ARD-BNN is a promising method for pattern recognition in computer-aided diagnosis of medical imaging.
Learning distance metrics for interactive search-assisted diagnosis of mammograms
Show abstract
The goal of interactive search-assisted diagnosis (ISAD) is to enable doctors to make more informed decisions about a given case by providing a selection of similar annotated cases. For instance, a radiologist examining a suspicious mass could study labeled mammograms with similar conditions and weigh the outcome of their biopsy results before determining whether to recommend a biopsy. The fundamental challenge in developing ISAD systems is the identification of similar cases, not simply in terms of superficial image characteristics, but in a medically-relevant sense. This task involves three aspects: extracting a representative set of features, identifying an appropriate measure of similarity in the high-dimensional feature space, and returning the most similar matches at interactive speed. The first has been an active research area for several decades. The second has largely been ignored by the medical imaging community. The third can be achieved using the Diamond framework, an open-source platform that enables efficient exploration of large distributed complex data repositories. This paper focuses on the second aspect. We show that the choice of distance metric affects the accuracy of an ISAD system and that machine learning enables the construction of effective domain-specific distance metrics. In the learned distance, data points with the same labels (e.g., malignant masses) are closer than data points with different labels (e.g., malignant vs. benign). Thus, the labels of the near neighbors of a new case are likely to be informative. We present several novel methods for distance metric learning and evaluate them on a database involving 2522 mass regions of interest (ROI) extracted from digital mammograms, with ground truth defined by biopsy results (1800 malignant, 722 benign). Our results show that learned distance metrics improve both classification (ROC curve) and retrieval performance.
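One simple way to make same-label cases closer than different-label ones is a diagonal (per-feature-weighted) distance. The within-class-variance weighting below is a hypothetical illustration, much simpler than the learned metrics evaluated in the paper:

```python
def learn_diagonal_metric(X, y):
    """Learn per-feature weights for a diagonal Mahalanobis-style
    distance: features with small within-class spread (informative for
    the label) get large weights.  A simplified, hypothetical stand-in
    for full metric learning."""
    dims = len(X[0])
    weights = []
    for j in range(dims):
        spread = 0.0
        for lbl in set(y):
            vals = [x[j] for x, t in zip(X, y) if t == lbl]
            m = sum(vals) / len(vals)
            spread += sum((v - m) ** 2 for v in vals)
        weights.append(1.0 / (spread / len(X) + 1e-6))
    return weights

def dist(a, b, w):
    """Weighted Euclidean distance under the learned diagonal metric."""
    return sum(wj * (aj - bj) ** 2 for wj, aj, bj in zip(w, a, b)) ** 0.5

# Feature 0 separates malignant (1) from benign (0); feature 1 is noise
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 4.0], [1.0, 2.0]]
y = [0, 0, 1, 1]
w = learn_diagonal_metric(X, y)
```

Under the learned weights, the noisy feature contributes little, so near neighbors of a new case tend to share its label.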
Determination of subjective and objective similarity for pairs of masses on mammograms for selection of similar images
Show abstract
Presentation of images with known pathology similar to that of a new unknown lesion would be helpful for radiologists in their diagnosis of breast cancer. In order to find images that are really similar and useful to radiologists, we determined the radiologists' subjective similarity ratings for pairs of masses, and investigated objective similarity measures that would agree well with the subjective ratings. Fifty sets of images, each of which included one image in the center and six other images to be compared with the center image, were selected; thus, 300 pairs of images were prepared. Ten breast radiologists provided the subjective similarity ratings for each image pair in terms of the overall impression for diagnosis. The objective similarity measures based on cross-correlation of the images, differences in feature values, and psychophysical measures by use of an artificial neural network were determined. The objective measures based on the cross-correlation were found to be not correlated with the subjective similarity ratings (r < 0.1). The differences in the features characterizing the margin were relatively strong indicators of the similarity (r > 0.40). When several image features were used, the differences-based objective measure was moderately correlated (r = 0.59) with the subjective ratings. The relatively high correlation coefficient (r = 0.74) was obtained for the psychophysical similarity measure. The similar images selected by use of the psychophysical measure can be useful to radiologists in the diagnosis of breast cancer.
Classification of mammographic masses using support vector machines and Bayesian networks
Show abstract
In this paper, we compare two state-of-the-art classification techniques characterizing masses as either benign
or malignant, using a dataset consisting of 271 cases (131 benign and 140 malignant), containing both a MLO
and CC view. For suspect regions in a digitized mammogram, 12 out of 81 calculated image features have been
selected for investigating the classification accuracy of support vector machines (SVMs) and Bayesian networks
(BNs). Additional techniques for improving their performance were included in their comparison: the Manly
transformation for achieving a normal distribution of image features and principal component analysis (PCA) for
reducing our high-dimensional data. The performance of the classifiers were evaluated with Receiver Operating
Characteristics (ROC) analysis. The classifiers were trained and tested using a k-fold cross-validation test method
(k=10). It was found that the area under the ROC curve (Az) of the BN increased significantly (p=0.0002)
using the Manly transformation, from Az = 0.767 to Az = 0.795. The Manly transformation did not result in
a significant change for SVMs. Also the difference between SVMs and BNs using the transformed dataset was
not statistically significant (p=0.78). Applying PCA resulted in an improvement in classification accuracy of the
naive Bayesian classifier, from Az = 0.767 to Az = 0.786. The difference in classification performance between
BNs and SVMs after applying PCA was small and not statistically significant (p=0.11).
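The Manly transformation used above maps a feature x to (e^{λx} − 1)/λ (and to x itself for λ = 0) and, unlike Box-Cox, is defined for negative values. The sketch below picks λ by minimizing the skewness of the transformed feature, one simple surrogate for normality and not necessarily the criterion used in the paper:

```python
import math

def manly(x, lam):
    """Manly (1976) exponential transformation toward normality."""
    return x if lam == 0 else (math.exp(lam * x) - 1.0) / lam

def skewness(vals):
    """Third standardized moment, used here as a normality surrogate."""
    n = len(vals)
    m = sum(vals) / n
    sd = (sum((v - m) ** 2 for v in vals) / n) ** 0.5
    return sum(((v - m) / sd) ** 3 for v in vals) / n

def best_lambda(vals, grid):
    """Pick the lambda whose transformed values are least skewed."""
    return min(grid, key=lambda l: abs(skewness([manly(v, l) for v in vals])))

feature = [0.1, 0.2, 0.3, 0.5, 0.9, 1.6, 3.0]  # right-skewed toy feature
grid = [l / 10.0 for l in range(-20, 21)]
lam = best_lambda(feature, grid)
```

For right-skewed data a negative λ is selected, since it compresses the long right tail.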
Thoracic/Skeletal Imaging
Assessment of femoral bone quality using co-occurrence matrices and adaptive regions of interest
Show abstract
The surgical treatment of femur fractures, which often result from osteoporosis, is highly dependent on the quality of the femoral bone. Unsatisfactory outcomes of surgical interventions, such as early loosening of implants, may be one consequence of altered bone quality. However, clinical diagnostic techniques to quantify local bone quality are limited and often highly observer dependent. Therefore, tools that automatically and reproducibly place regions of interest (ROI) and assess the local quality of the femoral bone in these ROIs would be of great help for clinicians.
For this purpose, a method to position and deform ROIs automatically and reproducibly depending on the size and shape of the femur will be presented. Moreover, an approach to assess the femur quality, based on calculating texture features using co-occurrence matrices within these adaptive regions, will be proposed.
For testing purposes, 15 CT-datasets of anatomical specimen of human femora are used. The correlation between the texture features and biomechanical properties of the proximal femoral bone is calculated. First results are very promising and show high correlation between the calculated features and biomechanical properties. Testing the method on a larger data pool and refining the algorithms to further increase its sensitivity for altered bone quality will be the next steps in this project.
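The co-occurrence features can be sketched directly: a gray-level co-occurrence matrix for one pixel offset, and the Haralick contrast feature derived from it (the actual feature set and offsets used in the study are not specified here):

```python
def glcm(image, dr, dc, levels):
    """Gray-level co-occurrence matrix for one offset (dr, dc),
    normalized so its entries sum to 1."""
    rows, cols = len(image), len(image[0])
    M = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                M[image[r][c]][image[r2][c2]] += 1
                pairs += 1
    return [[v / pairs for v in row] for row in M]

def glcm_contrast(M):
    """Haralick contrast: expected squared gray-level difference."""
    return sum(M[i][j] * (i - j) ** 2
               for i in range(len(M)) for j in range(len(M)))

roi = [[0, 0, 1, 1],   # toy 4-level ROI
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
M = glcm(roi, 0, 1, levels=4)
```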
Imputation methods for temporal radiographic texture analysis in the detection of periprosthetic osteolysis
Show abstract
Periprosthetic osteolysis is a disease triggered by the body's response to tiny wear fragments from total hip replacements (THR), which leads to localized bone loss and disappearance of the trabecular bone texture. We have been investigating methods of temporal radiographic texture analysis (tRTA) to help detect periprosthetic osteolysis. One method involves merging feature measurements at multiple time points using an LDA or BANN. The major drawback of this method is that several cases do not meet the inclusion criteria because of missing data, i.e., missing image data at the necessary time intervals. In this research, we investigated imputation methods to fill in missing data points using feature averaging, linear interpolation, and first and second order polynomial fitting. The database consisted of 101 THR cases with full data available from four follow-up intervals. For 200 iterations, missing data were randomly created to simulate a typical THR database, and the missing points were then filled in using the imputation methods. ROC analysis was used to assess the performance of tRTA in distinguishing between osteolysis and normal cases for the full database and each simulated database. The calculated values from the 200 iterations showed that the imputation methods produced negligible bias, and substantially decreased the variance of the AUC estimator, relative to excluding incomplete cases. The best performing imputation methods were those that heavily weighted the data points closest to the missing data. The results suggest that these imputation methods appear to be acceptable means to include cases with missing data for tRTA.
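The linear-interpolation imputation can be sketched as follows; endpoints with no observed neighbor on one side simply copy the nearest observed value (one plausible convention; the study also compares feature averaging and polynomial fits):

```python
def impute(series):
    """Fill missing follow-up measurements (None) by linear
    interpolation between the nearest observed time points."""
    known = [i for i, v in enumerate(series) if v is not None]
    out = list(series)
    for i, v in enumerate(series):
        if v is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:
            out[i] = series[right]
        elif right is None:
            out[i] = series[left]
        else:
            t = (i - left) / (right - left)
            out[i] = series[left] + t * (series[right] - series[left])
    return out

# Texture feature at four follow-up intervals, one missing in between
filled = impute([0.50, None, 0.70, 0.90])   # index 1 interpolated to 0.6
```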
Computer aided root decay detection using level set and complex wavelets
Show abstract
A computer aided root lesion detection method for digital dental X-rays is proposed using level set and complex wavelets. The detection method consists of two stages: preprocessing and root lesion detection. During preprocessing, a level set segmentation is applied to separate the teeth from the background. Tailored to the dental clinical environment, an acceleration scheme for the segmentation is applied, using a support vector machine (SVM) classifier and individual principal component analysis (PCA) to provide an initial contour. Then, based on the segmentation result, root lesion detection is performed. Firstly, the teeth are isolated by the average intensity profile. Secondly, a center-line zero-crossing based candidate generation is applied to generate the possible root lesion areas. Thirdly, the Dual-Tree Complex Wavelet Transform (DT-CWT) is used to further remove false positives. Lastly, when a root lesion is detected, its area is automatically marked with a color indication representing different levels of seriousness. 150 real dental X-rays with various degrees of root lesions were used to test the proposed method. The results were validated by a dentist. Experimental results show that the proposed method is able to successfully detect root lesions and provide visual assistance to the dentist.
Computerized method for detection of vertebral fractures on lateral chest radiographs based on morphometric data
Show abstract
Vertebral fractures are the most common osteoporosis-related fractures. It is important to detect vertebral fractures, because they are associated with increased risk of subsequent fractures, and because pharmacologic therapy can reduce the risk of subsequent fractures. Although vertebral fractures are often not clinically recognized, they can be visualized on lateral chest radiographs taken for other purposes. However, only 15-60% of vertebral fractures found on lateral chest radiographs are mentioned in radiology reports.
The purpose of this study was to develop a computerized method for detection of vertebral fractures on lateral chest radiographs in order to assist radiologists' image interpretation. Our computerized method is based on the automated identification of upper and lower vertebral edges. In order to develop the scheme, radiologists provided morphometric data for each identifiable vertebra, which consisted of six points for each vertebra, for 25 normals and 20 cases with severe fractures. Anatomical information was obtained from morphometric data of normal cases in terms of vertebral heights, heights of vertebral disk spaces, and vertebral centerline. Computerized detection of vertebral fractures was based on the reduction in the heights of fractured vertebrae compared to adjacent vertebrae and normal reference data. Vertebral heights from morphometric data on normal cases were used as reference.
On 138 chest radiographs (20 with fractures) the sensitivity of our method for detection of fracture cases was 95% (19/20) with 0.93 (110/118) false-positives per image. In conclusion, the computerized method would be useful for detection of potentially overlooked vertebral fractures on lateral chest radiographs.
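The height-reduction criterion can be sketched as a comparison against adjacent vertebrae. The 20% reduction threshold below is a hypothetical choice, and the actual scheme also compares against normal reference heights from the morphometric data:

```python
def fracture_suspects(heights, threshold=0.8):
    """Flag vertebrae whose measured height is reduced relative to the
    mean height of the two adjacent vertebrae.  The threshold (here a
    hypothetical 20% reduction) controls sensitivity."""
    suspects = []
    for i in range(1, len(heights) - 1):
        neighbor_mean = (heights[i - 1] + heights[i + 1]) / 2.0
        if heights[i] < threshold * neighbor_mean:
            suspects.append(i)
    return suspects

# Vertebral heights (mm) along the spine; index 3 is collapsed
heights = [22.0, 23.0, 24.0, 15.0, 25.0, 26.0]
```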
Semi-automated location identification of catheters in digital chest radiographs
Show abstract
Localization of catheter tips is the most common task in intensive care unit imaging. In this work, catheters appearing in digital chest radiographs acquired by portable chest x-rays were tracked using a semi-automatic method. Because catheters are synthetic objects, their profiles do not vary drastically along their length. Therefore, we use forward-looking registration with normalized cross-correlation in order to take advantage of a priori information about the catheter profile. The registration is accomplished with a two-dimensional template representative of the catheter to be tracked, generated using two seed points given by the user. To validate catheter tracking with this method, we look at two metrics: accuracy and precision. The algorithm's results are compared to a ground truth established by catheter midlines marked by expert radiologists. Using 12 objects of interest, comprising naso-gastric and endo-tracheal tubes, chest tubes, and PICC and central venous catheters, we find that our algorithm can fully track 75% of the objects of interest, with an average tracking accuracy and precision of 85.0% and 93.6%, respectively, using the above metrics. Such a technique would be useful for physicians wishing to verify the positioning of catheter tips using chest radiographs.
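The registration step reduces, in essence, to sliding a catheter template and maximizing normalized cross-correlation. A one-dimensional sketch (the actual method uses a two-dimensional template built from two user seed points):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_match(template, signal):
    """Slide the template along an intensity profile and return the
    offset with the highest NCC, i.e. the registration step in 1-D."""
    scores = [ncc(template, signal[i:i + len(template)])
              for i in range(len(signal) - len(template) + 1)]
    return max(range(len(scores)), key=lambda i: scores[i])

# Dark catheter profile embedded in a bright background at offset 4
template = [9, 3, 9]
signal = [9, 9, 9, 9, 9, 3, 9, 9, 9, 9]
```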
Poster Session: Breast Imaging
Mass margins spiculations: agreement between ratings by observers and a computer scheme
Show abstract
This study investigated the agreement between breast mass spiculation levels as rated subjectively by observers and a computer scheme. An image dataset with 1,263 mass regions was selected. First, three experienced observers independently and subjectively rated the visualized spiculation levels of these mass regions and classified them into three categories (none/minimal, moderate, and severe/significant). We then developed a computerized scheme to detect mass margins and classify the spiculation levels of the suspected mass regions. The scheme applied a hybrid region growth algorithm to segment the mass regions. An edge map was computed inside a 30-pixel-wide band surrounding the mass boundary contour. The scheme then applied a threshold to convert the edge map into a binary image, followed by labeling and detecting line orientation. In the original edge map the scheme computed the average local pixel value fluctuation. In the binary edge map, the scheme computed the ratio between the number of "spiculated" pixels and the number of total pixels inside the band. The scheme also computed mass region conspicuity using the original image. Using these three features, a Bayesian Belief Network (BBN) was built to classify mass regions into one of the three spiculation categories. We compared the inter-observer variation as well as agreement levels between the subjective and computerized ratings. Agreement rates between paired observers ranged from 41.3% to 58.8% (Kappa = 0.136 to 0.309). The agreement between the computer scheme and observers' average rating was 49.2% (Kappa = 0.218). This study demonstrated a large inter-observer variability in subjective rating of mass spiculation levels, as well as a large difference between the rating results of a computerized scheme and observers. As a result, in an Interactive Computer-Aided Diagnosis (ICAD) environment, CAD-selected reference regions may be considered "very similar" by some observers and "not similar" by others. 
Hence, improving the selection of actually visually similar reference regions by a computerized scheme remains an important yet unsolved task for ICAD development.
An improved asymmetry measure to detect breast cancer
Show abstract
Radiologists can use the differences between the left and right breasts, or asymmetry, in mammograms to help detect certain malignant breast cancers. An image similarity method has been improved to make use of this knowledge base to recognize breast cancer. Image similarity is determined using computer-aided detection (CAD) prompts as the features, and then a cluster comparison is done to determine whether there is asymmetry. We develop the analysis through a combination of clustering and supervised learning of model parameters. This process correctly classifies cancerous mammograms 95% of the time, and all mammograms 84% of the time, and thus asymmetry is a measure that can play an important role in significantly improving computer-aided breast cancer detection systems. This technique represents an improvement in accuracy of 121% over commercial techniques on non-cancerous cases.
Most computer-aided detection (CAD) systems are tested on images that contain cancer, on the assumption that images without cancer would produce the same number of false positives. However, a pre-screening system is designed to remove normal cases from consideration, so incorporating pre-screening into CAD dramatically reduces the number of false positives reported by the CAD system. We define three methods for incorporating pre-screening into CAD, and improve the performance of the CAD system by over 70% at low levels of false positives.
A combined algorithm for breast MRI motion correction
Show abstract
Correction of patient motion is a fundamental preprocessing step for dynamic contrast-enhanced (DCE) breast MRI, removing artifacts induced by involuntary movement and facilitating quantitative analysis of contrast agent kinetics. Image registration algorithms commonly employed for this task align subsequent temporal images of the dynamic MRI by maximizing intensity-, correlation- or entropy-based similarity measures between image pairs. To compensate for global patient motion, an initial affine linear or rigid transformation is frequently estimated. Subsequently, local image variability is reduced by maximizing local similarity measures and using viscous-fluid or elastic regularization terms. We present a novel iterative scheme combining local and global registration into one single algorithm, limiting computational overhead, reducing interpolation artifacts and generally improving the quality of registration results. The relation between local and global motion is adjusted through the introduction of corresponding flexible weighting functions, allowing for a sound combination of both registration types and a potentially wider range of computable transformations. The proposed method is evaluated on both synthetic images and clinical breast MRI data. The results demonstrate that our method is stable and reliably compensates for motion artifacts typical of DCE MR mammography.
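The flexible weighting between global and local motion described above can be illustrated as a pointwise blend of two displacement fields. The data layout and the per-point weight in [0, 1] are assumptions for this sketch, not the authors' formulation:

```python
# Illustrative sketch: combine a global (affine/rigid) displacement estimate
# with a local non-rigid one via a flexible per-point weighting function.
def blended_displacement(u_global, u_local, weight):
    """u_global, u_local: lists of (dx, dy) displacement vectors at sample
    points; weight: per-point value in [0, 1], where 1 favours the global
    motion model and 0 the local one."""
    return [(w * gx + (1 - w) * lx, w * gy + (1 - w) * ly)
            for (gx, gy), (lx, ly), w in zip(u_global, u_local, weight)]
```

In an iterative scheme, the weights could shift from global toward local as registration converges.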
Automatic segmentation of relevant structures in DCE MR mammograms
Show abstract
The automatic segmentation of relevant structures such as skin edge, chest wall, or nipple in dynamic contrast
enhanced MR imaging (DCE MRI) of the breast provides additional information for computer aided diagnosis (CAD) systems. Automatic reporting using BI-RADS criteria benefits from information about the location of those
structures. Lesion positions can be automatically described relatively to such reference structures for reporting
purposes. Furthermore, this information can assist data reduction for computation expensive preprocessing such
as registration, or for visualization of only the segments of current interest. In this paper, a novel automatic method for determining the air-breast boundary (skin edge), approximating the chest wall, and locating the nipples is presented. The method consists of several steps built on top of one another. Automatic threshold computation yields the air-breast boundary, which is then analyzed to determine the location of the nipple. Finally, the results of both steps serve as the starting point for approximating the chest wall. The proposed process was evaluated on a large data set of DCE MRI recorded with T1 sequences and yielded reasonable results in all cases.
Computerized mass detection in whole breast ultrasound images: reduction of false positives using bilateral subtraction technique
Show abstract
The comparison of left and right mammograms is a common technique used by radiologists for the detection and
diagnosis of masses. In mammography, computer-aided detection (CAD) schemes using bilateral subtraction
technique have been reported. However, in breast ultrasonography, there are no reports on CAD schemes using
comparison of left and right breasts. In this study, we propose a scheme of false positive reduction based on
bilateral subtraction technique in whole breast ultrasound images. Mass candidate regions are detected by using
the information of edge directions. Bilateral breast images are registered with reference to the nipple positions
and skin lines. A false positive region is detected based on a comparison of the average gray values of a mass
candidate region and a region with the same position and same size as the candidate region in the contralateral
breast. In evaluating the effectiveness of the false positive reduction method, three normal and three abnormal
bilateral pairs of whole breast images were employed. These abnormal breasts included six masses larger than
5 mm in diameter. The sensitivity was 83% (5/6) with 13.8 (165/12) false positives per breast before applying
the proposed reduction method. By applying the method, false positives were reduced to 4.5 (54/12) per breast
without removing a true positive region. This preliminary study indicates that the bilateral subtraction technique
is effective for improving the performance of a CAD scheme in whole breast ultrasound images.
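The bilateral comparison step described above can be sketched as follows; the rectangular region layout and the gray-value difference threshold are illustrative assumptions:

```python
# Illustrative sketch of the false-positive reduction step: a mass candidate
# is rejected when its mean gray value is close to that of the region at the
# same (registered) position and size in the contralateral breast.
def mean_gray(image, top, left, h, w):
    """Average gray value over a h-by-w region of a 2D list image."""
    vals = [image[r][c] for r in range(top, top + h) for c in range(left, left + w)]
    return sum(vals) / len(vals)

def is_false_positive(ipsi, contra, top, left, h, w, threshold=10.0):
    """True when the ipsilateral candidate region and the corresponding
    contralateral region have similar average gray values (assumed criterion)."""
    diff = abs(mean_gray(ipsi, top, left, h, w) - mean_gray(contra, top, left, h, w))
    return diff < threshold
```

In practice the two breasts would first be registered using the nipple positions and skin lines, as the abstract describes.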
A versatile knowledge-based clinical imaging annotation system for breast cancer screening
Show abstract
Medical information is evolving towards more complex multimedia data representation, as new imaging modalities
are made available by sophisticated devices. Features such as segmented lesions can now be extracted through
analysis techniques and need to be integrated into clinical patient data. The management of structured information
extracted from multimedia has been addressed in knowledge based annotation systems providing methods
to attach interpretative semantics to multimedia content. Building on these methods, we develop a new clinical
imaging annotation system for computer aided breast cancer screening. The proposed system aims at more
consistent, efficient and standardised data mark-up of digital and digitised radiology images. The objective is
to provide detailed characterisation of abnormalities as an aid in the diagnostic task through integrated annotation
management. The system combines imaging analysis results and radiologist diagnostic information about
suspicious findings by mapping well-established visual and low-level descriptors into pathology specific profiles.
The versatile characterisation allows differentiating annotation descriptors for different types of findings. Our
approach of semi-automatic integrated annotations supports increased quality assurance in screening practice.
This is achieved through detailed and objective patient imaging information while providing user-friendly means
for their manipulation that is oriented to relieving the radiologist's workload.
A hybrid active contour model for mass detection in digital breast tomosynthesis
Show abstract
In this paper we present a novel approach for mass contour detection for 3D computer-aided detection (CAD) in
digital breast tomosynthesis (DBT) data-sets. A hybrid active contour model, working directly on the projected
views, is proposed. The responses of a wavelet filter applied on the projections are thresholded and combined
to obtain markers for mass candidates. The contours of markers are extracted and serve as initialization for
the active contour model, which is then used to extract mass contours in DBT projection images. A hybrid
model is presented, taking into account several image-based external forces and implemented using a level-set
formulation. A feature vector is computed from the detected contour, which may serve as input to a dedicated
classifier. The segmentation method is applied to simulated images and to clinical cases. Image segmentation
results are presented and compared to two standard active contour models. Evaluation of the performance on
clinical data is obtained by comparison to manual segmentation by an expert. Performance on simulated images
and visual performance assessment provide further illustration of the performance of the presented approach.
Computer-aided detection of mammographic masses based on content-based image retrieval
Show abstract
A method for computer-aided detection (CAD) of mammographic masses is proposed and a prototype CAD system is
presented. The method is based on content-based image retrieval (CBIR). A mammogram database containing 2000
mammographic regions is built in our prototype CBIR-CAD system. Every region of interest (ROI) in the database has
known pathology. Specifically, there are 583 ROIs depicting biopsy-proven masses, and the remaining 1,417 ROIs are normal.
Whenever a suspicious ROI is detected in a mammogram by a radiologist, it can be submitted as a query to this CBIR-CAD
system. As query results, a series of similar ROI images, together with their known pathology, will
be retrieved from the database and displayed on the screen in descending order of their similarity to the query ROI to
help the radiologist make the diagnosis decision. Furthermore, our CBIR-CAD system will output a decision index
(DI) to quantitatively indicate the probability that the query ROI contains a mass. The DI is calculated by the query
matches. In the querying process, 24 features are extracted from each ROI to form a 24-dimensional vector. Euclidean
distance in the 24-dimensional feature vector space is applied to measure the similarities between ROIs. The prototype
CBIR-CAD system is evaluated based on the leave-one-out sampling scheme. The experimental results showed that the
system can achieve a receiver operating characteristic (ROC) area index Az = 0.84 for the detection of mammographic
masses, which is better than the best results achieved by other known mass CAD systems.
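The querying process described above reduces to nearest-neighbour retrieval in feature space. A minimal sketch follows, in which the decision index is taken as the fraction of mass ROIs among the top retrievals; this DI formula is an assumption, since the abstract does not spell out how the DI is calculated from the query matches:

```python
# Illustrative sketch: rank database ROIs by Euclidean distance in feature
# space and derive a decision index from the labels of the nearest retrievals.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def decision_index(query, database, k=5):
    """query: feature vector; database: list of (feature_vector, is_mass)
    pairs. Returns the fraction of mass ROIs among the k nearest neighbours
    (an assumed stand-in for the paper's decision index)."""
    ranked = sorted(database, key=lambda item: euclidean(query, item[0]))
    top = ranked[:k]
    return sum(1 for _, is_mass in top if is_mass) / len(top)
```

In the paper each ROI is a 24-dimensional vector; the sketch works for any fixed dimensionality.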
Fast microcalcification detection on digital breast tomosynthesis datasets
Show abstract
In this paper, we present a fast method for microcalcification detection in Digital Breast Tomosynthesis. Instead of
applying the straightforward reconstruction/filtering/thresholding approach, the filtering is performed on the projections
before simple back-projection reconstruction. This leads to a reduced computation time since the number of projections
is generally much smaller than the number of slices. For an average breast thickness and a typical number of
projections, the number of operations is reduced by a factor in the range of 2 to 4. At the same time, the approach yields
a negligible decrease of the contrast-to-noise ratio in the reconstructed slices. Image segmentation results are presented
and compared to the previous method by visual performance assessment.
Cross-digitizer robustness of a knowledge-based CAD system for mass detection in mammograms
Show abstract
Multiplatform application of CAD systems in mammography is often limited due to image preprocessing steps that are
tailored to the acquisition protocol such as the digitizer. The purpose of this study was to validate our knowledge-based
CAD system across two different digitizers. Our system relies on the similarity of a query image with known cases
stored in a knowledge database. Image similarity is assessed using information theory, without any image preprocessing.
Therefore, we hypothesize that our CAD system can operate robustly across digitizers. We tested the hypothesis using
two different datasets of mammographic regions of interest (ROIs) for mass detection. The two databases consisted of
1,820 and 1,809 ROIs extracted from DDSM mammograms digitized using a Lumisys and a Howtek scanner
respectively. Three experiments were performed. First, we evaluated the CAD system on each dataset independently.
Then, we evaluated the system on each dataset when the other dataset was used as the knowledge database. Finally, we
assessed the CAD detection performance when the knowledge database contained mixed cases. Our CAD system had
similar performance across digitizers (Az=0.87±0.01 for Lumisys vs. Az=0.8±0.01 for Howtek) when assessed
independently. When the system was tested on one dataset while the other was used as the knowledge database, ROC
performance declined marginally, mainly based on the partial ROC area index. This result suggests that blind translation
of the system without some experience with cases digitized with the same digitizer is not recommended when the system
is expected to operate at high sensitivity decision thresholds. When the system operated with a knowledge database of
mixed cases, its performance across digitizers was robust, yet slightly inferior to what was observed independently.
A preliminary study of content-based mammographic masses retrieval
Show abstract
The purpose of this study is to develop a Content-Based Image Retrieval (CBIR) system for mammographic computer-aided
diagnosis. We have investigated the potential of using shape, texture, and intensity features to categorize masses
that may lead to sorting similar image patterns in order to facilitate clinical viewing of mammographic masses.
Experiments were conducted with a database containing 243 masses (122 benign and 121 malignant). The retrieval
performance of each individual feature was evaluated, and the best precision was determined to be 79.9% when using
the curvature scale space descriptor (CSSD). By combining several selected shape features for retrieval, the precision
was found to improve to 81.4%. By combining the shape, texture, and intensity features together, the precision was
found to improve to 82.3%.
Objective assessment of the aesthetic outcomes of breast cancer treatment: toward automatic localization of fiducial points on digital photographs
Show abstract
The contemporary goals of breast cancer treatment are not limited to cure but include maximizing quality of
life. All breast cancer treatment can adversely affect breast appearance. Developing objective, quantifiable methods to
assess breast appearance is important to understand the impact of deformity on patient quality of life, guide selection of
current treatments, and make rational treatment advances. A few measures of aesthetic properties such as symmetry have
been developed. They are computed from the distances between manually identified fiducial points on digital
photographs. However, this is time-consuming and subject to intra- and inter-observer variability. The purpose of this
study is to investigate methods for automatic localization of fiducial points on anterior-posterior digital photographs
taken to document the outcomes of breast reconstruction. Particular emphasis is placed on automatic localization of the
nipple complex since the most widely used aesthetic measure, the Breast Retraction Assessment, quantifies the
symmetry of nipple locations. The nipple complexes are automatically localized using normalized cross-correlation with
a template bank of variants of Gaussian and Laplacian of Gaussian filters. A probability map of likely nipple locations
determined from the image database is used to reduce the number of false positive detections from the matched filter
operation. The accuracy of the nipple detection was evaluated relative to markings made by three human observers. The
impact of using the fiducial point locations as identified by the automatic method, as opposed to the manual method, on
the calculation of the Breast Retraction Assessment was also evaluated.
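The matched-filter step described above can be sketched as normalized cross-correlation of a template with every image patch. A single template stands in here for the full bank of Gaussian and Laplacian-of-Gaussian variants, and the toy data are illustrative:

```python
# Illustrative sketch: slide a template over the image and return the
# position with the highest normalized cross-correlation (NCC) score.
import math

def ncc(patch, template):
    """Normalized cross-correlation of two equal-size 2D lists."""
    p = [v for row in patch for v in row]
    t = [v for row in template for v in row]
    mp, mt = sum(p) / len(p), sum(t) / len(t)
    num = sum((a - mp) * (b - mt) for a, b in zip(p, t))
    den = math.sqrt(sum((a - mp) ** 2 for a in p) * sum((b - mt) ** 2 for b in t))
    return num / den if den else 0.0

def best_match(image, template):
    """Return the (row, col) of the patch with maximum NCC."""
    th, tw = len(template), len(template[0])
    best, pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = ncc(patch, template)
            if score > best:
                best, pos = score, (r, c)
    return pos
```

The probability map mentioned in the abstract would then down-weight matches at unlikely nipple locations.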
Analysis of texture patterns in medical images with an application to breast imaging
Show abstract
We propose a methodological framework for texture analysis in medical images that is based on Vector Quantization
(VQ), a method traditionally used for image compression. In this framework, the codeword usage histogram is used as a
texture descriptor of the image. This descriptor can be used effectively for similarity searches, clustering, classification
and other retrieval operations. We present an application of this approach to the analysis of x-ray galactograms; we
analyze the texture in retroareolar regions of interests (ROIs) in order to distinguish between patients with reported
galactographic findings and normal subjects. We decompose these ROIs into equi-size blocks and use VQ to represent
each block with the closest codeword from a codebook. Each image is represented as a vector of frequencies of
codeword appearance. We perform k-nearest neighbor classification of the texture patterns employing the histogram
model as a similarity measure. The classification accuracy reached up to 96% for certain experimental settings; these
results demonstrate that the proposed approach can be effective in performing similarity analysis of texture patterns in
breast imaging. The proposed texture analysis framework has a potential to assist the interpretation of clinical images in
general and facilitate the investigation of relationships among structure, texture and function or pathology.
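The codeword usage histogram described above can be sketched in a few lines; the codebook here is a toy assumption rather than one trained by vector quantization:

```python
# Illustrative sketch: map each image block to its nearest codeword and
# summarize the image by the normalized codeword usage histogram.
import math

def nearest_codeword(block, codebook):
    """Index of the codeword closest (Euclidean) to the block vector."""
    return min(range(len(codebook)),
               key=lambda i: math.dist(block, codebook[i]))

def usage_histogram(blocks, codebook):
    """Normalized histogram of codeword usage over all blocks."""
    counts = [0] * len(codebook)
    for block in blocks:
        counts[nearest_codeword(block, codebook)] += 1
    total = sum(counts)
    return [c / total for c in counts]
```

Two images can then be compared through a histogram distance, which is the similarity measure used for the k-nearest-neighbor classification in the paper.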
Fractal analysis for assessing tumour grade in microscopic images of breast tissue
Show abstract
In 2006, breast cancer is expected to continue as the leading form of cancer diagnosed in women, and the second leading
cause of cancer mortality in this group. A method that has proven useful for guiding the choice of treatment strategy is
the assessment of histological tumor grade. The grading is based upon the mitosis count, nuclear pleomorphism, and
tubular formation, and is known to be subject to inter-observer variability. Since cancer grade is one of the most
significant predictors of prognosis, errors in grading can affect patient management and outcome. Hence, there is a need
to develop a breast cancer-grading tool that is minimally operator dependent to reduce variability associated with the
current grading system, and thereby reduce uncertainty that may impact patient outcome. In this work, we explored the
potential of a computer-based approach using fractal analysis as a quantitative measure of cancer grade for breast
specimens. More specifically, we developed and optimized computational tools to compute the fractal dimension of
low- versus high-grade breast sections and found them to be significantly different, 1.3±0.10 versus 1.49±0.10,
respectively (Kolmogorov-Smirnov test, p<0.001). These results indicate that fractal dimension (a measure of
morphologic complexity) may be a useful tool for demarcating low- versus high-grade cancer specimens, and has
potential as an objective measure of breast cancer grade. Such prognostic value could provide more sensitive and
specific information that would reduce inter-observer variability by aiding the pathologist in grading cancers.
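Fractal dimension, the quantity used above to separate low- from high-grade sections, is commonly estimated by box counting. A minimal sketch over a set of foreground pixel coordinates follows; the box sizes and data layout are assumptions:

```python
# Illustrative box-counting estimate of fractal dimension: count occupied
# boxes at several scales and fit the slope of log(count) vs log(1/size).
import math

def box_count(points, box_size):
    """Number of occupied boxes of a given size covering (x, y) points."""
    return len({(x // box_size, y // box_size) for x, y in points})

def fractal_dimension(points, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log(count) against log(1/size)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A filled region yields a dimension near 2 and a smooth curve near 1; spiculated or irregular structures fall in between, which is what makes the measure discriminative.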
Initial human subject results for breast bi-plane correlation imaging technique
Show abstract
Computer aided detection (CADe) systems often present multiple false-positives per image in projection
mammography due to overlapping anatomy. To reduce the number of such false-positives, we propose
performing CADe on image pairs acquired using a bi-plane correlation imaging (BCI) technique. In this
technique, images are acquired of each breast at two different projection angles. A traditional CADe
algorithm operates on each image to identify suspected lesions. The suspicious areas from both projections
are then geometrically correlated, eliminating any lesion that is not identified on both views. Proof-of-concept
studies showed that the BCI technique reduced the number of false-positives per case by up to 70%.
Analysis of percent density estimates from digital breast tomosynthesis projection images
Show abstract
Women with dense breasts have an increased risk of breast cancer. Breast density is typically measured as the percent
density (PD), the percentage of non-fatty (i.e., dense) tissue in breast images. Mammographic PD estimates vary, in
part, due to the projective nature of mammograms. Digital breast tomosynthesis (DBT) is a novel radiographic method
in which 3D images of the breast are reconstructed from a small number of projection (source) images, acquired at
different positions of the x-ray focus. DBT provides superior visualization of breast tissue and has improved sensitivity
and specificity as compared to mammography. Our long-term goal is to test the hypothesis that PD obtained from DBT
is superior in estimating cancer risk compared with other modalities. As a first step, we have analyzed the PD estimates
from DBT source projections since the results would be independent of the reconstruction method. We estimated PD
from MLO mammograms (PDM) and from individual DBT projections (PDT). We observed good agreement between
PDM and PDT from the central projection images of 40 women. This suggests that variations in breast positioning, dose,
and scatter between mammography and DBT do not negatively affect PD estimation. The PDT estimated from
individual DBT projections of nine women varied with the angle between the projections. This variation is caused by
the 3D arrangement of the breast dense tissue and the acquisition geometry.
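The percent density measure described above is simply the fraction of breast pixels classified as dense. A minimal sketch follows, with a fixed intensity threshold standing in for the actual dense-tissue segmentation:

```python
# Illustrative sketch: percent density (PD) as the percentage of pixels
# inside the breast mask whose intensity marks them as dense tissue.
def percent_density(image, breast_mask, dense_threshold=128):
    """image, breast_mask: equal-size 2D lists; the mask marks breast pixels.
    The fixed threshold is an assumed stand-in for a real segmentation."""
    breast = dense = 0
    for img_row, mask_row in zip(image, breast_mask):
        for value, inside in zip(img_row, mask_row):
            if inside:
                breast += 1
                if value >= dense_threshold:
                    dense += 1
    return 100.0 * dense / breast if breast else 0.0
```

Applied per DBT projection, this gives the PDT values whose variation with projection angle the abstract reports.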
Establishing correspondence in mammograms and tomosynthesis projections
Show abstract
We are developing a computer based aid for automated analysis of digital mammograms and digital breast
tomosynthesis (DBT) images. The ultimate goal is to establish correspondence between regions in images obtained
using different modalities. This paper is focused on establishing point correspondences between mammograms and
DBT projections. Correspondence has been established utilizing two similarity criteria, one based on topology of
prominent elongated anatomical structures, and another based on texture near the potential correspondence points. We
evaluated robustness of the described technique with respect to variations in x-ray tube angle. The evaluation included
72 image pairs: 9 DBT projections from both left and right breasts of 4 women and corresponding mammograms. The
evaluation was performed by a consensus between two trained observers. Two images with highlighted pairs of
automatically established point correspondences were presented. The observers were asked to manually identify the
correct correspondences and measure the displacement error. Points for which correspondences could not be manually
identified were excluded. The topology method automatically generated an average of 12.2 correspondences per image
pair, for which the average measured displacement error was 1.33 mm (N = 10.5). The texture method generated 18.6
correspondences with an average measured error of 1.80 mm (N = 14.7). The algorithms were found to be robust; the
number of correspondences and the average displacement did not significantly change with variations in tube angle.
Poster Session: CAD Issues
Visualization of CAD results to the radiologist: Influence of the marker type on radiologist's sensitivity for the detection of pulmonary nodules
Show abstract
Purpose: The efficiency with which a radiologist detects pulmonary nodules with the help of CAD is influenced
by the user interface of the system. Markers with a visually dominant appearance may distract the radiologist from
other parts of the screen. Our purpose was to analyse the influence of different CAD markers on radiologists'
performance.
Materials and methods: Ten radiologists analysed 150 images of chest CT slices. Every image contained a CAD
marker; five different types of markers were used, each on 30 images (1: thick-walled square, 2: thin-walled
circle, 3: small arrow, 4: pixel-sized point on the nodule, 5: very subtle change of colour). One hundred images
contained one nodule: CAD markers marked this finding in 50 cases; in the other 50 cases a false-positive finding was marked
instead. The remaining 50 images contained no nodule but carried a marker on a false-positive finding. For each image,
the radiologists had to decide whether a nodule was visible and either click on the nodule or on a button labelled "no finding".
Sensitivity and specificity were calculated for each marker type.
Results: Mean sensitivity was 59%, 62%, 64%, 65% and 64% for marker 1 to 5, respectively. Specificity was 50%,
51%, 64%, 45% and 67%. In the cases with false positive findings sensitivity for detection of the unmarked nodule
was 41%, 58%, 59%, 49% and 54%.
New work to be presented: The study shows that the marker type influences radiologists' sensitivity and their distraction
from other findings.
Conclusion: Of the tested markers a small arrow was most efficient for the presentation of the results to the
radiologist.
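The per-marker sensitivity and specificity figures above follow directly from the recorded responses; a minimal sketch of the computation (the data layout is assumed):

```python
# Illustrative sketch: sensitivity and specificity over a set of cases,
# each a (ground_truth_has_nodule, reader_reported_nodule) pair.
def sensitivity_specificity(cases):
    tp = sum(1 for truth, resp in cases if truth and resp)
    tn = sum(1 for truth, resp in cases if not truth and not resp)
    pos = sum(1 for truth, _ in cases if truth)
    neg = len(cases) - pos
    sensitivity = tp / pos if pos else 0.0
    specificity = tn / neg if neg else 0.0
    return sensitivity, specificity
```

Run once per marker type over its 30 images, this yields the per-marker values reported in the Results paragraph.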
Some practical issues for assessment of computer-aided diagnostic scheme
Show abstract
Computer-aided diagnostic (CAD) schemes have been developed for assisting radiologists in the detection of various
lesions in medical images. The reliable evaluation of CAD schemes is an important task in the field of CAD research.
In the past, many evaluation approaches, such as the resubstitution, leave-one-out, cross-validation, and hold-out methods
have been used for evaluating the performance of various CAD schemes. However, some important issues in the
evaluation of CAD schemes have not been systematically analyzed, either theoretically or experimentally. The first
important issue is the analysis and comparison of various evaluation methods in terms of some characteristics, in
particular, the bias and the generalization performance of the trained CAD schemes. The second includes the analysis
of pitfalls in the incorrect use of various evaluation methods and the effective approaches to the reduction of the bias and
variance caused by these pitfalls. We attempt to address these important issues in this article. We believe that this
article will be useful to researchers in the field of CAD research for selecting appropriate evaluation methods and for
improving the reliability of the estimated performance of their CAD schemes.
An image database management system for conducting CAD research
Show abstract
The development of image databases for CAD research is not a trivial task. The collection and management of images
and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and
centralizing the methods in which these data are maintained, one can generate subsets of a larger database that match the
specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented
management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based
database system for the storage and management of research-specific medical image metadata was designed for
use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast
ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the
user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable
parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL.
A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was
created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and
is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multi-modality
lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata
and successfully generates subsets of cases that match the user's desired search criteria.
The Lung Image Database Consortium (LIDC): a quality assurance model for the collection of expert-defined truth in lung-nodule-based image analysis studies
Show abstract
The development of computer-aided diagnostic (CAD) systems requires an initial establishment of "truth" by
expert human observers. Potential inconsistencies in the "truth" data must be identified and corrected before investigators can rely on this data. We developed a quality assurance model to supplement the "truth" collection process for lung nodules on CT scans. A two-phase process was established for the interpretation of CT scans by four radiologists. During the initial "blinded read," radiologists independently assigned lesions they identified into one of
three categories: "nodule ⩾ 3mm," "nodule < 3mm," or "non-nodule ⩾ 3mm." During the subsequent "unblinded read,"
the blinded read results of all radiologists were revealed. The radiologists then independently reviewed their marks
along with their colleagues' marks; a radiologist's own marks could be left unchanged, deleted, switched in terms of
lesion category, or additional marks could be added. The final set of marks underwent quality assurance, which
consisted of identification of potential errors that occurred during the reading process and error correction. All marks
were visually grouped into discrete nodules. Six categories of potential error were defined, and any nodule with a mark
that satisfied the criterion for one of these categories was referred to the radiologist who assigned the mark in question.
The radiologist either corrected the mark or confirmed that the mark was intentional. A total of 829 nodules were
identified by at least one radiologist in 100 CT scans through the two-phase process designed to capture "truth." The
quality assurance process yielded 81 nodules with potential errors. The establishment of "truth" must incorporate a
quality assurance model to guarantee the integrity of the "truth" that will provide the basis for the training and testing of
CAD systems.
Poster Session: Cardiac/Vasculature/Brain Imaging
Forming a reference standard from LIDC data: impact of reader agreement on reported CAD performance
Show abstract
The Lung Image Database Consortium (LIDC) has provided a publicly available collection of CT images with nodule
markings from four radiologists. The LIDC protocol does not require radiologists to reach a consensus during the
reading process, and as a result, there are varying levels of reader agreement for each potential nodule with no explicit
reference standard for nodules. The purpose of this work was to investigate the effects of the level of reader agreement
on the development of a reference standard and the subsequent impact on CAD performance. Ninety series were
downloaded from the LIDC database. Four different reference standards were created based on the markings of the
LIDC radiologists, reflecting four different levels of reader agreement. All series were analyzed with a research CAD
system and its performance was measured against each of the four standards. Between the standards with the lowest
(any 1 of 4 readers) and highest (all 4 readers) required level of reader agreement, the number of nodules ⩾ 3 mm
decreased 48% (from 174 to 90) and CAD sensitivity for nodules ⩾ 3 mm increased from 0.70 ± 0.34 to 0.79 ± 0.35.
Between the same reference standards, the number of nodules < 3 mm decreased 84% (from 483 to 75) and CAD
sensitivity for nodules < 3 mm increased from 0.30 ± 0.29 to 0.51 ± 0.45. This research illustrates the importance of
indicating the method used to form the reference standard, since the method influences both the number of nodules and
reported CAD performance.
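Forming reference standards at different agreement levels, as described above, amounts to thresholding the number of readers who marked each nodule; a minimal sketch with an assumed data layout:

```python
# Illustrative sketch: build a reference standard containing every nodule
# marked by at least min_readers of the four LIDC radiologists.
def reference_standard(nodule_marks, min_readers):
    """nodule_marks: dict mapping nodule id -> set of reader ids that
    marked it. Returns the ids meeting the required agreement level."""
    return {nid for nid, readers in nodule_marks.items()
            if len(readers) >= min_readers}
```

Sweeping min_readers from 1 to 4 reproduces the four standards against which the CAD system was scored.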
Automated diagnosis and prediction of Alzheimer disease using magnetic resonance image
Show abstract
Magnetic resonance imaging (MRI) has provided imaging support for the clinical diagnosis of Alzheimer disease (AD) and
the prediction of its progression. Currently, the clinical use of MRI data in AD diagnosis is qualitative, relying on visual
inspection, and less accurate. To assist physicians in improving the accuracy and sensitivity of AD diagnosis and the
prediction of clinical outcome, we developed a computer-assisted analysis package that analyzes the MRI data of an
individual patient in comparison with a group of normal controls. The package is based on the principles of the
well-established and widely used voxel-based morphometry (VBM) and the SPM software. The entire analysis procedure is
automated and streamlined: with a single mouse click, the whole procedure finishes within 15 minutes. With the
interactive display and the anatomical automatic labeling toolbox, the final result and report supply the regional brain
structural differences, a quantitative assessment, and visualizations for inspection by physicians and researchers. The
brain regions found to be affected by AD are largely consistent with the clinical diagnoses reviewed by physicians. In
summary, the package provides physicians with an automated assistant tool for MRI-based prediction and could be
valuable in supporting their clinical diagnostic decisions.
Computerized scheme for detection of arterial occlusion in brain MRA images
Show abstract
Magnetic resonance angiography (MRA) is routinely employed in the diagnosis of cerebrovascular disease. Unruptured
aneurysms and arterial occlusions can be detected in examinations using MRA. This paper describes a computerized
detection method of arterial occlusion in MRA studies. Our database consists of 100 MRA studies, including 85 normal
cases and 15 abnormal cases with arterial occlusion. Detection of abnormality is based on comparison with a reference (normal) MRA study in which all vessels are known. Vessel regions in a 3D target MRA study are first segmented using thresholding and region-growing techniques. Image registration is then performed to maximize the overlap of the vessel regions in the target and reference images. The segmented vessel regions are then classified into eight arteries by comparing the target image with the reference image. The relative lengths of the eight arteries are used as eight features for classifying normal and arterial-occlusion cases. A classifier based on the distance of a case from the center of the distribution of normal cases is employed to distinguish normal from abnormal cases. The sensitivity and specificity for the detection of abnormal cases with arterial occlusion were 80.0% (12/15) and 95.3% (81/85), respectively. These results demonstrate the potential of our proposed method for detecting arterial occlusion.
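A minimal sketch of the distance-from-normal-center rule described above, assuming a Euclidean distance and invented feature values; the paper's actual distance measure and decision threshold are not specified in the abstract.

```python
import math

def center(cases):
    """Component-wise mean of the normal-case feature vectors."""
    n = len(cases)
    return [sum(c[k] for c in cases) / n for k in range(len(cases[0]))]

def distance_from_center(case, mean):
    """Euclidean distance of a feature vector from the normal-case center."""
    return math.sqrt(sum((x - m) ** 2 for x, m in zip(case, mean)))

# Toy feature vectors: relative lengths of the 8 arteries.
# Normal cases cluster near 1.0 in every component.
normals = [[1.0] * 8, [0.95] * 8, [1.05] * 8]
mean = center(normals)

occluded = [1.0, 1.0, 0.2, 1.0, 1.0, 1.0, 1.0, 1.0]  # one artery much shorter
threshold = 0.5  # illustrative decision value

print(distance_from_center(normals[0], mean) < threshold)  # normal: close to center
print(distance_from_center(occluded, mean) > threshold)    # abnormal: far from center
```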
CAD of myocardial perfusion
Show abstract
Our purpose is the automated evaluation of the physiological relevance of lesions in coronary angiograms. We aim to
extract as much as possible quantitative information about the physiological condition of the heart from standard
angiographic image sequences. Coronary angiography is still the gold standard for evaluating and diagnosing coronary
abnormalities as it is able to locate precisely the coronary artery lesions. The dimensions of the stenosis can be assessed
nowadays successfully with image processing based Quantitative Coronary Angiography (QCA) techniques. Our
purpose is to assess the clinical relevance of the pertinent stenosis. We therefore analyze the myocardial perfusion as
revealed in standard angiographic image sequences. In a Region-of-Interest (ROI) on the angiogram (without an
overlaying major blood vessel) the contrast is measured as a function of time (the so-called time-density curve). The
required hyperemic state, normally reached under exercise, is induced artificially by the injection of a vasodilator drug such as papaverine. To minimize motion artifacts, we position the ROI on end-diastolic images, selected using the recorded ECG signal, from both a basal and a hyperemic run in the same projection. We present the development of the algorithms together with the results of a small study of 20 patients who were catheterized following the standard protocol.
Classification algorithm of pulmonary vein and artery based on multi-slice CT image
Show abstract
Recently, multi-slice helical CT technology has been developed. Unlike conventional helical CT, it acquires CT images of two or more slices in a single scan, yielding many thin-slice images with clear contrast in one scanning pass. The purpose of this work is to evaluate a proposed method for automatically extracting the bronchus and the pulmonary veins and arteries from multi-slice CT images. The bronchus is extracted by applying a region-growing technique together with morphological filters and a 3D distance transformation. The results indicate that the proposed algorithm can accurately extract the bronchus from multi-slice CT images. In this report, we used pulmonary veins and arteries marked by a physician, with the aim of identifying the features needed to classify pulmonary veins and arteries on the basis of anatomical characteristics. Classifying the pulmonary veins and arteries is considered necessary for judging whether a nodule is benign or malignant. Separating the points of contact between pulmonary blood vessels is particularly important for this classification, so this report also aims to characterize the features of those contact regions.
Automated detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images: multiscale hierarchical expectation-maximization segmentation of vessels and PEs
Show abstract
CT pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary
embolism (PE). We are developing a computer-aided detection (CAD) system to assist radiologists in PE detection in CTPA images. 3D multiscale filters, in combination with a newly designed response function derived from the eigenvalues of Hessian matrices, are used to enhance vascular structures, including vessel bifurcations, and to suppress non-vessel structures such as the lymphoid tissues surrounding the vessels. A hierarchical EM estimation is then used to
segment the vessels by extracting the high response voxels at each scale. The segmented vessels are pre-screened for
suspicious PE areas using a second adaptive multiscale EM estimation. A rule-based false positive (FP) reduction
method was designed to identify the true PEs based on the features of PE and vessels. 43 CTPA scans were used as an
independent test set to evaluate the performance of PE detection. Experienced chest radiologists identified the PE
locations which were used as "gold standard". 435 PEs were identified in the artery branches, of which 172 and 263
were subsegmental and proximal to the subsegmental, respectively. The computer-detected volume was considered true
positive (TP) when it overlapped with 10% or more of the gold standard PE volume. Our preliminary test results show
that, at an average of 33 and 24 FPs/case, the sensitivities of our PE detection method were 81% and 78%, respectively,
for proximal PEs, and 79% and 73%, respectively, for subsegmental PEs. The study demonstrates the feasibility of identifying PEs accurately on CTPA images with the automated method. Further study is underway to improve the sensitivity and reduce the FPs.
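The vessel-enhancement step can be illustrated with the classic Frangi vesselness response, used here as a stand-in since the abstract does not publish its newly designed response function; the eigenvalue triples and parameters below are illustrative only.

```python
import math

def vesselness(l1, l2, l3, alpha=0.5, beta=0.5, c=10.0):
    """Frangi-style tubular response; eigenvalues sorted so |l1| <= |l2| <= |l3|.
    Bright tubes on a dark background have l2 and l3 strongly negative."""
    if l2 >= 0 or l3 >= 0:
        return 0.0                               # not a bright tubular structure
    ra = abs(l2) / abs(l3)                       # distinguishes plate from line
    rb = abs(l1) / math.sqrt(abs(l2 * l3))       # deviation from a blob
    s = math.sqrt(l1 * l1 + l2 * l2 + l3 * l3)   # second-order structureness
    return ((1 - math.exp(-ra * ra / (2 * alpha ** 2)))
            * math.exp(-rb * rb / (2 * beta ** 2))
            * (1 - math.exp(-s * s / (2 * c ** 2))))

tube = vesselness(-0.1, -20.0, -22.0)   # vessel-like eigenvalue pattern
sheet = vesselness(-0.1, -0.2, -22.0)   # wall-like eigenvalue pattern
print(tube > 0.5, sheet < 0.01)         # high response on the tube, low on the sheet
```

In a multiscale scheme, this response would be computed over several Gaussian scales and the maximum taken per voxel.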
Poster Session: Colonography
Validating Pareto optimal operation parameters of polyp detection algorithms for CT colonography
Show abstract
We evaluated a Pareto front-based multi-objective evolutionary algorithm for optimizing our CT colonography
(CTC) computer-aided detection (CAD) system. The system identifies colonic polyps based on curvature-based and volumetric features, where a set of thresholds for these features was optimized by the evolutionary algorithm.
We utilized a two-fold cross-validation (CV) method to test if the optimized thresholds can be generalized
to new data sets. We performed the CV method on 133 patients; each patient had a prone and a supine scan.
There were 103 colonoscopically confirmed polyps resulting in 188 positive detections in CTC reading from either
the prone or the supine scan or both. In the two-fold CV, we randomly divided the 133 patients into two
cohorts. Each cohort was used in turn to obtain a Pareto front with the multi-objective genetic algorithm, and the resulting set of optimized thresholds was applied to the other (test) cohort to obtain test results. This process was repeated twice so that
each cohort was used in the training and testing process once. We averaged the two training Pareto fronts as
our final training Pareto front and averaged the test results from the two runs in the CV as our final test results.
Our experiments demonstrated that the averaged testing results were close to the mean Pareto front determined
from the training process. We conclude that the Pareto front-based algorithm appears to be generalizable to
new test data.
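Extracting the non-dominated (Pareto) set over sensitivity and false-positive-rate operating points can be sketched as follows; the operating points are invented, and the genetic-algorithm machinery of the paper is omitted.

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better
    in at least one (higher sensitivity, lower false-positive rate)."""
    sens_a, fp_a = a
    sens_b, fp_b = b
    return (sens_a >= sens_b and fp_a <= fp_b) and (sens_a > sens_b or fp_a < fp_b)

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy (sensitivity, FPs-per-scan) operating points from candidate thresholds.
points = [(0.90, 8.0), (0.85, 4.0), (0.85, 6.0), (0.70, 2.0), (0.60, 3.0)]
front = sorted(pareto_front(points))
print(front)   # [(0.7, 2.0), (0.85, 4.0), (0.9, 8.0)]
```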
Collaborative classifiers in CT colonography CAD
Show abstract
Multiple classifiers working collaboratively can usually achieve better performance than any single classifier working
independently. Our CT colonography computer-aided detection (CAD) system uses support vector machines (SVM) as
the classifier. In this paper, we developed and evaluated two schemes to collaboratively apply multiple SVMs in the
same CAD system. One is to put the classifiers in a sequence (SVM sequence) and apply them one after another; the
other is to put the classifiers in a committee (SVM committee) and use the committee decision for the classification. We
compared the sequence order (best-first, worst-first and random) in the SVM sequence and two decision functions in the
SVM committee (majority vote and sum probability). The experiments were conducted on 786 CTC datasets, with 63
polyp detections. We used 10-fold cross validation to generate the FROC curves, and conducted 100 bootstraps to
evaluate the performance variation. The results showed that collaborative classifiers performed considerably better than individual classifiers. The SVM sequence had slightly better accuracy than the SVM committee, but also greater performance variation.
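The two committee decision functions compared in the abstract can be sketched as below; the member probabilities are invented, and actual SVM training is out of scope for this illustration.

```python
def majority_vote(probs, cutoff=0.5):
    """Each committee member votes 'polyp' if its probability exceeds the cutoff;
    the committee says 'polyp' when a majority of members vote that way."""
    votes = sum(1 for p in probs if p > cutoff)
    return votes > len(probs) / 2

def sum_probability(probs, cutoff=0.5):
    """Average the members' probabilities, then apply the cutoff once."""
    return sum(probs) / len(probs) > cutoff

probs = [0.9, 0.45, 0.4]        # three members' polyp probabilities
print(majority_vote(probs))     # False: only one member votes polyp
print(sum_probability(probs))   # True: the average (~0.58) exceeds the cutoff
```

The example shows how the two rules can disagree: one confident member can carry the sum-probability decision but not the vote.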
A new color coding scheme for easy polyp visualization in CT-based virtual colonoscopy
Show abstract
In this paper, we first introduce three geometric features (shape index, curvedness, and sphericity ratio) for colonic polyp detection. A new color coding scheme is
designed to highlight the detected polyps, and help radiologists to distinguish them from
other tissues more easily. The key idea is to place the detected polyp candidates at the
same locations in a newly created polygonal dataset with exactly the same topological and
geometrical properties as the triangulated mesh surface of real colon dataset, and assign
different colors to the two separated datasets to highlight the polyps. Finally, we validate
the proposed polyp detection framework and color coding scheme by computer simulated
and real colon datasets. For sixteen synthetic polyps of different shapes and sizes, the sensitivity is 100% with zero false positives.
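The first two features are standard differential-geometry quantities (Koenderink's shape index and curvedness); a sketch using invented principal curvatures follows, with the paper-specific sphericity ratio omitted. The sign convention (cap vs. cup) depends on the surface orientation assumed.

```python
import math

def shape_index(k1, k2):
    """Koenderink shape index from principal curvatures, k1 >= k2.
    Ranges from -1 (cup) to +1 (cap); values near 0 indicate a saddle."""
    if k1 == k2:
        return 1.0 if k1 > 0 else -1.0
    return (2.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Overall magnitude of curvature, independent of shape."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)

cap = shape_index(1.0, 0.8)      # polyp-like bump: both curvatures positive
saddle = shape_index(0.9, -1.1)  # fold-like region: curvatures of opposite sign
print(round(cap, 2), round(saddle, 2))       # 0.93 -0.06
print(round(curvedness(1.0, 0.8), 2))        # 0.91
```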
Computer-aided diagnosis (CAD) for colonoscopy
Show abstract
Colorectal cancer is the second leading cause of cancer deaths, and ranks third for new cancer cases and
cancer mortality for both men and women. However, its death rate can be dramatically reduced by
appropriate treatment when early detection is available. The purpose of colonoscopy is to identify and
assess the severity of lesions, which may be flat or protruding. Due to the subjective nature of the
examination, colonoscopic proficiency is highly variable and dependent upon the colonoscopist's
knowledge and experience. An automated image processing system providing an objective, rapid, and
inexpensive analysis of video from a standard colonoscope could provide a valuable tool for screening and
diagnosis. In this paper, we present the design, functionality and preliminary results of its Computer-Aided-Diagnosis (CAD) system for colonoscopy - ColonoCADTM. ColonoCAD is a complex multi-sensor, multi-data and multi-algorithm image processing system, incorporating data management and visualization, video
quality assessment and enhancement, calibration, multiple view based reconstruction, feature extraction
and classification. As this is a new field in medical image processing, our hope is that this paper will
provide the framework to encourage and facilitate collaboration and discussion between industry,
academia, and medical practitioners.
A novel algorithm for polyp detection using Eigen decomposition of Hessian-matrix for CT colonography CAD: validation with physical phantom study
Show abstract
The Hessian matrix is the square matrix of second partial derivatives of a scalar-valued function and is well known in object recognition in computer vision and in medical shape analysis. Previous curvature-based polyp detection algorithms generate a myriad of false positives. A Hessian-matrix-based method, however, is more sensitive to local shape features and can therefore reduce false positives more easily. Computing the Hessian matrix on 3D CT data and performing an eigendecomposition of the matrix yields three eigenvalues and eigenvectors at each voxel. Using these eigenvalues, we can determine which type of intensity structure (blob-, line-, or sheet-like) is present at the given voxel. We focus on detecting blob-like objects automatically. Among the inner colonic wall structures, blob-like, line-like, and sheet-like objects represent polyps, folds, and the wall, respectively. In addition, to improve the performance of the algorithm, the Gaussian blurring factor and shape threshold parameters are optimized. Before the Hessian matrix is calculated, the given region must be smoothed with a Gaussian kernel of small deviation to enhance local intensity structures. To optimize the parameters and validate this method, we produced anthropomorphic pig phantoms. Fourteen phantoms with 103 polyps (16 polyps < 6 mm, 87 >= 6 mm) were used. CT scans were performed with 1 mm slice thickness. Our detection algorithm found 84 polyps (81.6%) correctly, with an average of 7.9 false positives per CT scan. These results show that our algorithm is clinically applicable for polyp detection, given its high sensitivity and relatively low number of false-positive detections.
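The eigenvalue-based distinction among blob-, line-, and sheet-like structures can be sketched with a simple rule of thumb; the tolerance `eps` and the eigenvalue triples below are illustrative stand-ins, not the paper's actual criteria.

```python
def classify_structure(eigvals, eps=0.25):
    """Classify a voxel from its Hessian eigenvalues, sorted by magnitude.

    For bright structures on a dark background:
      tube (fold):   |l1| ~ 0,  l2 ~ l3 << 0
      blob (polyp):  l1 ~ l2 ~ l3 << 0
      sheet (wall):  |l1| ~ |l2| ~ 0,  l3 << 0
    """
    l1, l2, l3 = sorted(eigvals, key=abs)
    scale = abs(l3)
    if scale == 0:
        return "background"
    if abs(l1) / scale < eps and abs(l2) / scale >= 1 - eps and l3 < 0:
        return "tube"
    if abs(l1) / scale >= 1 - eps and l3 < 0:
        return "blob"
    if abs(l2) / scale < eps and l3 < 0:
        return "sheet"
    return "background"

print(classify_structure((-0.05, -9.5, -10.0)))  # tube
print(classify_structure((-9.0, -9.5, -10.0)))   # blob
print(classify_structure((-0.1, -0.2, -10.0)))   # sheet
```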
A new method for detecting colonic polyps based on local intensity structure analysis from 3D abdominal CT images
Show abstract
This paper presents a new method for detecting colonic polyps from abdominal CT images based on Hessian
matrix analysis. Recently, virtual colonoscopy (VC) has received wide attention as a new, less-invasive colon diagnostic method. A physician diagnoses the inside of the colon using a virtual colonoscopy system. However,
since the colon has many haustra and its shape is long and convoluted, a physician has to change viewpoints and
viewing directions of the virtual camera many times while diagnosing. Lesions behind haustra may be overlooked.
Thus this paper proposes an automated colonic polyp detection method from 3D abdominal CT images. Colonic
polyps are located on the colonic wall, and their CT values are higher than colonic lumen regions. In addition,
CT values inside polyps tend to gradually increase from outward to inward (blob-like structure). We employ a
blob structure enhancement filter based on the eigenvalues of a Hessian matrix to detect polyps with the above
blob-shaped characteristics. To reduce FPs, we eliminate polyp candidate regions in which the maximum output value of the blob structure enhancement filter is smaller than given threshold values; small regions are also removed from the candidates. We applied the proposed method to 23 cases of abdominal CT images. Overall,
74.4% of the polyps were detected with 3.8 FPs per case.
Poster Session: New Applications
CAD scheme to detect hemorrhages and exudates in ocular fundus images
Show abstract
This paper describes a method for detecting hemorrhages and exudates in ocular fundus images. The detection of
hemorrhages and exudates is important in order to diagnose diabetic retinopathy. Diabetic retinopathy is one of the most
significant factors contributing to blindness, and early detection and treatment are important. In this study, hemorrhages
and exudates were automatically detected in fundus images without using fluorescein angiograms. Subsequently, the
blood vessel regions incorrectly detected as hemorrhages were eliminated by first examining the structure of the blood
vessels and then evaluating the length-to-width ratio. Finally, the false positives were eliminated by checking the
following features extracted from candidate images: the number of pixels, contrast, 13 features calculated from the co-occurrence
matrix, two features based on gray-level difference statistics, and two features calculated from the extrema
method. The sensitivity of detecting hemorrhages in the fundus images was 85% and that of detecting exudates was
77%. Our fully automated scheme could accurately detect hemorrhages and exudates.
Semi-automatic detection of calcifications using nonlinear stretching on x-ray images
Show abstract
A central problem in the development of a mass-screening tool for atherosclerotic plaque is automatic calcification detection.
The mass-screening aspect implies that the detection process should be fast and reliable. In this paper we present a
first step in this direction by introducing a semi-automatic calcification classification tool based on non-linear stretching, an image enhancement method that focuses on local image statistics. The calcified areas are approximated by a coarse brush, which in our case is mimicked by taking the ground truth, provided by radiologists, and dilating it with circular structuring elements of varying sizes. Thresholds that yield optimal results on the enhanced image are then examined for the different structuring elements. The results of this preliminary study, which contains 19 images of varying calcification degree,
fully annotated by medical experts, show a significant increase in accuracy when the methodology is validated on a region
of interest containing the areas of a simulated coarse brush.
Contralateral subtraction technique for detection of asymmetric abnormalities on whole-body bone scintigrams
Show abstract
We developed a computer-aided diagnostic (CAD) scheme for assisting radiologists in the detection of asymmetric
abnormalities on a single whole-body bone scintigram by applying a contralateral subtraction (CS) technique. Twenty
whole-body bone scans including 107 abnormal lesions in anterior and/or posterior images (the number of lesions per
case ranged from 1 to 16, mean 5.4) were used in this study. In our scheme, the original bone scan image was flipped
horizontally to provide a mirror image. The mirror image was first rotated and shifted globally to match the original
image approximately, and then was nonlinearly warped by use of an elastic matching technique in order to match the
original image accurately. We applied a nonlinear lookup table to convert the difference in pixel values between the
original and the warped images to new pixel values for a CS image, in order to enhance dark shadows at the locations of
abnormal lesions where uptake of radioisotope was asymmetrically high, and to suppress light shadows of the lesions on
the contralateral side. In addition, we applied a CAD scheme for the detection of asymmetric abnormalities by use of
rule-based tests and sequential application of artificial neural networks with 25 image features extracted from the original
and CS images. The performance of the CAD scheme, which was evaluated by a leave-one-case-out method, indicated
an average sensitivity of 80.4% with 3.8 false positives per case. This CAD scheme with the contralateral subtraction
technique has the potential to improve radiologists' diagnostic accuracy and could be used for computerized
identification of asymmetric abnormalities on whole-body bone scans.
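The core flip-and-subtract idea can be sketched as follows, assuming a toy image in which the mirror image already aligns; the paper's global and elastic registration steps and its nonlinear lookup table are omitted.

```python
def mirror(image):
    """Flip each row left-to-right to produce the mirror image."""
    return [row[::-1] for row in image]

def contralateral_subtraction(image):
    """Pixelwise original minus mirror, clipped at zero, so that only
    asymmetrically high uptake survives in the subtraction image."""
    flipped = mirror(image)
    return [[max(a - b, 0) for a, b in zip(r1, r2)]
            for r1, r2 in zip(image, flipped)]

# One row, six columns; a hot spot (9) on the left with no right counterpart.
scan = [[1, 9, 1, 1, 1, 1]]
cs = contralateral_subtraction(scan)
print(cs)   # [[0, 8, 0, 0, 0, 0]]: only the asymmetric hot spot remains
```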
Automated image analysis of uterine cervical images
Show abstract
Cervical Cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality
of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented.
Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual
inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD)
system.
In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite
epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic
extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various
illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite
region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates
the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is
analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous
region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite
regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.
Computer-aided detection (CAD) of hepatocellular carcinoma on multiphase CT images
Show abstract
Primary malignant liver tumors, including hepatocellular carcinoma (HCC), cause 1.25 million deaths per year
worldwide. Multiphase CT images offer clinicians important information about hepatic cancer. The presence of HCC is
indicated by high-intensity regions in arterial phase images and low-intensity regions in equilibrium phase images
following enhancement with contrast material. We propose an automatic method for detecting HCC based on edge
detection and subtraction processing. Within a liver area segmented according to our scheme, black regions are selected by subtracting the equilibrium-phase images from the corresponding registered arterial-phase images. From these black regions, the HCC candidates are extracted as the areas without edges by using Sobel and LoG edge detection filters. False-positive (FP) candidates are eliminated by using six features extracted from the cancer and liver regions. Other FPs are further eliminated by morphological opening. Finally, an expansion process is applied to acquire the 3D shape of the
HCC. The cases used in this experiment were from the CT images of 44 patients, which included 44 HCCs. We extracted
97.7% (43/44) HCCs successfully by our proposed method, with an average number of 2.1 FPs per case. The result
demonstrates that our edge-detection-based method is effective in locating the cancer region by using the information
obtained from different phase images.
Computer-assisted lesion detection system for stomach screening using stomach shape and appearance models
Show abstract
In Japan, stomach cancer is one of the three most common causes of death from cancer. As periodic stomach X-ray health checks have become more widespread, the burden on physicians performing mass screening to detect early signs of disease has been increasing. Toward automatic diagnosis, we are developing a computer-assisted
lesion detection system for stomach screening. The proposed system has two databases. One is the stomach
shape database that consists of the computer graphics stomach 3D models based on biomechanics simulation and their
projected 2D images. The other is the normal appearance database that is constructed by learning patterns in a normal
patient training set. The stomach contour is extracted from an X-ray image including a barium filled region by the
following steps. Firstly, the approximated stomach region is obtained by nonrigid registration based on mutual
information. We define nonrigid transformation as one that includes translations, rotations, scaling, air-barium interface
and weights of eigenvectors determined by principal components analysis in the stomach shape database. Secondly, the
accurate stomach contour is extracted from the image gradient by using dynamic programming. Then, stomach lesions are detected by checking whether, along the extracted stomach contour, the Mahalanobis distance from the mean of the normal appearance database exceeds a suitable value. We applied our system to 75 X-ray images of
barium-filled stomach to show its validity.
Automatic landmark detection for cervical image registration validation
Show abstract
Many cervical Computer-Aided Diagnosis (CAD) methods rely on measuring gradual appearance changes on the
cervix after the application of a contrast agent. Image registration has been used to ensure pixel correspondence
to the same tissue location throughout the whole temporal sequence but, to date, there has been no reliable means of testing its accuracy in compensating for patient and tissue movement.
We present an independent system to use automatically extracted and matched features from a colposcopic image
sequence in order to generate position landmarks. These landmarks may be used either to measure the accuracy
of a registration method to align any pair of images from the colposcopic sequence or as a cue for registration.
The algorithm selects sets of matched features that extend through the whole image sequence, allowing a tissue point to be located in a reliable and unbiased way throughout the sequence. Experiments on real
colposcopy image sequences show that the approach is robust, reliable, and leads to geometrically coherent sets
of landmarks that correspond to visually recognizable regions. We use the extracted landmarks to test the
precision of some of the cervical registration algorithms previously presented in the literature.
Multispectral image analysis of bruise age
Show abstract
The detection and aging of bruises is important within clinical and forensic environments. Traditionally, visual and
photographic assessment of bruise color is used to determine age, but this substantially subjective technique has been
shown to be inaccurate and unreliable. The purpose of this study was to develop a technique to spectrally-age bruises
using a reflective multi-spectral imaging system that minimizes the filtering and hardware requirements while achieving
acceptable accuracy. This approach will then be incorporated into a handheld, point-of-care technology that is
clinically-viable and affordable. Sixteen bruises from elder residents of a long term care facility were imaged over time.
A multi-spectral system collected images through eleven narrow band (~10 nm FWHM) filters having center
wavelengths ranging between 370 and 970 nm, corresponding to specific skin and blood chromophores. Normalized bruise reflectance (NBR), defined as the ratio of the optical reflectance coefficient of bruised skin to that of normal skin, was calculated for all bruises at all wavelengths. The smallest mean NBR, regardless of bruise age, was found at wavelengths between 555 and 577 nm, suggesting that the contrast in bruises comes from hemoglobin and lingers for a long duration. A contrast metric based on the NBR at 460 nm and 650 nm was found to be sensitive to age and requires further investigation. Overall, the study identified four key wavelengths that show promise for characterizing bruise age.
However, the high variability across the bruises imaged in this study complicates the development of a handheld
detection system until additional data is available.
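The NBR computation itself is a simple per-wavelength ratio, sketched below; the reflectance values are invented for illustration.

```python
def nbr(bruised, normal):
    """Normalized bruise reflectance per wavelength (nm): the ratio of
    bruised-skin to normal-skin reflectance. Values < 1 mean the bruise
    absorbs more light than the surrounding skin."""
    return {wl: bruised[wl] / normal[wl] for wl in bruised}

# Toy reflectance coefficients at three of the filter wavelengths.
bruised = {460: 0.30, 577: 0.12, 650: 0.45}
normal = {460: 0.50, 577: 0.40, 650: 0.50}

curve = nbr(bruised, normal)
lowest = min(curve, key=curve.get)
print(lowest)                              # 577: deepest contrast, near a
                                           # hemoglobin absorption band
print(round(curve[460] / curve[650], 2))   # a 460/650 nm contrast metric
```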
A CAD system for assessment of MRI findings to track the progression of multiple sclerosis
Show abstract
Multiple sclerosis (MS) is a progressive neurological disease affecting myelin pathways. MRI has become the
medical imaging study of choice both for the diagnosis and for the follow-up and monitoring of multiple sclerosis.
The progression of the disease is variable, and requires routine follow-up to document disease exacerbation,
improvement, or stability of the characteristic MS lesions or plaques. The difficulties with using MRI as a
monitoring tool are the significant quantities of time needed by the radiologist to actually measure the size of the
lesions, and the poor reproducibility of these manual measurements. A CAD system for automatic image analysis
improves clinical efficiency and standardizes the lesion measurements. Multiple sclerosis is a disease well suited
for automated analysis. The segmentation algorithm we devised classifies normal and abnormal brain structures and measures the volume of multiple sclerosis lesions using fuzzy c-means clustering with incorporated spatial information (sFCM). First, an intracranial-structure mask is localized in the T1 image data and then superimposed on the FLAIR image data. Next, MS lesions are identified by sFCM and quantified within a predefined volume. The initial validation confirms that the automatic segmentation compares satisfactorily with manual outlining by a neuroradiologist; the results will be presented.
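The standard fuzzy c-means membership update underlying sFCM can be sketched as follows; the spatial regularization term of sFCM is paper-specific and omitted here, and the intensities are toy values.

```python
def fcm_memberships(x, centers, m=2.0):
    """Fuzzy c-means membership of sample x in each cluster center
    (1D intensities), with fuzzifier m > 1."""
    d = [abs(x - c) for c in centers]
    if 0.0 in d:                       # sample sits exactly on a center
        return [1.0 if di == 0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(centers)))
            for i in range(len(centers))]

centers = [30.0, 90.0]                 # e.g. normal tissue vs. lesion intensity
u = fcm_memberships(80.0, centers)
print([round(ui, 2) for ui in u])      # [0.04, 0.96]: mostly the lesion cluster
```

In the full algorithm, memberships and centers are updated alternately until convergence; sFCM additionally weights each voxel's membership by those of its spatial neighbors.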
Automatic differentiation of melanoma and Clark nevus skin lesions
Show abstract
Skin cancer is the most common form of cancer in the United States. Although melanoma accounts for just
11% of all types of skin cancer, it is responsible for most of the deaths, claiming more than 7910 lives
annually. Melanoma is visually difficult for clinicians to differentiate from Clark nevus lesions which are
benign. The application of pattern recognition techniques to these lesions may be useful as an educational
tool for teaching physicians to differentiate lesions, as well as for contributing information about the
essential optical characteristics that identify them. Purpose: This study sought to find the most effective
features to extract from melanoma, melanoma in situ and Clark nevus lesions, and to find the most effective
pattern-classification criteria and algorithms for differentiating those lesions, using the Computer Vision
and Image Processing Tools (CVIPtools) software package. Methods: Due to changes in ambient lighting
during the photographic process, color differences between images can occur. These differences were
minimized by capturing dermoscopic images instead of photographic images. Differences in skin color
between patients were minimized via image color normalization, by converting original color images to
relative-color images. Relative-color images also helped minimize changes in color that occur due to
changes in the photographic and digitization processes. Tumors in the relative-color images were
segmented and morphologically filtered. Filtered, relative-color, tumor features were then extracted and
various pattern-classification schemes were applied. Results: Experimentation yielded four useful pattern-classification methods, the best of which achieved an overall classification rate of 100% for melanoma and melanoma in situ (grouped) and 60% for Clark nevus. Conclusion: Melanoma and melanoma in situ
have feature parameters and feature values that are similar enough to be considered one class of tumor that
significantly differs from Clark nevus. Consequently, grouping melanoma and melanoma in situ together
achieves the best results in classifying and automatically differentiating melanoma from Clark nevus
lesions.
Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images
Show abstract
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth
of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the
maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver
tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation
results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects,
called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to
shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the
histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing
the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the
boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one,
the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the
DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed
that the computer-measured volumes were highly correlated with those of tumors measured manually by physicians. Our
preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors
detected in hepatic CT images.
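The shell idea above can be sketched compactly: compute a histogram-based optimal threshold only from voxels inside a thick band straddling the current front. The sketch below is not the authors' implementation; it uses Otsu's criterion as a stand-in for the paper's threshold rule, pure-NumPy binary dilation, and illustrative names such as `shell_threshold`.

```python
import numpy as np

def dilate(mask, iterations=1):
    """4-connected binary dilation, pure NumPy."""
    m = mask.copy()
    for _ in range(iterations):
        p = np.pad(m, 1)
        m = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
             | p[1:-1, :-2] | p[1:-1, 2:])
    return m

def otsu_threshold(values, bins=64):
    """Histogram-based 'optimal' threshold (Otsu's between-class variance)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def shell_threshold(image, region, width=3):
    """Threshold computed only inside a thick band (the 'propagating
    shell') straddling the boundary of the current region estimate."""
    shell = dilate(region, width) & dilate(~region, width)
    return otsu_threshold(image[shell])

# Toy example: a bright "tumor" on a dark background.
img = np.full((32, 32), 10.0)
img[10:22, 10:22] = 100.0
seed = np.zeros((32, 32), dtype=bool)
seed[12:20, 12:20] = True        # current front, strictly inside the tumor
t = shell_threshold(img, seed)
```

Because the shell contains a roughly balanced mixture of object and background voxels, the threshold it yields falls between the two intensity modes, which is the property the propagating-shell construction aims for.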
The effect of blood vessels on the computation of the scanning laser ophthalmoscope retinal thickness map
Show abstract
Retinal thickness maps obtained using a scanning laser ophthalmoscope are useful in the diagnosis of macular edema and
other diseases that cause changes in the retinal thickness. However, the thickness measurements are adversely affected
by the presence of blood vessels. This paper studies the effect that the blood vessels have on the computation of the
retinal thickness. The retinal thickness is estimated using maximum-likelihood resolution with anatomical constraints.
The blood vessels are segmented using local image features. Comparison of the retinal thickness with and without the
blood vessel removal is made using correlation coefficient and I-divergence.
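The two comparison measures are straightforward to compute. The sketch below assumes nonnegative thickness maps and uses the generalized (Csiszár) I-divergence, which is zero only when the two maps agree; the simulated-vessel example is illustrative, not the paper's data.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two thickness maps."""
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])

def i_divergence(p, q, eps=1e-12):
    """Generalized I-divergence between nonnegative maps:
    sum(p*log(p/q) - p + q); zero iff the maps are identical."""
    p = np.ravel(p).astype(float) + eps
    q = np.ravel(q).astype(float) + eps
    return float(np.sum(p * np.log(p / q) - p + q))

# Thickness map with and without a simulated blood-vessel artifact.
truth = np.linspace(1.0, 2.0, 100).reshape(10, 10)
with_vessel = truth.copy()
with_vessel[4, :] += 0.5          # vessel inflates apparent thickness
r = correlation(truth, with_vessel)
d = i_divergence(with_vessel, truth)
```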
Automated tumor delineation using joint PET/CT information
Show abstract
In this paper, we propose a new method for automated delineation of tumor boundaries in whole-body PET/CT by
jointly using information from both PET and diagnostic CT images. Our method takes advantage of initial robust hot
spot detection and segmentation performed in PET to provide a conservative tumor structure delineation. Using this
estimate as initialization, a model for tumor appearance and shape in corresponding CT structures is learned and the
model provides the basis for classifying each voxel to either lesion or background class. This CT classification is then
probabilistically integrated with PET classification using the joint likelihood ratio test technique to derive the final
delineation. More accurate and reproducible tumor delineation is achieved as a result of such multi-modal tumor
delineation, without additional user intervention. The method is particularly useful for improving the PET delineation result
when there are clear contrast edges in CT between tumor and healthy tissue, and to enable CT segmentation guided by
PET when such contrast difference is absent in CT.
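Under a conditional-independence assumption, the joint likelihood ratio is simply the product of the per-modality ratios. The following sketch models each modality with Gaussian lesion/background classes; the model parameters are illustrative stand-ins, not the learned models from the paper.

```python
import numpy as np

def gaussian_lr(x, mu_les, sd_les, mu_bg, sd_bg):
    """Per-voxel likelihood ratio p(x|lesion)/p(x|background)
    under Gaussian class models."""
    def pdf(v, mu, sd):
        return np.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return pdf(x, mu_les, sd_les) / pdf(x, mu_bg, sd_bg)

def joint_delineation(pet, ct, pet_model, ct_model):
    """Label a voxel 'lesion' when the product of the PET and CT
    likelihood ratios exceeds 1 (joint likelihood ratio test,
    assuming the modalities are conditionally independent)."""
    lr = gaussian_lr(pet, *pet_model) * gaussian_lr(ct, *ct_model)
    return lr > 1.0

# Two voxels: (SUV, HU) = (5.0, 30) looks like tumor; (1.2, -40) does not.
pet_model = (6.0, 1.0, 1.0, 1.0)       # lesion mean/sd, background mean/sd
ct_model = (40.0, 15.0, -50.0, 15.0)
labels = joint_delineation(np.array([5.0, 1.2]), np.array([30.0, -40.0]),
                           pet_model, ct_model)
```

A voxel that is ambiguous in one modality can still be classified correctly when the other modality supplies strong evidence, which is the motivation for fusing the two likelihood ratios.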
Detection of retinal nerve fiber layer defects in retinal fundus images using Gabor filtering
Show abstract
Retinal nerve fiber layer defect (NFLD) is one of the most important findings for the diagnosis of glaucoma reported by
ophthalmologists. However, such changes could be overlooked, especially in mass screenings, because ophthalmologists
have limited time to search for a number of different changes for the diagnosis of various diseases such as diabetes,
hypertension and glaucoma. Therefore, the use of a computer-aided detection (CAD) system can improve the results of
diagnosis. In this work, a technique for the detection of NFLDs in retinal fundus images is proposed. In the
preprocessing step, blood vessels are "erased" from the original retinal fundus image by using morphological filtering.
The preprocessed image is then transformed into a rectangular array. NFLD regions are observed as vertical dark bands
in the transformed image. Gabor filtering is then applied to enhance the vertical dark bands. False positives (FPs) are
reduced by a rule-based method which uses the information of the location and the width of each candidate region. The
detected regions are back-transformed into the original configuration. In this preliminary study, 71% of NFLD regions
were detected with an average of 3.2 FPs per image. In conclusion, we have developed a technique for the
detection of NFLDs in retinal fundus images. Promising results have been obtained in this initial study.
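The orientation selectivity that makes Gabor filtering suitable for enhancing vertical dark bands can be demonstrated in a few lines. The kernel parameters below are illustrative, not the ones used in the paper.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.5, wavelength=6.0, theta=0.0):
    """Real (even) Gabor kernel; with theta=0 the carrier varies along
    x, so the filter responds strongly to vertical bands."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    rotated_x = x * np.cos(theta) + y * np.sin(theta)
    kernel = (np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
              * np.cos(2.0 * np.pi * rotated_x / wavelength))
    return kernel - kernel.mean()          # zero DC response

def filter2d(image, kernel):
    """Valid-mode 2D correlation (pure NumPy, fine for small kernels)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical dark band (NFLD-like, after the rectangular transform)
# responds far more strongly than a horizontal one to the theta=0 kernel.
vertical = np.ones((21, 21)); vertical[:, 9:12] = 0.0
horizontal = np.ones((21, 21)); horizontal[9:12, :] = 0.0
g = gabor_kernel()
resp_v = float(np.abs(filter2d(vertical, g)).max())
resp_h = float(np.abs(filter2d(horizontal, g)).max())
```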
Classification of cirrhotic liver in Gadolinium-enhanced MR images
Show abstract
Cirrhosis of the liver is characterized by the presence of widespread nodules and fibrosis in the liver. The fibrosis
and nodules formation causes distortion of the normal liver architecture, resulting in characteristic texture patterns.
Texture patterns are commonly analyzed with the use of co-occurrence matrix based features measured on regions-of-interest (ROIs). A classifier is subsequently used for the classification of cirrhotic or non-cirrhotic livers.
A problem arises when the classifier employed is a supervised classifier, a popular choice,
because the 'true disease states' of the ROIs are required for training the classifier but are, generally, not
available. A common approach is to adopt the 'true disease state' of the liver as the 'true disease state' of all ROIs in
that liver. This paper investigates the use of an unsupervised classifier, the k-means clustering method, in classifying
livers as cirrhotic or non-cirrhotic using unlabelled ROI data. A preliminary result with a sensitivity and specificity
of 72% and 60%, respectively, demonstrates the feasibility of using the k-means unsupervised clustering method
in generating a characteristic cluster structure that could facilitate the classification of cirrhotic and non-cirrhotic
livers.
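The clustering step on unlabelled ROI feature vectors can be sketched with a plain k-means implementation. The two-dimensional "texture features" and the deterministic farthest-point seeding below are illustrative choices, not the paper's.

```python
import numpy as np

def kmeans(X, k=2, iters=100):
    """Plain k-means with deterministic farthest-point seeding."""
    centers = [X[0]]
    for _ in range(k - 1):
        dist = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(dist)])
    centers = np.asarray(centers, dtype=float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Unlabelled ROI feature vectors (e.g. co-occurrence contrast, homogeneity):
rng = np.random.default_rng(1)
normal = rng.normal([0.2, 0.9], 0.02, size=(20, 2))      # smooth texture
cirrhotic = rng.normal([0.8, 0.3], 0.02, size=(20, 2))   # nodular texture
labels, _ = kmeans(np.vstack([normal, cirrhotic]), k=2)
```

No ROI-level labels are used anywhere: the cluster structure alone separates the two texture populations, which is the feasibility argument the abstract makes.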
Fully automated screening of immunocytochemically stained specimens for early cancer detection
Show abstract
Cytopathological cancer diagnoses can be obtained less invasively than histopathological investigations. Cell-containing
specimens can be obtained without pain or discomfort, bloody biopsies are avoided, and the diagnosis
can, in some cases, even be made earlier. Since no tissue biopsies are necessary, these methods can also be used
in screening applications, e.g., for cervical cancer. Among the cytopathological methods a diagnosis based on
the analysis of the amount of DNA in individual cells achieves high sensitivity and specificity. Yet this analysis
is time consuming, which is prohibitive for a screening application. Hence, it will be advantageous to retain, by
a preceding selection step, only a subset of suspicious specimens. This can be achieved using highly sensitive
immunocytochemical markers like p16ink4a for preselection of suspicious cells and specimens.
We present a method to fully automatically acquire images at distinct positions on cytological specimens
using a conventional computer-controlled microscope and an autofocus algorithm. Based on the images thus
obtained, we automatically detect p16ink4a-positive objects. This detection in turn is based on an analysis of the
color distribution of the p16ink4a marker in the Lab colorspace. A Gaussian mixture model is used to describe
this distribution, and the method described in this paper so far achieves a sensitivity of up to 90%.
Automated detection of extradural and subdural hematoma for contrast-enhanced CT images in emergency medical care
Show abstract
We have been developing a CAD scheme for head and abdominal injuries in emergency medical care. In this work, we
have developed an automated method to detect typical head injuries such as rupture or stroke of the brain. Extradural and subdural
hematoma regions were detected by a comparison technique after the brain areas were registered using warping. We employed
5 normal and 15 stroke cases to estimate the performance after creating a brain model from 50 normal cases. Some of
the hematoma regions were detected correctly in all of the stroke cases, with no false-positive findings in the normal cases.
Digital staining of pathological images: dye amount correction for improved classification performance
Show abstract
Physical staining is indispensable in pathology. While physical staining uses chemicals, "digital staining" exploits the
differing spectral characteristics of the different tissue components to simulate the effect of physical staining. Digital
staining for pathological images involves two basic processes: classification of tissue components and digital
colorization whereby the classified tissue components are impressed with colors associated to their reaction to specific
dyes. Spectral features, i.e. spectral transmittance, of the different tissue structures are dependent on the staining
condition of the tissue slide. Thus, if the staining condition of the test image is different, classification result is affected,
and the resulting digitally-stained image may not reflect the desired result. This paper shows that it is possible to obtain
robust classification results by correcting the dye amount of each test-image pixel using the Beer-Lambert law. The
effectiveness of incorporating this technique into the current digital staining scheme is also investigated.
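The correction rests on the Beer-Lambert relation: absorbance, the negative log of transmittance, is proportional to dye amount, so adjusting the dye amount is a linear scaling in the absorbance domain. A minimal sketch (the function name and scale factor are illustrative):

```python
import numpy as np

def correct_dye_amount(transmittance, scale):
    """Adjust the per-pixel dye amount via the Beer-Lambert law:
    absorbance A = -log(T) is proportional to the dye amount, so
    scaling the dye amount scales A linearly (equivalently T**scale)."""
    t = np.clip(np.asarray(transmittance, dtype=float), 1e-6, 1.0)
    absorbance = -np.log(t)
    return np.exp(-scale * absorbance)

# An over-stained pixel (T = 0.25) corrected to half its dye amount:
corrected = correct_dye_amount([0.25], 0.5)
```

Halving the dye amount halves the absorbance, so T = 0.25 becomes T = 0.5; classification features computed on the corrected transmittance are thus insensitive to staining-condition differences between slides.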
Bone, blood vessels, and muscle detection algorithm and creating database based on dynamic and non-dynamic multi-slice CT image of head and neck
Show abstract
Nowadays, dental CT images play increasingly important roles in oral clinical applications, and our research is particularly
relevant to the field of dentistry. We use both non-dynamic and dynamic CT images, and we are creating
a database of the bones, blood vessels, and muscles of the head and neck. The database contains both easy and difficult cases
of the bones, blood vessels, and muscles of the head and neck, with many of the latter: teeth separation and
condylar-process separation are difficult cases; the external carotid artery has many branches that are attached to veins,
so it is difficult to separate; and all muscles share the same threshold value and are attached to each other, so muscle
separation is very difficult. The database also contains patients of different ages. For these reasons, our database is becoming
an important tool for dental students and a valuable asset for diagnosis. After completion of our database, we can link
it with other dental applications.
Automatic selection of region of interest for radiographic texture analysis
Show abstract
We have been developing radiographic texture analysis (RTA) for assessing osteoporosis and the related risk of fracture.
Currently, analyses are performed on heel images obtained from a digital imaging device, the GE/Lunar PIXI, that yields
both the bone mineral density (BMD) and digital images (0.2-mm pixels; 12-bit quantization). RTA is performed on the
image data in a region-of-interest (ROI) placed just below the talus in order to include the trabecular structure in the
analysis. We have found that variations occur from manually selecting this ROI for RTA. To reduce the variations, we
present an automatic method involving an optimized Canny edge detection technique and parameterized bone
segmentation, to define bone edges for the placement of an ROI within the predominantly calcaneus portion of the
radiographic heel image. The technique was developed using 1158 heel images and then tested on an independent set of
176 heel images. Results from a subjective analysis noted that 87.5% of ROI placements were rated as "good". In
addition, an objective overlap measure showed that 98.3% of images had successful ROI placements as compared to
placement by an experienced observer at an overlap threshold of 0.4. In conclusion, our proposed method for automatic
ROI selection on radiographic heel images yields promising results and the method has the potential to reduce intra- and
inter-observer variations in selecting ROIs for radiographic texture analysis.
Automatic CAD of meniscal tears on MR imaging: a morphology-based approach
Show abstract
Knee-related injuries, including meniscal tears, are common in young athletes and require accurate diagnosis and
appropriate surgical intervention. Although with proper technique and skill, confidence in the detection of meniscal
tears should be high, this task continues to be a challenge for many inexperienced radiologists. The purpose of our study
was to automate detection of meniscal tears of the knee using a computer-aided detection (CAD) algorithm. Automated
segmentation of the sagittal T1-weighted MR imaging sequences of the knee in 28 patients with diagnoses of meniscal
tears was performed using morphologic image processing in a 3-step process including cropping, thresholding, and
application of morphological constraints. After meniscal segmentation, abnormal linear meniscal signal was extracted
through a second thresholding process. The results of this process were validated by comparison with the interpretations
of 2 board-certified musculoskeletal radiologists. The automated meniscal extraction algorithm process was able to
successfully perform region of interest selection, thresholding, and object shape constraint tasks to produce a convex
image isolating the menisci in more than 69% of the 28 cases. A high correlation was also noted between the CAD
algorithm and human observer results in identification of complex meniscal tears. Our initial investigation indicates
considerable promise for automatic detection of simple and complex meniscal tears of the knee using the CAD
algorithm. This observation poses interesting possibilities for increasing radiologist productivity and confidence,
improving patient outcomes, and applying more sophisticated CAD algorithms to orthopedic imaging tasks.
Poster Session: Thoracic Imaging
Evaluation of lung nodule growth measurement for MDCT exams with different dosages using synthetic nodules
Show abstract
A new lung nodule simulation model was designed to create and insert synthetic solid lung nodules, with shapes
and density similar to real nodules, into normal MDCT chest exams. The nodule simulation model was validated
both subjectively by human experts and quantitatively by comparing density attenuation profiles of simulated
nodules with real nodules. These validation studies demonstrated a high level of similarity between the synthetic
nodules and real nodules. This nodule simulation model was used to create objective test databases for use in
evaluating lung nodule growth measurement of a CAD system. The performance evaluation studies demonstrated
a high level of accuracy for the automatic growth measurement tool, although the error margin of the growth
measurement increased as nodule size decreased. The experiments also showed that the volume/growth estimation
errors for low-dose scans were comparable to those for normal-dose scans, thus demonstrating robust
performance across different dosages.
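A nodule-insertion step of the kind described can be sketched as blending a radially symmetric density profile into a CT volume. The Gaussian profile, HU values, and function name below are simplifying assumptions for illustration, not the paper's validated simulation model.

```python
import numpy as np

def insert_synthetic_nodule(volume, center, radius, peak_hu=60.0):
    """Insert a Gaussian-profile synthetic solid nodule (HU values)
    into a CT volume; voxels outside the nodule are untouched."""
    zz, yy, xx = np.indices(volume.shape)
    d2 = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
          + (xx - center[2]) ** 2).astype(float)
    nodule_hu = peak_hu * np.exp(-d2 / (2.0 * (radius / 2.0) ** 2))
    out = volume.astype(float).copy()
    mask = d2 <= radius ** 2
    # Keep the brighter of nodule and background inside the nodule extent.
    out[mask] = np.maximum(out[mask], nodule_hu[mask])
    return out

# Aerated lung parenchyma (~ -900 HU) with one synthetic solid nodule:
lung = np.full((16, 16, 16), -900.0)
ct = insert_synthetic_nodule(lung, center=(8, 8, 8), radius=4)
```

Because the ground-truth center, radius, and density profile are known exactly, volumes measured by the CAD tool on such phantoms can be compared against truth, which is how an objective growth-measurement test database is built.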
Texture-based computer-aided diagnosis system for lung fibrosis
Show abstract
Computer-aided detection of lung fibrosis remains a difficult task due to the small vascular structures, scars, and fibrotic
tissues that need to be identified and differentiated. In this paper, we present a texture-based computer-aided diagnosis
(CAD) system that automatically detects lung fibrosis. Our system uses high-resolution computed tomography (HRCT),
advanced texture analysis, and support vector machine (SVM) committees to automatically and accurately detect lung
fibrosis. Our CAD system follows a five-stage pipeline comprising segmentation, texture analysis, training,
classification, and display. Since the accuracy of the proposed texture-based CAD system depends on how precisely we
can distinguish texture dissimilarities between normal and abnormal lungs, in this paper we have given special attention
to the texture block selection process. We present the effects that texture block size, data reduction techniques, and
image smoothing filters have within the overall classification results. Furthermore, a histogram-based technique to
refine the classification results inside texture blocks is presented.
The proposed texture-based CAD system to detect lung fibrosis has been trained with several normal and abnormal
HRCT studies and has been tested with the original training dataset as well as new HRCT studies. On average, when
using the suggested/default texture size and an optimized SVM committee system, a 90% accuracy has been observed
with the proposed texture-based CAD system to detect lung fibrosis.
Prediction of tumor volumes using an exponential model
Show abstract
Measurement of pulmonary nodule growth rate is important for the evaluation of lung cancer treatment. The
change in nodule growth rate can be used as an indicator of the efficacy of a prescribed treatment. However, a
change in growth rate may be due to actual physiological change, or it may be simply due to measurement error.
To address this issue, we propose the use of an exponential model to predict the volume of a tumor based on
two earlier scans. We examined 11 lung cancers presenting as solid pulmonary nodules that were not treated.
Using 5 of these with optimal scan parameters (thin-slice, 1.0 mm or 1.25 mm, with the same axial resolution), we
found an error ranging from 1.7% to 27.7%, with an average error of 14.9%. This indicates that we can estimate
the growth of a lung cancer, as measured by CT, which includes the actual growth as well as the error due to
the technique, by the amount indicated above. Using scans with non-optimal parameters, either thick-slice or
different resolution thin-slice scans, resulted in errors ranging from 30% to 600%, suggesting that same resolution
thin-slice CT scans are necessary for accurate measurement of nodule growth.
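The two-scan exponential model has a closed form: fit the growth rate from the first two measurements and extrapolate. The sketch below states that standard fit; comparing the prediction against a third observed volume then gauges how much of the apparent change is measurement error.

```python
import math

def predict_volume(t1, v1, t2, v2, t3):
    """Exponential growth model fitted to two volume measurements:
    V(t) = V1 * exp(k*(t - t1)) with k = ln(V2/V1)/(t2 - t1)."""
    k = math.log(v2 / v1) / (t2 - t1)
    return v2 * math.exp(k * (t3 - t2))

# A nodule doubling every 90 days: 100 mm^3 -> 200 mm^3; predict day 180.
predicted = predict_volume(0.0, 100.0, 90.0, 200.0, 180.0)

# Percent deviation from a (hypothetical) third observed measurement:
observed = 390.0
error = abs(predicted - observed) / observed * 100.0
```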
Classifying pulmonary nodules using dynamic enhanced CT images based on CT number histogram
Show abstract
Pulmonary nodules are classified into three types, namely solid, mixed GGO, and pure GGO, on the basis of the
visual assessment of CT appearance. In our current study, a quantitative classification algorithm has been developed
using volumetric data sets obtained from thin-section CT images. The algorithm can classify pulmonary nodules into
five types (α, β, γ, δ, and ε) on the basis of internal features extracted from CT number histograms inside nodules. We
applied the classification algorithm to dynamic enhanced single-slice and multislice CT images and analyzed the results
for each type.
Automated alignment of serial thoracic scans using bone structure descriptors
Show abstract
In this manuscript we present an automated algorithm for the alignment of thoracic scans using descriptors of bone
structures. Bone structures were utilized because they are expected to be less susceptible to sources of errors such as
patient positioning and breath hold. The algorithm employed the positioning of ribs relative to the spinal cord along with
a description of the scapula. The spinal cord centroid was detected by extracting local maxima of the distance transform
followed by point tracing along consecutive slices. Ribs were segmented using adaptive thresholding followed by the
watershed algorithm to detach ribs from the vertebra, and by imposing requirements of rib proximity to the lung border.
The angles formed between the spinal cord centroid and segmented rib centroids were used to describe rib positioning.
Additionally, the length of the scapula was extracted in each slice. A cost function incorporating the difference of
features from rib positioning and scapula length between two slices was derived and used to match slices. The method
was evaluated on a set of 12 pairs of full and partial CT scans acquired on the same day. Evaluation was based on
whether the slices showing a nodule at its maximum diameter in each scan were matched. Full-to-partial and partial-to-full
alignment were performed. Results showed that the proposed metric matched nodule slices within an average
distance of 1.08 and 1.17 slices from the target for full-to-partial and partial-to-full alignment respectively. These
preliminary results are encouraging for using this method as a first step in an overall process of temporally analyzing CT
lung nodules.
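The matching step reduces to evaluating a cost between per-slice descriptor sets and taking the minimum. The abstract does not give the exact weighting, so the sketch below uses a simple weighted sum of rib-angle and scapula-length differences with illustrative values.

```python
import numpy as np

def slice_cost(angles_a, angles_b, scap_a, scap_b, w=0.5):
    """Cost between two slices built from rib-angle descriptors
    (degrees) and scapula length; lower means a better match."""
    angle_term = np.mean(np.abs(np.asarray(angles_a, dtype=float)
                                - np.asarray(angles_b, dtype=float)))
    return float(angle_term + w * abs(scap_a - scap_b))

def match_slice(query, candidates):
    """Index of the candidate slice minimizing the cost to the query.
    Each slice is a (rib_angles, scapula_length) pair."""
    costs = [slice_cost(query[0], c[0], query[1], c[1]) for c in candidates]
    return int(np.argmin(costs))

# Query slice from the partial scan vs. three slices of the full scan:
query = ([32.0, 45.0, 61.0], 48.0)
candidates = [([20.0, 35.0, 50.0], 30.0),
              ([31.0, 46.0, 60.0], 47.0),    # near-identical descriptors
              ([45.0, 60.0, 75.0], 65.0)]
best = match_slice(query, candidates)
```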
Differentiating solitary pulmonary nodules (SPNs) with 3D shape features
Show abstract
This study developed a methodology to extract quantitative features representing nodule 3D shape and investigated
the performance of these features in differentiating between benign and malignant solitary pulmonary nodules (SPNs).
36 cases with solitary lung nodules (15 benign, 21 malignant) were examined in this study. The CT helical scanning
parameters were ≤3 mm collimation, pitch 1-2, and 1.5-3 mm reconstruction interval. The nodule boundaries were
contoured by radiologists on 3D volume data. Using these boundaries, the nodule physical 3D surfaces were created and
several 3D nodule shape-features were computed, including: Compactness Factor (CF) of nodule, Shape Index (SI) and
curvedness of each pixel in the physical 3D nodule surface. The histogram characteristic features of SI and curvedness
were calculated. AdaBoost was performed to select the features and their statistically differences were analyzed. Logistic
Regression Analysis (LRA) and AdaBoost were used to evaluate the overall diagnostic accuracy. For the 36 patients, CF was
the first feature selected by AdaBoost and also showed a significant difference (t-test, P=0.6%) between benign and
malignant nodules. However, the histogram features of SI and curvedness were not all significantly different. The accuracy of
LRA was 75%, while the accuracy of AdaBoost using all features was about 80% with cross-validation. Generally, SI,
curvedness, and CF may provide a comprehensive examination of nodule shape, which can be used in differentiating
benign from malignant SPNs. However, other types of features (such as texture and angiogenesis) should be combined with
shape information to assist radiologists in characterizing SPNs more accurately.
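The three shape descriptors have standard closed forms. The sketch below uses the Koenderink definitions of shape index and curvedness from principal curvatures, and one common sphere-normalized definition of compactness; the abstract does not specify which normalization of CF the authors used, so that choice is an assumption.

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index from principal curvatures (k1 >= k2):
    +1 spherical cap, 0 saddle, -1 spherical cup."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvedness(k1, k2):
    """Koenderink curvedness: overall magnitude of surface bending."""
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)

def compactness_factor(volume, surface_area):
    """One common sphere-normalized compactness: 36*pi*V^2/S^3,
    equal to 1 for a perfect sphere and < 1 for irregular shapes."""
    return 36.0 * np.pi * volume ** 2 / surface_area ** 3
```

Histograms of per-vertex shape index and curvedness over the nodule surface then summarize whether the surface is predominantly cap-like (smooth, often benign) or saddled and spiculated.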
Characterization of solid pulmonary nodules using three-dimensional features
Show abstract
With the development of high-resolution, multirow-detector CT scanners, the prospects for diagnosing and
treating lung cancer at an early stage are much improved. However, it is often difficult to determine whether a
nodule, especially a small nodule, is malignant from a single CT scan. We developed a computer-aided diagnostic
algorithm to distinguish benign from malignant solid nodules based on features that can be extracted from a
single CT scan. Our method uses 3D geometric and densitometric moment analysis of a segmented nodule image
and surface curvature from a polygonal surface model of the nodule. After excluding features directly related
to size, we computed a total of 28 features. Prior to classification, the number of features was reduced through
stepwise feature selection. The features are used by two classifiers, k-nearest-neighbors (k-NN) and logistic
regression. We used 48 malignant nodules whose status was determined by biopsy or resection, and 55 benign
nodules determined to be clinically stable through two years of no change or biopsy. The k-NN classifier achieved
a sensitivity of 0.81 with a specificity of 0.76, while the logistic regression classifier achieved a sensitivity of 0.85
and a specificity of 0.80.
False positive reduction for lung nodule CAD
Show abstract
Computer-aided detection (CAD) algorithms 'automatically' identify lung nodules on thoracic multi-slice CT scans
(MSCT) thereby providing physicians with a computer-generated 'second opinion'. While CAD systems can achieve
high sensitivity, their limited specificity has hindered clinical acceptance. To overcome this problem, we propose a false
positive reduction (FPR) system based on image processing and machine learning to reduce the number of false positive
lung nodules identified by CAD algorithms and thereby improve system specificity.
To discriminate between true and false nodules, twenty-three 3D features were calculated from each candidate nodule's
volume of interest (VOI). A genetic algorithm (GA) and support vector machine (SVM) were then used to select an
optimal subset of features from this pool of candidate features. Using this feature subset, we trained an SVM classifier to
eliminate as many false positives as possible while retaining all the true nodules. To overcome the imbalanced nature of
typical datasets (significantly more false positives than true positives), an intelligent data selection algorithm was
designed and integrated into the machine learning framework, thus further improving the FPR rate.
Three independent datasets were used to train and validate the system. Using two datasets for training and the third for
validation, we achieved a 59.4% FPR rate while removing one true nodule on the validation datasets. In a second
experiment, 75% of the cases were randomly selected from each of the three datasets and the remaining cases were used
for validation. A similar FPR rate and true-positive retention rate were achieved. Additional experiments showed that the
GA feature selection process integrated with the proposed data selection algorithm outperforms the one without it by
a 5%-10% FPR rate.
The methods proposed can be also applied to other application areas, such as computer-aided diagnosis of lung nodules.
Extrapolation techniques for textural characterization of tissue in medical images
Show abstract
The low in-plane resolution of thoracic computed tomography (CT) scans may force texture analysis in
regions of interest (ROIs) that are not completely filled by the tissue under analysis. The inclusion of
extraneous tissue textures within the ROI may substantially contaminate these texture descriptor values.
The goal of this study is to investigate the accuracy of different image extrapolation methods when
calculating common texture descriptor values. Three extrapolation methods (mean fill, tiled fill, and
CLEAN deconvolution) were applied to 480 lung parenchyma regions of interest (ROIs) extracted from
transverse thoracic CT sections. The ROIs were artificially corrupted, and each extrapolation method was
independently applied to create extrapolation-corrected ROIs. Texture descriptor values were calculated
and compared for the original, corrupted, and extrapolation-corrected ROIs. For 51 of 53 texture
descriptors, the values calculated from extrapolation-corrected ROIs were more accurate than values
calculated from corrupted ROIs. Further, a "best" extrapolation method for all texture descriptors was not
identified, which implies that the choice of extrapolation method depends on the texture descriptors applied
in a given tissue classification scheme.
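The simplest of the three methods, mean fill, replaces the extraneous pixels with the mean grey level of the valid tissue before texture descriptors are computed. The sketch below demonstrates the effect on one descriptor (variance); the ROI data and mask are synthetic illustrations.

```python
import numpy as np

def mean_fill(roi, valid_mask):
    """Mean-fill extrapolation: replace pixels outside the tissue of
    interest with the mean grey level of the valid pixels."""
    out = roi.astype(float).copy()
    out[~valid_mask] = roi[valid_mask].mean()
    return out

# Parenchyma ROI partially corrupted by bright chest-wall pixels.
rng = np.random.default_rng(0)
original = rng.normal(100.0, 5.0, size=(16, 16))
valid = np.ones((16, 16), dtype=bool)
valid[:, 12:] = False                        # extraneous-tissue columns
corrupted = original.copy()
corrupted[~valid] = 300.0
fixed = mean_fill(corrupted, valid)

# A simple texture descriptor (variance) recovers much of its accuracy:
v_orig, v_bad, v_fix = original.var(), corrupted.var(), fixed.var()
```

The corrected descriptor is much closer to the value computed on the uncorrupted ROI than the descriptor computed on the corrupted ROI, mirroring the 51-of-53 result reported above.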
Quantitative kinetic analysis of lung nodules by temporal subtraction technique in dynamic chest radiography with a flat panel detector
Show abstract
Early detection and treatment of lung cancer is one of the most effective means of reducing cancer mortality, and chest
X-ray radiography has been widely used as a screening examination or health checkup. A new examination
method, enabled by the development of computer analysis systems, obtains respiratory kinetics with a flat
panel detector (FPD) and extends conventional chest X-ray radiography. As a result, functional
evaluation of respiratory kinetics in the chest has become available, and its introduction into clinical practice is expected
in the future. In this study, we developed a computer analysis algorithm for detecting lung nodules
and evaluating quantitative kinetics. Breathing chest radiographs obtained with a modified FPD were converted into four
static feature images by sequential temporal subtraction processing, morphologic enhancement
processing, kinetic visualization processing, and lung region detection processing, after a breath synchronization
process utilizing diaphragmatic analysis of the vector movement. An artificial neural network analyzing
the density patterns detected the true nodules in these static images and drew their kinetic tracks. In an evaluation of
algorithm performance and clinical effectiveness with 7 normal patients and simulated nodules,
the method showed sufficient detection capability and kinetic imaging function without statistically significant difference.
Our technique can quantitatively evaluate the kinetic range of nodules, and is effective in detecting a nodule on a
breathing chest radiograph. Moreover, the application of this technique is expected to extend computer-aided
diagnosis systems and facilitate the development of an automatic planning system for radiation therapy.
3D temporal subtraction on multislice CT images using nonlinear warping technique
Show abstract
The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time consuming and
difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval
changes between previous and current multislice CT images based on a nonlinear image warping technique. Our
method provides a subtraction CT image which is obtained by subtraction of a previous CT image from a current CT
image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our
computerized method includes global and local image matching techniques for accurate registration of current and
previous CT images. For global image matching, we selected the corresponding previous section image for each
current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred
previous CT image. For local image matching, we applied the 3D template matching technique with translation and
rotation of volumes of interest (VOIs) selected in the current and previous CT images. The local shift
vector for each VOI pair was determined when the cross-correlation value became the maximum in the 3D template
matching. The local shift vectors at all voxels were determined by interpolation of shift vectors of VOIs, and then the
previous CT image was nonlinearly warped according to the shift vector for each voxel. Finally, the warped previous
CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical
cases. The normal background structures such as vessels, ribs, and heart were removed without large misregistration
artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on subtraction CT
images.
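The local-matching step, finding the shift that maximizes cross-correlation between corresponding VOIs, can be sketched as a brute-force integer search (shown in 2D for brevity; the paper's method operates on 3D VOIs with rotation as well, and the function name is illustrative).

```python
import numpy as np

def local_shift(current, previous, max_shift=3):
    """Integer (dy, dx) maximizing the correlation between a
    current-image VOI and the shifted previous-image VOI."""
    best, best_c = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(previous, dy, axis=0), dx, axis=1)
            c = float(np.sum(current * shifted))
            if c > best_c:
                best_c, best = c, (dy, dx)
    return best

# A structure that moved by (2, 1) voxels between the two scans:
prev = np.zeros((20, 20)); prev[8, 8] = 1.0
curr = np.zeros((20, 20)); curr[10, 9] = 1.0
shift = local_shift(curr, prev)
```

Interpolating such per-VOI shift vectors to every voxel yields the dense warp field used to deform the previous scan before subtraction.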
Automatic two-step detection of pulmonary nodules
Show abstract
We present a computer-aided diagnosis (CAD) system to detect small (2 mm to around 10 mm) pulmonary nodules in helical CT scans. A pulmonary nodule is a small, round (parenchymal) or worm-shaped (juxtapleural) lesion in the lungs; both types have greater radiodensity than the lung parenchyma. Lung nodules may indicate lung cancer, and their detection at an early stage improves the survival rate of patients. CT is considered the most accurate imaging modality for nodule detection. However, the large amount of data per examination makes interpretation difficult and can lead to nodules being overlooked by the radiologist. The presented CAD system is designed to help lower the number of such omissions. Our system uses two different schemes to locate juxtapleural and parenchymal nodules. For juxtapleural nodules, morphological closing and thresholding are used to find nodule candidates. To locate non-pleural nodule candidates, a 3D blob detector applies multiscale filtering, and an ellipsoid model is fitted to each candidate. To decide which of the candidates are in fact nodules, an additional classification step using linear and multi-threshold classifiers is applied. The system was tested on 18 cases (4853 slices), reaching a total sensitivity of 96% at about 12 false positives per slice. The classification step reduces the number of false positives to 9 per slice without significantly decreasing sensitivity (89.6%).
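The juxtapleural candidate step can be illustrated with a minimal sketch: morphological closing of the lung mask fills small dense indentations of the pleural wall, and the filled-in voxels become candidates. The HU threshold and structuring-element size below are assumptions for illustration, not values from the paper:

```python
import numpy as np
from scipy import ndimage

def juxtapleural_candidates(hu, lung_thresh=-400, close_size=3):
    """Candidates are the voxels that morphological closing adds to the
    lung mask, i.e. dense indentations of the pleural wall."""
    lung = hu < lung_thresh                    # crude parenchyma mask
    elem = np.ones((close_size,) * 3, bool)    # cubic structuring element
    closed = ndimage.binary_closing(lung, structure=elem)
    labels, n = ndimage.label(closed & ~lung)  # filled-in indentations
    return labels, n

# toy volume: air-filled lung, a chest wall, and one dense nodule on the wall
hu = np.full((11, 11, 11), -800.0)
hu[:, :, :5] = 100.0        # chest wall / mediastinum
hu[5, 5, 5] = 40.0          # small nodule indenting the lung boundary
labels, n = juxtapleural_candidates(hu)
print(n)  # → 1
```

Closing fills only concavities narrower than the structuring element, which is what makes wall-attached nodules (rather than the wall itself) stand out.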
Algorithm of pulmonary emphysema extraction using thoracic 3D CT images
Show abstract
Owing to aging populations and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desirable. We describe a quantitative algorithm for extracting emphysematous lesions and evaluating their distribution patterns using low-dose thoracic 3D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to baseline and follow-up thoracic 3D CT images, we demonstrate its potential to assist radiologists and physicians in quantitatively evaluating the distribution of emphysematous lesions and their evolution over time.
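The LAA extraction can be sketched as a density threshold inside the lung mask. The abstract does not state the paper's cutoff, so the commonly used -950 HU threshold is assumed here:

```python
import numpy as np

def laa_percentage(hu, lung_mask, laa_thresh=-950.0):
    """Percentage of lung voxels classified as low attenuation area (LAA)."""
    laa = (hu < laa_thresh) & lung_mask
    return 100.0 * laa.sum() / lung_mask.sum()

# toy volume: 1000 lung voxels, 100 of them emphysematous
hu = np.full((10, 10, 10), -850.0)   # normal parenchyma
hu[0] = -970.0                       # destroyed, air-filled tissue
lung = np.ones_like(hu, dtype=bool)
print(laa_percentage(hu, lung))  # → 10.0
```

Comparing this percentage between baseline and follow-up scans gives the kind of interval-change measure the abstract describes.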
Application of supervised range-constrained thresholding to extract lung pleura for automated detection of pleural thickenings from thoracic CT images
Show abstract
We develop an image analysis system to automatically detect pleural thickenings and assess their characteristic values
from patients' thoracic spiral CT images. Algorithms are described to carry out the segmentation of pleural contours and
to find the pleural thickenings. Thresholding was selected as the technique to separate lung tissue from other structures; instead of thresholding based only on empirical considerations, the so-called "supervised range-constrained thresholding" is applied. The automatic detection of pleural thickenings is carried out based on the examination of their concavity and on the characteristic Hounsfield units of tumorous tissue. After detection of the pleural thickenings, in order to
assess their growth rate, a spline-based interpolation technique is used to create a model of healthy pleura. Based on this
healthy model, the size of the pleural thickenings is calculated. In conjunction with the spatio-temporal matching of CT
images acquired at different times, the oncopathological assessment of morbidity can be documented. A graphical user
interface is provided which is also equipped with 3D visualization of the pleura. Our overall aim is to develop an image
analysis system for an efficient and reliable diagnosis of early stage pleural mesothelioma in order to ease the
consequences of the expected peak of malignant pleural mesothelioma caused by asbestos exposure.
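The healthy-pleura model can be illustrated in a simplified 1D form: contour samples under a detected thickening are discarded and re-interpolated with a spline through the surrounding healthy points, and the thickening size is the difference between measured and modeled contours. All names and values below are illustrative, not from the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def healthy_contour(x, y, thickening):
    """Re-interpolate contour heights y(x) across a boolean thickening mask
    using only the healthy (unmasked) samples."""
    spline = CubicSpline(x[~thickening], y[~thickening])
    return spline(x)

x = np.arange(20, dtype=float)
y = np.zeros(20)                  # flat healthy pleura in this toy example
mask = np.zeros(20, dtype=bool)
y[8:12] = [2.0, 3.0, 3.0, 2.0]    # detected thickening bulging into the lung
mask[8:12] = True
model = healthy_contour(x, y, mask)
size = float(np.sum(y - model))   # thickening size vs. the healthy model
print(size)  # → 10.0
```

The paper works on 3D pleural surfaces, but the principle is the same: the spline bridges the thickened region using healthy neighbours as support points.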
Extracting alveolar structure of human lung tissue specimens based on surface skeleton representation from 3D micro-CT images
Show abstract
We have developed a micro-CT system for understanding lung function at high resolution on the micrometer order (up to 5 µm spatial resolution). The micro-CT system enables resected lung specimens to be observed at the micro level and is expected to contribute substantially to the study of micro-organ morphology and image-based diagnosis. In this research, we develop a system to visualize and analyze lung microstructures in three dimensions from micro-CT images. These images are characterized by noise regions with high CT values, which makes it difficult to extract the alveolar walls by threshold processing alone. We are therefore developing a method for extracting the alveolar walls with a surface thinning algorithm. In this report, we propose a method that reduces the excessive degeneracy of figures caused by the surface thinning process, and we apply the algorithm to micro-CT images of an actual pulmonary specimen. The results show that the alveolar walls can be extracted with high precision.
Image-based diagnostic aid for interstitial lung disease with secondary data integration
Show abstract
Interstitial lung diseases (ILDs) are a relatively heterogeneous group of around 150 illnesses with often very
unspecific symptoms. The most complete imaging method for the characterisation of ILDs is the high-resolution
computed tomography (HRCT) of the chest but a correct interpretation of these images is difficult even for
specialists as many diseases are rare and thus little experience exists. Moreover, interpreting HRCT images
requires knowledge of the context defined by clinical data of the studied case. A computerised diagnostic aid tool
based on HRCT images with associated medical data to retrieve similar cases of ILDs from a dedicated database
can provide quick and valuable information, for example for emergency radiologists. The experience from a pilot project highlighted the need for a detailed database containing high-quality annotations in addition to clinical data.
The state of the art is studied to identify requirements for image-based diagnostic aid for interstitial lung
disease with secondary data integration. The data acquisition steps are detailed. The selection of the most
relevant clinical parameters is done in collaboration with lung specialists from current literature, along with
knowledge bases of computer-based diagnostic decision support systems. In order to perform high-quality annotations of the interstitial lung tissue in the HRCT images, annotation software with its own file format for DICOM images is implemented. A multimedia database is implemented to store ILD cases with clinical
data and annotated image series. Cases from the University & University Hospitals of Geneva (HUG) are
retrospectively and prospectively collected to populate the database. Currently, 59 cases with certified diagnosis
and their clinical parameters are stored in the database as well as 254 image series of which 26 have their regions
of interest annotated.
The available data was used to test primary visual features for the classification of lung tissue patterns. These
features show good discriminative properties for the separation of five classes of visual observations.
Labeling the pulmonary arterial tree in CT images for automatic quantification of pulmonary embolism
Show abstract
Contrast-enhanced CT Angiography has become an accepted diagnostic tool for detecting Pulmonary Embolism (PE).
The CT obstruction index proposed by Qanadli, which is based on the number of obstructed arterial segments, enables
the quantification of PE severity. Because the required manual identification of twenty arterial segments is time-consuming, we propose a method for automated labeling of the pulmonary arterial tree to identify the arterial segments.
Assuming that the peripheral parts of the arterial tree contain most relevant information for labeling, we propose a
bottom-up labeling algorithm exploiting the spatial information of the peripheral arteries. A model of reference positions
of the arterial segments was trained using manually labeled trees of 9 patients. To improve accuracy, the arterial tree was
partitioned into sub-trees enabling an iterative labeling technique that labels each sub-tree separately. The accuracy of
the labeling technique was evaluated using manually labeled trees of 10 patients. Initially an accuracy of 74% was
obtained, whereas the iterative approach improved accuracy to 85%. The labeling errors had minor effects on the
calculated Qanadli index. Therefore, the presented labeling approach is applicable in automated PE quantification.
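For reference, the Qanadli obstruction index mentioned above scores each of the 20 segmental arteries (10 per lung) and reports the total as a percentage of the maximum score of 40. The sketch below is a simplified per-segment version (in the original scheme, a proximal clot is weighted by the number of segmental arteries arising distal to it):

```python
def qanadli_index(degrees):
    """CT obstruction index: per-segment degrees summed and expressed as a
    percentage of the maximum score (20 segments x degree 2 = 40).
    Degree: 0 = patent, 1 = partially occluded, 2 = completely occluded."""
    assert len(degrees) == 20 and all(d in (0, 1, 2) for d in degrees)
    return 100.0 * sum(degrees) / 40.0

# e.g. four partially and two completely obstructed segments
degrees = [1, 1, 1, 1, 2, 2] + [0] * 14
print(qanadli_index(degrees))  # → 20.0
```

This makes clear why labeling errors matter less than they might seem: a mislabel that keeps a clot in the correct lung region often leaves the per-segment sum, and hence the index, unchanged.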
An automated system for lung nodule detection in low-dose computed tomography
Show abstract
A computer-aided detection (CAD) system for the identification of pulmonary nodules in low-dose multi-detector helical
Computed Tomography (CT) images was developed in the framework of the MAGIC-5 Italian project. One of the main
goals of this project is to build a distributed database of lung CT scans in order to enable automated image analysis through a data and CPU GRID infrastructure.
The basic modules of our lung-CAD system, a dot-enhancement filter for nodule candidate selection and a neural
classifier for false-positive finding reduction, are described. The system was designed and tested for both internal and
sub-pleural nodules. The results obtained on the collected database of low-dose thin-slice CT scans are shown in terms of
free response receiver operating characteristic (FROC) curves and discussed.
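The abstract does not detail the dot-enhancement filter, but filters of this kind are typically built from Hessian eigenvalues: a bright blob yields three negative eigenvalues of similar magnitude, while vessels and walls do not. A generic sketch with an illustrative scale parameter, not the project's actual filter, follows:

```python
import numpy as np
from scipy import ndimage

def dot_enhancement(vol, sigma=2.0):
    """Hessian-eigenvalue dot filter: response |l3|^2 / |l1| where
    |l1| >= |l2| >= |l3| and all three eigenvalues are negative."""
    sm = ndimage.gaussian_filter(vol.astype(float), sigma)
    grads = np.gradient(sm)
    hess = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(grads[i])          # second derivatives, row i
        for j in range(3):
            hess[..., i, j] = gi[j]
    eig = np.sort(np.linalg.eigvalsh(hess), axis=-1)  # ascending order
    out = np.zeros(vol.shape)
    blob = eig[..., 2] < 0                  # all three eigenvalues negative
    # eig[..., 0] is the most negative (largest magnitude) eigenvalue
    out[blob] = eig[..., 2][blob] ** 2 / np.abs(eig[..., 0][blob])
    return out

# toy volume containing a single bright blob
vol = np.zeros((21, 21, 21))
vol[10, 10, 10] = 100.0
out = dot_enhancement(vol)
peak = np.unravel_index(out.argmax(), out.shape)
```

For a spherical blob the three eigenvalues are nearly equal, so the response approaches the blob's curvature magnitude; for line-like vessels the smallest-magnitude eigenvalue is near zero and the response is suppressed.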
Automated anatomical labeling algorithm of bronchial branches based on multi-slice CT images
Show abstract
With the development of multi-slice CT technology, high-contrast and thin-slice images have become available. However, physicians must now read many more images, so their workload increases, and algorithms that analyze the internal structures of the lung are therefore desired. Detailed analysis of these structures can aid the early detection of nodules. In particular, analysis of the bronchi provides useful information for detecting airway disease and for classifying the pulmonary veins and arteries. In this paper, we describe a method for automated anatomical labeling of bronchial branches based on multi-slice CT images.