Hyperspectral signature analysis of skin parameters
Author(s):
Saurabh Vyas;
Amit Banerjee;
Luis Garza;
Sewon Kang;
Philippe Burlina
The temporal analysis of changes in biological skin parameters, including melanosome concentration, collagen concentration and blood oxygenation, may serve as a valuable tool in diagnosing the progression of malignant skin cancers and in understanding the pathophysiology of cancerous tumors. Quantitative knowledge of these parameters can also be useful in applications such as wound assessment, and point-of-care diagnostics, amongst others. We propose an approach to estimate in vivo skin parameters using a forward computational model based on Kubelka-Munk theory and the Fresnel Equations. We use this model to map the skin parameters to their corresponding hyperspectral signature. We then use machine learning based regression to develop an inverse map from hyperspectral signatures to skin parameters. In particular, we employ support vector machine based regression to estimate the in vivo skin parameters given their corresponding hyperspectral signature. We build on our work from SPIE 2012, and validate our methodology on an in vivo dataset. This dataset consists of 241 signatures collected from in vivo hyperspectral imaging of patients of both genders and Caucasian, Asian and African American ethnicities. In addition, we also extend our methodology past the visible region and through the short-wave infrared region of the electromagnetic spectrum. We find promising results when comparing the estimated skin parameters to the ground truth, demonstrating good agreement with well-established physiological precepts. This methodology can have potential use in non-invasive skin anomaly detection and for developing minimally invasive pre-screening tools.
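As a hedged illustration of the regression step described above, the sketch below trains an RBF-kernel support vector regressor to invert a forward model from skin parameters to spectra. The forward model, wavelength range, and parameter values here are simplified placeholders, not the Kubelka-Munk/Fresnel model or the in vivo data of the paper.

```python
# Illustrative sketch (not the authors' implementation): learn an inverse map
# from hyperspectral signatures to skin parameters with support vector regression.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def forward_model(params, wavelengths):
    """Placeholder forward model mapping skin parameters to a reflectance
    spectrum; the paper uses Kubelka-Munk theory and the Fresnel equations."""
    melanin, collagen, oxygenation = params
    return (np.exp(-melanin * wavelengths / 1000.0)
            + 0.1 * collagen * np.sin(wavelengths / 100.0)
            + 0.05 * oxygenation)

wavelengths = np.linspace(450, 1700, 120)        # visible through SWIR (nm)
params = rng.uniform(0.0, 1.0, size=(500, 3))    # synthetic parameter samples
spectra = np.array([forward_model(p, wavelengths) for p in params])

# Train one RBF-kernel SVR per skin parameter on the simulated signatures.
inverse_map = make_pipeline(
    StandardScaler(),
    MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01)))
inverse_map.fit(spectra, params)

# Estimate parameters for a new (here, simulated) signature.
test_spectrum = forward_model([0.3, 0.6, 0.8], wavelengths)[None, :]
print(inverse_map.predict(test_spectrum))
```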
Down syndrome detection from facial photographs using machine learning techniques
Author(s):
Qian Zhao;
Kenneth Rosenbaum;
Raymond Sze;
Dina Zand;
Marshall Summar;
Marius George Linguraru
Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk of heart defects, respiratory and hearing problems, and early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome, and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted, image-based facial dysmorphology assessment. Geometric features based on facial anatomical landmarks, and local texture features based on the Contourlet transform and the local binary pattern, are investigated to represent facial characteristics. Then a support vector machine classifier is used to discriminate between normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among the geometric, local texture and combined features was performed using leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than those obtained using only geometric or texture features. The promising results indicate that our method has the potential for automated assessment of Down syndrome from simple, non-invasive imaging data.
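A minimal sketch of the leave-one-out SVM evaluation mentioned above is given below; the feature matrix and labels are random placeholders standing in for the geometric and texture features, not the study's data.

```python
# Minimal sketch of leave-one-out evaluation of an SVM classifier on combined
# geometric + texture feature vectors (feature extraction itself is omitted;
# X and y here are hypothetical placeholders, not the study's data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import accuracy_score, precision_score, recall_score

X = np.random.rand(48, 60)          # 48 subjects, 60 combined features
y = np.random.randint(0, 2, 48)     # 1 = Down syndrome, 0 = healthy control

predictions = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X[train_idx], y[train_idx])
    predictions[test_idx] = clf.predict(X[test_idx])

print("accuracy :", accuracy_score(y, predictions))
print("precision:", precision_score(y, predictions))
print("recall   :", recall_score(y, predictions))
```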
Region detection in medical images using HOG classifiers and a body landmark network
Author(s):
Marius Erdt;
Oliver Knapp;
Klaus Drechsler;
Stefan Wesarg
Automatic detection of anatomical structures and regions in 3D medical images is important for several computer-aided diagnosis tasks. In this work, a new method for simultaneous detection of multiple anatomical areas is proposed. The method consists of two steps: first, single rectangular region candidates are detected independently using 3D variants of Histograms of Oriented Gradients (HOG) features. These features are robust against the small changes in rotation and scale between regions that typically occur between different individuals. In a second step, the positions of the detected candidates are refined by incorporating a body landmark network that exploits anatomical relations between different structures. The landmark network consists of a principal-component-based statistical model of the relative positions between the detected regions in training images. The method has been evaluated on thoracic/abdominal CT images of the portal venous phase. In 216 CT images, eight different structures have been trained. Results show an increase in performance when using the combination of HOGs and the landmark network in comparison to using independent classifiers without anatomical relations.
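A rough sketch of a 3D HOG-style descriptor is shown below; it bins gradient orientations (azimuth and elevation) into a magnitude-weighted histogram over a sub-volume. This is a simplification for illustration, not the authors' feature implementation.

```python
# Rough sketch of a 3D HOG-style descriptor for a rectangular region of a CT
# volume: gradient orientations (azimuth/elevation) are binned into a
# magnitude-weighted histogram. A simplification, not the authors' code.
import numpy as np

def hog3d(region, azimuth_bins=8, elevation_bins=4):
    gz, gy, gx = np.gradient(region.astype(float))
    magnitude = np.sqrt(gx**2 + gy**2 + gz**2)
    azimuth = np.arctan2(gy, gx)                        # [-pi, pi]
    elevation = np.arctan2(gz, np.sqrt(gx**2 + gy**2))  # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        azimuth.ravel(), elevation.ravel(),
        bins=[azimuth_bins, elevation_bins],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]],
        weights=magnitude.ravel())
    hist = hist.ravel()
    return hist / (np.linalg.norm(hist) + 1e-8)         # L2-normalised descriptor

volume = np.random.rand(64, 64, 64)      # placeholder CT sub-volume
descriptor = hog3d(volume[8:40, 8:40, 8:40])
print(descriptor.shape)                  # (32,)
```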
Automatic segmentation of kidneys from non-contrast CT images using efficient belief propagation
Author(s):
Jianfei Liu;
Marius George Linguraru;
Shijun Wang;
Ronald M. Summers
CT colonography (CTC) can increase the chance of detecting high-risk lesions not only within the colon but anywhere in the abdomen, at low cost. Extracolonic findings such as calculi and masses are frequently found in the kidneys on CTC. Accurate kidney segmentation is an important step toward detecting extracolonic findings in the kidneys. However, non-contrast CTC images make the task of kidney segmentation substantially challenging because the intensity values of kidney parenchyma are similar to those of adjacent structures. In this paper, we present a fully automatic kidney segmentation algorithm to support extracolonic diagnosis from CTC data. It is built upon three major contributions: 1) localizing kidney search regions by exploiting the segmented liver and spleen as well as body symmetry; 2) constructing a probabilistic shape prior that handles the issue of the kidney touching other organs; 3) employing efficient belief propagation on the shape prior to extract the kidneys. We evaluated the accuracy of our algorithm on five non-contrast CTC datasets with manual kidney segmentation as the ground truth. The Dice volume overlaps were 88%/89%, the root-mean-squared errors were 3.4 mm/2.8 mm, and the average surface distances were 2.1 mm/1.9 mm for the left/right kidney, respectively. We also validated the robustness on 27 additional CTC cases, of which 23 datasets were successfully segmented. In four problematic cases, the segmentation of the left kidney failed due to problems with the spleen segmentation. The results demonstrate that the proposed algorithm can automatically and accurately segment kidneys from CTC images, given prior correct segmentation of the liver and spleen.
Robust detection of renal calculi from non-contrast CT images using TV-flow and MSER features
Author(s):
Jianfei Liu;
Shijun Wang;
Marius George Linguraru;
Ronald M. Summers
Renal calculi are one of the most painful urologic disorders, leading to 3 million treatments per year in the United States. The objective of this paper is the automated detection of renal calculi from CT colonography (CTC) images, on which they are one of the major extracolonic findings. However, the primary purpose of CTC protocols is not the detection of renal calculi but screening for colon cancer. The kidneys are imaged with significant amounts of noise in the non-contrast CTC images, which makes the detection of renal calculi extremely challenging. We propose a computer-aided diagnosis method to detect renal calculi in CTC images. It is built on three novel techniques: 1) total variation (TV) flow to reduce image noise while preserving calculi, 2) maximally stable extremal region (MSER) features to find calculus candidates, and 3) salient feature descriptors based on intensity properties to train a support vector machine classifier and filter false positives. We selected 23 CTC cases with 36 renal calculi to analyze the detection algorithm. The calculus size ranged from 1.0 mm to 6.8 mm. Fifteen cases were selected as the training dataset, and the remaining eight cases were used as the testing dataset. The area under the receiver operating characteristic curve (AUC) values were 0.92 on the training dataset and 0.93 on the testing dataset. The AUC confidence interval reported by ROCKIT was [0.8799, 0.9591] for the testing dataset and [0.8974, 0.9642] for the training dataset. These encouraging results demonstrate that our detection algorithm can robustly and accurately identify renal calculi from CTC images.
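The candidate-generation idea (TV-based denoising followed by MSER detection) can be sketched as follows; skimage's TV-Chambolle filter and OpenCV's 2D MSER detector are used here as stand-ins for the paper's 3D TV flow and feature pipeline, and the slice data are placeholders.

```python
# Sketch of candidate generation: total-variation denoising followed by MSER
# detection on a 2D slice. The paper uses TV flow on 3D CT data; here
# skimage's TV-Chambolle filter and OpenCV's MSER serve as simple stand-ins.
import numpy as np
import cv2
from skimage.restoration import denoise_tv_chambolle

slice_hu = np.random.uniform(-100, 400, size=(256, 256))  # placeholder CT slice (HU)

# 1) Reduce noise while preserving small high-contrast structures (calculi).
denoised = denoise_tv_chambolle(slice_hu, weight=0.1)

# 2) Rescale to 8 bit and detect maximally stable extremal regions as candidates.
img8 = cv2.normalize(denoised, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
mser = cv2.MSER_create()
mser.setDelta(5)        # illustrative settings, not the paper's parameters
mser.setMinArea(3)
mser.setMaxArea(200)
regions, _ = mser.detectRegions(img8)
print(f"{len(regions)} calculus candidates detected")
```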
Preliminary results of automated removal of degenerative joint disease in bone scan lesion segmentation
Author(s):
Gregory H. Chu;
Pechin Lo;
Hyun J. Kim;
Martin Auerbach;
Jonathan Goldin;
Keith Henkel;
Ashley Banola;
Darren Morris;
Heidi Coy;
Matthew S. Brown
Whole-body bone scintigraphy (or bone scan) is a highly sensitive method for visualizing bone metastases and is the accepted standard imaging modality for detection of metastases and assessment of treatment outcomes. The development of a quantitative biomarker using computer-aided detection on bone scans for treatment response assessment may have a significant impact on the evaluation of novel oncologic drugs directed at bone metastases. One of the challenges to lesion segmentation on bone scans is the non-specificity of the radiotracer, manifesting as high activity related to non-malignant processes such as degenerative joint disease and uptake in the sinuses, kidneys, thyroid and bladder. In this paper, we developed an automated bone scan lesion segmentation method that implements intensity normalization, a two-threshold model, and automated detection and removal of areas consistent with non-malignant processes from the segmentation. The two-threshold model serves to account for outlier bone scans with elevated and diffuse intensity distributions. Parameters to remove degenerative joint disease were trained using a multi-start Nelder-Mead simplex optimization scheme. The segmentation reference standard was constructed manually by a panel of physicians. We compared the performance of the proposed method against a previously published method. The results of a two-fold cross-validation show that the overlap ratio improved in 67.0% of scans, with an average improvement of 5.1 percentage points.
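The multi-start Nelder-Mead tuning mentioned above can be sketched as below; the two parameters and the objective function are hypothetical stand-ins for the actual degenerative-joint-disease removal parameters and overlap score.

```python
# Sketch of multi-start Nelder-Mead tuning of two hypothetical parameters that
# control removal of degenerative-joint-disease uptake (e.g. an intensity cutoff
# and a joint-margin radius). The objective function is a stand-in for the
# real overlap-with-reference score.
import numpy as np
from scipy.optimize import minimize

def negative_mean_overlap(params):
    """Placeholder: would compute (negated) mean lesion overlap with the
    physician reference standard after removing DJD regions with these parameters."""
    cutoff, margin_mm = params
    return -np.exp(-((cutoff - 1.6) ** 2 + (margin_mm - 12.0) ** 2 / 50.0))

rng = np.random.default_rng(42)
best = None
for start in rng.uniform([1.0, 5.0], [3.0, 25.0], size=(10, 2)):  # 10 random starts
    res = minimize(negative_mean_overlap, start, method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-4})
    if best is None or res.fun < best.fun:
        best = res

print("optimal (cutoff, margin):", best.x)
```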
Segmenting the thoracic, abdominal and pelvic musculature on CT scans combining atlas-based model and active contour model
Author(s):
Weidong Zhang;
Jiamin Liu;
Jianhua Yao;
Ronald M. Summers
Segmentation of the musculature is very important for accurate organ segmentation, analysis of body composition, and localization of tumors in the muscle. In the research fields of computer-assisted surgery and computer-aided diagnosis (CAD), muscle segmentation in CT images is a necessary pre-processing step. This task is particularly challenging due to the large variability in muscle structure and the overlap in intensity between muscle and internal organs. This problem has not been solved completely, especially for all of the thoracic, abdominal and pelvic regions. We propose an automated system to segment the musculature on CT scans. The method combines an atlas-based model, an active contour model and prior segmentation of fat and bones. First, the body contour, fat and bones are segmented using existing methods. Second, atlas-based models are pre-defined using anatomic knowledge at multiple key positions in the body to handle the large variability in muscle shape. Third, the atlas model is refined using active contour models (ACM) that are constrained by the pre-segmented bone and fat. Before refinement with the ACM, the initialized atlas model of the next slice is updated using the previous atlas. The muscle is segmented using thresholding and smoothed in 3D volume space. Thoracic, abdominal and pelvic CT scans were used to evaluate our method, and five key position slices for each case were selected and manually labeled as the reference. Compared with the reference ground truth, the overlap ratio of true positives is 91.1%±3.5%, and that of false positives is 5.5%±4.2%.
Automated measurement of diagnostic angles for hip dysplasia
Author(s):
Sepp de Raedt;
Inger Mechlenburg;
Maiken Stilling;
Lone Rømer;
Kjeld Søballe;
Marleen de Bruijne
A fully automatic method for measuring diagnostic angles of hip dysplasia is presented. The method consists of the automatic segmentation of CT images and detection of anatomical landmarks on the femur and acetabulum. The standard angles used in the diagnosis of hip dysplasia are subsequently calculated automatically. Previous work on automating the measurement of these angles required manual segmentation or delineation of the articular joint surface. In the current work, automatic segmentation is established using graph cuts with a cost function based on a sheetness score to detect the sheet-like structure of the bone. Anatomical landmarks are subsequently detected using heuristics based on ray-tracing and the distance to the approximated acetabular joint surface. Standard diagnostic angles are finally calculated and presented for interpretation. Experiments using 26 patients showed good agreement with gold-standard manual measurements by an expert radiologist as performed in daily practice. The mean difference for the five angles was between −1.1 and 2.0 degrees, with a concordance correlation coefficient between 0.87 and 0.93. The standard deviation varied between 2.3 and 4.1 degrees. These values correspond to values found in evaluating interobserver and intraobserver variation for manual measurements. The method can be used in clinical practice to replace the current manual measurements performed by radiologists. In the future, the method will be integrated into an intraoperative surgical guidance system.
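Once landmarks are available, a diagnostic angle reduces to a vector computation; the sketch below shows a generic center-edge-style angle from hypothetical 3D landmark coordinates (not the paper's exact angle definitions).

```python
# Generic sketch of computing a diagnostic angle from detected 3D landmarks,
# e.g. a lateral center-edge-style angle between the longitudinal body axis and
# the line from the femoral head centre to the lateral acetabular rim.
# Coordinates below are hypothetical, in millimetres.
import numpy as np

def angle_between(u, v):
    cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

femoral_head_centre = np.array([32.0, 110.0, 540.0])
lateral_acetabular_rim = np.array([41.0, 104.0, 572.0])
longitudinal_axis = np.array([0.0, 0.0, 1.0])     # cranio-caudal direction

rim_direction = lateral_acetabular_rim - femoral_head_centre
print("centre-edge angle: %.1f degrees"
      % angle_between(rim_direction, longitudinal_axis))
```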
Bone age assessment using support vector regression with smart class mapping
Author(s):
Daniel Haak;
Jing Yu;
Hendrik Simon;
Hauke Schramm;
Thomas Seidl;
Thomas M. Deserno
Bone age assessment on hand radiographs is a frequent and time-consuming task for determining growth disturbances in the human body. Recently, an automatic processing pipeline, combining content-based image retrieval and support vector regression (SVR), has been developed. This approach was evaluated on 1,097 radiographs from the University of Southern California. Discretization of the continuous SVR prediction into age classes has previously been done by (i) truncation. In this paper, we apply novel approaches to mapping the continuous SVR output values: (ii) rounding, where 0.5 is added to the values before truncation; (iii) curve, where a linear mapping curve is applied between the age classes; and (iv) age, where artificial age classes are not used at all. We evaluate these methods on the age range of 0-18 years, and on 2-17 years for comparison with the commercial product BoneXpert, which uses an active shape approach. Our methods reach root-mean-square (RMS) errors of 0.80, 0.76 and 0.73 years, respectively, which is slightly below the performance of BoneXpert.
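The four mapping strategies for the continuous SVR output can be sketched as below; the parameters of the linear "curve" mapping are assumed for illustration and are not the paper's fitted values.

```python
# Sketch of the discretisation variants for continuous SVR bone-age output.
# The linear "curve" parameters below are illustrative assumptions, not the
# paper's fitted mapping.
import numpy as np

def truncate(y):                    # (i) drop the fractional part
    return np.floor(y)

def rounding(y):                    # (ii) add 0.5 before truncation
    return np.floor(y + 0.5)

def curve(y, slope=0.97, offset=0.2):   # (iii) assumed linear mapping, then rounding
    return np.floor(slope * y + offset + 0.5)

def age(y):                         # (iv) keep the continuous age, no classes
    return y

svr_output = np.array([7.2, 7.8, 12.49, 12.51])
for name, fn in [("truncation", truncate), ("rounding", rounding),
                 ("curve", curve), ("age", age)]:
    print(name, fn(svr_output))
```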
Cortical thickness estimation of the proximal femur from multi-view dual-energy X-ray absorptiometry (DXA)
Author(s):
N. Tsaousis;
A. H. Gee;
G. M. Treece;
K.E.S. Poole
Hip fracture is the leading cause of acute orthopaedic hospital admission amongst the elderly, with around a third of patients not surviving one year post-fracture. Although various preventative therapies are available, patient selection is difficult. The current state-of-the-art risk assessment tool (FRAX) ignores focal structural defects, such as cortical bone thinning, a critical component in characterizing hip fragility. Cortical thickness can be measured using CT, but this is expensive and involves a significant radiation dose. Instead, Dual-Energy X-ray Absorptiometry (DXA) is currently the preferred imaging modality for assessing hip fracture risk and is used routinely in clinical practice. Our ambition is to develop a tool to measure cortical thickness using multi-view DXA instead of CT. In this initial study, we work with digitally reconstructed radiographs (DRRs) derived from CT data as a surrogate for DXA scans: this enables us to compare directly the thickness estimates with the gold standard CT results. Our approach involves a model-based femoral shape reconstruction followed by a data-driven algorithm to extract numerous cortical thickness point estimates. In a series of experiments on the shaft and trochanteric regions of 48 proximal femurs, we validated our algorithm and established its performance limits using 20 views in the range 0°–171°: estimation errors were 0.19 ± 0.53 mm (mean ± one standard deviation). In a more clinically viable protocol using four views in the range 0°–51°, where no other bony structures obstruct the projection of the femur, measurement errors were −0.07 ± 0.79 mm.
Detection of vertebral degenerative disc disease based on cortical shell unwrapping
Author(s):
Hector E. Muñoz;
Jianhua Yao;
Joseph E. Burns;
Ronald M. Summers
Degenerative disc disease (DDD) can be identified as hyperdense regions of bone and osseous spur formation in the spine that become more prevalent with age. These regions can act as confounding factors in the search for alternative hyperdense foci such as neoplastic processes. We created a preliminary CAD system that detects DDD in the spine on CT images. After the spine is segmented, the cortical shell of each vertebral body is unwrapped onto a 2D map. Candidates are detected from the 2D map based on their intensity and gradient. The 2D detections are remapped into 3D space and a level set algorithm is applied to more fully segment the 3D lesions. Features generated from the unwrapped 2D map and 3D segmentation are combined to train a support vector machine (SVM) classifier. The classifier was trained on 20 cases with DDD, which were marked by a radiologist. The pre-SVM program detected 164/193 ground truth lesions. Preliminary results showed 69.65% sensitivity with a 95% confidence interval of (64.47%, 73.92%), at an average of 9.8 false positives per patient.
Comparison of demons deformable registration-based methods for texture analysis of serial thoracic CT scans
Author(s):
Alexandra R. Cunliffe;
Hania A. Al-Hallaq;
Xianhan M. Fei;
Rachel E. Tuohy;
Samuel G. Armato III
To determine how 19 image texture features may be altered by three image registration methods, “normal” baseline and follow-up computed tomography (CT) scans from 27 patients were analyzed. Nineteen texture feature values were calculated in over 1,000 32x32-pixel regions of interest (ROIs) randomly placed in each baseline scan. All three methods used demons registration to map baseline scan ROIs to anatomically matched locations in the corresponding transformed follow-up scan. For the first method, the follow-up scan transformation was subsampled to achieve a voxel size identical to that of the baseline scan. For the second method, the follow-up scan was transformed through affine registration to achieve global alignment with the baseline scan. For the third method, the follow-up scan was directly deformed to the baseline scan using demons deformable registration. Feature values in matched ROIs were compared using Bland-Altman 95% limits of agreement. For each feature, the range spanned by the 95% limits was normalized to the mean feature value to obtain the normalized range of agreement, nRoA. Wilcoxon signed-rank tests were used to compare nRoA values across features for the three methods. Significance for individual tests was adjusted using the Bonferroni method. nRoA was significantly smaller for affine-registered scans than for the resampled scans (p=0.003), indicating lower feature-value variability between baseline and follow-up scan ROIs using this method. For both of these methods, however, nRoA was significantly higher than when feature values were calculated directly on demons-deformed follow-up scans (p<0.001). Across features and methods, nRoA values remained below 26%.
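A minimal sketch of the agreement statistics used above (Bland-Altman 95% limits of agreement and the normalized range of agreement, nRoA) is given below, on synthetic feature values.

```python
# Sketch of the agreement statistics: Bland-Altman 95% limits of agreement
# between baseline and follow-up feature values, and the normalised range of
# agreement (nRoA = limit range / mean feature value). Data are synthetic.
import numpy as np

def normalized_range_of_agreement(baseline_values, followup_values):
    differences = followup_values - baseline_values
    mean_diff = differences.mean()
    sd_diff = differences.std(ddof=1)
    lower, upper = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff
    mean_feature = np.concatenate([baseline_values, followup_values]).mean()
    return 100.0 * (upper - lower) / mean_feature   # expressed in percent

rng = np.random.default_rng(1)
baseline = rng.normal(50.0, 5.0, size=1000)             # feature values in ROIs
followup = baseline + rng.normal(0.0, 2.0, size=1000)   # matched follow-up ROIs
print("nRoA: %.1f%%" % normalized_range_of_agreement(baseline, followup))
```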
Normalization of CT scans reconstructed with different kernels to reduce variability in emphysema measurements
Author(s):
L. Gallardo Estrella;
B. van Ginneken;
E. M. van Rikxoort
Chronic Obstructive Pulmonary Disease (COPD) is a lung disease characterized by progressive air flow limitation caused by emphysema and chronic bronchitis. Emphysema is quantified from chest computed tomography (CT) scans as the percentage of attenuation values below a fixed threshold. The emphysema quantification varies substantially between scans reconstructed with different kernels, limiting the possibility of comparing emphysema quantifications obtained from scans with different reconstruction parameters. In this paper we propose a method to normalize scans reconstructed with different kernels to have the same characteristics as scans reconstructed with a reference kernel, and investigate whether this normalization reduces the variability in emphysema quantification. The proposed normalization splits a CT scan into different frequency bands based on hierarchical unsharp masking. Normalization is performed by changing the energy in each frequency band to the average energy in that band in the reference kernel. A database of 15 subjects with COPD was constructed for this study. All subjects were scanned at total lung capacity and the scans were reconstructed with four different reconstruction kernels. The normalization was applied to all scans. Emphysema quantification was performed before and after normalization. It is shown that the emphysema score varies substantially before normalization but the variation diminishes after normalization.
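The band-wise energy normalization can be sketched as below, assuming a simple hierarchical unsharp-masking decomposition with Gaussian filters; the band sigmas and reference energies are illustrative assumptions, not the paper's settings.

```python
# Sketch of kernel normalisation by hierarchical unsharp masking: a scan is
# split into frequency bands (differences of progressively smoothed images) and
# the energy of each band is rescaled to that of a reference-kernel scan.
# Band sigmas and reference energies here are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_bands(image, sigmas=(1.0, 2.0, 4.0)):
    bands, residual = [], image.astype(float)
    for sigma in sigmas:
        smoothed = gaussian_filter(residual, sigma)
        bands.append(residual - smoothed)      # band-pass component
        residual = smoothed
    bands.append(residual)                     # low-frequency remainder
    return bands

def normalize_to_reference(image, reference_energies, sigmas=(1.0, 2.0, 4.0)):
    bands = frequency_bands(image, sigmas)
    rescaled = []
    for band, ref_energy in zip(bands, reference_energies):
        energy = np.sqrt(np.mean(band ** 2)) + 1e-12
        rescaled.append(band * (ref_energy / energy))
    return np.sum(rescaled, axis=0)

scan = np.random.normal(-700, 150, size=(128, 128))    # placeholder lung CT slice
reference_energies = [40.0, 25.0, 15.0, 700.0]         # assumed reference band energies
normalized = normalize_to_reference(scan, reference_energies)
print(normalized.shape)
```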
Pulmonary emphysema classification based on an improved texton learning model by sparse representation
Author(s):
Min Zhang;
Xiangrong Zhou;
Satoshi Goshima;
Huayue Chen;
Chisako Muramatsu;
Takeshi Hara;
Ryujiro Yokoyama;
Masayuki Kanematsu;
Hiroshi Fujita
In this paper, we present a texture classification method based on textons learned via sparse representation (SR), with new feature histogram maps, for the classification of emphysema. First, an overcomplete dictionary of textons is learned via K-SVD on image patches from every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas and speed up the dictionary learning process. Second, 3D joint-SR coefficients and intensity histograms of the test images are used for characterizing regions of interest (ROIs), instead of conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with distance as a histogram dissimilarity measure. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs consisting of mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.
Normalization of chest radiographs
Author(s):
R.H.H.M. Philipsen;
P. Maduskar;
L. Hogeweg;
B. van Ginneken
The clinical use of computer-aided diagnosis (CAD) systems is increasing. A possible limitation of CAD systems is that they are typically trained on data from a small number of sources and, as a result, may not perform optimally on data from different sources. In particular for chest radiographs, it is known that acquisition settings, detector technology, proprietary post-processing and, in the case of analog images, digitization can all influence the appearance and statistical properties of the image. In this work we investigate whether a simple energy normalization procedure is sufficient to increase the robustness of CAD in chest radiography. We evaluate the performance of a supervised lung segmentation algorithm, trained with data from one type of machine, on twenty images each from five different sources. The results, expressed in terms of the Jaccard index, increase from 0.530 ± 0.290 without energy normalization to 0.914 ± 0.041 with it. We conclude that energy normalization is an effective way to make the performance of lung segmentation satisfactory on data from different sources.
Improved texture analysis for automatic detection of tuberculosis (TB) on chest radiographs with bone suppression images
Author(s):
Pragnya Maduskar;
Laurens Hogeweg;
Rick Philipsen;
Steven Schalekamp;
Bram van Ginneken
Computer-aided detection (CAD) of tuberculosis (TB) on chest radiographs (CXR) is challenging due to overlapping structures. Suppression of normal structures can reduce overprojection effects and can enhance the appearance of diffuse parenchymal abnormalities. In this work, we compare two CAD systems to detect textural abnormalities in chest radiographs of TB suspects. One CAD system was trained and tested on the original CXR and the other CAD system was trained and tested on bone suppression images (BSI). BSI were created using commercially available software (ClearRead 2.4, Riverain Medical). The CAD system is trained with 431 normal and 434 abnormal images with manually outlined abnormal regions. A subtlety rating (1-3) is assigned to each abnormal region, where 3 refers to obvious and 1 refers to subtle abnormalities. Performance is evaluated on normal and abnormal regions from an independent dataset of 900 images. These contain in total 454 normal and 1127 abnormal regions, which are divided into 3 subtlety categories containing 280, 527 and 320 abnormal regions, respectively. For normal regions, original/BSI CAD has an average abnormality score of 0.094±0.027/0.085±0.032 (p = 5.6×10⁻¹⁹). For abnormal regions, the subtlety 1, 2, 3 categories have average abnormality scores for original/BSI of 0.155±0.073/0.156±0.089 (p = 0.73), 0.194±0.086/0.207±0.101 (p = 5.7×10⁻⁷) and 0.225±0.119/0.247±0.117 (p = 4.4×10⁻⁷), respectively. Thus for normal regions, CAD scores slightly decrease when using BSI instead of the original images, and for abnormal regions, the scores increase slightly. We therefore conclude that the use of bone suppression results in slightly but significantly improved automated detection of textural abnormalities in chest radiographs.
A method for automatic matching of multi-timepoint findings for enhanced clinical workflow
Author(s):
Laks Raghupathi;
MS Dinesh;
Pandu R. Devarakota;
Gerardo Hermosillo Valadez;
Matthias Wolf
Non-interventional diagnostics (CT or MR) enables early identification of diseases like cancer. Often, lesion growth assessment performed during follow-up is used to distinguish between benign and malignant lesions. Thus, correspondences need to be found for lesions localized at each time point. Manually matching the radiological findings can be time-consuming as well as tedious due to possible differences in orientation and position between scans. Also, the complicated nature of the disease makes physicians rely on multiple modalities (PET-CT, PET-MR), where matching is even more challenging. Here, we propose an automatic feature-based matching that is robust to changes in organ volume and to subpar or absent registration, and that requires very little computation. Traditional matching methods rely mostly on accurate image registration and on applying the resulting deformation map to the finding coordinates. This has disadvantages when accurate registration is time-consuming or may not be possible due to vast organ volume differences between scans. Our novel matching approach uses supervised learning by taking advantage of the underlying CAD features that are already present and treating the matching as a classification problem. In addition, the matching can be done extremely fast and at reasonable accuracy even when the image registration fails for some reason. Experimental results on real-world multi-time-point thoracic CT data showed an accuracy above 90%, with negligible false positives, on a variety of registration scenarios.
Tracking time interval changes of pulmonary nodules on follow-up 3D CT images via image-based risk score of lung cancer
Author(s):
Y. Kawata;
N. Niki;
H. Ohmatsu;
M. Kusumoto;
T. Tsuchida;
K. Eguchi;
M. Kaneko;
N. Moriyama
In this paper, we present a computer-aided follow-up (CAF) scheme to support physicians in tracking interval changes of pulmonary nodules on three-dimensional (3D) CT images and in deciding on treatment strategies without under- or over-treatment. Our scheme involves analyzing CT histograms to evaluate the volumetric distribution of CT values within pulmonary nodules. A variational Bayesian mixture modeling framework translates the image-derived features into an image-based risk score for predicting patient recurrence-free survival. Through applying our scheme to follow-up 3D CT images of pulmonary nodules, we demonstrate the potential usefulness of the CAF scheme, which can provide trajectories that characterize time-interval changes of pulmonary nodules.
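The variational Bayesian mixture modeling of the nodule CT-value distribution can be sketched with scikit-learn's BayesianGaussianMixture as a generic stand-in; the synthetic CT values below and the omission of the risk-score step are assumptions of this sketch.

```python
# Sketch of variational Bayesian mixture modelling of the CT-value distribution
# inside a segmented nodule, using scikit-learn's BayesianGaussianMixture as a
# generic stand-in. Turning the fitted mixture into a recurrence-free-survival
# risk score is specific to the paper and not reproduced here.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
# Placeholder CT values (HU) inside a part-solid nodule: ground-glass + solid parts.
ct_values = np.concatenate([rng.normal(-650, 60, 800), rng.normal(20, 40, 200)])

bgm = BayesianGaussianMixture(n_components=5, weight_concentration_prior=0.1,
                              max_iter=500, random_state=0)
bgm.fit(ct_values.reshape(-1, 1))

# Effective components and their parameters summarise the volumetric distribution.
for w, mu in zip(bgm.weights_, bgm.means_.ravel()):
    if w > 0.05:
        print(f"component weight {w:.2f}, mean {mu:6.1f} HU")
```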
Measurement of spiculation index in 3D for solitary pulmonary nodules in volumetric lung CT images
Author(s):
Ashis Kumar Dhara;
Sudipta Mukhopadhyay;
Naved Alam;
Niranjan Khandelwal M.D.
In this paper, a differential geometry-based method is proposed for calculating the surface spiculation of a solitary pulmonary nodule (SPN) in 3D from lung CT images. Spiculation present in an SPN is an important shape feature for assisting the radiologist in assessing malignancy. The performance of a Computer-Aided Diagnostic (CAD) system depends on the accurate estimation of features like spiculation. In the proposed method, the peaks of the spicules are identified using the properties of the Gaussian and mean curvatures calculated at each surface point on the segmented SPN. Once the peak points for a particular SPN are identified, the nearest valley points for each peak point are determined. The area of cross-section of the best-fitted plane passing through the valley points is the base of that spicule. The solid angle subtended by the base of the spicule at the peak point and the distance of the peak point from the nodule base are taken as the measures of spiculation. The spiculation index (SI) for a particular SPN is the weighted combination of all the spicules present in that SPN. The proposed method is validated on 95 SPNs from the Image Database Resource Initiative (IDRI) public database. It achieved 87.4% accuracy in calculating the quantified spiculation index compared to the spiculation index provided by radiologists in the IDRI database.
Robust airway extraction based on machine learning and minimum spanning tree
Author(s):
Tsutomu Inoue;
Yoshiro Kitamura;
Yuanzhong Li;
Wataru Ito
Recent advances in MDCT have improved the quality of 3D images. Virtual Bronchoscopy has been used before and during bronchoscopic examinations for biopsy. However, Virtual Bronchoscopy has become widely used only for the examination of proximal airway diseases. The reason is that conventional airway extraction methods often fail to extract peripheral airways with low image contrast. In this paper, we propose a machine learning based method which can improve the extraction robustness remarkably. The method consists of four steps. In the first step, we use Hessian analysis to detect as many airway candidates as possible. In the second, false positives are reduced effectively by introducing a machine learning method. In the third, an airway tree is constructed from the airway candidates by utilizing a minimum spanning tree algorithm. In the fourth, we extract airway regions by using graph cuts. Experimental results evaluated with a standardized evaluation framework show that our method can extract peripheral airways very well.
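The tree-construction step can be sketched with a minimum spanning tree over candidate centroids using a plain Euclidean edge cost; the paper's actual cost presumably also incorporates image evidence and the learned candidate likelihoods, and the candidate points below are placeholders.

```python
# Sketch of the tree-construction step: connect airway candidate points into a
# minimum spanning tree using a simple Euclidean edge cost (a simplification of
# the paper's cost, which also reflects image evidence).
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(7)
candidates = rng.uniform(0, 200, size=(40, 3))   # placeholder candidate centroids (mm)

costs = cdist(candidates, candidates)            # dense pairwise Euclidean costs
mst = minimum_spanning_tree(costs)               # sparse matrix of tree edges

rows, cols = mst.nonzero()
print(f"airway tree with {len(rows)} edges,"
      f" total length {mst.sum():.1f} mm")
```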
Automatic age-related macular degeneration detection and staging
Author(s):
Mark J. J. P. van Grinsven;
Yara T. E. Lechanteur;
Johannes P. H. van de Ven;
Bram van Ginneken;
Thomas Theelen;
Clara I. Sánchez
Age-related macular degeneration (AMD) is a degenerative disorder of the central part of the retina, which mainly affects older people and leads to permanent loss of vision in advanced stages of the disease. AMD grading of non-advanced AMD patients allows risk assessment for the development of advanced AMD and enables timely treatment of patients, to prevent vision loss. AMD grading is currently performed manually on color fundus images, which is time consuming and expensive. In this paper, we propose a supervised classification method to distinguish patients at high risk of developing advanced AMD from low risk patients and to provide an exact AMD stage determination. The method is based on the analysis of the number and size of drusen on color fundus images, as drusen are the early characteristics of AMD. An automatic drusen detection algorithm is used to detect all drusen. A weighted histogram of the detected drusen is constructed to summarize the drusen extension and size and fed into a random forest classifier in order to separate low risk from high risk patients and to allow exact AMD stage determination. Experiments showed that the proposed method achieved performance similar to human observers in distinguishing low risk from high risk AMD patients, obtaining areas under the Receiver Operating Characteristic curve of 0.929 and 0.934. Weighted kappa agreements of 0.641 and 0.622 versus two observers were obtained for AMD stage evaluation. Our method allows for quick and reliable AMD staging at low cost.
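A minimal sketch of the staging step, a size-weighted drusen histogram fed to a random forest, is shown below; the size bins, drusen detections and risk labels are hypothetical placeholders.

```python
# Sketch of the staging step: build a weighted histogram of detected drusen
# diameters per image and classify low- vs high-risk AMD with a random forest.
# Drusen detection itself is omitted; detections below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

size_bins = np.array([0, 63, 125, 250, 500, 1000])   # assumed diameter bins (micrometres)

def drusen_histogram(diameters_um, areas):
    # Histogram of drusen counts per size bin, weighted by drusen area.
    hist, _ = np.histogram(diameters_um, bins=size_bins, weights=areas)
    return hist / (hist.sum() + 1e-8)

rng = np.random.default_rng(5)
X, y = [], []
for _ in range(200):                                  # 200 hypothetical eyes
    n = rng.integers(0, 40)
    diam = rng.gamma(2.0, 60.0, size=n)
    X.append(drusen_histogram(diam, areas=np.pi * (diam / 2) ** 2))
    y.append(int(diam.sum() > 2000))                  # stand-in risk label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```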
Automated detection of microaneurysms using robust blob descriptors
Author(s):
K. Adal;
S. Ali;
D. Sidibé;
T. Karnowski;
E. Chaum;
F. Mériaudeau
Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) and can be seen as round, dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low-cost and versatile nature. In this paper, the MA detection problem is modeled as finding interest points in a given image, and several interest point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique using Singular Value Decomposition (SVD) of fundus images. Then, a Hessian-based candidate selection algorithm is applied to extract image regions which are more likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the Intensity Normalized Radon Transform are extracted to characterize candidate MA regions. The combined features are then classified using an SVM trained on ten manually annotated training images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show the competitiveness of the proposed candidate selection techniques against state-of-the-art methods as well as the promise of the proposed descriptors for the localization of MAs in fundus images.
Changes in quantitative 3D shape features of the optic nerve head associated with age
Author(s):
Mark Christopher;
Li Tang;
John H. Fingert;
Todd E. Scheetz;
Michael D. Abramoff
Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have significant (p < 0.05) associations with both age and race after Bonferroni correction. In addition, classifiers were constructed to predict the demographic variables based solely on the eigen structures. These classifiers achieved an area under the receiver operating characteristic curve of 0.62 in predicting a binary age variable, 0.52 in predicting gender, and 0.67 in predicting race. The use of objective, quantitative features or eigen structures can reveal hidden relationships between ONH structure and demographics. The use of these features could similarly allow specific aspects of ONH structure to be isolated and associated with the diagnosis of glaucoma, disease progression and outcomes, and genetic factors.
Automated retinal vessel type classification in color fundus images
Author(s):
H. Yu;
S. Barriga;
C. Agurto;
S. Nemeth;
W. Bauman;
P. Soliz
Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and toward identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection and risk analysis of cardiovascular disease.
Retrieving clinically relevant diabetic retinopathy images using a multi-class multiple-instance framework
Author(s):
Parag S. Chandakkar;
Ragav Venkatesan;
Baoxin Li
Diabetic retinopathy (DR) is a vision-threatening complication of diabetes mellitus, a medical condition that is rising globally. Unfortunately, many patients are unaware of this complication because of the absence of symptoms. Regular screening for DR is necessary to detect the condition for timely treatment. Content-based image retrieval, using archived and diagnosed fundus (retinal) camera DR images, can improve the screening efficiency of DR. This content-based image retrieval study focuses on two DR clinical findings, microaneurysms and neovascularization, which are clinical signs of non-proliferative and proliferative diabetic retinopathy. The authors propose a multi-class multiple-instance image retrieval framework which deploys a modified color correlogram and statistics of steerable Gaussian filter responses for retrieving clinically relevant images from a database of DR fundus images.
Classification of Alzheimer's disease using regional saliency maps from brain MR volumes
Author(s):
Andrea Pulido;
Andrea Rueda;
Eduardo Romero
Accurate diagnosis of Alzheimer's disease (AD) from structural Magnetic Resonance (MR) images is difficult due to the complex alteration of patterns in brain anatomy that could indicate the presence or absence of the pathology. Currently, an effective approach that allows the disease to be interpreted in terms of global and local changes is not available in clinical practice. In this paper, we propose an approach for the classification of brain MR images, based on finding pathology-related patterns through the identification of regional structural changes. The approach combines a probabilistic Latent Semantic Analysis (pLSA) technique, which identifies image regions through latent topics inferred from the brain MR slices, with a bottom-up Graph-Based Visual Saliency (GBVS) model, which calculates maps of relevant information per region. Regional saliency maps are finally combined into a single map on each slice, obtaining a master saliency map of each brain volume. The proposed approach includes a one-to-one comparison of the saliency maps, which feeds a Support Vector Machine (SVM) classifier to group test subjects into normal or probable AD subjects. A set of 156 brain MR images from healthy (76) and pathological (80) subjects, split into a training set (10 non-demented and 10 demented subjects) and a testing set (136 subjects), was used to evaluate the performance of the proposed approach. Preliminary results show that the proposed method reaches a maximum classification accuracy of 87.21%.
Improved multimodal biomarkers for Alzheimer's disease and mild cognitive impairment diagnosis: data from ADNI
Author(s):
Antonio Martinez-Torteya;
Víctor Treviño-Alvarado;
José Tamez-Peña
The accurate diagnosis of Alzheimer’s disease (AD) and mild cognitive impairment (MCI) confers many clinical research and patient care benefits. Studies have shown that multimodal biomarkers provide better diagnostic accuracy for AD and MCI than unimodal biomarkers, but their construction has been based on traditional statistical approaches. The objective of this work was the creation of accurate AD and MCI diagnostic multimodal biomarkers using advanced bioinformatics tools. The biomarkers were created by exploring multimodal combinations of features using machine learning techniques. Data were obtained from the ADNI database. The baseline information (e.g. MRI analyses, PET analyses and laboratory assays) from AD, MCI and healthy control (HC) subjects with available diagnosis up to June 2012 was mined for case/control candidates. The data mining yielded 47 HC, 83 MCI and 43 AD subjects for biomarker creation. Each subject was characterized by at least 980 ADNI features. A genetic algorithm feature selection strategy was used to obtain compact and accurate cross-validated nearest centroid biomarkers. The biomarkers achieved training classification accuracies of 0.983, 0.871 and 0.917 for HC vs. AD, HC vs. MCI and MCI vs. AD, respectively. The constructed biomarkers were relatively compact: from 5 to 11 features. These multimodal biomarkers included several widely accepted univariate biomarkers and novel imaging and biochemical features. Multimodal biomarkers constructed from both previously and not previously AD-associated features showed improved diagnostic performance compared to those based solely on previously AD-associated features.
Effect of CADe on radiologists’ performance in detection of "difficult" polyps in CT colonography
Author(s):
Kenji Suzuki;
Masatoshi Hori;
Gen Iinuma;
Abraham H. Dachman
To investigate the actual usefulness of computer-aided detection (CADe) of polyps as a second reader, we conducted a free-response observer performance study with radiologists on the detection of “difficult” polyps in CT colonography (CTC) from a multicenter clinical trial. The “difficult” polyps were defined as those that had been “missed” by radiologists in the clinical trial or rated “difficult” in our retrospective review. Our advanced CADe scheme utilizing massive-training artificial neural network (MTANN) technology was sensitive and specific to the “difficult” polyps. Four board-certified abdominal radiologists participated in this observer study. They were instructed, first without and then with our CADe, to indicate the location of polyps and their confidence level regarding the presence of polyps. Our database contains 20 patients with 23 polyps, including 14 false-negative (FN) and 7 “difficult” polyps, and 10 negative patients. With CADe, the average by-polyp sensitivity of the radiologists improved from 53% to 63% at a statistically significant level (P=0.037). Thus, our CADe scheme utilizing the MTANN technology improved the diagnostic performance of radiologists, including expert readers, in the detection of “difficult” polyps in CTC.
Computer-aided detection of early cancer in the esophagus using HD endoscopy images
Author(s):
Fons van der Sommen;
Svitlana Zinger;
Erik J. Schoon;
Peter H. N. de With
Esophageal cancer is the fastest rising type of cancer in the Western world. The recent development of High-Definition (HD) endoscopy has enabled the specialist physician to identify cancer at an early stage. Nevertheless, it still requires considerable effort and training to be able to recognize these irregularities associated with early cancer. As a first step towards a Computer-Aided Detection (CAD) system that supports the physician in finding these early stages of cancer, we propose an algorithm that is able to identify irregularities in the esophagus automatically, based on HD endoscopic images. The concept employs tile-based processing, so our system is not only able to identify that an endoscopic image contains early cancer, but it can also locate it. The identification is based on the following steps: (1) preprocessing, (2) feature extraction with dimensionality reduction, (3) classification. We evaluate the detection performance in RGB, HSI and YCbCr color space using the Color Histogram (CH) and Gabor features and we compare with other well-known features to describe texture. For classification, we employ a Support Vector Machine (SVM) and evaluate its performance using different parameters and kernel functions. In experiments, our system achieves a classification accuracy of 95.9% on 50×50 pixel tiles of tumorous and normal tissue and reaches an Area Under the Curve (AUC) of 0.990. In 22 clinical examples our algorithm was able to identify all (pre-)cancerous regions and annotate those regions reasonably well. The experimental and clinical validation are considered promising for a CAD system that supports the physician in finding early stage cancer.
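The tile-based texture pipeline can be sketched as below using a Gabor filter bank and an SVM; the tiles, labels and filter-bank settings are placeholders rather than the paper's configuration.

```python
# Sketch of the tile-based texture pipeline: Gabor filter-bank statistics are
# extracted from 50x50 tiles and fed to an SVM. Real tiles would come from HD
# endoscopic images; the ones below are random placeholders.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(tile, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, imag = gabor(tile, frequency=f, theta=theta)
            magnitude = np.hypot(real, imag)
            feats += [magnitude.mean(), magnitude.std()]
    return np.array(feats)

rng = np.random.default_rng(9)
tiles = rng.random((60, 50, 50))                 # 60 placeholder grayscale tiles
labels = rng.integers(0, 2, 60)                  # 1 = suspected early cancer

X = np.array([gabor_features(t) for t in tiles])
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```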
Low-dose dual-energy electronic cleansing for fecal-tagging CT colonography
Author(s):
Wenli Cai;
Da Zhang;
June-Goo Lee;
Hiroyuki Yoshida
Dual-energy electronic cleansing (DE-EC) provides a promising means for cleansing tagged fecal materials in fecal-tagging CT colonography (CTC). However, the increased radiation dose due to the double exposures in dual-energy CTC (DE-CTC) scanning is a major limitation for the use of DE-EC in clinical practice. The purpose of this study was to develop and evaluate a low-dose DE-EC scheme for fecal-tagging DE-CTC. In this study, a custom-made anthropomorphic colon phantom, which was filled with simulated tagged materials using a non-ionic iodinated contrast agent (Omnipaque iohexol, GE Healthcare), was scanned with a dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare) at two photon energies, 80 kVp and 140 kVp, with nine different tube current settings ranging from 12 to 74 mAs for 140 kVp, and then reconstructed with a soft-tissue reconstruction kernel (B30f). The DE-CTC images were subjected to a low-dose DE-EC scheme. First, our image-space DE-CTC denoising filter was applied for reduction of image noise. Then, the noise-reduced images were processed by a virtual lumen tagging method for reduction of partial volume effects and tagging inhomogeneity. The results were compared with registered CTC images of the native phantom without fillings. Preliminary results showed that our low-dose DE-EC scheme achieved cleansing ratios, defined as the proportion of cleansed voxels in the tagging mask, between 93.18% (12 mAs) and 96.62% (74 mAs). Also, the soft-tissue preservation ratios, defined as the proportion of preserved voxels in the soft-tissue mask, were maintained in the range between 94.67% and 96.41%.
Blood vessel-based liver segmentation through the portal phase of a CT dataset
Author(s):
Ahmed S. Maklad;
Mikio Matsuhiro;
Hidenobu Suzuki;
Yoshiki Kawata;
Noboru Niki;
Noriyuki Moriyama;
Toru Utsunomiya;
Mitsuo Shimada
Blood vessels are dispersed throughout the organs of the human body and carry unique information for each person. This information can be used to delineate organ boundaries. The proposed method relies on abdominal blood vessels (ABV) to segment the liver, considering the potential presence of tumors, through the portal phase of a CT dataset. ABV are extracted and classified into hepatic (HBV) and non-hepatic (non-HBV) with a small number of interactions. HBV and non-HBV are used to guide an automatic segmentation of the liver. HBV are used to individually segment the core region of the liver. This region and the non-HBV are used to construct a boundary surface between the liver and other organs to separate them. The core region is classified, based on posterior distributions extracted from its histogram, into low-intensity-tumor (LIT) and non-LIT core regions. The non-LIT case includes the normal part of the liver, HBV, and high-intensity tumors if they exist. Each core region is extended based on its corresponding posterior distribution. Extension is completed when it reaches either a variation in intensity or the constructed boundary surface. The method was applied to 80 datasets (30 Medical Image Computing and Computer Assisted Intervention (MICCAI) and 50 non-MICCAI datasets), including 60 datasets with tumors. Our results for the MICCAI test data were evaluated by sliver07 [1] with an overall score of 79.7, which ranks seventh best on the site (December 2013). This approach seems a promising method for extracting liver volumetry for livers of various shapes and sizes and with low-intensity hepatic tumors.
Image patch-based method for automated classification and detection of focal liver lesions on CT
Author(s):
Mustafa Safdari;
Raghav Pasari;
Daniel Rubin;
Hayit Greenspan
We developed a method for automated classification and detection of liver lesions in CT images based on image patch representation and bag-of-visual-words (BoVW). BoVW analysis has been extensively used in the computer vision domain to analyze scenery images. In the current work we discuss how it can be used for liver lesion classification and detection. The methodology includes building a dictionary for a training set using local descriptors and representing a region in the image using a visual word histogram. Two tasks are described: a classification task, for lesion characterization, and a detection task, in which a scan window moves across the image and is determined to be normal liver tissue or a lesion. Data: In the classification task 73 CT images of liver lesions were used, 25 images having cysts, 24 having metastases and 24 having hemangiomas. A radiologist circumscribed the lesions, creating a region of interest (ROI), in each of the images. He then provided the diagnosis, which was established either by biopsy or clinical follow-up. Thus our data set comprises 73 images and 73 ROIs. In the detection task, a radiologist drew ROIs around each liver lesion and two regions of normal liver, for a total of 159 liver lesion ROIs and 146 normal liver ROIs. The radiologist also demarcated the liver boundary. Results: Classification accuracy of more than 95% was obtained. In the detection task, the F1 score obtained was 0.76, with a recall of 84% and a precision of 73%. The results show the ability to detect lesions, regardless of shape.
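A minimal bag-of-visual-words sketch along the lines described above is given below: raw-intensity patches are clustered into a visual-word dictionary with k-means, each ROI becomes a word histogram, and an SVM classifies the histograms; all data and settings are placeholders.

```python
# Minimal bag-of-visual-words sketch: raw-intensity patches are clustered into a
# visual-word dictionary with k-means, each ROI becomes a word histogram, and an
# SVM classifies the histograms. Data here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(11)
patch_size, n_words = 9, 50

def extract_patches(roi, step=4):
    ps = patch_size
    return np.array([roi[i:i + ps, j:j + ps].ravel()
                     for i in range(0, roi.shape[0] - ps, step)
                     for j in range(0, roi.shape[1] - ps, step)])

rois = [rng.random((64, 64)) for _ in range(40)]        # placeholder ROIs
labels = rng.integers(0, 2, 40)                         # e.g. 1 = lesion, 0 = normal liver

# 1) Learn the dictionary on patches pooled from all training ROIs.
kmeans = KMeans(n_clusters=n_words, n_init=4, random_state=0)
kmeans.fit(np.vstack([extract_patches(r) for r in rois]))

# 2) Represent each ROI as a normalised visual-word histogram.
def bovw_histogram(roi):
    words = kmeans.predict(extract_patches(roi))
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(r) for r in rois])
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```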
Visual analysis of longitudinal brain tumor perfusion
Author(s):
Sylvia Glaßer;
Steffen Oeltze;
Uta Preim;
Atle Bjørnerud;
Helwig Hauser;
Bernhard Preim
In clinical research on the diagnosis and evaluation of brain tumors, longitudinal perfusion MRI studies are acquired for tumor grading as well as to monitor and assess treatment response and patient prognosis. Within this work, we demonstrate how visual analysis techniques can be adapted to multidimensional datasets from such studies within a framework to support the computer-aided diagnosis of brain tumors. Our solution builds on two innovations: first, we introduce a pipeline yielding comparative, co-registered quantitative perfusion parameter maps over all time steps of the longitudinal study. Second, based on these time-dependent parameter maps, visual analysis methods were developed and adapted to reveal valuable insight into tumor progression, especially regarding the clinical research area of low-grade glioma transformation into high-grade gliomas. Our examination of four longitudinal brain studies demonstrates the suitability of the presented visual analysis methods and offers new possibilities for the clinical researcher to characterize the development of low-grade gliomas.
Differentiating cerebral lymphomas and GBMs featuring luminance distribution analysis
Author(s):
Toshihiko Yamasaki;
Tsuhan Chen;
Toshinori Hirai;
Ryuji Murakami
Differentiating lymphomas and glioblastoma multiformes (GBMs) is important for proper treatment planning. A number of methods have been proposed, but some problems remain. For example, many works depend on thresholding a single feature value, which is susceptible to noise, and atypical cases that are not handled well by such simple thresholding are easily found. In other cases, experienced observers are required to extract the feature values or to provide some interaction with the system, which is costly. Even if experts are involved, inter-observer variance becomes another problem. In addition, most of the works use only one or a few slices because 3D tumor segmentation is difficult and time-consuming. In this paper, we propose a tumor classification system that analyzes the luminance distribution of the whole tumor region. The 3D MRIs are segmented within a few tens of seconds using our fast 3D segmentation algorithm. Then, the luminance histogram of the whole tumor region is generated. Typical cases are classified by histogram range thresholding and apparent diffusion coefficient (ADC) thresholding. Atypical cases are learned and classified by a support vector machine (SVM). Most of the processing elements are semi-automatic except for the ADC value extraction. Therefore, even novice users can use the system easily and obtain almost the same results as experts. The experiments were conducted using 40 MRI datasets (20 lymphomas and 20 GBMs) including atypical cases. The classification accuracy of the proposed method was 91.1% without ADC thresholding and 95.4% with ADC thresholding. In contrast, the baseline method, conventional ADC thresholding, yielded only 67.5% accuracy.
Assessment of quantitative cortical biomarkers in the developing brain of preterm infants
Author(s):
Pim Moeskops;
Manon J. N. L. Benders;
Paul C. Pearlman;
Karina J. Kersbergen;
Alexander Leemans;
Max A. Viergever;
Ivana Išgum
The cerebral cortex rapidly develops its folding during the second and third trimester of pregnancy. In preterm birth, this growth might be disrupted and influence neurodevelopment. The aim of this work is to extract quantitative biomarkers describing the cortex and evaluate them on a set of preterm infants without brain pathology.
For this study, a set of 19 preterm - but otherwise healthy - infants scanned coronally with 3T MRI at the postmenstrual age of 30 weeks was selected. In ten patients (test set), the gray and white matter were manually annotated by an expert on the T2-weighted scans. Manual segmentations were used to extract cortical volume, surface area, thickness, and curvature using voxel-based methods. To compute these biomarkers per region in every patient, a template brain image has been generated by iterative registration and averaging of the scans of the remaining nine patients. This template has been manually divided into eight regions, and is transformed to every test image using elastic registration.
In the results, gray and white matter volumes and cortical surface area appear symmetric between hemispheres, but small regional differences are visible. Cortical thickness seems slightly higher in the right parietal lobe than in other regions. The parietal lobes exhibit a higher global curvature, indicating more complex folding compared to other regions.
The proposed approach can potentially - together with an automatic segmentation algorithm - be applied as a tool to assist in early diagnosis of abnormalities and prediction of the development of the cognitive abilities of these children.
Computer-aided diagnosis of acute ischemic stroke based on cerebral hypoperfusion using 4D CT angiography
Author(s):
Jean-Paul Charbonnier;
Ewoud J. Smit;
Max A. Viergever;
Birgitta K. Velthuis;
Pieter C. Vos
Show Abstract
The presence of collateral blood flow is found to be a strong predictor of patient outcome after acute ischemic stroke. Collateral blood flow is defined as an alternative way to provide oxygenated blood to ischemic cerebral tissue. Assessment of collateral blood supply is currently performed by visual inspection of a Computed Tomography Angiogram (CTA), which introduces inter-observer variability and depends on the grading scale. Furthermore, variations in the arterial contrast arrival time may lead to underestimation of collateral blood supply in a CTA, which negatively influences the prediction of patient outcome. In this study, the feasibility of a computer-aided diagnosis (CAD) system capable of objectively predicting patient outcome is investigated. We present a novel automatic method for quantitative assessment of cerebral hypoperfusion in timing-invariant (i.e. delay-insensitive) CTA (TI-CTA). The proposed Vessel Density Symmetry algorithm automatically generates descriptive maps based on hemispheric asymmetry of blood vessels. Intensity- and symmetry-based features are extracted from these descriptive maps and subjected to a best-first-search feature selection. Linear Discriminant Analysis is performed to combine the selected features into a likelihood of good patient outcome. Receiver operating characteristic (ROC) analysis is conducted to evaluate the diagnostic performance of the CAD system by leave-one-patient-out cross validation. A positive predictive value of 1 was obtained at a sensitivity of 25%, with an area under the ROC curve of 0.86. The results show that it is feasible for the CAD system to objectively predict patient outcome. The presented CAD could make an important contribution to acute ischemic stroke diagnosis and treatment.
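A minimal sketch of the classification and evaluation step (LDA combined with leave-one-patient-out cross validation and ROC analysis) using scikit-learn; the features extracted from the descriptive maps are assumed to be available already, and the variable names are illustrative:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut
    from sklearn.metrics import roc_auc_score

    # X: numpy array, one feature vector per patient (intensity/symmetry features)
    # y: 1 = good outcome, 0 = poor outcome
    def loo_outcome_likelihoods(X, y):
        scores = np.zeros(len(y), dtype=float)
        for train_idx, test_idx in LeaveOneOut().split(X):
            lda = LinearDiscriminantAnalysis()
            lda.fit(X[train_idx], y[train_idx])
            scores[test_idx] = lda.predict_proba(X[test_idx])[:, 1]
        return scores, roc_auc_score(y, scores)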
Automatic detection and segmentation of ischemic lesions in computed tomography images of stroke patients
Author(s):
Pieter C. Vos;
J. Matthijs Biesbroek;
Nick A. Weaver;
Birgitta K. Velthuis;
Max A. Viergever
Show Abstract
Stroke is the third most common cause of death in developed countries. Clinical trials are currently investigating whether advanced Computed Tomography can be of benefit for diagnosing stroke in the acute phase. These trials are based on large patient cohorts that need to be manually annotated to obtain a reference standard of tissue loss at follow-up, resulting in an extensive workload for the radiologists. Therefore, there is a demand for accurate and reliable automatic lesion segmentation methods. This paper presents a novel method for the automatic detection and segmentation of ischemic lesions in CT images. The method consists of multiple sequential stages. In the initial stage, pixel classification is performed using a naive Bayes classifier in combination with a tissue homogeneity algorithm in order to localize ischemic lesion candidates. In the next stage, the candidates are segmented using a marching cubes algorithm. Regional statistical analysis is used to extract features based on local information as well as contextual information from the contra-lateral hemisphere. Finally, the extracted features are summarized into a likelihood of ischemia by a supervised classifier. An area under the Receiver Operating Characteristic curve of 0.91 was obtained for the identification of ischemic lesions. For lesion segmentation, the method reached a Dice similarity coefficient (DSC) of 0.74±0.09, whereas an independent human observer obtained a DSC of 0.79±0.11 on the same dataset. The experiments showed that it is feasible to automatically detect and segment ischemic lesions in CT images, achieving performance comparable to that of human observers.
Detection of white matter lesions in cerebral small vessel disease
Author(s):
Medhat M. Riad;
Bram Platel;
Frank-Erik de Leeuw;
Nico Karssemeijer
Show Abstract
White matter lesions (WML) are diffuse white matter abnormalities commonly found in older subjects and are important indicators of stroke, multiple sclerosis, dementia and other disorders. We present an automated WML detection method and evaluate it on a dataset of small vessel disease (SVD) patients. In early SVD, small WMLs are expected to be of importance for the prediction of disease progression. Commonly used WML segmentation methods tend to ignore small WMLs and are mostly validated on the basis of total lesion load or a Dice coefficient for all detected WMLs. Therefore, in this paper, we present a method that is designed to detect individual lesions, large or small, and we validate the detection performance of our system with FROC (free-response ROC) analysis. For the automated detection, we use supervised classification making use of multimodal voxel-based features from different magnetic resonance imaging (MRI) sequences, including intensities, tissue probabilities, voxel locations and distances, neighborhood textures and others. After preprocessing, including co-registration, brain extraction, bias correction, intensity normalization, and nonlinear registration, ventricle segmentation is performed and features are calculated for each brain voxel. A gentle-boost classifier is trained using these features from 50 manually annotated subjects to give each voxel a probability of being a lesion voxel. We perform ROC analysis to illustrate the benefits of using additional features in addition to the commonly used voxel intensities, significantly increasing the area under the curve (Az) from 0.81 to 0.96 (p<0.05). We perform the FROC analysis by testing our classifier on 50 previously unseen subjects and compare the results with manual annotations performed by two experts. Using the first annotator's results as our reference, the second annotator performs at a sensitivity of 0.90 with an average of 41 false positives per subject, while our automated method reached the same level of sensitivity at approximately 180 false positives per subject.
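The FROC analysis used for validation can be sketched as follows; this is a simplified illustration that assumes each true-positive candidate hits a distinct lesion, which is not necessarily how the authors scored their detections:

    import numpy as np

    def froc_points(candidate_scores, candidate_is_tp, n_lesions, n_subjects):
        """Compute FROC operating points from per-candidate detection scores.

        candidate_scores : score of every candidate over all test subjects
        candidate_is_tp  : True if the candidate hits an annotated lesion
        n_lesions        : total number of annotated lesions in the test set
        n_subjects       : number of test subjects
        """
        order = np.argsort(candidate_scores)[::-1]
        is_tp = np.asarray(candidate_is_tp, dtype=bool)[order]
        tp_cum = np.cumsum(is_tp)
        fp_cum = np.cumsum(~is_tp)
        sensitivity = tp_cum / n_lesions
        fps_per_subject = fp_cum / n_subjects
        return fps_per_subject, sensitivity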
Automatic stent strut detection in intravascular OCT images using image processing and classification technique
Author(s):
Hong Lu;
Madhusudhana Gargesha;
Zhao Wang;
Daniel Chamie;
Guilherme F. Attizani;
Tomoaki Kanaya;
Soumya Ray;
Marco A. Costa;
Andrew M. Rollins;
Hiram G. Bezerra;
David L. Wilson
Show Abstract
Intravascular OCT (iOCT) is an imaging modality with ideal resolution and contrast to provide accurate in vivo
assessments of tissue healing following stent implantation. Our Cardiovascular Imaging Core Laboratory has served >20 international stent clinical trials with >2000 stents analyzed. Each stent requires 6-16 hours of manual analysis time, and we are developing highly automated software to reduce this extreme effort. Using a classification technique, physically meaningful image features, forward feature selection to limit overtraining, and leave-one-stent-out cross validation, we detected stent struts. To determine tissue coverage areas, we estimated stent “contours” by fitting detected struts and interpolation points from linearly interpolated tissue depths to a periodic cubic spline. Tissue coverage area was obtained by subtracting the lumen area from the stent area. Detection was compared against manual analysis of 40 pullbacks. We obtained recall = 90±3% and precision = 89±6%. When taking struts deemed not bright enough for manual analysis into consideration, precision improved to 94±6%. This approached inter-observer variability (recall = 93%, precision = 96%). Differences in stent and tissue coverage areas were 0.12 ± 0.41 mm2 and 0.09 ± 0.42 mm2, respectively. We are developing software which will enable visualization, review, and editing of automated results, so as to provide a comprehensive stent analysis package. This should enable better and cheaper stent clinical trials, so that manufacturers can optimize the myriad of parameters (drug, coverage, bioresorbable versus metal, etc.) for stent design.
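The periodic cubic spline fitting of the stent contour might look roughly as follows; this is a SciPy-based sketch, not the laboratory's software, and the angular ordering of the points is an assumption:

    import numpy as np
    from scipy.interpolate import splprep, splev

    def stent_contour_from_struts(strut_xy, n_samples=360, smoothing=0.0):
        """Fit a periodic cubic spline through detected strut centers (plus any
        interpolated tissue-depth points) in one iOCT frame."""
        x, y = np.asarray(strut_xy, dtype=float).T
        # order the points by angle around the centroid so the periodic spline is well defined
        ang = np.arctan2(y - y.mean(), x - x.mean())
        order = np.argsort(ang)
        tck, _ = splprep([x[order], y[order]], s=smoothing, per=True)
        u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
        cx, cy = splev(u, tck)
        return np.column_stack([cx, cy])

The tissue coverage area could then be obtained by subtracting the lumen polygon area from the area enclosed by this contour (e.g., via the shoelace formula).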
Computerized detection of non-calcified plaques in coronary CT angiography: topological soft-gradient detection method for plaque prescreening
Author(s):
Jun Wei;
Chuan Zhou;
Heang-Ping Chan;
Aamer Chughtai;
Smita Patel;
Prachi Agarwal;
Jean Kuriakose;
Lubomir Hadjiiski;
Ella Kazerooni
Show Abstract
Non-calcified plaque (NCP) detection in coronary CT angiography (cCTA) is challenging due to the low CT
number of NCP, the large number of coronary arteries and multi-phase CT acquisition. We are developing computer-vision methods for automated detection of NCPs in cCTA. A data set of 62 cCTA scans with 87 NCPs was collected retrospectively from patient files. Multiscale coronary vessel enhancement and rolling balloon tracking were first applied to each cCTA volume to extract the coronary artery trees. Each extracted vessel was reformatted to a
straightened volume composed of cCTA slices perpendicular to the vessel centerline. A new topological soft-gradient
(TSG) detection method was developed to prescreen for both positive and negative remodeling candidates by analyzing the 2D topological features of the radial gradient field surface along the vessel wall. Nineteen features were designed to describe the relative location along the coronary artery, shape, distribution of CT values, and radial gradients of each NCP candidate. With a machine learning algorithm and a two-loop leave-one-case-out training and testing resampling method, useful features were selected and combined into an NCP likelihood measure to differentiate TPs from FPs. The detection performance was evaluated by FROC analysis. Our TSG method achieved a sensitivity of 96.6% with 35.4 FPs/scan at prescreening. Classification with the NCP likelihood measure reduced the FP rates to 13.1, 10.0 and 6.7 FPs/scan at sensitivities of 90%, 80%, and 70%, respectively. These results demonstrated that the new TSG method is useful for computerized detection of NCPs in cCTA.
Computer-aided scheme for functional index computation of left ventricle in cardiac CTA: segmentation and partitioning of left ventricle
Author(s):
Hui Huang;
Xiahai Zhuang;
Yi Shao;
Tian Lan;
Liu Liu;
Qiang Li
Show Abstract
Cardiac functional indices, such as ejection fraction and regional wall motion/thickening, are commonly used for
assessing the contractility and functionality of the heart in clinical practice. An important step for computer-aided
determination of functional indices is the automated segmentation of the heart from computed tomography angiography (CTA) and the partitioning of the left ventricle into 16 segments. We develop a fully automatic scheme which not only segments the whole heart from cardiac CTA images, but also partitions the left ventricle, including the blood pool and myocardium, into the 16 segments of a bull’s eye plot. The segmentation is based on image registration and atlas propagation techniques, whereas the bull’s eye plot is first obtained through atlas propagation and then further improved to correct inconsistencies across subjects, uneven segment sizes, and “zig-zag” edges between segments. In this preliminary study, a cohort of ten clinical CTA datasets was employed to compute and evaluate the regional functional indices as well as the global indices.
Computer-based assessment of left ventricular wall stiffness in patients with ischemic dilated cardiomyopathy
Author(s):
Y. Su;
S. K. Teo;
R.S. Tan;
C.W. Lim;
L. Zhong
Show Abstract
Ischemic dilated cardiomyopathy (IDCM) is a degenerative disease of the myocardial tissue accompanied by left ventricular (LV) structural changes such as interstitial fibrosis. This can induce increased passive stiffness of the LV wall. However, quantification of LV passive wall stiffness in vivo is extremely difficult, particularly in ventricles with complex geometry. Therefore, we sought to (i) develop a computer-based assessment of LV passive wall stiffness from cardiac magnetic resonance (CMR) imaging in terms of a nominal stiffness index (E*); and (ii) investigate whether E* can offer an insight into cardiac mechanics in IDCM. CMR scans were performed in 5 normal subjects and 5 patients with IDCM. For each data sample, in-house software was used to generate a 1-to-1 corresponding LV mesh pair from the end-diastolic (ED) and end-systolic (ES) phases. The E* values were then computed as a function of local ventricular wall strain. We found that E* in the IDCM group (40.66 – 215.12) was at least one order of magnitude larger than in the normal control group (1.00 – 6.14). In addition, the IDCM group revealed much higher inhomogeneity of E* values, manifested by a greater spread of E* values throughout the LV. In conclusion, there is a substantially elevated ventricular stiffness index in IDCM. This suggests that E* could be used as a discriminator for early detection of the disease state. The computational performance per data sample took approximately 25 seconds, which demonstrates its clinical potential as a real-time cardiac assessment tool.
Patient-specific coronary artery blood flow simulation using myocardial volume partitioning
Author(s):
Kyung Hwan Kim;
Dongwoo Kang;
Nahyup Kang;
Ji-Yeon Kim;
Hyong-Euk Lee;
James D. K. Kim
Show Abstract
Using computational simulation, we can analyze cardiovascular disease in a non-invasive and quantitative manner. More specifically, computational modeling and simulation technology enables us to analyze functional aspects such as blood flow, as well as anatomical aspects such as stenosis, from medical images without invasive measurements. The simplest way to perform blood flow simulation is to apply patient-specific coronary anatomy together with population-averaged values for the remaining properties; such conditions, however, cannot fully reflect the physiological properties of individual patients. To resolve this limitation, we present a new patient-specific coronary blood flow simulation method based on myocardial volume partitioning that considers the structural correspondence between arteries and myocardium. We exploit the fact that blood supply is closely related to the mass of the myocardial segment supplied by each artery, and apply this concept to set up simulation conditions that incorporate as many patient-specific features as possible from the medical image. First, the coronary arteries and the myocardium are segmented separately from cardiac CT; the myocardium is then partitioned into multiple regions based on the coronary vasculature. The myocardial mass and the required blood mass for each artery are estimated from the myocardial volume fractions. Finally, the required blood mass is used as the boundary condition at each artery outlet, together with the given average aortic blood flow rate and pressure. To show the effectiveness of the proposed method, the fractional flow reserve (FFR) simulated from CT images was compared with invasive FFR measurements in real patient data, and an accuracy of 77% was obtained.
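The idea of converting the myocardial partitioning into outlet boundary conditions can be illustrated with a simplified proportional-allocation rule; the authors derive required blood mass from myocardial volume fractions, and the function below is only a sketch of that idea with assumed names and units:

    import numpy as np

    def outlet_flow_rates(segment_volumes_ml, total_coronary_flow_ml_per_min):
        """Distribute a given total coronary flow over artery outlets in proportion
        to the myocardial volume (mass) each artery supplies."""
        v = np.asarray(segment_volumes_ml, dtype=float)
        fractions = v / v.sum()
        return fractions * total_coronary_flow_ml_per_min

    # e.g. three outlets supplying 45, 35 and 20 ml of myocardium, 240 ml/min total flow:
    # outlet_flow_rates([45, 35, 20], 240.0) -> array([108.,  84.,  48.])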
Automated assessment of bilateral breast volume asymmetry as a breast cancer biomarker during mammographic screening
Author(s):
Alex C. Williams;
Austin Hitt;
Sophie Voisin;
Georgia Tourassi
Show Abstract
The biological concept of bilateral symmetry as a marker of developmental stability and good health is well established. Although most individuals deviate slightly from perfect symmetry, humans are essentially considered bilaterally symmetrical. Consequently, increased fluctuating asymmetry of paired structures could be an indicator of disease. There are several published studies linking bilateral breast size asymmetry with increased breast cancer risk. These studies were based on radiologists’ manual measurements of breast size from mammographic images. We aim to develop a computerized technique to assess fluctuating breast volume asymmetry in screening mammograms and investigate whether it correlates with the presence of breast cancer. Using a large database of screening mammograms with known ground truth we applied automated breast region segmentation and automated breast size measurements in CC and MLO views using three well established methods. All three methods confirmed that indeed patients with breast cancer have statistically significantly higher fluctuating asymmetry of their breast volumes. However, statistically significant difference between patients with cancer and benign lesions was observed only for the MLO views. The study suggests that automated assessment of global bilateral asymmetry could serve as a breast cancer risk biomarker for women undergoing mammographic screening. Such biomarker could be used to alert radiologists or computer-assisted detection (CAD) systems to exercise increased vigilance if higher than normal cancer risk is suspected.
A fully-automated software pipeline for integrating breast density and parenchymal texture analysis for digital mammograms: parameter optimization in a case-control breast cancer risk assessment study
Author(s):
Yuanjie Zheng;
Yan Wang;
Brad M. Keller;
Emily Conant;
James C. Gee;
Despina Kontos
Show Abstract
Estimating a woman’s risk of breast cancer is becoming increasingly important in clinical practice. Mammographic density, estimated as the percent of dense (PD) tissue area within the breast, has been shown to be a strong risk factor. Studies also support a relationship between mammographic texture and breast cancer risk. We have developed a fully-automated software pipeline for computerized analysis of digital mammography parenchymal patterns by quantitatively measuring both breast density and texture properties. Our pipeline combines advanced computer algorithms of pattern recognition, computer vision, and machine learning and offers a standardized tool for breast cancer risk assessment studies. Unlike many existing methods that perform parenchymal texture analysis within specific breast subregions, our pipeline extracts texture descriptors for points on a regular spatial lattice and from a surrounding window of each lattice point, to characterize the local mammographic appearance throughout the whole breast. To demonstrate the utility of our pipeline, and optimize its parameters, we perform a case-control study by retrospectively analyzing a total of 472 digital mammography studies. Specifically, we investigate the window size, which is a lattice-related parameter, and compare the performance of texture features to that of breast PD in classifying case-control status. Our results suggest that different window sizes may be optimal for raw (12.7 mm2) versus vendor post-processed images (6.3 mm2). We also show that the combination of PD and texture features outperforms PD alone. The improvement is significant (p=0.03) when raw images and a window size of 12.7 mm2 are used, having an ROC AUC of 0.66. The combination of PD and our texture features computed from post-processed images with a window size of 6.3 mm2 achieves an ROC AUC of 0.75.
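A simplified sketch of the lattice-based texture extraction; the window statistics below (mean, standard deviation, skewness, entropy) are placeholders rather than the descriptors actually used in the pipeline:

    import numpy as np

    def lattice_texture_features(image, spacing_px, window_px):
        """Slide a window centered on each point of a regular lattice over the breast
        region and compute simple local texture statistics per lattice point."""
        half = window_px // 2
        feats = []
        for r in range(half, image.shape[0] - half, spacing_px):
            for c in range(half, image.shape[1] - half, spacing_px):
                w = image[r - half:r + half + 1, c - half:c + half + 1].astype(float)
                hist, _ = np.histogram(w, bins=32)
                p = hist / max(hist.sum(), 1)
                p = p[p > 0]
                entropy = -np.sum(p * np.log2(p))
                mean, std = w.mean(), w.std()
                skew = 0.0 if std == 0 else np.mean(((w - mean) / std) ** 3)
                feats.append([mean, std, skew, entropy])
        return np.asarray(feats)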
Fully-automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI by integrating a continuous max-flow model and a likelihood atlas
Author(s):
Shandong Wu;
Susan P. Weinstein;
Emily F. Conant;
Despina Kontos
Show Abstract
Studies suggest that the relative amount of fibroglandular tissue in the breast as quantified in breast MRI can be predictive of the risk for developing breast cancer. Automated segmentation of the fibroglandular tissue from breast MRI data could therefore be an essential component in quantitative risk assessment. In this work we propose a new fully-automated 3D segmentation algorithm, namely the continuous max-flow (CMF)-Atlas method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. Our method goes through a first step of applying a continuous max-flow model in the MR image intensity space to produce an initial voxel-wise likelihood map of being fibroglandular tissue. Then we further incorporate an a-priori learned fibroglandular tissue likelihood atlas to refine the initial likelihood map to achieve enhanced segmentation, from which the relative (e.g., percent) volumetric amount of fibroglandular tissue (FT%) in the breast is computed. Our method is evaluated by a representative dataset of 16 3D bilateral breast MRI scans (32 breasts, 896 tomographic MR slices in total). A high correlation (r=0.95) is achieved in FT% estimation, and the overall averaged spatial segmentation agreement is 0.77 in terms of Dice’s coefficient, between the automated segmentation and the manual segmentation obtained from an experienced breast imaging radiologist. The automated segmentation method also runs time-efficiently at ~1 minute for each 3D MR scan (56 slices), compared to ~15 minutes needed for manual segmentation. Our method can serve as an effective tool for processing large scale clinical breast MR datasets for quantitative fibroglandular tissue estimation.
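A toy illustration of fusing the intensity-based likelihood map with the learned atlas and computing FT%; the product-style fusion rule and the 0.5 threshold are assumptions, not the published refinement scheme:

    import numpy as np

    def refine_with_atlas(cmf_likelihood, atlas_prior, eps=1e-6):
        """Fuse a voxel-wise fibroglandular likelihood map (from the intensity-based
        step) with an a-priori learned likelihood atlas registered to the same space,
        using a naive Bayes-style product purely for illustration."""
        num = cmf_likelihood * atlas_prior
        den = num + (1.0 - cmf_likelihood) * (1.0 - atlas_prior) + eps
        return num / den

    def percent_fibroglandular(posterior, breast_mask, threshold=0.5):
        """FT% = fibroglandular voxels / breast voxels, inside the segmented breast."""
        fg = (posterior > threshold) & breast_mask
        return 100.0 * fg.sum() / max(breast_mask.sum(), 1)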
Breast segmentation in MR images using three-dimensional spiral scanning and dynamic programming
Author(s):
Luan Jiang;
Yanyun Lian;
Yajia Gu;
Qiang Li
Show Abstract
Magnetic resonance (MR) imaging has been widely used for risk assessment and diagnosis of breast cancer in the clinic. To
develop a computer-aided diagnosis (CAD) system, breast segmentation is the first important and challenging task. The
accuracy of subsequent quantitative measurement of breast density and abnormalities depends on accurate definition of the breast area in the images. The purpose of this study is to develop and evaluate a fully automated method for accurate segmentation of breast in three-dimensional (3-D) MR images. A fast method was developed to identify bounding box, i.e., the volume of interest (VOI), for breasts. A 3-D spiral scanning method was used to transform the VOI of each breast into a single two-dimensional (2-D) generalized polar-coordinate image. Dynamic programming technique was applied to the transformed 2-D image for delineating the “optimal” contour of the breast. The contour of the breast in the transformed 2-D image was utilized to reconstruct the segmentation results in the 3-D MR images using interpolation and lookup table. The preliminary results on 17 cases show that the proposed method can obtain accurate segmentation of the breast based on subjective observation. By comparing with the manually delineated region of 16 breasts in 8 cases, an overlap index of 87.6% ± 3.8% (mean ± SD), and a volume agreement of 93.4% ± 4.5% (mean ± SD) were achieved, respectively. It took approximately 3 minutes for our method to segment the breast in an MR scan of 256 slices.
Symmetry-based detection and diagnosis of DCIS in breast MRI
Author(s):
Abhilash Srikantha;
Markus T. Harz;
Gillian Newstead;
Lei Wang;
Bram Platel;
Katrin Hegenscheid;
Ritse M. Mann;
Horst K. Hahn;
Heinz-Otto Peitgen
Show Abstract
The delineation and diagnosis of non-mass-like lesions, most notably DCIS (ductal carcinoma in situ), is among the most challenging tasks in breast MRI reading. Even for human observers, DCIS is not always easy to differentiate from patterns of active parenchymal enhancement or from benign alterations of breast tissue. In this light, it is no surprise that CADe/CADx approaches often completely fail to classify DCIS. Of the several approaches that have tried to devise such computer aid, none achieve performances similar to mass detection and classification in terms of sensitivity and specificity. In our contribution, we show a novel approach to combine a newly proposed metric of anatomical breast symmetry calculated on subtraction images of dynamic contrast-enhanced (DCE) breast MRI, descriptive kinetic parameters, and lesion candidate morphology to achieve performances comparable to computer-aided methods used for masses. We have based the development of the method on DCE MRI data of 18 DCIS cases with hand-annotated lesions, complemented by DCE-MRI data of nine normal cases. We propose a novel metric to quantify the symmetry of contralateral breasts and derive a strong indicator for potentially malignant changes from this metric. Also, we propose a novel metric for the orientation of a finding towards a fixed point (the nipple). Our combined scheme then achieves a sensitivity of 89% with a specificity of 78%, matching CAD results for breast MRI on masses. The processing pipeline is intended to run on a CAD server, hence we designed all processing to be automated and free of per-case parameters. We expect that the detection results of our proposed non-mass aimed algorithm will complement other CAD algorithms, or ideally be joined with them in a voting scheme.
Association between bilateral asymmetry of kinetic features computed from the DCE-MRI images and breast cancer
Author(s):
Qian Yang;
Lihua Li;
Juan Zhang;
Chengjie Zhang;
Bin Zheng
Show Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast yields high sensitivity but relatively lower specificity. To improve the diagnostic accuracy of DCE-MRI, we investigated the association between the bilateral asymmetry of kinetic features computed from the left and right breasts and breast cancer detection, with the hypothesis that, owing to the angiogenesis associated with malignant lesions, the average dynamic contrast enhancement of breasts depicting malignant lesions should be higher than that of negative or benign breasts. To test this hypothesis, we assembled a database of 130 DCE-MRI examinations including 81 malignant and 49 benign cases. We developed a computerized scheme that automatically segments the breast areas depicted on MR images and computes kinetic features related to the bilateral asymmetry of the contrast enhancement ratio between the two breasts. An artificial neural network (ANN) was then used to classify malignant and benign cases. To identify the optimal approach for computing the bilateral kinetic feature asymmetry, we tested four different thresholds for selecting the enhanced pixels (voxels) from the DCE-MRI images. Using the optimal threshold, the ANN achieved a classification performance, measured by the area under the ROC curve, of AUC=0.79±0.04. The positive and negative predictive values were 0.75 and 0.67, respectively. The study suggests that the bilateral asymmetry of kinetic features or contrast enhancement of breast background tissue could provide valuable supplementary information for distinguishing malignant from benign cases, and could be fused into existing computer-aided detection schemes to improve classification performance.
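A minimal sketch of computing the bilateral asymmetry of the contrast enhancement ratio; the array names, enhancement definition, and threshold are illustrative assumptions rather than the published feature definitions:

    import numpy as np

    def enhancement_ratio(pre, post, mask, threshold=0.1):
        """Average contrast-enhancement ratio over breast voxels whose relative
        enhancement exceeds a threshold (one of several cut-offs one could test)."""
        rel = (post[mask] - pre[mask]) / np.maximum(pre[mask], 1.0)
        enhanced = rel[rel > threshold]
        return enhanced.mean() if enhanced.size else 0.0

    def bilateral_asymmetry(pre_l, post_l, mask_l, pre_r, post_r, mask_r):
        """Asymmetry of kinetic enhancement between left and right breasts."""
        el = enhancement_ratio(pre_l, post_l, mask_l)
        er = enhancement_ratio(pre_r, post_r, mask_r)
        return abs(el - er) / max(el + er, 1e-6)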
A prostate cancer computer-aided diagnosis system using multimodal magnetic resonance imaging and targeted biopsy labels
Author(s):
Peter Liu;
Shijun Wang;
Baris Turkbey;
Kinzya Grant;
Peter Pinto;
Peter Choyke;
Bradford J. Wood;
Ronald M. Summers
Show Abstract
We propose a new method for prostate cancer classification based on supervised statistical learning methods by
integrating T2-weighted, diffusion-weighted, and dynamic contrast-enhanced MRI images with targeted prostate biopsy results. In the first step of the method, all three imaging modalities are registered based on the image coordinates encoded in the DICOM images. In the second step, local statistical features are extracted in each imaging modality to capture intensity, shape, and texture information at every biopsy target. Finally, using support vector machines, supervised learning is conducted with the biopsy results to train a classification system that predicts the pathology of suspicious cancer lesions. The algorithm was tested with a dataset of 54 patients that underwent 164 targeted biopsies (58 positive, 106 negative). The proposed tri-modal MRI algorithm shows significant improvement over a similar approach that utilizes only T2-weighted MRI images (p= 0.048). The areas under the ROC curve for these methods were 0.82 (95% CI: [0.71, 0.93]) and 0.73 (95% CI: [0.55, 0.84]), respectively.
A study of T[sub]2[/sub]-weighted MR image texture features and diffusion-weighted MR image features for computer-aided diagnosis of prostate cancer
Author(s):
Yahui Peng;
Yulei Jiang;
Tatjana Antic;
Maryellen L. Giger;
Scott Eggener;
Aytekin Oto
Show Abstract
The purpose of this study was to investigate T2-weighted magnetic resonance (MR) image texture features and diffusion-weighted (DW) MR image features for distinguishing prostate cancer (PCa) from normal tissue. We collected two image datasets: 23 PCa patients (25 PCa and 23 normal tissue regions of interest [ROIs]) imaged with Philips MR scanners, and 30 PCa patients (41 PCa and 26 normal tissue ROIs) imaged with GE MR scanners. A radiologist drew ROIs manually via a consensus histology-MR correlation conference with a pathologist. A number of T2-weighted texture features and apparent diffusion coefficient (ADC) features were investigated, and linear discriminant analysis (LDA) was used to combine selected strong image features. The area under the receiver operating characteristic (ROC) curve (AUC) was used to characterize feature effectiveness in distinguishing PCa from normal tissue ROIs. Of the features studied, the ADC 10th percentile, ADC average, and T2-weighted sum average yielded AUC values (±standard error) of 0.95±0.03, 0.94±0.03, and 0.85±0.05 on the Philips images, and 0.91±0.04, 0.89±0.04, and 0.70±0.06 on the GE images, respectively. The three-feature combination yielded AUC values of 0.94±0.03 and 0.89±0.04 on the Philips and GE images, respectively. The ADC 10th percentile, ADC average, and T2-weighted sum average are effective in distinguishing PCa from normal tissue, and appear robust across images acquired from Philips and GE MR scanners.
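The three-feature combination with LDA can be sketched as follows; this shows only resubstitution performance for brevity and does not reproduce the study's actual evaluation protocol:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score

    def roi_features(adc_roi, t2_sum_average):
        """Per-ROI feature vector: ADC 10th percentile, ADC average, and a
        (precomputed) T2-weighted sum-average texture value."""
        return [np.percentile(adc_roi, 10), np.mean(adc_roi), t2_sum_average]

    def combined_auc(feature_matrix, labels):
        """Combine the three features with LDA and report the AUC of the LDA score."""
        lda = LinearDiscriminantAnalysis().fit(feature_matrix, labels)
        scores = lda.decision_function(feature_matrix)
        return roc_auc_score(labels, scores)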
Ultrasound RF time series for tissue typing: first in vivo clinical results
Author(s):
Mehdi Moradi;
S. Sara Mahdavi;
Guy Nir;
Edward C. Jones;
S. Larry Goldenberg;
Septimiu E. Salcudean
Show Abstract
The low diagnostic value of ultrasound in prostate cancer imaging has resulted in an effort to enhance the tumor contrast using ultrasound-based technologies that go beyond traditional B-mode imaging. Ultrasound RF time series, formed by echo samples originating from the same location over a few seconds of imaging, has been proposed and experimentally used for tissue typing with the goal of cancer detection. In this work, for the first time we report the preliminary results of in vivo clinical use of spectral parameters extracted from RF time series in prostate cancer detection. An image processing pipeline is designed to register the ultrasound data to whole-mount histopathology references acquired from prostate specimens that are removed in radical prostatectomy after imaging. Support vector machine classification is used to detect cancer in 524 regions of interest of size 5×5 mm, each forming a feature vector of spectral RF time series parameters. Preliminary ROC curves acquired based on RF time series analysis for individual cases, with leave-one-patient-out cross validation, are presented and compared with B-mode texture analysis.
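A rough sketch of turning one RF time series into spectral features and feeding them to an SVM; the band-power features below are a generic stand-in for the spectral parameters used in the study, and all names are assumptions:

    import numpy as np
    from sklearn.svm import SVC

    def rf_time_series_spectrum(rf_series, n_bands=8):
        """Spectral features of one RF time series: average power in a few frequency
        bands of the per-sample fluctuations over the imaging period."""
        series = np.asarray(rf_series, dtype=float)
        series = series - series.mean()
        power = np.abs(np.fft.rfft(series)) ** 2
        bands = np.array_split(power[1:], n_bands)   # drop DC, split into bands
        return np.array([b.mean() for b in bands])

    # each 5x5 mm ROI yields one feature vector; an SVM is then trained against the
    # registered whole-mount histopathology labels:
    # clf = SVC(kernel='rbf').fit(np.array(roi_feature_vectors), roi_labels)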
Iterative multiple reference tissue method for estimating pharmacokinetic parameters on prostate DCE MRI
Author(s):
Shoshana B. Ginsburg;
B. Nicolas Bloch;
Neil M. Rofsky;
Elizabeth M. Genega;
Robert E. Lenkinski;
Anant Madabhushi
Show Abstract
Pharmacokinetic (PK) parameters are probes of tissue status that can be assessed by analysis of dynamic contrast-enhanced (DCE) MRI and are useful for prostate cancer (CaP) detection and grading. Traditionally, PK analysis requires knowledge of the time-resolved concentration of the contrast agent in the blood plasma, the arterial input function (AIF), which is typically estimated in an artery in the field-of-view (FOV). In cases when no suitable artery is present in the FOV, the multiple reference tissue method (MRTM) enables the estimation of PK parameters without the AIF by leveraging PK parameter values from the literature for a reference tissue in the FOV. Nevertheless, PK parameters estimated in the prostate vary significantly between patients. Consequently, population-based values obtained from the literature may introduce error into PK parameter estimation via MRTM. The objectives of this paper are two-fold. First we present a novel scheme, iterative MRTM (IMRTM), to estimate PK parameter values in the absence of the AIF without making assumptions about the PK constants associated with a reference tissue. Then, using IMRTM we investigate differences in PK constants between CaP in the peripheral zone (PZ) and CaP in the central gland (CG), as CG and PZ CaP have previously been shown to differ significantly in terms of both texture and prognosis. We apply IMRTM to 15 patients with CaP in either the CG or the PZ who were scheduled for a radical prostatectomy and a pre-operative MRI. Values for the PK parameters Ktrans and ve estimated via IMRTM average 0.29 and 0.60 for normal central gland (CG), 0.29 and 0.64 for normal peripheral zone (PZ), and 0.30 and 0.53 for CaP. It is noteworthy that PK constants estimated in PZ CaP are significantly higher than those estimated in CG CaP (p < 0.05). While both MRTM and IMRTM provide PK parameter values that are biologically feasible, IMRTM has the advantage that it invokes patient-specific information rather than relying on population-based PK constants in performing PK analysis.
Automatic abdominal lymph node detection method based on local intensity structure analysis from 3D x-ray CT images
Author(s):
Yoshihiko Nakamura;
Yukitaka Nimura;
Takayuki Kitasaka;
Shinji Mizuno;
Kazuhiro Furukawa;
Hidemi Goto;
Michitaka Fujiwara;
Kazunari Misawa;
Masaaki Ito;
Shigeru Nawano;
Kensaku Mori
Show Abstract
This paper presents an automated method of abdominal lymph node detection to aid the preoperative diagnosis of abdominal cancer surgery. In abdominal cancer surgery, surgeons must resect not only tumors and metastases but also lymph nodes that might have a metastasis. This procedure is called lymphadenectomy or lymph node dissection. Insufficient lymphadenectomy carries a high risk for relapse. However, excessive resection decreases a patient's quality of life. Therefore, it is important to identify the location and the structure of lymph nodes to make a suitable surgical plan. The proposed method consists of candidate lymph node detection and false positive reduction. Candidate lymph nodes are detected using a multi-scale blob-like enhancement filter based on local intensity structure analysis. To reduce false positives, the proposed method uses a classifier based on support vector machine with the texture and shape information. The experimental results reveal that it detects 70.5% of the lymph nodes with 13.0 false positives per case.
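A simplified blob-enhancement stand-in for the candidate detection step; this uses a scale-normalized Laplacian-of-Gaussian rather than the Hessian-based local intensity structure filter actually used, and the scales are assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def multiscale_blob_response(volume, sigmas=(1.0, 2.0, 3.0, 4.0)):
        """Multi-scale blob enhancement: scale-normalized negative Laplacian of
        Gaussian, maximized over scales, applied to a 3D CT volume."""
        response = np.full(volume.shape, -np.inf, dtype=float)
        for s in sigmas:
            r = -(s ** 2) * gaussian_laplace(volume.astype(float), sigma=s)
            response = np.maximum(response, r)
        return response

    # candidate lymph nodes = local maxima of `response` above a threshold, later
    # pruned by an SVM using texture and shape features of each candidate region.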
Detection of microcalcifications in breast tomosynthesis reconstructed with multiscale bilateral filtering regularization
Author(s):
Ravi K. Samala;
Heang-Ping Chan;
Yao Lu;
Lubomir Hadjiyski;
Jun Wei;
Berkman Sahiner;
Mark Helvie
Show Abstract
We are developing a CAD system to assist radiologists in detecting microcalcification clusters (MCs) in digital breast
tomosynthesis (DBT). In this study, we investigated the feasibility of using as input to the CAD system an enhanced
DBT volume that was reconstructed with the iterative simultaneous algebraic reconstruction technique (SART)
regularized by a new multiscale bilateral filtering (MBiF) method. The MBiF method utilizes the multiscale structures
of the breast to selectively enhance MCs and preserve mass spiculations while smoothing noise in the DBT images. The
CAD system first extracted the enhancement-modulated calcification response (EMCR) in the DBT volume. Detection
of the seed points for MCs and individual calcifications were guided by the EMCR. MC candidates were formed by
dynamic clustering. FPs were further reduced by analysis of the feature characteristics of the MCs. With IRB approval,
two-view DBT of 91 subjects with biopsy-proven MCs were collected. Seventy-eight views from 39 subjects with MCs
were used for training and the remaining 52 cases were used for independent testing. For view-based detection, a
sensitivity of 85% was achieved at 3.23 FPs/volume. For case-based detection, the same sensitivity was obtained at 1.63
FPs/volume. The results indicate that the new MBiF method is useful in improving the detection accuracy of clustered
microcalcifications. An effective CAD system for microcalcification detection in DBT has the potential to eliminate the
need for additional mammograms, thereby reducing patient dose and reading time.
Neural network training by maximization of the area under the ROC curve: application to characterization of masses on breast ultrasound as malignant or benign
Author(s):
Berkman Sahiner;
Xin He;
Weijie Chen;
Heang-Ping Chan;
Lubomir Hadjiiski;
Nicholas Petrick
Show Abstract
Back-propagation neural networks (BPNs) are traditionally trained using error measures such as sum-of-squares or
cross-entropy. If the training sample size is small, and the neural network has a large number of hidden layer nodes, the BPN may be overtrained, i.e., it may fit the training data well, but may generalize poorly to independent test data. In this study, we investigated a training technique that maximized the approximate area under the ROC curve (AUC) to reduce overtraining. In general, the non-parametric AUC is a discontinuous and non-differentiable function of the neural network output, which makes it unsuitable for gradient descent algorithms such as back-propagation. We used a semi-differentiable approximation to AUC, which appeared to provide reasonable training for the data sets explored in this study. We performed a simulation study using synthetic data sets consisting of Gaussian mixtures to investigate the behavior of this new technique with respect to overtraining. Our results indicated that an artificial neural network trained using the AUC-maximization method is less prone to overtraining. The advantage of the AUC-maximization method was consistently observed over different values of hidden layer BPN nodes, training sample sizes, and the dimensionality of the feature spaces evaluated in our simulation study. For a five-hidden-node BPN trained using 50 training samples per class, the average test AUC was 0.896 (standard deviation (SD): 0.026) with AUC-maximization and 0.856 (SD: 0.028) with the sum-of-squares method. The gain in test performance by the AUC-maximization method over the traditional BPN training was greater when the training sample size was smaller. We also applied this new method to a data set previously acquired for characterization of masses on breast ultrasound as malignant or benign. Our results with this real-world data set had the same trend as with our simulation data sets in that the AUC-maximization technique was less prone to overtraining than the sum-of-squares method.
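To make the AUC-maximization idea concrete, the sketch below maximizes a sigmoid-smoothed Wilcoxon statistic by gradient ascent; for brevity a linear scoring function stands in for the back-propagation network, and the smoothing parameter beta and learning rate are assumptions:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_auc_maximizer(X_pos, X_neg, beta=0.1, lr=0.1, n_iter=500):
        """Maximize a smooth surrogate of the Wilcoxon AUC statistic:
        the mean over (pos, neg) pairs of sigmoid((s_pos - s_neg) / beta),
        with a linear scoring function s = w.x."""
        w = np.zeros(X_pos.shape[1])
        for _ in range(n_iter):
            d = (X_pos @ w)[:, None] - (X_neg @ w)[None, :]          # pairwise score differences
            sig = sigmoid(d / beta)
            g = sig * (1.0 - sig) / beta                             # derivative of the surrogate
            # gradient of the surrogate AUC w.r.t. w, averaged over all pos/neg pairs
            grad = (np.einsum('ij,ik->k', g, X_pos) -
                    np.einsum('ij,jk->k', g, X_neg)) / g.size
            w += lr * grad
        return w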
Finding lesion correspondences in different views of automated 3D breast ultrasound
Author(s):
Tao Tan;
Bram Platel;
Michael Hicks;
Ritse M. Mann;
Nico Karssemeijer
Show Abstract
Screening with automated 3D breast ultrasound (ABUS) is gaining popularity. However, the acquisition of multiple views required to cover an entire breast makes radiologic reading time-consuming. Linking lesions across views can facilitate the reading process. In this paper, we propose a method to automatically predict the position of a lesion in the target ABUS views, given the location of the lesion in a source ABUS view. We combine features describing the lesion location with respect to the nipple, the transducer and the chest wall, with features describing lesion properties such as intensity, spiculation, blobness, contrast and lesion likelihood. By using a grid search strategy, the location of the lesion was predicted in the target view. Our method achieved an error of 15.64 ± 16.13 mm. The error is small enough to help locate the lesion with minor additional interaction.
Computer-aided lesion diagnosis in B-mode ultrasound by border irregularity and multiple sonographic features
Author(s):
Jong-Ha Lee;
Yeong Kyeong Seong;
Chu-Ho Chang;
Eun Young Ko;
Baek Hwan Cho;
Jeonghun Ku;
Kyoung-Gu Woo
Show Abstract
In this paper, we propose novel feature extraction techniques that can provide a high accuracy rate for mass classification in the computer-aided lesion diagnosis of breast tumors. In total, 290 features were extracted using the newly developed border irregularity feature extractor as well as multiple sonographic features based on the Breast Imaging-Reporting and Data System (BI-RADS) lexicon. To demonstrate the performance of the proposed features, 4,107 ultrasound images containing 2,508 malignant cases were used. The clinical results demonstrate that the proposed feature combination can be an integral part of ultrasound CAD systems to help accurately distinguish benign from malignant tumors.
A robust region-based active contour model with point classification for ultrasound breast lesion segmentation
Author(s):
Zhihua Liu;
Lidan Zhang;
Haibing Ren;
Ji-Yeun Kim
Show Abstract
Lesion segmentation is one of the key technologies for computer-aided diagnosis (CAD) systems. In this paper, we propose a robust region-based active contour model (ACM) with point classification to segment highly variable breast lesions in ultrasound images. First, a local signed pressure force (LSPF) function is proposed to classify the contour points into two classes: a local low contrast class and a local high contrast class. Second, we build a sub-model for each class. For the low contrast class, the sub-model is built by combining a global energy model with a local energy model to find a globally optimal solution. For the high contrast class, the sub-model is simply the local energy model, owing to its good level set initialization. Our final energy model is built by adding the two sub-models. Finally, the model is minimized, evolving the level set contour to obtain the segmentation result. We compare our method with other state-of-the-art methods on a very large ultrasound database, and the results show that our method achieves better performance.
Fast microcalcification detection in ultrasound images using image enhancement and threshold adjacency statistics
Author(s):
Baek Hwan Cho;
Chuho Chang;
Jong-Ha Lee;
Eun Young Ko;
Yeong Kyeong Seong;
Kyoung-Gu Woo
Show Abstract
The existence of microcalcifications (MCs) is an important marker of malignancy in breast cancer. In spite of its benefits for mass detection in dense breasts, ultrasonography is believed to be unreliable for detecting MCs. For computer-aided diagnosis systems, however, accurate detection of MCs has the potential to improve performance in both Breast Imaging-Reporting and Data System (BI-RADS) lexicon description for calcifications and malignancy classification. We propose a new, efficient and effective method for MC detection using image enhancement and threshold adjacency statistics (TAS). The main idea of TAS is to threshold an image and to count the number of white pixels with a given number of adjacent white pixels. Our contribution is to adopt TAS features and apply image enhancement to facilitate MC detection in ultrasound images. We employed fuzzy logic, a top-hat filter, and a texture filter to enhance images for MCs. Using a total of 591 images, the classification accuracy of the proposed method in MC detection was 82.75%, which is comparable to that of Haralick texture features (81.38%). When combined, the performance was as high as 85.11%. In addition, our method also showed potential for mass classification when combined with existing features. In conclusion, the proposed method exploiting image enhancement and TAS features has the potential to handle MC detection in ultrasound images efficiently and to extend to the real-time localization and visualization of MCs.
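Threshold adjacency statistics can be computed along these lines; this is a generic TAS implementation, not necessarily identical to the variant used here:

    import numpy as np
    from scipy.ndimage import convolve

    def threshold_adjacency_statistics(image, threshold):
        """TAS: binarize the image, then for every white pixel count its white
        8-neighbors and return the normalized 9-bin histogram of counts 0..8."""
        binary = (image > threshold).astype(np.uint8)
        kernel = np.ones((3, 3), dtype=np.uint8)
        kernel[1, 1] = 0
        neighbor_counts = convolve(binary, kernel, mode='constant', cval=0)
        counts_at_white = neighbor_counts[binary == 1]
        hist = np.bincount(counts_at_white, minlength=9)[:9].astype(float)
        return hist / max(hist.sum(), 1)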
Psychophysical similarity measure based on multi-dimensional scaling for retrieval of similar images of breast masses on mammograms
Author(s):
Kohei Nishimura;
Chisako Muramatsu;
Mikinao Oiwa;
Misaki Shiraiwa;
Tokiko Endo;
Kunio Doi;
Hiroshi Fujita
Show Abstract
For retrieving reference images which may be useful to radiologists in their diagnosis, it is necessary to determine a
reliable similarity measure which would agree with radiologists' subjective impression. In this study, we propose a new
similarity measure for retrieval of similar images, which may assist radiologists in the distinction between benign and
malignant masses on mammograms, and investigated its usefulness. In our previous study, to take into account the
subjective impression, the psychophysical similarity measure was determined by use of an artificial neural network (ANN), which was employed to learn the relationship between radiologists’ subjective similarity ratings and image
features. In this study, we propose a psychophysical similarity measure based on multi-dimensional scaling (MDS) in
order to improve the accuracy in retrieval of similar images. Twenty-seven images of masses, 3 each from 9 different
pathologic groups, were selected, and the subjective similarity ratings for all possible 351 pairs were determined by 8
expert physicians. MDS was applied using the average subjective ratings, and the relationship between each output axis and image features was modeled by the ANN. The MDS-based psychophysical measures were determined by the
distance in the modeled space. With a leave-one-out test method, the conventional psychophysical similarity measure
was moderately correlated with subjective similarity ratings (r=0.68), whereas the psychophysical measure based on
MDS was highly correlated (r=0.81). The result indicates that a psychophysical similarity measure based on MDS would
be useful in the retrieval of similar images.
Automatic localization of the nipple in mammograms using Gabor filters and the Radon transform
Author(s):
Jayasree Chakraborty;
Sudipta Mukhopadhyay;
Rangaraj M. Rangayyan;
Anup Sadhu;
P. M. Azevedo-Marques
Show Abstract
The nipple is an important landmark in mammograms. Detection of the nipple is useful for alignment and registration of mammograms in computer-aided diagnosis of breast cancer. In this paper, a novel approach is proposed for automatic detection of the nipple based on the oriented patterns of the breast tissues present in mammograms. The Radon transform is applied to the oriented patterns obtained by a bank of Gabor filters to detect the linear structures related to the tissue patterns. The detected linear structures are then used to locate the nipple position using the characteristics of convergence of the tissue patterns towards the nipple. The performance of the method was evaluated with 200 scanned-film images from the mini-MIAS database and 150 digital radiography (DR) images from a local database. Average errors of 5.84 mm and 6.36 mm were obtained with respect to the reference nipple location marked by a radiologist for the mini-MIAS and the DR images, respectively.
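The Gabor-plus-Radon front end might be sketched as follows with scikit-image; the filter frequency and number of orientations are assumptions, and the convergence analysis that actually locates the nipple is omitted:

    import numpy as np
    from skimage.filters import gabor
    from skimage.transform import radon

    def oriented_linear_structures(image, frequency=0.1, n_orientations=8):
        """Gabor filter bank followed by a Radon transform of the maximum filter
        response; peaks in the sinogram correspond to dominant linear tissue
        patterns whose convergence is then analyzed to locate the nipple."""
        responses = []
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=frequency, theta=theta)
            responses.append(np.hypot(real, imag))
        magnitude = np.max(np.stack(responses), axis=0)
        angles = np.linspace(0.0, 180.0, 90, endpoint=False)
        sinogram = radon(magnitude, theta=angles, circle=False)
        return magnitude, sinogram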
Preliminary investigation on CAD system update: effect of selection of new cases on classifier performance
Author(s):
Chisako Muramatsu;
Kohei Nishimura;
Takeshi Hara;
Hiroshi Fujita
Show Abstract
When a computer-aided diagnosis (CAD) system is used in clinical practice, it is desirable that the system is constantly
and automatically updated with new cases obtained for performance improvement. In this study, the effect of different
case selection methods for the system updates was investigated. For the simulation, the data for classification of benign and malignant masses on mammograms were used. Six image features were used for training three classifiers: linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbors (kNN). Three datasets, including dataset I for initial training of the classifiers, dataset T for intermediate testing and retraining, and dataset E for evaluating the classifiers, were randomly sampled from the database. As a result of intermediate testing, some cases from dataset T were selected to be added to the previous training set in the classifier updates. In each update, cases were selected using four methods: selection of (a) correctly classified samples, (b) incorrectly classified samples, (c) marginally classified samples, and (d) random samples. For comparison, system updates using all samples in dataset T were also evaluated. In general, the average areas under the receiver operating characteristic curves (AUCs) were almost unchanged with method (a), whereas AUCs generally degraded with method (b). The AUCs were improved with methods (c) and (d), although use of all available cases generally provided the best or nearly best AUCs. In conclusion, CAD systems may be improved by retraining with new cases accumulated during practice.
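Selection method (c), marginally classified samples, can be illustrated with an SVM decision function; this is only a sketch, and the margin criterion and the number of selected cases are assumptions:

    import numpy as np
    from sklearn.svm import SVC

    def select_marginal_cases(clf, X_new, n_select):
        """Pick the new cases lying closest to the decision boundary, i.e. the
        candidates most likely to refine the classifier when added to training."""
        margins = np.abs(clf.decision_function(X_new))
        return np.argsort(margins)[:n_select]

    # usage sketch: clf = SVC(kernel='linear').fit(X_initial, y_initial)
    # idx = select_marginal_cases(clf, X_intermediate, n_select=20)
    # clf.fit(np.vstack([X_initial, X_intermediate[idx]]),
    #         np.concatenate([y_initial, y_intermediate[idx]]))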
Model-based position correlation between breast images
Author(s):
J. Georgii;
F. Zöhrer;
H. K. Hahn
Show Abstract
Nowadays, breast diagnosis is based on images from different projections and modalities, so that the sensitivity and specificity of the diagnosis can be improved. However, this burdens radiologists with finding corresponding locations in these data sets, which is a time-consuming task, especially as image resolution increases and more and more data have to be considered in the diagnosis. Therefore, we aim to support radiologists by automatically synchronizing cursor positions between different views of the breast. Specifically, we present an automatic approach to compute the spatial correlation between MLO and CC mammogram or tomosynthesis projections of the breast. It is based on pre-computed finite element simulations of generic breast models, which are adapted to the patient-specific breast using a contour mapping approach. Our approach is designed to be fully automatic and efficient, so that it can be implemented directly into existing multimodal breast workstations. Additionally, it is extendable to support other breast modalities in the future.
Boosting framework for mammographic mass classification with combination of CC and MLO view information
Author(s):
Dae Hoe Kim;
Jae Young Choi;
Yong Man Ro
Show Abstract
In breast cancer screening practice, radiologists compare multiple views during the interpretation of mammograms to detect breast cancers. Hence, it is natural that information derived from multiple mammograms can be used in a computer-aided detection (CAD) system to obtain better sensitivity and/or specificity. However, similarity features derived from the combination of cranio-caudal (CC) and mediolateral oblique (MLO) views are weak for classifying masses, because the breast is elastic and deformable. In this study, therefore, a new mass classification method with a boosting algorithm is proposed, aiming to reduce FPs by combining the information of CC and MLO view mammograms. The proposed method builds on two observations: (1) classifiers trained using similarity features are rather weak classifiers; (2) boosting generates a single strong classifier by combining multiple weak classifiers. By combining the classifier ensemble framework with similarity features, we are able to improve mass classification performance in two-view analysis. In this study, 192 mammogram cases were collected from the public DDSM database to demonstrate the effectiveness of the proposed method in improving mass classification. Results show that our proposed classifier ensemble method achieves an area under the ROC curve (AUC) of 0.7479, compared to 0.7123 for the best single support vector machine (SVM) classifier using feature-level fusion. In addition, the weakness of the similarity features was confirmed experimentally, supporting the feasibility of the proposed method.
Neural networks combined with region growing techniques for tumor detection in [18F]-fluorothymidine dynamic positron emission tomography breast cancer studies
Author(s):
Zoltan Cseh;
Laura Kenny;
James Swingland;
Subrata Bose;
Federico E. Turheimer
Show Abstract
Early detection and precise localization of malignant tumors has been a primary challenge in medical imaging in recent
years. Functional modalities play a continuously increasing role in these efforts. Image segmentation algorithms which
enable automatic, accurate tumor visualization and quantification on noisy positron emission tomography (PET) images
would significantly improve the quality of treatment planning processes and in turn, the success of treatments. In this
work, a novel multistep method has been applied to identify tumor regions in 4D dynamic [18F]-fluorothymidine (FLT) PET studies of patients with locally advanced breast cancer. To limit the effect of the high inhomogeneity inherently present inside tumors, specific voxel-kinetic classes were first introduced by finding characteristic FLT-uptake curves with the K-means algorithm on a set of voxels collected from each tumor. Image voxel sets were then split based on voxel time-activity curve (TAC) similarities, and models were generated separately for each voxel set. Artificial neural networks, in comparison with linear classification algorithms, were then applied to distinguish tumor and healthy regions relying on the characteristics of the TACs of the individual voxels. The outputs of the best model, which had very high specificity, were then used as input seeds for region shrinking and growing techniques, the application of which considerably enhanced the sensitivity and specificity (78.65% ± 0.65% and 98.98% ± 0.03%, respectively) of the final image segmentation model.
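The initial voxel-kinetic clustering step could be sketched as follows; the number of kinetic classes and the area normalization are illustrative choices, not taken from the paper:

    import numpy as np
    from sklearn.cluster import KMeans

    def voxel_kinetic_classes(tacs, n_classes=4, random_state=0):
        """Cluster voxel time-activity curves (TACs) into kinetic classes with
        K-means; TACs are normalized to their area so the clustering reflects
        curve shape rather than absolute uptake."""
        tacs = np.asarray(tacs, dtype=float)
        norm = tacs / np.maximum(tacs.sum(axis=1, keepdims=True), 1e-9)
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=random_state)
        labels = km.fit_predict(norm)
        return labels, km.cluster_centers_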
Improving positive predictive value in computer-aided diagnosis using mammographic mass and microcalcification confidence score fusion based on co-location information
Author(s):
Seung Hyun Lee;
Dae Hoe Kim;
Jae Young Choi;
Yong Man Ro
Show Abstract
In this study, a novel fusion framework has been developed to combine the detection of both breast masses and
microcalcifications (MCs), aiming to improve positive predictive value (PPV) in Computer-aided Diagnosis (CADx).
Clinically, it has been widely accepted that a mass associated with MC is a useful indicator for predicting the malignancy of the mass. In light of this fact, given that a mass and MCs are co-located (i.e., they are at the same location), the proposed fusion framework combines the confidence scores of the mass and MCs to improve the estimated probability that the mass is malignant. To this end, the popular Bayesian network model is applied to effectively combine the detection confidence scores and to achieve higher accuracy for malignant mass classification. To demonstrate the effectiveness of the proposed fusion framework, 31 mammograms were collected from the public DDSM database. The proposed fusion framework can increase the area under the receiver operating characteristic curve (AUC) from 0.7939 to 0.8806, and the partial area index (PAUC) above the sensitivity of 0.9 from 0.1270 to 0.2280, compared to the CADx system without exploiting co-location information with MCs. Based on these results, it can be expected that the proposed fusion framework can be readily applied for realizing CADx systems with higher PPV.
Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter
Author(s):
Ruriha Yoshikawa;
Atsushi Teramoto;
Tomoko Matsubara;
Hiroshi Fujita
Show Abstract
Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and microcalcifications. However, the automated detection of architectural distortion remains challenging, particularly with respect to sensitivity. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of the analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure, which sets the filter parameters depending on the thickness of the gland structure. As for post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis. Moreover, background mammary glands are removed based on the intensity output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index followed by binarization and labeling. False positives among the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database for Screening Mammography (DDSM). As a result, the true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.
A pairwise image analysis with sparse decomposition
Author(s):
A. Boucher;
F. Cloppet;
N. Vincent
Show Abstract
This paper aims to detect the evolution between two images representing the same scene. The evolution detection
problem has many practical applications, especially in medical images. Indeed, the concept of a patient “file” implies the joint analysis of different acquisitions taken at different times, and the detection of significant modifications. The
research presented in this paper is carried out within the application context of the development of computer assisted
diagnosis (CAD) applied to mammograms. It is performed on already registered pairs of images. As the registration is never perfect, we must develop a comparison method sufficiently sensitive to detect genuinely small differences between comparable tissues. In many applications, the assessment of similarity used during the registration step is also used for the interpretation step that prompts suspicious regions. In our case, registration is assumed to match the spatial coordinates of similar anatomical elements. In this paper, in order to process the medical images at tissue level, the image representation is based on elementary patterns, therefore seeking patterns, not pixels. Besides, as the studied images have low entropy, the decomposed signal is expressed in a parsimonious way. Parsimonious representations are known to help extract the significant structures of a signal and to generate a compact version of the data. This change of representation should allow us to compare the studied images quickly, thanks to the low weight of the images thus represented, while maintaining good representativeness. The good precision of our results shows the efficiency of the approach.
Visual words based approach for tissue classification in mammograms
Author(s):
Idit Diamant;
Jacob Goldberger;
Hayit Greenspan
Show Abstract
The presence of Microcalcifications (MC) is an important indicator for developing breast cancer. Additional indicators for cancer risk exist, such as breast tissue density type. Different methods have been developed for breast tissue classification for use in Computer-aided diagnosis systems. Recently, the visual words (VW) model has been successfully applied for different classification tasks. The goal of our work is to explore VW based methodologies for
various mammography classification tasks. We start with the challenge of classifying breast density and then focus on
classification of normal tissue versus microcalcifications. The presented methodology is based on a patch-based visual words model, which includes building a dictionary for a training set using local descriptors and representing each image using a visual word histogram. Classification is then performed using k-nearest-neighbour (KNN) and support vector machine (SVM) classifiers. We tested our algorithm on the publicly available MIAS and DDSM datasets. The input is a representative region-of-interest per mammography image, manually selected and labelled by an expert. In the tissue density task, classification accuracy reached 85% using KNN and 88% using SVM, which is competitive with state-of-the-art results. For MC vs. normal tissue, accuracy reached 95.6% using SVM. The results demonstrate the feasibility of classifying breast tissue using our model. Currently, we are improving the results further while also investigating the capability of the VW model to address additional important mammogram classification problems. We expect that the methodology presented will enable high levels of classification accuracy, suggesting new means for automated tools for mammography diagnosis support.
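A minimal patch-based visual-words pipeline along these general lines can be sketched with scikit-learn. The dictionary size, patch size and raw-patch descriptors below are illustrative assumptions, not the descriptors or settings used by the authors.

```python
# Minimal sketch: build a visual-word dictionary from patches, represent each
# image as a word histogram, and classify with an SVM. Data are placeholders.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def patch_descriptors(img, patch_size=(8, 8), max_patches=200, seed=0):
    patches = extract_patches_2d(img, patch_size, max_patches=max_patches,
                                 random_state=seed)
    return patches.reshape(len(patches), -1)      # raw patches as descriptors

images = [np.random.rand(64, 64) for _ in range(20)]   # placeholder ROIs
labels = np.array([0, 1] * 10)                          # placeholder class labels

kmeans = KMeans(n_clusters=50, n_init=10, random_state=0)
kmeans.fit(np.vstack([patch_descriptors(im) for im in images]))   # the dictionary

def vw_histogram(img):
    words = kmeans.predict(patch_descriptors(img))
    hist, _ = np.histogram(words, bins=np.arange(51))
    return hist / hist.sum()                       # normalized visual-word histogram

X = np.array([vw_histogram(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)             # visual-word classifier
```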
Improving breast cancer classification with mammography, supported on an appropriate variable selection analysis
Author(s):
Noel Pérez;
Miguel A. Guevara;
Augusto Silva
Show Abstract
This work addresses the issue of variable selection within the context of breast cancer classification with mammography. A comprehensive repository of feature vectors was used, including a hybrid subset gathering image-based and clinical features. It aimed to gather experimental evidence on variable selection in terms of cardinality and type, and to find a classification scheme that provides the best performance in terms of Area Under the Receiver Operating Characteristic Curve (AUC) scores using the ranked feature subsets. We evaluated and classified a total of 300 subsets of features formed by the application of Chi-Square Discretization, Information-Gain, One-Rule and RELIEF methods in association with Feed-Forward Backpropagation Neural Network (FFBP), Support Vector Machine (SVM) and Decision Tree J48 (DTJ48) Machine Learning Algorithms (MLA) for a comparative performance evaluation based on AUC scores. A variable selection analysis was performed for Single-View Ranking and Multi-View Ranking groups of features. Feature subsets representing microcalcifications (MCs), masses, and both MCs and masses achieved AUC scores of 0.91, 0.954 and 0.934, respectively. Experimental evidence demonstrated that classification performance was improved by combining image-based and clinical features. The most important clinical and image-based features were StromaDistortion and Circularity, respectively. Other features, less important but worth using due to their consistency, were Contrast, Perimeter, Microcalcification, Correlation and Elongation.
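To illustrate the general workflow of ranking variables and scoring subsets of increasing cardinality by AUC, the sketch below uses mutual information as a stand-in for the Information-Gain ranking and an SVM on synthetic data; it is not the authors' pipeline.

```python
# Minimal sketch: rank features by mutual information, then compare cross-validated
# AUC for growing subset sizes. Data and subset sizes are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

for k in (5, 10, 20, 30):                          # candidate subset cardinalities
    cols = ranking[:k]
    auc = cross_val_score(SVC(probability=True), X[:, cols], y,
                          cv=5, scoring="roc_auc").mean()
    print(f"top {k:2d} features: mean AUC = {auc:.3f}")
```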
Predictive features of breast cancer on Mexican screening mammography patients
Author(s):
Juan Rodriguez-Rojas;
Margarita Garza-Montemayor;
Victor Trevino-Alvarado;
José Gerardo Tamez-Pena
Show Abstract
Breast cancer is the most common type of cancer worldwide. In response, breast cancer screening programs are becoming common around the world, and public programs now serve millions of women worldwide. These programs are expensive, requiring many specialized radiologists to examine all images. Nevertheless, there is a lack of trained radiologists in many countries, such as Mexico, which is a barrier towards decreasing breast cancer mortality and points to the need for a triaging system that prioritizes high-risk cases for prompt interpretation. We therefore explored, in an image database of Mexican patients, whether high-risk cases can be distinguished using image features. We collected a set of 200 digital screening mammography cases from a hospital in Mexico and assigned low- or high-risk labels according to their BI-RADS scores. Breast tissue segmentation was performed using an automatic procedure. Image features were obtained considering only the segmented region on each view and comparing the bilateral differences of the obtained features. Predictive combinations of features were chosen using a genetic-algorithm-based feature selection procedure. The best model found was able to classify low-risk and high-risk cases with an area under the ROC curve of 0.88 on a 150-fold cross-validation test. The selected features were associated with differences in signal distribution and tissue shape between the bilateral views. The model found can be used to automatically identify high-risk cases and trigger the necessary measures to provide prompt treatment.
Automatic assessment of the quality of patient positioning in mammography
Author(s):
Thomas Bülow;
Kirsten Meetz;
Dominik Kutra;
Thomas Netsch;
Rafael Wiemker;
Martin Bergtholdt;
Jörg Sabczynski;
Nataly Wieberneit;
Manuela Freund;
Ingrid Schulze-Wenck
Show Abstract
Quality assurance has been recognized as crucial for the success of population-based breast cancer screening programs using x-ray mammography. Quality guidelines and criteria have been defined in the US as well as the European Union in order to ensure the quality of breast cancer screening. Taplin et al. report that incorrect positioning of the breast is the major image quality issue in screening mammography. Consequently, guidelines and criteria for correct positioning and for the assessment of the positioning quality in mammograms play an important role in the quality standards. In this paper we present a system for the automatic evaluation of positioning quality in mammography according to the existing standardized criteria. This involves the automatic detection of anatomic landmarks in medio-lateral oblique (MLO) and cranio-caudal (CC) mammograms, namely the pectoral muscle, the mammilla and the infra-mammary fold. Furthermore, the detected landmarks are assessed with respect to their proper presentation in the image. Finally, the geometric relations between the detected landmarks are investigated to assess the positioning quality. This includes the evaluation of whether the pectoral muscle is imaged down to the mammilla level, and whether the posterior nipple line diameter of the breast is consistent between the different views (MLO and CC) of the same breast. Results of the computerized assessment are compared to ground truth collected from two expert readers.
Automatic 3D lesion segmentation on breast ultrasound images
Author(s):
Hsien-Chi Kuo;
Maryellen L. Giger;
Ingrid Reiser;
Karen Drukker;
Alexandra Edwards;
Charlene A. Sennett
Show Abstract
Automatically acquired and reconstructed 3D breast ultrasound images allow radiologists to detect and evaluate breast lesions in 3D. However, assessing potential cancers in 3D ultrasound can be difficult and time consuming. In this study, we evaluate a 3D lesion segmentation method, which we had previously developed for breast CT, and investigate its robustness on lesions in 3D breast ultrasound images. Our dataset includes 98 3D breast ultrasound images obtained on an ABUS system from 55 patients containing 64 cancers. Cancers depicted on 54 US images had been clinically interpreted as negative on screening mammography, and 44 had been clinically visible on mammography. All were from women with breast density BI-RADS 3 or 4. Tumor centers and margins were indicated and outlined by radiologists. Initial RGI-eroded contours were automatically calculated and served as input to the active contour segmentation algorithm yielding the final lesion contour. Tumor segmentation was evaluated by determining the overlap ratio (OR) between computer-determined and manually drawn outlines. The resulting average overlap ratios on coronal, transverse, and sagittal views were 0.60 ± 0.17, 0.57 ± 0.18, and 0.58 ± 0.17, respectively. All OR values were significantly higher than 0.4, which is deemed “acceptable”. Within the groups of mammogram-negative and mammogram-positive cancers, the overlap ratios were 0.63 ± 0.17 and 0.56 ± 0.16, respectively, on the coronal views, with similar results on the other views. The segmentation performance was not found to be correlated with tumor size. These results indicate the robustness of the 3D lesion segmentation technique in multi-modality 3D breast imaging.
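The evaluation metric is easy to reproduce. The sketch below computes an overlap ratio between an automatic and a manual mask, assuming the common intersection-over-union definition (the abstract does not spell out its exact formula) and the 0.4 acceptability threshold mentioned above.

```python
# Minimal sketch: overlap ratio between two boolean masks of the same shape,
# assuming the intersection-over-union convention. Masks are placeholders.
import numpy as np

def overlap_ratio(auto_mask, manual_mask):
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    union = np.logical_or(auto_mask, manual_mask).sum()
    return intersection / union if union else 0.0

auto = np.zeros((64, 64), bool);   auto[20:40, 20:40] = True    # placeholder auto outline
manual = np.zeros((64, 64), bool); manual[24:44, 22:42] = True  # placeholder manual outline

ratio = overlap_ratio(auto, manual)
print(round(ratio, 3), "acceptable" if ratio > 0.4 else "not acceptable")
```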
Texture feature standardization in digital mammography for improving generalizability across devices
Author(s):
Yan Wang;
Brad M. Keller;
Yuanjie Zheng;
Raymond J. Acciavatti;
James C. Gee;
Andrew D. A. Maidment;
Despina Kontos
Show Abstract
Growing evidence suggests a relationship between mammographic texture and breast cancer risk. For studies performing texture analysis on digital mammography (DM) images from various DM systems, it is important to evaluate whether different systems could introduce inherent differences in the images analyzed and, if such differences exist, how to construct a methodological framework to identify and standardize their effects. In this study, we compared two DM systems, the GE Senographe 2000D and DS, using a validated physical breast phantom (Rachel, Gammex). The GE 2000D and DS systems use the same detector but a different automated exposure control (AEC) system, resulting in differences in dose performance. On each system, images of the phantom were acquired five times in the cranio-caudal (CC) view with the same clinically optimized phototimer setting. Three classes of texture features, namely grey-level histogram, co-occurrence, and run-length texture features (a total of 26 features), are generated within the breast region from the raw DM images and compared between the two imaging systems. To alleviate system effects, a range of standardization steps are applied to the feature extraction process: z-score normalization is performed as the initial step to standardize image intensities, and the parameters used in generating co-occurrence features are varied to decrease system differences introduced by detector blurring effects. To identify texture features robust to detectors (i.e., those minimally affected only by electronic noise), the distribution of each texture feature is compared between the two systems using the Kolmogorov-Smirnov (K-S) test at 0.05 significance, where features with p>0.05 are deemed robust to inherent system differences. Our approach could provide a basis for texture feature standardization across different DM imaging systems and a systematic methodology for selecting generalizable texture descriptors in breast cancer risk assessment.
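A toy version of the standardization and robustness check might look like the following: z-score an image, extract one co-occurrence feature, and compare the feature distributions of two systems with a Kolmogorov-Smirnov test. The "phantom" images here are synthetic placeholders, not Rachel phantom acquisitions.

```python
# Minimal sketch: z-score normalization, one GLCM feature, and a K-S comparison
# between two acquisition systems. Images and system differences are synthetic.
import numpy as np
from scipy.stats import ks_2samp
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast(img, levels=64):
    z = (img - img.mean()) / img.std()                        # z-score normalization
    q = ((z - z.min()) / (np.ptp(z) + 1e-9) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

rng = np.random.default_rng(1)
system_a = [glcm_contrast(rng.normal(100, 10, (128, 128))) for _ in range(5)]
system_b = [glcm_contrast(rng.normal(105, 12, (128, 128))) for _ in range(5)]

stat, p = ks_2samp(system_a, system_b)
print("robust to system differences" if p > 0.05 else "system effect detected", round(p, 3))
```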
Quantitative evaluation of automatic methods for lesions detection in breast ultrasound images
Author(s):
Karem D. Marcomini;
Homero Schiabel;
Antonio Adilton O. Carneiro
Show Abstract
Ultrasound (US) is a useful diagnostic tool to distinguish benign from malignant breast masses, providing more detailed evaluation in dense breasts. Due to the subjectivity in image interpretation, computer-aided diagnosis (CAD) schemes have been developed, extending the mammography analysis process to include ultrasound images as complementary exams. As one of the most important tasks in the evaluation of these images is mass detection and the interpretation of mass contours, automated segmentation techniques have been investigated in order to determine a suitable procedure to perform such an analysis. Thus, the main goal of this work is to investigate the effect of some processing techniques used to provide information on the determination of suspicious breast lesions as well as their accurate boundaries in ultrasound images. In the tests, 80 phantom and 50 clinical ultrasound images were preprocessed, and 5 segmentation techniques were tested. Using quantitative evaluation metrics, the results were compared to a reference image delineated by an experienced radiologist. A self-organizing map artificial neural network provided the most relevant results, demonstrating high accuracy and a low error rate in representing the lesions, and was therefore adopted as the segmentation process for US images in our CAD scheme under test.
A clinically viable capsule endoscopy video analysis platform for automatic bleeding detection
Author(s):
Steven Yi;
Heng Jiao;
Jean Xie;
Peter Mui;
Jonathan A. Leighton;
Shabana Pasha;
Lauri Rentz;
Mahmood Abedi
Show Abstract
In this paper, we present a novel and clinically valuable software platform for automatic bleeding detection in the gastrointestinal (GI) tract from Capsule Endoscopy (CE) videos. Typical CE videos of the GI tract run about 8 hours and are manually reviewed by physicians to locate diseases such as bleedings and polyps. As a result, the process is time consuming and is prone to missed findings. While researchers have made efforts to automate this process, no clinically acceptable software is available on the market today. Working with our collaborators, we have developed a clinically viable software platform called GISentinel for fully automated GI tract bleeding detection and classification. Major functional modules of the SW include: the innovative graph based NCut segmentation algorithm, the unique feature selection and validation method (e.g. illumination invariant features, color independent features, and symmetrical texture features), and the cascade SVM classification for handling various GI tract scenes (e.g. normal tissue, food particles, bubbles, fluid, and specular reflection). Initial evaluation of the SW has shown a zero missed-detection rate for bleeding instances and a 4.03% false alarm rate. This work is part of our innovative 2D/3D based GI tract disease detection software platform. While the overall SW framework is designed for intelligent finding and classification of major GI tract diseases such as bleeding, ulcer, and polyp from CE videos, this paper focuses on the automatic bleeding detection functional module.
A method for quickly and exactly extracting hepatic vein
Author(s):
Qing Xiong;
Rong Yuan;
Luyao Wang;
Yanchun Wang;
Zhen Li;
Daoyu Hu;
Qingguo Xie
Show Abstract
Providing detailed and accurate information about the hepatic vein (HV) is of vital importance for liver surgery planning, such as pre-operative planning of living donor liver transplantation (LDLT). Due to the different blood flow rates of the intra-hepatic vascular systems and the restrictions of the CT scan, it is common that the HV and the hepatic portal vein (HPV) are both filled with contrast medium during the scan and both appear with high intensity in the hepatic venous phase images. As a result, the HV segmentation result obtained from the hepatic venous phase images is always contaminated by the HPV, which makes accurate HV modeling difficult. In this paper, we propose a method for quick and accurate HV extraction. Based on the topological structure of the intra-hepatic vessels, we analyzed the anatomical features of the HV and the HPV. According to this analysis, three conditions were presented to identify the nodes that connect the HV with the HPV in the topological structure, and thus to distinguish the HV from the HPV. The method takes less than one minute to extract the HV and provides a correct and detailed HV model even with variations in the vessels. Evaluated by two experienced radiologists, the accuracy of the HV model obtained from our method is over 97%. In future work, we will extend this work to a comprehensive clinical evaluation and apply the method to actual LDLT surgical planning.
A dimension reduction strategy for improving the efficiency of computer-aided detection for CT colonography
Author(s):
Bowen Song;
Guopeng Zhang;
Huafeng Wang;
Wei Zhu;
Zhengrong Liang
Show Abstract
Various types of features, e.g., geometric features, texture features, projection features etc., have been introduced for
polyp detection and differentiation tasks via computer aided detection and diagnosis (CAD) for computed tomography
colonography (CTC). Although these features together cover more information in the data, some of them are statistically highly related to others, which makes the feature set redundant and burdens the computation task of CAD. In this paper, we propose a new dimension reduction method which combines hierarchical clustering and principal component analysis (PCA) for the false-positive (FP) reduction task. First, we group all the features based on their similarity using hierarchical clustering, and then PCA is employed within each group. Different numbers of principal components are selected from each group to form the final feature set. A support vector machine is used to perform the classification. The results show that when three principal components were chosen from each group we achieved an area under the receiver operating characteristic curve of 0.905, which is as high as that of the original feature set. Meanwhile, the computation time is reduced by 70% and the feature set size is reduced by 77%. It can be concluded that the proposed method captures the most important information in the feature set and that the classification accuracy is not affected after the dimension reduction. The result is promising, and further investigations, such as automatic threshold setting, are worthwhile and in progress.
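The grouping-then-PCA idea can be sketched in a few lines. The clustering criterion (1 − |correlation|), the number of groups and the number of components per group below are illustrative choices on synthetic data, not the authors' settings.

```python
# Minimal sketch: cluster correlated features, then keep a few principal
# components per cluster. Candidate features are synthetic placeholders.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                               # 500 candidates x 40 features
X[:, 20:] = X[:, :20] + 0.1 * rng.normal(size=(500, 20))     # add correlated copies

# Cluster the features using 1 - |correlation| as a feature-to-feature distance.
dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(dist, 0.0)
groups = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=5, criterion="maxclust")

reduced = []
for g in np.unique(groups):                                  # a few PCs per feature group
    members = X[:, groups == g]
    n_pc = min(3, members.shape[1])
    reduced.append(PCA(n_components=n_pc).fit_transform(members))
X_reduced = np.hstack(reduced)
print(X.shape, "->", X_reduced.shape)
```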
Supine and prone registration of the colon for CT colonography based on dynamic programming technique
Author(s):
Masahiro Oda;
Eiichiro Fukano;
Takayuki Kitasaka;
Tetsuji Takayama M.D.;
Hirotsugu Takabatake M.D.;
Masaki Mori M.D.;
Hiroshi Natori M.D.;
Shigeru Nawano M.D.;
Kensaku Mori
Show Abstract
This paper proposes a registration method for the colon in CT images taken in two positions. CT colonography-based colon diagnosis using 3D CT images taken in supine and prone positions is time-consuming because a physician has to refer to many CT images for the diagnosis of a patient. Automated synchronization of the observed areas in the two positions is required to reduce the load on physicians. This paper proposes a novel registration method of the colon in the two positions to synchronize the observed areas. The registration process utilizes sharply curved points of the colon centerlines and haustral folds as landmarks. A dynamic programming technique finds correspondences between the haustral fold landmarks in the two positions. Experimental results using six pairs of CT images showed that the mean registration error was 4.70 mm.
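The dynamic-programming correspondence step can be illustrated with a small alignment routine over two sequences of normalized fold positions along the centerline; the cost function and landmark values below are invented for illustration and are not the authors' formulation.

```python
# Minimal sketch: DP alignment of two 1-D landmark sequences (e.g. normalized
# centerline positions of haustral folds in supine and prone scans).
import numpy as np

def dp_match(a, b):
    """Monotonic alignment of two landmark sequences with skips allowed."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],   # match the two folds
                                 cost[i - 1, j],       # skip a fold in sequence a
                                 cost[i, j - 1])       # skip a fold in sequence b
    # Backtrack to recover the matched pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        d = abs(a[i - 1] - b[j - 1])
        if np.isclose(cost[i, j], d + cost[i - 1, j - 1]):
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif np.isclose(cost[i, j], d + cost[i - 1, j]):
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

supine = [0.05, 0.12, 0.20, 0.33, 0.41]          # illustrative fold positions
prone = [0.06, 0.13, 0.22, 0.31, 0.40, 0.47]
print(dp_match(supine, prone))
```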
Comparison of texture models for efficient ultrasound image retrieval
Author(s):
Maggi Bansal;
Vipul Sharma;
Sukhwinder Singh
Show Abstract
Due to the availability of inexpensive and easily accessible image capturing devices, the size of digital image collections is increasing rapidly. Thus, there is a need for efficient access methods and retrieval tools to search, browse and retrieve images from large multimedia repositories. More specifically, researchers have been working on different ways of retrieving images based on their actual content. In particular, Content Based Image Retrieval (CBIR) systems have attracted considerable research and commercial interest in recent years. In CBIR, the visual features characterizing the image content are color, shape and texture. Currently, texture is used to quantify the content of medical images as it is the most prominent feature, containing information about the spatial distribution of gray levels and variations in brightness. Various texture models, such as Haralick's Spatial Gray Level Co-occurrence Matrix (SGLCM), Gray Level Difference Statistics (GLDS), First-order Statistics (FoS), Statistical Feature Matrix (SFM), Laws' Texture Energy Measures (TEM), Fractal features and Fourier Power Spectrum (FPS) features, exist in the literature. Each of these models visualizes texture in a different way. Retrieval performance depends upon the choice of texture algorithm. Unfortunately, there is no texture model known to work best for encoding the texture properties of liver ultrasound images or retrieving the most similar images. An experimental comparison of different texture models for Content Based Medical Image Retrieval (CBMIR) is presented in this paper. For the experiments, a liver ultrasound image database is used and the retrieval performance of the various texture models is analyzed in detail. The paper concludes with recommendations on which texture model performs better for liver ultrasound images. Interestingly, FPS and SGLCM-based Haralick features perform well for liver ultrasound retrieval and thus can be recommended as a simple baseline for such images.
Volumetric detection of flat lesions for minimal-preparation dual-energy CT colonography
Author(s):
Janne J. Näppi;
Se Hyung Kim;
Hiroyuki Yoshida
Show Abstract
Computer-aided detection (CAD) systems for computed tomographic colonography (CTC) tend to miss many flat lesions. We developed a volumetric method for automated detection of lesions with dual-energy CTC (DE-CTC). The target region for the detection is defined in terms of a distance transform of the colonic lumen. To detect lesions, volumetric shape features are calculated at the image scale defined by the thickness of the target region. False-positive (FP) detections are reduced by use of a random-forest classifier based on shape, texture, and dual-energy features of the detected lesion candidates. For pilot evaluation, 37 patients were examined by use of DE-CTC with a reduced one-day bowel preparation. The CAD scheme was trained with the DE-CTC data of 12 patients, and it was tested with the DE-CTC data of 25 patients. The detection sensitivity was assessed at multiple thicknesses of the target region. There were 39 lesions ≥6 mm in 15 patients, including 8 flat lesions ≥10 mm. The thickness of the target region had a statistically significant effect on the detection sensitivity. At the optimal thickness of the target region, the per-lesion and per-patient sensitivities for flat lesions were 100% at a median of 4 FPs per patient.
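A rough outline of two ingredients named above, a distance-transform-defined target region and a random-forest false-positive filter, is sketched below on synthetic data; the thresholds, feature set and lumen mask are placeholders, not the paper's configuration.

```python
# Minimal sketch: a thin target band from the lumen distance transform, plus a
# random-forest FP filter over candidate features. All data are synthetic.
import numpy as np
from scipy.ndimage import distance_transform_edt
from sklearn.ensemble import RandomForestClassifier

lumen = np.zeros((64, 64, 64), bool)
lumen[16:48, 16:48, 16:48] = True                       # placeholder colonic lumen mask
dist = distance_transform_edt(lumen)
target_region = (dist > 0) & (dist <= 3.0)              # thin layer near the colonic wall
print(target_region.sum(), "voxels in the detection target region")

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 12))                   # shape/texture/dual-energy features
is_true_lesion = rng.integers(0, 2, 200)                # placeholder candidate labels
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, is_true_lesion)
keep = rf.predict_proba(features)[:, 1] > 0.5           # retained detections
```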
A shape constrained MAP-EM algorithm for colorectal segmentation
Author(s):
Huafeng Wang;
Lihong Li;
Bowen Song;
Fangfang Han;
Zhengrong Liang
Show Abstract
The task of effectively segmenting colon areas in CT images is an important area of interest in the medical imaging field.
The ability to distinguish the colon wall in an image from the background is a critical step in several approaches for
achieving larger goals in automated computer-aided diagnosis (CAD). The related task of polyp detection, the ability to
determine which objects or classes of polyps are present in a scene, also relies on colon wall segmentation. When
modeling each tissue type as a conditionally independent Gaussian distribution, the tissue mixture fractions in each voxel can be estimated, via the modeled unobservable random processes of the underlying tissue types, by a maximum a posteriori expectation-maximization (MAP-EM) algorithm in an iterative manner. This paper presents, based on the assumption that the partial volume effect (PVE) can be fully described by a tissue mixture model, a theoretical solution to the MAP-EM segmentation algorithm. However, the MAP-EM algorithm may miss some small regions which also belong to the colon wall. Combining this with a shape-constrained model, we present an improved algorithm which is able to merge similar regions and preserve fine structures. Experimental results show that the new approach can refine jagged boundaries and achieve better results than our previously presented MAP-EM algorithm alone.
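As a simplified stand-in for the tissue-mixture estimation, the sketch below fits a plain EM Gaussian mixture to voxel intensities and reads the posterior responsibilities as soft tissue fractions; it includes neither the MAP prior nor the shape constraint that are the contributions of the paper.

```python
# Minimal sketch: EM Gaussian mixture over voxel intensities; posterior
# responsibilities serve as soft tissue fractions. Intensities are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
air = rng.normal(-900, 40, 3000)                 # synthetic CT intensities (HU)
soft = rng.normal(30, 25, 3000)
tagged = rng.normal(400, 60, 3000)
voxels = np.concatenate([air, soft, tagged]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(voxels)
fractions = gmm.predict_proba(voxels)            # per-voxel tissue mixture fractions
print(np.sort(gmm.means_.ravel()))               # recovered tissue intensity means
```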
Optic disk localization by a robust fusion method
Author(s):
Jielin Zhang;
Fengshou Yin;
Damon W. K. Wong;
Jiang Liu;
Mani Baskaran;
Ching-Yu Cheng;
Tien Yin Wong
Show Abstract
The optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular
diseases such as glaucoma, diabetic retinopathy and age-related macula degeneration. In this paper, we propose an
intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches
are developed to detect the location of the optic disk separately. The first method is the maximum vessel crossing
method, which finds the region with the largest number of blood vessel crossing points. The second one is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal regions-of-interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary result on the STARE database reaches 81.5% accuracy, while a higher result of 99% is obtained for the ORIGAlight database. The proposed method outperforms each individual approach as well as a state-of-the-art method which utilizes an intensity-based approach. These results demonstrate a high potential for this method to be used in retinal CAD systems.
Region-based multi-step optic disk and cup segmentation from color fundus image
Author(s):
Di Xiao;
Jane Lock;
Javier Moreno Manresa;
Janardhan Vignarajan;
Mei-Ling Tay-Kearney;
Yogesan Kanagasingam
Show Abstract
The retinal optic cup-to-disk ratio (CDR) is one of the important indicators of glaucomatous neuropathy. In this paper, we
propose a novel multi-step 4-quadrant thresholding method for optic disk segmentation and a multi-step temporal-nasal segmenting method for optic cup segmentation based on blood vessel inpainted HSL lightness images and green images. The performance of the proposed methods was evaluated on a group of color fundus images and compared with the manual outlining results from two experts. Dice scores of detected disk and cup regions between the auto and manual results were computed and compared. Vertical CDRs were also compared among the three results. The preliminary experiment has demonstrated the robustness of the method for automatic optic disk and cup segmentation and its potential value for clinical application.
Computerized detection of retina blood vessel using a piecewise line fitting approach
Author(s):
Suicheng Gu;
Yi Zhen;
Ningli Wang;
Jiantao Pu
Show Abstract
Retinal vessels are important landmarks in fundus images, and an accurate segmentation of the vessels may be useful for automated screening for several eye diseases or systemic diseases, such as diabetes. A new method is presented for the automated segmentation of blood vessels in two-dimensional color fundus images. First, a coherence filter followed by a mean filter is applied to the green channel of the image. The green channel is selected because the vessels have maximal contrast in this channel. The coherence filter enhances the line strength of the original image and the mean filter discards the intensity variance among different regions. Since the vessels are darker than the surrounding tissues depicted in the image, pixels with small intensity are then retained as points of interest (POI). A new line fitting algorithm is proposed to identify line-like structures in each local circle of the POI. The proposed line fitting method is less sensitive to noise than least-squares fitting. The fitted lines with higher scores are regarded as vessels. To evaluate the performance of the proposed method, the publicly available DRIVE database with 20 test images is selected for experiments. The mean accuracy on these images is 95.7%, which is comparable to the state of the art.
Automatic conjunctival provocation test combining Hough circle transform and self-calibrated color measurements
Author(s):
Suman Raj Bista;
István Sárándi;
Serkan Dogan;
Anatoli Astvatsatourov;
Ralph Mösges;
Thomas M. Deserno
Show Abstract
A computer-aided diagnosis approach is developed for the assessment of allergic rhinitis/rhinoconjunctivitis by measuring the relative redness of the sclera under application of an allergen solution. Images of the patient's eye are taken using a commercial digital camera. The iris is robustly localized using a gradient-based Hough circle transform. From the center of the pupil, the region of interest within the sclera is extracted using geometric, anatomy-based a priori information. The red color pixels are extracted by thresholding in the hue, saturation and value color space. Then, redness is measured by taking the mean of the saturation projected onto zero hue. Evaluation is performed with 98 images taken from 14 subjects, 8 responders and 6 non-responders, who were classified by an experienced otorhinolaryngologist. Provocation is performed with 100, 1,000 and 10,000 AU/ml allergen solution and normalized to control images without provocation. The evaluation yields relative redness values of 1.01, 1.05, 1.30 and 0.95, 1.00, 0.96 for responders and non-responders, respectively. Variations in the redness measurements were analyzed under alterations of the parameters of the image processing chain, demonstrating the stability and robustness of our approach. The results indicate that the method improves on visual inspection and may be suitable as a reliable surrogate endpoint in controlled clinical trials.
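The two image-processing ingredients, Hough-circle iris localization and an HSV-based redness measure, can be prototyped as below. The synthetic eye image, the scleral ROI geometry and the hue threshold are assumptions for illustration only, not the calibrated pipeline of the paper.

```python
# Minimal sketch: locate an iris-like disk with the Hough circle transform and
# measure redness (mean saturation of near-red hues) in a simplified scleral ROI.
import numpy as np
from skimage import color, feature
from skimage.transform import hough_circle, hough_circle_peaks

rgb = np.full((200, 300, 3), 0.9)                      # placeholder eye photograph
yy, xx = np.mgrid[:200, :300]
rgb[(yy - 100) ** 2 + (xx - 150) ** 2 < 40 ** 2] = 0.2  # dark iris-like disk

edges = feature.canny(color.rgb2gray(rgb), sigma=2)
radii = np.arange(30, 60, 2)
accum = hough_circle(edges, radii)
_, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)

hsv = color.rgb2hsv(rgb)
# Simplified scleral ROI: a small band temporal to the detected iris.
x0 = int(min(cx[0] + r[0] + 5, rgb.shape[1] - 20))
roi = hsv[max(int(cy[0]) - 20, 0):int(cy[0]) + 20, x0:x0 + 20]
reddish = np.abs(((roi[..., 0] + 0.5) % 1.0) - 0.5) < 0.08   # hue close to red
redness = roi[..., 1][reddish].mean() if reddish.any() else 0.0
print("detected iris radius:", int(r[0]), "redness:", round(float(redness), 3))
```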
Training set optimization and classifier performance in a top-down diabetic retinopathy screening system
Author(s):
J. Wigdahl;
C. Agurto;
V. Murray;
S. Barriga;
P. Soliz
Show Abstract
Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has been shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machines (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) when the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and kNN had the highest average AUC. Lower standard deviations and a flattening of the AUC curve give evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
White matter injury detection in neonatal MRI
Author(s):
Irene Cheng;
Nasim Hajari;
Amirhossein Firouzmanesh;
Rui Shen;
Steven Miller;
Ken Poskitt;
Anup Basu
Show Abstract
Early detection of white matter injury in premature newborns can facilitate timely clinical treatment, reducing the potential risk of later developmental deficits. It was reported that more than 5% of newborns in British Columbia, Canada, were premature, among which 5-10% exhibited major motor deficits and 25-50% exhibited significant developmental and visual deficits. With the advancement of computer-assisted detection systems, it is possible to automatically identify white matter injuries, which are found inside the grey matter region of the brain. Atlas registration has been suggested in the literature to distinguish grey matter from the soft tissues inside the skull. However, our subjects are premature newborns delivered at 24 to 32 weeks of gestation. During this period, the grey matter undergoes rapid changes and differs significantly from one subject to another. Besides, not all detected white spots represent injuries. Additional neighborhood information and expert input are required for verification. In this paper, we propose a white matter feature identification system for premature newborns, which is composed of several steps: (1) candidate white matter segmentation; (2) feature extraction from candidates; (3) validation with data obtained at a later stage on the children; and (4) feature confirmation for automated detection. The main challenge of this work lies in segmenting white matter injuries from noisy and low-resolution data. Our approach integrates image fusion and contrast enhancement together with a fuzzy segmentation technique to achieve promising results. Other applications, such as brain tumor and intra-ventricular haemorrhage detection, can also benefit from our approach.
Risk assessment of sleeping disorder breathing based on upper airway centerline evaluation
Author(s):
Noura Alsufyani;
Rui Shen;
Irene Cheng;
Paul Major
Show Abstract
One of the most important breathing disorders in childhood is obstructive sleep apnea syndrome, which affects 2–3% of children, and the reported failure rate of surgical treatment is as high as 54%. A possible reason for respiratory complications is reduced dimensions of the upper airway, which are further compressed when muscle tone is decreased during sleep. In this study, we use cone-beam computed tomography (CBCT) to assess the location or cause of the airway obstruction. To date, all studies analyzing the upper airway in subjects with sleep disordered breathing have been based on linear, area, or volumetric measurements, which are global computations and can easily overlook local significance. Skeletonization was initially introduced as a 3D modeling technique by which representative medial points of a model are extracted to generate centerlines for evaluation. Although centerlines have been commonly used in guiding surgical procedures, our novelty lies in comparing their geometric properties before and after surgery. We apply 3D data refinement, registration and projection steps to quantify and localize the geometric deviation in target airway regions. Through cross validation with corresponding subjects' therapy data, we expect to quantify the tolerance threshold beyond which reduced dimensions of the upper airway are not clinically significant. The ultimate goal is to utilize this threshold to identify patients at risk of complications. The outcome of this research will also help establish a predictive model for training and for estimating treatment success based on airway measurements prior to intervention. Preliminary results demonstrate the feasibility of our approach.
Statistical shape modeling of human cochlea: alignment and principal component analysis
Author(s):
Anton A. Poznyakovskiy;
Thomas Zahnert;
Björn Fischer;
Nikoloz Lasurashvili;
Yannis Kalaidzidis;
Dirk Mürbe
Show Abstract
The modeling of the cochlear labyrinth in living subjects is hampered by insufficient resolution of available clinical
imaging methods. These methods usually provide resolutions coarser than 125 μm. This is too crude to record the position of the basilar membrane and, as a result, to distinguish even the scala tympani from the other scalae. This problem could be avoided by means of atlas-based segmentation. Specimens can endure higher radiation loads and consequently provide better-resolved images. The resulting surface can be used as the seed for atlas-based segmentation. To serve this purpose, we have developed a statistical shape model (SSM) of the human scala tympani based on segmentations obtained from 10 μCT image stacks. After segmentation, we aligned the resulting surfaces using Procrustes alignment. This algorithm was slightly modified to accommodate individual models with nodes which do not necessarily correspond to salient features and which vary in number between models. We established correspondence by mutual proximity between nodes. Rather than using the standard Euclidean norm, we applied an alternative logarithmic norm to improve outlier treatment. The minimization was done using the BFGS method. We also split the surface nodes along an octree to reduce computation cost. Subsequently, we performed a principal component analysis of the training set with the Jacobi eigenvalue algorithm. We expect the resulting method to provide not only a better understanding of interindividual variations in cochlear anatomy, but also a step towards individual models for pre-operative diagnostics prior to cochlear implant insertion.
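The alignment-plus-PCA backbone of a statistical shape model can be sketched as follows, assuming node correspondence is already available; the proximity-based matching, logarithmic norm and octree splitting described in the abstract are not reproduced, and scipy's Procrustes routine includes scale normalization in addition to rotation and translation.

```python
# Minimal sketch: Procrustes (similarity) alignment of corresponding nodes,
# followed by PCA to obtain shape modes. Training shapes are synthetic.
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
reference = rng.normal(size=(100, 3))                        # placeholder node coordinates
shapes = [reference + 0.05 * rng.normal(size=(100, 3)) for _ in range(10)]

aligned = [procrustes(reference, s)[1] for s in shapes]      # align each shape to the reference
X = np.array([a.ravel() for a in aligned])                   # one row per training shape

pca = PCA(n_components=5).fit(X)                             # principal shape modes
print(pca.explained_variance_ratio_.round(3))
```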
Survival time prediction of patients with glioblastoma multiforme tumors using spatial distance measurement
Author(s):
Mu Zhou;
Lawrence O. Hall;
Dmitry B. Goldgof;
Robert J. Gillies;
Robert A. Gatenby
Show Abstract
Regional variations in tumor blood flow and necrosis are commonly observed in cross-sectional imaging of clinical cancers. We hypothesize that radiologically-defined regional variations in tumor characteristics can be used to define distinct “habitats” that reflect the underlying evolutionary dynamics. Here we present an experimental framework to extract spatially-explicit variations in tumor features (habitats) from multiple MRI sequences performed on patients with Glioblastoma Multiforme (GBM). The MRI sequences consist of post-gadolinium T1-weighted, FLAIR, and T2-weighted images from The Cancer Genome Atlas (TCGA). Our strategy is to identify spatially distinct, radiologically-defined intratumoral habitats by characterizing each small tumor region based on its combined properties in 3 different MRI sequences. Initial tumor identification was performed by manually drawing a mask on a T1-weighted post-contrast image slice. The extracted tumor was segmented into an enhancing and a non-enhancing region by the Otsu segmentation algorithm, followed by a mask mapping procedure onto the corresponding FLAIR and T2-weighted images. Otsu thresholding was then applied to the FLAIR and T2 images separately. We find that tumor heterogeneity measured through Distance Features (DF) can be used as a strong predictor of survival time. In an initial cohort of 16 cases, slowly progressing tumors have lower DF values (are less heterogeneous) compared to those with fast progression and short survival times.
Automated segmentation of brain ventricles in unenhanced CT of patients with ischemic stroke
Author(s):
Xiaohua Qian;
Jiahui Wang;
Qiang Li
Show Abstract
We are developing an automated method for the detection and quantification of ischemic stroke in computed tomography (CT). Ischemic stroke regions are often connected to the brain ventricles; therefore, ventricular segmentation is an important and difficult task when stroke is present, and it is the topic of this study. We first corrected the inclination angle of the brain by aligning the midline of the brain with the vertical centerline of the slice. We then estimated the intensity range of the ventricles by use of the k-means method. Two segmentations of the ventricles were obtained by use of a thresholding technique. One segmentation contains the ventricles and nearby stroke; the other mainly contains the ventricles. Therefore, the stroke regions can be extracted and removed using an image difference technique. An adaptive template-matching algorithm was employed to identify objects in the aforementioned segmentation. The largest connected component was identified and considered as the ventricles. We applied our method to 25 unenhanced CT scans with stroke. Our method achieved an average Dice index, sensitivity, and specificity of 95.1%, 97.0%, and 99.8% for the entire ventricular regions. The experimental results demonstrate that the proposed method has great potential for the detection and quantification of stroke and other neurologic diseases.
Multi-atlas-based segmentation of the parotid glands of MR images in patients following head-and-neck cancer radiotherapy
Author(s):
Guanghui Cheng;
Xiaofeng Yang;
Ning Wu;
Zhijian Xu;
Hongfu Zhao;
Yuefeng Wang;
Tian Liu
Show Abstract
Xerostomia (dry mouth), resulting from radiation damage to the parotid glands, is one of the most common and
distressing side effects of head-and-neck cancer radiotherapy. Recent MRI studies have demonstrated that the volume
reduction of the parotid glands is an important indicator of radiation damage and xerostomia. In the clinic, parotid-volume evaluation is exclusively based on physicians’ manual contours. However, manual contouring is time-consuming and prone to inter-observer and intra-observer variability. Here, we report a fully automated multi-atlas-based registration method for parotid-gland delineation in 3D head-and-neck MR images. The multi-atlas segmentation utilizes a hybrid deformable image registration to map the target subject to multiple patients’ images, applies the transformation to the corresponding segmented parotid glands, and subsequently uses the multiple patient-specific pairs (head-and-neck MR image and transformed parotid-gland mask) to train a support vector machine (SVM) to reach a consensus segmentation of the parotid gland of the target subject. This segmentation algorithm was tested with head-and-neck MRIs of 5 patients following radiotherapy for nasopharyngeal cancer. The average parotid-gland volume overlap between the automatic segmentations and the physicians’ manual contours was 85%. In conclusion, we have demonstrated the feasibility of an automatic multi-atlas-based segmentation algorithm to segment parotid glands in head-and-neck MR images.
Automated detection of abnormalities in paranasal sinus on dental panoramic radiographs by using contralateral subtraction technique based on mandible contour
Author(s):
Shintaro Mori;
Takeshi Hara;
Motoki Tagami;
Chicako Muramatsu;
Takashi Kaneda;
Akitoshi Katsumata;
Hiroshi Fujita
Show Abstract
Inflammation in the paranasal sinus sometimes becomes chronic and requires long-term treatment. The finding is important for early treatment, but general dentists may not recognize it because they focus on teeth
treatments. The purpose of this study was to develop a computer-aided detection (CAD) system for the inflammation in
the paranasal sinus on dental panoramic radiographs (DPRs) using the mandible contour, and to demonstrate the potential usefulness of the CAD system by means of receiver operating characteristic analysis. The detection scheme consists of 3 steps: 1) contour extraction of the mandible, 2) contralateral subtraction, and 3) automated detection. The Canny operator and an active contour model were applied to extract the edge in the first step. In the subtraction step, the right region of the extracted contour image was flipped to compare it with the left region. Mutual information between the two selected regions was computed to estimate the shift parameters for image registration. The subtraction images were generated based on the shift parameters. Rectangular regions of the left and right paranasal sinuses on the subtraction image were determined based on the size of the mandible. The abnormal side was determined by taking the difference between the averages of each region. Thirteen readers responded to all cases without and with the automated results. The average AUC of all readers increased from 0.69 to 0.73 with statistical significance (p=0.032) when the automated detection results were provided. In conclusion, the automated detection method based on the contralateral subtraction technique improves readers' interpretation performance for inflammation in the paranasal sinus on DPRs.
Recognition of upper airway and surrounding structures at MRI in pediatric PCOS and OSAS
Author(s):
Yubing Tong;
J. K. Udupa;
D. Odhner;
Sanghun Sin;
Raanan Arens
Show Abstract
Obstructive Sleep Apnea Syndrome (OSAS) is common in obese children, with the risk being 4.5-fold that of normal control subjects. Polycystic Ovary Syndrome (PCOS) has recently been shown to be associated with OSAS, which may further lead to significant cardiovascular and neuro-cognitive deficits. We are investigating image-based biomarkers to understand the architectural and dynamic changes in the upper airway and the surrounding hard and soft tissue structures via MRI in obese teenage children to study OSAS. At previous SPIE conferences, we presented methods underlying Fuzzy Object Models (FOMs) for Automatic Anatomy Recognition (AAR) based on CT images of the thorax and the abdomen. The purpose of this paper is to demonstrate that the AAR approach is applicable to a different body region and image modality combination, namely the study of upper airway structures via MRI. FOMs were built hierarchically, the smaller sub-objects forming the offspring of larger parent objects. FOMs encode the uncertainty and variability present in the form and relationships among the objects over a study population. In total, 11 basic objects (17 including composite objects) were modeled. Automatic recognition of the best pose of the FOMs in a given image was implemented using four methods: a one-shot method that does not require search, and three search-based methods, namely a Fisher Linear Discriminant (FLD) method, a b-scale energy optimization strategy, and an optimum threshold recognition method. In all, 30 multi-fold cross validation experiments based on 15 patient MRI data sets were carried out to assess the accuracy of recognition. The results indicate that the objects can be recognized with an average location error of less than 5 mm, or 2-3 voxels. The iterative relative fuzzy connectedness (IRFC) algorithm was then adopted for the delineation of the target organs based on the recognition results. The delineation results showed an overall FP and TP volume fraction of 0.02 and 0.93.
An optimal set of landmarks for metopic craniosynostosis diagnosis from shape analysis of pediatric CT scans of the head
Author(s):
Carlos S. Mendoza;
Nabile Safdar;
Emmarie Myers;
Tanakorn Kittisarapong;
Gary F. Rogers;
Marius George Linguraru
Show Abstract
Craniosynostosis (premature fusion of skull sutures) is a severe condition present in one of every 2000 newborns. Metopic craniosynostosis, accounting for 20-27% of cases, is diagnosed qualitatively in terms of skull shape abnormality, a subjective call of the surgeon. In this paper we introduce a new quantitative diagnostic feature for metopic craniosynostosis derived optimally from shape analysis of CT scans of the skull. We built a robust shape analysis pipeline that is capable of obtaining local shape differences in comparison to normal anatomy. Spatial normalization using 7-degree-of-freedom registration of the base of the skull is followed by a novel bone labeling strategy based on graph-cuts according to labeling priors. The statistical shape model built from 94 normal subjects allows matching a patient's anatomy to its most similar normal subject. Subsequently, the computation of local malformations from a normal subject allows characterization of the points of maximum malformation on each of the frontal bones adjacent to the metopic suture, and on the suture itself. Our results show that the malformations at these locations vary significantly (p<0.001) between abnormal/normal subjects and that an accurate diagnosis can be achieved using linear regression from these automatic measurements with an area under the curve for the receiver operating characteristic of 0.97.
Characterization of T2 hyperintensity lesions in patients with mild traumatic brain injury
Author(s):
Jesus J. Caban;
Savannah A. Green;
Gerard Riedy
Show Abstract
Mild traumatic brain injury (TBI) is often an invisible injury that is poorly understood and its sequelae can be difficult to diagnose. Recent neuroimaging studies on patients diagnosed with mild TBI (mTBI) have demonstrated an increase in hyperintense brain lesions on T2-weighted MR images. This paper presents an in-depth analysis of the multi-modal and morphological properties of T2 hyperintensity lesions among service members diagnosed with mTBI. A total of 790
punctate T2 hyperintensity lesions from 89 mTBI subjects were analyzed and used to characterize the lesions based on different quantitative measurements. Morphological analysis shows that, on average, T2 hyperintensity lesions have volumes of 23 mm3 (±24.75), a roundness measure of 0.83 (±0.08) and an elongation of 7.90 (±2.49). Lesions in the frontal lobe were significantly more elongated than those in other areas of the brain.
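Morphology measures of this kind can be reproduced on a labeled lesion mask with regionprops. The sketch below is 2-D for brevity and assumes roundness = 4*pi*area/perimeter^2 and elongation = major/minor axis length, common conventions that may differ from the authors' exact definitions.

```python
# Minimal sketch: per-lesion area, roundness and elongation from a labeled mask.
# The mask and the measure definitions are illustrative assumptions.
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((128, 128), bool)
mask[30:40, 30:60] = True                    # placeholder elongated lesion
mask[80:90, 80:88] = True                    # placeholder compact lesion

for region in regionprops(label(mask)):
    roundness = 4 * np.pi * region.area / (region.perimeter ** 2)
    elongation = region.axis_major_length / region.axis_minor_length
    print(region.label, region.area, round(roundness, 2), round(elongation, 2))
```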
Prediction of the potential clinical outcomes for post-resuscitated patients after cardiac arrest
Author(s):
Sungmin Hong;
Bojun Kwon;
Il Dong Yun;
Sang Uk Lee;
Kyuseok Kim;
Joonghee Kim
Show Abstract
Cerebral injuries after cardiac arrest are serious causes of morbidity. Many previous studies in the medical community have proposed ways to prognosticate the functional recovery of post-resuscitated patients after cardiac arrest, but the validity of the suggested features and the automation of prognostication have not yet been established. This paper presents an automatic classification method which predicts the potential clinical outcomes of post-resuscitated patients who suffered cardiac arrest. The global and local features are adapted from studies in the medical literature. The global features, which consist of the percentage of the partial volume under uniformly increasing thresholds, represent the global tendency of the apparent diffusion coefficient (ADC) values in a DWI. The local features are localized and measured at refined local ADC minima. The local features represent the ischemic change of small areas in the brain. The features are trained and classified by the random forest method, which has been widely used for classification in the machine learning community. The validity of the features is automatically evaluated during the classification process. The proposed method achieved a 0.129 false-positive rate while maintaining a perfect true-positive rate. The area under the curve of the proposed method was 0.9516, which shows the feasibility and robustness of the proposed method.
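The global feature described above, the fraction of brain volume below increasing ADC thresholds, reduces to a short cumulative-histogram computation; the ADC map and the threshold grid below are placeholders, not the values used in the study.

```python
# Minimal sketch: fraction of brain volume with ADC below increasing thresholds.
# The ADC map, brain mask and thresholds are synthetic placeholders.
import numpy as np

def adc_cumulative_fractions(adc_map, brain_mask, thresholds):
    values = adc_map[brain_mask]
    return np.array([(values < t).mean() for t in thresholds])

rng = np.random.default_rng(0)
adc = rng.normal(800, 150, size=(64, 64, 32))      # placeholder ADC map (x1e-6 mm^2/s)
mask = np.ones_like(adc, bool)                     # placeholder brain mask
features = adc_cumulative_fractions(adc, mask, thresholds=np.arange(200, 1200, 100))
print(features.round(3))
```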
A novel approach of computer-aided detection of focal ground-glass opacity in 2D lung CT images
Author(s):
Song Li;
Xiabi Liu;
Ali Yang;
Kunpeng Pang;
Chunwu Zhou;
Xinming Zhao;
Yanfeng Zhao
Show Abstract
Focal Ground-Glass Opacity (fGGO) plays an important role in the diagnosis of lung cancer. This paper proposes a novel approach for detecting fGGOs in 2D lung CT images. The approach consists of two stages: extracting regions of interest (ROIs) and labeling each ROI as fGGO or non-fGGO. In the first stage, we use Otsu thresholding and mathematical morphology to segment the lung parenchyma from lung CT images and extract ROIs within it. In the second stage, a Bayesian classifier is constructed based on Gaussian Mixture Modeling (GMM) of the distribution of visual features of fGGOs to perform ROI identification. The parameters of the classifier are estimated from training data by the discriminative learning method of Max-Min posterior Pseudo-probabilities (MMP). A genetic algorithm is further developed to select compact and discriminative features for the classifier. We evaluated the proposed fGGO detection approach through 5-fold cross-validation experiments on a set of 69 lung CT scans that contain 70 fGGOs. The proposed approach achieves a detection sensitivity of 85.7% at a false positive rate of 2.5 per scan, which demonstrates its effectiveness. In the experiments, we also demonstrate the usefulness of our genetic-algorithm-based feature selection method and the MMP discriminative learning method by comparing them with a no-selection strategy and with Support Vector Machines (SVMs), respectively.
Multimodal 3D PET/CT system for bronchoscopic procedure planning
Author(s):
Ronnarit Cheirsilp;
William E. Higgins
Show Abstract
Integrated positron emission tomography (PET) / computed-tomography (CT) scanners give 3D multimodal data sets of the chest. Such data sets offer the potential for more complete and specific identification of suspect lesions and lymph nodes for lung-cancer assessment. This in turn enables better planning of staging bronchoscopies. The richness of the data, however, makes the visualization and planning process difficult. We present an integrated multimodal 3D PET/CT system that enables efficient region identification and bronchoscopic procedure planning. The system first invokes a series of automated 3D image-processing methods that construct a 3D chest model. Next, the user interacts with a set of interactive multimodal graphical tools that facilitate procedure planning for specific regions of interest (ROIs): 1) an interactive region candidate list that enables efficient ROI viewing in all tools; 2) a virtual PET-CT bronchoscopy rendering with SUV quantitative visualization to give a “fly through" endoluminal view of prospective ROIs; 3) transverse, sagittal, coronal multi-planar reformatted (MPR) views of the raw CT, PET, and fused CT-PET data; and 4) interactive multimodal volume/surface rendering to give a 3D perspective of the anatomy and candidate ROIs. In addition the ROI selection process is driven by a semi-automatic multimodal method for region identification. In this way, the system provides both global and local information to facilitate more specific ROI identification and procedure planning. We present results to illustrate the system's function and performance.
Content-based image retrieval for interstitial lung diseases using classification confidence
Author(s):
Jatindra Kumar Dash;
Sudipta Mukhopadhyay;
Nidhi Prabhakar;
Mandeep Garg;
Niranjan Khandelwal
Show Abstract
A Content Based Image Retrieval (CBIR) system could exploit the wealth of High-Resolution Computed Tomography (HRCT) data stored in the archive by finding similar images to assist radiologists in self-learning and in the differential diagnosis of Interstitial Lung Diseases (ILDs). HRCT findings of ILDs are classified into several categories (e.g. consolidation, emphysema, ground glass, nodular etc.) based on their texture-like appearance. Therefore, the analysis of ILDs is considered a texture analysis problem. Many approaches have been proposed for CBIR of lung images using texture as the primitive visual content. This paper presents a new approach to CBIR for ILDs. The proposed approach makes use of a trained neural network (NN) to find the output class label of the query image. The degree of confidence of the NN classifier is analyzed using a Naive Bayes classifier that dynamically decides the size of the search space to be used for retrieval. The proposed approach is compared with three simple distance-based and one classifier-based texture retrieval approaches. Experimental results show that the proposed technique achieved the highest average percentage precision of 92.60% with the lowest standard deviation of 20.82%.
A new 3D texture feature based computer-aided diagnosis approach to differentiate pulmonary nodules
Author(s):
Fangfang Han;
Huafeng Wang;
Bowen Song;
Guopeng Zhang;
Hongbing Lu;
William Moore;
Hong Zhao;
Zhengrong Liang
Show Abstract
To distinguish malignant pulmonary nodules from benign ones is of much importance in computer-aided diagnosis of lung diseases. Compared to many previous methods based on shape or growth assessment of nodules, the proposed three-dimensional (3D) texture-feature-based approach extracts fifty kinds of 3D textural features from the gray-level, gradient, and curvature co-occurrence matrices and from further derivatives of the nodule volume data. To evaluate the presented approach, the Lung Image Database Consortium public database was downloaded. Each case of the database contains an annotation file, which indicates the diagnosis results from up to four radiologists. To relieve the partial-volume effect, an interpolation process was applied to volume data with an image slice thickness greater than 1 mm, and the downloaded datasets were categorized into five groups to validate the proposed approach: one group with slice thickness below 1 mm, and two thickness ranges (1 mm to 1.25 mm, and greater than 1.25 mm), each split into a group with interpolation and a group without. Since the support vector machine is based on statistical learning theory and aims to learn to predict future data, it was chosen as the classifier to perform the differentiation task. Performance was measured by the area under the curve (AUC) of the receiver operating characteristic. From 284 nodules (122 malignant and 162 benign), the validation experiments reported a mean AUC of 0.9051 with a standard deviation of 0.0397 over 100 randomizations.
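As a small illustration of the classification and evaluation step (not the authors' code), the sketch below trains an RBF-kernel SVM on nodule texture features and averages the AUC over repeated random train/test splits, in the spirit of the 100 randomizations reported; the feature matrix here is a random placeholder.

# Minimal sketch: SVM scoring of nodule texture features with mean AUC over
# repeated random splits. Feature values are random stand-ins, not real data.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import roc_auc_score

def mean_auc(X, y, n_splits=100, test_size=0.3, seed=0):
    aucs = []
    splitter = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=seed)
    for train_idx, test_idx in splitter.split(X):
        clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', probability=True))
        clf.fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]   # probability of malignancy
        aucs.append(roc_auc_score(y[test_idx], scores))
    return np.mean(aucs), np.std(aucs)

# Placeholder feature matrix: 284 nodules x 50 texture features, binary labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(284, 50))
y = np.concatenate([np.ones(122, int), np.zeros(162, int)])
print(mean_auc(X, y, n_splits=10))   # use 100 splits as in the study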
Integrating shape into an interactive segmentation framework
Author(s):
S. Kamalakannan;
B. Bryant;
H. Sari-Sarraf;
R. Long;
S. Antani;
G. Thoma
Show Abstract
This paper presents a novel interactive annotation toolbox which extends a well-known user-steered segmentation
framework, namely Intelligent Scissors (IS). IS, posed as a shortest path problem, is essentially driven by lower level
image based features. All the higher level knowledge about the problem domain is obtained from the user through mouse clicks. The proposed work integrates one higher level feature, namely shape up to a rigid transform, into the IS
framework, thus reducing the burden on the user and the subjectivity involved in the annotation procedure, especially
during instances of occlusions, broken edges, noise and spurious boundaries. The above-mentioned scenarios are
commonplace in medical image annotation applications and, hence, such a tool will be of immense help to the medical
community. As a first step, an offline training procedure is performed in which a mean shape and the corresponding
shape variance are computed by registering training shapes up to a rigid transform in a level-set framework. The user
starts the interactive segmentation procedure by providing a training segment, which is a part of the target boundary. A partial shape matching scheme based on a scale-invariant curvature signature is employed in order to extract shape
correspondences and subsequently predict the shape of the unsegmented target boundary. A ‘zone of confidence’ is
generated for the predicted boundary to accommodate shape variations. The method is evaluated on segmentation of
digital chest x-ray images for lung annotation, which is a crucial step in developing algorithms for tuberculosis screening.
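For context, the image-driven core that the shape term augments is a shortest-path search over pixel costs. The sketch below is a minimal, illustrative live-wire step using Dijkstra's algorithm from SciPy with a simple gradient-based cost; it does not include the paper's shape prediction or zone of confidence.

# Minimal sketch of the baseline Intelligent-Scissors step (no shape prior):
# pixels are graph nodes, edges are cheap along strong image boundaries, and
# Dijkstra's algorithm returns the minimum-cost path between two seed clicks.
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import dijkstra
from scipy.ndimage import sobel

def livewire_path(image, start_rc, end_rc):
    h, w = image.shape
    img = image.astype(float)
    grad = np.hypot(sobel(img, 0), sobel(img, 1))
    cost = 1.0 - grad / (grad.max() + 1e-9)          # low cost along edges
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, data = [], [], []
    for dr, dc in ((0, 1), (1, 0)):                  # 4-connectivity
        src = idx[:h - dr, :w - dc].ravel()
        dst = idx[dr:, dc:].ravel()
        wgt = 0.5 * (cost[:h - dr, :w - dc].ravel() + cost[dr:, dc:].ravel()) + 1e-6
        rows += [src, dst]; cols += [dst, src]; data += [wgt, wgt]
    graph = sparse.csr_matrix(
        (np.concatenate(data), (np.concatenate(rows), np.concatenate(cols))),
        shape=(h * w, h * w))
    s, t = int(idx[start_rc]), int(idx[end_rc])
    _, pred = dijkstra(graph, indices=s, return_predecessors=True)
    path, node = [], t
    while node != s and node >= 0:                   # walk predecessors back to the seed
        path.append(divmod(node, w))
        node = pred[node]
    path.append(divmod(s, w))
    return path[::-1]                                # list of (row, col) along the boundary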
Extraction method of interlobar fissure based on multi-slice CT images
Author(s):
M. Matsuhiro;
H. Suzuki;
Y. Kawata;
N. Niki;
J. Ueno;
Y. Nakano;
E. Ogawa;
S. Muro;
M. Mishima;
H. Ohmatsu;
Noriyuki Moriyama
Show Abstract
Extraction of the interlobar fissures is an active area of study for diagnosis and treatment. However, extraction is problematic in diseased lungs. The proposed method covers diseased cases and consists of three phases, coarse extraction, fine extraction, and correction, using the behavior of a membrane. We applied this method to normal and diseased lung cases. The rate (average ± standard deviation) of the gold standard within 2 mm of the extraction result was 91.2±3.6% for normal cases and 89.7±4.9% for lung diseased cases. The rate of the extraction result within 2 mm of the gold standard was 95.5±3.7% for normal cases and 93.6±4.8% for lung diseased cases.
Automated lung field segmentation in CT images using mean shift clustering and geometrical features
Author(s):
Chanukya Krishna Chama;
Sudipta Mukhopadhyay;
Prabir Kumar Biswas;
Ashis Kumar Dhara;
Mahendra Kasuvinahally Madaiah;
Niranjan Khandelwal
Show Abstract
Lung field segmentation is a prerequisite for the development of automated computer-aided diagnosis systems from chest computed tomography (CT) scans. Intensity-based algorithms such as mean shift (MS) segmentation on CT images are reported in the literature as the best technique for delineation of the lung field in terms of accuracy and speed. However, in the presence of high-density abnormalities, accurate and automated delineation of the lung field becomes difficult. We therefore propose an improved lung field segmentation that uses mean shift clustering followed by geometric-property-based techniques: a lung region of interest (ROI) created from the symmetric centroid map of two normal subjects, a false-positive (FP) reduction module (using eccentricity, solidity, area, and centroid features), and a false-negative (FN) reduction module (using the overlap between clusters from the MS label map and the convex hull of the costal lung). The performance of the proposed algorithm is validated on images obtained from the Lung Image Database Consortium (LIDC) - Image Database Resource Initiative (IDRI) public database of 17 subjects containing nodular patterns and from a local database of 26 subjects containing interstitial lung disease (ILD) patterns. The proposed algorithm achieved a mean Modified Hausdorff Distance (MHD) of 1.47 ± 4.31 mm, Dice Similarity Coefficient (DSC) of 0.9854 ± 0.0288, sensitivity of 0.9771 ± 0.0433, and specificity of 0.9991 ± 0.0014 for 133 normal images from 32 subjects, and an MHD of 6.23 ± 9.00 mm, DSC of 0.8954 ± 0.1498, sensitivity of 0.8468 ± 0.1908, and specificity of 0.9969 ± 0.0061 for 296 abnormal images from 43 subjects.
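To make the first stage concrete, the sketch below clusters the intensities of a single CT slice with scikit-learn's MeanShift and keeps clusters whose centers are air-like; the HU threshold and downsampling factor are assumptions, and the geometric FP/FN reduction modules are not reproduced.

# Minimal sketch of the intensity-based first stage only (illustrative).
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def mean_shift_lung_candidates(ct_slice_hu, subsample=4):
    """Cluster HU values of a CT slice and return a coarse low-intensity (air) mask."""
    small = ct_slice_hu[::subsample, ::subsample].astype(float)
    X = small.reshape(-1, 1)
    bw = estimate_bandwidth(X, quantile=0.1, n_samples=2000)
    ms = MeanShift(bandwidth=bw, bin_seeding=True).fit(X)
    labels = ms.labels_.reshape(small.shape)
    centers = ms.cluster_centers_.ravel()
    # Keep clusters whose mean HU looks like air/lung (-400 HU is an assumed cutoff).
    air_like = np.isin(labels, np.where(centers < -400)[0])
    return air_like          # candidate lung mask on the downsampled grid

# Usage: mask = mean_shift_lung_candidates(slice_in_hounsfield_units)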
Semi-quantitative assessment of pulmonary perfusion in children using dynamic contrast-enhanced MRI
Author(s):
Catalin Fetita;
William E. Thong;
Phalla Ou
Show Abstract
This paper addresses the study of semi-quantitative assessment of pulmonary perfusion acquired from dynamic
contrast-enhanced magnetic resonance imaging (DCE-MRI) in a study population mainly composed of children with
pulmonary malformations. The automatic analysis approach proposed is based on the indicator-dilution theory
introduced in 1954. First, a robust method is developed to segment the pulmonary artery and the lungs from anatomical MRI data, exploiting 2D and 3D mathematical morphology operators. Second, the time-dependent contrast signal of the lung regions is deconvolved by the arterial input function for the assessment of the local hemodynamic system parameters, i.e., mean transit time, pulmonary blood volume and pulmonary blood flow. The discrete deconvolution implemented here is a truncated singular value decomposition (tSVD) method. Parametric images for the entire lungs are generated as additional elements for diagnosis and quantitative follow-up. The preliminary results attest to the feasibility of perfusion quantification in pulmonary DCE-MRI and open an interesting alternative to scintigraphy for this type of evaluation, at least as a preliminary step in the diagnosis, owing to the wide availability of the technique and its non-invasive nature.
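A compact, illustrative version of the tSVD deconvolution step is sketched below under standard indicator-dilution assumptions (not the authors' implementation): the arterial input function is stacked into a lower-triangular convolution matrix, small singular values are discarded, and flow, volume, and mean transit time are read from the recovered residue function. The truncation fraction is an assumed parameter.

# Minimal sketch: c_tissue(t) = F * (AIF conv R)(t). Build the AIF convolution
# matrix, invert it with a truncated SVD, and derive PBF, PBV and MTT.
import numpy as np

def tsvd_deconvolution(c_tissue, aif, dt, trunc_frac=0.15):
    aif = np.asarray(aif, float)
    n = len(aif)
    # Lower-triangular (Toeplitz) discrete convolution matrix of the AIF.
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > trunc_frac * s[0]                 # drop small singular values (regularization)
    s_inv[keep] = 1.0 / s[keep]
    k = Vt.T @ np.diag(s_inv) @ U.T @ np.asarray(c_tissue, float)   # k(t) = flow * residue(t)
    flow = k.max()                               # pulmonary blood flow (PBF)
    volume = k.sum() * dt                        # pulmonary blood volume (PBV)
    mtt = volume / flow if flow > 0 else 0.0     # mean transit time = PBV / PBF
    return flow, volume, mtt, k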
Learning-based image preprocessing for robust computer-aided detection
Author(s):
Laks Raghupathi;
Pandu R. Devarakota;
Matthias Wolf
Show Abstract
Recent studies have shown that low dose computed tomography (LDCT) can be an effective screening tool to
reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists
in such cases. Studies demonstrate that while iterative reconstructions (IR) improve LDCT diagnostic quality, they can degrade CAD performance significantly (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice, owing to the wide variety of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost the performance of an existing CAD system. This not only enhances its robustness but also its applicability in clinical workflows. Our solution consists of applying a suitable pre-processing filter automatically to the given image based on its characteristics. This requires the preparation of ground truth (GT) for choosing an appropriate filter that results in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for the classification scheme, which then uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe and Asia. Though we demonstrated our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.
Curved planar reformation and optimal path tracing (CROP) method for false positive reduction in computer-aided detection of pulmonary embolism in CTPA
Author(s):
Chuan Zhou;
Heang-Ping Chan;
Yanhui Guo;
Jun Wei;
Aamer Chughtai;
Lubomir M. Hadjiiski;
Baskaran Sundaram;
Smita Patel;
Jean W. Kuriakose;
Ella A. Kazerooni
Show Abstract
The curved planar reformation (CPR) method re-samples the vascular structures along the vessel centerline to generate
longitudinal cross-section views. The CPR technique has been commonly used in coronary CTA workstation to facilitate
radiologists’ visual assessment of coronary diseases, but has not yet been used for pulmonary vessel analysis in CTPA
due to the complicated tree structures and the vast network of pulmonary vasculature. In this study, a new curved planar
reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive
(FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary
vessels at prescreening. Based on Dijkstra’s algorithm, the optimal path (OP) is traced from the pulmonary trunk
bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated
using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE
candidate in the CPR volume and combined with the previously developed 9 features to form a new feature space for FP
classification. With IRB approval, CTPA scans of 59 PE cases were retrospectively collected from our patient files (UM set)
and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by
experienced radiologists as reference standard for the UM and PIOPED set, respectively. At a test sensitivity of 80%, the
average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set
was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was
used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically
significant (p<0.05) by JAFROC analysis, indicating that the new features extracted from the CROP method are useful
for FP reduction.
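To illustrate the reformatting idea (not the authors' CROP implementation), the sketch below straightens a vessel by resampling small planes perpendicular to a given centerline with SciPy's map_coordinates; the centerline, plane size, and spacing are assumed inputs, and the Dijkstra path tracing that produces the centerline is omitted.

# Minimal sketch of straightened curved planar reformation (illustrative only).
import numpy as np
from scipy.ndimage import map_coordinates

def straightened_cpr(volume, centerline, half_width=10, spacing=0.5):
    """volume: 3D array; centerline: (N, 3) points in voxel coordinates."""
    cl = np.asarray(centerline, float)
    tangents = np.gradient(cl, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    ref = np.array([0.0, 0.0, 1.0])
    planes = []
    for p, t in zip(cl, tangents):
        u = np.cross(t, ref)
        if np.linalg.norm(u) < 1e-6:             # tangent parallel to the reference axis
            u = np.cross(t, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(t, u)                        # u, v span the plane perpendicular to t
        offsets = np.arange(-half_width, half_width + 1) * spacing
        grid = p[:, None, None] + u[:, None, None] * offsets[None, :, None] \
                                + v[:, None, None] * offsets[None, None, :]
        plane = map_coordinates(volume, grid.reshape(3, -1), order=1)
        planes.append(plane.reshape(len(offsets), len(offsets)))
    return np.stack(planes)    # reformatted volume: centerline index x plane x plane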
Multiscale intensity homogeneity transformation method and its application to computer-aided detection of pulmonary embolism in computed tomographic pulmonary angiography (CTPA)
Author(s):
Yanhui Guo;
Chuan Zhou;
Heang-Ping Chan;
Jun Wei;
Aamer Chughtai;
Baskaran Sundaram;
Lubomir M. Hadjiiski;
Smita Patel;
Ella A. Kazerooni
Show Abstract
A 3D multiscale intensity homogeneity transformation (MIHT) method was developed to reduce false positives (FPs) in our previously developed CAD system for pulmonary embolism (PE) detection. In MIHT, the voxel intensity of a PE candidate region was transformed to an intensity homogeneity value (IHV) with respect to the local median intensity. The IHVs were calculated at multiple scales (MIHVs) to measure the intensity homogeneity, taking into account vessels of different sizes and different degrees of occlusion. Seven new features including the entropy, gradient, and moments that characterized the intensity distributions of the candidate regions were derived from the MIHVs and combined with the previously designed features that described the shape and intensity of PE candidates for the training of a linear classifier to reduce the FPs. 59 CTPA PE cases were collected from our patient files (UM set) with IRB approval and 69 cases from the PIOPED II data set with access permission. 595 and 800 PEs were identified as reference standard by experienced thoracic radiologists in the UM and PIOPED set, respectively. FROC analysis was used for performance evaluation. Compared with our previous CAD system, at a test sensitivity of 80%, the new method reduced the FP rate from 18.9 to 14.1/scan for the PIOPED set when the classifier was trained with the UM set, and from 22.6 to 16.0/scan when the training and test sets were exchanged. The improvement was statistically significant (p<0.05) by JAFROC analysis. This study demonstrated that the MIHT method is effective in reducing FPs and improving the performance of the CAD system.
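The core transformation can be approximated with a few lines of SciPy; the sketch below is illustrative (and not necessarily the paper's exact normalization): it subtracts the local median at several window sizes so that well-filled vessels map to values near zero while partially occluded lumen stands out.

# Minimal sketch of a multiscale, median-based intensity homogeneity measure.
import numpy as np
from scipy.ndimage import median_filter

def multiscale_ihv(volume, scales=(3, 5, 9)):
    """Return one homogeneity map per scale: intensity minus the local median."""
    vol = volume.astype(float)
    maps = []
    for size in scales:
        local_median = median_filter(vol, size=size)
        maps.append(vol - local_median)   # near zero in homogeneous, well-opacified vessels
    return np.stack(maps)

# Candidate-level features (e.g. entropy or moments of the IHV values inside a
# candidate mask) would then feed the linear classifier described in the abstract.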
Quantitative consensus of supervised learners for diffuse lung parenchymal HRCT patterns
Author(s):
Sushravya Raghunath;
Srinivasan Rajagopalan;
Ronald A. Karwoski;
Brian J. Bartholmai;
Richard A. Robb
Show Abstract
Automated lung parenchymal classification usually relies on supervised learning of expert-chosen regions representative of the visually differentiable HRCT patterns specific to different pathologies (e.g., emphysema, ground glass, honeycombing, reticular and normal). Considering the elusiveness of a single most discriminating similarity measure, a plurality of weak learners can be combined to improve the machine learnability. Though a number of quantitative combination strategies exist, their efficacy is data and domain dependent. In this paper, we investigate multiple (N=12) quantitative consensus approaches to combine the clusters obtained with multiple (n=33) probability density-based similarity measures. Our study shows that hypergraph-based meta-clustering and probabilistic clustering provide optimal expert-metric agreement.
Automated localization of costophrenic recesses and costophrenic angle measurement on frontal chest radiographs
Author(s):
Pragnya Maduskar;
Laurens Hogeweg;
Rick Philipsen;
Bram van Ginneken
Show Abstract
Computer aided detection (CAD) of tuberculosis (TB) on chest radiographs (CXR) is difficult because the disease has varied manifestations, like opacification, hilar elevation, and pleural effusions. We have developed a CAD research prototype for TB (CAD4TB v1.08, Diagnostic Image Analysis Group, Nijmegen, The Netherlands) which is trained to detect textural abnormalities inside unobscured lung fields. If the only abnormality visible on a CXR were a blunt costophrenic angle, caused by pleural fluid in the costophrenic recess, it would likely be missed by texture analysis in the lung fields. The goal of this work is therefore to detect the presence of blunt costophrenic (CP) angles caused by pleural effusion on chest radiographs. The CP angle is the angle formed by the hemidiaphragm and the chest wall. We define the intersection point of both as the CP angle point. We first detect the CP angle point automatically from a lung field segmentation by finding the foreground pixel of each lung with maximum y location. Patches are extracted around the CP angle point and boundary tracing is performed to detect 10 consecutive pixels along the hemidiaphragm and the chest wall and derive the CP angle from these. We evaluate the method on a data set of 250 normal CXRs, 200 CXRs with only one or two blunt CP angles and 200 CXRs with one or two blunt CP angles but also other abnormalities. For these three groups, the CP angle location and angle measurements were accurate in 91%, 88%, and 92% of all the cases, respectively. The average CP angles for the three groups are indeed different, at 71.6° ± 22.9°, 87.5° ± 25.7°, and 87.7° ± 25.3°, respectively.
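The CP angle point definition and the angle measurement translate almost directly into code; the sketch below is an illustrative version that takes a binary lung mask and two already-traced pixel runs (hypothetical inputs standing in for the paper's boundary tracing).

# Minimal sketch: CP angle point = most caudal lung-mask pixel; CP angle = angle
# between the diaphragm and chest-wall directions measured at that point.
import numpy as np

def cp_angle_point(lung_mask):
    rows, cols = np.nonzero(lung_mask)
    i = np.argmax(rows)                  # foreground pixel with maximum row (y) index
    return rows[i], cols[i]

def cp_angle_degrees(cp_point, diaphragm_pts, chestwall_pts):
    """diaphragm_pts, chestwall_pts: arrays of traced (row, col) pixels."""
    p = np.asarray(cp_point, float)
    v1 = np.asarray(diaphragm_pts, float).mean(axis=0) - p
    v2 = np.asarray(chestwall_pts, float).mean(axis=0) - p
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))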
3D texture analysis of solitary pulmonary nodules using co-occurrence matrix from volumetric lung CT images
Author(s):
Ashis Kumar Dhara;
Sudipta Mukhopadhyay;
Niranjan Khandelwal
Show Abstract
In this paper we investigate a new approach for texture feature extraction using a co-occurrence matrix computed from volumetric lung CT images. Traditionally, texture analysis is performed in 2D and is suitable for images collected from 2D imaging modalities. The use of 3D imaging modalities provides scope for texture analysis of 3D objects, and 3D texture features represent 3D objects more realistically. In this work, Haralick's texture features are extended to 3D and computed from volumetric data considering 26 neighbors. The optimal texture features to characterize the internal structure of Solitary Pulmonary Nodules (SPNs) are selected based on the area under the curve (AUC) of the ROC curve and p-values from a two-tailed Student's t-test. The selected 3D texture features representing SPNs can be used in the design of an efficient Computer Aided Diagnostic (CAD) system, which plays an important role in fast and accurate lung cancer screening. The reduced number of input features to the CAD system will decrease the computational time and the classification errors caused by irrelevant features. In the present work, SPNs are classified against Ground Glass Nodules (GGNs) using an Artificial Neural Network (ANN) classifier, considering the top five 3D texture features and the top five 2D texture features separately. The classification is performed on 92 SPNs and 25 GGNs from the Image Database Resource Initiative (IDRI) public database, and the classification accuracy is 97.17% using 3D texture features and 89.1% using 2D texture features.
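A minimal sketch of the 3D co-occurrence computation is given below: it quantizes the masked nodule volume, accumulates a gray-level co-occurrence matrix over all 26 unit offsets, and derives two Haralick-style features. The quantization level and the two features shown are illustrative choices, not the paper's full fifty-feature set.

# Minimal sketch: 3D GLCM over the 26-neighborhood plus two Haralick-style features.
import itertools
import numpy as np

def _shift(d, n):
    """Source/destination slices for an offset of d along an axis of length n."""
    return (slice(0, n - d), slice(d, n)) if d >= 0 else (slice(-d, n), slice(0, n + d))

def glcm_3d(volume, mask, levels=16):
    vals = volume[mask].astype(float)
    q = np.zeros(volume.shape, dtype=int)
    q[mask] = np.minimum((vals - vals.min()) / (np.ptp(vals) + 1e-9) * levels,
                         levels - 1).astype(int)
    glcm = np.zeros((levels, levels))
    for offset in itertools.product((-1, 0, 1), repeat=3):
        if offset == (0, 0, 0):
            continue
        src, dst = zip(*[_shift(d, n) for d, n in zip(offset, volume.shape)])
        valid = mask[src] & mask[dst]          # both voxels inside the nodule mask
        np.add.at(glcm, (q[src][valid], q[dst][valid]), 1)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)        # Haralick contrast
    energy = np.sum(p ** 2)                    # Haralick energy (angular second moment)
    return p, contrast, energy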
Automatic organ localizations on 3D CT images by using majority-voting of multiple 2D detections based on local binary patterns and Haar-like features
Author(s):
Xiangrong Zhou;
Shoutarou Yamaguchi;
Xinxin Zhou;
Huayue Chen;
Takeshi Hara;
Ryujiro Yokoyama;
Masayuki Kanematsu;
Hiroshi Fujita
Show Abstract
This paper describes an approach to accomplish fast and automatic localization of different inner organ regions on 3D CT scans. The proposed approach combines object detection and a majority-voting technique to achieve robust and quick organ localization. The basic idea of the proposed method is to detect a number of 2D partial appearances of a 3D target region on CT images from multiple body directions, on multiple image scales, and using multiple feature spaces, and to vote all the 2D detection results back into the 3D image space to statistically decide one 3D bounding rectangle of the target organ. Ensemble learning was used to train the multiple 2D detectors based on template matching in local binary pattern and Haar-like feature spaces. A collaborative voting was used to decide the corner coordinates of the 3D bounding rectangle of the target organ region based on the coordinate histograms from detection results in three body directions. Since the architecture of the proposed method (multiple independent detections connected to a majority voting) naturally fits the parallel computing paradigm and multi-core CPU hardware, the proposed algorithm easily achieves high computational efficiency for organ localization on a whole-body CT scan using general-purpose computers. We applied this approach to the localization of 12 kinds of major organ regions independently on 1,300 torso CT scans. In our experiments, we randomly selected 300 CT scans (with human-indicated organ and tissue locations) for training, and then applied the proposed approach with the training results to localize each of the target regions on the other 1,000 CT scans for performance testing. The experimental results showed the potential of the proposed approach to automatically locate different kinds of organs on whole-body CT scans.
Computerized segmentation of ureters in CT urography (CTU) using COMPASS
Author(s):
Lubomir M. Hadjiiski;
Heang-Ping Chan;
Luke Niland;
Richard H. Cohan;
Elaine M. Caoili;
Chuan Zhou;
Jun Wei
Show Abstract
We are developing a computerized system for automated segmentation of ureters on CTU, as a critical component for computer-aided diagnosis of ureter cancer. A challenge for ureter segmentation is the presence of regions not well opacified with intravenous (IV) contrast. We propose a COmbined Model-guided Path-finding Analysis and Segmentation System (COMPASS) to track the ureters in CTU. COMPASS consists of three stages: (1) adaptive thresholding and region growing, (2) edge profile extraction and feature analysis, and (3) path-finding and propagation. 114 ureters, filled with IV contrast material, on 74 CTU scans from 74 patients were segmented. On average the ureter occupied 286 CT slices (range: 164 to 399, median: 301). More than half of the ureters contained malignant or benign lesions and some had ureter wall thickening due to malignancy. A starting point for each of the 114 ureters was selected manually, which served as an input to COMPASS to initialize the tracking. The path-finding and segmentation are guided by anatomical knowledge of the ureters in CTU. The segmentation performance was quantitatively assessed by estimating the percentage of the length that was successfully tracked and segmented for each ureter. Of the 114 ureters, 75 (66%) were segmented completely (100%), 99 (87%) were segmented through at least 70% of their length, and 104 (91%) were segmented through at least 50%. Previously, without the model-guided approach, 61 (54%) ureters were segmented completely (100%), 80 (70%) were segmented through at least 70% of their length, and 96 (84%) were segmented through at least 50%. COMPASS improved the ureter tracking, including regions across ureter lesions, wall thickening and narrowing of the lumen.
Computer assisted measurement of femoral cortex thickening on radiographs
Author(s):
Jianhua Yao;
Yixun Liu;
Foster Chen;
Ronald M. Summers;
Timothy Bhattacharyya
Show Abstract
Radiographic features such as femoral cortex thickening have been frequently observed with atypical subtrochanteric fractures. These features may be a valuable finding to help prevent fractures before they happen. The current practice of manual measurement is often subjective and inconsistent. We developed a semi-automatic tool to consistently measure and monitor the progression of femoral cortex thickening on radiographs. By placing two seed points on each side of the femur, the program automatically extracts the periosteal and endosteal layers of the cortical shell by active contour models and B-spline fitting. Several measurements are taken along the femur shaft, including shaft diameter, cortical thickness, and integral area for the medial and lateral cortex. The experiment was conducted on 52 patient datasets. The semi-automatic measurements were validated against manual measurements on the 52 patients and demonstrated a significant improvement in consistency and accuracy (p<0.001).
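As a small illustration of the B-spline step (not the authors' tool), the sketch below fits smoothing B-splines to the extracted periosteal and endosteal point sets with SciPy and reports cortical thickness as the nearest-point distance between the two fitted curves; the active-contour extraction itself is assumed to have been done already.

# Minimal sketch: B-spline smoothing of two cortical layers and a thickness profile.
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.spatial import cKDTree

def fit_bspline(points_xy, n_samples=200, smooth=5.0):
    x, y = np.asarray(points_xy, float).T
    tck, _ = splprep([x, y], s=smooth)           # smoothing parametric B-spline
    u = np.linspace(0.0, 1.0, n_samples)
    return np.column_stack(splev(u, tck))

def cortical_thickness(periosteal_pts, endosteal_pts):
    outer = fit_bspline(periosteal_pts)
    inner = fit_bspline(endosteal_pts)
    dist, _ = cKDTree(inner).query(outer)        # thickness profile along the shaft
    return dist.mean(), dist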
Exploring the utility of axial lumbar MRI for automatic diagnosis of intervertebral disc abnormalities
Author(s):
Subarna Ghosh;
Vipin Chaudhary;
Gurmeet Dhillon
Show Abstract
In this paper, we explore the importance of axial lumbar MRI slices for automatic detection of abnormalities. In the past, only the sagittal views were taken into account for lumbar CAD systems, ignoring the fact that a radiologist scans through the axial slices as well, to confirm the diagnosis and quantify various abnormalities like herniation and stenosis. Hence, we present an automatic diagnosis system from axial slices using a CNN (Convolutional Neural Network) for dynamic feature extraction and classification of normal and abnormal lumbar discs. We show 80.81% accuracy (with a specificity of 85.29% and sensitivity of 75.56%) on 86 cases (391 discs) using only an axial slice for each disc, which implies the usefulness of axial views for automatic lumbar abnormality diagnosis in conjunction with sagittal views.
A prostate CAD system based on multiparametric analysis of DCE T1-w, and DW automatically registered images
Author(s):
Valentina Giannini;
Anna Vignati;
Simone Mazzetti;
Massimo De Luca;
Christian Bracco;
Michele Stasi;
Filippo Russo;
Enrico Armando;
Daniele Regge
Show Abstract
Prostate specific antigen (PSA)-based screening reduces the rate of death from prostate cancer (PCa) by 31%, but this
benefit is associated with a high risk of overdiagnosis and overtreatment. As prostate transrectal ultrasound-guided
biopsy, the standard procedure for prostate histological sampling, has a sensitivity of 77% with a considerable false-negative rate, more accurate methods need to be found to detect or rule out significant disease. Prostate magnetic
resonance imaging has the potential to improve the specificity of PSA-based screening scenarios as a non-invasive
detection tool, in particular exploiting the combination of anatomical and functional information in a multiparametric
framework. The purpose of this study was to describe a computer aided diagnosis (CAD) method that automatically
produces a malignancy likelihood map by combining information from dynamic contrast enhanced MR images and
diffusion weighted images. The CAD system consists of multiple sequential stages, from a preliminary registration of the images of the different sequences, in order to correct for susceptibility deformation and/or movement artifacts, to a Bayesian classifier, which fuses all the extracted features into a probability map. The promising results (AUROC=0.87) should be validated on a larger dataset, but they suggest that discrimination on a voxel basis between benign and malignant tissues is feasible with good performance. This method can be of benefit to improve the diagnostic accuracy of the radiologist, reduce reader variability and speed up the reading time by automatically highlighting probable cancer-suspicious regions.
Temporal subtraction system on torso FDG-PET scans based on statistical image analysis
Author(s):
Yusuke Shimizu;
Takeshi Hara;
Daisuke Fukuoka;
Xiangrong Zhou;
Chisako Muramatsu;
Satoshi Ito;
Kenta Hakozaki;
Shin-ichiro Kumita;
Kei-ichi Ishihara;
Tetsuro Katafuchi;
Hiroshi Fujita
Show Abstract
Diagnostic imaging with FDG-PET scans is often used to evaluate the chemotherapy results of cancer patients. Radiologists compare the changes in lesion activity between previous and current examinations for this evaluation. The purpose of this study was to develop a new computer-aided detection (CAD) system with a temporal subtraction technique for FDG-PET scans and to show its fundamental usefulness based on an observer performance study. Z-score mapping based on statistical image analysis was newly applied to the temporal subtraction technique. The subtraction images can be obtained based on the anatomical standardization results because all of the patients' scans were deformed into a standard body shape. An observer study was performed without and with the computer outputs to evaluate the usefulness of the scheme by ROC (receiver operating characteristic) analysis. Readers reported confidence levels on a continuous scale from absolutely no change to definite change between the two examinations. The recognition performance of the computer outputs for the 43 pairs was 96% sensitivity with 31.1 false-positive marks per scan. The average area under the ROC curve (AUC) from 4 readers in the observer performance study increased from 0.85 without computer outputs to 0.90 with computer outputs (p=0.0389, DBM-MRMC). The average interpretation time decreased slightly from 42.11 to 40.04 seconds per case (p=0.625, Wilcoxon test). We conclude that the CAD system for torso FDG-PET scans with the temporal subtraction technique might improve the diagnostic accuracy of radiologists in cancer therapy evaluation.
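The z-score mapping step can be sketched compactly; the snippet below is an illustrative version in which anatomically standardized SUV volumes from a normal database provide a voxel-wise mean and standard deviation, and the current scan is expressed in standard deviations from that model. Array shapes and the example threshold are assumptions.

# Minimal sketch: voxel-wise z-score map against a normal database of
# anatomically standardized SUV volumes.
import numpy as np

def z_score_map(current_suv, normal_suvs, eps=1e-6):
    """normal_suvs: array of shape (n_subjects, z, y, x), already standardized."""
    mean = normal_suvs.mean(axis=0)
    std = normal_suvs.std(axis=0)
    return (current_suv - mean) / (std + eps)

# Voxels with large positive z-scores (e.g. z > 2, an assumed threshold) are
# candidates for increased uptake; for temporal comparison, the z-map of the
# previous examination can be subtracted from that of the current one.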
Segmentation of common carotid artery with active appearance models from ultrasound images
Author(s):
Xin Yang;
Wanji He;
Aaron Fenster;
Ming Yuchi;
Mingyue Ding
Show Abstract
Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, a new
segmentation method is proposed and evaluated for outlining the common carotid artery (CCA) from transverse view
images, which were sliced from three-dimensional ultrasound (3D US) of 1mm inter-slice distance (ISD), to support the
monitoring and assessment of carotid atherosclerosis. The data set consists of forty-eight 3D US images acquired from both the left and right carotid arteries of twelve patients at two time points; the patients had carotid stenosis of 60% or more at baseline. The 3D US data were collected at baseline and at a three-month follow-up, with seven patients treated with 80 mg atorvastatin and five with placebo. The baseline manual boundaries were used for Active Appearance Model (AAM) training, while the treatment data were used for segmentation testing and evaluation. The segmentation results were compared with experts' manually outlined boundaries, as a surrogate for ground truth, for further evaluation. For the adventitia and lumen segmentations, the algorithm yielded Dice Coefficients (DC) of 92.06%±2.73% and 89.67%±3.66%, mean absolute distances (MAD) of 0.28±0.18 mm and 0.22±0.16 mm, and maximum absolute distances (MAXD) of 0.71±0.28 mm and 0.59±0.21 mm, respectively. The segmentation results were also evaluated via Pratt's figure of merit (FOM), with values of 0.61±0.06 and 0.66±0.05, which provides a quantitative measure of similarity. Experimental results indicate that the proposed method can promote the use of carotid 3D US for fast, safe and economical monitoring of atherosclerotic disease progression and regression during therapy.
Automatic segmentation of the lumen of the carotid artery in ultrasound B-mode images
Author(s):
André M. F. Santos;
João Manuel R. S. Tavares;
Luísa Sousa;
Rosa Santos;
Pedro Castro;
Elsa Azevedo
Show Abstract
A new algorithm is proposed for the segmentation of the lumen and bifurcation boundaries of the carotid artery in B-mode ultrasound images. It uses the hypoechogenic characteristics of the lumen for the identification of the carotid boundaries and the echogenic characteristics for the identification of the bifurcation boundaries. The image to be segmented is processed with an anisotropic diffusion filter for speckle removal, and morphologic operators are employed in the detection of the artery. The obtained information is then used in the definition of two initial contours, one corresponding to the lumen and the other to the bifurcation boundaries, for the posterior application of the Chan-Vese level set segmentation model. A set of longitudinal B-mode images of the common carotid artery (CCA) was acquired with a GE Healthcare Vivid-e ultrasound system (GE Healthcare, United Kingdom). All the acquired images include a part of the CCA and of the bifurcation that separates the CCA into the internal and external carotid arteries. In order to achieve the utmost robustness in the image acquisition process, i.e., images with high contrast and low speckle noise, the scanner was adjusted differently for each acquisition and according to the medical exam. The obtained results show that we were able to successfully apply a carotid segmentation technique based on cervical ultrasonography. The main advantage of the new segmentation method lies in the automatic identification of the carotid lumen, overcoming the limitations of the traditional methods.
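A minimal, illustrative version of the final segmentation step is sketched below using scikit-image's Chan-Vese implementation; Gaussian smoothing stands in for the paper's anisotropic diffusion filter, and the library's default level-set initialization replaces the morphology-derived initial contour.

# Minimal sketch: speckle smoothing plus Chan-Vese level-set segmentation of the
# dark (hypoechogenic) carotid lumen in a B-mode image.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import chan_vese

def segment_lumen(bmode_image):
    img = gaussian(bmode_image.astype(float), sigma=2)          # crude speckle smoothing
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)    # normalize to [0, 1]
    mask = chan_vese(img, mu=0.25, lambda1=1.0, lambda2=1.0)
    # Keep the phase with the lower mean intensity, i.e. the hypoechogenic lumen.
    if img[mask].mean() > img[~mask].mean():
        mask = ~mask
    return mask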
Assessment of implanted stent coverage of side-branches in intravascular optical coherence tomographic images
Author(s):
A. Wang;
J. Eggermont;
J.H.C. Reiber;
N. Dekker;
P.J.H. de Koning;
J. Dijkstra
Show Abstract
Coronary stents improve the blood flow by keeping narrowed vessels open, but small stent cells that overlay a side
branch may cause restenosis and obstruct the blood flow to the side branch. There are increasing demands for precise
measurement of the stent coverage of side branches for outcome evaluation and clinical research. Capturing micrometer-resolution images, intravascular optical coherence tomography (IVOCT) allows proper visualization of the stent struts, which subsequently can be used for the coverage measurement purpose. In this paper, a new approach to compute the stent coverage of side branches in IVOCT image sequences is presented. The amount of the stent coverage of a side branch is determined by the ostial area of the stent cells that cover this side branch. First, the stent struts and the guide wires are detected to reconstruct the irregular stent surface and the stent cell contours are generated to segment their coverage area on the stent surface. Next, the covered side branches are detected and their
lumen contours are projected onto the stent surface to specify the side branch areas. By assessing the common parts
between the stent cell areas and the side branch areas, the stent cell coverage of side branches can be computed.
The evaluation based on a phantom data set demonstrated that the average error of the stent coverage of side branches is 8.9% ± 7.0%. The utility of the presented approach for in-vivo data sets was also demonstrated by testing on 12 clinical IVOCT image sequences.
A centerline-based estimator of vessel bifurcations in angiography images
Author(s):
Maysa M. G. Macedo;
Miguel A. Galarreta-Valverde;
Choukri Mekkaoui;
Marcel P. Jackowski
Show Abstract
The analysis of vascular structure based on vessel diameters, density and distance between bifurcations is an important
step towards the diagnosis of vascular anomalies. Moreover, vascular network extraction allows the study of angiogenesis. This work describes a technique that detects bifurcations in vascular networks in magnetic resonance angiography and computed tomography angiography images. Initially, a vessel tracking technique that uses the Hough transform and the Hessian matrix of second-order partial derivatives of image intensity is used to estimate the scale and vessel direction, respectively. This semi-automatic technique is capable of connecting isolated tracked vessel segments and extracting a full tree from a vascular network with minimal user intervention. Vessel shape descriptors such as curvature are then used to identify bifurcations during tracking and to estimate the next branch direction. We initially applied this technique to synthetic datasets and then to real images.
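The direction-estimation step can be illustrated with a short sketch (not the authors' code): the Hessian of the intensity is computed with Gaussian derivatives at an assumed scale, and the local vessel direction is taken as the eigenvector whose eigenvalue has the smallest magnitude, i.e., the direction of least intensity variation.

# Minimal sketch: local vessel direction from the intensity Hessian at one voxel.
import numpy as np
from scipy.ndimage import gaussian_filter

def vessel_direction(volume, point_zyx, sigma=2.0):
    v = volume.astype(float)
    H = np.empty((3, 3))
    for a in range(3):
        for b in range(a, 3):
            order = [0, 0, 0]
            order[a] += 1
            order[b] += 1
            # Gaussian second derivative along axes a and b (filters the whole
            # volume for simplicity; a small crop around the point would be faster).
            deriv = gaussian_filter(v, sigma=sigma, order=order)
            H[a, b] = H[b, a] = deriv[point_zyx]
    eigvals, eigvecs = np.linalg.eigh(H)
    direction = eigvecs[:, np.argmin(np.abs(eigvals))]   # along-vessel eigenvector
    return direction / np.linalg.norm(direction)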
Automatic identification of origins of left and right coronary arteries in CT angiography for coronary arterial tree tracking and plaque detection
Author(s):
Chuan Zhou;
Heang-Ping Chan;
Aamer Chughtai;
Jun Wei;
Lubomir M. Hadjiiski;
Prachi Agarwal;
Jean W. Kuriakose;
Ella A. Kazerooni
Show Abstract
Automatic tracking and segmentation of the coronary arterial tree is the basic step for computer-aided analysis of coronary disease. The goal of this study is to develop an automated method to identify the origins of the left coronary artery (LCA) and right coronary artery (RCA) as the seed points for the tracking of the coronary arterial trees. The heart region and the contrast-filled structures in the heart region are first extracted using morphological operations and EM estimation. To identify the ascending aorta, we developed a new multiscale aorta search (MAS) method in which the aorta is identified based on a priori knowledge of its circular shape. Because the shape of the ascending aorta in the cCTA axial view is roughly a circle but its size can vary over a wide range for different patients, multiscale circular-shape priors are used to search for the best matching circular object in each CT slice, guided by the Hausdorff distance (HD) as the matching indicator. The location of the aorta is identified by finding the minimum HD in the heart region over the set of multiscale circular priors. An adaptive region growing method is then used to extend the initially identified aorta down to the aortic valves. The origins at the aortic sinus are finally identified by a morphological gray-level top-hat operation applied to the region-grown aorta with a morphological structuring element designed for coronary arteries. For the 40 test cases, the aorta was correctly identified in 38 cases (95%). The aorta could be grown to the aortic root in 36 cases, and 36 LCA origins and 34 RCA origins were identified within 10 mm of the locations marked by radiologists.
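The circular-prior matching can be illustrated with a short sketch (not the authors' implementation): circle templates at several radii are scored against a candidate object boundary with the symmetric Hausdorff distance from SciPy, and the best-scoring radius is kept. The candidate boundary extraction and radius range are assumed inputs.

# Minimal sketch: multiscale circular priors scored by the Hausdorff distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def circle_points(center_xy, radius, n=90):
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.column_stack([center_xy[0] + radius * np.cos(theta),
                            center_xy[1] + radius * np.sin(theta)])

def best_circular_match(boundary_xy, radii_mm, pixel_spacing):
    """boundary_xy: (N, 2) contour of a candidate object in pixel coordinates."""
    center = boundary_xy.mean(axis=0)
    best_hd, best_radius = np.inf, None
    for r_mm in radii_mm:
        circle = circle_points(center, r_mm / pixel_spacing)
        hd = max(directed_hausdorff(boundary_xy, circle)[0],
                 directed_hausdorff(circle, boundary_xy)[0])   # symmetric HD
        if hd < best_hd:
            best_hd, best_radius = hd, r_mm
    return best_hd, best_radius    # a small HD indicates an aorta-like circular object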
Automated registration of coronary arterial trees from multiple phases in coronary CT angiography (cCTA)
Author(s):
Lubomir Hadjiiski;
Chuan Zhou;
Heang-Ping Chan;
Aamer Chughtai;
Prachi Agarwal;
Jean Kuriakose;
Smita Patel;
Jun Wei;
Ella Kazerooni
Show Abstract
We are developing an automated registration method for coronary arterial trees from multiple-phase cCTA to build a best-quality tree to facilitate detection of stenotic plaques. A cubic B-spline with fast localized optimization (CBSO) is designed to register the initially segmented left and right coronary arterial trees (LCA or RCA) separately in adjacent phase pairs where displacements are small. First, the corresponding trees in phases 1 and 2 are registered. The phase 3 tree is then registered to the combined tree. Similarly, the trees in phases 4, 5, and 6 are registered. An affine transform with quadratic terms and nonlinear simplex optimization (AQSO) is designed to register the trees between phases with large displacements, namely, registering the combined tree from phases 1, 2, and 3 to that from phases 4, 5, and 6. Finally, CBSO is again applied to the AQSO-registered volumes for final refinement. The costs determined by the distances between the vessel centerlines, bifurcation points and voxels of the trees are minimized to guide both CBSO and AQSO registration. The registration performance was evaluated on 22 LCA and 22 RCA trees from 22 CTA scans with 6 phases from 22 patients. The average distance between the centerlines of the registered trees was used as a registration quality index. The average distances for LCA and RCA registration over the 6 phases and 22 patients were 1.49 and 1.43 pixels, respectively. This study demonstrates the feasibility of using an automated method for registration of coronary arterial trees from multiple cCTA phases.
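To illustrate the simplex-optimized step in a simplified form (affine only, without the paper's quadratic terms or B-spline refinement), the sketch below registers two centerline point sets by minimizing their mean nearest-neighbour distance with SciPy's Nelder-Mead optimizer; the point arrays are assumed inputs.

# Minimal sketch: affine registration of two coronary centerlines driven by the
# mean nearest-neighbour distance and the Nelder-Mead simplex optimizer.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def register_affine_simplex(source_pts, target_pts):
    """source_pts, target_pts: (N, 3) centerline points from two phases."""
    tree = cKDTree(target_pts)

    def cost(params):
        A = params[:9].reshape(3, 3)
        t = params[9:]
        moved = source_pts @ A.T + t
        dist, _ = tree.query(moved)
        return dist.mean()                     # mean centerline-to-centerline distance

    x0 = np.concatenate([np.eye(3).ravel(), np.zeros(3)])   # start at the identity
    res = minimize(cost, x0, method='Nelder-Mead',
                   options={'maxiter': 5000, 'xatol': 1e-4, 'fatol': 1e-4})
    A, t = res.x[:9].reshape(3, 3), res.x[9:]
    return A, t, res.fun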