Proceedings Volume 9034

Medical Imaging 2014: Image Processing


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 25 April 2014
Contents: 13 Sessions, 157 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2014
Volume Number: 9034

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9034
  • OCT and Ultrasound
  • Segmentation
  • Temporal and Motion Analysis
  • Cardiac and Vascular Imaging
  • DTI
  • Shape
  • Keynote and Brain
  • Classification and Texture
  • Registration
  • Atlas-based Segmentation
  • Magnetic Resonance Imaging
  • Poster Session
Front Matter: Volume 9034
Front Matter: Volume 9034
This PDF file contains the front matter associated with SPIE Proceedings Volume 9034, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
OCT and Ultrasound
An adaptive grid for graph-based segmentation in retinal OCT
Andrew Lang, Aaron Carass, Peter A. Calabresi, et al.
Graph-based methods for retinal layer segmentation have proven popular due to their efficiency and accuracy. These methods build a graph with nodes at each voxel location and use edges connecting nodes to encode the hard constraints of each layer's thickness and smoothness. In this work, we explore deforming the regular voxel grid so that adjacent vertices in the graph more closely follow the natural curvature of the retina. This deformed grid is constructed by fixing node locations based on a regression model of each layer's thickness relative to the overall retinal thickness, thereby generating a subject-specific grid. Graph vertices need not lie at voxel locations, which allows control over the resolution that the graph represents. By incorporating soft constraints between adjacent nodes, segmentation on this grid favors smoothly varying surfaces consistent with the shape of the retina. Our final segmentation method then follows our previous work: boundary probabilities are estimated using a random forest classifier, followed by an optimal graph search on the new adaptive grid to produce the final segmentation. Our method is shown to produce a more consistent segmentation, with an overall accuracy of 3.38 μm across all boundaries.
Automated vessel shadow segmentation of fovea-centered spectral-domain images from multiple OCT devices
Jing Wu, Bianca S. Gerendas, Sebastian M. Waldstein, et al.
Spectral-domain Optical Coherence Tomography (SD-OCT) is a non-invasive modality for acquiring high-resolution, three-dimensional (3D) cross-sectional volumetric images of the retina and the subretinal layers. SD-OCT also allows detailed imaging of retinal pathology, aiding clinicians in the diagnosis of sight-degrading diseases such as age-related macular degeneration (AMD) and glaucoma. Disease diagnosis, assessment, and treatment require a patient to undergo multiple OCT scans, possibly using different scanning devices, to accurately and precisely gauge disease activity, progression, and treatment success. However, the use of OCT imaging devices from different vendors, combined with patient movement, may result in poor scan spatial correlation, potentially leading to incorrect patient diagnosis or treatment analysis. Image registration can be used to precisely compare disease states by registering differing 3D scans to one another. In order to align 3D scans from different time-points and vendors using registration, landmarks are required, the most obvious being the retinal vasculature. Presented here is a fully automated cross-vendor method to acquire retinal vessel locations for OCT registration from fovea-centred 3D SD-OCT scans based on vessel shadows. Noise-filtered OCT scans are flattened based on vendor retinal layer segmentation to extract the retinal pigment epithelium (RPE) layer of the retina. Voxel-based layer profile analysis and k-means clustering are used to extract candidate vessel shadow regions from the RPE layer. In conjunction, the extracted RPE layers are combined to generate a projection image featuring all candidate vessel shadows. Image processing methods for vessel segmentation of the OCT-constructed projection image are then applied to optimize the accuracy of vessel shadow segmentation through the removal of false positive shadow regions such as those caused by exudates and cysts.
Validation of segmented vessel shadows uses ground truth vessel shadow regions identified by expert graders at the Vienna Reading Center (VRC). The results presented here are intended to show the feasibility of this method for the accurate and precise extraction of suitable retinal vessel shadows from multi-vendor 3D SD-OCT scans for use in intra-vendor and cross-vendor 3D OCT registration, 2D fundus registration, and retinal vessel segmentation. Compared to the mean grader ground truth, 95% of the vessel shadow segments identified by the proposed system are true positives.
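The clustering step described above can be illustrated with a tiny two-cluster 1-D k-means that flags the darker cluster as candidate shadow. This is a hedged sketch of the general technique, not the authors' implementation; the function name, extreme-value initialization, and iteration count are illustrative choices.

```python
import numpy as np

def kmeans_1d_two_class(values, iters=20):
    """Two-cluster 1-D k-means; returns a boolean mask marking the darker
    cluster (candidate vessel shadows). Illustrative sketch only."""
    v = np.asarray(values, float)
    c = np.array([v.min(), v.max()])  # initialize centroids at the extremes
    for _ in range(iters):
        # Assign each value to its nearest centroid, then recompute means.
        assign = np.abs(v[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if (assign == k).any():
                c[k] = v[assign == k].mean()
    return assign == int(np.argmin(c))  # darker cluster = smaller centroid
```

Applied to, say, mean intensities under the RPE layer, the returned mask would mark dark columns as shadow candidates for the subsequent false-positive removal step.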
Locally constrained active contour: a region-based level set for ovarian cancer metastasis segmentation
Accurate segmentation of ovarian cancer metastases is clinically useful to evaluate tumor growth and determine follow-up treatment. We present a region-based level set algorithm with localization constraints to segment ovarian cancer metastases. Our approach is established on a representative region-based level set, Chan-Vese model, in which an active contour is driven by region competition. To reduce over-segmentation, we constrain the level set propagation within a narrow image band by embedding a dynamic localization function. The metastasis intensity prior is also estimated from image regions within the level set initialization. The localization function and intensity prior force the level set to stop at the desired metastasis boundaries. Our approach was validated on 19 ovarian cancer metastases with radiologist-labeled ground-truth on contrast-enhanced CT scans from 15 patients. The comparison between our algorithm and geodesic active contour indicated that the volume overlap was 75±10% vs. 56±6%, the Dice coefficient was 83±8% vs. 63±8%, and the average surface distance was 2.2±0.6mm vs. 4.4±0.9mm. Experimental results demonstrated that our algorithm outperformed traditional level set algorithms.
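As a rough illustration of the localization idea, the sketch below performs one region-competition update of a Chan-Vese-style level set restricted to a narrow band around the zero level set, so region statistics and propagation are both computed locally. It omits the curvature term and the intensity prior described in the abstract; all names and parameter values are illustrative, not the authors' code.

```python
import numpy as np

def localized_chan_vese_step(phi, img, band=3.0, dt=0.5, lam=1.0):
    """One narrow-band Chan-Vese-style update (inside is phi < 0).

    Region means c_in/c_out are estimated only from pixels inside the band
    |phi| < band, and phi is updated only there, mimicking a localization
    constraint. Simplified sketch: no curvature term, no reinitialization.
    """
    in_band = np.abs(phi) < band
    inside = in_band & (phi < 0)
    outside = in_band & (phi >= 0)
    c_in = img[inside].mean() if inside.any() else img.mean()
    c_out = img[outside].mean() if outside.any() else img.mean()
    # Region competition: pixels resembling the inside mean push phi down
    # (deeper inside); pixels resembling the outside mean push phi up.
    speed = lam * ((img - c_in) ** 2 - (img - c_out) ** 2)
    phi = phi.copy()
    phi[in_band] += dt * speed[in_band]
    return phi
```

Iterating this step until the band stops moving gives a crude localized segmentation; the paper additionally constrains the intensity prior from the initialization region.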
Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)
Mandana Javanshir Moghaddam, Tao Tan, Nico Karssemeijer, et al.
Recent studies have demonstrated that applying Automated Breast Ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early-stage breast cancers which are occult in corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from Automated Breast Ultrasound Systems. The nipple location is a valuable landmark for reporting the position of possible abnormalities in a breast or for guiding image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-Systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.
Cancer therapy prognosis using quantitative ultrasound spectroscopy and a kernel-based metric
Mehrdad J. Gangeh, Amr Hashim, Anoja Giles, et al.
In this study, a kernel-based metric based on the Hilbert-Schmidt independence criterion (HSIC) is proposed in a computer-aided-prognosis system to monitor cancer therapy effects. In order to induce tumour cell death, sarcoma xenograft tumour-bearing mice were injected with microbubbles, followed successively by ultrasound and X-ray radiation therapy as a new anti-vascular treatment. High-frequency (30 MHz central frequency) ultrasound imaging was performed before and at different times after treatment, and quantitative ultrasound (QUS) parametric maps were derived from the radiofrequency (RF) signals using spectroscopy. The intensity histogram of midband-fit parametric maps was computed to represent the pre- and post-treatment images. Subsequently, the HSIC-based metric between pre- and post-treatment samples was computed for each animal as a measure of distance between the two distributions. The HSIC-based metric computes the distance between two distributions in a reproducing kernel Hilbert space (RKHS), meaning that by using a kernel, the input vectors are non-linearly mapped into a different, possibly high-dimensional feature space. By computing the population means in this new space, enhanced group separability (compared to, e.g., the Euclidean distance in the original feature space) is ideally obtained. The pre- and post-treatment parametric maps for each animal were thus represented by a dissimilarity measure, in which a high value of the metric indicates a stronger treatment effect on the animal. This research showed that the metric correlates highly with cell death and, when used in supervised learning, yields high classification accuracy with a k-nearest-neighbor (k-NN) classifier.
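For reference, the standard biased empirical HSIC estimator (Gretton et al.), trace(KHLH)/(n-1)², can be computed in a few lines with Gaussian kernels. This sketches the general quantity only; the authors' exact kernel choices and normalization may differ, and the bandwidth here is an assumed parameter.

```python
import numpy as np

def rbf_gram(x, sigma):
    # Gaussian (RBF) Gram matrix from pairwise squared distances of 1-D data.
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimate: trace(K H L H) / (n - 1)^2."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A constant second sample yields HSIC = 0 (its centered Gram matrix vanishes), while dependent samples give a positive value, which is what makes the quantity usable as a dissimilarity score between histograms.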
Segmentation
Multi-scale feature learning on pixels and super-pixels for seminal vesicles MRI segmentation
Qinquan Gao, Akshay Asthana, Tong Tong, et al.
We propose a learning-based approach to segment the seminal vesicles (SV) via random forest classifiers. The proposed discriminative approach relies on a decision forest using high-dimensional, multi-scale, context-aware spatial, textural and descriptor-based features at both pixel and super-pixel level. After affine transformation to a template space, the relevant high-dimensional multi-scale features are extracted and random forest classifiers are learned based on the masked region of the seminal vesicles from the most similar atlases. Using these classifiers, an intermediate probabilistic segmentation is obtained for the test images. Then, a graph-cut based refinement is applied to this intermediate probabilistic representation of each voxel to obtain the final segmentation. We apply this approach to segment the seminal vesicles from 30 T2-weighted MRI training images of the prostate, which presents a particularly challenging segmentation task. The results show that the multi-scale approach and the augmentation of the pixel-based features with the super-pixel based features enhance the discriminative power of the learnt classifier, leading to better-quality segmentations in some very difficult cases. The results are compared to the radiologist-labeled ground truth using leave-one-out cross-validation. Overall, a Dice metric of 0.7249 and a Hausdorff surface distance of 7.0803 mm are achieved for this difficult task.
Failure analysis for model-based organ segmentation using outlier detection
In recent years, Model-Based Segmentation (MBS) techniques have been used in a broad range of medical applications. In clinical practice, such techniques are increasingly employed for diagnostic purposes and treatment decisions. However, it is not guaranteed that a segmentation algorithm will converge towards the desired solution. In specific situations, such as the presence of rare anatomical variants (which cannot be represented) or images of extremely low quality, a meaningful segmentation might not be feasible. At the same time, an automated estimation of segmentation reliability is commonly not available. In this paper we present an approach for the identification of segmentation failures using concepts from the field of outlier detection. The approach is validated on a comprehensive set of Computed Tomography Angiography (CTA) images by means of Receiver Operating Characteristic (ROC) analysis. Encouraging results in terms of an Area Under the ROC Curve (AUC) of up to 0.965 were achieved.
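The AUC figure quoted above can be computed directly from outlier scores via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen failure case receives a higher score than a randomly chosen success. A minimal sketch, with illustrative names (the paper does not specify its AUC implementation):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney formulation: fraction of (failure, success)
    pairs where the failure's outlier score is higher; ties count half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]  # failures
    neg = [s for s, l in zip(scores, labels) if l == 0]  # successes
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the outlier score perfectly separates failed from successful segmentations; 0.5 means it carries no information.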
Brain abnormality segmentation based on l1-norm minimization
Ke Zeng, Guray Erus, Manoj Tanwar, et al.
We present a method that uses sparse representations to model the inter-individual variability of healthy anatomy from a limited number of normal medical images. Abnormalities in MR images are then defined as deviations from the normal variation. More precisely, we model an abnormal (pathological) signal y as the superposition of a normal part ~y that can be sparsely represented under an example-based dictionary, and an abnormal part r. Motivated by a dense error correction scheme recently proposed for sparse signal recovery, we use l1-norm minimization to separate ~y and r. We extend the existing framework, used mainly for robust face recognition in a discriminative setting, to address challenges of brain image analysis, particularly the high-dimensionality, low-sample-size problem. The dictionary is constructed from local image patches extracted from training images aligned using smooth transformations, together with minor perturbations of those patches. A multi-scale sliding-window scheme is applied to capture anatomical variations ranging from fine and localized to coarser and more global. The statistical significance of the abnormality term r is obtained by comparison to its empirical distribution through cross-validation, and is used to assign an abnormality score to each voxel. In our validation experiments the method is applied to segmenting abnormalities on 2-D slices of FLAIR images, and we obtain segmentation results consistent with the expert-defined masks.
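The separation y = D a + r can be sketched in the dense-error-correction style by running a plain ISTA solver on the stacked dictionary [D | I], so that both the sparse code a and the abnormal residual r are penalized with the l1 norm. This is a toy illustration under assumed parameters (lam, iteration count), not the authors' solver.

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding, the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_plus_error(y, D, lam=0.05, iters=500):
    """Split y into a normal part D @ a (sparse code under dictionary D)
    and a sparse abnormal residual r, via ISTA on A = [D | I].
    Minimizes 0.5*||y - A z||^2 + lam*||z||_1 with z = [a; r]."""
    A = np.hstack([D, np.eye(len(y))])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        z = soft(z - step * A.T @ (A @ z - y), step * lam)
    a, r = z[:D.shape[1]], z[D.shape[1]:]
    return D @ a, r
```

On a signal that is one dictionary atom plus a spike, the recovered r concentrates on the spike, which is the behaviour the voxel-wise abnormality score relies on.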
Towards a comprehensive CT image segmentation for thoracic organ radiation dose estimation and reporting
Cristian Lorenz, Heike Ruppertshofen, Torbjörn Vik, et al.
The administered dose of ionizing radiation during medical imaging is an issue of increasing concern for the patient, the clinical community, and the respective regulatory bodies. CT radiation dose is currently estimated based on a set of very simplifying assumptions which do not take the actual body geometry and organ-specific doses into account. This makes it very difficult to accurately report imaging-related administered dose and to track it for different organs over the life of the patient. In this paper this deficit is addressed in two steps. First, the absorbed radiation dose in each image voxel is estimated based on a Monte-Carlo simulation of X-ray absorption and scattering. Second, the image is segmented into tissue types with different radiosensitivity. In combination, this allows calculation of the effective dose as a weighted sum of the individual organ doses. The main purpose of this paper is to assess the feasibility of automatic organ-specific dose estimation. With respect to a commercially applicable solution and the corresponding robustness and efficiency requirements, we investigated the effect of dose sampling rather than integration over the organ volume. We focused on the thoracic anatomy as an exemplary body region imaged frequently by CT. For image segmentation we applied a set of available approaches which allowed us to cover the main thoracic radio-sensitive tissue types. We applied the dose estimation approach to 10 thoracic CT datasets, evaluated segmentation accuracy and administered dose, and showed that organ-specific dose estimation can be achieved.
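The two final computations named above, organ dose by sampling voxels rather than integrating over the full organ volume, and effective dose as a weighted sum of organ doses, are simple enough to sketch. The tissue weights shown are placeholders (ICRP 103 assigns 0.12 to lung and breast, for example, but the full weight set and the paper's exact values are not reproduced here).

```python
import numpy as np

def organ_dose_sampled(dose_map, organ_mask, n_samples=1000, seed=0):
    """Mean absorbed dose over a random sample of organ voxels, the
    sampling shortcut discussed in the paper (sketch, assumed interface)."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(organ_mask)
    picks = rng.choice(idx, size=min(n_samples, idx.size), replace=False)
    return dose_map.ravel()[picks].mean()

def effective_dose(organ_dose_mgy, weights):
    """Effective dose as the weighted sum of organ doses. Simplification:
    equivalent dose taken equal to absorbed dose (w_R = 1 for X-rays)."""
    return sum(weights[t] * d for t, d in organ_dose_mgy.items())
```

With illustrative weights {"lung": 0.12, "breast": 0.12}, organ doses of 10 mGy and 5 mGy combine to an effective dose contribution of 1.8 mSv from these two tissues.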
Neuromuscular fiber segmentation through particle filtering and discrete optimization
Thomas Dietenbeck, François Varray, Jan Kybic, et al.
We present an algorithm to segment a set of parallel, intertwined and bifurcating fibers from 3D images, targeted at the identification of neuronal fibers in very large sets of 3D confocal microscopy images. The method consists of preprocessing, local calculation of fiber probabilities, seed detection, tracking by particle filtering, global supervised seed clustering and final voxel segmentation. The preprocessing uses a novel random local probability filtering (RLPF). The fiber probabilities are computed by means of an SVM using steerable filters and the RLPF outputs as features. The global segmentation is solved by discrete optimization. The combination of global and local approaches makes the segmentation robust, yet the individual data blocks can be processed sequentially, limiting memory consumption. The method is automatic, but efficient manual interactions are possible if needed. The method is validated on the Neuromuscular Projection Fibers dataset from the DIADEM Challenge. On the first 15 blocks, our method achieves a 99.4% detection rate. We also compare our segmentation results to a state-of-the-art method. On average, the performance of our method is higher than or equivalent to that of the state-of-the-art method, while fewer user interactions are needed in our approach.
Prostate segmentation in MRI using fused T2-weighted and elastography images
Guy Nir, Ramin S. Sahebjavaher, Ali Baghani, et al.
Segmentation of the prostate in medical imaging is a challenging and important task for surgical planning and delivery of prostate cancer treatment. Automatic prostate segmentation can improve speed, reproducibility and consistency of the process. In this work, we propose a method for automatic segmentation of the prostate in magnetic resonance elastography (MRE) images. The method utilizes the complementary property of the elastogram and the corresponding T2-weighted image, which are obtained from the phase and magnitude components of the imaging signal, respectively. It follows a variational approach to propagate an active contour model based on the combination of region statistics in the elastogram and the edge map of the T2-weighted image. The method is fast and does not require prior shape information. The proposed algorithm is tested on 35 clinical image pairs from five MRE data sets, and is evaluated in comparison with manual contouring. The mean absolute distance between the automatic and manual contours is 1.8 mm, with a maximum distance of 5.6 mm. The relative area error is 7.6%, and the duration of the segmentation process is 2 s per slice.
Temporal and Motion Analysis
Characterizing growth patterns in longitudinal MRI using image contrast
Avantika Vardhan, Marcel Prastawa, Clement Vachet, et al.
Understanding the growth patterns of the early brain is crucial to the study of neuro-development. In the early stages of brain growth, a rapid sequence of biophysical and chemical processes takes place. A crucial component of these processes, known as myelination, consists of the formation of a myelin sheath around a nerve fiber, enabling the effective transmission of neural impulses. As the brain undergoes myelination, there is a corresponding change in the contrast between gray matter and white matter as observed in MR (Magnetic Resonance) scans. In this work, gray-white matter contrast is proposed as an effective measure of appearance which is relatively invariant to location, scanner type, and scanning conditions. To validate this, contrast is computed over various cortical regions for an adult human phantom. MR images of the phantom were repeatedly generated using different scanners, and at different locations. Contrast displays less variability over changing scan conditions than intensity-based measures, demonstrating that it is less dependent than intensity on external factors. Additionally, contrast is used to analyze longitudinal MR scans of the early brain, belonging to healthy controls and Down's Syndrome (DS) patients. Kernel regression is used to model subject-specific trajectories of contrast changing with time. These trajectories, as well as time-based biomarkers extracted from contrast modeling, show large differences between groups. The preliminary applications of contrast-based analysis indicate its future potential to reveal new information not covered by conventional volumetric or deformation-based analysis, particularly for distinguishing between normal and abnormal growth patterns.
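Subject-specific trajectories of a scalar measure over time, as described above, are commonly modelled with Nadaraya-Watson kernel regression; the sketch below shows that estimator with a Gaussian kernel. The bandwidth and function names are illustrative assumptions, since the abstract does not specify the kernel regression variant used.

```python
import numpy as np

def kernel_regression(t_obs, y_obs, t_query, bandwidth=1.0):
    """Nadaraya-Watson estimator: at each query time, return the
    Gaussian-kernel-weighted average of the observed values."""
    t_obs = np.asarray(t_obs, float)
    y_obs = np.asarray(y_obs, float)
    out = []
    for t in np.atleast_1d(t_query):
        w = np.exp(-0.5 * ((t - t_obs) / bandwidth) ** 2)  # kernel weights
        out.append(np.dot(w, y_obs) / w.sum())
    return np.array(out)
```

Fitting one such smooth curve per subject, e.g. contrast versus age in months, gives the trajectories from which time-based biomarkers (slope, time of peak, plateau value) can then be extracted.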
Registration of organs with sliding interfaces and changing topologies
Floris F. Berendsen, Alexis N. T. J. Kotte, Max A. Viergever, et al.
Smoothness and continuity assumptions on the deformation field in deformable image registration do not hold for applications where the imaged objects have sliding interfaces. Recent extensions to deformable image registration that accommodate sliding motion of organs are limited to sliding motion along approximately planar surfaces, or cannot model sliding that changes the topological configuration in the case of multiple organs. We propose a new extension to free-form image registration that is not limited in this way. Our method uses a transformation model that consists of uniform B-spline transformations for each organ region separately, based on a segmentation of one image. Since this model can create overlapping regions or gaps between regions, we introduce a penalty term that minimizes this undesired effect. The penalty term acts on the surfaces of the organ regions and is optimized simultaneously with the image similarity. To evaluate our method, registrations were performed on publicly available inhale-exhale CT scans for which the performance of other methods is known. Target registration errors are computed on dense landmark sets that are available with these datasets. On these data our method outperforms the other methods in terms of target registration error and, where applicable, also in terms of overlap and gap volumes. The other methods' approximation of sliding motion along planar surfaces is reasonably well suited to the motion present in the lung data. The ability of our method to handle sliding along curved boundaries and for changing region topology configurations was demonstrated on synthetic images.
Elastic registration of prostate MR images based on state estimation of dynamical systems
Bahram Marami, Suha Ghoul, Shahin Sirouspour, et al.
Magnetic resonance imaging (MRI) is being increasingly used for image-guided biopsy and focal therapy of prostate cancer. A combined rigid and deformable registration technique is proposed to register pre-treatment diagnostic 3T magnetic resonance (MR) images, with the identified target tumor(s), to the intra-treatment 1.5T MR images. The pre-treatment 3T images are acquired with patients in a strictly supine position using an endorectal coil, while the 1.5T images are obtained intra-operatively, just before insertion of the ablation needle, with patients in the lithotomy position. An intensity-based registration routine rigidly aligns the two images, with the transformation parameters initialized using three pairs of manually selected approximate corresponding points. The rigid registration is followed by a deformable registration algorithm employing a generic dynamic linear elastic deformation model discretized by the finite element method (FEM). The model is used in a classical state estimation framework to estimate the deformation of the prostate based on a similarity metric between pre- and intra-treatment images. Registration results using 10 sets of prostate MR images showed that the proposed method can significantly improve registration accuracy in terms of target registration error (TRE) for all prostate substructures. After deformable registration, the root mean square (RMS) TRE of 46 manually identified fiducial points was found to be 2.40±1.20 mm, 2.51±1.20 mm, and 2.28±1.22 mm for the whole gland (WG), central gland (CG), and peripheral zone (PZ), respectively. These values are improved from 3.15±1.60 mm, 3.09±1.50 mm, and 3.20±1.73 mm in the WG, CG and PZ, respectively, obtained from rigid registration alone. Registration results are also evaluated based on the Dice similarity coefficient (DSC), mean absolute surface distance (MAD) and maximum absolute surface distance (MAXD) of the WG and CG in the prostate images.
A hybrid biomechanical model-based image registration method for sliding objects
Lianghao Han, David Hawkes, Dean Barratt
The sliding motion between two anatomic structures, such as lung against chest wall or liver against surrounding tissues, produces a discontinuous displacement field between their boundaries. Capturing the sliding motion is quite challenging for intensity-based image registration methods, in which a smoothness condition is commonly applied to ensure the deformation consistency of neighboring voxels. Such a smoothness constraint contradicts motion physiology at the boundaries of these anatomic structures. Although various regularisation schemes have been developed to handle sliding motion within the framework of non-rigid intensity-based image registration, the recovered displacement field may still not be physically plausible. In this study, a new framework that incorporates a patient-specific biomechanical model with a non-rigid image registration scheme has been developed for motion estimation of sliding objects. The patient-specific model provides the motion estimation with an explicit simulation of sliding motion, while the subsequent non-rigid image registration compensates for smaller residuals of the deformation due to the inaccuracy of the physical model. The algorithm was tested against results from the published literature using 4D CT data from 10 lung cancer patients. The target registration error (TRE) of 3000 landmarks with the proposed method (1.37±0.89 mm) was significantly lower than that with the popular B-spline based free-form deformation (FFD) registration (4.5±3.9 mm), and was smaller than that using the B-spline based FFD registration with the sliding constraint (1.66±1.14 mm) or using the B-spline based FFD registration on segmented lungs (1.47±1.1 mm). A paired t-test showed that the improvement of registration performance with the proposed method was significant (p<0.01). The proposed method also achieved the best registration performance on the landmarks near lung surfaces.
Since biomechanical models captured most of the lung deformation, the final estimated deformation field was more physically plausible.
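The TRE statistics quoted in abstracts like this one are the per-landmark Euclidean distances between corresponding points after registration, summarised as mean±std or RMS. A minimal sketch of that standard computation (the function name is my own):

```python
import numpy as np

def tre_stats(fixed_pts, warped_pts):
    """Target registration error for paired (n, 3) landmark arrays:
    returns (mean, std, RMS) of the per-landmark Euclidean distances."""
    d = np.linalg.norm(np.asarray(fixed_pts, float)
                       - np.asarray(warped_pts, float), axis=1)
    return d.mean(), d.std(), np.sqrt((d ** 2).mean())
```

Values such as "1.37±0.89 mm" above correspond to the first two returned numbers computed over all 3000 landmarks.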
Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy
H. Furtado, E. Steiner, M. Stock, et al.
Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using a single projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using either the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could be extracted accurately using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190±35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.
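The geometric intuition behind pairing the two imagers can be sketched with an idealized model: if the kV and MV views were exactly orthogonal, each 2D projection would observe two of the three motion components, and the component invisible in one view is supplied by the other. This is a simplification of the paper's full 6-DOF 2D/3D registration; the axis conventions below are assumptions for illustration.

```python
import numpy as np

def fuse_orthogonal_projections(d_ap, d_lat):
    """Recover a 3D translation from displacements seen in two idealized
    orthogonal views: the AP view observes (left-right, cranial-caudal),
    the lateral view observes (anterior-posterior, cranial-caudal).
    The cranial-caudal component, seen by both, is averaged."""
    lr, cc_ap = d_ap
    ap, cc_lat = d_lat
    return np.array([lr, ap, 0.5 * (cc_ap + cc_lat)])
```

With only the AP view, the `ap` component (motion along that beam axis) would be unobservable, which is exactly the large-error case reported above for single-projection tracking.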
Cardiac and Vascular Imaging
Automated epicardial fat volume quantification from non-contrast CT
Xiaowei Ding, Demetri Terzopoulos, Mariana Diaz-Zamudio, et al.
Epicardial fat volume (EFV) is now regarded as a significant imaging biomarker for cardiovascular risk stratification. Manual or semi-automated quantification of EFV requires tedious and careful contour drawing of the pericardium on fine image features. We aimed to develop and validate a fully automated, accurate algorithm for EFV quantification from non-contrast CT using active contours and multi-atlas registration. This is a knowledge-based model that can segment both the heart and pericardium accurately by initializing the location and shape of the heart at large scale from multiple co-registered atlases and locking itself onto the pericardium actively. The deformation process is driven by pericardium detection, extracting only the white contours representing the pericardium in the CT images. Following this step, we calculate the fat volume within this region (epicardial fat) using a standard fat attenuation range. We validate our algorithm on CT datasets from 15 patients who underwent routine assessment of coronary calcium. Epicardial fat volume quantified by the algorithm (69.15 ± 8.25 cm3) and the expert (69.46 ± 8.80 cm3) showed excellent correlation (r = 0.96, p < 0.0001) with no significant differences by comparison of individual data points (p = 0.9). The algorithm achieved a Dice overlap of 0.93 (range 0.88 - 0.95). The total time was less than 60 sec on a standard Windows computer. Our results show that fast, accurate, automated, knowledge-based quantification of epicardial fat volume from non-contrast CT is feasible. To our knowledge, this is also the first fully automated algorithm reported for this task.
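The final quantification step, counting fat voxels inside the segmented pericardial sack, is straightforward once the mask exists. The sketch below uses -190 to -30 HU, a commonly used fat attenuation window; the paper's exact range and interface are not specified here, so treat the names and defaults as assumptions.

```python
import numpy as np

def epicardial_fat_volume_cm3(ct_hu, pericardium_mask, voxel_mm3,
                              fat_range=(-190, -30)):
    """Volume (cm^3) of voxels inside the pericardium mask whose HU values
    fall in a standard fat attenuation window. Sketch of the counting step."""
    fat = (pericardium_mask
           & (ct_hu >= fat_range[0])
           & (ct_hu <= fat_range[1]))
    return fat.sum() * voxel_mm3 / 1000.0  # mm^3 -> cm^3
```

All of the algorithmic difficulty reported in the abstract lies in producing `pericardium_mask` automatically; this last step is a voxel count scaled by voxel volume.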
Blood flow quantification using optical flow methods in a body fitted coordinate system
Peter Maday, Richard Brosig, Jurgen Endres, et al.
In this paper, a blood flow quantification method based on a physically motivated dense 2D flow estimation algorithm is outlined. It yields accurate time-varying volumetric flow rate measurements based on digital subtraction angiography (DSA) image sequences, with robustness to significant inter-frame displacements. Time-varying volumetric flow rates are estimated for individual non-branching vascular segments based on the estimated 2D flow fields and a 3D vessel segmentation from a 3D Rotational Angiography (3DRA) acquisition. The novelty of the approach lies in the use of a vessel-aligned coordinate system for the problem formulation. The coordinate functions are generated using the Schwarz-Christoffel (SC) map, which yields a solution with coordinate lines aligned with the vessel boundaries. The use of vessel-aligned coordinates enables easy and accurate handling of boundary conditions in the irregular domain of a vessel lumen, while requiring only slight modifications to the finite difference approach used. Unlike traditional coarse-to-fine methods, we use an anisotropic scaling strategy that enables the estimation of flows with larger inter-frame displacements. The evaluation of our method is based on highly realistic synthetic DSA datasets for a number of cases. Ground truth volumetric flow rate values are compared against the measurements and a high degree of fidelity is observed. Performance measures are obtained with varying flow velocities and acquisition rates.
3D geometric analysis of the aorta in 3D MRA follow-up pediatric image data
Stefan Wörz, Abdulsattar Alrajab, Raoul Arnold, et al.
We introduce a new model-based approach for the segmentation of the thoracic aorta and its main branches from follow-up pediatric 3D MRA image data. For robust segmentation of vessels even in difficult cases (e.g., neighboring structures), we propose a new extended parametric cylinder model which requires only relatively few model parameters. The new model is used in conjunction with a two-step fitting scheme for refining the segmentation result yielding an accurate segmentation of the vascular shape. Moreover, we include a novel adaptive background masking scheme and we describe a spatial normalization scheme to align the segmentation results from follow-up examinations. We have evaluated our proposed approach using different 3D synthetic images and we have successfully applied the approach to follow-up pediatric 3D MRA image data.
Tensor-based tracking of the aorta in phase-contrast MR images
Yoo-Jin Azad, Anton Malsam, Sebastian Ley, et al.
Velocity-encoded magnetic resonance imaging (PC-MRI) is a valuable technique to measure blood flow velocity in terms of time-resolved 3D vector fields. For diagnosis, presurgical planning and therapy control, monitoring the patient's hemodynamic situation is crucial. Hence, an accurate and robust segmentation of the diseased vessel is the basis for further methods such as the computation of blood pressure. In the literature, there exist some approaches that transfer methods for processing DT-MR images to PC-MR data, but the potential of this approach has not been fully exploited yet. In this paper, we present a method to extract the centerline of the aorta in PC-MR images by applying methods from DT-MRI. To this end, in the first step the velocity vector fields are converted into tensor fields. In the next step, tensor-based features are derived, and the tracking of the vessel course is accomplished by applying a modified tensorline algorithm. The method only uses features derived from the tensor imaging, without additional morphology information. For evaluation purposes, we applied our method to 4 volunteer datasets as well as 26 clinical patient datasets, with good results. In 29 of 30 cases our algorithm successfully extracted the vessel centerline.
Joint multi-object registration and segmentation of left and right cardiac ventricles in 4D cine MRI
The diagnosis of cardiac function based on cine MRI requires the segmentation of cardiac structures in the images, but the problem of automatic cardiac segmentation is still open, due to the imaging characteristics of cardiac MR images and the anatomical variability of the heart. In this paper, we present a variational framework for joint segmentation and registration of multiple structures of the heart. To enable the simultaneous segmentation and registration of multiple objects, a shape prior term is introduced into a region competition approach for multi-object level set segmentation. The proposed algorithm is applied for simultaneous segmentation of the myocardium as well as the left and right ventricular blood pool in short axis cine MRI images. Two experiments are performed: first, intra-patient 4D segmentation with a given initial segmentation for one time-point in a 4D sequence, and second, a multi-atlas segmentation strategy is applied to unseen patient data. Evaluation of segmentation accuracy is done by overlap coefficients and surface distances. An evaluation based on clinical 4D cine MRI images of 25 patients shows the benefit of the combined approach compared to sole registration and sole segmentation.
Nonrigid motion compensation in B-mode and contrast enhanced ultrasound image sequences of the carotid artery
Diego D. B. Carvalho, Zeynettin Akkus, Johan G. Bosch, et al.
In this work, we investigate nonrigid motion compensation in simultaneously acquired (side-by-side) B-mode ultrasound (BMUS) and contrast enhanced ultrasound (CEUS) image sequences of the carotid artery. These images are acquired to study the presence of intraplaque neovascularization (IPN), which is a marker of plaque vulnerability. IPN quantification is visualized by performing the maximum intensity projection (MIP) on the CEUS image sequence over time. As carotid images contain considerable motion, accurate global nonrigid motion compensation (GNMC) is required prior to the MIP. Moreover, we demonstrate that an improved lumen and plaque differentiation can be obtained by averaging the motion compensated BMUS images over time. We propose to use a previously published 2D+t nonrigid registration method, which is based on minimization of pixel intensity variance over time, using a spatially and temporally smooth B-spline deformation model. The validation compares displacements of plaque points with manual tracking by 3 experts in 11 carotids. The average (± standard deviation) root mean square error (RMSE) was 99±74μm for longitudinal and 47±18μm for radial displacements. These results were comparable with the interobserver variability, and with the results of a local rigid registration technique based on speckle tracking, which estimates motion at a single point, whereas our approach applies motion compensation to the entire image. In conclusion, our evaluation shows that the GNMC technique produces reliable results. Since this technique tracks global deformations, it can aid in the quantification of IPN and the delineation of lumen and plaque contours.
DTI
icon_mobile_dropdown
Influence of image registration on ADC images computed from free-breathing diffusion MRIs of the abdomen
Jean-Marie Guyader, Livia Bernardin, Naomi H. M. Douglas, et al.
The apparent diffusion coefficient (ADC) is an imaging biomarker providing quantitative information on the diffusion of water in biological tissues. This measurement could be of relevance in oncology drug development, but it suffers from a lack of reliability. ADC images are computed by applying a voxelwise exponential fitting to multiple diffusion-weighted MR images (DW-MRIs) acquired with different diffusion gradients. In the abdomen, respiratory motion induces misalignments in the datasets, creating visible artefacts and inducing errors in the ADC maps. We propose a multistep post-acquisition motion compensation pipeline based on 3D non-rigid registrations. It corrects for motion within each image and brings all DW-MRIs to a common image space. The method is evaluated on 10 datasets of free-breathing abdominal DW-MRIs acquired from healthy volunteers. Regions of interest (ROIs) are segmented in the right part of the abdomen and measurements are compared in the three following cases: no image processing, Gaussian blurring of the raw DW-MRIs and registration. Results show that both blurring and registration improve the visual quality of ADC images, but compared to blurring, registration yields visually sharper images. Measurement uncertainty is reduced both by registration and blurring. For homogeneous ROIs, blurring and registration result in similar median ADCs, which are lower than without processing. In a ROI at the interface between liver and kidney, registration and blurring yield different median ADCs, suggesting that uncorrected motion introduces a bias. Our work indicates that averaging procedures on the scanner should be avoided, as they remove the opportunity to perform motion correction.
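The voxelwise exponential fit mentioned above is commonly implemented as a log-linear least-squares problem over the b-values. A minimal numpy sketch (generic, not the paper's registration pipeline; the b-values and noise-free signal below are synthetic):

```python
import numpy as np

def fit_adc(signals, bvals):
    """Voxelwise mono-exponential fit S(b) = S0 * exp(-b * ADC)
    via linear least squares on log(S).
    signals: (..., nb) array of DW intensities; bvals: (nb,) in s/mm^2."""
    b = np.asarray(bvals, dtype=float)
    logs = np.log(np.clip(signals, 1e-6, None))
    # Design matrix [1, -b]; unknowns are [log S0, ADC]
    A = np.stack([np.ones_like(b), -b], axis=1)
    coef, *_ = np.linalg.lstsq(A, logs.reshape(-1, b.size).T, rcond=None)
    return coef[1].reshape(signals.shape[:-1])

bvals = np.array([0.0, 200.0, 500.0, 800.0])
true_adc = 1.5e-3                      # mm^2/s, a typical abdominal value
sig = 1000.0 * np.exp(-bvals * true_adc)
adc = fit_adc(sig[None, :], bvals)
print(adc)                             # recovers ~1.5e-3 on noise-free data
```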
A new method for joint susceptibility artefact correction and super-resolution for dMRI
Lars Ruthotto, Siawoosh Mohammadi, Nikolaus Weiskopf
Diffusion magnetic resonance imaging (dMRI) has become increasingly relevant in clinical research and neuroscience. It is commonly carried out using the ultra-fast MRI acquisition technique Echo-Planar Imaging (EPI). While offering a crucial reduction of acquisition times, two limitations of EPI are distortions due to varying magnetic susceptibilities of the object being imaged and its limited spatial resolution. In recent years, progress has been made both in susceptibility artefact correction and in increasing spatial resolution using image processing and reconstruction methods. However, so far, the interplay between both problems has not been studied, and super-resolution techniques could only be applied along one axis, the slice-select direction, limiting the potential gain in spatial resolution. In this work we describe a new method for joint susceptibility artefact correction and super-resolution in EPI-MRI that can be used to increase resolution in all three spatial dimensions and in particular increase in-plane resolution. The key idea is to reconstruct a distortion-free, high-resolution image from a number of low-resolution EPI acquisitions that are deformed in different directions. Numerical results on dMRI data of a human brain indicate that this technique has the potential to provide for the first time in-vivo dMRI at mesoscopic spatial resolution (i.e. 500μm); a spatial resolution that could bridge the gap between white-matter information from ex-vivo histology (≈1μm) and in-vivo dMRI (≈2000μm).
A dual spherical model for multi-shell diffusion imaging
Y. Rathi, O. Michailovich, K. Setsompop, et al.
Multi-shell diffusion imaging (MSDI) allows characterization of the subtle tissue properties of neurons while also providing valuable information about the ensemble average diffusion propagator. Several methods, both parametric and non-parametric, have been proposed to analyze MSDI data. In this work, we propose a hybrid model, which is non-parametric in the angular domain but parametric in the radial domain. This has the advantage of allowing an arbitrary number of fiber orientations in the angular domain, yet requiring as few as two b-value shells in the radial (q-space) domain. Thus, an extensive sampling of q-space is not required to compute the diffusion propagator. This model, which we term the "dual-spherical" model, requires estimation of two functions on the sphere to completely (and continuously) model the entire q-space diffusion signal. Specifically, we formulate the cost function so that the diffusion signal is guaranteed to monotonically decrease with b-value for a user-defined range of b-values. This is in contrast to other methods, which do not enforce such a constraint, resulting in inaccurate modeling of the diffusion signal (where the signal values could potentially increase with b-value). We also show the relation of our proposed method to diffusional kurtosis imaging and how our model extends the kurtosis model. We use the standard spherical harmonics to estimate these functions on the sphere and show the method's efficacy using synthetic and in-vivo experiments. In particular, on synthetic data, we computed the normalized mean squared error and the average angular error in the estimated orientation distribution function (ODF) and show that the proposed technique works better than existing work that only uses a parametric model for estimating the radial decay of the diffusion signal with b-value.
Multi-modal pharmacokinetic modelling for DCE-MRI: using diffusion weighted imaging to constrain the local arterial input function
Valentin Hamy, Marc Modat, Rebecca Shipley, et al.
The routine acquisition of multi-modal magnetic resonance imaging data in oncology yields the possibility of combined model fitting of traditionally separate models of tissue structure and function. In this work we hypothesise that diffusion weighted imaging data may help constrain the fitting of pharmacokinetic models to dynamic contrast enhanced (DCE) MRI data. Parameters related to tissue perfusion in the intra-voxel incoherent motion (IVIM) modelling of diffusion weighted MRI provide local information on how tissue is likely to perfuse that can be utilised to guide DCE modelling via local modification of the arterial input function (AIF). In this study we investigate, based on multi-parametric head and neck MRI of 8 subjects (4 with head and neck tumours), the benefit of incorporating parameters derived from the IVIM model within the DCE modelling procedure. Although we find the benefit of this procedure to be marginal on the data used in this work, it is conceivable that a technique of this type will be of greater use in a different application.
Intramyocellular lipid dependence on skeletal muscle fiber type and orientation characterized by diffusion tensor imaging and 1H-MRS
Sunil K. Valaparla, Feng Gao, Muhammad Abdul-Ghani, et al.
When muscle fibers are aligned with the B0 field, intramyocellular lipids (IMCL), important for providing energy during physical activity, can be resolved in proton magnetic resonance spectra (1H-MRS). Various muscles of the leg differ significantly in their proportion of fibers and angular distribution. This study determined the influence of muscle fiber type and orientation on IMCL using 1H-MRS and diffusion tensor imaging (DTI). Muscle fiber orientation relative to B0 was estimated by pennation angle (PA) measurements from DTI, providing orientation-specific extramyocellular lipid (EMCL) chemical shift data that were used for subject-specific IMCL quantification. Vastus lateralis (VL), tibialis anterior (TA) and soleus (SO) muscles of 6 healthy subjects (21-40 yrs) were studied on a Siemens 3T MRI system with a flex 4-channel coil. 1H-MRS data were acquired using stimulated echo acquisition mode (STEAM, TR=3s, TE=270ms). DTI was performed using single-shot EPI (b=600s/mm2, 30 directions, TR=4.5s, TE=82ms, and ten 5mm slices) with the center slice indexed to the MRS voxel. The average PAs measured from ROI analysis of primary eigenvectors were 19.46±5.43° for the unipennate VL, 15.65±3.73° for the multipennate SO, and 7.04±3.34° for the bipennate TA. Chemical shift (CS) was calculated using the [3cos²θ−1] dependence: 0.17±0.02 ppm for VL, 0.18±0.01 ppm for SO and 0.19±0.004 ppm for TA. IMCL-CH2 concentrations from spectral analysis were 12.77±6.3 for VL, 3.07±1.63 for SO and 0.27±0.08 mmol/kg ww for TA. Small PAs were measured in TA, and a large CS with clear separation between the EMCL and IMCL peaks was observed. Larger variations in PA were measured in VL and SO, resulting in an increased overlap of the EMCL on the IMCL peaks.
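The [3cos²θ−1] orientation dependence above can be checked numerically. A small sketch, assuming a maximum EMCL-IMCL separation of about 0.2 ppm at θ=0 (this calibration constant is an assumption for illustration, not stated in the abstract):

```python
import numpy as np

def emcl_shift_ppm(pennation_deg, max_shift_ppm=0.2):
    """Orientation-dependent EMCL-IMCL chemical shift, proportional to the
    dipolar factor (3*cos^2(theta) - 1). max_shift_ppm is an assumed
    calibration (~0.2 ppm when the fibers are parallel to B0)."""
    theta = np.radians(pennation_deg)
    return 0.5 * max_shift_ppm * (3.0 * np.cos(theta) ** 2 - 1.0)

# Pennation angles reported in the abstract; shifts land near 0.17/0.18/0.19 ppm
for muscle, pa in [("VL", 19.46), ("SO", 15.65), ("TA", 7.04)]:
    print(muscle, round(float(emcl_shift_ppm(pa)), 3))
```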
Shape
icon_mobile_dropdown
A statistical shape+pose model for segmentation of wrist CT images
Emran Mohammad Abu Anas, Abtin Rasoulian, Paul St. John, et al.
In recent years, there has been significant interest in developing a model of the wrist joint that can capture the statistics of shape and pose variations in a patient population. Such a model could have several clinical applications, such as bone segmentation, kinematic analysis and prosthesis development. In this paper, we present a novel statistical model of the wrist joint based on the analysis of shape and pose variations of the carpal bones across a group of subjects. The carpal bones are jointly aligned using a group-wise Gaussian mixture model registration technique, where principal component analysis is used to determine the mean shape and the main modes of its variation. The pose statistics are determined using principal geodesic analysis, where statistics of similarity transformations between individual subjects and the mean shape are computed in a linear tangent space. We also demonstrate an application of the model for segmentation of wrist CT images.
Statistical shape and appearance models without one-to-one correspondences
One-to-one correspondences are fundamental for the creation of classical statistical shape and appearance models. At the same time, the identification of these correspondences is the weak point of such model-based methods. Hufnagel et al.1 proposed an alternative method using correspondence probabilities instead of exact one-to-one correspondences for a statistical shape model. In this work, we extend the approach by incorporating appearance information into the model. For this purpose, we introduce a point-based representation of image data combining position and appearance information. We then pursue the concept of probabilistic correspondences and use a maximum a-posteriori (MAP) approach to derive a statistical shape and appearance model. Both the model generation and the model fitting can be expressed as a single global optimization criterion with respect to the model parameters. In a first evaluation, we show the feasibility of the proposed approach and evaluate the model generation and model-based segmentation using 2D lung CT slices.
A framework for joint image-and-shape analysis
Yi Gao, Allen Tannenbaum, Sylvain Bouix
Techniques in medical image analysis are often used for comparison or regression on image intensities. In general, the domain of an image is a given Cartesian grid. Shape analysis, on the other hand, studies the similarities and differences among spatial objects of arbitrary geometry and topology. Usually, there is no function defined on the domain of shapes. Recently, there has been a growing need for defining and analyzing functions on the shape space, and for a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis for both images and shapes. As a result, statistically significant discrepancies in both the image intensities and the underlying shapes are detected. The method is applied to both brain images from schizophrenia patients and heart images from atrial fibrillation patients.
Groupwise shape analysis of the hippocampus using spectral matching
Mahsa Shakeri, Hervé Lombaert, Sarah Lippé, et al.
The hippocampus is a prominent subcortical feature of interest in many neuroscience studies. Its subtle morphological changes often presage illness, including Alzheimer’s, schizophrenia or epilepsy. The precise location of structural differences requires a reliable correspondence between shapes across a population. In this paper, we propose an automated method for groupwise hippocampal shape analysis based on a spectral decomposition of a group of shapes to solve the correspondence problem between sets of meshes. The framework generates diffeomorphic correspondence maps across a population, which enables us to create a mean shape. Morphological changes are then located between two groups of subjects. The performance of the proposed method was evaluated on a dataset of 42 hippocampus shapes and compared with a state-of-the-art structural shape analysis approach using spherical harmonics. Difference maps between the mean shapes of two test groups demonstrate that the two approaches produced results with insignificant differences, while Gaussian curvature measures calculated between matched vertices showed a better fit and reduced variability with spectral matching.
3D shape analysis of heterochromatin foci based on a 3D spherical harmonics intensity model
Simon Eck, Stefan Wörz, Katharina Müller-Ott, et al.
We propose a novel approach for 3D shape analysis of heterochromatin foci in 3D confocal light microscopy images of cell nuclei. The approach is based on a 3D parametric intensity model and uses a spherical harmonics (SH) expansion. The model parameters including the SH coefficients are automatically determined by least squares fitting of the model to the image intensities. Based on the obtained SH coefficients, a shape descriptor is determined, which enables distinguishing heterochromatin foci based on their 3D shape to characterize compaction states of heterochromatin. Our approach has been successfully applied to real static and dynamic 3D microscopy image data.
Improved statistical power with a sparse shape model in detecting an aging effect in the hippocampus and amygdala
Moo K. Chung, Seung-Goo Kim, Stacey M. Schaefer, et al.
The sparse regression framework has been widely used in medical image processing and analysis. However, it has been rarely used in anatomical studies. We present a sparse shape modeling framework using the Laplace-Beltrami (LB) eigenfunctions of the underlying shape and show the resulting improvement in statistical power. Traditionally, the LB-eigenfunctions are used as a basis for intrinsically representing surface shapes as a form of Fourier descriptors. To reduce high-frequency noise, only the first few terms are used in the expansion and higher-frequency terms are simply thrown away. However, some lower-frequency terms may not necessarily contribute significantly to reconstructing the surfaces. Motivated by this idea, we present an LB-based method that filters out only the significant eigenfunctions by imposing a sparse penalty. For dense anatomical data such as deformation fields on a surface mesh, the sparse regression behaves like a smoothing process, which reduces the chance of false negatives. Hence the statistical power improves. The sparse shape model is then applied in investigating the influence of age on amygdala and hippocampus shapes in the normal population. The advantage of the LB sparse framework is demonstrated by showing the increased statistical power.
Keynote and Brain
icon_mobile_dropdown
Large scale digital atlases in neuroscience
M. Hawrylycz, D. Feng, C. Lau, et al.
Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.
Smoothness parameter tuning for generalized hierarchical continuous max-flow segmentation
John S. H. Baxter, Martin Rajchl, A. Jonathan McLeod, et al.
Simultaneous segmentation of multiple anatomical objects from medical images has become of increasing interest to the medical imaging community, especially when information concerning these objects, such as grouping or hierarchical relationships, can facilitate segmentation. Single-parameter Potts models have often been used to address these multi-region problems, but such parameterization is not sufficient when regions have largely different regularization requirements. These problems can be addressed by introducing smoothing hierarchies which capture grouping relationships at the expense of additional parameterization. Tuning these parameters to provide optimal segmentation accuracy efficiently is still an open problem in optimal image segmentation. This paper presents two mechanisms, one iterative and one more computationally efficient, for estimating optimal smoothness parameters for any arbitrary hierarchical model based on multi-objective optimization theory. These methods are evaluated using 5 segmentations of the brain from the IBSR database containing 35 distinct regions. The iterative estimator provides performance equivalent to the downhill simplex method, but takes significantly less computation time (93 vs. 431 minutes), allowing more complicated models to be used without prohibitive parameter tuning procedures.
Bilayered anatomically constrained split-and-merge expectation maximisation algorithm (BiASM) for brain segmentation
Dealing with pathological tissues is a very challenging task in medical brain segmentation. The presence of pathology can indeed bias the ultimate results when the chosen model is not appropriate, leading to missegmentations and errors in the model parameters. Model fit and segmentation accuracy are impaired by the lack of flexibility of the model used to represent the data. In this work, based on a finite Gaussian mixture model, we dynamically introduce extra degrees of freedom so that each anatomical tissue considered is modelled as a mixture of Gaussian components. The choice of the appropriate number of components per tissue class relies on a model selection criterion. Its purpose is to balance the complexity of the model with the quality of the model fit in order to avoid overfitting while allowing flexibility. The parameter optimisation, constrained with the additional knowledge brought by probabilistic anatomical atlases, follows the expectation maximisation (EM) framework. Split-and-merge operations bring new flexibility to the model along with a data-driven adaptation. The proposed methodology appears to improve the segmentation when pathological tissues are present, as well as the model fit, when compared to an atlas-based expectation maximisation algorithm with a unique component per tissue class. These improvements in the modelling might bring new insight into the characterisation of pathological tissues as well as into the modelling of the partial volume effect.
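The per-class component-count selection described above (balancing complexity against fit) is commonly implemented with a criterion such as BIC. A minimal 1D sketch under that assumption, using plain numpy EM on synthetic bimodal data (not the BiASM algorithm itself, which additionally uses atlas constraints and split-and-merge moves):

```python
import numpy as np

def em_gmm_1d(x, k, iters=300):
    """Plain EM for a 1D Gaussian mixture; returns (weights, means, vars, loglik)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means over the data
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[n, j] = P(component j | x_n)
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances (floored to avoid collapse)
        n = r.sum(axis=0) + 1e-12
        w, mu = n / x.size, (r * x[:, None]).sum(axis=0) / n
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / n, 1e-2)
    p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return w, mu, var, np.log(p.sum(axis=1)).sum()

def bic(loglik, k, n):
    return (3 * k - 1) * np.log(n) - 2 * loglik     # 3k-1 free parameters in 1D

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 400), rng.normal(6, 1, 400)])
scores = {k: bic(em_gmm_1d(x, k)[3], k, x.size) for k in (1, 2, 3)}
best = min(scores, key=scores.get)
print(best)
```

On this clearly bimodal sample, the criterion should prefer two components: the extra likelihood of a third component does not pay for its BIC penalty.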
Fast CEUS image segmentation based on self organizing maps
Julie Paire, Vincent Sauvage, Adelaïde Albouy-Kissi, et al.
Contrast-enhanced ultrasound (CEUS) has recently become an important technology for lesion detection and characterization. CEUS is used to investigate the perfusion kinetics in tissue over time, which relates to tissue vascularization. In this paper, we present an interactive segmentation method based on neural networks, which enables the segmentation of malignant tissue over CEUS sequences. We use Self-Organizing Maps (SOM), an unsupervised neural network, to project high-dimensional data to a low-dimensional space, named a map of neurons. The algorithm gathers the observations in clusters, respecting the topology of the observation space. This means that a notion of neighborhood between classes is defined: adjacent observations in variable space belong to the same class or to related classes after classification. Thanks to this neighborhood conservation property, and combined with suitable feature extraction, this map provides a user-friendly segmentation tool. It assists the expert in tumor segmentation with fast and easy intervention. We implement SOM on a Graphics Processing Unit (GPU) to accelerate processing. This allows a greater number of iterations, so the learning process converges more precisely; better learning quality yields better classification. Our approach allows us to identify and delineate lesions accurately. Our results show that this method markedly improves the recognition of liver lesions and opens the way for future precise quantification of contrast enhancement.
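A toy illustration of the SOM idea described above, a minimal CPU numpy sketch rather than the paper's GPU implementation (the grid size, learning schedule, and 2D toy data are arbitrary assumptions):

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: neurons on a 2D grid are pulled toward
    samples, with a Gaussian neighborhood that shrinks over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1).reshape(-1, 2).astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                              # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5                  # shrinking neighborhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        nb = np.exp(-d2 / (2 * sigma ** 2))
        weights += lr * nb[:, None] * (x - weights)
    return weights

# Two well-separated clusters; the trained map should cover both.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 0.3, (200, 2)), rng.normal(5, 0.3, (200, 2))])
w = train_som(data)
d0 = np.sqrt(((w - np.array([0.0, 0.0])) ** 2).sum(axis=1)).min()
d5 = np.sqrt(((w - np.array([5.0, 5.0])) ** 2).sum(axis=1)).min()
print(round(float(d0), 2), round(float(d5), 2))  # nearest-neuron distance to each cluster
```

The neighborhood kernel `nb` is what gives the map its topology preservation: grid-adjacent neurons are dragged together, so adjacent map units end up representing related observations.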
Classification and Texture
icon_mobile_dropdown
Spectral-spatial classification using tensor modeling for cancer detection with hyperspectral imaging
Guolan Lu, Luma Halig, Dongsheng Wang, et al.
As an emerging technology, hyperspectral imaging (HSI) combines both the chemical specificity of spectroscopy and the spatial resolution of imaging, which may provide a non-invasive tool for cancer detection and diagnosis. Early detection of malignant lesions could improve both survival and quality of life of cancer patients. In this paper, we introduce a tensor-based computation and modeling framework for the analysis of hyperspectral images to detect head and neck cancer. The proposed classification method can distinguish between malignant tissue and healthy tissue with an average sensitivity of 96.97% and an average specificity of 91.42% in tumor-bearing mice. The hyperspectral imaging and classification technology has been demonstrated in animal models and can have many potential applications in cancer research and management.
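The reported sensitivity and specificity follow the usual confusion-matrix definitions. As a small sketch of the arithmetic (the counts below are made up for illustration and are not the paper's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 96 of 99 tumor samples detected, 320 of 350 healthy samples rejected
se, sp = sens_spec(tp=96, fn=3, tn=320, fp=30)
print(round(se, 4), round(sp, 4))  # 0.9697 0.9143
```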
Texture feature analysis for prediction of postoperative liver failure prior to surgery
Amber L. Simpson, Richard K. Do, E. Patricia Parada, et al.
Texture analysis of preoperative CT images of the liver is undertaken in this study. Standard texture features were extracted from portal-venous phase contrast-enhanced CT scans of 36 patients prior to major hepatic resection and correlated to postoperative liver failure. Differences between patients with and without postoperative liver failure were statistically significant for contrast (measure of local variation), correlation (linear dependency of gray levels on neighboring pixels), cluster prominence (asymmetry), and normalized inverse difference moment (local homogeneity). Though texture features have been used to diagnose and characterize lesions, to our knowledge, parenchymal statistical variation has not been quantified and studied. We demonstrate that texture analysis is a valuable tool for quantifying liver function prior to surgery, which may help to identify and change the preoperative management of patients at higher risk for overall morbidity.
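The listed features derive from the gray-level co-occurrence matrix (GLCM). A minimal numpy sketch of two of them, contrast and the inverse difference moment (not the authors' implementation; the single-pixel offset and 8-level quantization are arbitrary choices):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def features(p):
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()        # measure of local variation
    idm = (p / (1 + (i - j) ** 2)).sum()       # inverse difference moment (homogeneity)
    return contrast, idm

flat = np.zeros((16, 16), dtype=int)           # perfectly homogeneous patch
noisy = np.random.default_rng(0).integers(0, 8, (16, 16))
c0, h0 = features(glcm(flat))
c1, h1 = features(glcm(noisy))
print(c0, h0)   # homogeneous: zero contrast, maximal homogeneity
print(c1, h1)   # noisy: higher contrast, lower homogeneity
```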
Detection and location of 127 anatomical landmarks in diverse CT datasets
Mohammad A. Dabbah, Sean Murphy, Hippolyte Pello, et al.
The automatic detection and localization of anatomical landmarks has wide application, including intra- and inter-patient registration, study location and navigation, and the targeting of specialized algorithms. In this paper, we demonstrate the automatic detection and localization of 127 anatomically defined landmarks distributed throughout the body, excluding the arms. Landmarks are defined on the skeleton, vasculature and major organs. Our approach builds on the classification forests method,1 using this classifier with simple image features which can be computed efficiently. For the training and validation of the method we have used 369 CT volumes on which radiographers and anatomists have marked ground truth (GT), that is, the locations of all defined landmarks occurring in each volume. A particular challenge is to deal with the wide diversity of datasets encountered in radiology practice. These include data from all major scanner manufacturers, different extents covering single and multiple body compartments, and truncated cardiac acquisitions, with and without contrast. Cases with stents and catheters are also represented. Validation is by a leave-one-out method, which we show can be efficiently implemented in the context of decision forest methods. Mean location accuracy of detected landmarks is 13.45mm overall; execution time averages 7s per volume on a modern server machine. We also present localization ROC analysis to characterize detection accuracy, that is, to decide whether a landmark is or is not present in a given dataset.
Unsupervised detection of abnormalities in medical images using salient features
Sharon Alpert, Pavel Kisilev
In this paper we propose a new method for abnormality detection in medical images which is based on the notion of medical saliency. The proposed method is general and is suitable for a variety of detection tasks: 1) lesions and microcalcifications (MCC) in mammographic images, 2) stenoses in angiographic images, 3) lesions found in magnetic resonance (MRI) images of the brain. The main idea of our approach is that abnormalities manifest as rare events, that is, as salient areas compared to normal tissues. We define the notion of medical saliency by combining local patch information from the lightness channel with geometric shape local descriptors. We demonstrate the efficacy of the proposed method by applying it to various modalities, and to various abnormality detection problems. Promising results are demonstrated for detection of MCC and of masses in mammographic images, detection of stenoses in angiography images, and detection of lesions in brain MRI. We also demonstrate how the proposed automatic abnormality detection method can be combined with a system that performs supervised classification of mammogram images into benign or malignant/premalignant MCCs. We use the well-known DDSM mammogram database for the experiment on MCC classification, and obtain 80% accuracy in classifying images containing premalignant MCCs versus benign ones. In contrast to supervised detection methods, the proposed approach does not rely on ground-truth markings and, as such, is very attractive and applicable for large-scale image data processing.
Recognizing surgeon's actions during suture operations from video sequences
Ye Li, Jun Ohya, Toshio Chiba, et al.
Because of the worldwide shortage of nurses, the realization of a robotic nurse that can support surgeries autonomously is very important. More specifically, the robotic nurse should be able to autonomously recognize different situations during surgery so that it can pass the necessary surgical tools to the medical doctors in a timely manner. This paper proposes and explores methods that can classify suture and tying actions during suture operations from a video sequence observing the surgery scene, including the surgeon's hands. First, the proposed method uses skin pixel detection and foreground extraction to detect the hand area. Then, interest points are randomly chosen from the hand area and their 3D SIFT descriptors are computed. A word vocabulary is built by applying hierarchical k-means to these descriptors, and the word-frequency histogram, which forms the feature space, is computed. Finally, to classify the actions, either an SVM (Support Vector Machine), the Nearest Neighbor (NN) rule in the feature space, or a method that combines a sliding window with NN is applied. We collected 53 suture videos and 53 tying videos to build the training set and to test the proposed method experimentally. The NN rule gives accuracies above 90%, outperforming the SVM. Negative actions, which differ from both suture and tying actions, are recognized with good accuracy, while the sliding-window variant did not show significant improvement for suture and tying and cannot recognize negative actions.
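As an illustrative sketch of the recognition pipeline described in this abstract (visual-word vocabulary followed by nearest-neighbor classification), the following Python/NumPy toy uses flat k-means in place of the paper's hierarchical k-means, and 2-D points as stand-ins for 3D SIFT descriptors; all data and names here are invented for illustration, not taken from the paper:

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Flat k-means word vocabulary (the paper uses hierarchical k-means)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each descriptor to its nearest center, then recompute centers
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def word_histogram(descriptors, centers):
    """Normalized word-frequency histogram of one video's descriptors."""
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

def nn_classify(hist, train_hists, train_labels):
    """1-NN rule in the histogram feature space."""
    dists = [np.linalg.norm(hist - h) for h in train_hists]
    return train_labels[int(np.argmin(dists))]

# toy stand-ins for descriptors extracted from two action classes
suture = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.2, 0.1]])
tying = np.array([[5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [4.9, 5.0]])
vocab = build_vocabulary(np.vstack([suture, tying]), k=2)
train = [word_histogram(suture, vocab), word_histogram(tying, vocab)]
pred = nn_classify(word_histogram(np.array([[0.05, 0.05]]), vocab), train, ["suture", "tying"])
```

A query video whose descriptors fall in the suture cluster yields a histogram nearest to the suture training histogram, so the 1-NN rule labels it "suture".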
Registration
MR to CT registration of brains using image synthesis
Snehashis Roy, Aaron Carass, Amod Jog, et al.
Computed tomography (CT) is the preferred imaging modality for patient dose calculation in radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated scale. Therefore, MI-based MR-CT registration may vary from scan to scan, as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration that synthesizes a CT image from the MRI using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas, and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration, and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interest. The proposed method was validated on ten brain cancer patient cases, showing 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.
Fast automatic estimation of the optimization step size for nonrigid image registration
Image registration is often used in the clinic, for example during radiotherapy and image-guided surgery, but also for general image analysis. Currently, this process is often very slow, yet for intra-operative procedures speed is crucial. For intensity-based image registration, a nonlinear optimization problem must be solved, usually by (stochastic) gradient descent. This procedure relies on a proper setting of a parameter which controls the optimization step size. This parameter is difficult to choose manually, however, since it depends on the input data, optimization metric and transformation model. Previously, the Adaptive Stochastic Gradient Descent (ASGD) method was proposed to choose the step size automatically, but it comes at high computational cost. In this paper, we propose a new computationally efficient method to automatically determine the step size, by considering the observed distribution of the voxel displacements between iterations. A relation between the step size and the expectation and variance of the observed distribution is then derived. Experiments have been performed on 3D lung CT data (19 patients) using a nonrigid B-spline transformation model. For all tested dissimilarity metrics (mean squared distance, normalized correlation, mutual information, normalized mutual information), we obtained similar accuracy to ASGD. Compared to ASGD, whose estimation time increases progressively with the number of parameters, the estimation time of the proposed method is substantially reduced to an almost constant time, from 40 seconds to no more than 1 second for 10^5 parameters.
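The core idea, picking a step size from the statistics of the voxel displacements that a unit step would induce, can be sketched as follows in Python/NumPy. This is only the general idea under invented assumptions (a mean-plus-two-standard-deviations cap on displacement magnitude); the paper derives the relation for the observed displacement distribution analytically:

```python
import numpy as np

def estimate_step_size(voxel_disp_per_unit_step, delta=1.0):
    """Given the displacement (in voxels) each sampled voxel would undergo
    for a unit step along the search direction, return a step size that
    keeps a high quantile of displacement magnitudes, modeled here as
    mean + 2*std, at about `delta` voxels."""
    mags = np.linalg.norm(voxel_disp_per_unit_step, axis=1)
    return delta / (mags.mean() + 2.0 * mags.std())

# toy 2-D displacements for three sampled voxels
disp = np.array([[0.5, 0.0], [0.0, 2.0], [1.0, 1.0]])
gamma = estimate_step_size(disp, delta=1.0)
```

The returned `gamma` scales the search direction so that the bulk of voxels move by at most about one voxel per iteration, which is the kind of data-driven normalization that removes the need for manual tuning.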
Detection and correction of inconsistency-based errors in non-rigid registration
Tobias Gass, Gabor Szekely, Orcun Goksel
In this paper we present a novel post-processing technique to detect and correct inconsistency-based errors in non-rigid registration. While deformable registration is ubiquitous in medical image computing, assessing its quality has remained an open problem. We propose a method that predicts local registration errors of existing pairwise registrations between a set of images, while simultaneously estimating corrected registrations. In the solution, the error is constrained to be small in areas of high post-registration image similarity, while local registrations are constrained to be consistent between direct and indirect registration paths. The latter is a critical property of an ideal registration process and has frequently been used to assess the performance of registration algorithms. In our work, consistency is used as a target criterion, for which we efficiently find a solution using a linear least-squares model on a coarse grid of registration control points. We show experimentally that the local errors estimated by our algorithm correlate strongly with true registration errors in experiments with known, dense ground-truth deformations. Additionally, the estimated corrected registrations consistently improve on the initial registrations in terms of average deformation error or TRE for different registration algorithms on both simulated and clinical data, independent of modality (MRI/CT), dimensionality (2D/3D) and employed primary registration method (demons/Markov random field).
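The consistency criterion used here can be illustrated with a minimal 1-D Python/NumPy sketch: composing the registrations A→B and B→C gives an indirect A→C mapping, and its disagreement with the direct A→C registration flags local error. The fields and the injected error below are invented toy data, not from the paper:

```python
import numpy as np

def compose(d_ab, d_bc, x):
    """Compose 1-D displacement fields: the indirect mapping
    T_AC(x) = T_BC(T_AB(x)), expressed in displacements."""
    return d_ab(x) + d_bc(x + d_ab(x))

# toy fields (callables for clarity); the direct A->C registration
# carries a local error of 0.2 in the right half of the domain
d_ab = lambda x: 0.5 * np.ones_like(x)
d_bc = lambda x: 0.3 * np.ones_like(x)
d_ac_direct = lambda x: 0.8 + 0.2 * (x > 5)

x = np.linspace(0.0, 10.0, 11)
inconsistency = np.abs(d_ac_direct(x) - compose(d_ab, d_bc, x))
```

The inconsistency map is zero where the registrations agree and recovers the injected 0.2 error on the right half; the paper goes further and distributes such loop inconsistencies over the individual registrations with a linear least-squares model.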
A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT
Jens N. Kaftan, Marcin Kopaczka, Andreas Wimmer, et al.
Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent when reviewing fine anatomical structures such as ribs while assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented, which allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched-filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered acceptably aligned.
A symmetric block-matching framework for global registration
Marc Modat, David M. Cash, Pankaj Daga, et al.
Most registration algorithms suffer from a directionality bias that has been shown to substantially impact subsequent analyses. Several approaches have been proposed in the literature to address this bias in the context of non-linear registration, but little work has been done in the context of global registration. We propose a symmetric approach based on a block-matching technique and least trimmed square regression. The proposed method is suitable for multi-modal registration and is robust to outliers in the input images. The symmetric framework is compared to the original asymmetric block-matching technique, outperforming it in terms of accuracy and robustness.
Atlas-based Segmentation
Statistical label fusion with hierarchical performance models
Andrew J. Asman, Alexander S. Dagley, Bennett A. Landman
Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy.
Applying the algorithm "assessing quality using image registration circuits" (AQUIRC) to multi-atlas segmentation
Multi-atlas registration-based segmentation is a popular technique in the medical imaging community, used to transform anatomical and functional information from a set of atlases onto a new patient that lacks this information. The accuracy of the projected information on the target image depends on the quality of the registrations between the atlas images and the target image. Recently, we developed a technique called AQUIRC that aims to estimate the error of a non-rigid registration at the local level and was shown to correlate with error in a simulated case. Herein, we extend this work by applying AQUIRC to atlas selection at the local level across multiple structures in cases in which non-rigid registration is difficult. AQUIRC is applied to six structures: the brainstem, optic chiasm, left and right optic nerves, and the left and right eyes. We compare the results of AQUIRC to those of popular techniques, including Majority Vote, STAPLE, Non-Local STAPLE, and Locally-Weighted Vote. We show that AQUIRC can be used as a method to combine multiple segmentations and increase the accuracy of the projected information on a target image, and is comparable to cutting-edge methods in the multi-atlas segmentation field.
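Two of the baseline fusion rules named in this abstract, Majority Vote and Locally-Weighted Vote, are simple enough to sketch directly in Python/NumPy. The toy label maps and weights below are invented; in practice the per-voxel weights would come from image similarity or an AQUIRC-style local error estimate:

```python
import numpy as np

def majority_vote(labels):
    """labels: (n_atlases, n_voxels) propagated label maps.
    Each atlas casts one equal vote per voxel."""
    n_labels = labels.max() + 1
    votes = np.zeros((n_labels, labels.shape[1]), dtype=int)
    for lab in labels:
        votes[lab, np.arange(labels.shape[1])] += 1
    return votes.argmax(axis=0)

def weighted_vote(labels, weights):
    """weights: (n_atlases, n_voxels) local reliability of each atlas
    at each voxel; more reliable atlases cast heavier votes."""
    n_labels = labels.max() + 1
    votes = np.zeros((n_labels, labels.shape[1]))
    for lab, w in zip(labels, weights):
        votes[lab, np.arange(labels.shape[1])] += w
    return votes.argmax(axis=0)

# three atlases, three voxels
labels = np.array([[0, 1, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
mv = majority_vote(labels)
# at voxel 0 the third atlas is judged far more reliable locally
weights = np.array([[0.1, 1.0, 1.0],
                    [0.1, 1.0, 1.0],
                    [1.0, 1.0, 1.0]])
wv = weighted_vote(labels, weights)
```

At voxel 0 the plain majority picks label 0 (two votes to one), while the locally-weighted vote flips to label 1 because the dissenting atlas carries most of the local weight, which is exactly the behavior a local quality estimate is meant to enable.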
Robust optic nerve segmentation on clinically acquired CT
Swetasudha Panda, Andrew J. Asman, Michael P. DeLisi, et al.
The optic nerve is a sensitive central nervous system structure, which plays a critical role in many devastating pathological conditions. Several methods have been proposed in recent years to segment the optic nerve automatically, but progress toward full automation has been limited. Multi-atlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. Herein we evaluate a framework for robust and fully automated segmentation of the optic nerves, eye globes and muscles. We employ a robust registration procedure to achieve accurate registrations despite variable voxel resolution and image field-of-view. We demonstrate the efficacy of an optimal combination of SyN registration and a recently proposed label fusion algorithm (Non-Local Spatial STAPLE) that accounts for small-scale errors in registration correspondence. On a dataset containing 30 highly varying computed tomography (CT) images of the human brain, the optimal registration and label fusion pipeline resulted in a median Dice similarity coefficient of 0.77, a symmetric mean surface distance error of 0.55 mm, and a symmetric Hausdorff distance error of 3.33 mm for the optic nerves. Simultaneously, we demonstrate the robustness of the optimal algorithm by segmenting the optic nerve structure in 316 CT scans obtained from 182 subjects from a thyroid eye disease (TED) patient population.
Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors
Mengyuan Liu, Sharmishtaa Seshamani, Lisa Harrylock, et al.
One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example, where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments atlas-based EM segmentation with a hybrid tissue segmentation scheme that seeks to learn where the atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilizes an alternative prior derived from a patch-driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, with improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.
Personalized articulated atlas with a dynamic adaptation strategy for bone segmentation in CT or CT/MR head and neck images
Sebastian Steger, Florian Jung, Stefan Wesarg
This paper presents a novel method for the joint segmentation of individual bones in CT or CT/MR head and neck images. It is based on an articulated atlas for CT images that learns the shape and appearance of the individual bones, along with the articulation between them, from annotated training instances. First, a novel dynamic adaptation strategy for the atlas is presented in order to increase the rate of successful adaptations. Then, if a corresponding CT image is available, the atlas can be enriched with personalized information about the shape, appearance and size of the individual bones from that image. Using mutual information, this personalized atlas is adapted to an MR image in order to propagate segmentations. For evaluation, a head and neck bone atlas created from 15 manually annotated training images was adapted to 58 clinically acquired head and neck CT datasets. Visual inspection showed that the automatic dynamic adaptation strategy was successful for all bones in 86% of the cases, a 22% improvement over the traditional gradient-descent-based approach. In a leave-one-out cross-validation, the average surface distance of the correctly adapted items was found to be 0.68 mm. In 20 cases, corresponding CT/MR image pairs were available and the atlas could be personalized and adapted to the MR image; this was successful in 19 cases.
Magnetic Resonance Imaging
Intra voxel analysis in MRI
Michele Ambrosanio, Fabio Baselice, Giampaolo Ferraioli, et al.
A new application of Compressive Sensing (CS) in the Magnetic Resonance Imaging (MRI) field is presented; in particular, first results of the Intra Voxel Analysis (IVA) technique are reported. The idea is to exploit CS peculiarities in order to distinguish different contributions inside the same resolution cell, instead of reconstructing images from not fully sampled k-space acquisitions. Applied to MRI, this means the possibility of estimating the presence of different tissues inside the same voxel, i.e. in one pixel of the obtained image. In other words, the method is, as far as we know, the first attempt at achieving spectroscopy-like results starting from each pixel of MR images. In particular, tissues are distinguished from each other by evaluating their spin-spin relaxation times. Within this manuscript, first results on clinical datasets are reported: a phantom made of an aqueous solution and oil, and an occipital brain lesion corresponding to a metastatic breast cancer nodule. Considering the phantom dataset, in particular the slice where the separation between water and oil occurs, the methodology is able to distinguish the two components by their different spin-spin relaxation times. With respect to the clinical dataset, focusing on a voxel of the lesion area, the approach is able to detect the presence of two tissues, namely healthy and cancerous, while in other locations outside the lesion only healthy tissue is detected. These are, of course, first results of the proposed methodology; further studies on different types of clinical datasets are required to validate the approach widely. Although few datasets have been considered, the results seem both interesting and promising.
A new application of compressive sensing in MRI
Fabio Baselice, Giampaolo Ferraioli, Flavia Lenti, et al.
Image formation in Magnetic Resonance Imaging (MRI) is the procedure that generates the image from data acquired in the so-called k-space. Many image formation techniques have been presented, working with different k-space filling strategies. Recently, Compressive Sampling (CS) has been successfully used for image formation from not fully sampled k-space acquisitions, due to its interesting property of reconstructing signals from highly underdetermined linear systems. The main advantage is a greatly reduced acquisition time. Within this manuscript, a novel application of CS to the MRI field is presented, named Intra Voxel Analysis (IVA). The idea is to achieve so-called super-resolution, i.e. the possibility of distinguishing anatomical structures smaller than the spatial resolution of the image. For this aim, multiple Spin Echo images acquired with different Echo Times are required. The output of the algorithm is an estimate of the number of contributions present in the same pixel, i.e. the number of tissues inside the same voxel, and their spin-spin relaxation times. This allows us not only to identify the number of involved tissues, but also to discriminate between them. At present, simulated case studies have been considered, with interesting and promising results. In particular, a study on the required number of images, the estimation noise and the regularization parameter of different CS algorithms has been conducted. As future work, the method will be applied to real clinical datasets in order to validate the estimates.
Novel MRI-derived quantitative biomarker for cardiac function applied to classifying ischemic cardiomyopathy within a Bayesian rule learning framework
Prahlad G. Menon, Lailonny Morris, Mara Staines, et al.
Characterization of regional left ventricular (LV) function may have application in prognosticating timely response and informing the choice of therapy in patients with ischemic cardiomyopathy. The purpose of this study is to characterize LV function through a systematic analysis of 4D (3D + time) endocardial motion over the cardiac cycle, in an effort to define objective, clinically useful metrics of pathological remodeling and declining cardiac performance, using standard cardiac MRI data for two distinct patient cohorts accessed from CardiacAtlas.org: a) MESA, a cohort of asymptomatic patients; and b) DETERMINE, a cohort of symptomatic patients with a history of ischemic heart disease (IHD) or myocardial infarction. The LV endocardium was segmented, and a signed phase-to-phase Hausdorff distance (HD) was computed at 3D uniformly spaced points tracked on segmented endocardial surface contours over the cardiac cycle. An LV-averaged index of phase-to-phase endocardial displacement (P2PD) time-histories was computed at each tracked point, using the HD computed between consecutive cardiac phases. The average and standard deviation of P2PD over the cardiac cycle were used to prepare characteristic curves for the asymptomatic and IHD cohorts. A novel biomarker, RMS-P2PD, was established as the RMS error between each individual patient's mean characteristic P2PD curve over the cardiac cycle and the cumulative P2PD characteristic of the asymptomatic cohort. The RMS-P2PD marker was tested as a cardiac-function-based feature for automatic patient classification using a Bayesian Rule Learning (BRL) framework. The RMS-P2PD biomarker indices were significantly different for the symptomatic patient and asymptomatic control cohorts (p<0.001).
BRL correctly classified 83.8% of patients from the patient and control populations, with leave-one-out cross validation, using standard indices of LV ejection fraction (LV-EF) and LV end-systolic volume index (LV-ESVI). This improved to 91.9% with the inclusion of the RMS-P2PD biomarker, congruent with improvements in both sensitivity for classifying patients and specificity for identifying asymptomatic controls from 82.6% up to 95.7%. RMS-P2PD, when contrasted against a collective normal reference, is a promising biomarker whose utility for identifying quantitative signs of pathological endocardial function merits further investigation; it may complement standard image markers as a precursor of declining cardiac performance.
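The RMS-P2PD comparison against a cohort reference reduces to a small computation, sketched below in Python/NumPy with an invented motion curve (a rectified sinusoid standing in for a real P2PD time-history); the actual curves in the paper come from tracked Hausdorff distances:

```python
import numpy as np

def rms_p2pd(patient_curve, cohort_mean_curve):
    """RMS error between a patient's phase-to-phase endocardial
    displacement (P2PD) curve and the asymptomatic cohort's mean curve,
    both sampled at the same cardiac phases."""
    p = np.asarray(patient_curve, float)
    c = np.asarray(cohort_mean_curve, float)
    return float(np.sqrt(np.mean((p - c) ** 2)))

phases = np.linspace(0.0, 1.0, 20)
normal = np.abs(np.sin(2.0 * np.pi * phases))   # illustrative normal motion
hypokinetic = 0.3 * normal                       # globally reduced motion
score_normal = rms_p2pd(normal, normal)
score_patient = rms_p2pd(hypokinetic, normal)
```

A curve matching the cohort reference scores zero, while globally reduced motion produces a clearly elevated RMS-P2PD, which is the separation the classifier exploits.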
Correction of dental artifacts within the anatomical surface in PET/MRI using active shape models and k-nearest-neighbors
Claes N. Ladefoged, Flemming L. Andersen, Sune H. Keller, et al.
In combined PET/MR, attenuation correction (AC) is performed indirectly based on the available MR image information. Metal-implant-induced susceptibility artifacts and the resulting signal voids challenge MR-based AC. Several papers acknowledge the problem in PET attenuation correction when dental artifacts are ignored, but none of them attempts to solve it. We propose a clinically feasible correction method which combines Active Shape Models (ASM) and k-Nearest-Neighbors (kNN) into a simple approach that finds and corrects the dental artifacts within the surface boundaries of the patient anatomy. ASM is used to locate a number of landmarks in the T1-weighted MR image of a new patient. We calculate a vector of offsets from each voxel within a signal void to each of the landmarks. We then use kNN to classify each voxel as belonging to an artifact or an actual signal void using this offset vector, and fill the artifact voxels with a value representing soft tissue. We tested the method using fourteen patients without artifacts and eighteen patients with dental artifacts of varying sizes within the anatomical surface of the head/neck region. Though the method wrongly filled a small volume in the bottom part of a maxillary sinus in two patients without any artifacts, due to its abnormal location, it succeeded in filling all dental artifact regions in all patients. In conclusion, we propose a method combining ASM and kNN into a simple approach which, as the results show, succeeds in finding and correcting the dental artifacts within the anatomical surface.
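The kNN step on landmark-offset vectors can be sketched in a few lines of Python/NumPy. The 2-D offsets and cluster positions below are invented for illustration; the method as described uses offsets from each void voxel to many ASM landmarks:

```python
import numpy as np

def knn_vote(query_offset, train_offsets, train_labels, k=3):
    """Label one voxel's landmark-offset vector by majority vote
    among its k nearest training vectors."""
    d = np.linalg.norm(train_offsets - query_offset, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return vals[counts.argmax()]

# hypothetical training offsets: artifact voxels cluster near one
# landmark configuration, genuine signal voids near another
train_offsets = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.2],
                          [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])
train_labels = ["artifact", "artifact", "artifact",
                "void", "void", "void"]
label = knn_vote(np.array([1.2, 0.9]), train_offsets, train_labels, k=3)
```

Voxels classified as `artifact` would then be filled with a soft-tissue value before attenuation correction, while genuine voids are left untouched.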
Poster Session
Computer-aided classification of liver tumors in 3D ultrasound images with combined deformable model segmentation and support vector machine
Myungeun Lee, Jong Hyo Kim, Moon Ho Park, et al.
In this study, we propose a computer-aided classification scheme for liver tumors in 3D ultrasound that combines deformable model segmentation with a support vector machine. For segmentation of tumors in 3D ultrasound images, a novel segmentation model was used which combines edge, region, and contour smoothness energies. Four features were then extracted from the segmented tumor: tumor edge, roundness, contrast, and internal texture. We used a support vector machine to classify these features. The performance of the developed method was evaluated on a dataset of 79 cases including 20 cysts, 20 hemangiomas, and 39 hepatocellular carcinomas. Evaluation of the results, based on the radiologist's visual scoring, showed that our proposed method produced tumor boundaries that were acceptable or better in 89.8% of cases, and achieved 93.7% accuracy in classification of cysts and hemangiomas.
Sparse appearance model-based algorithm for automatic segmentation and identification of articulated hand bones
Fitsum A. Reda, Zhigang Peng, Shu Liao, et al.
Automatic and precise segmentation of hand bones is important for many medical imaging applications. Although several previous studies address bone segmentation, automatically segmenting articulated hand bones remains a challenging task. The highly articulated nature of hand bones limits the effectiveness of atlas-based segmentation methods, and low-level information derived from the image of interest alone is insufficient for detecting bones and distinguishing the boundaries of different bones in close proximity to each other. In this study, we propose a method that combines an articulated statistical shape model and a local exemplar-based appearance model for automatically segmenting hand bones in CT. Our approach performs a hierarchical articulated shape deformation driven by a set of local exemplar-based appearance models. Specifically, for each point in the shape model, the local appearance model is described by a set of profiles of low-level image features along the normal of the shape. During segmentation, each point in the shape model is deformed to a new point whose image features are closest to the appearance model. The shape model is also constrained by an articulation model described by a set of pre-determined landmarks on the finger joints. In this way, the deformation is robust to sporadic false bony edges and is able to fit fingers with large articulations. We validated our method on 23 CT scans and achieved a segmentation success rate of ~89.7%. This result indicates that our method is viable for automatic segmentation of articulated hand bones in conventional CT.
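The per-point fitting step, matching a learned feature profile along the shape normal, can be illustrated with a 1-D Python/NumPy sketch. The profiles below are invented toy data (a step edge), and sum-of-squared-differences stands in for whatever profile cost the paper actually uses:

```python
import numpy as np

def best_profile_shift(image_profile, model_profile, search_range):
    """Slide the learned appearance profile along the shape normal and
    return the shift with the lowest sum of squared differences.
    image_profile must be len(model_profile) + 2*search_range samples long."""
    m = len(model_profile)
    best_shift, best_cost = 0, np.inf
    for s in range(-search_range, search_range + 1):
        start = s + search_range   # offset into the padded image profile
        cost = np.sum((image_profile[start:start + m] - model_profile) ** 2)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

model = np.array([0.0, 0.0, 1.0, 1.0])                        # learned edge profile
image = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # edge shifted outward
shift = best_profile_shift(image, model, search_range=2)
```

The returned shift tells the shape point how far to move along its normal; repeating this for every point, subject to the articulation constraints, drives the overall deformation.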
Joint source based analysis of multiple brain structures in studying major depressive disorder
Mahdi Ramezani, Abtin Rasoulian, Tom Hollenstein, et al.
We propose a joint Source-Based Analysis (jSBA) framework to identify brain structural variations in patients with Major Depressive Disorder (MDD). In this framework, features representing position, orientation and size (i.e. pose), shape, and local tissue composition are extracted. Subsequently, simultaneous analysis of these features within a joint analysis method is performed to generate the basis sources that show significant differences between subjects with MDD and healthy controls. Moreover, in a leave-one-out cross-validation experiment, we use a Fisher Linear Discriminant (FLD) classifier to identify individuals within the MDD group. Results show that we can classify the MDD subjects with an accuracy of 76% solely based on the information gathered from the joint analysis of pose, shape, and tissue composition in multiple brain structures.
A multi-view approach to multi-modal MRI cluster ensembles
Carlos Andrés Méndez, Paul Summers, Gloria Menegaz
It has been shown that the combination of multi-modal MRI images improves the discrimination of diseased tissue. However, the fusion of dissimilar imaging data for classification and segmentation purposes is not a trivial task: there are inherent differences in information domains, dimensionality and scales. This work proposes a multi-view consensus clustering methodology for the integration of multi-modal MR images into a unified segmentation of tumoral lesions for heterogeneity assessment. Using a variety of metrics and distance functions, this multi-view imaging approach calculates multiple vectorial dissimilarity spaces for each of the MRI modalities and uses the concepts behind cluster ensembles to combine a set of base unsupervised segmentations into a unified partition of the voxel-based data. The methodology is specifically designed for combining DCE-MRI and DTI-MR, for which a manifold learning step is implemented in order to account for the geometric constraints of the high-dimensional diffusion information.
Comparative study of two sparse multinomial logistic regression models in decoding visual stimuli from brain activity of fMRI
Sutao Song, Gongxiang Chen, Yu Zhan, et al.
Recently, sparse algorithms such as Sparse Multinomial Logistic Regression (SMLR) have been successfully applied to decoding visual information from functional magnetic resonance imaging (fMRI) data, where the contrast of visual stimuli is predicted by a classifier that combines the brain activities of voxels with sparse weights. For sparse algorithms, the goal is to learn a classifier whose weights are distributed as sparsely as possible by introducing a prior belief about the weights. There are two ways to introduce sparse prior constraints on the weights: Automatic Relevance Determination (ARD-SMLR) and a Laplace prior (LAP-SMLR). In this paper, we present a comparison between the ARD-SMLR and LAP-SMLR models in computational time, classification accuracy and voxel selection. Results showed that, for fMRI data, no significant difference was found in classification accuracy between these two methods when voxels in V1 were chosen as input features (1017 voxels in total). As for computation time, LAP-SMLR was superior to ARD-SMLR; fewer voxels survived for ARD-SMLR than for LAP-SMLR. Using simulated data, we confirmed that the classification performance of the two SMLR models is sensitive to the sparsity of the initial features: when the ratio of relevant features to initial features was larger than 0.01, ARD-SMLR outperformed LAP-SMLR; otherwise, LAP-SMLR outperformed ARD-SMLR. The simulated data also showed that ARD-SMLR was more efficient in selecting relevant features.
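The sparsity mechanism of a Laplace prior can be illustrated with a minimal Python/NumPy sketch: an L1 penalty on a binary logistic regression, fit by proximal gradient descent with soft thresholding. This is a stand-in for LAP-SMLR under simplifying assumptions (binary rather than multinomial, synthetic data, invented hyperparameters), not the paper's model:

```python
import numpy as np

def l1_logistic(X, y, lam=0.1, lr=0.1, iters=500):
    """Binary logistic regression with an L1 (Laplace-prior) penalty,
    fit by proximal gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        grad = X.T @ (p - y) / len(y)           # logistic-loss gradient
        w = w - lr * grad
        # soft thresholding: the proximal operator of the L1 penalty,
        # which drives small weights exactly to zero
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(float)   # only feature 0 carries signal
w = l1_logistic(X, y)
```

The fitted weight vector keeps a substantial weight on the one informative feature while most irrelevant weights are exactly zero, which is the behavior that makes sparse decoders interpretable at the voxel level.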
Classification of microscopy images of Langerhans islets
Jan Švihlík, Jan Kybic, David Habart, et al.
Evaluation of images of Langerhans islets is a crucial procedure for planning an islet transplantation, which is a promising diabetes treatment. This paper deals with segmentation of microscopy images of Langerhans islets and evaluation of islet parameters such as area, diameter, or volume (IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with the parameters given by the medical experts. We can conclude that the accuracy of the presented fully automatic algorithm is fully comparable with that of the medical experts.
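One common way to realise the circle-fitting step is the algebraic (Kasa) least-squares fit, sketched below in numpy. This is a generic estimator offered for illustration; the paper's exact fitting procedure may differ.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit to boundary points.
    Rewrites x^2 + y^2 = 2ax + 2by + c as a linear system in (a, b, c);
    the centre is (a, b) and the radius is sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, float(np.sqrt(c + a**2 + b**2))
```

Given the fitted radius, the islet area and diameter follow directly, which is the information compared against the experts' measurements above.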
Classification of normal and pathological aging processes based on brain MRI morphology measures
Reported studies describing normal and abnormal aging based on anatomical MRI analysis do not consider morphological brain changes, but only volumetric measures, to distinguish among these processes. This work presents a classification scheme, based on both size and shape features extracted from brain volumes, to determine different aging stages: healthy control (HC) adults, mild cognitive impairment (MCI), and Alzheimer's disease (AD). Three support vector machines were optimized and validated for the pair-wise separation of these three classes, using selected features from a set of 3D discrete compactness measures and normalized volumes of several global and local anatomical structures. Our analysis shows classification rates of up to 98.3% between HC and AD, 85% between HC and MCI, and 93.3% for MCI and AD separation. These results outperform those reported in the literature and demonstrate the viability of the proposed morphological indexes for classifying different aging stages.
Support vector machine based IS/OS disruption detection from SD-OCT images
Liyun Wang, Weifang Zhu, Jianping Liao, et al.
In this paper, we sought a method to detect the Inner Segment/Outer Segment (IS/OS) disruption region automatically. A novel support vector machine (SVM) based method is proposed for IS/OS disruption detection. The method includes two parts: training and testing. During the training phase, 7 features from the region around the fovea are calculated, and an SVM is utilized as the classification method. In the testing phase, the trained model is utilized to classify disruption and non-disruption regions of the IS/OS, and the accuracy is calculated separately. The proposed method was tested on 9 patients' SD-OCT images using a leave-one-out strategy. The preliminary results demonstrated the feasibility and efficiency of the proposed method.
Breast tissue classification in digital tomosynthesis images based on global gradient minimization and texture features
Digital breast tomosynthesis (DBT) is a pseudo-three-dimensional x-ray imaging modality proposed to decrease the effect of tissue superposition present in mammography, potentially resulting in an increase in clinical performance for the detection and diagnosis of breast cancer. Tissue classification in DBT images can be useful in risk assessment, computer-aided detection and radiation dosimetry, among other aspects. However, classifying breast tissue in DBT is a challenging problem because DBT images include complicated structures, image noise, and out-of-plane artifacts due to limited angular tomographic sampling. In this project, we propose an automatic method to classify fatty and glandular tissue in DBT images. First, the DBT images are pre-processed to enhance the tissue structures and to decrease image noise and artifacts. Second, a global smooth filter based on L0 gradient minimization is applied to eliminate detailed structures and enhance large-scale ones. Third, the similar structure regions are extracted and labeled by fuzzy C-means (FCM) classification. At the same time, the texture features are also calculated. Finally, each region is classified into different tissue types based on both intensity and texture features. The proposed method is validated using five patient DBT images using manual segmentation as the gold standard. The Dice scores and the confusion matrix are utilized to evaluate the classified results. The evaluation results demonstrated the feasibility of the proposed method for classifying breast glandular and fat tissue on DBT images.
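The fuzzy C-means (FCM) labeling step can be sketched as generic FCM on per-pixel feature vectors; this is the textbook algorithm, not the authors' full DBT pipeline, and the parameter names are ours.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means on feature vectors X (n_samples x n_features).
    Alternates between fuzzy-weighted centroid updates and membership
    updates; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # random initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))           # closer center -> higher membership
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

In the pipeline described above, the memberships would label similar-structure regions, which are then assigned to tissue types using intensity and texture features.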
A minimum spanning forest based hyperspectral image classification method for cancerous tissue detection
Robert Pike, Samuel K. Patton, Guolan Lu, et al.
Hyperspectral imaging is a developing modality for cancer detection. The rich information associated with hyperspectral images allows for the discrimination between cancerous and healthy tissue. This study focuses on a new method that incorporates support vector machines into a minimum spanning forest algorithm for differentiating cancerous tissue from normal tissue. Spectral information was gathered to test the algorithm. Animal experiments were performed and hyperspectral images were acquired from tumor-bearing mice. In vivo imaging experimental results demonstrate the applicability of the proposed classification method for cancer tissue classification on hyperspectral images.
Protein crystallization image classification with elastic net
Jeffrey Hung, John Collins, Mehari Weldetsion, et al.
Protein crystallization plays a crucial role in pharmaceutical research by supporting the investigation of a protein’s molecular structure through X-ray diffraction of its crystal. Due to the rare occurrence of crystals, images must be manually inspected, a laborious process. We develop a solution incorporating a regularized, logistic regression model for automatically evaluating these images. Standard image features, such as shape context, Gabor filters and Fourier transforms, are first extracted to represent the heterogeneous appearance of our images. The proposed solution then utilizes Elastic Net to select relevant features. Its L1-regularization mitigates the effects of our large dataset, and its L2-regularization ensures proper operation when the feature number exceeds the sample number. A two-tier cascade classifier based on naïve Bayes and random forest algorithms categorizes the images. In order to validate the proposed method, we experimentally compare it with naïve Bayes, linear discriminant analysis, random forest, and their two-tier cascade classifiers, by 10-fold cross validation. Our experimental results demonstrate a 3-category accuracy of 74%, outperforming other models. In addition, Elastic Net better reduces the false negatives responsible for a high, domain-specific risk. To the best of our knowledge, this is the first attempt to apply Elastic Net to classifying protein crystallization images. Performance measured on a large pharmaceutical dataset also fared well in comparison with those presented in previous studies, while the reduction of the high-risk false negatives is promising.
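The elastic-net penalty itself, combining the L1 (sparsity) and L2 (shrinkage) terms mentioned above, can be sketched for a least-squares model with a few lines of proximal gradient descent. This is an illustrative toy, not the paper's feature-selection code.

```python
import numpy as np

def elastic_net(X, y, l1=0.1, l2=0.1, lr=0.01, iters=2000):
    """Least-squares elastic net by proximal gradient descent: the ridge
    (L2) term is folded into the smooth gradient; the lasso (L1) term is
    applied via soft-thresholding, zeroing out irrelevant features."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n + l2 * w            # smooth part + ridge
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)
    return w
```

When only one feature carries signal, the fit keeps that coefficient (slightly shrunken by the two penalties) and sets most of the rest exactly to zero, which is the feature-selection behavior exploited above.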
Example based lesion segmentation
Snehashis Roy, Qing He, Aaron Carass, et al.
Automatic and accurate detection of white matter lesions is a significant step toward understanding the progression of many diseases, like Alzheimer’s disease or multiple sclerosis. Multi-modal MR images are often used to segment T2 white matter lesions that can represent regions of demyelination or ischemia. Some automated lesion segmentation methods describe the lesion intensities using generative models, and then classify the lesions with some combination of heuristics and cost minimization. In contrast, we propose a patch-based method, in which lesions are found using examples from an atlas containing multi-modal MR images and corresponding manual delineations of lesions. Patches from subject MR images are matched to patches from the atlas and lesion memberships are found based on patch similarity weights. We experiment on 43 subjects with MS, whose scans show various levels of lesion-load. We demonstrate significant improvement in Dice coefficient and total lesion volume compared to a state of the art model-based lesion segmentation method, indicating more accurate delineation of lesions.
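The core of the patch-matching step can be sketched as a similarity-weighted average of atlas labels, in the spirit of non-local-means label fusion. This is a sketch of the general idea with our own names, not the paper's exact membership rule.

```python
import numpy as np

def patch_membership(subj_patches, atlas_patches, atlas_labels, h=1.0):
    """Soft lesion membership for each subject patch: a similarity-weighted
    average of atlas patch labels, with Gaussian weights on the squared
    patch distance (bandwidth h)."""
    d2 = ((subj_patches[:, None, :] - atlas_patches[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / h**2)                       # patch-similarity weights
    w /= w.sum(axis=1, keepdims=True)            # normalize per subject patch
    return w @ atlas_labels                      # weighted label average
```

A subject patch resembling lesion patches in the atlas receives a membership near 1; one resembling healthy-tissue patches receives a membership near 0.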
Classification of essential tremors (ET) disorder and healthy controls using a masking technique
Rakshatha P. Krishnamurthy, Neelam Sinha, Jitender Saini, et al.
In this study, a novel method is proposed to build a Resting State fMRI (RS fMRI) classifier that discriminates between healthy controls and patients with Essential Tremor (ET) disorder. Distinguishing healthy controls from diseased subjects using RS fMRI is particularly useful because certain patients suffering from neuropsychiatric disorders may be unable to perform the tasks specified for acquisition. We consider ET specifically because fMRI of this disorder is among the least explored, and hence its functionally affected regions are not clearly known. The Regional Homogeneity (ReHo) feature was extracted for healthy controls and ET patients as a mapping of brain function during the resting state. A one-sample t-test was performed for both the normal and patient data, and regions with significant ReHo values were procured for both. The t-test maps of the two groups, consisting of clusters with significant ReHo values, were used as masks on the ReHo maps of the respective groups. These masked ReHo maps were used as feature input to a linear classifier. The performance of the proposed scheme for classifying healthy controls and ET was evaluated, and the resulting generalization rate of the classifier was 100% for a dataset consisting of 11 samples in each group. The performance of the proposed masking technique remains to be evaluated on a dataset with a larger number of samples for ET and healthy controls.
Variability sensitivity of dynamic texture based recognition in clinical CT data
Roland Kwitt, Sharif Razzaque, Jeffrey Lowell, et al.
Dynamic texture recognition using a database of template models has recently shown promising results for the task of localizing anatomical structures in Ultrasound video. In order to understand its clinical value, it is imperative to study the sensitivity with respect to inter-patient variability as well as sensitivity to acquisition parameters such as Ultrasound probe angle. Fully addressing patient and acquisition variability issues, however, would require a large database of clinical Ultrasound from many patients, acquired in a multitude of controlled conditions, e.g., using a tracked transducer. Since such data is not readily attainable, we advocate an alternative evaluation strategy using abdominal CT data as a surrogate. In this paper, we describe how to replicate Ultrasound variabilities by extracting subvolumes from CT and interpreting the image material as an ordered sequence of video frames. Utilizing this technique, and based on a database of abdominal CT from 45 patients, we report recognition results on an organ (kidney) recognition task, where we try to discriminate kidney subvolumes/videos from a collection of randomly sampled negative instances. We demonstrate that (1) dynamic texture recognition is relatively insensitive to inter-patient variation while (2) viewing angle variability needs to be accounted for in the template database. Since naively extending the template database to counteract variability issues can lead to impractical database sizes, we propose an alternative strategy based on automated identification of a small set of representative models.
Multi-view learning based robust collimation detection in digital radiographs
Hongda Mao, Zhigang Peng, Frank Dennerlein, et al.
In X-ray examinations, it is essential that radiographers carefully collimate to the appropriate anatomy of interest to minimize the overall integral dose to the patient. The shadow regions are not diagnostically meaningful and can impair the overall image quality. Thus, it is desirable to detect the collimation and exclude the shadow regions to optimize image display. However, due to the large variability of collimated images, collimation detection remains a challenging task. In this paper, we observe that a region of interest (ROI) in an image, such as the collimation, can be described by two distinct views: a cluster of pixels within the ROI and the corners of the ROI. Based on this observation, we propose a robust multi-view learning based strategy for collimation detection in digital radiography. Specifically, one view comes from a random-forest learning based region detector, which provides pixel-wise image classification, labeling each pixel as either in-collimation or out-of-collimation. The other view comes from a discriminative, learning-based landmark detector, which detects the corners and localizes the collimation within the image. Nevertheless, given the huge variability of collimated images, the detection from either view alone may not be perfect. Therefore, we adopt an adaptive view-fusion step to obtain the final detection by combining region and corner detection. We evaluate our algorithm on a database of 665 X-ray images with a wide variety of types and dosages and obtain a high detection accuracy (95%), compared with using the region detector alone (87%) or the landmark detector alone (83%).
Adaptive temporal smoothing of sinogram data using Karhunen-Loeve (KL) transform for myocardial blood flow estimation from dose-reduced dynamic CT
Dimple Modgil, Adam M. Alessio, Michael D. Bindschadler, et al.
There is a strong need for an accurate and easily available technique for myocardial blood flow (MBF) estimation to aid in the diagnosis and treatment of coronary artery disease (CAD). Dynamic CT would provide a quick and widely available technique to do so. However, its biggest limitation is the dose imparted to the patient. We are exploring techniques to reduce the patient dose by either reducing the tube current or by reducing the number of temporal frames in the dynamic CT sequence. Both of these dose reduction techniques result in very noisy data. In order to extract the myocardial blood flow information from the noisy sinograms, we have been looking at several data-domain smoothing techniques. In our previous work [1], we explored the sinogram restoration technique in both the spatial and temporal domain. In this work, we explore the use of the Karhunen-Loeve (KL) transform to provide temporal smoothing in the sinogram domain. This technique has been applied previously to dynamic image sequences in PET [2, 3]. We find that the cluster-based KL transform method yields noticeable improvement in the smoothness of time attenuation curves (TAC). We make use of a quantitative blood flow model to estimate MBF from these TACs and determine which smoothing method provides the most accurate MBF estimates.
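The KL (principal-component) smoothing step can be sketched as projecting each time-attenuation curve onto the leading eigenvectors of the temporal covariance and reconstructing. This illustrates only the transform itself, under our own naming, with no clustering and no sinogram handling.

```python
import numpy as np

def kl_smooth(curves, n_keep=3):
    """Karhunen-Loeve temporal smoothing of a set of curves
    (n_curves x n_timepoints): keep only the leading eigenvectors of the
    temporal covariance, discarding the noise-dominated components."""
    mean = curves.mean(axis=0)
    Xc = curves - mean
    cov = Xc.T @ Xc / len(curves)                # temporal covariance matrix
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    basis = vecs[:, -n_keep:]                    # leading KL components
    return mean + (Xc @ basis) @ basis.T
```

Because the underlying TACs share a low-dimensional temporal structure while the noise spreads over all components, truncating the basis suppresses noise far more than signal.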
Implementation of compressive sensing for preclinical cine-MRI
Elliot Tan, Ming Yang, Lixin Ma, et al.
This paper presents a practical implementation of Compressive Sensing (CS) for a preclinical MRI machine to acquire randomly undersampled k-space data in cardiac function imaging applications. First, random undersampling masks were generated based on Gaussian, Cauchy, wrapped Cauchy and von Mises probability distribution functions by the inverse transform method. The best masks for undersampling ratios of 0.3, 0.4 and 0.5 were chosen for animal experimentation, and were programmed into a Bruker Avance III BioSpec 7.0T MRI system through method programming in ParaVision. Three undersampled mouse heart datasets were obtained using a fast low angle shot (FLASH) sequence, along with a control undersampled phantom dataset. ECG and respiratory gating was used to obtain high quality images. After CS reconstructions were applied to all acquired data, resulting images were quantitatively analyzed using the performance metrics of reconstruction error and Structural Similarity Index (SSIM). The comparative analysis indicated that CS reconstructed images from MRI machine undersampled data were indeed comparable to CS reconstructed images from retrospective undersampled data, and that CS techniques are practical in a preclinical setting. The implementation achieved 2 to 4 times acceleration for image acquisition and satisfactory quality of image reconstruction.
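The mask-generation step described above can be sketched for the Gaussian case: phase-encode lines are drawn by inverse-transform sampling from a density concentrated on low spatial frequencies. This is a minimal sketch with illustrative parameters, not the authors' ParaVision implementation.

```python
import numpy as np

def gaussian_undersampling_mask(n_lines, ratio, sigma=0.25, seed=0):
    """Variable-density k-space line mask: draw phase-encode lines by
    inverse-transform sampling from a Gaussian centred on the low
    frequencies until the requested undersampling ratio is reached."""
    rng = np.random.default_rng(seed)
    x = (np.arange(n_lines) - n_lines / 2) / n_lines     # normalized frequency
    pdf = np.exp(-x**2 / (2 * sigma**2))
    cdf = np.cumsum(pdf) / pdf.sum()
    n_keep = int(round(ratio * n_lines))
    mask = np.zeros(n_lines, dtype=bool)
    while mask.sum() < n_keep:                           # draw unique lines
        idx = np.searchsorted(cdf, rng.random())         # inverse-transform step
        mask[min(idx, n_lines - 1)] = True
    return mask
```

The resulting mask samples the k-space centre densely (where most image energy lives) and the periphery sparsely, which is the standard prerequisite for CS reconstruction.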
Analytic heuristics for a fast DSC-MRI
M. Virgulin, M. Castellaro, F. Marcuzzi, et al.
Hemodynamics of the human brain may be studied with Dynamic Susceptibility Contrast MRI (DSC-MRI) imaging. The sequence of volumes obtained exhibits a strong spatiotemporal correlation that can be exploited to predict which measurements will contribute most of the new information in the next frames. In general, sampling speed is an important issue in many applications of MRI, so the focus of much current research is on methods that reduce the number of measurement samples needed for each frame without degrading image quality. For DSC-MRI, frequency under-sampling of a single frame can be exploited to make more frequent space or time acquisitions, thus increasing the time resolution and allowing the analysis of fast dynamics not yet observed. Generally (and also for MRI), the recovery of sparse signals has been achieved by Compressed Sensing (CS) techniques, which are based on statistical properties rather than deterministic ones. By studying analytically the compound Fourier+Wavelet transform involved in the processes of reconstruction and sparsification of MR images, we propose a deterministic technique for rapid MRI, exploiting the relations between the wavelet sparse representation of the recovered image and the frequency samples. We give results on real images and on artificial phantoms with added noise, showing the superiority of the method with respect both to classical Iterative Hard Thresholding (IHT) and to Location Constraint Approximate Message Passing (LCAMP) reconstruction algorithms.
Resolving complex fibre architecture by means of sparse spherical deconvolution in the presence of isotropic diffusion
High angular resolution diffusion imaging (HARDI) improves upon more traditional diffusion tensor imaging (DTI) in its ability to resolve the orientations of crossing and branching neural fibre tracts. The HARDI signals are measured over a spherical shell in q-space, and are usually used as an input to q-ball imaging (QBI) which allows estimation of the diffusion orientation distribution functions (ODFs) associated with a given region of interest. Unfortunately, the partial nature of single-shell sampling imposes limits on the estimation accuracy. As a result, the recovered ODFs may not possess sufficient resolution to reveal the orientations of fibre tracts which cross each other at acute angles. A possible solution to the problem of limited resolution of QBI is provided by means of spherical deconvolution, a particular instance of which is sparse deconvolution. However, while capable of yielding high-resolution reconstructions over spatial locations corresponding to white matter, such methods tend to become unstable when applied to anatomical regions with a substantial content of isotropic diffusion. To resolve this problem, a new deconvolution approach is proposed in this paper. Apart from being uniformly stable across the whole brain, the proposed method allows one to quantify the isotropic component of cerebral diffusion, which is known to be a useful diagnostic measure by itself.
Adaptive multi-scale total variation minimization filter for low dose CT imaging
Alexander Zamyatin, Gene Katsevich, Roman Krylov, et al.
In this work we revisit the TV filter and propose an improved version tailored to diagnostic CT purposes. We revise the TV cost function, which results in a symmetric gradient function and leads to a more natural noise texture. We apply a multi-scale approach to resolve the noise-grain issue in CT images. We examine noise texture, granularity, and loss of low contrast in the test images. We also discuss potential acceleration by Nesterov and Conjugate Gradient methods.
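For reference, the baseline single-scale TV filter being improved upon can be sketched as gradient descent on the usual ROF-style objective with a smoothed isotropic TV term. This is the generic filter, not the authors' revised cost function or multi-scale scheme.

```python
import numpy as np

def tv_denoise(img, lam=0.15, step=0.2, iters=200, eps=1e-6):
    """Gradient descent on 0.5*||u - f||^2 + lam*TV(u), with the TV term
    smoothed by eps so its gradient (a curvature/divergence term) is
    defined everywhere.  Periodic boundaries via np.roll for brevity."""
    u = img.astype(float).copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u                  # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag                      # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)              # data term + TV descent
    return u
```

On a noisy piecewise-constant image the filter suppresses noise in flat regions while largely preserving the edge, which is the behavior (and the source of the noise-texture issues) discussed above.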
Semi-supervised clustering for parcellating brain regions based on resting state fMRI data
Many unsupervised clustering techniques have been adopted for parcellating brain regions of interest into functionally homogeneous subregions based on resting state fMRI data. However, unsupervised clustering techniques are not able to take advantage of existing knowledge of the functional neuroanatomy readily available from studies of cytoarchitectonic parcellation or meta-analysis of the literature. In this study, we propose a semi-supervised clustering method for parcellating the amygdala into functionally homogeneous subregions based on resting state fMRI data. In particular, the semi-supervised clustering is implemented under the framework of graph partitioning, and adopts prior information and spatially consistent constraints to obtain a spatially contiguous parcellation result. The graph partitioning problem is solved using an efficient algorithm similar to the well-known weighted kernel k-means algorithm. Our method has been validated for parcellating the amygdala into 3 subregions based on resting state fMRI data of 28 subjects. The experimental results demonstrate that the proposed method is more robust than unsupervised clustering and able to parcellate the amygdala into centromedial, laterobasal, and superficial parts with improved functional homogeneity compared with the cytoarchitectonic parcellation result. The validity of the parcellation results is also supported by distinctive functional and structural connectivity patterns of the subregions and high consistency between coactivation patterns derived from a meta-analysis and functional connectivity patterns of the corresponding subregions.
Sparse and shrunken estimates of MRI networks in the brain and their influence on network properties
Rafael Romero-Garcia, Line H. Clemmensen
Estimation of morphometric relationships between cortical regions is a widely used approach to identify and characterize structural connectivity. The elevated number of regions that can be considered in a whole-brain correlation analysis might lead to overfitted models; however, overfitting can be avoided by using regularization methods. Four regularization methods were examined: two with shrinkage (Ridge and Schäfer’s shrinkage), one with sparsity (Lasso), and one with both shrinkage and sparsity (Elastic net). We found that, as expected, non-regularized correlations had low variability when a scarce number of variables was considered, but a slight increase in the number of variables led to an increase of variance of several orders of magnitude. The regularized approaches, on the other hand, showed more stable results with relatively low variance at the expense of a little bias; the shrunken estimates had lower variance than the sparse estimates, and the different regularizations resulted in different correlation estimates as well as network properties. Interestingly, topological properties such as local and global efficiency estimated in networks constructed from traditional non-regularized correlations also showed higher variability when compared to those from regularized networks. Our findings suggest that a population-based connectivity study can achieve a more robust description of cortical topology through regularization of the correlation estimates.
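The shrinkage idea can be sketched as linear shrinkage of the sample correlation matrix toward the identity. This fixed-alpha toy stands in for Schäfer-type shrinkage, where the shrinkage intensity would normally be estimated from the data rather than fixed.

```python
import numpy as np

def shrunk_correlation(X, alpha=0.3):
    """Linear shrinkage of the sample correlation matrix toward the
    identity target: R_s = (1 - alpha) * R + alpha * I.  Off-diagonal
    entries are pulled toward zero, trading a little bias for variance."""
    R = np.corrcoef(X, rowvar=False)
    return (1 - alpha) * R + alpha * np.eye(R.shape[0])
```

With few samples and many regions, the raw off-diagonal correlations scatter widely around zero; the shrunken estimates stay closer to the true (here, zero) values while the diagonal remains exactly 1.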
Frequency-selective quantification of skin perfusion behavior during allergic testing using photoplethysmography imaging
Nikolai Blanik, Claudia Blazek, Carina Pereira, et al.
Diagnosis of allergic immediate-type reactions depends on the visual assessment of the attending physician. With our novel non-obtrusive, camera-based photoplethysmography imaging (PPGI) setup, perfusion in the allergic testing area can be quantified and the results displayed with spatial resolution in functional mappings. Thereby, each PPGI camera pixel can be regarded as a classical (skin-based) reflective-mode PPG sensor. An algorithm for post-processing of the collected PPGI video sequences was developed to transfer black-and-white PPGI images into virtual 3D perfusion maps. For the first time, frequency-selective perfusion quantification was assessed. For the presented evaluation, PPGI data from our clinical study were used [1]. For this purpose, different concentrations of histamine dilutions were administered to 27 healthy volunteers. Our results show clear trends: an increase in heartbeat-synchronous perfusion rhythms and, simultaneously, a decrease of lower-frequency vasomotor rhythms in these areas. These results, published for the first time, allow new insight into the distribution of skin perfusion dynamics and demonstrate the intuitive clinical usability of the proposed system.
Characterizing human retinotopic mapping with conformal geometry: a preliminary study
Duyan Ta, Jie Shi, Brian Barton, et al.
Functional magnetic resonance imaging (fMRI) has been widely used to measure the retinotopic organization of early visual cortex in the human brain. Previous studies have identified multiple visual field maps (VFMs) based on statistical analysis of fMRI signals, but the resulting geometry has not been fully characterized with mathematical models. Here we test whether VFMs V1 and V2 obey the least restrictive of all geometric mappings; that is, whether they are angle-preserving and therefore maintain conformal mapping. We measured retinotopic organization in individual subjects using standard traveling-wave fMRI methods. Visual stimuli consisted of black and white, drifting checkerboards comprising rotating wedges and expanding rings to measure the cortical representations of polar angle and eccentricity, respectively. These representations were then projected onto a 3D cortical mesh of each hemisphere. By generating a conformally mapped unit disk of the VFMs using spherical stereographic projection and computing the parameterized coordinates of the eccentricity and polar angle gradients, we computed Beltrami coefficients to check whether the mapping from the visual field to the V1 and V2 cortical representations is conformal. We find that V1 and V2 exhibit local conformality. Our analysis of the Beltrami coefficient shows that selected regions of V1 and V2 that contain reasonably smooth eccentricity and polar angle gradients do show significant local conformality, warranting further investigation of this approach for analysis of early and higher visual cortex. These results suggest that such a mathematical model can be used to characterize the early VFMs in human visual cortex.
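The angle-preservation test used above can be phrased in terms of the standard Beltrami coefficient from quasiconformal theory (standard definitions, not necessarily the paper's exact notation):

```latex
\mu(z) \;=\; \frac{\partial f / \partial \bar z}{\partial f / \partial z},
\qquad
\frac{\partial f}{\partial \bar z} = \frac{1}{2}\!\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right),
\quad
\frac{\partial f}{\partial z} = \frac{1}{2}\!\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right)
```

A mapping $f$ is conformal exactly where $\mu$ vanishes (the Cauchy-Riemann equations), so $|\mu|$ serves as a pointwise measure of the deviation from angle preservation.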
Fusion of digital breast tomosynthesis images via wavelet synthesis for improved lesion conspicuity
Harishwaran Hariharan, Victor Pomponiu, Bin Zheng, et al.
Full-field digital mammography (FFDM) is the most common screening procedure for detecting early breast cancer. However, due to complications such as overlapping breast tissue in projection images, the efficacy of FFDM reading is reduced. Recent studies have shown that digital breast tomosynthesis (DBT), in combination with FFDM, increases detection sensitivity considerably while decreasing false-positive and recall rates. There is great interest in creating diagnostically accurate 2-D interpretations from the DBT slices. Most 2-D syntheses rely on visualizing the maximum intensities (brightness) from each slice through different methods. We propose a wavelet-based fusion method, where we focus on preserving holistic information from larger structures such as masses while adding high-frequency information that is relevant and helpful for diagnosis. This method enables the spatial generation of a 2-D image from a series of DBT images, each of which contains both smooth and coarse structures distributed in the wavelet domain. We believe that the wavelet-synthesized images, generated from their DBT image datasets, provide radiologists with improved lesion and micro-calcification conspicuity as compared with FFDM images. The potential impact of this fusion method is (1) conception of a device-independent, data-driven modality that increases the conspicuity of lesions, thereby facilitating early detection and potentially reducing recall rates; (2) reduction of the accompanying radiation dose to the patient.
Smoothing fields of weighted collections with applications to diffusion MRI processing
Gunnar A Sigurdsson, Jerry L. Prince
Using modern diffusion weighted magnetic resonance imaging protocols, the orientations of multiple neuronal fiber tracts within each voxel can be estimated. Further analysis of these populations, including application of fiber tracking and tract segmentation methods, is often hindered by lack of spatial smoothness of the estimated orientations. For example, a single noisy voxel can cause a fiber tracking method to switch tracts in a simple crossing tract geometry. In this work, a generalized spatial smoothing framework that handles multiple orientations as well as their fractional contributions within each voxel is proposed. The approach estimates an optimal fuzzy correspondence of orientations and fractional contributions between voxels and smooths only between these correspondences. Avoiding a requirement to obtain exact correspondences of orientations reduces smoothing anomalies due to propagation of erroneous correspondences around noisy voxels. Phantom experiments are used to demonstrate both visual and quantitative improvements in postprocessing steps. Improvement over smoothing in the measurement domain is also demonstrated using both phantoms and in vivo human data.
Non-local total variation method for despeckling of ultrasound images
Despeckling of ultrasound images, a very active research topic in medical image processing, plays an important or even indispensable role in subsequent ultrasound image processing. The non-local total variation (NLTV) method has been widely applied to denoising images corrupted by Gaussian noise, but it cannot provide satisfactory restoration results for ultrasound images corrupted by speckle noise. To address this problem, a novel non-local total variation despeckling method is proposed for speckle reduction. In the proposed method, the non-local gradient is computed on the images restored by the optimized Bayesian non-local means (OBNLM) method and is introduced into the total variation method to suppress speckle in ultrasound images. Comparisons of restoration performance are made between the proposed method and such state-of-the-art despeckling methods as the squeeze box filter (SBF), the non-local means (NLM) method and the OBNLM method. Quantitative comparisons based on synthetic speckled images show that the proposed method can provide higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than the compared despeckling methods. Subjective visual comparisons based on synthetic and real ultrasound images demonstrate that the proposed method outperforms the other compared algorithms, achieving better noise reduction, artifact avoidance, and edge and texture preservation.
Stent enhancement using a locally adaptive unsharp masking filter in digital x-ray fluoroscopy
Yuhao Jiang, Eranda Ekanayake
Low-exposure X-ray fluoroscopy is used to guide some complicated interventional procedures. Due to the inherent high levels of noise, improving the visibility of interventional devices such as stents would greatly benefit these procedures. Stents, which are made up of tiny steel wires, also suffer from contrast dilution by the large pixels of flat-panel detectors. A novel adaptive unsharp masking filter has been developed to improve stent contrast in real-time applications. In unsharp masking, the background is estimated and subtracted from the original input image to create a foreground image containing the objects of interest; the background estimator is therefore critical. In this study, orientation filter kernels are used as the background estimator. To make the process simple and fast, the kernels average along a line of pixels, with a high orientation resolution of 18°. A nonlinear operator is then used to combine the information from the images generated by convolving the original background and noise-only images with the orientation filters. A computerized Monte Carlo simulation followed by an ROC study is used to identify the best nonlinear operator. We then apply the unsharp masking filter to images with stents present. It is shown that the locally adaptive unsharp masking filter is effective for improving stent visibility in interventional fluoroscopy. We also apply a spatio-temporal channelized human observer model to quantitatively optimize and evaluate the filter.
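The basic unsharp-masking operation at the core of the filter can be sketched with a simple box-blur background estimate and a fixed gain. The paper's filter instead uses oriented line kernels and locally adaptive gain; this numpy toy shows only the subtract-and-amplify mechanism.

```python
import numpy as np

def unsharp_mask(img, kernel=5, gain=2.0):
    """Basic unsharp masking: estimate the background with a box blur,
    subtract it to isolate the foreground (e.g. thin stent wires), and
    add the foreground back with amplification."""
    pad = kernel // 2
    padded = np.pad(img, pad, mode='edge')
    bg = np.zeros(img.shape, dtype=float)
    for dy in range(kernel):                     # accumulate the box-filter sum
        for dx in range(kernel):
            bg += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    bg /= kernel**2
    return bg + gain * (img - bg)                # amplified foreground + background
```

On a synthetic image containing a one-pixel-wide bright line, the line's contrast against its neighborhood increases while flat background regions are left unchanged.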
A local technique for contrast preserving medical image enhancement
Suresh Raj Pant, Deepak Ghimire, Keunho Park, et al.
This paper presents a method for contrast enhancement of medical images that preserves local image details. The proposed method combines contrast-limited adaptive histogram equalization (CLAHE) with local contrast-preserving dynamic range compression, controlling the amplification while preserving the local contrast of the image. The range of the gain parameter for local contrast enhancement varies from one image to another, and the local contrast enhancement at each pixel depends on the edge density of the corresponding pixel neighborhood. We performed several experiments based on different image quality measures. Our proposed method provides more information about image detail, which is relevant to medical diagnosis. The experimental results show that the output image quality of the proposed method is better than that of CLAHE.
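One plausible realization of an edge-density-dependent gain is to reduce amplification where the local gradient magnitude is high, so strong edges are not over-amplified. A sketch under assumed parameter names and an assumed (decreasing) gain-versus-edge-density relation, not the authors' exact formula:

```python
import numpy as np

def local_gain(img, g_min=1.0, g_max=3.0):
    """Per-pixel enhancement gain driven by local edge strength:
    flat regions get the full gain g_max, edge-dense regions are
    attenuated toward g_min so local contrast is preserved."""
    gy, gx = np.gradient(img.astype(float))
    edge = np.hypot(gx, gy)
    density = edge / (edge.max() + 1e-12)   # normalized edge density in [0, 1]
    return g_max - (g_max - g_min) * density
```

The gain map would then multiply the detail (high-pass) component before it is added back to the compressed base image.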
Robust isotropic super-resolution by maximizing a Laplace posterior for MRI volumes
Xian-Hua Han, Yutaro Iwamoto, Akihiko Shiino, et al.
Magnetic resonance imaging can only acquire volume data with finite resolution due to various factors. In particular, the resolution in one direction (such as the slice direction) is much lower than in the others (such as the in-plane directions), yielding unrealistic visualizations. This study explores reconstructing isotropic-resolution MRI volumes from three orthogonal scans. The proposed super-resolution reconstruction is formulated as a maximum a posteriori (MAP) problem, which relies on the generation model of the acquired scans from the unknown high-resolution (HR) volume. Conventionally, the ensemble of deviations of the reconstructed HR volume from the available low-resolution (LR) ones is modeled in the MAP as a Gaussian distribution, which usually leaves some noise and artifacts in the reconstructed HR volume. Therefore, this paper investigates a robust super-resolution that models the deviation set as a Laplace distribution, which assumes sparsity in the deviation ensemble based on the insight that large deviations appear only around some unexpected regions. In addition, to achieve a reliable HR MRI volume, we integrate priors such as bilateral total variation (BTV) and non-local means (NLM) into the proposed MAP framework for suppressing artifacts and enriching visual detail. We validate the proposed robust SR strategy using mouse MRI data with high resolution in two directions and low resolution in one direction, imaged in three orthogonal scans: axial, coronal and sagittal. Experiments verify that the proposed strategy achieves much better HR MRI volumes than the conventional MAP method, even with a very high magnification factor of 10.
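The difference between the Gaussian and Laplace deviation models shows up in the data-term gradient: squared error feeds back the residual itself, while the Laplace (L1) model feeds back only its sign, so sparse large deviations cannot dominate the update. A toy 1-D sketch with an assumed pair-averaging downsampling operator (the priors and 3-D operators of the paper are omitted):

```python
import numpy as np

def down(x):
    """Toy low-resolution operator: average adjacent pairs of voxels."""
    return x.reshape(-1, 2).mean(axis=1)

def up(r):
    """Adjoint of `down`: spread each LR residual over its two HR voxels."""
    return np.repeat(r, 2) / 2.0

def laplace_map_step(x, ys, step=0.5):
    """One subgradient step with the Laplace (L1) deviation model:
    the gradient uses sign(residual) instead of the residual itself."""
    g = np.zeros_like(x)
    for y in ys:
        g += up(np.sign(down(x) - y))
    return x - step * g
```

With a Gaussian model the `np.sign(...)` would be replaced by the raw residual, and a single outlier slice would pull the whole update.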
New multiscale speckle suppression and edge enhancement with nonlinear diffusion and homomorphic filtering for medical ultrasound imaging
Jinbum Kang, Yangmo Yoo
Speckle, which appears as a granular pattern, considerably degrades the image quality of ultrasound B-mode imaging and lowers the performance of image segmentation and registration techniques. Thus, speckle reduction that preserves tissue structure (e.g., edges and boundaries of lesions) is important for ultrasound B-mode imaging. In this paper, a new approach to speckle reduction and edge enhancement based on Laplacian pyramid nonlinear diffusion and homomorphic filtering (LPNDHF) is proposed for ultrasound B-mode imaging. In LPNDHF, nonlinear diffusion with a weighting factor is applied in a multi-scale domain (i.e., a Laplacian pyramid) to effectively suppress speckle. In addition, to overcome the drawback of the previous LPND method, i.e., blurred edges, homomorphic filtering for edge and contrast enhancement is applied from a finer scale to a coarser scale. In a simulation study, the proposed LPNDHF method showed higher edge preservation and structural similarity values than LPND and LPND with shock filtering (LPNDSF). LPNDHF also provided higher CNR values than LPND and LPNDSF (5.02 vs. 3.66 and 2.91, respectively). In a tissue-mimicking phantom study, a similar improvement in CNR was achieved by LPNDHF over LPND and LPNDSF (2.35 vs. 1.83 and 1.30), and consistent results were obtained in an in vivo abdominal study. These preliminary results demonstrate that the proposed LPNDHF can improve the image quality of ultrasound B-mode imaging by increasing contrast and enhancing signal details while effectively suppressing speckle.
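The multi-scale decomposition underlying such a method can be sketched as a Laplacian pyramid; for simplicity this version is undecimated (same size at every level), whereas a standard pyramid also downsamples. Each level holds band-pass detail where per-scale diffusion or homomorphic filtering would act, and the levels sum back to the original image exactly:

```python
import numpy as np

def blur(x):
    """Separable [1, 2, 1]/4 smoothing along both axes (reflect padding)."""
    k = np.array([1.0, 2.0, 1.0]) / 4
    f = lambda v: np.convolve(np.pad(v, 1, 'reflect'), k, 'valid')
    x = np.apply_along_axis(f, 0, x)
    return np.apply_along_axis(f, 1, x)

def laplacian_pyramid(img, levels=3):
    """Band-pass detail levels (where speckle mostly lives) plus a coarse
    residual; summing all elements reconstructs the image."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = blur(cur)
        pyr.append(cur - low)   # detail at this scale
        cur = low
    pyr.append(cur)             # coarse residual
    return pyr
```

Denoising then amounts to filtering each detail level before summing the pyramid back together.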
Evaluating the predictive power of multivariate tensor-based morphometry in Alzheimer's disease progression via convex fused sparse group Lasso
Sinchai Tsao, Niharika Gajawelli, Jiayu Zhou, et al.
Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications for decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that predicts all future time points (or tasks) simultaneously can encode both sparsity and temporal smoothness, and can be used to predict cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-derived baseline MRI features, MMSE score, demographic information and ApoE status. While volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful for predictive modeling of AD. To this end, we applied the multivariate tensor-based morphometry (mTBM) parametric surface analysis method recently developed by Shi et al. [2] to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampal surface, we are able to significantly improve the predictive performance for ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.
Recognizing patterns of visual field loss using unsupervised machine learning
Siamak Yousefi, Michael H. Goldbaum, Linda M. Zangwill, et al.
Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model fitted with expectation maximization (GEM; EM estimates the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2 and 3, the optimal numbers of PCA-identified axes were 2, 2 and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucomatous eyes and identified familiar glaucomatous patterns of loss.
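The parameter-estimation core of GEM is plain expectation maximization on a Gaussian mixture. A minimal 1-D, two-component version (the paper's model is 52-dimensional with model selection over the number of clusters; this sketch only shows the E/M alternation):

```python
import numpy as np

def gmm_em(x, iters=50):
    """Two-component 1-D Gaussian mixture fitted by EM."""
    mu = np.array([x.min(), x.max()])                 # crude initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        lik = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
              / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted mean, variance and mixing updates
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        pi = n / len(x)
    return mu, var, pi
```

In the full pipeline each cluster found this way would then be decomposed by PCA into its principal axes (the "patterns" of loss).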
False positive reduction of microcalcification cluster detection in digital breast tomosynthesis
Ning Xu, Sheng Yi, Paulo Mendonca, et al.
Digital breast tomosynthesis (DBT) is a new modality that has strong potential in improving the sensitivity and specificity of breast mass detection. However, the detection of microcalcifications (MCs) in DBT is challenging because radiologists have to search for the often subtle signals in many slices. We are developing a computer-aided detection (CAD) system to assist radiologists in reading DBT. The system consists of four major steps, namely: image enhancement; pre-screening of MC candidates; false-positive (FP) reduction, and detection of MC cluster candidates of clinical interest. We propose an algorithm for reducing FPs by using 3D characteristics of MC clusters in DBT. The proposed method takes the MC candidates from the pre-screening step described in [14] as input, which are then iteratively clustered to provide training samples to a random-forest classifier and a rule-based classifier. The random forest classifier is used to learn a discriminative model of MC clusters using 3D texture features, whereas the rule-based classifier revisits the initial training samples and enhances them by combining median filtering and graph-cut-based segmentation followed by thresholding on the final number of MCs belonging to the candidate cluster. The outputs of these two classifiers are combined according to the prediction confidence of the random-forest classifier. We evaluate the proposed FP-reduction algorithm on a data set of two-view DBT from 40 breasts with biopsy-proven MC clusters. The experimental results demonstrate a significant reduction in FP detections, with a final sensitivity of 92.2% for an FP rate of 50%.
Unsupervised nonlinear dimensionality reduction machine learning methods applied to multiparametric MRI in cerebral ischemia: preliminary results
Vishwa S. Parekh, Jeremy R. Jacobs, Michael A. Jacobs
The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with the advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI). Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy; they can generate a two- or three-dimensional map that represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC and perfusion maps. We also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized, providing a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
On study design in neuroimaging heritability analyses
Mary Ellen Koran, Bo Li, Neda Jahanshad, et al.
Imaging genetics is an emerging methodology that combines genetic information with imaging-derived metrics to understand how genetic factors impact observable structural, functional, and quantitative phenotypes. Many of the most well-known genetic studies are based on Genome-Wide Association Studies (GWAS), which use large populations of related or unrelated individuals to associate traits and disorders with individual genetic factors. Merging imaging and genetics may potentially lead to improved power of association in GWAS because imaging traits may be more sensitive phenotypes, being closer to underlying genetic mechanisms, and their quantitative nature inherently increases power. We are developing SOLAR-ECLIPSE (SE) imaging genetics software which is capable of performing genetic analyses with both large-scale quantitative trait data and family structures of variable complexity. This program can estimate the contribution of genetic commonality among related subjects to a given phenotype, and essentially answer the question of whether or not the phenotype is heritable. This central factor of interest, heritability, offers bounds on the direct genetic influence over observed phenotypes. In order for a trait to be a good phenotype for GWAS, it must be heritable: at least some proportion of its variance must be due to genetic influences. A variety of family structures are commonly used for estimating heritability, yet the variability and biases for each as a function of the sample size are unknown. Herein, we investigate the ability of SOLAR to accurately estimate heritability models based on imaging data simulated using Monte Carlo methods implemented in R. We characterize the bias and the variability of heritability estimates from SOLAR as a function of sample size and pedigree structure (including twins, nuclear families, and nuclear families with grandparents).
Determination of the intervertebral disc space from CT images of the lumbar spine
Robert Korez, Darko Štern, Boštjan Likar, et al.
Degenerative changes of the intervertebral disc are among the most common causes of low back pain, and for individuals with significant symptoms surgery may be needed. One of the interventions is total disc replacement surgery, where the degenerated disc is replaced by an artificial implant. For designing implants with good bone contact and continuous force distribution, the morphology of the intervertebral disc space and vertebral body endplates is of considerable importance. In this study we propose a method for the determination of the intervertebral disc space from three-dimensional (3D) computed tomography (CT) images of the lumbar spine. The first step of the proposed method is the construction of a model of the vertebral bodies in the lumbar spine. For this purpose, a chain of five elliptical cylinders is initialized in the 3D image and then deformed to resemble the vertebral bodies by introducing 25 shape parameters. The parameters are obtained by aligning the chain to the vertebral bodies in the CT image according to image intensity and appearance information. The determination of the intervertebral disc space is finally achieved by finding the planes that fit the endplates of the obtained parametric 3D models, and placing points in the space between the planes of adjacent vertebrae that enable surface reconstruction of the intervertebral disc space. The morphometric analysis of images from 20 subjects yielded 11.3 ± 2.6, 12.1 ± 2.4, 12.8 ± 2.0 and 12.9 ± 2.7 cm³ for the L1-L2, L2-L3, L3-L4 and L4-L5 intervertebral disc space volumes, respectively.
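Fitting a plane to each endplate is a small least-squares problem; a sketch via SVD (how the endplate points are selected from the parametric model is omitted):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points: returns (centroid, unit normal).
    The normal is the right singular vector with the smallest singular value,
    i.e. the direction of least spread of the centered points."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]
```

Two such planes from adjacent vertebrae bound the disc space whose volume is then reconstructed.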
Blood flow quantification using 1D CFD parameter identification
Richard Brosig, Markus Kowarschik, Peter Maday, et al.
Patient-specific measurements of cerebral blood flow provide valuable quantitative diagnostic information concerning cerebrovascular diseases, beyond visually driven qualitative evaluation. In this paper, we present a quantitative method to estimate blood flow parameters with high temporal resolution from digital subtraction angiography (DSA) image sequences. Using a 3D DSA dataset and a 2D+t DSA sequence, the proposed algorithm employs a 1D computational fluid dynamics (CFD) model for the estimation of time-dependent flow values along a cerebral vessel, combined with an advection-diffusion equation (ADE) for contrast agent propagation. The CFD system, followed by the ADE, is solved with a finite-volume approximation, which ensures the conservation of mass. Instead of defining a new imaging protocol to obtain relevant data, our cost function optimizes the bolus arrival time (BAT) of the contrast agent in 2D+t DSA sequences. The visual determination of BAT is common clinical practice and can easily be derived from, and compared to, values generated by a 1D CFD simulation. Using this strategy, we ensure that our proposed method fits clinical practice and does not require any changes to the medical workflow. Synthetic experiments show that the recovered flow estimates match the ground truth values with less than 12% error in the mean flow rates.
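The mass-conservation property of a finite-volume discretization can be illustrated with a first-order upwind advection step for the contrast agent: fluxes through cell faces cancel in the interior, so total mass is preserved (this toy sketch omits the diffusion term and the coupled CFD system):

```python
import numpy as np

def advect(c, v, dx, dt, steps):
    """First-order upwind finite-volume update for a contrast concentration
    profile c carried at constant velocity v > 0. Each cell gains the flux
    through its left face and loses the flux through its right face."""
    c = c.astype(float).copy()
    for _ in range(steps):
        flux = v * c                                 # upwind face flux
        c[1:] += dt / dx * (flux[:-1] - flux[1:])    # interior balance
    return c
```

As long as the bolus stays away from the domain boundaries, the sum of the concentrations is unchanged, which is the discrete analogue of mass conservation.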
Arterial tree tracking from anatomical landmarks in magnetic resonance angiography scans
Alison O'Neil, Erin Beveridge, Graeme Houston, et al.
This paper reports on arterial tree tracking in fourteen Contrast Enhanced MRA volumetric scans, given the positions of a predefined set of vascular landmarks, by using the A* algorithm to find the optimal path for each vessel based on voxel intensity and a learnt vascular probability atlas. The algorithm is intended for use in conjunction with an automatic landmark detection step, to enable fully automatic arterial tree tracking. The scan is filtered to give two further images using the top-hat transform with 4mm and 8mm cubic structuring elements. Vessels are then tracked independently on the scan in which the vessel of interest is best enhanced, as determined from knowledge of typical vessel diameter and surrounding structures. A vascular probability atlas modelling expected vessel location and orientation is constructed by non-rigidly registering the training scans to the test scan using a 3D thin plate spline to match landmark correspondences, and employing kernel density estimation with the ground truth center line points to form a probability density distribution. Threshold estimation by histogram analysis is used to segment background from vessel intensities. The A* algorithm is run using a linear cost function constructed from the threshold and the vascular atlas prior. Tracking results are presented for all major arteries excluding those in the upper limbs. An improvement was observed when tracking was informed by contextual information, with particular benefit for peripheral vessels.
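The path search itself is standard A* with a step cost derived from the filtered intensities and atlas prior, plus an admissible distance heuristic. A 2-D grid sketch (the paper works in 3-D with a learnt vascular-atlas cost; grid size and cost values here are illustrative):

```python
import heapq
import itertools
import numpy as np

def astar(cost, start, goal):
    """A* over a 2-D grid: step cost is the cost of the pixel entered,
    heuristic is Manhattan distance times the minimum cost (admissible)."""
    cmin = float(cost.min())
    h = lambda p: cmin * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    tie = itertools.count()                       # heap tie-breaker
    pq = [(h(start), next(tie), 0.0, start, None)]
    came, seen = {}, set()
    while pq:
        _, _, g, p, parent = heapq.heappop(pq)
        if p in seen:
            continue
        seen.add(p)
        came[p] = parent
        if p == goal:                             # reconstruct optimal path
            path = []
            while p is not None:
                path.append(p)
                p = came[p]
            return path[::-1]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + di, p[1] + dj)
            if 0 <= q[0] < cost.shape[0] and 0 <= q[1] < cost.shape[1] \
                    and q not in seen:
                gq = g + cost[q]
                heapq.heappush(pq, (gq + h(q), next(tie), gq, q, p))
    return None
```

With a cost that is low inside vessels (high vesselness, high atlas probability) and high elsewhere, the optimal path between two landmarks follows the vessel centerline.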
Automated volumetric breast density derived by shape and appearance modeling
Serghei Malkov, Karla Kerlikowske, John Shepherd
The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal component analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient was 0.77, between the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r² = 0.8. In conclusion, the shape and appearance model demonstrated excellent feasibility for extracting variables useful for automatic %FGV estimation. Further exploration and testing of this approach is warranted.
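The shape scores used as regression inputs come from projecting an aligned sample onto PCA modes. A sketch of the point-distribution part (the appearance half, warping each image to the mean texture before a second PCA, is omitted):

```python
import numpy as np

def shape_model(shapes, n_modes=2):
    """Point-distribution model: mean shape plus principal modes.
    `shapes` is (n_samples, 2 * n_points) of affinely aligned landmarks."""
    mean = shapes.mean(axis=0)
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_modes]

def project(shape, mean, modes):
    """Shape scores: coordinates of an aligned shape in mode space."""
    return modes @ (shape - mean)
```

Each mammogram contributes one score vector, and those scores (plus system parameters) feed the stepwise regression.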
Multiple fuzzy object modeling improves sensitivity in automatic anatomy recognition
Leticia Rittner, Jayaram K. Udupa, Drew A. Torigian
Computerized automatic anatomy recognition (AAR) is an essential step for implementing body-wide quantitative radiology (QR). Our strategy to automatically identify and delineate various organs in a given body region is based on fuzzy models and an organ hierarchy. In previous years, the basic algorithms of our AAR approach - model building, recognition, and delineation - and their evaluation were presented. In the present paper, we propose to replace the single fuzzy model built for each organ by a set of fuzzy models built for the same organ. Based on a dataset composed of CT images of the Thorax region of 50 subjects, our experiments indicate that recognition performance improves when using multiple models instead of a single model for each organ. It is interesting to point out that the improvement is not uniform for all organs, leading us to conclude that some organs will benefit from the multiple model approach more than others.
An artifact-robust, shape library-based algorithm for automatic segmentation of inner ear anatomy in post-cochlear-implantation CT
Fitsum A. Reda, Jack H. Noble, Robert F. Labadie, et al.
A cochlear implant (CI) is a device that restores hearing using an electrode array that is surgically placed in the cochlea. After implantation, the CI is programmed to attempt to optimize hearing outcome. Currently, we are testing an image-guided CI programming (IGCIP) technique we recently developed that relies on knowledge of the position of intra-cochlear anatomy relative to the implanted electrodes. IGCIP is enabled by a number of algorithms we developed that permit determining the positions of the electrodes relative to intra-cochlear anatomy using a pre- and a post-implantation CT. One issue with this technique is that it cannot be used for the many subjects for whom a pre-implantation CT was not acquired. A pre-implantation CT has been necessary because it is difficult to localize the intra-cochlear structures in post-implantation CTs alone due to the image artifacts that obscure the cochlea. In this work, we present an algorithm for automatically segmenting intra-cochlear anatomy in post-implantation CTs. Our approach is to first identify the labyrinth and then use its position as a landmark to localize the intra-cochlear anatomy. Specifically, we identify the labyrinth by first approximately estimating its position by mapping a labyrinth surface of another subject, selected from a library of such surfaces, and then refining this estimate by a standard shape-model-based segmentation method. We tested our approach on 10 ears and achieved overall mean and maximum errors of 0.209 and 0.98 mm, respectively. This result suggests that our approach is accurate enough for developing IGCIP strategies based solely on post-implantation CTs.
Measurement of blood flow velocity for in vivo video sequences with motion estimation methods
Yansong Liu, Eli Saber, Angela Glading, et al.
Measurement of blood flow velocity from in vivo microscopic video is an invasive approach to studying microcirculation systems that has been applied in clinical analysis and physiological study. The video sequences investigated in this paper record the microcirculation in a rat brain using a CCD camera with a frame rate of 30 fps. To evaluate the accuracy and feasibility of applying motion estimation methods, we have compared current optical flow and cross-correlation-based particle image velocimetry (PIV) techniques by testing them on simulated vessel images and in vivo microscopic video sequences. Accuracy is evaluated by calculating the root-mean-square error of the results of the two methods against ground truth. The limitations of applying both algorithms to our particular video sequences are discussed in terms of noise, the effect of large displacements, and vascular structure. The sources of erroneous motion vectors resulting from microscopic video at a standard frame rate are also addressed. Based on the above, a modified cross-correlation PIV technique called adaptive window cross-correlation (AWCC) is proposed to improve motion detection in thinner and more complex vascular structures.
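The window-matching step of cross-correlation PIV estimates the displacement between two interrogation windows from the peak of their circular cross-correlation. A sketch (FFT-based; subpixel peak fitting and the adaptive-window logic of AWCC are omitted):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer displacement d such that win_b ~ np.roll(win_a, d, axis=(0, 1)),
    found at the peak of the zero-mean circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap circular peak coordinates to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Dividing the displacement by the inter-frame time (1/30 s here) converts the match into a velocity estimate for that window.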
Interpolation of longitudinal shape and image data via optimal mass transport
Yi Gao, Liang-Jia Zhu, Sylvain Bouix, et al.
Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving “mass” (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.
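In one dimension the OMT geodesic has a closed form: interpolate the inverse CDFs of the two densities. A sketch of this displacement interpolation (the paper's multi-dimensional shape/image case requires the full OMT machinery; grid and level counts here are illustrative):

```python
import numpy as np

def displacement_interp(p, q, t, grid):
    """Mass-preserving interpolation between 1-D densities p and q at time
    t in [0, 1]: interpolate the inverse CDFs (the 1-D optimal transport
    geodesic), then bin the transported mass back into a density."""
    Fp, Fq = np.cumsum(p), np.cumsum(q)
    levels = np.linspace(0, 1, 201)[1:-1]          # quantile levels
    xp = np.interp(levels, Fp / Fp[-1], grid)      # inverse CDF of p
    xq = np.interp(levels, Fq / Fq[-1], grid)      # inverse CDF of q
    xt = (1 - t) * xp + t * xq                     # mass moves on straight lines
    hist, _ = np.histogram(xt, bins=len(grid), range=(grid[0], grid[-1]))
    return hist / hist.sum()
```

Unlike linear blending of the densities themselves, which produces two fading bumps, this interpolation moves the mass continuously from one location to the other, which is the behavior the paper exploits for anatomical shape data.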
Respiratory motion variations from skin surface on lung cancer patients from 4D CT data
Nicolas Gallego-Ortiz, Jonathan Orban de Xivry, Antonin Descampe, et al.
In radiation therapy of the thorax and abdomen, knowing how respiratory motion modifies tumor position and trajectory is crucial for accurate dose delivery to tumors while avoiding healthy tissue and organs at risk. Three types of variation are studied: motion amplitudes measured from the patient's skin surface and the internal tumor trajectory, internal/external correlations, and tumor trajectory baseline shift. Four male patients with lung cancer, each with three repeated 4D computed tomography (4DCT) scans taken on different treatment days, were studied. Surfaces were extracted from the 4DCT scans by segmentation. Motion over specific regions of interest was analyzed with respect to the motion of the tumor center of mass and the correlation coefficient was computed. Tumor baseline shifts were analyzed after rigid registration based on the vertebrae and after surface registration. Correlation coefficients between internal trajectories and external distances were greater than 0.6 in the abdomen. This correlation was observable and significant for all patients, showing that external motion is a good surrogate for internal movement on an intra-fraction basis. For the inter-fraction case, however, external amplitude variations were observed between fractions and no correlation was found with the internal amplitude variations. Moreover, baseline shifts after surface registration were greater than those after vertebrae registration, and the mean distance between surfaces after registration was not correlated with the magnitude of the baseline shift. These two observations show that, with the current representation of the external surface, inter-fraction variations are not detectable on the surface.
Motion estimation for nuclear medicine: a probabilistic approach
Accurate respiratory motion modelling of the abdominal-thoracic organs is a pre-requisite for motion correction of nuclear medicine (NM) images. Many respiratory motion models to date build a static correspondence between a parametrized external surrogate signal and internal motion. Mean drifts in respiratory motion, changes in respiratory style, and the noise conditions of the external surrogate signal motivate a more adaptive approach that can capture non-stationary behavior. To this end we apply our novel Kalman model with an incorporated expectation-maximization step, which allows adaptive learning of the model parameters as the respiratory observations change. A comparison is made with a popular total least squares (PCA)-based approach. We demonstrate that in the presence of noisy observations the Kalman framework outperforms the static PCA model; however, both methods correct respiratory motion in a computational anthropomorphic phantom to < 2 mm. Motion correction performed on 3 dynamic MRI patient datasets using the Kalman model corrects respiratory motion to ≈ 3 mm.
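The predict/update backbone of such a Kalman model (without the EM step that adapts the noise covariances) is the standard linear filter; a generic sketch with assumed matrix names:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: state mean/covariance; z: new surrogate observation;
    F, H: transition/observation models; Q, R: process/observation noise."""
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (z - H @ x)                          # update mean
    P = (np.eye(len(x)) - K @ H) @ P                 # update covariance
    return x, P
```

The EM extension described in the abstract would re-estimate Q and R from the innovation statistics between such cycles, which is what makes the model adapt to drifting breathing patterns.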
Automatic lobar segmentation for diseased lungs using an anatomy-based priority knowledge in low-dose CT images
Sang Joon Park, Jung Im Kim, Jin Mo Goo, et al.
Lung lobar segmentation in CT images is a challenging task because of limitations in image quality inherent to CT acquisition, especially for the low-dose CT used in clinical routine. Moreover, complex anatomy and abnormal lesions in the lung parenchyma make segmentation difficult, because contrast in CT images is determined by the differential absorption of X-rays by neighboring structures such as tissue, vessels, or various pathological conditions. We therefore attempted to develop a robust segmentation technique for normal and diseased lung parenchyma. The images were obtained with low-dose chest CT using a soft reconstruction kernel (Sensation 16, Siemens, Germany). Our PC-based in-house software segmented the bronchial tree and lungs with an intensity-adaptive region-growing technique. The horizontal and oblique fissures were then detected using the eigenvalue ratio of the Hessian matrix in the lung regions, excluding airways and vessels. To enhance and recover a faithful 3-D fissure plane, our proposed fissure-enhancing scheme was applied to the images. Finally, for careful smoothing of the fissure planes, a 3-D rolling-ball algorithm was performed in the three orthogonal planes. Results show that the success rate of our proposed scheme reached 89.5% in diseased lung parenchyma.
Splitting of overlapping nuclei guided by robust combinations of concavity points
Marina E. Plissiti, Eleni Louka, Christophoros Nikou
In this work, we propose a novel and robust method for the accurate separation of elliptical overlapping nuclei in microscopic images. The method is based on both the information provided by the global boundary of the nuclei cluster and the detection of concavity points along this boundary. The number of nuclei and the area of each nucleus included in the cluster are estimated automatically by exploiting the different parts of the cluster boundary demarcated by the concavity points. More specifically, based on the set of concavity points detected in the image of the clustered nuclei, all possible configurations of candidate ellipses that fit them are estimated by least-squares fitting. For each configuration, an index measuring the fitting residual is computed and the configuration providing the minimum error is selected. The method can successfully separate multiple (more than two) clustered nuclei, as the fitting residual is a robust indicator of the number of overlapping elliptical structures even if many erroneous concavity points are present due to noise. Moreover, the algorithm has been evaluated on cytological images of conventional Pap smears and compares favorably with state-of-the-art methods in terms of both accuracy and execution time.
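Ranking candidate configurations reduces to comparing least-squares fitting residuals for the boundary arcs assigned to each candidate ellipse. A simplified sketch using the algebraic conic residual (the smallest singular value of the design matrix; ellipse-specific constrained fitting is omitted):

```python
import numpy as np

def conic_fit_residual(x, y):
    """Algebraic least-squares residual of fitting a conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to boundary points:
    the smallest singular value of the design matrix. Lower values mean
    the points are better explained by a single elliptical arc."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D, compute_uv=False)[-1]
```

Summing such residuals over the arcs of a configuration gives the index that is minimized over all candidate groupings of concavity points.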
Brain tumor locating in 3D MR volume using symmetry
Pavel Dvorak, Karel Bartusek
This work deals with the automatic determination of a brain tumor location in 3D magnetic resonance volumes. The aim is not the precise segmentation of the tumor and its parts but only the detection of its location. This is the first step in the tumor segmentation process, an important topic in neuro-image processing. The algorithm expects 3D magnetic resonance volumes of a brain containing a tumor. The detection is based on locating the area that breaks the left-right symmetry of the brain, by multi-resolution comparison of corresponding regions in the left and right hemispheres. The output of the computation is a probabilistic map of the tumor location. The algorithm was tested on 80 volumes from the publicly available BRATS databases, containing 3D brain volumes afflicted by a brain tumor. These pathological structures had various sizes and shapes and were located in various parts of the brain. The locating performance of the algorithm was 85% for T1-weighted volumes, 91% for T1-weighted contrast-enhanced volumes, 96% for FLAIR and T2-weighted volumes, and 95% for their combinations.
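The core comparison of corresponding left/right regions can be sketched as a blockwise asymmetry map on a single axial slice. This is a simplified illustration, not the paper's multi-resolution 3D method: `asymmetry_map` is a hypothetical name, the mid-sagittal plane is assumed to be the vertical image centre line, and only mean intensity per block is compared.

```python
import numpy as np

def asymmetry_map(slice2d, block=8):
    """Blockwise left-right asymmetry of an axial slice.

    Mirrors the right half onto the left and returns |mean difference|
    per block; large values suggest symmetry-breaking structures.
    """
    h, w = slice2d.shape
    half = w // 2
    left = slice2d[:, :half]
    right = slice2d[:, half:2 * half][:, ::-1]   # mirror the right half
    hb, wb = h // block, half // block
    out = np.zeros((hb, wb))
    for i in range(hb):
        for j in range(wb):
            a = left[i*block:(i+1)*block, j*block:(j+1)*block]
            b = right[i*block:(i+1)*block, j*block:(j+1)*block]
            out[i, j] = abs(a.mean() - b.mean())
    return out
```

Repeating this at several block sizes and accumulating the maps would approximate the multi-resolution probabilistic map described above.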
CT image noise reduction using rotational-invariant feature in Stockwell transform
Iterative reconstruction and other noise reduction methods have been employed in CT to improve image quality and to reduce radiation dose. The non-local means (NLM) filter has emerged as a popular choice for image-based noise reduction in CT. However, the original NLM method cannot incorporate similar structures if they appear in rotated form, resulting in ineffective denoising at some locations and non-uniform noise reduction across the image. We have developed a novel rotation-invariant image texture feature derived from the multiresolution Stockwell transform (ST), and applied it to CT image noise reduction so that similar structures can be identified and fully utilized even when they are in different orientations. We performed a computer simulation study in CT to demonstrate that ST achieves better efficiency in utilizing redundant information in the image and more uniform noise reduction than NLM.
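For context, the baseline the paper improves on — plain non-local means, where each pixel becomes a weighted average of pixels with similar surrounding patches — can be sketched directly. This is the generic NLM filter, not the authors' ST-based rotation-invariant variant; `nlm_denoise` and all parameter values are illustrative assumptions.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=5, h=0.1):
    """Plain non-local means: average pixels whose patches look alike.

    Patch dissimilarity is the mean squared difference; h controls how
    quickly weights fall off with dissimilarity.
    """
    pad = patch // 2
    r = pad + search
    padded = np.pad(img, r, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            cy, cx = y + r, x + r
            ref = padded[cy-pad:cy+pad+1, cx-pad:cx+pad+1]
            wsum = acc = 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = padded[ny-pad:ny+pad+1, nx-pad:nx+pad+1]
                    w = float(np.exp(-((ref - cand) ** 2).mean() / h ** 2))
                    wsum += w
                    acc += w * padded[ny, nx]
            out[y, x] = acc / wsum
    return out
```

The limitation the abstract targets is visible here: `ref` and `cand` are compared pixel-by-pixel, so a rotated copy of the same structure gets a low weight even though it carries useful redundant information.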
Robust vessel detection and segmentation in ultrasound images by a data-driven approach
Ping Guo, Qiang Wang, Xiaotao Wang, et al.
This paper presents a learning-based vessel detection and segmentation method for real-patient ultrasound (US) liver images. We aim at detecting multiple vessel shapes robustly and automatically, including vessels with weak and ambiguous boundaries. First, vessel candidate regions are detected by a data-driven approach: multi-channel vessel enhancement maps with complementary performance are generated and aggregated under a Conditional Random Field (CRF) framework, and vessel candidates are obtained by thresholding the resulting saliency map. Second, regional features are extracted and the probability of each region being a vessel is modeled by random forest regression. Finally, a fast level-set method is developed to refine vessel boundaries. Experiments have been carried out on a US liver dataset of 98 patients containing both normal and abnormal liver images. Compared with a traditional Hessian-based method, the proposed method improves average precision by 56% and 7.8% for vessel detection and classification, respectively. This improvement shows that our method is more robust to noise and therefore outperforms the Hessian-based method in detecting vessels with weak and ambiguous boundaries.
Enhancement of 3D modeling and classification of microcalcifications in breast computed tomography (BCT)
Hiam Alquran, Eman Shaheen, J. Michael O'Connor, et al.
Current computer-aided diagnosis (CADx) software for digital mammography relies mainly on 2D techniques. With the emergence of three-dimensional (3D) breast imaging modalities such as breast Computed Tomography (BCT), there is an opportunity to analyze 3D features in the classification of calcifications. We previously reported our initial work on automated 3D feature detection and classification based on morphological descriptions for single microcalcifications within clusters [1]. In this work, we expand the 3D classification methods to detect novel microcalcification morphological features, by including more morphological classes and by replacing the 2D Radon transform with a 3D Radon transform. Results show that the classification rate improved compared to the previously reported results, from a total of 546 to 559 consistently classified calcifications out of 635. This slight improvement is due to the use of the 3D Radon transform and the incorporation of methods to detect two classes not previously implemented. Future work will focus on adding feature detection and classification of cluster patterns.
Quantitative analysis of rib movement based on dynamic chest bone images: preliminary results
R. Tanaka, S. Sanada, M. Oda, et al.
Rib movement during respiration is one of the diagnostic criteria in pulmonary impairments. In general, the rib movement is assessed in fluoroscopy. However, the shadows of lung vessels and bronchi overlapping ribs prevent accurate quantitative analysis of rib movement. Recently, an image-processing technique for separating bones from soft tissue in static chest radiographs, called “bone suppression technique”, has been developed. Our purpose in this study was to evaluate the usefulness of dynamic bone images created by the bone suppression technique in quantitative analysis of rib movement. Dynamic chest radiographs of 10 patients were obtained using a dynamic flat-panel detector (FPD). Bone suppression technique based on a massive-training artificial neural network (MTANN) was applied to the dynamic chest images to create bone images. Velocity vectors were measured in local areas on the dynamic bone images, which formed a map. The velocity maps obtained with bone and original images for scoliosis and normal cases were compared to assess the advantages of bone images. With dynamic bone images, we were able to quantify and distinguish movements of ribs from those of other lung structures accurately. Limited rib movements of scoliosis patients appeared as reduced rib velocity vectors. Vector maps in all normal cases exhibited left-right symmetric distributions, whereas those in abnormal cases showed nonuniform distributions. In conclusion, dynamic bone images were useful for accurate quantitative analysis of rib movements: Limited rib movements were indicated as a reduction of rib movement and left-right asymmetric distribution on vector maps. Thus, dynamic bone images can be a new diagnostic tool for quantitative analysis of rib movements without additional radiation dose.
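The velocity-vector measurement on local areas of consecutive frames can be illustrated with exhaustive block matching, a standard motion-estimation building block. This is a generic sketch, not the authors' tracking implementation; `block_velocity` and its parameters are our own assumptions.

```python
import numpy as np

def block_velocity(frame0, frame1, block=16, search=4):
    """Per-block displacement between two frames via exhaustive SSD search.

    Returns {(y, x): (dy, dx)} for each block origin, i.e. a coarse
    velocity vector map like the one described above.
    """
    h, w = frame0.shape
    vecs = {}
    for y in range(search, h - block - search + 1, block):
        for x in range(search, w - block - search + 1, block):
            ref = frame0[y:y+block, x:x+block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = frame1[y+dy:y+dy+block, x+dx:x+dx+block]
                    ssd = float(((ref - cand) ** 2).sum())
                    if ssd < best:
                        best, best_v = ssd, (dy, dx)
            vecs[(y, x)] = best_v
    return vecs
```

Run on dynamic bone images rather than the original radiographs, such vectors would measure rib motion without contamination from overlapping soft-tissue shadows, which is the point of the bone suppression step.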
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information contained in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
Context based algorithmic framework for identifying and classifying embedded images of follicle units
Md. Mahbubur Rahman, S. S. Iyengar, Wei Zeng, et al.
Medical image processing has become a very active research area in recent years. Medical images are inherently noisy, and counting target objects in them is rarely easy, yet proper treatment depends on accurately locating and counting the desired objects in an image. Existing methods can perform this type of segmentation, but they impose so many constraints on the input images that they cannot be applied in a generalized way to most images. Even a slight variation in the nature of an input image can lead to a grossly incorrect result. We therefore developed a generalized method to count a particularly noisy structure of the human body: the hair follicles on the scalp. The objective of this research is to count the number of hair follicle groups, and the number of follicles in each group, in a microscopic image of the human scalp. The follicles are nonstandard in shape, i.e., they do not conform to a regular shape such as a rectangle, oval, or circle. Moreover, the follicles often overlap one another, which makes them hard to separate. We present a technique to count the number of follicle groups as well as the number of follicles in each group. We also applied well-known techniques to cluster the detected objects, and a method to generate a neighboring connected graph in order to calculate inter-follicular distances.
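The group-counting step reduces, in its simplest form, to counting connected components of foreground pixels in a binarized image. The sketch below shows that core with a 4-connectivity BFS; `count_follicle_groups` is a hypothetical name, and the paper's handling of overlapping follicles within a group is not reproduced here.

```python
import numpy as np
from collections import deque

def count_follicle_groups(binary):
    """Count connected groups of foreground pixels (4-connectivity BFS)."""
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    groups = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                groups += 1                       # new, unvisited group
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return groups
```

Splitting each group into individual overlapping follicles would then require the shape-analysis techniques the abstract alludes to, since connectivity alone cannot separate touching objects.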
A framework for retinal layer intensity analysis for retinal artery occlusion patient based on 3D OCT
Jianping Liao, Haoyu Chen, Chunlei Zhou, et al.
Occlusion of a retinal artery leads to severe ischemia and dysfunction of the retina. Quantitative analysis of the reflectivity of the retina is needed for quantitative assessment of the severity of retinal ischemia. In this paper, we propose a framework for retinal layer intensity analysis for retinal artery occlusion (RAO) patients based on 3D OCT images. The proposed framework consists of five main steps. First, a pre-processing step is applied to the input OCT images. Second, a graph search method is applied to segment multiple surfaces in the OCT images. Third, the RAO region is detected based on a texture classification method. Fourth, the layer segmentation is refined using the detected RAO regions. Finally, the retinal layer intensity analysis is performed. The proposed method was tested on 27 clinical spectral-domain OCT images. The preliminary results show the feasibility and efficacy of the proposed method.
Single 3D cell segmentation from optical CT microscope images
The automated segmentation of the nucleus and cytoplasm regions in 3D optical CT microscope images has been achieved with two methods, a global threshold gradient based approach and a graph-cut approach. For the first method, the first two peaks of a gradient figure of merit curve are selected as the thresholds for cytoplasm and nucleus segmentation. The second method applies a graph-cut segmentation twice: the first identifies the nucleus region and the second identifies the cytoplasm region. Image segmentation of single cells is important for automated disease diagnostic systems. The segmentation methods were evaluated with 200 3D images consisting of 40 samples of 5 different cell types. The cell types consisted of columnar, macrophage, metaplastic and squamous human cells and cultured A549 cancer cells. The segmented cells were compared with both 2D and 3D reference images and the quality of segmentation was determined by the Dice Similarity Coefficient (DSC). In general, the graph-cut method had a superior performance to the gradient-based method. The graph-cut method achieved an average DSC of 86% and 72% for nucleus and cytoplasm segmentations respectively for the 2D reference images and 83% and 75% for the 3D reference images. The gradient method achieved an average DSC of 72% and 51% for nucleus and cytoplasm segmentation for the 2D reference images and 71% and 51% for the 3D reference images. The DSC of cytoplasm segmentation was significantly lower than for the nucleus since the cytoplasm was not differentiated as well by image intensity from the background.
Traversing and labeling interconnected vascular tree structures from 3D medical images
Walter G. O'Dell, Sindhuja Tirumalai Govindarajan, Ankit Salgia, et al.
Purpose: Detailed characterization of pulmonary vascular anatomy has important applications for the diagnosis and management of a variety of vascular diseases. Prior efforts have emphasized using vessel segmentation to gather information on the number of branches, number of bifurcations, and branch length and volume, but accurate traversal of the vessel tree to identify and repair erroneous interconnections between adjacent branches and neighboring tree structures has not been carefully considered. In this study, we endeavor to develop and implement a successful approach to distinguishing and characterizing individual vascular trees from among a complex intermingling of trees. Methods: We developed strategies and parameters by which the algorithm identifies and repairs false inter-tree and intra-tree branch connections to traverse complicated vessel trees. A series of two-dimensional (2D) virtual datasets with a variety of interconnections were constructed for development, testing, and validation. To demonstrate the approach, a series of real 3D computed tomography (CT) lung datasets were obtained, including an anthropomorphic chest phantom, an adult human chest CT, a pediatric patient chest CT, and a micro-CT of an excised rat lung preparation. Results: Our method was correct in all 2D virtual test datasets. For each real 3D CT dataset, the resulting simulated vessel tree structures faithfully depicted the vessel tree structures originally extracted from the corresponding lung CT scans. Conclusion: We have developed a comprehensive strategy for traversing and labeling interconnected vascular trees and successfully applied it to pulmonary vessels observed in 3D CT images of the chest.
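The traversal-and-flagging idea can be sketched on a vessel graph: walk each tree from its root, label nodes, and flag any edge that reaches a node already claimed — such edges close a loop and are candidates for false inter-tree or intra-tree connections. This is a simplified illustration under our own assumptions (`label_trees` is a hypothetical name, and the repair step is not shown); the outcome also depends on traversal order, which the real method must handle.

```python
from collections import defaultdict

def label_trees(edges, roots):
    """Label nodes of an undirected vessel graph by tree, flagging edges
    that reach an already-claimed node (loop-closing / suspect edges)."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    label, suspect = {}, []
    for tree_id, root in enumerate(roots):
        stack = [(root, None)]
        while stack:
            node, parent = stack.pop()
            if node in label:
                if parent is not None:
                    suspect.append((parent, node))  # reaches a claimed node
                continue
            label[node] = tree_id
            for nxt in adj[node]:
                if nxt != parent:
                    stack.append((nxt, node))
    return label, suspect
```

In a clean pair of trees no edge is flagged; an added interconnection creates a cycle, and the cycle-closing edge shows up in the suspect list for review or repair.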
Standardized anatomic space for abdominal fat quantification
Yubing Tong, Jayaram K. Udupa, Drew A. Torigian
The ability to accurately measure subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) from images is important for improved assessment and management of patients with various conditions such as obesity, diabetes mellitus, obstructive sleep apnea, cardiovascular disease, kidney disease, and degenerative disease. Although imaging and analysis methods to measure the volume of these tissue components have been developed [1, 2], in clinical practice an estimate of the amount of fat is obtained from just one transverse abdominal CT slice, typically acquired at the level of the L4-L5 vertebrae, for various reasons including decreased radiation exposure and cost [3-5]. It is generally assumed that such an estimate reliably depicts the burden of fat in the body. This paper sets out to answer two questions related to this issue which have not been addressed in the literature. How does one ensure that the slices used for correlation calculation from different subjects are at the same anatomic location? At what anatomic location do the volumes of SAT and VAT correlate maximally with the corresponding single-slice area measures? To answer these questions, we propose two approaches for slice localization: linear mapping, and non-linear mapping, the latter being a novel learning-based strategy for mapping slice locations to a standardized anatomic space so that the same anatomic slice locations are identified in different subjects. We then study the volume-to-area correlations and determine where they become maximal. We demonstrate on 50 abdominal CT data sets that this mapping achieves significantly improved consistency of anatomic localization compared to current practice. Our results also indicate that maximum correlations are achieved at different anatomic locations for SAT and VAT, both of which differ from the L4-L5 junction commonly utilized.
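Once slices are mapped to a standardized anatomic space, finding the best single-slice location is a per-location correlation search. The sketch below illustrates that step only; `best_slice_location` is a hypothetical name, total volume is approximated as the sum over sampled locations, and the standardization itself (the paper's main contribution) is assumed done.

```python
import numpy as np

def best_slice_location(areas):
    """areas: (subjects x locations) fat area at standardized locations.

    Returns the location index whose single-slice area correlates best
    with total fat volume (here, the sum over all locations), plus the
    full correlation profile.
    """
    volumes = areas.sum(axis=1)
    corr = np.array([np.corrcoef(areas[:, j], volumes)[0, 1]
                     for j in range(areas.shape[1])])
    return int(np.argmax(corr)), corr
```

The abstract's finding corresponds to this argmax landing at different standardized locations for SAT and VAT, neither of which is the conventional L4-L5 level.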
Registration of segmented histological images using thin plate splines and belief propagation
We register images based on their multiclass segmentations, for cases when correspondence of local features cannot be established. A discrete mutual information is used as a similarity criterion, evaluated at a sparse set of locations on the interfaces between classes. A thin-plate spline regularization is approximated by pairwise interactions. The problem is cast into a discrete setting and solved efficiently by belief propagation. Further speedup and robustness are provided by a multiresolution framework. Preliminary experiments suggest that our method can provide similar registration quality to standard methods at a fraction of the computational cost.
Accurate, fully-automated registration of coronary arteries for volumetric CT digital subtraction angiography
Marco Razeto, Brian Mohr, Kazumasa Arakita, et al.
Diagnosis of coronary artery disease with Coronary Computed Tomography Angiography (CCTA) is complicated by the presence of significant calcification or stents. Volumetric CT Digital Subtraction Angiography (CTDSA) has recently been shown to be effective at overcoming these limitations. Precise registration of structures is essential as any misalignment can produce artifacts potentially inhibiting clinical interpretation of the data. The fully-automated registration method described in this paper addresses the problem by combining a dense deformation field with rigid-body transformations where calcifications/stents are present. The method contains non-rigid and rigid components. Non-rigid registration recovers the majority of motion artifacts and produces a dense deformation field valid over the entire scan domain. Discrete domains are identified in which rigid registrations very accurately align each calcification/stent. These rigid-body transformations are combined within the immediate area of the deformation field using a distance transform to minimize distortion of the surrounding tissue. A recent interim analysis of a clinical feasibility study evaluated reader confidence and diagnostic accuracy in conventional CCTA and CTDSA registered using this method. Conventional invasive coronary angiography was used as the reference. The study included 27 patients scanned with a second-generation 320-row CT detector in which 41 lesions were identified. Compared to conventional CCTA, CTDSA improved reader confidence in 13/36 (36%) of segments with severe calcification and 3/5 (60%) of segments with coronary stents. Also, the false positive rate of CTDSA was reduced compared to conventional CCTA from 18% (24/130) to 14% (19/130).
A multi-resolution strategy for a multi-objective deformable image registration framework that accommodates large anatomical differences
Tanja Alderliesten, Peter A. N. Bosman, Jan-Jakob Sonke, et al.
Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how to best combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occurred between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two “non-fixed” grids: one for the source- and one for the target image) to accommodate for large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well-established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.
An adaptive patient specific deformable registration for breast images of positron emission tomography and magnetic resonance imaging using finite element approach
A patient-specific registration model based on the finite element method was investigated in this study. Image registration of Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) has been widely studied, and surface-based registration is extensively applied in medical imaging. We develop and evaluate a registration method that combines surface-based registration with biomechanical modeling. Four sample cases of patients with PET and MRI breast scans performed within 30 days were collected from the hospital. The K-means clustering algorithm was used to segment the images into two parts, fat tissue and neoplasm [2]. Instead of placing extrinsic landmarks on the patient's body, which may be invasive, we proposed a new boundary condition to simulate breast deformation between the two screenings. A three-dimensional meshed model was then built, and material properties were assigned to it according to previous studies. The whole registration was based on a biomechanical finite element model, which could simulate the deformation of the breast under pressure.
Computed tomography lung iodine contrast mapping by image registration and subtraction
Keith Goatman, Costas Plakas, Joanne Schuijf, et al.
Pulmonary embolism (PE) is a relatively common and potentially life threatening disease, affecting around 600,000 people annually in the United States alone. Prompt treatment using anticoagulants is effective and saves lives, but unnecessary treatment risks life threatening haemorrhage. The specificity of any diagnostic test for PE is therefore as important as its sensitivity. Computed tomography (CT) angiography is routinely used to diagnose PE. However, there are concerns it may over-report the condition. Additional information about the severity of an occlusion can be obtained from an iodine contrast map that represents tissue perfusion. Such maps tend to be derived from dual-energy CT acquisitions. However, they may also be calculated by subtracting pre- and post-contrast CT scans. Indeed, there are technical advantages to such a subtraction approach, including better contrast-to-noise ratio for the same radiation dose, and bone suppression. However, subtraction relies on accurate image registration. This paper presents a framework for the automatic alignment of pre- and post-contrast lung volumes prior to subtraction. The registration accuracy is evaluated for seven subjects for whom pre- and post-contrast helical CT scans were acquired using a Toshiba Aquilion ONE scanner. One hundred corresponding points were annotated on the pre- and post-contrast scans, distributed throughout the lung volume. Surface-to-surface error distances were also calculated from lung segmentations. Prior to registration the mean Euclidean landmark alignment error was 2.57 mm (range 1.43–4.34 mm), and following registration the mean error was 0.54 mm (range 0.44–0.64 mm). The mean surface error distance was 1.89 mm before registration and 0.47 mm after registration. There was a commensurate reduction in visual artefacts following registration.
In conclusion, a framework for pre- and post-contrast lung registration has been developed that is sufficiently accurate for lung subtraction iodine mapping.
A hybrid biomechanical intensity based deformable image registration of lung 4DCT
Navid Samavati, Michael Velec, Kristy Brock
Deformable Image Registration (DIR) has been extensively studied over the past two decades due to its essential role in many image-guided interventions. Morfeus is a DIR algorithm that works based on finite element biomechanical modeling. However, Morfeus does not utilize the entire image contrast and features which could potentially lead to a more accurate registration result. A hybrid biomechanical intensity-based method is proposed to investigate this potential benefit. Inhale and exhale 4DCT lung images of 26 patients were initially registered using Morfeus by modeling contact surface between the lungs and the chest cavity. The resulting deformations using Morfeus were refined using a B-spline intensity-based algorithm (Drop, Munich, Germany). Important parameters in Drop including grid spacing, number of pyramids, and regularization coefficient were optimized on 10 randomly-chosen patients (out of 26). The remaining parameters were selected empirically. Target Registration Error (TRE) was calculated by measuring the Euclidean distance of common anatomical points on both images before and after registration. For each patient a minimum of 30 points/lung were used. The Hybrid method resulted in mean±SD (90th%) TRE of 1.5±1.4 (2.8) mm compared to 3.1±2.0 (5.6) using Morfeus and 2.6±2.6 (6.2) using Drop alone.
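The evaluation metric used above — Target Registration Error over corresponding anatomical points, summarised as mean ± SD and 90th percentile — is straightforward to compute. A minimal sketch; `tre_stats` is a hypothetical name and points are assumed to be given in millimetres.

```python
import numpy as np

def tre_stats(points_fixed, points_moving):
    """Target Registration Error: Euclidean distances between corresponding
    points, summarised as (mean, SD, 90th percentile)."""
    d = np.linalg.norm(np.asarray(points_fixed, dtype=float)
                       - np.asarray(points_moving, dtype=float), axis=1)
    return d.mean(), d.std(), np.percentile(d, 90)
```

With at least 30 points per lung, as in the study, these three numbers match the "mean±SD (90th%)" format reported in the abstract.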
Two-step FEM-based Liver-CT registration: improving internal and external accuracy
Cristina Oyarzun Laura, Klaus Drechsler, Stefan Wesarg
Knowing the exact location of the internal structures of organs, especially the vasculature, is of great importance for clinicians. This information allows them to know which structures/vessels will be affected by a given therapy and therefore to better treat patients. However, the use of internal structures for registration is often disregarded, especially in physically-based registration methods. In this paper we propose an algorithm that uses finite element methods to carry out a registration of liver volumes that is accurate not only at the boundaries of the organ but also in its interior. A graph matching algorithm is used to find correspondences between the vessel trees of the two livers to be registered. In addition, an adaptive volumetric mesh is generated that contains nodes at the locations where correspondences were found. The displacements derived from those correspondences are the input for the initial deformation of the model. The first deformation brings the internal structures to their final deformed positions and the surfaces close to them. Finally, thin plate splines are used to refine the solution at the boundaries of the organ, achieving an improvement in accuracy of 71%. The algorithm has been evaluated on clinical CT images of the abdomen.
Normal distributions transform in multi-modal image registration of optical coherence tomography and computed tomography datasets
Jesús Díaz Díaz, Mauro H. Riva, Omid Majdani, et al.
In recent years, optical coherence tomography (OCT) has gained increasing attention not only as an imaging device, but also as a navigation system for surgical interventions. This approach requires registering intraoperative OCT to pre-operative computed tomography (CT) data. In this study, we evaluate algorithms for multi-modal image registration of OCT and CT data of a human temporal bone specimen. We focus on similarity measures that are common in this field, e.g., normalized mutual information, normalized cross correlation, and iterative closest point. We evaluate and compare their accuracies to the relatively new normal distributions transform (NDT), which is very common in simultaneous localization and mapping applications but is not widely used in image registration. Matching is realized considering appropriate image pre-processing, the aforementioned similarity measures, and local optimization algorithms, as well as line search optimization. For evaluation purposes, the results of a point-based registration with fiducial landmarks are regarded as ground truth. First results indicate that state-of-the-art similarity functions do not perform with the desired accuracy when applied to unprocessed image data. In contrast, NDT seems to achieve higher registration accuracy.
Automatic registration of imaging mass spectrometry data to the Allen Brain Atlas transcriptome
Walid M. Abdelmoula, Ricardo J. Carreira, Reinald Shyti, et al.
Imaging Mass Spectrometry (IMS) is an emerging molecular imaging technology that provides spatially resolved information on biomolecular structures; each image pixel effectively represents a molecular mass spectrum. By combining histological images and IMS images, neuroanatomical structures can be distinguished based on their biomolecular features as opposed to morphological features. The combination of IMS data with spatially resolved gene expression maps of the mouse brain, as provided by the Allen Mouse Brain Atlas, would enable comparative studies of spatial metabolic and gene expression patterns in life-sciences research and biomarker discovery. As such, it would be highly desirable to spatially register IMS slices to the Allen Brain Atlas (ABA). In this paper, we propose a multi-step automatic registration pipeline to register ABA histology to IMS images. The key novelty of the method is the selection of the best reference section from the ABA, based on pre-processed histology sections. First, we extracted a hippocampus-specific geometrical feature from the given experimental histological section to initially localize it among the ABA sections. Then, feature-based linear registration is applied to the initially localized section and its two neighbors in the ABA to select the most similar reference section. A non-rigid registration yields a one-to-one mapping of the experimental IMS slice to the ABA. The pipeline was applied to 6 coronal sections from two mouse brains, showing high anatomical correspondence and demonstrating the feasibility of complementing biomolecule distributions from individual mice with the genome-wide ABA transcriptome.
Wavelet based free-form deformations for nonrigid registration
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
Non-rigid target tracking in 2D ultrasound images using hierarchical grid interpolation
Lucas Royer, Marie Babel, Alexandre Krupa
In this paper, we present a new non-rigid target tracking method for 2D ultrasound (US) image sequences. Due to the poor quality of US images, tracking the motion of a tumor or cyst during needle insertion is considered an open research issue. Our approach builds on a well-known compression algorithm so that the method runs in real time, a necessary condition for many clinical applications. Toward that end, we employed a dedicated hierarchical grid interpolation (HGI) algorithm, which can represent a larger variety of deformations than other motion estimation algorithms such as Overlapped Block Motion Compensation (OBMC) or the Block Matching Algorithm (BMA). The sum of squared differences of image intensity is selected as the similarity criterion because it provides a good trade-off between computation time and motion estimation quality. Contrary to other methods proposed in the literature, our approach can represent both the rigid and non-rigid motions observed in the ultrasound image modality. Furthermore, this technique does not require any prior knowledge about the target and limits the user interaction that usually complicates the medical validation process. Finally, a technique aiming at identifying the main phases of a periodic motion (e.g. breathing motion) is introduced. The new approach has been validated on 2D ultrasound images of real human tissues undergoing rigid and non-rigid deformations.
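To illustrate the similarity criterion, a toy sum-of-squared-differences (SSD) comparison between a target patch and candidate patches, in the spirit of block-based motion estimation (pure Python; all names are illustrative, not the authors' code):

```python
def ssd(patch_a, patch_b):
    """Sum of squared intensity differences between two equally sized 2D patches."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def best_match(target, candidates):
    """Index of the candidate patch most similar to the target under SSD."""
    return min(range(len(candidates)), key=lambda i: ssd(target, candidates[i]))

target  = [[10, 12], [11, 13]]
shifted = [[10, 12], [11, 14]]   # nearly identical patch
far_off = [[0, 0], [0, 0]]
match = best_match(target, [far_off, shifted])  # -> 1
```

SSD needs no intensity statistics beyond the raw differences, which is why it trades a little robustness for the speed that real-time tracking requires.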
Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI
Eileen Hwuang, Mirabela Rusu, Sudha Karthigeyan, et al.
Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow for mapping of intensities into a space or representation such that the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding based registration (SERg) method that uses non-linearly embedded representations obtained from independent components of statistical texture maps of the original images to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that can allow for very different looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. 
In this work, SERg is implemented using Demons to allow the algorithm to more effectively register multimodal images. SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted and T2-weighted brain MRI were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%), each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error that is 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg performs better than intensity-based registration, decreasing the root mean squared distance of annotated landmarks in the prostate gland under both the Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggests the utility of parametric eigenvector representations, and hence of SERg, for multimodal image registration.
A constrained registration problem based on Ciarlet-Geymonat stored energy
Ratiba Derfoul, Carole Le Guyader
In this paper, we address the issue of designing a theoretically well-motivated registration model capable of handling large deformations and including geometrical constraints, namely landmark points to be matched, in a variational framework. Since the theory of linear elasticity is unsuitable in this case, as it assumes small strains and the validity of Hooke's law, the introduced functional is based on nonlinear elasticity principles. More precisely, the shapes to be matched are viewed as Ciarlet-Geymonat materials. We demonstrate the existence of minimizers of the related functional minimization problem and prove a convergence result when the number of geometric constraints increases. We then describe and analyze a numerical method of resolution based on the introduction of an associated decoupled problem under inequality constraint, in which an auxiliary variable simulates the Jacobian matrix of the deformation field. A theoretical result of Γ-convergence is established. We then provide preliminary 2D results of the proposed matching model for the registration of mouse brain gene expression data to a neuroanatomical mouse atlas.
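For reference, the Ciarlet-Geymonat stored energy function mentioned above takes, in one standard form (F is the deformation gradient, Cof F its cofactor matrix, and a, b, c, d, e are material parameters; the exact coefficients used by the authors may differ):

```latex
W(F) = a\,\|F\|^{2} + b\,\|\operatorname{Cof} F\|^{2}
       + c\,(\det F)^{2} - d\,\ln(\det F) + e,
\qquad a, b, c, d > 0,\ \det F > 0.
```

The logarithmic term blows up as det F approaches zero, penalizing deformations that compress material to zero volume, which is what makes this energy suited to large-deformation registration where linear elasticity fails.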
Automatic 3D segmentation of spinal cord MRI using propagated deformable models
B. De Leener, J. Cohen-Adad, S. Kadoury
Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by applying the elliptical Hough transform to multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied to the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient and the mean and maximum distance errors. Accuracy and robustness were assessed on 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 ± 0.05 mm (mean absolute distance error) in the cervical region and 0.27 ± 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.
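For reference, the 3D Dice coefficient used in this and several later evaluations can be computed directly from two binary segmentations represented as sets of voxel coordinates (a minimal pure-Python sketch):

```python
def dice_coefficient(seg_a, seg_b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary segmentations,
    each given as a set of (x, y, z) voxel coordinates."""
    if not seg_a and not seg_b:
        return 1.0  # two empty segmentations agree perfectly
    return 2.0 * len(seg_a & seg_b) / (len(seg_a) + len(seg_b))

a = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}
b = {(1, 0, 0), (2, 0, 0), (3, 0, 0)}
overlap = dice_coefficient(a, b)  # 2 shared voxels out of 6 total
```

A Dice value of 1.0 means perfect overlap; 0.93, as reported here, indicates that the automated and reference segmentations share the overwhelming majority of their voxels.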
Interactive approach to segment organs at risk in radiotherapy treatment planning
Jose Dolz, Hortense A. Kirisli, Romain Viard, et al.
Accurate delineation of organs at risk (OAR) is required for radiation treatment planning (RTP), but it is a very time-consuming and tedious task. The clinical use of image-guided radiation therapy (IGRT) is becoming increasingly popular, which increases the need for (semi-)automatic methods for delineating the OAR. In this work, an interactive segmentation approach to delineate OAR is proposed and validated. The method is based on the combination of the watershed transformation, which groups small areas of similar intensities into homogeneous labels, and a graph cuts approach, which uses these labels to create the graph. Segmentation information can be added in any view (axial, sagittal, or coronal), making the interaction with the algorithm easy and fast. Subsequently, this information is propagated within the whole volume, providing a spatially coherent result. Manual delineations made by experts of 6 OAR (lungs, kidneys, liver, spleen, heart, and aorta) over a set of 9 computed tomography (CT) scans were used as the reference standard to validate the proposed approach. With a maximum of 4 interactions, a Dice similarity coefficient (DSC) higher than 0.87 was obtained, which demonstrates that, with the proposed segmentation approach, only a few interactions are required to achieve results similar to those obtained manually. The integration of this method into the RTP process may save a considerable amount of time and reduce the annotation complexity.
Auxiliary anatomical labels for joint segmentation and atlas registration
Tobias Gass, Gabor Szekely, Orcun Goksel
This paper studies improving joint segmentation and registration by introducing auxiliary labels for anatomy that has similar appearance to the target anatomy while not being part of that target. Such auxiliary labels help avoid false-positive labelling of non-target anatomy by resolving ambiguity. A known registration of a segmented atlas can help identify where a target segmentation should lie. Conversely, segmentations of anatomy in two images can help them be better registered. Joint segmentation and registration is thus a method that can leverage information from both registration and segmentation to help one another, and it has received increasing attention recently in the literature. Often, merely a single organ of interest is labelled in the atlas. In the presence of other anatomical structures with similar appearance, this leads to ambiguity in intensity-based segmentation; for example, when segmenting individual bones in CT images where other bones share the same intensity profile. To alleviate this problem, we introduce automatic generation of additional labels in atlas segmentations, by marking similar-appearance non-target anatomy with an auxiliary label. Information from the auxiliary-labeled atlas segmentation is then incorporated by using a novel coherence potential, which penalizes differences between the deformed atlas segmentation and the target segmentation estimate. We validated this on a joint segmentation-registration approach that iteratively alternates between registering an atlas and segmenting the target image to find a final anatomical segmentation. The results show that automatic auxiliary labelling outperforms the same approach using single-label atlases, for both mandibular bone segmentation in 3D CT and corpus callosum segmentation in 2D MRI.
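The coherence potential described above penalizes disagreement between the deformed atlas labels and the current target estimate. A toy version over flattened label arrays (0 = background, 1 = target anatomy, 2 = auxiliary similar-appearance anatomy; the encoding and names are ours, not the paper's):

```python
def coherence_penalty(deformed_atlas_labels, target_labels):
    """Number of voxels whose deformed-atlas label disagrees with the
    current target segmentation estimate; lower is more coherent."""
    return sum(a != t for a, t in zip(deformed_atlas_labels, target_labels))

atlas  = [0, 1, 1, 2, 2, 0]
target = [0, 1, 2, 2, 0, 0]  # two voxels disagree with the atlas
penalty = coherence_penalty(atlas, target)  # -> 2
```

Because the auxiliary class 2 participates in the comparison, a target voxel wrongly labelled as the organ of interest over similar-appearance anatomy is penalized rather than silently accepted.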
Improving accuracy in coronary lumen segmentation via explicit calcium exclusion, learning-based ray detection and surface optimization
Felix Lugauer, Jingdan Zhang, Yefeng Zheng, et al.
Invasive cardiac angiography (catheterization) is still the standard in clinical practice for diagnosing coronary artery disease (CAD) but it involves a high amount of risk and cost. New generations of CT scanners can acquire high-quality images of coronary arteries which allow for an accurate identification and delineation of stenoses. Recently, computational fluid dynamics (CFD) simulation has been applied to coronary blood flow using geometric lumen models extracted from CT angiography (CTA). The computed pressure drop at stenoses proved to be indicative for ischemia-causing lesions, leading to non-invasive fractional flow reserve (FFR) derived from CTA. Since the diagnostic value of non-invasive procedures for diagnosing CAD relies on an accurate extraction of the lumen, a precise segmentation of the coronary arteries is crucial. As manual segmentation is tedious, time-consuming and subjective, automatic procedures are desirable. We present a novel fully-automatic method to accurately segment the lumen of coronary arteries in the presence of calcified and non-calcified plaque. Our segmentation framework is based on three main steps: boundary detection, calcium exclusion and surface optimization. A learning-based boundary detector enables a robust lumen contour detection via dense ray-casting. The exclusion of calcified plaque is assured through a novel calcium exclusion technique which allows us to accurately capture stenoses of diseased arteries. The boundary detection results are incorporated into a closed set formulation whose minimization yields an optimized lumen surface. On standardized tests with clinical data, a segmentation accuracy is achieved which is comparable to clinical experts and superior to current automatic methods.
Surface-based reconstruction and diffusion MRI in the assessment of gray and white matter damage in multiple sclerosis
Matteo Caffini, Niels Bergsland, Marcella Laganà, et al.
Despite advances in the application of nonconventional MRI techniques in furthering the understanding of multiple sclerosis pathogenic mechanisms, there are still many unanswered questions, such as the relationship between gray and white matter damage. We applied a combination of advanced surface-based reconstruction and diffusion tensor imaging techniques to address this issue. We found significant relationships between white matter tract integrity indices and corresponding cortical structures. Our results suggest a direct link between damage in white and gray matter and contribute to the notion of gray matter loss relating to clinical disability.
Uterus segmentation in dynamic MRI using LBP texture descriptors
R. Namias, M.-E. Bellemare, M. Rahim, et al.
Pelvic floor disorders cover pathologies whose physiopathology is not well understood, yet their prevalence increases with an ageing population. Within the context of a project aiming at modelling the dynamics of pelvic organs, we have developed an efficient segmentation process that relieves the radiologist of a tedious image-by-image analysis. Starting from a first contour delineating the uterus-vagina set, the organ border is tracked along a dynamic MRI sequence. The process combines movement prediction, local intensity and texture analysis, and active contour geometry control. Movement prediction provides a contour initialization for the next image in the sequence. Intensity analysis provides image-based local contour detection, enhanced by local binary pattern (LBP) texture descriptors. Geometry control prohibits self-intersections and smoothes the contour. Results show the efficiency of the method on images produced in clinical routine.
Robust automated lymph node segmentation with random forests
David Allen, Le Lu, Jianhua Yao, et al.
Enlarged lymph nodes may indicate the presence of illness; therefore, identification and measurement of lymph nodes provide essential biomarkers for diagnosing disease. Accurate automatic detection and measurement of lymph nodes can assist radiologists with better repeatability and quality assurance, but is challenging because lymph nodes are often very small and have highly variable shapes. In this paper, we propose to tackle this problem via supervised statistical learning-based robust voxel labeling, specifically the random forest algorithm. Random forest employs an ensemble of decision trees that are trained on labeled multi-class data to recognize the data features, and is adopted here to handle low-level image features sampled and extracted from 3D medical scans. We exploit three types of image features (intensity, order-1 contrast, and order-2 contrast) and evaluate their effectiveness in a random forest feature selection setting. The trained forest can then be applied to unseen data by voxel scanning via sliding windows (11×11×11), assigning a class label and class-conditional probability to each unlabeled voxel at the center of the window. Voxels from the manually annotated lymph nodes in a CT volume are treated as the positive class; background non-lymph-node voxels as negatives. We show that the random forest algorithm can be adapted to perform the voxel labeling task accurately and efficiently. The experimental results are very promising, with AUCs (area under curve) of the training and validation ROC (receiver operating characteristic) of 0.972 and 0.959, respectively. The visualized voxel labeling results also confirm the validity.
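To make the voxel-labeling setup concrete, here is a sketch of extracting a centre intensity and a simple first-order-contrast feature from a sliding window over a nested-list volume (pure Python; the paper's actual feature definitions may differ, and a real 11×11×11 window corresponds to r = 5):

```python
def window_features(volume, cx, cy, cz, r):
    """Features for the (2r+1)^3 window centred at (cx, cy, cz):
    the centre intensity and a mean-minus-centre contrast."""
    vals = [volume[x][y][z]
            for x in range(cx - r, cx + r + 1)
            for y in range(cy - r, cy + r + 1)
            for z in range(cz - r, cz + r + 1)]
    centre = volume[cx][cy][cz]
    mean = sum(vals) / len(vals)
    return [centre, mean - centre]

# 3x3x3 toy volume: uniform background with one bright centre voxel.
vol = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
vol[1][1][1] = 28.0
feats = window_features(vol, 1, 1, 1, 1)
```

Each voxel's feature vector would then be fed to the trained forest, which returns a class label and a class-conditional probability for that window's centre.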
Spatially aware expectation maximization (SpAEM): application to prostate TRUS segmentation
Mahdi Orooji, Rachel Sparks, B. Nicolas Bloch, et al.
In this paper we introduce Spatially Aware Expectation Maximization (SpAEM), a new parameter estimation method which incorporates information pertaining to spatial prior probability into the traditional expectation-maximization framework. For estimating the parameters of a given class, the spatial prior probability allows us to weight the contribution of any pixel based on the probability of that pixel belonging to the class of interest. In this paper we evaluate SpAEM for the problem of prostate capsule segmentation in transrectal ultrasound (TRUS) images. In a cohort of 6 patients, SpAEM qualitatively and quantitatively outperforms traditional EM in distinguishing the foreground (prostate) from background (non-prostate) regions by around 45% in terms of the Sørensen-Dice overlap measure, when compared against expert annotations. The variance of the estimated parameters, measured via the Cramér-Rao lower bound, suggests that SpAEM yields unbiased estimates. Finally, on a synthetic TRUS image, the Cramér-von Mises (CVM) criterion shows that SpAEM improves the estimation accuracy by around 51% and 88% for prostate and background, respectively, as compared to traditional EM.
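The spatial weighting idea can be illustrated with the M-step of a Gaussian mixture: each pixel's contribution to a class's mean and variance is scaled by its spatial prior for that class. A simplified 1D sketch, not the authors' implementation:

```python
def weighted_gaussian_params(intensities, spatial_prior):
    """Spatially weighted estimates of a class's Gaussian mean and variance;
    with a uniform prior this reduces to the ordinary EM M-step."""
    w_total = sum(spatial_prior)
    mean = sum(w * x for w, x in zip(spatial_prior, intensities)) / w_total
    var = sum(w * (x - mean) ** 2
              for w, x in zip(spatial_prior, intensities)) / w_total
    return mean, var

pixels = [0.0, 10.0]
uniform = weighted_gaussian_params(pixels, [1.0, 1.0])  # plain EM estimate
skewed = weighted_gaussian_params(pixels, [3.0, 1.0])   # prior favors pixel 0
```

Pixels that are spatially unlikely to belong to the class contribute little to its parameters, which is what keeps background intensities from biasing the prostate class estimate.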
Combining watershed and graph cuts methods to segment organs at risk in radiotherapy
Jose Dolz, Hortense A. Kirisli, Romain Viard, et al.
Computer-aided segmentation of anatomical structures in medical images is a valuable tool for efficient radiation therapy planning (RTP). As delineation errors highly affect radiation oncology treatment, it is crucial to delineate geometric structures accurately. In this paper, a semi-automatic segmentation approach for computed tomography (CT) images, based on the watershed and graph cuts methods, is presented. The watershed pre-segmentation groups small areas of similar intensities into homogeneous labels, which are subsequently used as input for the graph cuts algorithm. This methodology does not require prior knowledge of the structure to be segmented; even so, it performs well with complex shapes and low intensity contrast. The presented method also allows the user to add foreground and background strokes in any of the three standard orthogonal views (axial, sagittal, or coronal), making the interaction with the algorithm easy and fast. The segmentation information is then propagated within the whole volume, providing a spatially coherent result. The proposed algorithm has been evaluated on 9 CT volumes, by comparing its segmentation performance over several organs (lungs, liver, spleen, heart, and aorta) to manual delineations from experts. A Dice coefficient higher than 0.89 was achieved in every case, which demonstrates that the proposed approach works well for all the anatomical structures analyzed. Given the quality of the results, the introduction of the proposed approach into the RTP process will be a helpful tool for organs at risk (OAR) segmentation.
Interactive segmentation of tongue contours in ultrasound video sequences using quality maps
Sarah Ghrenassia, Lucie Ménard, Catherine Laporte
Ultrasound (US) imaging is an effective and non-invasive way of studying the tongue motions involved in normal and pathological speech, and the results of US studies are of interest for the development of new strategies in speech therapy. State-of-the-art tongue shape analysis techniques based on US images depend on semi-automated tongue segmentation and tracking techniques. Recent work has mostly focused on improving the accuracy of the tracking techniques themselves. However, occasional errors remain inevitable, regardless of the technique used, and the tongue tracking process must thus be supervised by a speech scientist who corrects these errors manually or semi-automatically. This paper proposes an interactive framework to facilitate this process. In this framework, the user is guided towards potentially problematic portions of the US image sequence by a segmentation quality map that is based on the normalized energy of an active contour model and is automatically produced during tracking. When a problematic segmentation is identified, corrections to the segmented contour can be made on one image and propagated both forward and backward in the problematic subsequence, thereby improving the user experience. The interactive tools were tested in combination with two different tracking algorithms. Preliminary results illustrate the potential of the proposed framework, suggesting that it generally reduces user interaction time with little change in segmentation repeatability.
Automatic FDG-PET-based tumor and metastatic lymph node segmentation in cervical cancer
Dídac R. Arbonès, Henrik G. Jensen, Annika Loft, et al.
Treatment of cervical cancer, one of the three most commonly diagnosed cancers worldwide, often relies on delineations of the tumour and metastases based on PET imaging using the contrast agent 18F-Fluorodeoxyglucose (FDG). We present a robust automatic algorithm for segmenting the gross tumour volume (GTV) and metastatic lymph nodes in such images. As the cervix is located next to the bladder and FDG is washed out through the urine, the PET-positive GTV and the bladder cannot be easily separated. Our processing pipeline starts with a histogram-based region of interest detection followed by level set segmentation. After that, morphological image operations combined with clustering, region growing, and nearest neighbour labelling allow removal of the bladder and identification of the tumour and metastatic lymph nodes. The proposed method was applied to 125 patients and no failure could be detected by visual inspection. We compared our segmentations with results from manual delineations of corresponding MR and CT images, showing that the detected GTV lies at least 97.5% within the MR/CT delineations. We conclude that the algorithm has a very high potential for substituting the tedious manual delineation of PET-positive areas.
MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes
Arie Nakhmani, Ron Kikinis, Allen Tannenbaum
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of the MRI volume to a probability space based on an on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. Necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Real-time 3D medical structure segmentation using fast evolving active contours
Xiaotao Wang, Qiang Wang, Zhihui Hao, et al.
Segmentation of 3D medical structures in real time is an important but intractable problem for clinical applications due to the high computation and memory cost. We propose a novel fast-evolving active contour model in this paper to reduce the computation and memory requirements. The basic idea is to evolve the compactly represented dynamic contour interface as far as possible per iteration. Our method encodes the zero level set via a single unordered list and evolves the list recursively by adding activated adjacent neighbors to its end, so that active parts of the zero level set move far enough per iteration along with list scanning. To guarantee the robustness of this process, a new approximation of curvature for the integer-valued level set is proposed as the internal force to penalize the list smoothness and restrain continual list growth. Besides, the number of list scans is also used as a hard upper constraint to control the list growth. Together with the internal force, efficient regional and constrained external forces, whose computations are performed only along the unordered list, are also provided to attract the list toward object boundaries. In particular, our model calculates the regional force only in a narrow band outside the zero level set and can efficiently segment multiple regions simultaneously as well as handle backgrounds with multiple components. Compared with state-of-the-art algorithms, our algorithm is one order of magnitude faster with similar segmentation accuracy and can achieve real-time performance for the segmentation of 3D medical structures on a standard PC.
Finding seed points for organ segmentation using example annotations
Ranveer Joyseeree, Henning Müller
Organ segmentation is important in diagnostic medicine to make current decision-support tools more effective and efficient; performing it automatically can save time and labor. In this paper, a method to perform automatic identification of seed points for the segmentation of organs in three-dimensional (3D) non-annotated, full-body magnetic resonance (MR) and computed tomography (CT) volumes is presented. It uses 3D MR and CT acquisitions along with corresponding organ annotations from the Visual Concept Extraction Challenge in Radiology (VISCERAL) benchmark. A training MR or CT volume is first registered affinely with a carefully chosen reference volume. The registration transform obtained is then used to warp the annotations accompanying that training volume. The process is repeated for several other training volumes. For each organ of interest, an overlap volume is created by merging the warped training annotations corresponding to it. Next, a 3D probability map for organ location on the reference volume is derived from each overlap volume. The centroid of each probability map is determined; it represents a suitable seed point for segmentation of that organ. Afterwards, the reference volume can be affinely mapped onto any non-annotated volume and the mapping applied to the pre-computed volume containing the centroid and the probability distribution for an organ of interest. Segmentation of the non-annotated volume may then be started using existing region-growing segmentation algorithms, with the warped centroid as the seed point and the warped probability distribution as an aid to the stopping criterion. The approach yields very promising results.
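The seed-point step above reduces to computing the centroid of a probability map. A minimal sketch with the map stored as a dict from voxel coordinates to probabilities (the representation and names are ours):

```python
def centroid_seed(prob_map):
    """Probability-weighted centroid of a 3D map {(x, y, z): p},
    usable as the seed point for region growing."""
    total = wx = wy = wz = 0.0
    for (x, y, z), p in prob_map.items():
        total += p
        wx += p * x
        wy += p * y
        wz += p * z
    return (wx / total, wy / total, wz / total)

# Two equally likely voxels -> the seed lands midway between them.
seed = centroid_seed({(0, 0, 0): 0.5, (4, 2, 0): 0.5})
```

Because the centroid is an average over all warped annotations, it tends to fall well inside the organ even when individual training annotations disagree at the boundary.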
Atherosclerotic carotid lumen segmentation in combined B-mode and contrast enhanced ultrasound images
Zeynettin Akkus, Diego D. B. Carvalho, Stefan Klein, et al.
Patients with carotid atherosclerotic plaques carry an increased risk of cardiovascular events such as stroke. Ultrasound has been employed as a standard for the diagnosis of carotid atherosclerosis. To assess atherosclerosis, the intima contour of the carotid artery lumen should be accurately outlined. For this purpose, we use simultaneously acquired side-by-side longitudinal contrast-enhanced ultrasound (CEUS) and B-mode ultrasound (BMUS) images and exploit the information in the two imaging modalities for accurate lumen segmentation. First, nonrigid motion compensation is performed on both the BMUS and CEUS image sequences, followed by averaging over the 150 time frames to produce an image with improved signal-to-noise ratio (SNR). After that, we segment the lumen from these images using a novel method based on dynamic programming, which uses the joint histogram of the CEUS and BMUS pair of images to distinguish between background, lumen, tissue, and artifacts. Finally, the obtained lumen contour in the improved-SNR mean image is transformed back to each time frame of the original image sequence. Validation was done by comparing manual lumen segmentations of two independent observers with automated lumen segmentations in the improved-SNR images of 9 carotid arteries from 7 patients. The root mean square error between the two observers was 0.17 ± 0.10 mm, and between the automated segmentation and the average of the two observers' manual segmentations it was 0.19 ± 0.06 mm. In conclusion, we present a robust and accurate carotid lumen segmentation method which overcomes the complexity of anatomical structures, noise in the lumen, artifacts, and echolucent plaques by exploiting the information in this combined imaging modality.
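The dynamic-programming step relies on a joint histogram of co-located CEUS/BMUS intensities. A compact sketch (pure Python; the bin count and intensity range are illustrative, not taken from the paper):

```python
def joint_histogram(img_a, img_b, bins, max_val):
    """2D joint histogram of co-located pixel pairs from two registered
    images with intensities in [0, max_val]."""
    hist = [[0] * bins for _ in range(bins)]
    scale = bins / (max_val + 1)
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            hist[int(a * scale)][int(b * scale)] += 1
    return hist

bmus = [[0, 255], [128, 255]]
ceus = [[0, 255], [255, 255]]
h = joint_histogram(bmus, ceus, 2, 255)
```

Each tissue class (background, lumen, tissue, artifact) occupies a characteristic region of this 2D intensity space, which is what lets the joint distribution separate classes that overlap in either modality alone.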
Shape-constrained multi-atlas segmentation of spleen in CT
Zhoubing Xu, Bo Li, Swetasudha Panda, et al.
Spleen segmentation on clinically acquired CT data is a challenging problem given the complexity and variability of abdominal anatomy. Multi-atlas segmentation is a potential method for robust estimation of spleen segmentations, but can be negatively impacted by registration errors. Although labeled atlases explicitly capture information related to feasible organ shapes, multi-atlas methods have largely used this information implicitly through registration. We propose to integrate a level set shape model into the traditional label fusion framework to create a shape-constrained multi-atlas segmentation framework. Briefly, we (1) adapt two alternative atlas-to-target registrations to obtain loose bounds on the inner and outer boundaries of the spleen shape, (2) project the fusion estimate to registered shape models, and (3) convert the projected shape into shape priors. With the constraint of the shape prior, our proposed method offers a statistically significant improvement in spleen labeling accuracy, with an increase in DSC of 0.06, a decrease in symmetric mean surface distance of 4.01 mm, and a decrease in symmetric Hausdorff surface distance of 23.21 mm when compared to a locally weighted vote (LWV) method.
Multi-atlas segmentation with particle-based group-wise image registration
We propose a novel multi-atlas segmentation method that employs a group-wise image registration method for brain segmentation on rodent magnetic resonance (MR) images. The core element of the proposed segmentation is the use of a particle-guided image registration method that extends the concept of particle correspondence into the volumetric image domain. The registration method performs a group-wise image registration that simultaneously registers a set of images toward the space defined by the average of particles. The particle-guided image registration method is robust to low signal-to-noise-ratio images as well as to the differing sizes and shapes observed in the developing rodent brain. Also, the use of an implicit common reference frame can prevent the potential bias induced by the use of a single template in the segmentation process. We show that the particle-guided image registration method can be naturally extended to a novel multi-atlas segmentation method, and we improve the registration method to explicitly use the provided template labels as an additional constraint. In the experiments, we show that our segmentation algorithm provides higher accuracy through multi-atlas label fusion and greater stability than pair-wise image registration. A comparison with a previous group-wise registration method is provided as well.
Development of automated extraction method of biliary tract from abdominal CT volumes based on local intensity structure analysis
Kusuto Koga, Yuichiro Hayashi, Tomoaki Hirose, et al.
In this paper, we propose an automated biliary tract extraction method for abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No method has previously been reported for automated extraction of the biliary tract from common contrast-enhanced CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has linear structures whose intensities are low in CT volumes. We use a dark linear structure enhancement (DLSE) filter, based on local intensity structure analysis using the eigenvalues of the Hessian matrix, for IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes. The average Dice coefficient of the extraction results was 66.7%.
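The Hessian-eigenvalue idea behind a dark-linear-structure filter can be illustrated compactly in 2D. This is a generic sketch in the spirit of the description above, not the authors' DLSE filter (which operates on 3D CT volumes; scale and response function here are illustrative assumptions): a dark line produces one large positive Hessian eigenvalue across the line.

```python
# Illustrative 2D dark-line enhancement via Hessian eigenvalue analysis
# (assumed parameters; the paper's DLSE filter works in 3D).
import numpy as np
from scipy.ndimage import gaussian_filter

def dark_line_filter(image, sigma=2.0):
    """For a dark line, the Hessian has a large positive eigenvalue
    in the direction across the line; return that response."""
    Hxx = gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2 (columns)
    Hyy = gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2 (rows)
    Hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Larger eigenvalue of the 2x2 symmetric Hessian at every pixel.
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    return np.maximum(l1, 0)  # keep only dark-structure responses

img = np.full((64, 64), 100.0)
img[:, 30:33] -= 60.0              # a dark vertical stripe
resp = dark_line_filter(img)
print(resp[32, 31] > resp[32, 5])  # stripe responds stronger than background
```

In 3D, the same analysis uses all three Hessian eigenvalues to distinguish tubes from sheets and blobs.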
Automatic detection of mitochondria from electron microscope tomography images: a curve fitting approach
Mitochondria are sub-cellular components mainly responsible for the synthesis of adenosine triphosphate (ATP) and involved in the regulation of several cellular activities such as apoptosis. The relation between some common diseases of aging and the morphological structure of mitochondria is supported by a growing number of studies. Electron microscope tomography (EMT) provides high-resolution images of the 3D structure and internal arrangement of mitochondria. Studies that aim to reveal the correlation between mitochondrial structure and function require special software tools for manual segmentation of mitochondria from EMT images. Automated detection and segmentation of mitochondria is a challenging problem due to the variety of mitochondrial structures and the presence of noise, artifacts, and other sub-cellular structures. Segmentation methods reported in the literature require human interaction to initialize the algorithms. In our previous study, we focused on 2D detection and segmentation of mitochondria using an ellipse detection method. In this study, we propose a new approach for automatic detection of mitochondria from EMT images. First, a preprocessing step was applied to reduce the effect of non-mitochondrial sub-cellular structures. Then, a curve fitting approach was applied, using a Hessian-based ridge detector to extract membrane-like structures and a curve-growing scheme. Finally, an automatic algorithm was employed to detect mitochondria, which are represented by a subset of the detected curves. The results show that the proposed method is more robust at detecting mitochondria in consecutive EMT slices than our previous automatic method.
Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting
Min Jin Lee, Helen Hong, Jin Wook Chung
We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments according to the anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounds and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramina of the cervical vertebrae, the circular model is extended along the z-axis into a cylindrical model to take into account additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. In our experiments, the proposed method provides accurate results without bone artifacts or eroded vessels in the cervical vertebrae.
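A circular cross-section model can be fit to candidate boundary points by linear least squares. The algebraic (Kåsa) fit below is one plausible way to do this; the authors' exact fitting procedure is not specified in the abstract, so treat this as an illustrative sketch.

```python
# Hedged sketch: algebraic (Kåsa) least-squares circle fit to 2D points,
# one simple way to realize the circular model fitting described above.
import numpy as np

def fit_circle(x, y):
    """Fit x^2 + y^2 + D*x + E*y + F = 0 by linear least squares;
    return center (cx, cy) and radius r."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Noiseless test circle: center (3, -1), radius 2.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x = 3.0 + 2.0 * np.cos(theta)
y = -1.0 + 2.0 * np.sin(theta)
cx, cy, r = fit_circle(x, y)
print(round(cx, 3), round(cy, 3), round(r, 3))  # → 3.0 -1.0 2.0
```

Extending the model along the z-axis to a cylinder amounts to fitting a shared center and radius across several adjacent slices, which stabilizes the fit where a single slice's boundary is ambiguous.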
Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging
Md. Murad Hossain, Khalid AlMuhanna, Limin Zhao, et al.
3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but have not been applied to plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance-regularized level set method with edge- and region-based energies to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA, and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an inter-slice distance of 4 mm. We propose a novel, user-defined stopping-surface-based energy to prevent the evolving surface from leaking across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo gold-standard boundary was formed from manual segmentations by three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and modified HD (MHD) were used to compare the algorithm results against the pseudo gold standard on 1205 cross-sectional slices of the five 3D US image sets. The algorithm showed good agreement with the pseudo gold-standard boundary, with mean DSC of 93.3% (AWB) and 89.82% (LIB), mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB), and mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is a first step towards full characterization of 3D plaque progression and longitudinal monitoring.
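The validation metrics named above are standard and easy to state precisely. The minimal implementations below are illustrative (the authors' evaluation code is not given): Dice on binary masks, and the modified Hausdorff distance as the maximum of the two mean directed distances between boundary point sets.

```python
# Sketch of the validation metrics: Dice similarity coefficient and
# modified Hausdorff distance (MHD), as commonly defined.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def modified_hausdorff(P, Q):
    """MHD: max of the two mean directed distances between point sets
    P and Q (each an (n, d) array of boundary points)."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

a = np.zeros((10, 10)); a[2:8, 2:8] = 1
b = np.zeros((10, 10)); b[3:8, 2:8] = 1
print(round(dice(a, b), 3))          # → 0.909
P = np.array([[0.0, 0.0], [1.0, 0.0]])
Q = np.array([[0.0, 1.0], [1.0, 1.0]])
print(modified_hausdorff(P, Q))      # → 1.0
```

The (unmodified) HD replaces the means with maxima, which makes it more sensitive to single outlier points; reporting both, as the paper does, separates typical from worst-case boundary error.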
Bladder segmentation in MR images with watershed segmentation and graph cut algorithm
Prostate and cervix cancer diagnosis and treatment planning based on MR images benefit from their superior soft-tissue contrast compared to CT images. For these images, automatic delineation of the prostate or cervix and of organs at risk such as the bladder is highly desirable. This paper describes a method for bladder segmentation based on a watershed transform on high image-gradient values and gray-value valleys, together with the classification of watershed regions into bladder contents and tissue by a graph cut algorithm. The obtained results are superior to those of a simple region-after-region classification.
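The first stage of such a pipeline, a marker-based watershed on the gradient magnitude followed by region classification, can be sketched as follows. This is a toy illustration under stated assumptions: the image, seed placement, and intensity threshold are synthetic, and a simple mean-intensity rule stands in for the paper's graph cut step.

```python
# Toy sketch: watershed on the gradient magnitude, then classify each
# watershed region by mean gray value (a stand-in for the graph cut).
import numpy as np
from scipy import ndimage as ndi

# Synthetic slice: bright tissue surrounding a dark, fluid-like region.
img = np.full((40, 40), 180.0)
img[10:30, 10:30] = 40.0
grad = ndi.gaussian_gradient_magnitude(img, sigma=1.0)
grad8 = (255 * grad / grad.max()).astype(np.uint8)

markers = np.zeros(img.shape, dtype=np.int16)
markers[20, 20] = 1    # seed inside the dark region
markers[2, 2] = 2      # seed in the surrounding tissue
labels = ndi.watershed_ift(grad8, markers)

# Classify each watershed region as "bladder contents" if its mean
# gray value is low (threshold 100 is an illustrative assumption).
bladder = np.zeros(img.shape, dtype=bool)
for lab in np.unique(labels):
    region = labels == lab
    if img[region].mean() < 100:
        bladder |= region
print(bladder[20, 20], bladder[2, 2])
```

Classifying whole watershed regions rather than individual voxels is what lets the later graph cut operate on a small region adjacency graph instead of the full voxel grid.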
Neurosphere segmentation in brightfield images
Jierong Cheng, Wei Xiong, Shue Ching Chia, et al.
The challenges of segmenting neurospheres (NSPs) from brightfield images include uneven background illumination (vignetting), low contrast, and a shadow-casting appearance near the well wall. We propose a pipeline for neurosphere segmentation in brightfield images, focusing on shadow-casting removal. First, we remove vignetting by creating a synthetic blank-field image from a set of brightfield images of the whole well. Then, radial line integration is proposed to remove the shadow-casting and thereby facilitate automatic segmentation. Furthermore, a weighted bi-directional decay function is introduced to prevent undesired gradient effects of line integration on NSPs without shadow-casting. Afterward, a multiscale Laplacian of Gaussian (LoG) filter and a localized region-based level set are used to detect the NSP boundaries. Experimental results show that our proposed radial line integration (RLI) method achieves higher detection accuracy than existing methods in terms of precision, recall, and F-score, with less computational time.
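The first step, building a synthetic blank field from many fields of view and dividing it out, can be sketched compactly. The per-pixel median estimator below is an illustrative choice (the abstract does not specify the estimator), and the synthetic vignetting profile and object placement are assumptions.

```python
# Hedged sketch of vignetting removal: estimate a synthetic blank-field
# image as the per-pixel median over many fields of view (objects are
# sparse, so the median recovers the illumination), then divide it out.
import numpy as np

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
vignette = 1.0 - 0.5 * ((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 16 ** 2)

# A stack of fields of view: same vignetting, one small dark object each.
fields = []
for _ in range(20):
    scene = np.full((32, 32), 200.0)
    r, c = rng.integers(4, 28, size=2)
    scene[r - 2:r + 2, c - 2:c + 2] = 80.0
    fields.append(scene * vignette)
stack = np.stack(fields)

blank = np.median(stack, axis=0)          # ≈ illumination * background
corrected = stack[0] / np.maximum(blank, 1e-6)

# After correction the background is flat: two background pixels agree.
print(abs(corrected[1, 1] - corrected[16, 1]) < 0.05)
```

Flattening the illumination first is what makes the later LoG and level set steps usable with global, rather than spatially varying, parameters.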
3D pre- versus post-season comparisons of surface and relative pose of the corpus callosum in contact sport athletes
Yi Lao, Niharika Gajawelli, Lauren Haas, et al.
Mild traumatic brain injury (MTBI), or concussive injury, affects 1.7 million Americans annually, of which 300,000 cases are due to recreational activities and contact sports such as football, rugby, and boxing [1]. Finding the neuroanatomical correlates of brain TBI non-invasively and precisely is crucial for diagnosis and prognosis. Several studies have shown the influence of traumatic brain injury (TBI) on the integrity of brain white matter (WM) [2-4]. The vast majority of these works focus on athletes with diagnosed concussions. However, in contact sports, athletes are subjected to repeated hits to the head throughout the season, and we hypothesize that these have an influence on white matter integrity. In particular, the corpus callosum (CC), as a small structure connecting the brain hemispheres, may be particularly affected by the torques generated by collisions, even in the absence of full-blown concussions. Here, we use combined surface-based morphometry and relative pose analyses, applied to the point distribution model (PDM) of the CC, to investigate TBI-related brain structural changes between 9 pre-season and 9 post-season contact-sport athlete MRIs. The surface-based morphometry analysis looks at surface area and thickness changes between the two groups, while the relative pose analysis detects the relative translation, rotation, and scale between them.
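Recovering the relative translation, rotation, and scale between two corresponded point sets is a similarity Procrustes problem. The sketch below is a generic Umeyama-style formulation, not the authors' exact pipeline, shown in 2D for brevity (the CC point distribution model is 3D).

```python
# Illustrative relative-pose computation between corresponded point
# sets: recover scale, rotation, translation via SVD (Kabsch + scale).
import numpy as np

def relative_pose(P, Q):
    """Return s, R, t with Q ≈ s * P @ R.T + t (rows are points)."""
    muP, muQ = P.mean(axis=0), Q.mean(axis=0)
    A, B = P - muP, Q - muQ
    U, S, Vt = np.linalg.svd(B.T @ A)
    d = np.sign(np.linalg.det(U @ Vt))          # reflection guard
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (A ** 2).sum() # optimal scale
    t = muQ - s * muP @ R.T
    return s, R, t

P = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1]])
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
Q = 2.0 * P @ R_true.T + np.array([3.0, -1.0])  # known pose change
s, R, t = relative_pose(P, Q)
print(round(s, 3), np.allclose(R, R_true), np.allclose(t, [3.0, -1.0]))
```

Separating pose (where the structure sits) from shape (how its surface deforms) is exactly what lets the two analyses in the abstract report complementary effects.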
A versatile tomographic forward- and back-projection approach on multi-GPUs
Andreas Fehringer, Tobias Lasser, Irene Zanette, et al.
Iterative tomographic reconstruction is attracting growing interest for x-ray computed tomography as parallel high-performance computing finds its way into compact and affordable computing systems in the form of GPU devices. However, for high-resolution x-ray computed tomography, e.g. measured at synchrotron facilities, the limited memory and bandwidth of such devices are soon stretched to their limits. In particular, keeping the core part of tomographic reconstruction, the projectors, both versatile and fast for large datasets is challenging. We therefore demonstrate a multi-GPU-accelerated forward- and backprojector based on projection matrices that takes advantage of two concepts for distributing large datasets into smaller units. The first concept splits the volume into chunks of slices perpendicular to the axis of rotation, yielding many perfectly independent tasks that can be solved by distinct GPU devices. A novel, ultrafast precalculation kernel prevents unnecessary data transfers for cone-beam geometries. Datasets with a great number of projections can additionally take advantage of the second concept, a split into angular wedges. We demonstrate the portability of our projectors to multiple devices and the associated speedup on a high-resolution liver sample measured at the synchrotron. With our splitting approaches, we gained speedup factors of 3.5-3.9 on a system with four GPUs and 7.5-8.0 with eight. The computing time for our test example decreased from 23.5 s to 2.94 s in the latter case.
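The first distribution concept, slice chunking, is simple to state concretely. The sketch below is a host-side illustration only (device work is simulated by a stand-in reduction; the real projectors run as GPU kernels), with the chunking rule an assumed near-equal contiguous split.

```python
# Sketch of slice chunking: split a volume into contiguous slabs
# perpendicular to the rotation axis, one per device, then recombine.
import numpy as np

def split_into_chunks(n_slices, n_devices):
    """Contiguous, near-equal slice ranges, one (start, stop) per device."""
    bounds = np.linspace(0, n_slices, n_devices + 1).astype(int)
    return [(bounds[i], bounds[i + 1]) for i in range(n_devices)]

volume = np.arange(7 * 4 * 4, dtype=np.float32).reshape(7, 4, 4)
chunks = split_into_chunks(volume.shape[0], 4)
print(chunks)  # → [(0, 1), (1, 3), (3, 5), (5, 7)]

# Each "device" processes its slab independently (here: a stand-in sum);
# because slabs are independent, partial results recombine exactly.
partials = [volume[a:b].sum() for a, b in chunks]
assert np.isclose(sum(partials), volume.sum())
```

For parallel-beam (and, with the precalculation step, cone-beam) geometries, each slab's projections depend only on that slab's voxels, which is what makes the tasks "perfectly independent."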
Genomic connectivity networks based on the BrainSpan atlas of the developing human brain
Ahmed Mahfouz, Mark N. Ziats, Owen M. Rennert, et al.
The human brain comprises systems of networks that span the molecular, cellular, anatomic and functional levels. Molecular studies of the developing brain have focused on elucidating networks among gene products that may drive cellular brain development by functioning together in biological pathways. On the other hand, studies of the brain connectome attempt to determine how anatomically distinct brain regions are connected to each other, either anatomically (diffusion tensor imaging) or functionally (functional MRI and EEG), and how they change over development. A global examination of the relationship between gene expression and connectivity in the developing human brain is necessary to understand how the genetic signature of different brain regions instructs connections to other regions. Furthermore, analyzing the development of connectivity networks based on the spatio-temporal dynamics of gene expression provides new insight into the effect of neurodevelopmental disease genes on brain networks. In this work, we construct connectivity networks between brain regions based on the similarity of their gene expression signatures, termed "Genomic Connectivity Networks" (GCNs). Genomic connectivity networks were constructed using data from the BrainSpan Transcriptional Atlas of the Developing Human Brain. Our goal was to understand how the genetic signatures of anatomically distinct brain regions relate to each other across development. We assessed the neurodevelopmental changes in connectivity patterns of brain regions when networks were constructed with genes implicated in the neurodevelopmental disorder autism (autism spectrum disorder; ASD). Using graph theory metrics to characterize the GCNs, we show that ASD-GCNs are relatively less connected later in development, with the cerebellum showing a very distinct expression of ASD-associated genes compared to other brain regions.
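Building a network from expression-signature similarity can be sketched in a few lines. The data, the Pearson-correlation similarity, and the 0.8 edge threshold below are synthetic illustrations (the abstract does not specify the similarity measure or threshold used).

```python
# Minimal sketch of a "genomic connectivity network": regions are nodes;
# an edge joins regions whose expression signatures correlate strongly.
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_genes = 5, 100
base = rng.normal(size=n_genes)
expr = np.empty((n_regions, n_genes))
expr[:3] = base + 0.1 * rng.normal(size=(3, n_genes))  # 3 similar regions
expr[3:] = rng.normal(size=(2, n_genes))               # 2 unrelated regions

corr = np.corrcoef(expr)                  # region-by-region similarity
adj = (corr > 0.8) & ~np.eye(n_regions, dtype=bool)
degree = adj.sum(axis=1)                  # a simple graph-theory metric
print(degree[:3].min() >= 2, degree[3:].max() <= 1)
```

Restricting `expr` to a disease gene set (e.g. ASD-associated genes) and recomputing the network at each developmental stage is the operation the abstract describes for tracking connectivity changes over development.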
Wavelets based algorithm for the evaluation of enhanced liver areas
Matheus Alvarez, Diana Rodrigues de Pina, Guilherme Giacomini, et al.
Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve measuring the maximum diameter of the viable lesion. This paper describes a computational algorithm that measures the maximum diameter of the tumor through the contrast-enhanced area of the lesions. 63 computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Detection and quantification by the algorithm were optimized using the virtual phantom. We then compared the algorithm's estimates of the maximum diameter of the target lesions against radiologist measurements. The computed maximum diameters are in good agreement with the radiologist evaluations, indicating that the algorithm was able to properly detect the tumor limits. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large tumors (diameter > 5 cm), whereas differences smaller than 1.0 cm were found for small tumors. Differences between algorithm and radiologist measurements were small for small tumors, with a trend toward a slight increase for tumors greater than 5 cm in diameter. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.
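Once a lesion region has been segmented, a maximum-diameter measurement reduces to the largest pairwise distance between its pixels. The brute-force sketch below is a simple stand-in for the paper's measurement (mRECIST additionally restricts the measurement to the viable, contrast-enhanced part of the lesion, which is assumed already isolated here).

```python
# Hedged sketch: maximum lesion diameter as the largest pairwise
# distance between foreground pixels of a segmented mask.
import numpy as np

def max_diameter(mask, spacing=1.0):
    """Largest Euclidean distance between any two foreground pixels,
    in physical units given an isotropic pixel spacing."""
    pts = np.argwhere(mask).astype(float) * spacing
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return d.max()

mask = np.zeros((50, 50), dtype=bool)
yy, xx = np.mgrid[0:50, 0:50]
mask[(yy - 25) ** 2 + (xx - 25) ** 2 <= 10 ** 2] = True  # disc, radius 10 px

# With 0.1 cm pixel spacing, the disc's diameter is 2.0 cm.
print(round(max_diameter(mask, spacing=0.1), 2))  # → 2.0
```

For large lesions the quadratic pairwise search is usually restricted to convex-hull vertices of the boundary, which gives the same maximum at far lower cost.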
3D segmentation of masses in DCE-MRI images using FCM and adaptive MRF
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a sensitive imaging modality for the detection of breast cancer. Automated segmentation of breast lesions in DCE-MRI images is challenging due to inherently low signal-to-noise ratios and high inter-patient variability. A novel 3D segmentation method based on FCM and MRF is proposed in this study. In this method, an MR image is first segmented by spatial fuzzy c-means (FCM); Markov random field (MRF) segmentation is then conducted to refine the result. We incorporate the 3D information of the lesion into the MRF segmentation process by using the segmentation results of contiguous slices to constrain each slice's segmentation. At the same time, the membership matrix of the FCM result is used to adaptively adjust the Markov parameters during MRF segmentation. The proposed method was applied to lesion segmentation on 145 breast DCE-MRI examinations (86 malignant and 59 benign cases). Segmentation was evaluated using the traditional overlap rate between the segmented region and a hand-drawn ground truth. The average overlap rates for benign and malignant lesions were 0.764 and 0.755, respectively. We then extracted five features from the segmented region and used an artificial neural network (ANN) to classify malignant from benign cases. The ANN achieved a classification performance, measured by the area under the ROC curve, of AUC = 0.73. The positive and negative predictive values were 0.86 and 0.58, respectively. The results demonstrate that the proposed method not only achieves good segmentation accuracy but also yields reasonable classification performance.
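The core FCM iteration that produces the membership matrix used above can be sketched compactly. This shows only standard fuzzy c-means on intensities; the spatial regularization and the MRF refinement used in the paper are omitted, and the fuzzifier m and data are illustrative assumptions.

```python
# Sketch of the core fuzzy c-means (FCM) iteration: alternate membership
# and centroid updates; the membership matrix u is what the paper then
# feeds into the adaptive MRF step.
import numpy as np

def fcm(values, c=2, m=2.0, n_iter=50, seed=0):
    """Return membership matrix u (c x N) and the c cluster centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                       # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um @ values / um.sum(axis=1)
        d = np.abs(values[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))            # standard FCM update
        u /= u.sum(axis=0)
    return u, centers

# Two intensity populations: dark background and bright lesion-like voxels.
vals = np.concatenate([np.full(100, 30.0), np.full(100, 200.0)])
u, centers = fcm(vals)
bright = int(np.argmax(centers))
print(np.allclose(np.sort(centers), [30.0, 200.0], atol=0.5),
      u[bright, 150] > 0.9)
```

Because u is soft rather than binary, voxels with ambiguous memberships can be given weaker MRF smoothness constraints, which is the adaptivity the abstract describes.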