
Proceedings Paper

Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI
Author(s): Eileen Hwuang; Mirabela Rusu; Sudha Karthigeyan; Shannon C. Agner; Rachel Sparks; Natalie Shih; John E. Tomaszewski; Mark Rosen; Michael Feldman; Anant Madabhushi

Paper Abstract

Multimodal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow intensities to be mapped into a space or representation in which the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding-based registration (SERg) method that uses non-linearly embedded representations, obtained from independent components of statistical texture maps of the original images, to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations in which very different-looking images appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set using ICA to remove redundant information. Spectral embedding generates a new representation by eigendecomposition, from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy.
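Step 3 of the pipeline above (spectral embedding via eigendecomposition) can be sketched as follows. This is a minimal numpy illustration of a generic spectral embedding of per-pixel feature vectors through a normalized graph Laplacian, not the authors' exact formulation; the Gaussian affinity, the choice of sigma, and the toy feature data are all assumptions for illustration.

```python
import numpy as np

def spectral_embedding(features, n_components=3, sigma=1.0):
    """Embed per-pixel feature vectors by eigendecomposition of a
    symmetric normalized graph Laplacian, keeping the eigenvectors
    with the smallest nontrivial eigenvalues."""
    # Pairwise Gaussian affinity between feature vectors
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Eigendecomposition; skip the trivial first eigenvector and keep
    # the next n_components as the new per-pixel representation
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_components + 1]

# Toy example: 16 "pixels", each with 2 texture-derived features
# (in SERg these would come from ICA-reduced texture maps)
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 2))
emb = spectral_embedding(feats, n_components=3)
print(emb.shape)  # (16, 3): each pixel now has a 3-D embedded value
```

Registering the embedded images rather than the raw intensities is the idea: pixels that are structurally corresponding across modalities should land near one another in this eigenvector space.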
In this work, SERg is implemented using Demons to allow the algorithm to more effectively register multimodal images. SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted and T2-weighted brain MRI were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%), each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error that is 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections with in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg performs better than intensity-based registration, decreasing the root mean squared distance of annotated landmarks in the prostate gland via both the Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggests the utility of parametric eigenvector representations, and hence SERg, for multimodal image registration.
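The clinical evaluation metric above, root mean squared distance between corresponding annotated landmarks, can be sketched in a few lines. The landmark coordinates below are hypothetical values for illustration, not data from the paper.

```python
import numpy as np

def landmark_rmsd(fixed_pts, moved_pts):
    """Root mean squared distance between corresponding landmark
    pairs: lower values indicate better alignment."""
    diffs = np.asarray(fixed_pts, dtype=float) - np.asarray(moved_pts, dtype=float)
    return float(np.sqrt((diffs ** 2).sum(axis=1).mean()))

# Hypothetical landmark coordinates (mm): MRI landmarks vs. the same
# landmarks on the registered histology section
fixed = [[10.0, 12.0], [20.0, 18.0], [15.0, 25.0]]
moved = [[10.5, 11.8], [19.6, 18.3], [15.2, 24.7]]
print(round(landmark_rmsd(fixed, moved), 3))  # 0.473
```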

Paper Details

Date Published: 21 March 2014
PDF: 15 pages
Proc. SPIE 9034, Medical Imaging 2014: Image Processing, 90343P (21 March 2014); doi: 10.1117/12.2044317
Author Affiliations:
Eileen Hwuang, Rutgers, The State Univ. of New Jersey (United States)
Mirabela Rusu, Case Western Reserve Univ. (United States)
Sudha Karthigeyan, Duke Univ. (United States)
Shannon C. Agner, Washington Univ. School of Medicine in St. Louis (United States)
Rachel Sparks, Univ. College London (United Kingdom)
Natalie Shih, Univ. of Pennsylvania (United States)
John E. Tomaszewski, Univ. at Buffalo (United States)
Mark Rosen, Univ. of Pennsylvania (United States)
Michael Feldman, Univ. of Pennsylvania (United States)
Anant Madabhushi, Case Western Reserve Univ. (United States)

Published in SPIE Proceedings Vol. 9034:
Medical Imaging 2014: Image Processing
Sebastien Ourselin; Martin A. Styner, Editor(s)

© SPIE.