
Proceedings Paper

Integration of multimodality images: success and future directions
Author(s): Chin-Tu Chen

Paper Abstract

The concept of multi-modality image integration, in which images obtained from different sensors are spatially co-registered and the various aspects of object characteristics revealed by the individual imaging techniques are synergistically fused to yield new information, has received considerable attention in recent years. Initial success was achieved in visualizing integrated brain images that overlay physiological information from PET or SPECT on anatomical information from CT or MRI, providing new knowledge of the correlates of brain function and brain structure that was previously difficult to access. Extension of this concept to cardiac and pulmonary imaging is still in its infancy. One additional difficulty with cardiac/pulmonary data sets is motion; however, some features of the periodic motion may offer additional information for spatial co-registration. In addition to visualization of the fused image data in 2-D and 3-D, future directions in multi-modality image integration include multi-modal image reconstruction, multi-modal image segmentation and feature extraction, and other image analysis tasks that incorporate information available from multiple sources.
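
For illustration only (this is not code from the paper): a minimal Python sketch, assuming NumPy and SciPy, of the two steps the abstract describes, spatially co-registering a functional slice (e.g., PET) onto the grid of an anatomical slice (e.g., MRI) with a rigid transform, then fusing the two by alpha blending for overlay visualization. The rotation angle, shift, and toy arrays are placeholders; in practice the rigid parameters would be estimated by a registration algorithm rather than given.

    import numpy as np
    from scipy.ndimage import affine_transform

    def rigid_resample(moving, angle_deg, shift, output_shape):
        # Resample `moving` onto the fixed grid after a 2-D rotation about the
        # image centers plus a translation (a simple rigid co-registration step).
        theta = np.deg2rad(angle_deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        in_center = (np.asarray(moving.shape) - 1) / 2.0
        out_center = (np.asarray(output_shape) - 1) / 2.0
        # affine_transform maps each output coordinate o to rot @ o + offset in
        # the input image; this offset centers the rotation and adds the shift.
        offset = in_center - rot @ (out_center + np.asarray(shift))
        return affine_transform(moving, rot, offset=offset,
                                output_shape=output_shape, order=1)

    def fuse(anatomical, functional, alpha=0.4):
        # Alpha-blend two spatially co-registered images after rescaling to [0, 1].
        a = (anatomical - anatomical.min()) / (np.ptp(anatomical) + 1e-9)
        f = (functional - functional.min()) / (np.ptp(functional) + 1e-9)
        return (1.0 - alpha) * a + alpha * f

    # Toy data standing in for one trans-axial slice pair: a 256x256 "MRI" slice
    # and a coarser, misaligned 128x128 "PET" slice (random values, illustration only).
    mri = np.random.rand(256, 256)
    pet = np.random.rand(128, 128)
    pet_aligned = rigid_resample(pet, angle_deg=5.0, shift=(10.0, -4.0),
                                 output_shape=mri.shape)
    fused = fuse(mri, pet_aligned)   # same grid as the MRI slice, ready for display

The same pattern extends to 3-D volumes by using a 3x3 rotation matrix and a 3-D shift; the blending step is unchanged once the two data sets share a common grid.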

Paper Details

Date Published: 29 July 1993
PDF: 4 pages
Proc. SPIE 1905, Biomedical Image Processing and Biomedical Visualization, (29 July 1993); doi: 10.1117/12.148641
Author Affiliations:
Chin-Tu Chen, Univ. of Chicago (United States)


Published in SPIE Proceedings Vol. 1905:
Biomedical Image Processing and Biomedical Visualization
Raj S. Acharya; Dmitry B. Goldgof, Editor(s)

© SPIE