
Proceedings Paper

A model of multimodal fusion for medical applications
Author(s): S. Yang; I. Atmosukarto; J. Franklin; J. F. Brinkley; D. Suciu; L. G. Shapiro

Paper Abstract

Content-based image retrieval has been applied to many different biomedical applications [1]. In almost all cases, a retrieval involves a single query image of a particular modality, and the retrieved images come from that same modality. For example, one system may retrieve color images from eye exams, while another retrieves fMRI images of the brain. Yet real patients often have test results from multiple modalities, and retrievals that combine more than one modality could reveal information that single-modality searches miss. In this paper, we demonstrate medical image retrieval for two different single modalities and propose a model for multimodal fusion that will lead to improved capabilities for physicians and biomedical researchers. We also describe a graphical user interface for multimodal retrieval that is being tested by biomedical researchers in several different fields.
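The abstract describes combining retrievals from more than one imaging modality. One common way to realize this is late fusion, where each modality produces its own similarity scores and a weighted sum of normalized scores yields the combined ranking. The sketch below illustrates that idea only; the function names, weights, and min-max normalization are illustrative assumptions, not the model proposed in the paper.

```python
# Illustrative late-fusion sketch for multimodal retrieval.
# Each modality contributes a similarity score per patient case;
# scores are min-max normalized per modality, then combined with
# a weighted sum before ranking. All names and weights here are
# hypothetical, not taken from the paper.

def normalize(scores):
    """Min-max normalize a dict of case_id -> similarity score."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {case_id: 0.0 for case_id in scores}
    return {case_id: (s - lo) / (hi - lo) for case_id, s in scores.items()}

def fuse(modality_scores, weights):
    """Rank cases by a weighted sum of normalized per-modality scores.

    modality_scores: dict modality -> {case_id: score}
    weights: dict modality -> weight (assumed to sum to 1)
    Returns case ids, best match first.
    """
    fused = {}
    for modality, scores in modality_scores.items():
        for case_id, s in normalize(scores).items():
            fused[case_id] = fused.get(case_id, 0.0) + weights[modality] * s
    return sorted(fused, key=fused.get, reverse=True)

if __name__ == "__main__":
    # Hypothetical similarity scores from two single-modality retrievals.
    eye_exam = {"case1": 0.9, "case2": 0.4, "case3": 0.1}
    fmri = {"case1": 0.2, "case2": 0.8, "case3": 0.3}
    ranking = fuse({"eye": eye_exam, "fmri": fmri},
                   {"eye": 0.5, "fmri": 0.5})
    print(ranking)  # case2 ranks first: strong in fMRI, moderate in eye exam
```

In this toy example, case2 rises to the top of the fused ranking even though neither single-modality search ranks it first, which is the kind of cross-modality signal the abstract argues single-modality retrieval misses.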

Paper Details

Date Published: 19 January 2009
PDF: 12 pages
Proc. SPIE 7255, Multimedia Content Access: Algorithms and Systems III, 72550H (19 January 2009); doi: 10.1117/12.805490
Author Affiliations:
S. Yang, Univ. of Washington (United States)
I. Atmosukarto, Univ. of Washington (United States)
J. Franklin, Univ. of Washington (United States)
J. F. Brinkley, Univ. of Washington (United States)
D. Suciu, Univ. of Washington (United States)
L. G. Shapiro, Univ. of Washington (United States)


Published in SPIE Proceedings Vol. 7255:
Multimedia Content Access: Algorithms and Systems III
Raimondo Schettini; Ramesh C. Jain; Simone Santini, Editor(s)

© SPIE.