
Optical Engineering

Multimodal image fusion with joint sparsity model
Author(s): Shutao Li; Haitao Yin

Paper Abstract

Image fusion combines multiple images of the same scene into a single image that is suitable for human perception and practical applications. Different images of the same scene can be viewed as an ensemble of intercorrelated images. This paper proposes a novel multimodal image fusion scheme based on the joint sparsity model, which is derived from distributed compressed sensing. First, the source images are jointly sparsely represented as common and innovation components using an over-complete dictionary. Second, the common and innovation sparse coefficients are combined into the jointly sparse coefficients of the fused image. Finally, the fused result is reconstructed from the obtained sparse coefficients. The proposed method is compared with several popular image fusion methods, including multiscale transform-based methods and the simultaneous orthogonal matching pursuit-based method. The experimental results demonstrate the effectiveness of the proposed method in terms of both visual quality and quantitative fusion evaluation indexes.
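The three steps described in the abstract (joint sparse coding into common and innovation components, coefficient combination, and reconstruction) can be illustrated with a minimal sketch. This is not the authors' implementation: the dictionary here is random, the sparse coder is a bare-bones orthogonal matching pursuit, the `fuse_patches` helper and its max-magnitude innovation rule are illustrative assumptions, and a real system would operate on overlapping image patches with a learned or DCT dictionary.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: select up to k atoms of D to approximate y."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)  # refit on chosen atoms
        residual = y - D[:, idx] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[idx] = coef  # duplicates (rare) keep the last refit value; fine for a sketch
    return alpha

def fuse_patches(p1, p2, D, k=4):
    """Hypothetical JSM-style fusion of two vectorized patches p1, p2.

    The stacked observation [p1; p2] is coded against the joint dictionary
    [[D, D, 0], [D, 0, D]], so the first block of coefficients captures the
    common component and the other two blocks the per-image innovations.
    """
    n, m = D.shape
    Z = np.zeros((n, m))
    Dj = np.block([[D, D, Z],
                   [D, Z, D]])
    a = omp(Dj, np.concatenate([p1, p2]), 3 * k)
    a_c, a1, a2 = a[:m], a[m:2 * m], a[2 * m:]
    # Fuse: keep the common part, take the larger-magnitude innovation per atom
    # (an illustrative choice-max rule, not necessarily the paper's rule).
    a_f = a_c + np.where(np.abs(a1) >= np.abs(a2), a1, a2)
    return D @ a_f  # reconstruct the fused patch from its sparse coefficients
```

In practice each source image would be partitioned into patches, each patch pair fused as above, and the fused patches re-assembled (averaging overlaps) into the output image.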

Paper Details

Date Published: 1 June 2011
PDF: 11 pages
Opt. Eng. 50(6) 067007 doi: 10.1117/1.3584840
Published in: Optical Engineering Volume 50, Issue 6
Author Affiliations:
Shutao Li, Hunan Univ. (China)
Haitao Yin, Hunan Univ. (China)

© SPIE.