
Proceedings Paper

Infrared polarization image fusion via multi-scale sparse representation and pulse coupled neural network
Author(s): Jiajia Zhang; Huixin Zhou; Shun Wei; Wei Tan

Paper Abstract

Infrared polarization (IRP) images and infrared intensity (IRI) images contain both common information and unique information. To address two drawbacks of IRP/IRI image fusion, (1) loss of detail information and (2) poor discrimination of information in the fused image, a fusion method based on multi-scale sparse representation and a pulse coupled neural network (PCNN) is proposed. The method combines non-local means (NLM) filtering, sparse representation of images, and an adaptive PCNN. First, the NLM filter is used to obtain image information of the source images at different scales. Second, a non-subsampled directional filter bank (NSDFB) decomposes the high-frequency information at each scale into multiple high-frequency directional sub-bands. A spatial frequency (SF) transformation is applied to each high-frequency directional sub-band, and the PCNN fuses the high-frequency sub-bands according to their significance, with the link strength of the PCNN adaptively adjusted by the region variance. Then, a joint matrix composed of the low-frequency components is trained by the K-singular value decomposition (K-SVD) method to obtain a redundant dictionary. Common information and unique information are distinguished by the positions of the non-zero values in the sparse coefficients, and are fused with different rules. Finally, the fused high- and low-frequency sub-bands are inversely transformed to obtain the fused image. Experimental results demonstrate that the proposed algorithm not only highlights the common information of the source images but also retains their unique information, and the fused image has higher contrast and richer detail.
In addition, the fused image performs well in terms of average gradient (AG), edge intensity (EI), information entropy (IE), standard deviation (STD), spatial frequency (SF), and image definition (IDEF).
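The high-frequency fusion step described above (SF as the neuron stimulus, region variance as the adaptive link strength, firing counts as the significance measure) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the simplified PCNN model, the 3x3 linking kernel, the window radius, and all parameter values (`iters`, `alpha`, `v_theta`) are hypothetical choices for demonstration.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def spatial_frequency(img):
    """Global spatial frequency SF = sqrt(RF^2 + CF^2), where RF and CF
    are RMS values of the row-wise and column-wise first differences."""
    rf2 = np.mean(np.diff(img, axis=1) ** 2)
    cf2 = np.mean(np.diff(img, axis=0) ** 2)
    return float(np.sqrt(rf2 + cf2))

def local_sf(img, radius=1):
    """Per-pixel SF over a (2r+1)x(2r+1) window; used as PCNN stimulus."""
    pad = np.pad(img, radius, mode='reflect')
    w = sliding_window_view(pad, (2 * radius + 1, 2 * radius + 1))
    rf2 = np.mean(np.diff(w, axis=-1) ** 2, axis=(-1, -2))
    cf2 = np.mean(np.diff(w, axis=-2) ** 2, axis=(-1, -2))
    return np.sqrt(rf2 + cf2)

def region_variance(img, radius=1):
    """Local variance in a (2r+1)x(2r+1) window; here it plays the role
    of the adaptive link strength (beta) of each PCNN neuron."""
    pad = np.pad(img, radius, mode='reflect')
    w = sliding_window_view(pad, (2 * radius + 1, 2 * radius + 1))
    return w.var(axis=(-1, -2))

def pcnn_fire_counts(stimulus, beta, iters=40, alpha=0.2, v_theta=20.0):
    """Simplified PCNN: returns per-pixel firing counts over `iters` steps."""
    F = stimulus                      # feeding input (no decay, for brevity)
    Y = np.zeros_like(F)              # pulse output
    theta = np.ones_like(F)           # dynamic threshold
    counts = np.zeros_like(F)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    H, W = F.shape
    for _ in range(iters):
        padY = np.pad(Y, 1)
        # linking input: weighted sum of neighbouring firings (3x3)
        L = sum(kernel[i, j] * padY[i:i + H, j:j + W]
                for i in range(3) for j in range(3))
        U = F * (1.0 + beta * L)      # internal activity (modulation)
        Y = (U > theta).astype(F.dtype)
        theta = theta * np.exp(-alpha) + v_theta * Y
        counts += Y
    return counts

def fuse_highfreq(sub_a, sub_b):
    """Keep each coefficient from the sub-band whose neuron fired more."""
    ca = pcnn_fire_counts(local_sf(sub_a), region_variance(sub_a))
    cb = pcnn_fire_counts(local_sf(sub_b), region_variance(sub_b))
    return np.where(ca >= cb, sub_a, sub_b)
```

Neurons driven by stronger local SF fire earlier and more often, so coefficients from the more textured sub-band win the per-pixel comparison, which is the intuition behind using firing counts as the "significance" measure in the abstract.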

Paper Details

Date Published: 18 December 2019
PDF: 11 pages
Proc. SPIE 11338, AOPC 2019: Optical Sensing and Imaging Technology, 113382A (18 December 2019); doi: 10.1117/12.2547563
Author Affiliations
Jiajia Zhang, Xidian Univ. (China)
Huixin Zhou, Xidian Univ. (China)
Shun Wei, Xidian Univ. (China)
Wei Tan, Xidian Univ. (China)

Published in SPIE Proceedings Vol. 11338:
AOPC 2019: Optical Sensing and Imaging Technology
John E. Greivenkamp; Jun Tanida; Yadong Jiang; HaiMei Gong; Jin Lu; Dong Liu, Editor(s)

© SPIE.