
Proceedings Paper

Visual and infrared image fusion algorithm based on adaptive PCNN
Author(s): Yajun Song; Chen Yang; Zhi Chai; Jinbao Yang

Paper Abstract

As a third-generation artificial neural network, the pulse coupled neural network (PCNN) incorporates the neurobiological characteristics of temporal coding and spatial accumulation, which gives it advantages over traditional artificial neural networks and broad application prospects in image fusion. In recent years, improving the traditional model and adaptively adjusting its key parameters have become major research focuses. In this paper, a novel visual and infrared image fusion algorithm based on a modified PCNN model is presented, in which the key linking-strength parameter is computed adaptively from the characteristics of the input images. First, the modified PCNN improves on the traditional model with an index map and a threshold look-up table. The threshold look-up table records the thresholds corresponding to the different iterations of the modified model. Because these thresholds can be calculated before the model runs, the per-iteration cost of computing thresholds in the traditional model is removed, which improves the computing speed of the modified model. The index map records the firing time of each pixel of the input image during the computation; its values represent the integration of similar pixels within a spatial neighborhood of the input image and thus reflect the image's global visual features. An auxiliary method is then used to compute the linking strength of the modified PCNN model. The linking strength represents the degree to which the linking input modulates the feeding input of the current neuron; if its value is chosen according to the specific characteristics of the input images, better fusion performance should, in theory, be obtained.
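The abstract does not give the model equations, so the following is only a minimal sketch of the two structures it describes: a threshold look-up table precomputed before the main loop, and an index map that records each pixel's first firing iteration. The function name `pcnn_index_map`, the simplified PCNN update (stimulus modulated by an 8-neighbor linking term against a globally decaying threshold), and all parameter values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def neighbor_sum(y):
    """Sum of the 8-connected neighbours (zero-padded) for each pixel."""
    p = np.pad(y, 1, mode="constant")
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2]               + p[1:-1, 2:] +
            p[2:, :-2]  + p[2:, 1:-1]  + p[2:, 2:])

def pcnn_index_map(img, beta=0.2, v_theta=1.0, alpha_theta=0.2, n_iter=20):
    """Simplified modified PCNN: threshold look-up table + firing index map."""
    s = img.astype(float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalise stimulus to [0, 1]
    # Threshold look-up table: one global threshold per iteration,
    # computed up front (exponential decay) instead of per neuron per step.
    theta_lut = v_theta * np.exp(-alpha_theta * np.arange(1, n_iter + 1))
    y = np.zeros_like(s)                 # pulse output of the previous iteration
    fired = np.zeros(s.shape, bool)      # neurons that have already fired
    index_map = np.zeros(s.shape, int)   # iteration at which each pixel first fires
    for n in range(n_iter):
        link = neighbor_sum(y)           # linking input from neighbouring pulses
        u = s * (1.0 + beta * link)      # internal activity (feeding x linking)
        y = ((u > theta_lut[n]) & ~fired).astype(float)
        newly = y > 0
        index_map[newly] = n + 1         # record first firing time
        fired |= newly
    index_map[~fired] = n_iter + 1       # pixels that never fired
    return index_map
```

Brighter pixels cross the decaying threshold earlier, so low index-map values mark strong, spatially coherent regions, which is what makes the map usable as a fusion feature.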
Since the visual image contains more detail information about the target and the infrared image carries more of the target's energy characteristics, the proposed method combines local entropy with the linking strength for the visual image and local energy with the linking strength for the infrared image. Finally, the original visual and infrared images are processed with the modified PCNN model, with the linking strength calculated by the above procedure, and fusion rules based on the index maps of the two images are used to compute the fused image. To evaluate the proposed method, a large number of experiments were conducted: typical image sets used in related papers were processed with both the proposed method and the wavelet transform, and the resulting fused images were evaluated with subjective and objective criteria, including the average, the standard deviation, and the spatial frequency. The average is the mean pixel gray level; the standard deviation measures how widely the gray levels are dispersed around the average; and the spatial frequency measures the amount of detail in the image. The results show that, compared with methods such as the wavelet transform, the proposed method improves the objective criteria significantly.
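The three objective criteria named above have standard definitions, which can be sketched as follows. The function name `fusion_metrics` is hypothetical; the spatial-frequency formula used here is the common root-mean-square of row-wise and column-wise gray-level differences, which is assumed to be what the abstract refers to.

```python
import numpy as np

def fusion_metrics(img):
    """Average, standard deviation, and spatial frequency of a fused image."""
    f = img.astype(float)
    mean = f.mean()                                  # average gray level
    std = f.std()                                    # dispersion around the average
    # Spatial frequency: RMS of horizontal and vertical gray-level differences.
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))   # column frequency
    sf = np.sqrt(rf ** 2 + cf ** 2)
    return mean, std, sf
```

A higher spatial frequency indicates more gray-level variation between adjacent pixels, i.e. more preserved detail, which is why it is used to compare fusion results.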

Paper Details

Date Published: 24 October 2017
PDF: 9 pages
Proc. SPIE 10462, AOPC 2017: Optical Sensing and Imaging Technology and Applications, 1046239 (24 October 2017); doi: 10.1117/12.2285118
Author Affiliations:
Yajun Song, Science and Technology on Optical Radiation Lab. (China)
Chen Yang, Science and Technology on Optical Radiation Lab. (China)
Zhi Chai, Science and Technology on Optical Radiation Lab. (China)
Jinbao Yang, Science and Technology on Optical Radiation Lab. (China)


Published in SPIE Proceedings Vol. 10462:
AOPC 2017: Optical Sensing and Imaging Technology and Applications
Yadong Jiang; Haimei Gong; Weibiao Chen; Jin Li, Editor(s)

© SPIE.