
Electronic Imaging & Signal Processing

Training image-analysis bases improves ‘image fusion’

Application of independent-component analysis boosts the performance of current surveillance and defense systems.
13 January 2009, SPIE Newsroom. DOI: 10.1117/2.1200812.1443

Modern technology has enabled the development of low-cost wireless imaging sensors of various modalities that can be deployed to monitor a scene. This advance has been strongly motivated by both military and civilian applications, including health care, battlefield surveillance, and environmental monitoring. Sensors of different modalities exhibit different degradation, thermal, and visual characteristics. Image fusion combines the visual information from these various sources into a single representation to facilitate processing by a human operator or a computer-vision system. Fusion techniques can be divided into spatial- and transform-domain methods.1 The latter enable efficient identification of an image's salient features. Transformations that have been suggested for image fusion include dual-tree wavelet transforms and pyramid decomposition.1 We recently proposed2,3 an image-fusion framework with improved performance that is based on image-analysis bases trained using independent-component analysis (ICA).

Receptive fields of simple cells in the mammalian primary visual cortex are usually spatially localized, oriented, and bandpass. Such filter responses can be derived from unsupervised learning of independent visual features or sparse linear codes for natural scenes.4 Training bases with ICA4 for image de-noising through sparse-code shrinkage improved performance relative to wavelets. The bases are trained by extracting a population of local patches from images of similar content, which are then processed by the FastICA algorithm4 to estimate the transformation and its inverse. ICA bases are closely related to wavelets and Gabor functions because they represent localized edge features. They have more degrees of freedom than wavelets, however, because they adapt to arbitrary orientations, whereas discrete and dual-tree wavelet transforms have only two and six distinct orientations, respectively.4 ICA bases do not offer a multilevel representation, as wavelets or pyramid decomposition do, nor are they shift invariant. The latter limitation can be addressed using the spin-cycling method, however.
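As a rough illustration of this training step, the sketch below estimates ICA bases from randomly sampled image patches. It assumes a recent scikit-learn FastICA implementation; the patch size, component count, and function names (extract_patches, train_ica_bases) are illustrative choices, not taken from the paper.

```python
# Sketch of ICA basis training from local image patches,
# assuming scikit-learn's FastICA (sklearn >= 1.1).
import numpy as np
from sklearn.decomposition import FastICA

def extract_patches(image, patch=8, n_patches=10000, rng=None):
    """Sample random patch x patch blocks and flatten them to vectors."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    return np.stack([image[y:y+patch, x:x+patch].ravel()
                     for y, x in zip(ys, xs)])

def train_ica_bases(training_images, patch=8, n_components=40):
    """Estimate the ICA analysis transform T and its inverse T^{-1}."""
    data = np.vstack([extract_patches(img, patch)
                      for img in training_images]).astype(float)
    data -= data.mean(axis=1, keepdims=True)   # zero-mean each patch
    ica = FastICA(n_components=n_components, whiten="unit-variance")
    ica.fit(data)
    T = ica.components_   # analysis filters (rows = localized edge features)
    T_inv = ica.mixing_   # synthesis bases (pseudo-inverse of T)
    return T, T_inv
```

Note that with fewer components than pixels per patch, reconstruction through T_inv projects each patch onto the learned subspace, which is one design choice this sketch makes for simplicity.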


Figure 1. Proposed image-fusion framework. T{·} and T^{-1}{·}: the trained independent-component-analysis (ICA) transformation and its inverse. u_k(t): image coefficients in the ICA domain.

Image fusion conveys the information of interest from all input sensors into a single composite 'fused' image. Since the interesting information for image analysis usually consists of edges or texture, fusion techniques often employ transformations that excel at modeling edges.1 We proposed an ICA-based fusion framework with significantly improved edge-modeling performance (see Figure 1). Assume that the input sensor images are registered and that an ICA transformation has been trained. From each input sensor image, every possible patch is isolated and normalized to zero mean. The subtracted means are stored for eventual fused-image reconstruction. The patches are transformed to the ICA domain. (Optional de-noising, or sparse-code shrinkage, can also be performed at this stage.4) The coefficients from each input image in the ICA domain are then combined to construct the fused image using 'fusion rules.' The 'max-abs' rule conveys the largest coefficients (in absolute value) to the fused image, the 'mean' rule averages the input coefficients, and the 'weighted-combination' rule weights the input coefficients by their contribution to the total patch energy. A 'regional' rule segments the scene into active and non-active areas and applies a different rule to each. Finally, 'adaptive' rules estimate optimal weights assuming sparse priors for the coefficients. The fused image is returned to the spatial domain using the inverse transformation and synthesized by spatially averaging the extracted image patches. To estimate the optimal local-patch means (intensity range) before averaging, we optimized the Piella fusion-quality index5 using the stored means from the input sensor patches3 (see Figure 2).
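To make the pipeline concrete, here is a minimal sketch of the transform/fuse/reconstruct loop for the max-abs and weighted-combination rules. It assumes the T and T_inv matrices from the training sketch above and, for simplicity, reconstructs the fused patch mean as a plain average rather than by optimizing the Piella index; the function name fuse_images is hypothetical.

```python
# Minimal sketch of ICA-domain fusion for two or more registered
# sensor images; T and T_inv come from the training sketch above.
import numpy as np

def fuse_images(images, T, T_inv, patch=8, rule="max-abs"):
    h, w = images[0].shape
    fused = np.zeros((h, w))
    counts = np.zeros((h, w))        # overlap counter for spatial averaging
    for y in range(h - patch + 1):
        for x in range(w - patch + 1):
            vecs = [img[y:y+patch, x:x+patch].ravel() for img in images]
            means = [v.mean() for v in vecs]          # stored patch means
            coeffs = [T @ (v - m) for v, m in zip(vecs, means)]  # to ICA domain
            if rule == "max-abs":
                # keep, per coefficient, the input with the largest magnitude
                stacked = np.stack(coeffs)
                idx = np.argmax(np.abs(stacked), axis=0)
                u_f = stacked[idx, np.arange(stacked.shape[1])]
            else:
                # weighted combination by each input's patch energy
                energies = np.array([c @ c for c in coeffs])
                weights = energies / (energies.sum() + 1e-12)
                u_f = sum(wt * c for wt, c in zip(weights, coeffs))
            # back to the spatial domain; the fused mean here is a plain
            # average (the paper instead optimizes the Piella index)
            block = (T_inv @ u_f).reshape(patch, patch) + np.mean(means)
            fused[y:y+patch, x:x+patch] += block
            counts[y:y+patch, x:x+patch] += 1
    return fused / counts            # average the overlapping patches
```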


Figure 2. Improved fusion for the 'Octec' and 'MX-15' data sets, provided by Waterfall Solutions and QinetiQ. The ICA framework outperforms the dual-tree wavelet-transform (DT-WT) scheme in terms of the Piella index, Q. LWIR: Long-wavelength infrared.
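For reference, the metric Q reported in Figure 2 is the Piella and Heijmans fusion-quality index.5 The following is a sketch of its basic form reconstructed from that reference, not from the article itself: Q_0 denotes the Wang-Bovik universal image-quality index computed over a sliding window w, a and b are the input images, f is the fused image, and s(· | w) is a local saliency measure such as variance.

```latex
\[
  Q(a,b,f) = \frac{1}{|W|} \sum_{w \in W}
    \Bigl[ \lambda(w)\, Q_0(a,f \mid w)
         + \bigl(1-\lambda(w)\bigr)\, Q_0(b,f \mid w) \Bigr],
  \qquad
  \lambda(w) = \frac{s(a \mid w)}{s(a \mid w) + s(b \mid w)}
\]
```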

Application of trained ICA bases improves the performance of fusion algorithms over previous wavelet approaches at minimal additional computational cost. We intend to develop and employ more sophisticated segmentation algorithms so that only selected areas are fused, each with the most appropriate fusion rule.

This work has been funded by the UK's Data and Information Fusion Defence Technology Centre, Applied Multi-Dimensional Fusion cluster project.


Nikolaos Mitianoudis, Tania Stathaki  
Imperial College
London, UK