
Proceedings Paper

Fusion of multisensor passive and active 3D imagery
Author(s): David A. Fay; Jacques G. Verly; Michael I. Braun; Carl E. Frost; Joseph P. Racamato; Allen M. Waxman

Paper Abstract

We have extended our previous capability for fusing multiple passive imaging sensors to include 3D imagery obtained from a prototype flash ladar. Real-time fusion of low-light visible + uncooled LWIR + 3D LADAR, and of SWIR + LWIR + 3D LADAR, is demonstrated. Fused visualization is achieved by opponent-color neural networks for passive image fusion; the fused imagery is then textured onto segmented object surfaces derived from the 3D data. An interactive viewer, coded in Java3D, is used to examine the fused 3D scene in stereo. Interactive designation, learning, recognition, and search for targets, based on fused passive + 3D signatures, are achieved using Fuzzy ARTMAP neural networks with a Java-coded GUI. A client-server web-based architecture enables remote users to interact with the fused 3D imagery via a wireless palmtop computer.
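The opponent-color fusion mentioned in the abstract is based on center-surround opponent processing between sensor bands. The paper's networks use shunting neural dynamics; the sketch below is only a simplified linear stand-in (a box-filtered center-surround operator and a between-band opponent channel), with all function names (`center_surround`, `opponent_fuse`) invented here for illustration:

```python
import numpy as np

def center_surround(img, k=5):
    """Simplified center-surround response: each pixel minus a
    k x k box-filtered local surround (edge-padded)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    surround = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            surround += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    surround /= k * k
    return img - surround

def opponent_fuse(visible, lwir, k=5):
    """Toy two-band opponent fusion: contrast-enhance each band with the
    center-surround operator, form a visible-vs-IR opponent channel, and
    stack the three into a display image normalized to [0, 1]."""
    vis_cs = center_surround(visible, k)
    ir_cs = center_surround(lwir, k)
    opponent = vis_cs - ir_cs          # between-band opponent channel
    fused = np.stack([vis_cs, ir_cs, opponent], axis=-1)
    mn = fused.min(axis=(0, 1), keepdims=True)
    mx = fused.max(axis=(0, 1), keepdims=True)
    return (fused - mn) / np.maximum(mx - mn, 1e-9)
```

In the actual system the resulting fused color image is then texture-mapped onto surfaces segmented from the 3D LADAR data rather than displayed directly.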

Paper Details

Date Published: 28 August 2001
PDF: 12 pages
Proc. SPIE 4363, Enhanced and Synthetic Vision 2001, (28 August 2001); doi: 10.1117/12.438025
Author Affiliations:
David A. Fay, MIT Lincoln Lab. (United States)
Jacques G. Verly, MIT Lincoln Lab. (Belgium)
Michael I. Braun, MIT Lincoln Lab. (United States)
Carl E. Frost, MIT Lincoln Lab. (United States)
Joseph P. Racamato, MIT Lincoln Lab. (United States)
Allen M. Waxman, MIT Lincoln Lab. (United States)

Published in SPIE Proceedings Vol. 4363:
Enhanced and Synthetic Vision 2001
Jacques G. Verly, Editor(s)

© SPIE.