Journal of Applied Remote Sensing

Fusion of multiple image types for the creation of radiometrically-accurate synthetic scenes

Paper Abstract

The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles-based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible through the longwave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetry and active Light Detection and Ranging (lidar) models have been incorporated into the software, making it a powerful tool for algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG's ability to generate scenes "on demand." To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects, and background maps must be created and attributed manually. To shorten this process, we have developed a comprehensive workflow aimed at reducing the man-in-the-loop requirements for many aspects of synthetic hyperspectral scene construction. By fusing 3D lidar data with passive imagery, we have been able to partially automate many of the tasks required to create high-resolution urban DIRSIG scenes. This paper describes these techniques.
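The core idea of the abstract, fusing lidar-derived geometry with passive spectral imagery to drive scene attribution, can be pictured with a minimal sketch. The Python code below is an illustrative assumption, not the authors' actual DIRSIG workflow: it supposes a coregistered lidar digital surface model (DSM), bare-earth terrain model (DTM), and red/NIR image bands on a common grid, and the function name, thresholds, and three-class scheme are hypothetical.

    # Illustrative sketch only -- not the published pipeline. Assumes the lidar
    # DSM/DTM and the passive red/NIR bands are already coregistered; the
    # height and NDVI thresholds below are hypothetical placeholders.
    import numpy as np

    def fuse_lidar_and_spectral(dsm, dtm, red, nir,
                                height_thresh=2.5, ndvi_thresh=0.3):
        """Label each pixel as ground, building, canopy, or low vegetation
        by combining lidar height-above-ground with an NDVI cue."""
        hag = dsm - dtm                                    # height above ground (m)
        ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

        labels = np.zeros(dsm.shape, dtype=np.uint8)       # 0 = bare ground
        elevated = hag > height_thresh
        labels[elevated & (ndvi < ndvi_thresh)] = 1        # 1 = building (tall, not green)
        labels[elevated & (ndvi >= ndvi_thresh)] = 2       # 2 = tree canopy (tall, green)
        labels[~elevated & (ndvi >= ndvi_thresh)] = 3      # 3 = low vegetation
        return labels

    # Tiny synthetic example
    rng = np.random.default_rng(0)
    dtm = np.zeros((4, 4))
    dsm = dtm + rng.uniform(0, 10, (4, 4))
    red = rng.uniform(0.05, 0.3, (4, 4))
    nir = rng.uniform(0.1, 0.6, (4, 4))
    print(fuse_lidar_and_spectral(dsm, dtm, red, nir))

A per-pixel class map of this kind is the sort of intermediate product that could feed material attribution and geometry extraction in a synthetic scene; the paper's own methods are considerably more elaborate.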

Paper Details

Date Published: 1 January 2009
PDF: 20 pages
J. Appl. Remote Sens. 3(1), 033501; doi: 10.1117/1.3075896
Published in: Journal of Applied Remote Sensing Volume 3, Issue 1
Author Affiliations
Stephen R. Lach, U.S. Air Force (United States)
John P. Kerekes, Rochester Institute of Technology (United States)
Xiaofeng Fan, Aptina Imaging (United States)

