Proceedings Volume 4049

Algorithms for Multispectral, Hyperspectral, and Ultraspectral Imagery VI

Sylvia S. Shen, Michael R. Descour

Volume Details

Date Published: 23 August 2000
Contents: 12 Sessions, 49 Papers, 0 Presentations
Conference: AeroSense 2000
Volume Number: 4049

Table of Contents

  • Detection and Identification I
  • Modeling and Simulation
  • Landsat 7 Program Mission and Calibration
  • Atmospheric Characterization and Correction I
  • Imaging Spectrometry Projects
  • Spectral Applications and Methodology I
  • Landsat 7 Data Processing and Archive
  • Detection and Identification II
  • Atmospheric Characterization and Correction II
  • Spectral Applications and Methodology II
  • Spectral Feature Extraction and Compression
  • Poster Session
Detection and Identification I
Comparative analysis of hyperspectral adaptive matched filter detectors
Real-time detection and identification of military and civilian targets from airborne platforms using hyperspectral sensors is of great interest. Relative to multispectral sensing, hyperspectral sensing can increase the detectability of pixel and subpixel size targets by exploiting finer detail in the spectral signatures of targets and natural backgrounds. A multitude of adaptive detection algorithms for resolved or subpixel targets, with known or unknown spectral characterization, in a background with known or unknown statistics, theoretically justified or ad hoc, with low or high computational complexity, have appeared in the literature or have found their way into software packages and end-user systems. The purpose of this paper is threefold. First, we present a unified mathematical treatment of most adaptive matched filter detectors using common notation, and we state clearly the underlying theoretical assumptions. Whenever possible, we express existing ad hoc algorithms as computationally simpler versions of optimal methods. Second, we assess the computational complexity of the various algorithms. Finally, we present a comparative performance analysis of the basic algorithms using theoretically obtained performance characteristics. We focus on algorithms characterized by theoretically desirable properties, practically desired features, or implementation simplicity. Sufficient detail is provided for others to verify and expand this evaluation and framework. A primary goal is to identify best-of-class algorithms for detailed performance evaluation.
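As a concrete illustration, the classic AMF statistic compares each pixel with a target signature after whitening by the estimated background covariance. Below is a minimal sketch with entirely synthetic data; the signature, band count, and scene statistics are hypothetical, and this is one textbook variant rather than the paper's full evaluation setup:

```python
import numpy as np

def amf_statistic(x, s, mean, cov_inv):
    """Classic adaptive matched filter (AMF) statistic for one pixel.

    x       : observed pixel spectrum, shape (n_bands,)
    s       : known target signature, shape (n_bands,)
    mean    : estimated background mean, shape (n_bands,)
    cov_inv : inverse of the estimated background covariance
    """
    xc = x - mean  # remove the background mean before whitening
    return (s @ cov_inv @ xc) ** 2 / (s @ cov_inv @ s)

# Synthetic scene: 500 background pixels in 10 bands plus one subpixel target
rng = np.random.default_rng(0)
bg = rng.normal(0.2, 0.02, size=(500, 10))
s = np.linspace(0.1, 0.9, 10)                 # hypothetical target signature
mean = bg.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(bg, rowvar=False))
mixed = 0.7 * bg[0] + 0.3 * s                 # 30% target fill in one pixel

scores_bg = [amf_statistic(p, s, mean, cov_inv) for p in bg]
score_target = amf_statistic(mixed, s, mean, cov_inv)
```

The subpixel-target pixel scores far above the pure-background pixels, which is the separation a detection threshold exploits.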
Material identification over variation of scene conditions and viewing geometry
Peihsiu Suen, Glenn Healey, David Slater
As the viewing angle and scene conditions change, the spectral appearance of a material also changes. We present a material identification algorithm for hyperspectral images that is invariant to these changes. Only the solar zenith angle, the viewing angle and sensor altitude, and the spectral reflectance function for the material are required by the algorithm. A material subspace model allows the algorithm to compute an error measure for a given pixel that indicates its similarity to the material. Classification results using USGS mineral reflectance functions and MODTRAN atmospheric functions are presented to demonstrate the performance of the algorithm. Recognition experiments using simulated off-nadir HYDICE images are also presented to demonstrate the use of the algorithm.
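The core computation, an error measure indicating a pixel's similarity to a material subspace, can be sketched as an orthogonal-projection residual. The basis below is hypothetical; this illustrates only the generic subspace-distance step, not the paper's invariant model construction:

```python
import numpy as np

def subspace_residual(x, B):
    """Residual distance from spectrum x to the subspace spanned by the
    columns of B; a small value means x is well explained by the
    material subspace model."""
    Q, _ = np.linalg.qr(B)       # orthonormal basis for span(B)
    proj = Q @ (Q.T @ x)         # orthogonal projection of x onto span(B)
    return float(np.linalg.norm(x - proj))

# Hypothetical 2-D material subspace in a 6-band space
B = np.column_stack([np.ones(6), np.arange(6.0)])
in_plane = B @ np.array([0.5, 0.1])                    # lies in the subspace
off_plane = in_plane + np.array([0, 0, 0.3, 0, 0, 0])  # perturbed pixel
```

A pixel inside the subspace yields a near-zero residual, while the perturbed pixel yields a clearly nonzero one.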
Performance analysis of the adaptive GMRF anomaly detector for hyperspectral imagery
Susan M. Thornton, Jose M. F. Moura
We have developed an adaptive anomaly detector based on a three-dimensional Gauss-Markov random field model (GMRF) for the background clutter in hyperspectral imagery. We have shown that incorporating the modeling framework into a single-hypothesis detection scheme leads to an efficient and effective algorithm for discriminating manmade objects (the anomalies) in real hyperspectral imagery. A major feature of our GMRF anomaly detector is that it adapts to the local statistics of the clutter through the use of an approximate maximum-likelihood (approximate-ML) estimation scheme. In this paper, we focus on a thorough performance evaluation of our adaptive GMRF anomaly detector for hyperspectral imagery. We evaluate the detector along three directions: estimation error performance, computational cost, and detection performance. In terms of estimation error, we derive the Cramer-Rao bounds and carry out Monte Carlo simulation studies that show that the approximate-ML estimation procedure performs quite well when the fields are highly correlated, as is often the case with real hyperspectral imagery. Finally, we test the adaptive anomaly detector extensively with real hyperspectral imagery from both the HYDICE and SEBASS sensors. The performance of our anomaly detector compares very favorably with that of the RX algorithm, an alternative maximum-likelihood detector used with multispectral data, while reducing the associated computational cost by up to an order of magnitude.
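For reference, the RX algorithm used here as the performance baseline scores each pixel by its Mahalanobis distance from the background statistics. A minimal global-statistics sketch on synthetic data (operational RX typically uses local windows, and the GMRF detector replaces these sample statistics with an adaptive clutter model):

```python
import numpy as np

def rx_scores(pixels):
    """Global RX anomaly scores: squared Mahalanobis distance of each
    pixel spectrum from the scene mean and covariance.

    pixels : (n_pixels, n_bands) array of spectra
    """
    mean = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    d = pixels - mean
    # Per-pixel quadratic form d_i^T C^{-1} d_i
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

# Synthetic clutter with one injected anomaly
rng = np.random.default_rng(1)
pixels = rng.normal(0.3, 0.01, size=(1000, 8))
pixels[42] += 0.1                  # anomalous offset in every band
scores = rx_scores(pixels)
```

The injected pixel receives by far the largest score, since its offset is many noise standard deviations in every band.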
Algorithm taxonomy for hyperspectral unmixing
In this paper, we introduce a set of taxonomies that hierarchically organize and specify algorithms associated with hyperspectral unmixing. Our motivation is to collectively organize and relate algorithms in order to assess the current state of the art in the field and to facilitate objective comparisons between methods. The hyperspectral sensing community comprises investigators with disparate scientific backgrounds, each speaking its own language, and efforts in spectral unmixing developed within these separate communities have inevitably led to duplication. We hope our analysis removes this ambiguity and redundancy by using a standard vocabulary, and that the presentation we provide clearly summarizes what has and has not been done. As we shall see, the framework for the taxonomies derives its organization from the fundamental, philosophical assumptions imposed on the problem, rather than from the common calculations the algorithms perform or the similar outputs they might yield.
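All of these algorithms address some piece of the linear mixing model, in which each pixel spectrum is a weighted combination of endmember spectra. A toy inversion under strong simplifications (known, hypothetical endmembers; no noise; unconstrained least squares, whereas real unmixing adds nonnegativity and sum-to-one constraints and usually must estimate the endmembers too):

```python
import numpy as np

# Linear mixing model: x = E @ a, with endmember matrix E (bands x endmembers)
E = np.array([[0.10, 0.80],
              [0.30, 0.60],
              [0.50, 0.40],
              [0.70, 0.20]])            # two hypothetical endmembers, 4 bands
a_true = np.array([0.7, 0.3])           # true fractional abundances
x = E @ a_true                          # noise-free mixed-pixel spectrum

# Abundance inversion by unconstrained least squares
a_hat, *_ = np.linalg.lstsq(E, x, rcond=None)
```

With independent endmember columns and no noise, the least-squares inversion recovers the abundances exactly.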
Target detection enhancement using temporal signature propagation
Changes in atmospheric conditions and sensor response for successive imaging sessions have limited the use of fixed target hyperspectral libraries to help discriminate targets from cluttered backgrounds. The instability and unpredictability of hyperspectral target signatures have resulted in a dependence on anomaly detection algorithms in real-time surveillance applications. However, the performance of these techniques fails to meet system requirements for many military applications. Our study examines temporal variations in the long-wave infrared spectra of man-made targets and natural backgrounds obtained with the SEBASS (8-12 µm) imager as part of the Dark HORSE 2 exercise during the HYDRA data collection in November 1998. We examine the signature information propagated over various time intervals. In addition, the hyperspectral target signatures, taken over various time intervals, are transformed using image-based techniques and compared to those converted with methods based on atmospheric modeling. We show that detection performance can be dramatically improved by exploiting signature information propagated over various time intervals using image-based methods. Implications of this study for target detection enhancement using hyperspectral data derived from multiple flight missions and/or mathematical transformations of hyperspectral target signatures will be discussed.
Modeling and Simulation
Simulation of MSI imagery from HSI data
D. Scott Anderson, Brett L. Keaffaber, Raymond P. Wasky
In this paper, a process of converting hyperspectral (HSI) data to multispectral (MSI) data is described. Accomplishing this conversion involves several steps: (1) data-to-signal conversion using the modulation transfer function (MTF) and the Spectral Response of the HSI sensor, (2) removing atmospheric transmittance losses from the HSI data using the MODTRAN code, (3) reintroducing atmospheric transmittance effects for each HSI band over selected MSI slant ranges, (4) grouping appropriate HSI bands into the broader MSI bands, (5) applying a spatial sampling algorithm to simulate different MSI ground sample distances (GSDs), and (6) performing signal-to-data conversion using the MTF and Spectral Response of the MSI sensor. Results are presented for two different slant ranges (GSD levels), each with three meteorological ranges.
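Step (4) above, grouping narrow HSI bands into a broad MSI band, amounts to a spectral-response-weighted average of the HSI bands. A sketch under simplified assumptions (an idealized boxcar MSI response; all names and values are illustrative):

```python
import numpy as np

def group_bands(hsi_cube, band_centers, msi_response):
    """Collapse narrow HSI bands into one broad MSI band.

    hsi_cube     : (rows, cols, n_hsi_bands) radiance cube
    band_centers : (n_hsi_bands,) HSI band-center wavelengths in um
    msi_response : callable returning the MSI band's relative spectral
                   response at a given wavelength
    """
    w = np.array([msi_response(c) for c in band_centers])
    w = w / w.sum()                # normalized response weights
    return hsi_cube @ w            # weighted average over the spectral axis

# Hypothetical boxcar MSI band spanning 0.6-0.7 um over ten 25-nm HSI bands
centers = np.linspace(0.5, 0.725, 10)
def boxcar(lam):
    return 1.0 if 0.6 <= lam <= 0.7 else 0.0

cube = np.full((2, 2, 10), 0.5)        # flat synthetic radiance cube
msi_band = group_bands(cube, centers, boxcar)
```

The spectral axis is collapsed, so a flat input cube yields a flat broadband image of the same spatial size.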
Modeling the spectral variability of ground irradiance functions
Zhihong Pan, Glenn Healey, David Slater
We analyze a set of 7,258 0.4-2.2 micron ground spectral irradiance functions measured on different days over a wide range of conditions. We show that a low-dimensional linear model can be used to capture the variability in these measurements. Using this linear model, we compare the data with a previous empirical study. We also examine the agreement of the data with spectra generated by MODTRAN 4.0. Using a database of 224 materials, we consider the implications of the observed spectral variability for hyperspectral material discrimination using subspace projection techniques.
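A low-dimensional linear model of this kind is typically fit with an SVD (equivalently, principal components) of the mean-removed spectra. A synthetic illustration; the two "modes" below are invented stand-ins, not the measured irradiance data:

```python
import numpy as np

rng = np.random.default_rng(3)
wl = np.linspace(0.4, 2.2, 50)                 # wavelength grid in um

# Synthetic "irradiance" spectra driven by two underlying modes plus noise
modes = np.stack([np.exp(-wl), np.sin(wl)])
coeffs = rng.normal(size=(200, 2))
spectra = coeffs @ modes + 0.001 * rng.normal(size=(200, 50))

# SVD of the mean-removed data gives the linear-model basis vectors
mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
explained = (S ** 2) / np.sum(S ** 2)
captured_by_two = explained[:2].sum()          # variance in a 2-D model
```

Because the synthetic data really are two-dimensional up to small noise, the first two components capture essentially all of the variability; the paper's finding is that measured ground irradiance behaves similarly.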
Comprehensive hyperspectral system simulation: I. Integrated sensor scene modeling and the simulation architecture
Bruce V. Shetler, Daniel Mergens, Chia Chang, et al.
A comprehensive hyperspectral system simulation with applications to space and airborne sensors has been developed. A companion paper presents results for the hyperspectral payload simulation and the end-to-end testing of the system. In this paper we discuss current and planned work in the development of a new, integrated scene modeling capability which combines the detailed simulation capabilities of the DIRSIG model with the wide area capabilities of the GENESSIS model. In addition, we describe the overall integration architecture, which is based upon the System Simulation Toolkit (SST), and the associated simulation components, including space and aircraft platform models, ground control elements, and interfaces to exploitation elements.
Comprehensive hyperspectral system simulation: II. Hyperspectral sensor simulation and preliminary VNIR testing results
Richard J. Bartell, Craig R. Schwartz, Michael T. Eismann, et al.
An end-to-end hyperspectral system model with applications to space and airborne sensor platforms is under development and testing. In this paper we discuss current work in the development of the sensor model and the results of preliminary testing. It is capable of simulating collected hyperspectral imagery of the ground as sensors operating from space or airborne platforms would acquire it. Dispersive hyperspectral imaging sensors operating from the visible through the thermal infrared spectral regions can be modeled with actual hyperspectral imagery or simulated hyperspectral scenes used as inputs. In the sensor model portion, fore-optics (misalignment), dispersive spectrometer designs, degradations (platform motion, smile, keystone, misregistration), focal plane array (temperature drift, nonuniformity/nonlinearity), noise (shot, dark, Johnson, 1/f, RMS read, excess low frequency), analog-to-digital conversion, digital processing, and radiometric/temporal/wavelength calibration effects are included. The overall model includes a variety of processing algorithms including constant false alarm rate anomaly detection, spectral clustering of backgrounds for anomaly detection, atmospheric compensation, and pairwise adaptive linear matching for detection and classification. Results of preliminary testing using synthetic scene data in the visible/near infrared portion of the spectrum are discussed. Potential applications for this modeling capability include processing results performance prediction and sensor parameter specification trade studies.
Landsat 7 Program Mission and Calibration
Present and future of the Landsat program
James R. Irons
The Landsat 7 satellite system was designed to operate in a manner that will substantially advance the application of remote land observations to global change research. The Enhanced Thematic Mapper-Plus (ETM+) sensor aboard the spacecraft currently acquires multispectral digital image data of the Earth’s land surfaces on a routine basis. The quality of the ETM+ data is excellent, meeting or improving upon pre-launch specifications. The data are transmitted to a globally distributed set of ground stations including the primary U.S. ground station at the U.S. Geological Survey EROS Data Center in Sioux Falls, South Dakota. The U.S. Government manages the Landsat 7 satellite system. A major program objective is to create an ETM+ data archive at the EROS Data Center that provides global coverage of the Earth’s continental and coastal surfaces on a seasonal basis. These data are available on a non-discriminatory basis at the incremental cost of fulfilling a user request. Once purchased from the EROS Data Center, no restrictions are placed on subsequent distribution of the data. This strategy fosters the operational applications of ETM+ data while advancing studies of the Earth as a system.
Early ground-reference calibration results for Landsat 7 ETM+ using small test sites
Kurtis J. Thome, Emily E. Whittington, John Henry LaMarr, et al.
Recent results of the vicarious calibration of the Landsat-7 ETM+ sensor are presented based on the reflectance-based vicarious method using results from a smaller test site local to the University of Arizona area. This test site is not as bright, as spatially uniform, or as large as typical sites. However, the proximity of the site allows for more frequent calibrations and, it is hoped, a better understanding of the calibration as a function of time. The selection of the test site, its properties, and example results of calibrations at this site are presented. The results from seven dates are presented and show that the ETM+ sensor has been stable to better than 5% since launch. The results from these seven dates have larger variability than those from the large test sites, but agree for the most part to better than 5% with the large test sites.
Landsat 7 on-orbit geometric calibration and performance
James C. Storey, Michael Choate
The Landsat 7 Image Assessment System was developed to characterize and calibrate the radiometric and geometric performance of the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) instrument. Algorithms and software assess the geometric performance of the Landsat 7 spacecraft and ETM+ sensor system and perform geometric calibration by estimating sensor and spacecraft geometric parameters. Following the initial on-orbit calibration, performed during the Landsat 7 on-orbit initialization and verification period, all geometric performance goals were met. Geometric characterization and calibration activities will continue for the life of the Landsat 7 mission.
Landsat sensor cross-calibration using nearly coincidental matching scenes
Phil M. Teillet, Brian L. Markham, John L. Barker, et al.
Early in its mission, the Landsat-7 spacecraft was temporarily placed in a “tandem” orbit very close to that of the Landsat-5 spacecraft in order to facilitate the establishment of sensor calibration continuity between the Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and Landsat-5 Thematic Mapper (TM) sensors. The key period for the tandem configuration was June 1-4, 1999, during which hundreds of nearly coincident matching scenes were recorded by both the Landsat-7 ETM+ and, in cooperation with Space Imaging and international ground stations, the Landsat-5 TM as well. The paper presents a methodology for Landsat-7 ETM+ and Landsat-5 TM cross-calibration and results based on analysis of three tandem image pairs. The approach incorporates adjustments for spectral band differences between the two sensors. With the well-calibrated ETM+ as a reference, the tandem-based cross-calibrations for the three image pairs yield TM responsivities that are consistent with each other to within a few percent or better, depending on the spectral band. Comparisons with independent methods and results obtained by other groups indicate that the tandem-based cross-calibration is in close agreement with the independent results in spectral bands 1-3 but compares less favourably in the other bands.
Landsat 5/Landsat 7 underfly cross-calibration experiment
Grant R. Mah, James E. Vogelmann, Michael Choate
There was a one-time opportunity to obtain nearly coincident coverage from both Landsat 5 and Landsat 7 as Landsat 7 drifted to its final orbital position during the initialization and verification phase following launch. During the underfly period, Landsat 7 Enhanced Thematic Mapper Plus (ETM+) data were collected using the U.S. Landsat 7 ground station network and the solid-state recorder, while agreements were established with Space Imaging/EOSAT and various international ground stations to collect corresponding Landsat 5 Thematic Mapper (TM) data. Approximately 750 coincident scenes were collected during the underfly from 1-3 June 1999. Underfly data are intended to play a major role in developing cross-calibrations to bridge the results derived from historical Landsat 5 TM data with research performed with current Landsat 7 ETM+ data. The purpose of this paper is to provide an overview of the underfly experiment, and to provide some early comparative results between Landsat 5 and 7 data sets. Initial results indicate that products produced using TM and ETM+ data are very similar.
Atmospheric Characterization and Correction I
MODTRAN4: radiative transfer modeling for remote sensing
MODTRAN4, the newly released version of the U.S. Air Force atmospheric transmission, radiance, and flux model, is being developed jointly by the Air Force Research Laboratory/Space Vehicles Directorate (AFRL/VS) and Spectral Sciences, Inc. It is expected to provide the accuracy required for analyzing spectral data for both atmospheric and surface characterization. These two quantities are the subject of satellite and aircraft campaigns currently being developed and pursued by, for instance, NASA (Earth Observing System), NPOESS (National Polar-orbiting Operational Environmental Satellite System), and the European Space Agency (GOME - Global Ozone Monitoring Experiment). Accuracy improvements in MODTRAN relate primarily to two major developments: (1) the multiple scattering algorithms have been made compatible with the spectroscopy by adopting a correlated-k approach to describe the statistically expected transmittance properties for each spectral bin and atmospheric layer, and (2) radiative transfer calculations can be conducted with a Beer-Lambert formulation that improves the treatment of path inhomogeneities. Other code enhancements include the incorporation of solar azimuth dependence in the DISORT-based multiple scattering model, the introduction of surface BRDF (bidirectional reflectance distribution function) models, and a 15 cm-1 band model for improved computational speed. Finally, recent changes to the HITRAN database, relevant to the 0.94 and 1.13 µm bands of water vapor, have been incorporated into the MODTRAN4 databases.
Atmospheric correction of spectral imagery data
Atmospheric emission, scattering, and photon absorption degrade spectral imagery data and reduce its utility. The Air Force Research Laboratory and Spectral Sciences, Inc. are developing a MODTRAN4-based 'atmospheric mitigation' algorithm to support current and planned IR-visible-UV sensor spectral radiance imagery measurements. The intent is to provide surface reflectance and emissivity imagery data of sufficient accuracy for input into subsequent analyses of surface properties, effectively removing the atmospheric component. This report is the result of the application of the atmospheric mitigation algorithm to a NASA/JPL AVIRIS spectral image cube as a pre-processing step towards improving the performance of image categorization routines.
Reformulation of the MODTRAN band model for higher spectral resolution
Alexander Berk, Prabhat K. Acharya, Lawrence S. Bernstein, et al.
The MODTRAN 1 cm-1 band model has been reformulated for application at higher spectral resolution. Molecular line center absorption is still determined from finite spectral bin equivalent widths but is now partitioned between the bin containing the molecular transition and its nearest neighbor bin. Also, the equivalent width calculation has been upgraded to maintain high accuracy at the increased spectral resolution. The MODTRAN Lorentz line tail spectral bin absorption coefficient data have been replaced by a more general and accurate Padé approximant for Voigt line tails, and higher-order pressure dependencies are now modeled. Initial comparisons to the FASE model and to measurement data are presented.
Status of atmospheric correction using a MODTRAN4-based algorithm
Michael W. Matthew, Steven M. Adler-Golden, Alexander Berk, et al.
This paper presents an overview of the latest version of a MODTRAN4-based atmospheric correction (or "compensation") algorithm developed by Spectral Sciences, Inc. and the Air Force Research Laboratory for spectral imaging sensors. New upgrades to the algorithm include automated aerosol retrieval, cloud masking, and speed improvements. In addition, MODTRAN4 has been updated to correct recently discovered errors in the HITRAN-96 water line parameters. Reflectance spectra retrieved from AVIRIS data are compared with "ground truth" measurements, and good agreement is found.
Imaging Spectrometry Projects
1999 AIG/HyVista HyMap group shoot: commercial hyperspectral sensing is here
Fred A. Kruse, Joseph W. Boardman, Adam B. Lefkoff, et al.
A hyperspectral “group shoot” was conducted during September 1999 by Analytical Imaging and Geophysics (AIG) in cooperation with HyVista Corporation, Sydney, Australia, utilizing HyMap, an advanced 126-band hyperspectral sensor covering the 0.4 - 2.5 micrometer region with 3-10 m spatial resolution. The purpose of this effort was to make high-quality hyperspectral data commercially available to customers for their sites of interest across the United States. Over 200 flightlines of HyMap data were collected throughout the United States for government, academic, and commercial customers. The standard product delivered by AIG/HyVista was fully calibrated radiance data along with precision-geocoded, apparent reflectance spectral image data. This marks the first time that commercial hyperspectral data has been delivered in a standard map-referenced, reflectance-corrected, ready-to-use form. Preliminary work with these datasets has demonstrated unprecedented spectral mapping capabilities for a variety of disciplines, including geology, vegetation studies, environmental assessment, near-shore marine mapping, and military applications. Data were also acquired in support of data simulation efforts for the proposed Warfighter and Australian ARIES hyperspectral satellites. Examples of the 1999 data, a description of the production processing flow, and scientific analysis results are discussed here.
Night vision imaging spectrometer (NVIS) performance parameters and their impact on various detection algorithms
Christopher G. Simi, John Parish, Edwin M. Winter, et al.
In the past three years, the US Army's Night Vision and Electronic Sensors Directorate has worked in conjunction with Navy SPAWAR on DARPA's Adaptive Spectral Reconnaissance Program (ASRP). The Night Vision Imaging Spectrometer (NVIS), a solar-reflective (0.4-2.35 µm) hyperspectral imaging device, has played a major role in the ASRP. As with all spectral imaging devices, a certain number of imperfections exist in the NVIS device. If not handled properly, these imperfections can have an impact upon the performance of certain detection algorithms. This paper will describe the overall measured sensor performance parameters of the NVIS, its imperfections, and the effect they may have on algorithm performance. There will also be a discussion concerning the processing tools and methods that have been developed in the past year and that have allowed the imperfections to be removed to some level.
Real-time airborne hyperspectral detection systems
Michael Koligman, Anthony C. Copeland
PAR Government Systems Corporation (PGSC) is currently supporting several successful government-sponsored airborne hyperspectral programs. The goals of these programs include the demonstration of airborne multisensor systems that can successfully detect and image targets in real time. Targets are detected using imaging hyperspectral sensors in the visible, near-IR, and short-wave IR bands. System capabilities include flight data collection, calibration and data compensation, bad pixel detection, real-time processing including anomaly detection, and post-processing functions which include target identification. Detected target positions are also used to cue a high-resolution camera system which provides images for offline analysis. Key features of these systems include a wide-area hyperspectral sensor, a high-resolution EO/IR sensor, onboard detection, cueing, and target recognition processing, and a telemetry downlink for selectable image data and target cues. The hardware systems for these programs include a variety of both custom-built and COTS components. PC and SGI workstation based systems are networked together utilizing a variety of interface boards. The systems were integrated and supported by PAR Government Systems Corporation, La Jolla, CA. This paper will focus on real-time hyperspectral hardware systems integration and will provide a general discussion of both real-time (airborne) processing capabilities and off-line methods. Conclusions will be presented on current status and future program plans.
Comparison of AOTF, grating, and FTS imaging spectrometers for hyperspectral remote sensing applications
Lisa Bubion, Peter E. Miller, Andreas F. Hayden
In this paper we compare the performance of three types of imaging spectrometers for effluent gas detection. The spectrometers compared are the grating and Fourier transform (FTS) spectrometers, chosen for their wide implementation, and the acousto-optic tunable filter (AOTF), chosen for its unique feature of spectral tunability. The analysis is performed in the thermal emission region of the spectrum using Raytheon-developed software simulation and modeling tools. The paper concludes with a proposed system design and architecture considerations for an AOTF-based hyperspectral and polarimetric imaging sensor for airborne and spaceborne platforms.
Lessons learned in the postprocessing of field spectroradiometric data covering the 0.4-2.5-µm wavelength region
Terrence H. Hemmer, Todd L. Westphal
As the number of recognized applications for and acceptance of spectral imaging increases, the need for field spectral measurements also increases. The goal of this paper is to help ensure the quality and accuracy of field spectral measurements. Unlike laboratory measurements, where everything is controlled to meticulous detail, field measurements tend to suffer from an almost complete lack of control. Hence, assuring data quality of field measurements can be difficult. To help compensate for some of the problems that arise due to this lack of control, collection protocols are established. Even using collection protocols, sensor artifacts are not always apparent. In this paper, some of these sensor artifacts are presented and discussed. While this paper concentrates on a specific spectrometer, many of the issues, protocols and processing procedures should be generally applicable to most field spectrometers operating in this spectral region.
Spectral Applications and Methodology I
Extraction of compositional information for trafficability mapping from hyperspectral data
Fred A. Kruse, Joseph W. Boardman, Adam B. Lefkoff
Trafficability refers to the extent to which the terrain will permit continued movement of any and/or all types of traffic, an issue that ground forces must address in advance of military operations to ensure their success. Multispectral remote sensing technology is currently used by terrain analysts to help assess trafficability, but its utility in producing classical measures of trafficability has been limited. This paper describes a hyperspectral trafficability mapping methodology supported by a case history using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data. The strong points of the hyperspectral data for trafficability mapping are detection, identification, and mapping of surface composition. Selected spectral libraries were reviewed in the context of trafficability to generate classes of materials with specific trafficability characteristics. These were used in conjunction with scene-based hyperspectral analysis methodologies to produce prototype trafficability products from AVIRIS data. The AVIRIS analysis illustrates that while considerable important information regarding trafficability can be extracted from hyperspectral data, analysis of these data alone cannot produce the desired “classical” trafficability products, and data fusion is required to enhance information extracted from hyperspectral data. The principal limitations of hyperspectral data are (1) that only certain materials have unique spectral features or character that can be detected, (2) that it measures only the very surface, which may not be indicative of bulk materials, and (3) that it does not provide any textural information, which is critical for determining classical trafficability measures. Specific additional required information includes terrain information related to topography and surface texture. This information can be obtained from supporting datasets such as high-resolution digital elevation models (DEMs) and synthetic aperture radar (SAR).
Characterization and delineation of plumes, clouds, and fires in hyperspectral images
Two approaches are presented here: one for discriminating features in a set of AVIRIS scenes dominated by areas of smoke, plumes, clouds, and burning grassland as well as scarred (burned) areas, and another for identifying those features. A semiautomated feature extraction approach using principal components analysis was used to separate the scenes into feature classes. Typically, only three component images were used to classify the image. A physics-based approach which utilized the spectral diversity of the features in the image was used to identify the nature of the classes produced in the component analysis. The results from this study show how the two approaches can be used in unison to fully characterize a smoke- or cloud-filled scene.
Wavelength calibration and instrument line shape estimation of a LWIR hyperspectral sensor from in-scene data
Estimates have been made of the wavelength calibration and instrument line shape (ILS) for the SEBASS hyperspectral sensor. FASCODE estimates of at-sensor radiance were made, convolved with Gaussian or triangular ILS functions of various widths, shifted slightly in wavelength, and downsampled to SEBASS wavelengths. The downsampled FASCODE estimates were compared to measured data for 21 atmospheric absorption features between 8-13 µm. The combination of ILS width and wavelength shift providing the best match was recorded. The technique was sensitive to wavelength calibration but relatively insensitive to ILS width. Results demonstrated that the SEBASS wavelength calibration was within +/- 0.5 channel widths.
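The matching procedure (convolve a fine-resolution model spectrum with a candidate ILS, apply a candidate shift, downsample to the sensor channels, and keep the best fit) can be sketched as a grid search. The spectra, grids, and parameter values below are synthetic stand-ins, not the FASCODE/SEBASS data:

```python
import numpy as np

def simulate(ref_wl, ref_spec, chan_wl, width, shift):
    """Forward model: Gaussian-ILS smoothing of a fine-grid reference
    spectrum, then sampling at (shifted) sensor channel wavelengths."""
    dwl = ref_wl[1] - ref_wl[0]
    half = int(4 * width / dwl) + 1
    x = np.arange(-half, half + 1) * dwl
    kern = np.exp(-0.5 * (x / width) ** 2)
    smooth = np.convolve(ref_spec, kern / kern.sum(), mode='same')
    return np.interp(chan_wl + shift, ref_wl, smooth)

def fit_ils(ref_wl, ref_spec, chan_wl, measured, widths, shifts):
    """Grid search for the (ILS width, wavelength shift) pair whose
    simulated spectrum best matches the measured one."""
    errs = {(w, s): np.sum((simulate(ref_wl, ref_spec, chan_wl, w, s)
                            - measured) ** 2)
            for w in widths for s in shifts}
    return min(errs, key=errs.get)

# Synthetic check: one absorption line, known truth (width 0.03, shift 0.02)
ref_wl = np.linspace(8.0, 13.0, 5001)                       # um, fine grid
ref = 1.0 - 0.8 * np.exp(-0.5 * ((ref_wl - 10.0) / 0.01) ** 2)
chan_wl = np.linspace(8.5, 12.5, 100)                       # sensor channels
measured = simulate(ref_wl, ref, chan_wl, width=0.03, shift=0.02)
w_hat, s_hat = fit_ils(ref_wl, ref, chan_wl, measured,
                       widths=[0.01, 0.03, 0.05], shifts=[-0.02, 0.0, 0.02])
```

When the measured spectrum is generated from parameters on the grid, the search recovers them exactly; with real data the best grid point simply minimizes the residual.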
Comparison of matrix factorization algorithms for band selection in hyperspectral imagery
Miguel Velez-Reyes, Luis O. Jimenez-Rodriguez, Daphnia M. Linares, et al.
Hyperspectral imaging sensors provide high-spectral-resolution images of natural phenomena in hundreds of bands. High storage and transmission requirements, computational complexity, and statistical modeling problems motivate the idea of dimension reduction using band selection. The optimal band-selection problem can be formulated as a combinatorial optimization problem in which p bands from a set of n bands are selected such that some measure of information content is maximized. Potential applications for automated band selection include classifier feature extraction, and band location in sensor design and in programming of reconfigurable sensors. The computational requirements for standard search algorithms to solve the optimal band-selection problem are prohibitive. In this paper, we present the use of singular value and rank-revealing QR matrix factorizations for band selection. These matrix factorizations can be used to determine the most independent columns of a matrix. The selected columns represent the most independent bands, which contain most of the spatial information. It can be shown that under certain circumstances the bands selected using these matrix factorizations are good approximations to the principal components explaining most of the image spatial variability. The advantage of matrix factorizations over the combinatorial optimization approach is that they take polynomial time, and robust, proven numerical routines for their computation are readily available from many sources. In the paper, we will present results comparing the performance of the algorithms using AVIRIS and LANDSAT imagery. Algorithms are compared in their computational requirements, their capacity to approximate the principal components, and their performance as an automated feature extraction processor in a classification algorithm. Preliminary results show that under certain circumstances selected bands can have over 90% correlation with principal components, and classifiers using these algorithms for feature extraction can outperform spectral angle classifiers.
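The column-selection idea behind the rank-revealing QR factorization can be sketched as a greedy pivoting loop: repeatedly pick the band with the largest residual norm, then deflate the remaining bands against it. This is an illustrative Gram-Schmidt-style sketch, not the authors' routine; in practice one would typically call a library pivoted QR such as `scipy.linalg.qr(..., pivoting=True)`.

```python
import numpy as np

def select_bands(cube, p):
    """Greedy pivoted-QR band selection: pick the p most linearly
    independent bands of a hyperspectral image.

    cube: (n_pixels, n_bands) matrix, one column per band.
    """
    A = cube.astype(float).copy()
    A -= A.mean(axis=0)               # remove band means
    selected = []
    for _ in range(p):
        norms = np.linalg.norm(A, axis=0)
        j = int(np.argmax(norms))     # band with most residual energy
        selected.append(j)
        q = A[:, j] / (norms[j] + 1e-12)
        # deflate: remove the component along the chosen band
        A -= np.outer(q, q @ A)
    return selected
```

Each chosen band is the one least explained by the bands already selected, which is why the result tends to track the leading principal components.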
Landsat 7 Data Processing and Archive
icon_mobile_dropdown
Landsat 7 Science Data Processing: a systems overview
Robert J. Schweiss, Nathaniel E. Daniel, Deborah K. Derrick
The Landsat 7 Science Data Processing System, developed by NASA for the Landsat 7 Project, provides the science data handling infrastructure used at the Earth Resources Observation Systems (EROS) Data Center (EDC) Landsat Data Handling Facility (DHF) of the United States Department of Interior, United States Geological Survey (USGS) located in Sioux Falls, South Dakota. This paper presents an overview of the Landsat 7 Science Data Processing System and details of the design, architecture, concept of operation, and management aspects of systems used in the processing of the Landsat 7 Science Data.
Subsetting and formatting Landsat 7 L0R ETM+ data products
Michael R. Reid
The Landsat 7 Processing System (LPS) processes Landsat 7 Enhanced Thematic Mapper Plus (ETM+) instrument data into large, contiguous segments called "subintervals." The LPS-processed subinterval products must be subsetted and reformatted before the Level 1 processing systems can ingest them. The initial full subintervals produced by the LPS are stored mainly in HDF Earth Observing System (HDF-EOS) format, an extension to the Hierarchical Data Format (HDF). The final L0R products are stored in native HDF. The HDF and HDF-EOS application programming interfaces (APIs) can be used for extensive data subsetting and data reorganization. How the HDF and HDF-EOS APIs may be used to efficiently subset, format, and organize Landsat 7 L0R data to create various configurations of L0R products is discussed.
Landsat 7 processing on the EROS Data Center's National Land Archive Processing System (NLAPS)
Brenda K. Jones, LeAnn Dix
Staff at the U.S. Geological Survey (USGS) EROS Data Center (EDC) process Landsat 7 data for customer distribution on two distinct systems. The Distributed Active Archive Center (DAAC) processing is done on the Level-1 Product Generation System (LPGS), and the USGS processing is completed on the National Land Archive Processing System (NLAPS). The NLAPS is capable of processing data from Landsat 1-5 and 7 and SPOT 1-3. The data can be processed with correction levels ranging from Level-0 raw data to terrain-corrected precision products in a variety of output formats. The NLAPS processing system requires that different ordering procedures be used for standard recipe versus special order processing, and it makes available many processing and framing options. The NLAPS draws on a ground control point library whose sources range from paper maps to textual descriptions and registered source imagery, such as digital orthophoto quadrangles. The accuracy of the final products will vary depending on the level of correction applied and the source of the ground control points used for the precision corrections. An accuracy assessment of the LPGS systematic product versus the NLAPS systematic product demonstrates that both systems provide the same geometric and radiometric accuracy in systematically corrected products.
REALM: an image database for Landsat 7 data
Jeffrey G. Masek, Samuel N. Goward, Carter T. Shock
In order to fulfill the science mission of Landsat 7, users must explore advanced methodologies for analyzing large volumes of Landsat data. REALM (Research Environment for Advanced Landsat Monitoring) is a parallel database system implemented at the University of Maryland to create on-demand Landsat analyses using hundreds to thousands of scenes. REALM includes automated preprocessing modules, automated navigation and space/time indexing, and a query language that permits users to submit custom algorithms to the image database. A simple example, calculating forest cover for the Eastern United States, illustrates the utility of this approach. Aggregate throughput for this query amounts to 6-8 MB/sec, sufficient to create a continental-scale forest-cover map in less than 12 hours.
Cloud cover avoidance in space-based remote sensing acquisition
John R. Gasch, Kenneth A. Campana
The Landsat 7 mission is the first of the Landsat series of remote sensing satellites to employ automated techniques of cloud cover avoidance in its mission of acquiring a global database of high-resolution (15-30 m) multi-spectral images. Cloud avoidance enables the mission to concentrate its limited assets on the acquisition of higher quality scenes by steering acquisitions away from scenes where the predicted cloud density is higher than nominal. Thus, the mission has a higher probability of acquiring scenes of greater value for land use studies. The timely availability of reliable global cloud cover forecasts from the National Centers for Environmental Prediction (NCEP) makes this operationally feasible. This paper will describe the general implementation and mission operational considerations for employing cloud avoidance in daily mission planning and scheduling. The algorithms employed in the scheduler's priority computations will be described, along with the proof of concept in the form of modeled and actual results obtained by Landsat 7 against the historical cloud contamination statistics obtained by other remote sensing satellites in its class. The paper will also describe the cloud cover prediction methods currently employed by NCEP as well as plans for future enhancements to the cloud prediction model. In conclusion, the paper will explore the applicability of employing cloud avoidance in future, and possibly existing, remote sensing satellite missions.
Landsat 7 automatic cloud cover assessment
Richard R. Irish
An automatic cloud cover assessment algorithm was developed for the Landsat 7 ground system. A scene dependent approach that employs two passes through ETM+ data was developed. In pass one, the reflective and thermal properties of scene features are used to establish the presence or absence of clouds in a scene. If present, a scene-specific thermal profile for clouds is established. In pass two, a unique thermal signature for clouds is developed and used to identify the remaining clouds in a scene. The algorithm appears to be a good cloud discriminator for most areas of the Earth. Some difficulty has appeared in imagery over Antarctica, and snow at high illumination angles is occasionally mistaken for cloud.
Building a global, consistent, and meaningful Landsat 7 data archive
Theresa J. Arvidson, John R. Gasch, Samuel N. Goward
The mission of Landsat 7 is to acquire and periodically refresh a global archive of sunlit, substantially cloud-free land scenes. For the U.S. archive, Landsat 7 is acquiring every land scene at least once every year, at an average rate of 250 scenes each day or 90,000 scenes each year. This is the first time in the 25-year history of Landsat data acquisitions that there is a deliberate goal to build this archive such that any data of interest to the majority of users will already be in the archive when they go looking for it - at the right gain setting, at the right time, substantially cloud-free, and at the right frequency of acquisition. Anticipating most users' data needs is the key to achieving this lofty goal. The Long Term Acquisition Plan (LTAP) dictates the optimum acquisition refresh cycle for each scene, based on change detection and special interest inputs. The plan also specifies monthly optimum gain settings to maximize scene data return. Scheduling software automatically schedules acquisitions in accordance with this LTAP, making decisions as to acceptable cloud cover levels, urgency of acquisition, and availability of resources to fulfill the plan.
Detection and Identification II
icon_mobile_dropdown
Effect of the number of samples used in a leave-one-out covariance estimator
Larry Biehl, David A. Landgrebe
Some algorithms, such as Gaussian Maximum Likelihood, require the use of second-order statistics, e.g., the covariance matrix, to help characterize the target in addition to the mean. Also, models such as FASSP require the second-order statistics of targets to predict the performance of algorithms even though the algorithm may not use the covariance matrix directly. However, many times the number of samples available to make a good estimate of the covariance matrix is small. The Leave-One-Out Covariance (LOOC) estimator can be used to estimate the covariance matrix when the number of samples available is less than the normal minimum required. The normal minimum number of samples needed for a sample class covariance matrix is p+1 samples for p-dimensional data. For the LOOC estimator, in theory, as few as 3 samples are all that are needed. However, what are the effects of using such a low number in practice? This paper presents the results of an experiment that was conducted to measure what the effect may be in one specific instance. Sometimes as few as 0.1p samples produce reasonably satisfactory results; other times 0.4p or more samples are needed.
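As a rough illustration of the leave-one-out selection idea, the sketch below blends the sample covariance with its diagonal and picks the mixing weight that maximizes the leave-one-out Gaussian log-likelihood. The published LOOC estimator also mixes in common (pooled) covariance terms; those are omitted here for brevity, so this is a simplified sketch rather than the authors' estimator.

```python
import numpy as np

def looc_covariance(X, alphas=np.linspace(0.0, 1.0, 11)):
    """Simplified LOOC sketch: choose alpha in C = (1-a)*diag(S) + a*S
    by maximizing the summed leave-one-out Gaussian log-likelihood.

    X: (n_samples, p) training samples for one class.
    """
    n, p = X.shape
    best_alpha, best_ll = None, -np.inf
    for a in alphas:
        ll = 0.0
        for i in range(n):
            Xi = np.delete(X, i, axis=0)          # leave sample i out
            mu = Xi.mean(axis=0)
            S = np.cov(Xi, rowvar=False, bias=True)
            C = (1 - a) * np.diag(np.diag(S)) + a * S
            C += 1e-9 * np.eye(p)                 # numerical guard
            d = X[i] - mu
            sign, logdet = np.linalg.slogdet(C)
            ll += -0.5 * (logdet + d @ np.linalg.solve(C, d))
        if ll > best_ll:
            best_alpha, best_ll = a, ll
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False, bias=True)
    return best_alpha, (1 - best_alpha) * np.diag(np.diag(S)) + best_alpha * S
```

With very few samples the diagonal term keeps the estimate invertible, which is what lets the method operate below the usual p+1 sample minimum.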
Hidden Markov model-based spectral measure for hyperspectral image analysis
A Hidden Markov Model (HMM)-based spectral measure is proposed. The basic idea is to model a hyperspectral spectral vector as a stochastic process in which the spectral correlation and band-to-band variability are modeled by a hidden Markov process with parameters determined by the spectrum of the vector, which forms a sequence of observations. In order to evaluate the performance of this new measure, it is compared to two commonly used spectral measures, Euclidean Distance (ED) and the Spectral Angle Mapper (SAM), as well as the recently proposed Spectral Information Divergence (SID). The experimental results show that the proposed measure, referred to as HMMID, characterizes spectral information more effectively than the other three measures, at the expense of computational complexity.
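For reference, two of the baseline measures compared above can be written compactly (ED is simply `np.linalg.norm(x - y)`). The SID sketch below assumes strictly positive spectra so that they can be normalized to probability distributions; it is a standard formulation, not code from the paper.

```python
import numpy as np

def sam(x, y):
    """Spectral Angle Mapper: angle (radians) between two spectra."""
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def sid(x, y):
    """Spectral Information Divergence: symmetric Kullback-Leibler
    divergence between the spectra normalized to distributions."""
    p = x / x.sum()
    q = y / y.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

Note that SAM is invariant to overall brightness (scaling a spectrum leaves the angle unchanged), which is one reason it is a common baseline for shape-based matching.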
Detection and segmentation in hyperspectral imagery using discriminant analysis
Hyperspectral imagery (HSI) contains significant spectral resolution that enables material identification. Typical methods of classification include various forms of matching sample image spectra to pure end-member sample spectra or mixtures of these end-members. Often, pure end-members are not available a priori. We propose the use of HSI to complement other sensor modalities, which are used to cue the end-member selection process for target detection. Multiple sensor modalities are frequently available, and sensor fusion is exploited as demonstrated by the DARPA Dynamic Database (DDB) and Multisensor Exploitation Testbed (MSET) programs. Candidate target pixels, cued from other sensor modalities, are registered to the HSI and verified using local matched filters. Target identification is then performed using multiple methods including Euclidean distance, spectral angle mapping, anomaly detection, principal component analysis (PCA) decomposition and reconstruction, and linear discriminant analysis (LDA). The use of LDA for target identification as well as scene segmentation provides significant capabilities to HSI understanding.
Atmospheric Characterization and Correction II
icon_mobile_dropdown
Atmospheric compensation for surface temperature and emissivity separation
A new method for atmospheric compensation of longwave infrared (LWIR) hyperspectral images is presented. The technique exploits the large amount of data in hyperspectral images to obtain the most information about atmospheric and surface parameters of interest. This is done with Canonical Correlation Analysis (CCA) by casting the problem onto a multivariate framework. The procedure accounts for the joint effects of surface and atmospheric radiation, thus addressing the complex interaction between the Earth's surface properties, thermodynamic state, and the atmosphere. After atmospheric compensation, the calculated surface radiance is used to estimate temperature and emissivity. The technique was tested with radiative transfer model simulations and airborne multispectral data. Results obtained from MODTRAN simulations and the MODIS/ASTER airborne simulator (MASTER) sensor show that it is feasible to retrieve land surface temperature and emissivity with accuracies of 1 K and 0.01, respectively.
Atmospheric correction using imbedded models (ACUIM)
Anthony M. Sommese, Mark Essel, David S. Weiss, et al.
Many remote sensing applications rely on accurate spectral estimates of surface reflectance. To transform from measured irradiance to reflectance, the contaminating effects of the intervening atmosphere must be removed. A model-based algorithm is used to perform this transformation. It operates on hyperspectral data collected in the reflective wavelength region (0.4 to 2.5 µm), where the state of the atmosphere is described by a set of parameters (e.g., water vapor and aerosol content). The algorithm embeds an atmospheric model (or a derived database) into an estimation loop that sequentially solves for each parameter using measurements in pre-determined bands. Estimates of the atmospheric state are then used along with the atmospheric model to develop a set of correction terms, which are applied on a pixel-by-pixel basis across the entire spectrum to convert measured irradiance to reflectance. Emphasis is placed on automation, requiring a unique approach to water vapor and visibility estimation. Algorithm performance is demonstrated against AVIRIS and HYDICE collections taken over the Atmospheric Radiation Measurement site near Lamont, OK. Estimates of total integrated water vapor and visibility (aerosol content) are compared to external measurements provided by meteorological instruments. Also, reflectance estimates of the gray and spectral reflectance panels are compared to field measurements.
Performance assessment of atmospheric correction algorithms for Vis-SWIR hyperspectral data
Amy E. Stewart, Robert D. Kaiser
The performance of a variety of atmospheric correction algorithms applied to VIS-SWIR hyperspectral data is assessed quantitatively. Data from HYDICE obtained under desert, forest, tropical, and alpine conditions were tested. In addition to comparing retrieved reflectance spectra to ground truth, performance is assessed in terms of the impact of the various atmospheric correction techniques on material identification.
Mitigation of atmospheric effects in hyperspectral data analysis
For hyperspectral data analysis, the general objective for atmospheric compensation algorithms is to remove solar illumination and atmospheric effects from the measured spectral data so that surface reflectance can be retrieved. This then allows for comparison with library data for target identification. Recent advances in spectral sensing capability have led to the development of a number of atmospheric compensation algorithms for hyperspectral data analysis. In this paper, three topics will be discussed: (1) algorithm evaluation of two physics-based approaches, ATREM and the AFRL model; (2) sensitivity analysis of the effects of various input parameters on surface reflectance retrieval; and (3) algorithm enhancements showing how water vapor and aerosol retrievals can be conducted better than in current algorithms. Examples using existing hyperspectral data, including those from HYDICE and AVIRIS, will be discussed. Results will also be compared with truth information derived from ground- and satellite-based meteorological data.
Thin-cloud effects on spectral/spatial remote sensing and information content within Vis-SWIR hyperspectral imagery
Joseph G. Shanks, William A.M. Blumberg, Steven J. Heising, et al.
Modern optical sensors can provide high-quality multi/hyperspectral data at high spatial resolution, permitting the application of diverse and sophisticated algorithms for remote sensing of the terrain and atmosphere. With global coverage of perceptible cloud exceeding seventy-five percent [Wylie & Menzel, 1999], it is important that the effects of intervening cloud be anticipated and minimized to realize the full potential of such systems. Cloud contamination also bears on the more general issue of "information content" in an HSI data stream. This paper will describe the application of the Vis-LWIR scene simulation tools CLDSIM/GENESSIS/MOSART for assessing spectral/spatial matched-filter algorithms for the detection and classification of features of interest against terrain, with and without thin clouds. Following a review of the methodology, the sensitivity of matched-filter SNR to cloud cover versus GSD, as captured in sequential subsets of the primary principal components, will be presented. The potential for misclassification due to undetected thin clouds will also be described.
Atmospheric correction including the BRDF influence of the target
Carmen Tornow
The atmospheric correction program ACUMAM (atmospheric correction using MODTRAN and more) considers the influence of the bi-directional reflectance distribution function (BRDF) of the corresponding surface target and is based on a semi-analytic solution of the radiative transfer equation. The key to the method is a special separation of the directly transmitted surface radiance from the total measured one. It takes advantage of the statistical BRDF features derived from a number of BRDF models and field measurements. The path-scattered radiance of the atmosphere and the various atmospheric transmission functions used in ACUMAM do not depend on the actual BRDF of the surface. They are calculated from the output radiances of MODTRAN and from the results of the radiative transfer code of Diner and Martonchik (1984). Their code works for Lambertian as well as non-Lambertian surface BRDFs. The presented atmospheric correction program can be adapted to any passive remote sensing sensor that operates in the solar spectral range. In order to evaluate the accuracy of ACUMAM, it was tested with simulated radiances and real airborne and spaceborne measurements. The simulation procedure was restricted to the technical parameters of the wide angle optical spectral sensor (WAOSS). This sensor has the capability to perform multi-angular measurements in the near-infrared spectral region. It will be mounted on the bi-spectral infrared detection (BIRD) micro-satellite, which will fly next year.
Spectral Applications and Methodology II
icon_mobile_dropdown
Cross-sensor registration optimization study
Nga Nguyen, Jay K. Hackett
Multi-sensor and multi-spectral data fusion is becoming a very useful technology to solve a host of defense and commercial imaging and computer vision problems. Many of the techniques that can be used to fuse multi-sensor image data require coregistration or alignment of pixels between image bands. We have performed a non-parametric study to determine which multi-spectral bands should be chosen for optimum pixel-level alignment. The data used during this study come from two aerial multi-spectral sensors (one with 3 visible bands and one with 5 bands in the visible and short-wave infrared) and one synthetic aperture radar (SAR) sensor operating in the X-band. The study is presented in a scientific manner to allow for objective analysis of the results. A similarity measure and normalization approach was developed to allow for direct comparison among all combinations of visible, short-wave infrared, and SAR phenomenology. All combinations of data alignment are performed, and analytical results are extracted, analyzed, and statistically plotted. Variations in time of day of collection, atmospheric transmission, and collection path length are investigated. This approach has applicability for band selection in both manual and automatic registration techniques that are used to co-register multi-sensor data.
Genetic algorithm for combining new and existing image processing tools for multispectral imagery
We describe the implementation and performance of a genetic algorithm (GA) which evolves and combines image processing tools for multispectral imagery (MSI) datasets. Existing algorithms for particular features can also be "re-tuned" and combined with the newly evolved image processing tools to rapidly produce customized feature extraction tools. First results from our software system were presented previously. We now report on work extending our system to look for a range of broad-area features in MSI datasets. These features demand an integrated spatio-spectral approach, which our system is designed to use. We describe our chromosomal representation of candidate image processing algorithms, and discuss our set of image operators. Our application has been geospatial feature extraction using publicly available MSI and hyperspectral imagery (HSI). We demonstrate our system on NASA/Jet Propulsion Laboratory's Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) HSI which has been processed to simulate MSI data from the Department of Energy's Multispectral Thermal Imager (MTI) instrument. We exhibit some of our evolved algorithms, and discuss their operation and performance.
Advantages of BRDF information for the interpretation of reflectance measurements over vegetation
Maria von Schoenermark, Aslan Demircan, Bernhard Geiger, et al.
The interpretation of the reflected radiation measured by wide angle instruments or in off-nadir directions requires knowledge of the bi-directional reflectance distribution function (BRDF). Using atmospheric radiative transfer calculations, we demonstrate how several vegetation indices are influenced by the BRDF and by the atmosphere. We present two methods to retrieve the leaf area index (LAI) using bi-directional reflectance factors in the near-infrared spectral domain. First, we use a newly defined Off-Nadir Vegetation Index (ONVI) and a multiple regression analysis. The method was tested on a synthetic data set with a LAI varying between 0 and 10. We achieved a root mean square error of 1.54. Second, we trained a neural network with synthetic data computed with the BRDF model of Roujean et al. Using observations in the backward scattering direction, the root mean square error for LAI retrieval was 1.2. To obtain more comprehensive information on the characteristic and stochastic properties of the BRDF, a new measuring method was developed. It employs a rotating CCD-line camera mounted on an extendible boom. The data of our field campaigns, together with the measurements performed by other groups, are arranged in a BRDF catalog.
Spectral Feature Extraction and Compression
icon_mobile_dropdown
Autonomous determination of endmembers in high spatial and spectral resolution data
Edwin M. Winter, Christopher G. Simi, Anthony B. Hill, et al.
Recently, new hyperspectral sensors have become available that provide both high spatial resolution and high spectral resolution. These characteristics, combined with high signal-to-noise ratio, allow the differentiation of vegetation or mineral types based upon the spectra of small patches of the surface. In this paper, automated endmember determination methods are applied to high spatial and spectral resolution data from two new sensors, TRWIS III and NVIS. Both of these sensors are high-quality, low-noise pushbroom imaging spectrometers that acquire data at 5 to 6 nm resolution from 400 to 2450 nm. The data sets collected will be used for two different applications of the automated determination of endmembers: scene material classification and the detection of spectral anomalies. The NVIS hyperspectral data was collected from approximately 6000 ft above ground level over Cuprite, Nevada, resulting in a footprint of approximately two meters. The TRWIS III data was collected from 1500 meters altitude over mixed agricultural backgrounds in Ventura County, California, a largely agricultural area about 100 km from Los Angeles. After calibration and other preprocessing steps, the data in each case was processed using the N-FINDR algorithm, which extracts endmembers based upon the geometry of convex sets. Once these endmember spectra are found, the image cube can be "unmixed" into fractional abundances of each material in each pixel. The results of processing this high spatial and spectral resolution data for these two different applications will be presented.
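The convex-geometry idea behind N-FINDR can be illustrated on toy data: the endmembers are the pixels whose simplex has the largest volume, and abundances follow from least squares. Both functions below are simplifications for illustration only; the real N-FINDR swaps pixels iteratively rather than searching exhaustively, and abundance estimation usually also enforces nonnegativity.

```python
import numpy as np
from itertools import combinations

def nfindr_like(pixels, k):
    """Exhaustive simplex-volume endmember search (the N-FINDR idea):
    choose the k pixels whose simplex has the largest volume.

    pixels: (n, k-1) array, bands already reduced (e.g. by PCA)
    to k-1 dimensions so the simplex volume is a determinant.
    """
    best, best_vol = None, -1.0
    for idx in combinations(range(len(pixels)), k):
        E = pixels[list(idx)]
        # simplex volume is proportional to |det(E_i - E_0, i > 0)|
        vol = abs(np.linalg.det(E[1:] - E[0]))
        if vol > best_vol:
            best, best_vol = list(idx), vol
    return best

def unmix(pixel, endmembers):
    """Least-squares fractional abundances, with the sum-to-one
    constraint appended as an extra equation (nonnegativity is
    not enforced in this sketch)."""
    A = np.vstack([endmembers.T, np.ones(len(endmembers))])
    b = np.append(pixel, 1.0)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```

On a small 2-D example the search recovers the vertices of the data cloud, and a mixed pixel unmixes into its generating fractions.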
Multibase transform coding for multispectral image compression
David Akopian, Jussi P. S. Parkkinen, Timo Jaaskelainen, et al.
This paper considers color image compression using multiple transforms, i.e., multibase transform coding (MTC). We present a block MTC method that aims both to utilize the variations in correlation structure between different channels of a multispectral image, irrespective of the basis representation of the multicolor space, and to reduce blocking effects by combining properties of block and wavelet transforms.
Poster Session
icon_mobile_dropdown
Multispectral image sharpening using wavelet transform techniques and spatial correlation of edges
George P. Lemeshewsky, Robert A. Schowengerdt
Several reported image fusion or sharpening techniques are based on the discrete wavelet transform (DWT). The technique described here uses a pixel-based maximum selection rule to combine respective transform coefficients of lower spatial resolution near-infrared (NIR) and higher spatial resolution panchromatic (pan) imagery to produce a sharpened NIR image. Sharpening assumes a radiometric correlation between the spectral band images. However, there can be poor correlation, including edge contrast reversals (e.g., at soil-vegetation boundaries), between the fused images and, consequently, degraded performance. To improve sharpening, a local area-based correlation technique originally reported for edge comparison with image pyramid fusion is modified for application with the DWT process. Further improvements are obtained by using redundant, shift-invariant implementation of the DWT. Example images demonstrate the improvements in NIR image sharpening with higher resolution pan imagery.
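The pixel-based maximum-selection rule described above can be sketched with a one-level Haar DWT: keep the NIR approximation band, and in each detail band take whichever coefficient (NIR or pan) has the larger magnitude, i.e., the sharper edge. The Haar basis and single decomposition level are simplifying assumptions; the paper's shift-invariant DWT and local-correlation refinements are not reproduced here.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / 2.0   # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0   # row pairs: detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2] = LL + LH
    a[:, 1::2] = LL - LH
    d = np.empty_like(a)
    d[:, 0::2] = HL + HH
    d[:, 1::2] = HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2] = a + d
    out[1::2] = a - d
    return out

def fuse_max(nir, pan):
    """Maximum-selection fusion: NIR approximation band plus, per
    pixel, the larger-magnitude detail coefficient of either image
    (ties keep the NIR coefficient)."""
    nb, pb = haar2(nir), haar2(pan)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return ihaar2(nb[0], *[pick(x, y) for x, y in zip(nb[1:], pb[1:])])
```

Fusing an image with itself reconstructs it exactly (the Haar pair is perfectly invertible), while fusing a flat NIR image with an edge-bearing pan image injects the pan edge into the result.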