Proceedings Volume 4381

Algorithms for Multispectral, Hyperspectral, and Ultraspectral Imagery VII

Sylvia S. Shen, Michael R. Descour
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 20 August 2001
Contents: 15 Sessions, 59 Papers, 0 Presentations
Conference: Aerospace/Defense Sensing, Simulation, and Controls 2001
Volume Number: 4381

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Detection and Identification I
  • Band Selection
  • Imaging Spectrometry Projects I
  • Spectral Applications and Methodology I
  • Multispectral Thermal Imager I
  • Spectral Applications and Methodology II
  • Clustering and Classification
  • Spectral Applications and Methodology III
  • Detection and Identification II
  • Multispectral Thermal Imager II
  • Atmospheric Characterization and Correction
  • Spectral Applications and Methodology IV
  • Imaging Spectrometry Projects II
  • Detection and Identification III
  • Posters
Detection and Identification I
Spectral subspace matched filtering
The linear matched filter has long served as a workhorse algorithm for illustrating the promise of multispectral target detection. However, an accurate description of a target's distribution usually requires expanding the dimensionality of its intrinsic signature subspace beyond what is appropriate for the matched filter. Structured backgrounds also deviate from the matched filter paradigm and are often modeled as clusters. However, spectral clusters usually show evidence of mixing, which corresponds to the presence of different materials within a single pixel. This makes a subspace background model an attractive alternative to clustering. In this paper we present a new method for generating detection algorithms based on joint target/background subspace modeling. We use it first to derive an existing class of GLR detectors, in the process illustrating the nature of the real problems that these solve. Then natural symmetries expected to be characteristic of otherwise unknown target and background distributions are used to generate new algorithms. Currently employed detectors are also interpreted using the new approach, resulting in recommendations for improvements to them.
Hyperspectral adaptive matched-filter detectors: practical performance comparison
Dimitris G. Manolakis, Christina Siracusa, David Marden, et al.
A previous unified treatment of adaptive matched filter algorithms for target detection in hyperspectral imaging data included a theoretical analysis of their performance under a Gaussian noise plus interference model. The purpose of this paper is to provide an empirical analysis of algorithm performance using HYDICE data sets. First, we provide a concise summary of adaptive matched filter detectors, including their key theoretical assumptions, design parameters, and computational complexity. The widely used generalized likelihood ratio detectors, adaptive subspace detectors, constrained energy minimization (CEM), and orthogonal subspace projection (OSP) algorithms are the focus of the analysis. Second, we investigate how well the signal models used for the development of detection algorithms characterize the HYDICE data. Accurate modeling of the background is crucial for the development of constant false alarm rate (CFAR) detectors. Finally, we compare the different algorithms with regard to two desirable performance properties: capacity to operate in CFAR mode and target visibility enhancement.
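As a rough illustration of the detector statistics compared in papers such as this one, the sketch below computes adaptive matched filter (AMF) and adaptive cosine/coherence estimator (ACE) scores from globally estimated background statistics. It is not the authors' implementation; operational detectors typically use more careful (local or cluster-based) covariance estimation.

```python
# Illustrative sketch (not the authors' code): AMF and ACE statistics per pixel.
import numpy as np

def amf_ace_scores(cube, target):
    """cube: (rows, cols, bands) radiance array; target: (bands,) signature."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)

    # Background statistics estimated globally for simplicity.
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))

    s = target - mu                      # demeaned target signature
    x = pixels - mu                      # demeaned pixel spectra

    sCs = s @ cov_inv @ s                # scalar s' C^-1 s
    sCx = x @ cov_inv @ s                # s' C^-1 x for every pixel
    xCx = np.einsum('ij,jk,ik->i', x, cov_inv, x)  # x' C^-1 x per pixel

    amf = sCx / np.sqrt(sCs)             # matched-filter statistic
    ace = sCx**2 / (sCs * xCx)           # cosine-squared (ACE) statistic
    return amf.reshape(rows, cols), ace.reshape(rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.normal(size=(64, 64, 30))
    target = rng.normal(size=30)
    amf, ace = amf_ace_scores(cube, target)
    print(amf.shape, ace.shape)
```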
Automatic target-recognition system for hyperspectral imagery using ORASIS
David Gillis, Peter J. Palmadesso, Jeffrey H. Bowles
We present an automatic target recognition system (ATR) for hyperspectral imagery. The system has been designed to use the output from ORASIS (the Optical Real-time Adaptive Spectral Identification System), a hyperspectral analysis package designed at the Naval Research Laboratory. The ATR system is capable of performing both target recognition (including subpixel identification) and anomaly detection, in near real-time and with no a priori scene knowledge. In this paper we discuss the algorithms we use in the ATR and include experimental results using the HYDICE Forest Radiance data set.
Quantitative study of detection performance for LWIR hyperspectral imagers as a function of number of spectral bands
Remote passive sensors can collect data that depict both the spatial distribution of objects in the scene and the spectral distributions for those objects within the scene. Target search techniques, such as matched filter algorithms, use highly resolved wavelength spectra (a large number of bands) to help detect fine features in the spectrum in order to discriminate objects from the background. The use of a large number of bands during the target search, however, significantly slows image collection and area coverage rates. This study quantitatively examines how binning or integrating bands can affect target detection. Our study examines the long-wave infrared spectra of man-made targets and natural backgrounds obtained with the SEBASS (8-12 µm) imager as part of the Dark HORSE 2 exercise during the HYDRA data collection in November 1998. In this collection, at least 30 bands of data were obtained, but they were then binned to as few as 2 bands. This study examines the effect on detection performance of reducing the number of bands, through computation of the signal-to-clutter ratio (SCR) for a variety of target types. In addition, this study examines how band reduction affects the receiver operator curves (ROC), i.e., the target detection probability versus false alarm rate, for matched filter algorithms using in-scene target signatures and hyperspectral images. Target detection, as measured by SCR, for a variety of target types, improves with an increasing number of bands. The enhancement in SCR levels off at approximately 10 bands, with only a small increase in SCR obtained from 10 to 30 bands. A variable number of bands per bin (for a fixed number of bins), generated by a genetic algorithm, increases SCR and ROC curve performance for multi-temporal studies. Thus, an optimal selection of bands derived from one mission may be robust and stable, and provide enhanced target detection for data collected on subsequent days. This investigation is confined to the study of calibrated LWIR image cubes where clutter, rather than sensor noise, limits target detection. Therefore, many of the conclusions in this study regarding band reduction and band binning may not apply to image cubes containing noisy data, where band reduction and averaging may help substantially reduce the noise.
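The band-binning/SCR comparison described above can be mimicked in a few lines. The sketch below is an assumption-laden toy, not the study's processing chain: it bins adjacent bands by simple averaging and computes a Mahalanobis-type SCR for an implanted synthetic target.

```python
# Hedged sketch: SCR as a function of the number of equal-width band bins.
import numpy as np

def bin_bands(cube, n_bins):
    """Average groups of adjacent bands to reduce (rows, cols, bands) to n_bins."""
    edges = np.linspace(0, cube.shape[2], n_bins + 1).astype(int)
    return np.stack([cube[:, :, a:b].mean(axis=2)
                     for a, b in zip(edges[:-1], edges[1:])], axis=2)

def scr(cube, target_mask):
    """Mahalanobis-type SCR of the mean target spectrum against background clutter."""
    bg = cube[~target_mask]
    tgt = cube[target_mask].mean(axis=0)
    mu = bg.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(bg, rowvar=False))
    d = tgt - mu
    return float(np.sqrt(d @ cov_inv @ d))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cube = rng.normal(size=(50, 50, 30))
    mask = np.zeros((50, 50), dtype=bool)
    mask[20:23, 20:23] = True
    cube[mask] += 0.5                    # implant a weak synthetic target
    for n in (2, 5, 10, 30):
        print(n, "bins -> SCR", round(scr(bin_bands(cube, n), mask), 2))
```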
Band Selection
Relationships between physical phenomena, distance metrics, and best-bands selection in hyperspectral processing
Nirmal Keshava, Peter W. Boettcher
The objective of hyperspectral processing algorithms is to efficiently capitalize on the wealth of information in the scene being imaged. Radiation collected in hundreds of contiguous electromagnetic channels and stored as data in a vector provides insight about the reflective and emissive properties of each pixel in the scene. However, it is not intuitively clear that, for common applications such as estimation, classification, and detection, the best performance results from utilizing every measurement in the vector. In fact, it is quite easy to show that for some tasks, more data can degrade performance. In this paper, we explore the role of metrics and best-bands algorithms in the context of maximizing the performance of hyperspectral algorithms. Specifically, we first focus on creating an intuitive framework for physical information measured by a sensor. Then, we examine how it is translated into numerical quantities by a distance metric. We discuss how two common distance metrics for hyperspectral signals, the Spectral Angle Mapper (SAM) and the Euclidean Minimum Distance (EMD), quantify the distance between two spectra. Focusing on the SAM metric, we demonstrate, in the context of target detection, how the separability of the two spectra can be increased by retaining only those bands that maximize the metric. Finally, this intuition about the best-bands analysis for SAM is extended to the Generalized Likelihood Ratio Test (GLRT) for a practical target/background detection scenario. Results are shown for a scene imaged by the HYDICE sensor, demonstrating that the separability of targets and background can be increased by carefully choosing the best bands for the test.
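For concreteness, a minimal sketch of the two metrics plus a naive per-band separability filter follows; the band-scoring rule is an illustration only, not the best-bands algorithm developed in the paper.

```python
# Minimal sketch of SAM and EMD, plus a toy "keep the separating bands" filter.
import numpy as np

def sam(x, y):
    """Spectral angle (radians) between two spectra; insensitive to overall scaling."""
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def emd(x, y):
    """Euclidean distance between two spectra; sensitive to brightness."""
    return float(np.linalg.norm(x - y))

def keep_separating_bands(x, y, keep_fraction=0.5):
    """Retain the bands with the largest normalized per-band disagreement (illustrative rule)."""
    score = np.abs(x / np.linalg.norm(x) - y / np.linalg.norm(y))
    order = np.argsort(score)[::-1]
    return np.sort(order[: int(len(x) * keep_fraction)])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    target = rng.uniform(0.1, 0.6, 100)
    background = target + rng.normal(0, 0.02, 100)
    print("SAM, all bands :", sam(target, background))
    bands = keep_separating_bands(target, background)
    print("SAM, kept bands:", sam(target[bands], background[bands]))
```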
Automated optimal channel selection for spectral imaging sensors
A method of optimizing the selection of spectral channels in a spectral-spatial remote sensor has been developed that is applicable to the design of multispectral, hyperspectral, and ultraspectral resolution sensors. The approach is based on an endmember analysis technique that has been refined to select the most information-dense channels. The algorithm operates sequentially, and at any step in the sequence, the channel selected is the most independent from all previously selected channels. After the channel selection process, highly correlated channels, which are contiguous to those selected, can be merged to form bands. This process increases the signal-to-noise ratio for the new, broader spectral bands. The resulting bands, potentially of unequal width and spacing, collect the most uncorrelated spectral information present in the data. The band selection provides a physical interpretation of the data and has applications in spectral feature selection and data compression.
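One plausible reading of the sequential selection step is sketched below, using the residual after projection onto the already-selected channels as the independence measure; this criterion is an assumption, not necessarily the refinement used by the author.

```python
# Sketch: greedy selection of the channel least well represented by those already chosen.
import numpy as np

def select_channels(cube, n_select):
    """cube: (rows, cols, bands). Returns indices of sequentially selected channels."""
    X = cube.reshape(-1, cube.shape[2]).astype(float)
    X -= X.mean(axis=0)
    selected = [int(np.argmax(np.linalg.norm(X, axis=0)))]   # start with the most energetic channel
    for _ in range(n_select - 1):
        B = X[:, selected]                                    # basis of chosen channels
        proj = B @ np.linalg.lstsq(B, X, rcond=None)[0]       # project all channels onto that basis
        residual = np.linalg.norm(X - proj, axis=0)
        residual[selected] = -np.inf                          # never re-select a channel
        selected.append(int(np.argmax(residual)))
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    cube = rng.normal(size=(40, 40, 20))
    print(select_channels(cube, 5))
```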
Band selection for lossless image compression
Lossless compression algorithms typically do not use spectral prediction, and those that do typically use only one adjacent band. Using one adjacent band has the disadvantage that if the last band compressed is needed, all previous bands must be decompressed. One way to avoid this is to use a few selected bands to predict the others. Exhaustive searches for band selection face a combinatorial explosion and are therefore not possible except in the simplest cases. To counter this, the use of a fast approximate method for band selection is proposed. The bands selected by this algorithm are a reasonable approximation to the principal components. Results are presented for exhaustive studies using entropy measures and sum of squared errors, and are compared to the fast algorithm for simple cases. It was also found that using six bands selected by the fast algorithm produces performance comparable to using one adjacent band.
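The underlying idea, that a handful of well-chosen bands can predict the rest, can be illustrated with a least-squares prediction of every band from a fixed predictor set; the sketch below is not the paper's fast selection algorithm, and the predictor indices are arbitrary placeholders.

```python
# Sketch: how well a few predictor bands explain the remaining bands.
import numpy as np

def prediction_residuals(cube, predictor_bands):
    """Affine least-squares prediction of every band from the predictor bands; per-band RMS residual."""
    X = cube.reshape(-1, cube.shape[2]).astype(float)
    P = np.column_stack([np.ones(len(X)), X[:, predictor_bands]])
    coeffs, *_ = np.linalg.lstsq(P, X, rcond=None)
    residual = X - P @ coeffs
    return np.sqrt((residual ** 2).mean(axis=0))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    base = rng.normal(size=(32 * 32, 3))                       # low-rank "scene"
    mix = rng.normal(size=(3, 24))
    cube = (base @ mix + 0.01 * rng.normal(size=(32 * 32, 24))).reshape(32, 32, 24)
    rms = prediction_residuals(cube, [0, 8, 16])               # hypothetical predictor bands
    print("mean RMS prediction error:", rms.mean())
```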
Band selection from a hyperspectral data-cube for a real-time multispectral 3CCD camera
Paul J. Withagen, Eric den Breejen, Eric M. Franken, et al.
Given a specific task, like detection of hidden objects (e.g., vehicles and landmines) in a natural background, hyperspectral data gives a significant advantage over RGB-color or gray-value images. It introduces, however, a trade-off between cost, speed, signal-to-noise ratio, spectral resolution, and spatial resolution. Our research concentrates on making an optimal choice of spectral bands in an imaging system with a high frame rate and spatial resolution. This can be done using a real-time multispectral 3CCD camera, which records a scene with three detectors, each accurately set to a wavelength by selected optical filters. This leads to the subject of this paper: how to select three optimal bands from hyperspectral data to perform a certain task. The choice of these bands includes two aspects, the center wavelength and the spectral width. A band-selection and band-broadening procedure has been developed, based on statistical pattern recognition techniques. We will demonstrate our proposed band selection algorithm and present its classification results compared to red-green-blue and red-green-near-infrared data for a military vehicle in a natural background and for surface-laid landmines in vegetation.
Imaging Spectrometry Projects I
IKONOS technical performance assessment
Mark K. Cook, Brad A. Peterson, Gene Dial, et al.
The world's first high-resolution commercial satellite, IKONOS, was launched by Lockheed Martin for Space Imaging in September of 1999. The IKONOS satellite contains both a 1-meter 11-bit panchromatic sensor and a 4-band 4-meter 11-bit multispectral sensor. After launch, a detailed On-Orbit Product Verification program was conducted to verify that the IKONOS satellite and ground station products met all design specifications. This paper shares the results of the On-Orbit Product Verification program. Descriptions of the image quality attributes and a comparison between system requirements and on-orbit performance are included. The verified attributes are the Signal to Noise Ratio (SNR), Modulation Transfer Function (MTF), Band to Band Registration, and Radiometric and Geometric Accuracy. The Geometric Accuracy is examined with respect to all ground processed product requirements to produce monoscopic, stereo, orthorectified, and digital terrain matrix products. The results of this on-orbit testing and subsequent analyses show that all IKONOS system requirements have been met or exceeded.
Night vision imaging spectrometer (NVIS) calibration and configuration: recent developments
Christopher G. Simi, Anthony B. Hill, Henry Kling, et al.
The Night Vision Imaging Spectrometer (NVIS) system has participated in a large variety of hyperspectral data collections for the Department of Defense. A large number of improvements to this system have been undertaken. They include the implementation of a calibration process that utilizes in-flight calibration units (IFCU). Other improvements include the completion and implementation of an updated laboratory wavelength assignments map, which provides precise bandwidth profiles for every NVIS pixel. NVESD has recently incorporated a Boeing C-MIGITS II INS/DGPS system, which allows geo-rectification of every frame of NVIS data. A PC-based Dual Real Time Recorder (DRTR) was developed to extend the collection capability of the sensor and allow the concurrent collection of data from other devices. The DRTR collects data from the NVIS, a Dalsa imager, and the C-MIGITS II (C/A code Miniature Integrated GPS/INS Tactical System), which provides navigation information. The integration of the C-MIGITS II allows every data frame of both the NVIS and the Dalsa to be stamped with INS/GPS information. The DRTR software can also provide real-time waterfall displays of the data being collected. This paper will review the recent improvements to the NVIS system.
Night vision imaging spectrometer (NVIS) processing and viewing tools
Christopher G. Simi, Roberta Dixon, Michael J. Schlangen, et al.
The US Army's Night Vision and Electronic Sensors Directorate (NVESD) has developed software tools for processing, viewing, and analyzing hyperspectral data. The tools were specifically developed for use with the U.S. Army's NVESD Night Vision Imaging Spectrometer (NVIS), but they can also be used to process hyperspectral data in a variety of other formats. The first of these tools is the NVESD Hyperspectral Data Processor, which is used to create a calibrated datacube from raw hyperspectral data files. It can calibrate raw NVIS data to spectral radiance units, perform spectral re-alignment, and can co-register imagery from NVIS's VNIR and SWIR subsystems. The second tool is the NVESD Hyperspectral Viewer, which can display focal plane data, generate images, and compute spatial and temporal statistics, produce data histograms, estimate spectral correlation, compute signal-to-clutter ratios, etc. Additionally, this software tool has recently been modified to utilize the INS/GPS data that is currently embedded into NVIS data as well as the high-resolution imagery (HRI) that is collected simultaneously. Furthering its capabilities, Technical Research Associates (TRA) has added the following detection algorithms to the Viewer: N-FINDR, PC and MNF Transformations, Spectral Angle Mapper, and R-X. The purpose of these software developments is to provide the DoD and other Government agencies with a variety of tools, which are not only applicable to NVIS data but also can be applied to other hyperspectral data.
Compact Airborne Spectral Sensor (COMPASS)
Christopher G. Simi, Edwin M. Winter, Mary M. Williams, et al.
The Compact Airborne Spectral Sensor (COMPASS) design is intended to demonstrate a new design concept for solar-reflective hyperspectral systems for the Government. Capitalizing on recent focal plane developments, the COMPASS system utilizes a single FPA to cover the 0.4-2.35 µm spectral region. This system also utilizes an Offner spectrometer design as well as an electron-etched lithography curved grating technology pioneered by NASA/JPL. This paper also discusses the technical trades that drove the design selection of COMPASS. When completed, the core COMPASS spectrometer design could be used in a large variety of configurations on a variety of aircraft.
On-board processing for the COMPASS
Christopher G. Simi, Edwin M. Winter, Michael J. Schlangen, et al.
The Compact Airborne Spectral Sensor (COMPASS) is a hyperspectral sensor covering the 400 to 2350 nm spectral region using a single focal plane and a very compact optical system. In addition, COMPASS will include a high-resolution panchromatic imager. With its compact design and its full spectral coverage throughout the visible, near infrared and SWIR, COMPASS represents a major step forward in the practical utilization of hyperspectral sensors for military operations. COMPASS will be deployed on a variety of airborne platforms for the detection of military objects of interest. There was considerable interest in the development of an on-board processor for COMPASS. The purpose of this processor is to calibrate the data and detect military targets in complex background clutter. Because of their ability to operate on truly hyperspectral data consisting of a hundred or more bands, linear unmixing algorithms were selected for the detection processor. The N-FINDR algorithm that automatically finds endmembers and then unmixes the scene was selected for real-time implementation. In addition, a recently developed detection algorithm, Stochastic Target Detection (STD), which was specifically designed for compatibility with linear unmixing algorithms, was chosen for the detection step. The N-FINDR/STD algorithm pair was first tested on a variety of hyperspectral data sets to determine its performance level relative to existing hyperspectral algorithms (such as RX) using Receiver Operator Curves (ROC) as the basis. Following completion of the testing, a hardware implementation of a real time processor for COMPASS using commercial off-the-shelf computer technology was designed. The COMPASS on-board processor will consist of the following elements: preprocessing, N-FINDR endmember determination and linear unmixing, the STD target detection step, and the selection of a High Resolution Image Chip covering the target area. Computer resource projections have shown that these functions, along with supporting interactive display functions, can operate in real-time on COMPASS data using multi-processor Pentium III class processors.
Chemical imaging system: current status and challenges
Agustin I. Ifarraguerri, James O. Jensen
The Chemical Imaging System (CIS) is a small, high-speed long-wave infrared (8-12 µm) imaging spectrometer which is currently under development by the United States Army. The fielded system will operate at 360 scans per second with a large-format focal-plane array. The CIS, which is currently at the exploratory development stage, is scheduled for transition to engineering development in 2005. Currently, the CIS uses the TurboFT FTS in conjunction with a 16-pixel direct-wired HgCdTe detector array. The TurboFT spectrometer provides high-speed operation in a small, lightweight package. In parallel to the hardware development, an algorithm and software development effort is underway to address some unique features of the CIS. The TurboFT-based system requires a non-uniform sampling Fourier transform algorithm in order to preserve signal fidelity. Also, the availability of multiple pixels can be exploited in order to improve the interference suppression capabilities of the system by allowing the detection and identification algorithm to adapt its parameters to the changing background. Due to the enormous amount of data generated, the signal processing must proceed at a very high rate. High-speed computers operating with a parallel architecture are required to process the data in real time. This paper describes the current CIS bread box system. It includes some field measurement results followed by a discussion of the issues and challenges associated with meeting the design goals set for the program.
Spectral Applications and Methodology I
Automated hyper/multispectral image analysis tool
John A. Conant, Kurt D. Annen
Hyperspectral and multispectral imagery provide a powerful remote sensing tool. In applications for which the image is produced by gaseous emission, analysis of the image to obtain the concentration and temperature of the gas flow is difficult and computationally intensive. We have developed a solution to this analysis problem using a physics-based scene model coupled with an automatic solver, a nonlinear optimization algorithm. The image set is modeled using a parameterized description of the gaseous flowfield and a three-dimensional spectral gas radiance model. The solver algorithm generates a trial flowfield and runs the radiance model, iteratively varying the flow parameters until an optimum match is obtained between measured and modeled images. An example analysis is shown using multispectral images of a microgravity flame.
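The measure-model-iterate loop can be illustrated schematically as below; the Gaussian "radiance model" and its two parameters are placeholders standing in for the three-dimensional spectral gas radiance model and flowfield parameterization described above.

```python
# Schematic sketch of fitting model parameters so modeled and measured images agree.
import numpy as np
from scipy.optimize import least_squares

def toy_radiance_model(params, grid):
    """Toy stand-in: a Gaussian profile whose amplitude and width play the role of flow parameters."""
    amplitude, width = params
    return amplitude * np.exp(-(grid ** 2) / (2.0 * width ** 2))

def fit_flowfield(measured, grid, initial_guess):
    """Iteratively vary the parameters until the modeled image matches the measurement (least squares)."""
    def residuals(params):
        return (toy_radiance_model(params, grid) - measured).ravel()
    return least_squares(residuals, initial_guess, bounds=([0.0, 0.1], [np.inf, np.inf]))

if __name__ == "__main__":
    grid = np.linspace(-3, 3, 200)
    truth = (2.0, 0.7)
    measured = toy_radiance_model(truth, grid) + 0.01 * np.random.default_rng(5).normal(size=grid.shape)
    result = fit_flowfield(measured, grid, initial_guess=[1.0, 1.0])
    print("recovered parameters:", result.x)
```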
ENVI-based hyperspectral/high-resolution panchromatic pitch yaw roll and georectification algorithm development
The Spectral Information Technology Applications Center has developed software capability to perform roll correction and geo-rectification, using pitch, roll and yaw for data collected by the Night Vision Imaging Spectrometer as well as its high-resolution panchromatic camera. This paper describes the roll-correction algorithm and its software interface to the Boeing C-MIGITS II INS/GPS system for correction of pitch, yaw and roll. It also describes the geo-rectification algorithm and its interface to the ENVI geo-rectification software routines.
Efficient materials mapping for hyperspectral data
Hyperspectral data rates and volumes challenge analysis approaches that are not highly automated and efficient. Derived products from hyperspectral data, which are presented in units that are physically meaningful, have added value to analysts who are not spectral or statistical experts. The Efficient Materials Mapping project involves developing an approach that is both efficient in terms of processing time and analyzed data volume and produces outputs in terms of surface chemical or material composition. Our approach will exploit the typical redundancy inherent in hyperspectral data of natural scenes to reduce data volume. This data volume reduction is combined with an automated approach to extract chemical information from spectral data. The result will be a method to produce maps of chemical quantities that can be readily interpreted by analysts specializing in characteristics of terrains and targets rather than photons and spectra.
Multispectral Thermal Imager I
Multispectral Thermal Imager: overview
W. Randy Bell, Paul G. Weber
The Multispectral Thermal Imager, MTI, is a research and development project sponsored by the United States Department of Energy. The primary mission is to demonstrate advanced multispectral and thermal imaging from a satellite, including new technologies, data processing, and analysis techniques. The MTI builds on a number of earlier efforts, including Landsat, NASA remote sensing missions, and others, but the MTI incorporates a unique combination of attributes. The MTI satellite was launched on 12 March 2000 into a 580 km x 610 km, sun-synchronous orbit with nominal 1 am and 1 pm equatorial crossing times. The Air Force Space Test Program provided the Orbital Sciences Taurus launch vehicle. The satellite has a design lifetime of one year, with a goal of three years. The satellite and payload can typically observe six sites per day, with either one or two observations per site from nadir and off-nadir angles. Data are stored in the satellite memory and down-linked to a ground station at Sandia National Laboratories. Data are then forwarded to the Data Processing and Analysis Center at Los Alamos National Laboratory for processing, analysis, and distribution to the MTI team and collaborators. We will provide an overview of the project, a few examples of data products, and an introduction to more detailed presentations in this special session.
Multispectral Thermal Imager (MTI) satellite hardware status, tasking, and operations
Max L. Decker, R. Rex Kay, N. Glenn Rackley
MTI is a comprehensive R&D project, featuring a single satellite in a sun-synchronous orbit designed to collect radiometrically accurate images of instrumented ground sites in 15 spectral bands ranging from visible to long-wave infrared. The satellite was launched from Vandenberg AFB on March 12, 2000 aboard an Orbital Sciences Corporation Taurus rocket. After launch, the operations team completed a 3-month turn-on, checkout, and alignment procedure, and declared the satellite ready for its R&D mission on June 12, 2000. The satellite is currently healthy, having collected over 1,100 images during its first nine months of operation. This paper presents a brief satellite overview and documents on-orbit status and operational experience, including anomalies and their resolution.
MTI science, data products, and ground-data processing overview
The mission of the Multispectral Thermal Imager (MTI) satellite is to demonstrate the efficacy of highly accurate multispectral imaging for passive characterization of urban and industrial areas, as well as sites of environmental interest. The satellite makes top-of-atmosphere radiance measurements that are subsequently processed into estimates of surface properties such as vegetation health, temperatures, material composition and others. The MTI satellite also provides simultaneous data for atmospheric characterization at high spatial resolution. To utilize these data the MTI science program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing and data interpretation and analysis. Algorithms have been developed to retrieve a multitude of physical quantities and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility and programmability. In addition, the MTI science team has produced detailed site, system and atmospheric models to aid in system design and data analysis. This paper provides an overview of the MTI research objectives, data products and ground data processing.
Performance of the interactive procedures for daytime detection of dense clouds in the MTI pipeline
Charles A. Rohde, Karen Lewis Hirsch, Anthony B. Davis
Pixel-scale cloud detection relies on the simple fact that dense-enough clouds are generally brighter, whiter, and colder than the underlying surface. These plain-language statements are readily translated into threshold operations in the multispectral subspaces, thus providing a reasonable premise for searching data cubes for cloud signatures. To supplement this spectral input (VIS, NIR, and TIR channels), we remark that cloud tops are generally above most of the water vapor in the atmospheric column. An extra threshold in the MTI water vapor product can therefore be applied. This helps considerably in cases where one of the default cloud signatures becomes ambiguous. The resulting cloud mask is, however, still highly sensitive to the thresholds in brightness, whiteness, temperature, and column water content, especially since we also want to flag low-level clouds that are not so dense. Clouds are also generally spatially large. This implies that simple spatial morphological filters can be of use to remove false positives and to expand the cloud mask. A false positive is indeed preferable to a miss in view of MTI's mission in support of nuclear non-proliferation; non-local cloud radiative effects can otherwise bias retrievals in adjacent cloud-free areas. Therefore we use a data analyst to provide built-in quality control for MTI cloud masks. When looking for low-level clouds, the analyst interacts with a GUI containing histograms, a customized RGB rendering of the input data, and an RGB diagnostic cloud mask for quick evaluation of all threshold values. We use MTI data to document the performance of, and analyst sensitivity to, this procedure.
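The pixel-scale logic lends itself to a compact sketch; the thresholds, channel names, and morphological structuring elements below are placeholders that, in the actual pipeline, an analyst tunes interactively.

```python
# Hedged sketch of a bright/white/cold/dry-above threshold cloud mask with spatial cleanup.
import numpy as np
from scipy import ndimage

def cloud_mask(vis, nir, tir_bt, water_vapor,
               bright_thresh=0.3, white_thresh=0.1, cold_thresh=280.0, wv_thresh=1.0):
    bright = vis > bright_thresh                      # clouds are bright in the visible
    white = np.abs(vis - nir) < white_thresh          # and spectrally flat ("white")
    cold = tir_bt < cold_thresh                       # and colder than the surface (K)
    dry_above = water_vapor < wv_thresh               # cloud tops sit above most column water vapor (cm)
    mask = bright & white & cold & dry_above

    # Clouds are spatially large: drop isolated pixels, then grow the mask slightly.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_dilation(mask, structure=np.ones((5, 5)))
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    shape = (100, 100)
    vis = rng.uniform(0.05, 0.2, shape)
    nir = vis + rng.normal(0, 0.02, shape)
    tir = rng.uniform(285, 300, shape)
    wv = rng.uniform(1.5, 3.0, shape)
    vis[40:60, 40:60] += 0.4              # synthetic cloud: brighter,
    tir[40:60, 40:60] -= 20               # colder,
    wv[40:60, 40:60] -= 1.5               # and drier above
    print("cloudy pixels:", int(cloud_mask(vis, nir, tir, wv).sum()))
```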
Recipes for writing algorithms for atmospheric corrections and temperature/emissivity separations in the thermal regime for a multispectral sensor
This paper discusses the algorithms created for the Multi-spectral Thermal Imager (MTI) to retrieve temperatures and emissivities. Recipes to create the physics-based water temperature retrieval and the emissivity of water surfaces are described. A simple radiative transfer model for multi-spectral sensors is developed. A method to create look-up tables and the criterion for finding the optimum water temperature are covered. Practical aspects such as conversion from band-averaged radiances to brightness temperatures and the effects of variations in the spectral response on the atmospheric transmission are discussed. A recipe for a temperature/emissivity separation algorithm when water surfaces are present is given. Results of retrievals of skin water temperatures compared with in situ measurements of the bulk water temperature at two locations are shown.
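One of the practical steps mentioned, converting a band-averaged radiance to a brightness temperature through a lookup table, can be sketched as follows; the rectangular spectral response and the band limits used here are assumptions for illustration only.

```python
# Sketch: band-averaged Planck radiance and its lookup-table inversion to brightness temperature.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance, W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / wavelength_m**5) / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def band_averaged_radiance(temp_k, wl_lo_um, wl_hi_um, n=200):
    wl = np.linspace(wl_lo_um, wl_hi_um, n) * 1e-6
    return planck_radiance(wl, temp_k).mean()          # flat (rectangular) spectral response assumed

def brightness_temperature(radiance, wl_lo_um, wl_hi_um, t_grid=np.arange(250.0, 340.0, 0.05)):
    """Invert the band-averaged Planck function with a precomputed lookup table."""
    lut = np.array([band_averaged_radiance(t, wl_lo_um, wl_hi_um) for t in t_grid])
    return float(np.interp(radiance, lut, t_grid))

if __name__ == "__main__":
    L = band_averaged_radiance(295.0, 10.2, 10.7)       # a nominal LWIR band (placeholder limits)
    print("recovered brightness temperature:", round(brightness_temperature(L, 10.2, 10.7), 2), "K")
```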
Observations and model predictions of water skin temperatures at MTI core site lakes and reservoirs
Alfred J. Garrett, Robert J. Kurzeja, Byron Lance O'Steen, et al.
The Savannah River Technology Center (SRTC) measured water skin temperatures at four of the Multi-spectral Thermal Imager (MTI) core sites. The depression of the skin temperature relative to the bulk water temperature (ΔT) a few centimeters below the surface is a complex function of the weather conditions, turbulent mixing in the water, and the bulk water temperature. Observed skin temperature depressions range from near zero to more than 1.0 °C. Skin temperature depressions tend to be larger when the bulk water temperature is high, but large depressions were also observed in cool bodies of water in calm conditions at night. We compared ΔT predictions from three models (SRTC, Schlussel, and Wick) against measured ΔTs from 15 data sets taken at the MTI core sites. The SRTC and Wick models performed somewhat better than the Schlussel model, with RMSE and average absolute errors of about 0.2 °C, relative to 0.4 °C for the Schlussel model. The average observed ΔT for all 15 data sets was -0.7 °C.
Spectral Applications and Methodology II
Evolving forest fire burn severity classification algorithms for multispectral imagery
Between May 6 and May 18, 2000, the Cerro Grande/Los Alamos wildfire burned approximately 43,000 acres (17,500 ha) and 235 residences in the town of Los Alamos, NM. Initial estimates of forest damage included 17,000 acres (6,900 ha) of 70-100% tree mortality. Restoration efforts following the fire were complicated by the large scale of the fire, and by the presence of extensive natural and man-made hazards. These conditions forced a reliance on remote sensing techniques for mapping and classifying the burn region. During and after the fire, remote-sensing data was acquired from a variety of aircraft-based and satellite-based sensors, including Landsat 7. We now report on the application of a machine learning technique, implemented in a software package called GENIE, to the classification of forest fire burn severity using Landsat 7 ETM+ multispectral imagery. The details of this automatic classification are compared to the manually produced burn classification, which was derived from field observations and manual interpretation of high-resolution aerial color/infrared photography.
Fusion of high-resolution lidar elevation data with hyperspectral data to characterize tree canopies
This paper describes a methodology developed at the Spectral Information Technology Applications Center (SITAC) to combine information derived from high-resolution LIDAR elevation data with information derived from hyperspectral data to characterize tree canopies. High-resolution elevation data are used to detect abrupt changes in elevation, indicative of man-made structures or certain natural features. The underlying elevation is estimated by first masking out the pertinent structures or features and then interpolating. Structure or feature height is then calculated as the difference between the original elevation and the interpolated elevation. This procedure is applied to a high-resolution LIDAR elevation data set of an open forest scene to produce a tree height image. These tree height data are then combined with other tree information to infer trunk diameter. Hyperspectral data are employed to detect as well as characterize man-made and natural structures. Fusion of hyperspectral information with elevation information promises benefits to remote sensing applications.
Integration of high-resolution DTED, hyperspectral data, and hypermedia data using the terrain analysis system
Brian D. Leighty, Jack N. Rinker
DTED provides three-dimensional surface configuration information, which is the critical identifier of landform type. The use of knowledge-based, physiographic landform models and the application of various morphometric operators to the DTED can lead to the inference of landform type. Accurate identification of landform type leads to the prediction of probable composition and properties. Thus, knowing landform type should provide key information regarding terrain characteristics for military and civil applications. Hyperspectral data by themselves are of little use in landform identification because spectral characteristics relate to surface composition rather than shape. However, spectral information in conjunction with surface configuration can help to identify some landform types. In addition, the use of landform and hyperspectral information together can provide information on surface composition that can then be used to infer soil condition factors. In these situations the interpretation of hyperspectral signatures is significantly more constrained and thus should be more accurate. Landform inferences resulting from the integrated DTED and hyperspectral data are further integrated with hypermedia terrain data consisting of text and imagery. This allows additional inferences to be made regarding landform composition and properties. The integration of these forms of data is investigated using the DARPA-funded prototype Terrain Analysis System (TAS). Examples are presented using several types of landforms. This investigation has been sponsored by the Central MASINT Organization, Spectral Information Technology Applications Center.
Analytic registration of spatially and spectrally disparate co-located imaging sensors achieved by matching optical flow
Jonathon M. Schuler, J. Grant Howard, Dean A. Scribner, et al.
A multi-spectral imaging system can be defined as a combination of electro-optic imagers that are mechanically constrained to view the same scene. Subsequent processing of the output imagery invariably requires a spatial registration of one spectral band image to geometrically conform to the imagery from a different sensor. This paper outlines a procedure to leverage motion estimation of a pair of video sequences to determine a transformation that minimizes the disparity in optical flow between the sequences.
Clustering and Classification
Gibbs-based unsupervised segmentation approach to partitioning hyperspectral imagery for terrain applications
Robert S. Rand, Daniel M. Keenan
A Gibbs-based approach to partitioning hyperspectral imagery into homogeneous regions is investigated for terrain mapping applications. Bayesian estimation in the form of Maximum A Posteriori (MAP) estimation is applied through the use of a Gibbs distribution defined over a neighborhood system and is implemented as a multi-grid process. Appropriate energy functions and neighborhood graph structures are investigated, which model spectral disparities in an image using spectral angle and/or Euclidean distance. Experiments are conducted on a HYDICE scene collected over an area adjacent to Fort Hood, Texas, that contains a diverse range of terrain features and that is supported with ground truth. Suitable parameter ranges are investigated, and the behavior of the algorithm is characterized using individual and combined measures of disparity within the context of a more general framework, one that supports mixed-pixel processing.
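A rough sketch of a MAP-style labeling step using spectral angle as the disparity measure and a Potts-type neighborhood penalty appears below; a single ICM sweep stands in for the paper's multi-grid optimization, and the weight beta is a placeholder.

```python
# Hedged sketch: one ICM sweep of a spectral-angle data term plus Potts smoothness term.
import numpy as np

def spectral_angle(x, means):
    """Angle between a pixel spectrum and each class mean (means: (K, bands))."""
    num = means @ x
    den = np.linalg.norm(means, axis=1) * np.linalg.norm(x)
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def icm_sweep(cube, labels, means, beta=1.0):
    rows, cols, _ = cube.shape
    for i in range(rows):
        for j in range(cols):
            data_term = spectral_angle(cube[i, j], means)
            # Potts penalty: count 4-neighbors whose label disagrees with each candidate label.
            neighbors = [labels[m, n] for m, n in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= m < rows and 0 <= n < cols]
            smooth_term = np.array([sum(lbl != k for lbl in neighbors) for k in range(len(means))])
            labels[i, j] = int(np.argmin(data_term + beta * smooth_term))
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    means = rng.uniform(0.1, 1.0, size=(3, 20))
    cube = means[rng.integers(0, 3, size=(30, 30))] + 0.05 * rng.normal(size=(30, 30, 20))
    labels = rng.integers(0, 3, size=(30, 30))
    labels = icm_sweep(cube, labels, means)
    print("label counts:", np.bincount(labels.ravel()))
```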
Support vector machines for broad-area feature classification in remotely sensed images
Classification of broad-area features in satellite imagery is one of the most important applications of remote sensing. It is often difficult and time-consuming to develop classifiers by hand, so many researchers have turned to techniques from the fields of statistics and machine learning to automatically generate classifiers. Common techniques include Maximum Likelihood classifiers, neural networks, and genetic algorithms. We present a new system called Afreet, which uses a recently developed machine learning paradigm called Support Vector Machines (SVMs). In contrast to other techniques, SVMs offer a solid mathematical foundation that provides a probabilistic guarantee on how well the classifier will generalize to unseen data. In addition, the SVM training algorithm is guaranteed to converge to the globally optimal SVM classifier, can learn highly non-linear discrimination functions, copes extremely well with high-dimensional feature spaces (such as hyperspectral data), and scales well to large problem sizes. Afreet combines an SVM with a sophisticated spatio-spectral feature construction mechanism that allows it to classify spectrally ambiguous pixels. We demonstrate the effectiveness of the system by applying Afreet to several broad-area classification problems in remote sensing, and provide a comparison with conventional Maximum Likelihood classification.
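A minimal example of the SVM training step follows, using scikit-learn's SVC as a stand-in for Afreet's SVM; the spatio-spectral feature construction stage is omitted, and the synthetic spectra are placeholders.

```python
# Minimal SVM pixel-classification sketch on synthetic spectra (not Afreet itself).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n_bands, n_per_class = 50, 300

# Two synthetic spectral classes with overlapping distributions.
class_a = rng.normal(0.40, 0.05, size=(n_per_class, n_bands))
class_b = rng.normal(0.45, 0.05, size=(n_per_class, n_bands))
X = np.vstack([class_a, class_b])
y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("test accuracy:", round(clf.score(X_test, y_test), 3))
```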
Evaluation of matrix factorization method for data reduction and the unsupervised clustering of hyperspectral data using second-order statistics
We investigate a hyperspectral data reduction technique based on a matrix factorization method using the notion of linear independence instead of an information measure, as an alternative to Principal Component Analysis (PCA) or the Karhunen-Loeve Transform. The technique is applied to a hyperspectral database whose spectral samples are known. We proceed to cluster such dimension-reduced databases with an unsupervised second-order statistics clustering method, and we compare those results to those produced by first-order statistics. We illustrate the above methodology by applying it to several spectral databases. Since we know the class to which each sample in the database belongs, we can effectively assess the algorithms' clustering/classification accuracy. In addition to using unsupervised clustering of data for purposes of image segmentation, we investigate this algorithm as a means for improving the integrity of spectral databases by removing spurious samples.
Spectral Applications and Methodology III
Statistics of hyperspectral imaging data
Dimitris G. Manolakis, David Marden, John P. Kerekes, et al.
Characterization of the joint (among wavebands) probability density function (PDF) of hyperspectral imaging (HSI) data is crucial for several applications, including the design of constant false alarm rate (CFAR) detectors and statistical classifiers. HSI data are vector (or equivalently multivariate) data in a vector space with dimension equal to the number of spectral bands. As a result, the scalar statistics utilized by many detection and classification algorithms depend upon the joint pdf of the data and the vector-to-scalar mapping defining the specific algorithm. For reasons of analytical tractability, the multivariate Gaussian assumption has dominated the development and evaluation of algorithms for detection and classification in HSI data, although it is widely recognized that it does not always provide an accurate model for the data. The purpose of this paper is to provide a detailed investigation of the joint and marginal distributional properties of HSI data. To this end, we assess how well the multivariate Gaussian pdf describes HSI data using univariate techniques for evaluating marginal normality, and techniques that use unidimensional views (projections) of multivariate data. We show that the class of elliptically contoured distributions, which includes the multivariate normal distribution as a special case, provides a better characterization of the data. Finally, it is demonstrated that the class of univariate stable random variables provides a better model for the heavy-tailed output distribution of the well known matched filter target detection algorithm.
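For reference, the elliptically contoured family appealed to above has the standard density form (this is the textbook definition, not the specific fits reported in the paper):

\[
f(\mathbf{x}) \;=\; \frac{c_p}{|\boldsymbol{\Sigma}|^{1/2}}\; g\!\left[(\mathbf{x}-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right],
\]

where the multivariate normal is recovered for \(g(d)=e^{-d/2}\) and heavier-tailed members (e.g., the multivariate t) follow from other choices of \(g\).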
Models for recognizing faces in hyperspectral images
Hyperspectral sensors provide useful discriminants for human face identification that cannot be obtained by other sensing modalities. The spectral properties of human tissue vary significantly from person to person. While the visible spectral characteristics of a person's skin may change over time, near-infrared spectral measurements allow the sensing of subsurface tissue structure that is difficult for a subject to modify. The high spectral dimensionality of hyperspectral imagery provides the opportunity to recognize subpixel features, which enables reliable identification at large distances. We propose methods for the identification of humans using properties of individual tissue types as well as combinations of tissue types. Intrinsic models of facial tissue types for a person can be constructed from a single hyperspectral image. These models can be used to generate spectral subspaces that model the set of spectra for a face over a range of facial orientations, environmental conditions, and spectral mixtures.
Examples of atmospheric characterization using hyperspectral data in the VNIR, SWIR and MWIR
A conventional approach to HSI processing and exploitation has been to first perform atmospheric compensation so that surface features can be properly characterized. In this paper, the application of visible and IR spectral information to atmospheric characterization is discussed and illustrated with hyperspectral data in the VNIR, SWIR, and MWIR. AVIRIS and ARES data are utilized. The Airborne Visible-InfraRed Imaging Spectrometer (AVIRIS) sensor contains 224 bands, each with a spectral bandwidth of approximately 10 nm, allowing it to cover the entire range between 0.4 and 2.5 µm. For a NASA ER-2 flight altitude of 20 km, each pixel is 20 m in size, yielding a ground swath width of approximately 10 km. The Airborne Remote Earth Sensing (ARES) sensor was flown on a NASA WB-57 aircraft operated from approximately 15 km altitude. Spectral radiance data from 2.0 to 6.0 µm in 75 contiguous bands were collected. Pixel resolution is approximately 17 by 4.5 m, with a swath width of 800 m. Examples of data applications include atmospheric water vapor retrieval, aerosol characterization, delineation of natural and manmade clouds/plumes, and cloud depiction. It is illustrated that though each application may only require a few spectral bands, the ultimate strength of HSI exploitation lies in the simultaneous and adaptive retrievals of atmospheric and surface features. Inter-relationships among different bands are also demonstrated, and these are the physical basis for the optimal exploitation of spectral information.
Finding the dimensionality of hyperspectral data
Sinthop Kaewpijit, Jacqueline Le Moigne, Tarek El-Ghazawi
Hyperspectral systems have significantly progressed through recent advancements in sensor technology, which have made it possible to collect data with several hundred channels. While these remote sensing technology developments hold great promise for new findings in the areas of Earth and space science, they also present many challenges. These include the need for methods of data reduction and faster processing of such increased data volumes. Principal Component Analysis (PCA) is one such data reduction technique, which is often used when analyzing remotely sensed data. For example, with land cover classification, most conventional methods require the preprocessing step of dimension reduction, which can be seen as a transformation from a high-order dimension to a low-order dimension to overcome the so-called curse of dimensionality. Scientists typically produce all principal components (PCs) and then select from among them those that have significant information, which could be error prone. Using the so-called power method, the algorithm presented in this paper instead finds the eigenvalues one by one, starting from the largest and stopping when a predetermined threshold is reached. This threshold represents the desired amount of information content that corresponds to the computed eigenvalues as a percentage of the overall information content of the image. It will be shown that the algorithm can accurately select and compute only the needed PCs in an automatic fashion. It will also be shown that this algorithm is far more computationally efficient than existing methods.
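A hedged sketch of the thresholded power-method idea (deflating the covariance after each extracted component and stopping at a requested variance fraction) is given below; convergence handling is simplified relative to any production implementation.

```python
# Sketch: extract leading eigenpairs of the band covariance one at a time, stop at a variance threshold.
import numpy as np

def leading_components(cube, info_fraction=0.99, max_iter=500, tol=1e-8):
    X = cube.reshape(-1, cube.shape[2]).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    total_variance = np.trace(cov)
    eigvals, eigvecs = [], []
    while sum(eigvals) / total_variance < info_fraction:
        v = np.random.default_rng(len(eigvals)).normal(size=cov.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(max_iter):                       # power iteration
            w = cov @ v
            w /= np.linalg.norm(w)
            if np.linalg.norm(w - v) < tol:
                break
            v = w
        lam = float(v @ cov @ v)                        # Rayleigh quotient = current eigenvalue
        eigvals.append(lam)
        eigvecs.append(v)
        cov = cov - lam * np.outer(v, v)                # deflate before finding the next component
    return np.array(eigvals), np.array(eigvecs)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    cube = (rng.normal(size=(1600, 4)) @ rng.normal(size=(4, 60))
            + 0.05 * rng.normal(size=(1600, 60))).reshape(40, 40, 60)
    vals, vecs = leading_components(cube, info_fraction=0.95)
    print("components needed:", len(vals))
```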
Detection and Identification II
Modeling of LWIR hyperspectral system performance for surface object and effluent detection applications
In support of hyperspectral sensor system design and parameter tradeoff investigations, an analytical end-to-end remote sensing system performance forecasting model has been extended to the longwave infrared (LWIR). The model uses statistical descriptions of surface emissivities and temperature variations in a scene and propagates them through the effects of the atmosphere, the sensor, and processing transformations. A resultant system performance metric is then calculated based on these propagated statistics. This paper presents the theory and operation of extensions made to the model to cover the LWIR. Theory is presented on combining both surface spectral emissivity variation with surface temperature variation on the upwelling radiance measured by a downward-looking LWIR hyperspectral sensor. Comparisons of the model predictions with measurements from an airborne LWIR hyperspectral sensor at the DoE ARM site are presented. Also discussed is the implementation of a plume model and radiative transfer equations used to incorporate a thin man-made effluent plume in the upwelling radiance. Example parameter trades are included to show the utility of the model for sensor design and operation applications.
Characterization of gaseous effluents from modeling of LWIR hyperspectral measurements
Michael K. Griffin, John P. Kerekes, Kristine E. Farrar, et al.
Longwave Infrared (LWIR) radiation comprising atmospheric and surface emissions provides information for a number of applications including atmospheric profiling, surface temperature and emissivity estimation, and cloud depiction and characterization. The LWIR spectrum also contains absorption lines for numerous molecular species which can be utilized in quantifying species amounts. Modeling the absorption and emission from gaseous species using various radiative transfer codes such as MODTRAN-4 and FASE (a follow-on to the line-by-line radiative transfer code FASCODE) provides insight into the radiative signature of these elements as viewed from an airborne or space-borne platform and provides a basis for analysis of LWIR hyperspectral measurements. In this study, a model platform was developed for the investigation of the passive outgoing radiance from a scene containing an effluent plume layer. The effects of various scene and model parameters including ambient and plume temperatures, plume concentration, as well as the surface temperature and emissivity on the outgoing radiance were estimated. A simple equation relating the various components of the outgoing radiance was used to study the scale of the component contributions. A number of examples were given depicting the spectral radiance from plumes composed of single or multiple effluent gases as would be observed by typical airborne sensors. The issue of detectability and spectral identification was also discussed.
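The "simple equation relating the various components of the outgoing radiance" is commonly written, for an optically thin plume over an opaque surface and neglecting reflected downwelling radiance and scattering, as

\[
L_{\mathrm{obs}}(\lambda) \;\approx\; L_{\mathrm{path}}(\lambda) \;+\; \tau_{\mathrm{atm}}(\lambda)\left[\tau_p(\lambda)\,\varepsilon_s(\lambda)\,B(\lambda,T_s) \;+\; \bigl(1-\tau_p(\lambda)\bigr)\,B(\lambda,T_p)\right],
\]

where \(B\) is the Planck function, \(\tau_{\mathrm{atm}}\) the atmospheric transmittance above the plume, \(\tau_p\) the plume transmittance, \(\varepsilon_s\) and \(T_s\) the surface emissivity and temperature, and \(T_p\) the plume temperature; the paper's exact formulation may differ in detail.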
Unsupervised target subpixel detection in hyperspectral imagery
Most subpixel detection approaches require either full or partial prior target knowledge. In many practical applications, such prior knowledge is generally very difficult to obtain, if not impossible. One way to remedy this situation is to obtain target information directly from the image data in an unsupervised manner. In this paper, unsupervised target subpixel detection is considered. Three unsupervised learning algorithms are proposed: the unsupervised vector quantization (UVQ) algorithm, the unsupervised target generation process (UTGP), and the unsupervised NCLS (UNCLS) algorithm. These algorithms produce the necessary target information from the image data with no prior information required. Such generated target information is referred to as a posteriori target information and can be used to perform target detection.
Object-level processing of spectral imagery for detection of targets and changes using spatial-spectral-temporal techniques
Geoffrey G. Hazel
Automatic detection of ground targets and their movements is an important problem in military remote sensing. Much recent attention has been afforded the exploitation of spectral imagery for this application. However, current spectral detection algorithms yield inadequate performance in many demanding scenarios. The present work explores several techniques founded on the notion of object-level image processing. In object-level processing we seek to progress from a pixel-level image description to a description at the spatial scale of natural objects. The concept of a natural object is inspired by the human visual system. An important advancement of a recently reported spectral object extraction method is presented. This technique, Knowledge Based Object Reassembly, extracts objects based on the spectral similarity of their pixels and then merges spatially adjacent objects according to a maximum classification confidence criterion. The improvement in object extraction allows the accurate characterization of natural objects by a spatial-spectral feature set. This feature set then forms the basis of detection, classification and change detection algorithms. The performance of the technique is assessed and its impact on spectral object level change detection and spatial-spectral object-level target discrimination and classification is measured. Both multi-spectral and hyper-spectral imagery over several spectral regions is analyzed.
Hyperspectral materials detection/identification/quantification using the residual correlation method
Lonnie H. Hudgins, Joan Hayashi, Pamela L. Blake, et al.
Remote detection, identification, and quantification of materials is an important problem in earth resource assessment. Satellite-based hyperspectral imaging sensors currently being developed by government and industry partnerships (e.g. the Coastal Ocean Imaging Spectrometer aboard the Naval EarthMap Observer) appear to be uniquely qualified for this purpose. Obtaining accurate estimates of material abundance on a pixel-by-pixel basis poses many challenging algorithmic and computational difficulties. A significant issue that must be addressed is how to efficiently select endmembers from a library when that library is spectrally redundant. In this paper, we demonstrate how an improved version of the Residual Correlation Method (RCM+) can provide a flexible solution to this problem. The RCM+ offers a robust treatment for selecting endmembers from spectrally redundant libraries in a one-at-a-time fashion. We discuss alternative methods such as two-at-a-time, or more generally, N-at-a-time methods within a unified mathematical framework for analysis. Certain theorems apply to all such methods, and help to define a trade space for endmember selection methods in general. We demonstrate our results using synthetic test cases, and discuss how all endmember selection methods may be affected by redundancy within the library as well as specific properties of the data.
Multispectral Thermal Imager II
Recipes for writing algorithms to retrieve columnar water vapor for three-band multispectral data
Christoph C. Borel, Karen Lewis Hirsch, Lee K. Balick
Many papers have considered the theory of retrieving columnar water vapor using the continuum interpolated band ratio (CIBR), and a few have considered the atmospherically pre-corrected differential absorption (APDA) method. In this paper we aim to give recipes to actually implement CIBR and APDA for the Multi-spectral Thermal Imager (MTI), with the hope that they can be easily adapted to other sensors such as MODIS, AVIRIS, and HYDICE. The algorithms have the following four steps in common: (1) running a radiative transfer (RT) algorithm for a range of water vapor values and a particular observation geometry, (2) computation of sensor band-averaged radiances, (3) computation of a non-linear fit of channel ratios (CIBR or APDA) as a function of water vapor, and (4) application of the inverse fit to retrieve columnar water vapor as a function of channel ratio.
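The four steps can be sketched end to end as below, with a toy exponential model standing in for the radiative-transfer runs of step (1) so the example is self-contained; the band centers and the fit form follow common CIBR conventions and are assumptions, not the MTI recipe itself.

```python
# Hedged end-to-end sketch of a CIBR-style columnar water vapor retrieval.
import numpy as np
from scipy.optimize import curve_fit

# Band centers (micrometers): absorption band and its two reference bands (assumed values).
LAM_ABS, LAM_REF1, LAM_REF2 = 0.940, 0.865, 1.040
W1 = (LAM_REF2 - LAM_ABS) / (LAM_REF2 - LAM_REF1)     # continuum interpolation weights
W2 = (LAM_ABS - LAM_REF1) / (LAM_REF2 - LAM_REF1)

def toy_band_radiances(water_vapor_cm):
    """Stand-in for steps (1)-(2): band-averaged radiances vs. columnar water vapor."""
    l_ref1, l_ref2 = 10.0, 9.0                              # reference bands barely affected
    l_abs = 9.5 * np.exp(-0.45 * np.sqrt(water_vapor_cm))   # absorption band attenuated
    return l_abs, l_ref1, l_ref2

def cibr(l_abs, l_ref1, l_ref2):
    return l_abs / (W1 * l_ref1 + W2 * l_ref2)

# Step (3): fit the channel ratio as a function of water vapor.
wv_grid = np.linspace(0.1, 5.0, 40)
ratios = np.array([cibr(*toy_band_radiances(w)) for w in wv_grid])
fit = lambda w, a, b: a * np.exp(-b * np.sqrt(w))
(a, b), _ = curve_fit(fit, wv_grid, ratios, p0=(1.0, 0.5))

# Step (4): invert the fit to retrieve water vapor from an observed ratio.
observed_ratio = cibr(*toy_band_radiances(2.3))
retrieved_wv = (np.log(a / observed_ratio) / b) ** 2
print("retrieved columnar water vapor (cm):", round(retrieved_wv, 2))
```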
Comparison of four methods for determining precipitable water vapor content from multispectral data
Karen Lewis Hirsch, Lee K. Balick, Christoph C. Borel, et al.
Determining columnar water vapor is a fundamental problem in remote sensing. This measurement is important both for understanding atmospheric variability and also for removing atmospheric effects from remotely sensed data. Therefore, discovering a reliable, and if possible, automated method for determining water vapor column abundance is important. There are two standard methods for determining precipitable water vapor during the daytime from multi-spectral data. The first method is the Continuum Interpolated Band Ratio (CIBR). This method assumes a baseline and measures the depth of a water vapor feature as compared to this baseline. The second method is the Atmospheric Pre-corrected Differential Absorption technique (APDA); this method accounts for the path radiance contribution to the top of atmosphere radiance measurement, which is increasingly important at lower and lower reflectance values. We have also developed two methods of modifying CIBR. We use a simple curve fitting procedure to account for and remove any systematic errors due to low reflectance while still preserving the random spread of the CIBR values as a function of surface reflectance. We also have developed a two-dimensional look-up table for CIBR; CIBR, using this technique, is a function of both water vapor (as with all CIBR techniques) and surface reflectance. Here we use data recently acquired with the Multi-spectral Thermal Imager spacecraft (MTI) to compare these four methods of determining columnar water vapor content.
Ground-truth collections at the MTI core sites
Alfred J. Garrett, Robert J. Kurzeja, Matthew J. Parker, et al.
The Savannah River Technology Center (SRTC) selected 13 sites across the continental US and one site in the western Pacific to serve as the primary, or core, sites for collection of ground truth data for validation of MTI science algorithms. Imagery and ground truth data from several of these sites are presented in this paper. These sites are the Comanche Peak, Pilgrim, and Turkey Point power plants, the Ivanpah playas, Crater Lake, Stennis Space Center, and the Tropical Western Pacific ARM site on the island of Nauru. Ground truth data include water temperatures (bulk and skin), radiometric data, meteorological data, and plant operating data. The organizations that manage these sites assist SRTC with its ground truth data collections and also give the MTI project a variety of ground truth measurements that they make for their own purposes. Collectively, the ground truth data from the 14 core sites constitute a comprehensive database for science algorithm validation.
Multiscale thermal-infrared measurements of the Mauna Loa caldera, Hawaii
Lee K. Balick, Alan R. Gillespie, Elsa Abbott, et al.
Until recently, most thermal infrared measurements of natural scenes have been made at disparate scales, typically 10^-3-10^-2 m (spectra) and 10^2-10^3 m (satellite images), with occasional airborne images (10^1 m) filling the gap. Temperature and emissivity fields are spatially heterogeneous over a similar range of scales, depending on scene composition. A common problem for the land surface, therefore, has been relating field spectral and temperature measurements to satellite data, yet in many cases this is necessary if satellite data are to be interpreted to yield meaningful information about the land surface. Recently, three new satellites with thermal imaging capability at the 10^1-10^2 m scale have been launched: MTI, TERRA, and Landsat 7. MTI acquires multispectral images in the mid-infrared (3-5 micrometers) and longwave infrared (8-10 micrometers) with 20 m resolution. ASTER and MODIS aboard TERRA acquire multispectral longwave images at 90 m and 500-1000 m, respectively, and MODIS also acquires multispectral mid-infrared images. Landsat 7 acquires broadband longwave images at 60 m. As part of an experiment to validate the temperature and thermal emissivity values calculated from MTI and ASTER images, we have targeted the summit region of Mauna Loa for field characterization and near-simultaneous satellite imaging on both daytime and nighttime overpasses, and compare the results to previously acquired 1-m airborne images, ground-level multispectral FLIR images, and the field spectra. Mauna Loa was chosen in large part because the 4 x 6 km summit caldera, flooded with fresh basalt in 1984, appears to be spectrally homogeneous at scales between 10^-1 and 10^2 m, facilitating the comparison of sensed temperatures. The validation results suggest that, with careful atmospheric compensation, it is possible to match ground measurements with measurements from space, and to use the Mauna Loa validation site for cross-comparison of thermal infrared sensors and temperature/emissivity extraction algorithms.
Semi-autonomous registration of satellite imagery using feature fitting
Jody L. Smith, Sheila E. Motomatsu, John G. Taylor, et al.
Interband coregistration of multispectral satellite imagery is essential to exploiting the spectral information inherent in these data. A semi-automatic image registration method has been developed for Multispectral Thermal Imager (MTI) data. This registration method, based on feature fitting within the image, is applicable to the 14 MTI spectral bands that contain ground information; these spectral bands range from 0.45 to 10.7 micrometers. The feature-fitting registration method requires selection of an appropriate scene feature in the image, usually a crossroad or other feature with moderately high contrast, to compute the required shift in x and y for each band. This paper describes the algorithm and provides examples of images registered using this method. Preliminary results show that for MTI image registration, feature fitting yields better results than cross-correlation. Results also show that this algorithm works well for a broad variety of scenes; it has been applied to images with scene content ranging from desert scenes with very little structure to heavily forested scenes. This method has been developed in support of the MTI mission but may easily be extended for use on image data collected by other multispectral sensors.
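A minimal sketch of feature-fitting registration in the spirit described above: a small chip around a high-contrast feature (e.g., a crossroad) is extracted from each band, and the band-to-band shift is estimated from the intensity-weighted centroid of the feature relative to a reference band. The centroid fit is an illustrative choice; the exact fitting function used for MTI is not given here.

```python
# Estimate per-band (dx, dy) shifts from a single high-contrast scene feature.
import numpy as np

def feature_centroid(chip):
    """Intensity-weighted centroid (x, y) of an image chip around the feature."""
    c = chip - chip.min()
    ys, xs = np.mgrid[0:chip.shape[0], 0:chip.shape[1]]
    total = c.sum()
    return (xs * c).sum() / total, (ys * c).sum() / total

def band_shifts(bands, ref_index, window):
    """Shift of each band relative to the reference band within a feature window."""
    r0, r1, c0, c1 = window
    ref_x, ref_y = feature_centroid(bands[ref_index][r0:r1, c0:c1])
    shifts = []
    for band in bands:
        x, y = feature_centroid(band[r0:r1, c0:c1])
        shifts.append((x - ref_x, y - ref_y))
    return shifts
```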
Atmospheric Characterization and Correction
MODTRAN4 version 2: radiative transfer modeling
MODTRAN4, version 2, will soon be released by the U.S. Air Force Geophysics Laboratory; it is an extension of the MODTRAN4, v1, atmospheric transmission, radiance, and flux model developed jointly by the Air Force Research Laboratory / Space Vehicles Directorate (AFRL/VS) and Spectral Sciences, Inc. The primary accuracy improvements in MODTRAN4 remain those previously published: (1) the multiple-scattering correlated-k approach, which describes the statistically expected transmittance properties for each spectral bin and atmospheric layer, and (2) the Beer-Lambert formulation, which improves the treatment of path inhomogeneities. Version 2 code enhancements are expected to include: (1) pressure-dependent atmospheric profile input, as an auxiliary option in which the hydrostatic equation is integrated explicitly to compute the altitudes; (2) CFC cross-sections with band model parameters derived from pseudo-lines; (3) additional pressure-induced absorption features from O2; and (4) a new 5 cm^-1 band model option. Prior code enhancements include the incorporation of solar azimuth dependence in the DISORT-based multiple scattering model, the introduction of surface BRDF (bidirectional reflectance distribution function) models, and a 15 cm^-1 band model for improved computational speed. Last year's changes to the HITRAN database, relevant to the 0.94 and 1.13 micrometer bands of water vapor, have been maintained in the MODTRAN4, v2 databases.
Shadow-insensitive material detection/classification with atmospherically corrected hyperspectral imagery
Steven M. Adler-Golden, Robert Y. Levine, Michael W. Matthew, et al.
Shadow-insensitive detection or classification of surface materials in atmospherically corrected hyperspectral imagery can be achieved by expressing the reflectance spectrum as a linear combination of spectra that correspond to illumination by the direct sun and by the sky. Some specific algorithms and applications are illustrated using HYperspectral Digital Imagery Collection Experiment (HYDICE) data.
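A minimal sketch of that decomposition, assuming synthetic placeholder spectra rather than HYDICE library signatures: after atmospheric correction, a pixel containing the target is modeled as a non-negative combination of the target spectrum as seen under full (sun plus sky) illumination and under sky-only illumination, so a shadowed pixel stays within the same two-dimensional subspace instead of leaving it.

```python
# Score pixels by how well a sun/sky illumination subspace explains them.
import numpy as np
from scipy.optimize import nnls

bands = 60
wl = np.linspace(0.4, 2.5, bands)
target = 0.35 + 0.15 * np.sin(4.0 * wl)           # placeholder target reflectance
sky_weight = 0.2 * np.exp(-2.0 * (wl - 0.4))      # sky-only illumination, bluer
s_full = target                                   # retrieved reflectance, sunlit pixel
s_sky = target * sky_weight                       # retrieved reflectance, fully shadowed

def shadow_insensitive_score(pixel, s_full=s_full, s_sky=s_sky):
    """Fraction of pixel energy explained by the two-column illumination subspace."""
    A = np.column_stack([s_full, s_sky])
    coeffs, _ = nnls(A, pixel)                    # non-negative illumination mixture
    residual = pixel - A @ coeffs
    return 1.0 - np.dot(residual, residual) / np.dot(pixel, pixel)

shadowed_pixel = 0.4 * s_full + 0.6 * s_sky + 0.005 * np.random.randn(bands)
print(shadow_insensitive_score(shadowed_pixel))   # near 1 despite heavy shadowing
```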
MOD3D: a model for incorporating MODTRAN radiative transfer into 3D simulations
Alexander Berk, Gail P. Anderson, Brett N. Gossage
MOD3D, a rapid and accurate radiative transport algorithm, is being developed for application to 3D simulations. MOD3D couples to optical property databases generated by the MODTRAN4 Correlated-k (CK) band model algorithm. The Beer's Law dependence of the CK algorithm provides for proper coupling of illumination and line-of-sight paths. Full 3D spatial effects are modeled by scaling and interpolating optical data to local conditions. A C++ version of MOD3D has been integrated into JMASS for calculation of path transmittances, thermal emission and single scatter solar radiation. Results from initial validation efforts are presented.
Spectral Applications and Methodology IV
Landcover change over central Virginia: comparison of endmember fractions in hyperspectral data
Stefanie Tompkins, Kellie McNaron-Brown, Jessica M. Sunshine, et al.
A spectral mixture analysis (SMA) based change detection approach has been applied to hyperspectral image (HSI) data collected by the HyMap sensor. As a first step in extending this approach from multispectral to HSI data, an HSI change pair covering a forested region in central Virginia in the fall of 1999 and 2000 was modeled via SMA as a linear combination of three main endmember materials: green vegetation, non-photosynthetic vegetation, and shade. The fractional abundance images resulting from the SMA are compared quantitatively to assess the level of detail with which change can be detected and understood from the HSI data. Alternatives to the simple three-endmember SMA solution are also discussed, including the use of additional endmembers to account for seasonal change or multiple vegetation species. The utility of SMA-based change detection for mapping subpixel changes in materials is demonstrated, as is its increased interpretability over traditional change detection approaches.
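A sketch of SMA-based change detection in this spirit, assuming endmember spectra are supplied and using a simple sum-to-one least-squares unmixing as an illustration (not the authors' specific solver): each pixel in the two dates is unmixed against green vegetation, non-photosynthetic vegetation, and shade, and change is assessed in fraction space.

```python
# Unmix two co-registered cubes and difference the endmember fractions.
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares endmember fractions with a sum-to-one constraint appended."""
    n = endmembers.shape[1]
    A = np.vstack([endmembers, np.ones((1, n))])      # (bands + 1, n_endmembers)
    b = np.append(pixel, 1.0)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions

def fraction_change(cube_t1, cube_t2, endmembers):
    """Per-pixel change in endmember fractions between two dates."""
    rows, cols, _ = cube_t1.shape
    delta = np.zeros((rows, cols, endmembers.shape[1]))
    for r in range(rows):
        for c in range(cols):
            delta[r, c] = unmix(cube_t2[r, c], endmembers) - unmix(cube_t1[r, c], endmembers)
    return delta   # e.g., a large negative green-vegetation delta flags canopy loss
```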
Optical profiles for the lower James River estuary and nontidal headwater reaches of the James River
John E. Anderson, Melvin B. Satterwhite
Spectral reflectance measurements were acquired at various viewing angles at three sites along the James River representing both tidal and non-tidal waters. The upper James River reaches were characterized by optically clear waters, and the spectral measurements there represented bottom substrates. In contrast, the lower James River sites were characterized by turbid waters with high suspended sediment and algal chlorophyll. Concurrent pyranometer measurements showed the maximum downwelling radiation occurring from 1030 to 1330 local sun time. During this period, two strategies emerged for consideration when collecting water column reflectance data. In optically clear waters, statistical analysis using the variance of the 575 nm waveband (as a reference) showed that a nadir viewing angle (90 degrees) and an upsun (+30 degrees) off-axis viewing angle were the most effective at characterizing the bottom substrates. The conclusion drawn for optically clear waters was that, independent of sun angle, nadir positioning of the sensor optics is critical, but confident measurements can still be acquired up to +30 degrees off axis. In contrast, the lower James River (turbid) sites showed no correlation between the nadir and off-axis measurements using the variance of the 680 nm chlorophyll absorption line, leading to the conclusion that reflectance data are best acquired at nadir viewing angles for highly turbid waters. These findings have implications for both non-imaging and imaging remote sensor data.
Use of inherent optical properties for determination of water quality
Khiruddin Abdullah, Mohd Zubir Mat Jafri, Zubir Bin Din
An attempt to estimate water quality parameters using Thematic Mapper (TM) data has been carried out in the coastal waters of Penang. The water quality parameters selected were total suspended solids (TSS) and chlorophyll concentrations. The algorithm used is based on a reflectance model that is a function of the inherent optical properties of water, which in turn can be related to the concentrations of its constituents. A multiple regression algorithm was derived using multiband data for retrieval of each water constituent. The digital numbers coinciding with the sea-truth locations were extracted and converted to radiance and exoatmospheric reflectance units. Solar angle and atmospheric corrections were performed on the data sets, and the data were combined for multi-date regression analysis. The efficiency of the present algorithm relative to other forms of algorithms was also investigated. Based on the correlation coefficients and root-mean-square deviations with respect to the sea-truth data, the results indicated the superiority of the proposed algorithm. The solar-corrected data gave good results, with accuracy comparable to that of the atmospherically corrected data. The calibrated TSS and chlorophyll algorithms were employed to generate water quality maps.
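An illustrative sketch of a multiband regression retrieval of this kind: reflectances from several TM bands (and a simple band-ratio term) are regressed against coincident sea-truth concentrations, and the fitted model is then applied to every image pixel. The band indices and predictor terms are assumptions for illustration, not the authors' calibrated algorithm.

```python
# Fit and apply a multiband regression for a water quality constituent.
import numpy as np

def fit_water_quality(reflectance_bands, sea_truth):
    """Least-squares fit of concentration = b0 + sum_i b_i * R_i + b_ratio * (R2/R3)."""
    R = np.asarray(reflectance_bands, dtype=float)           # (n_samples, n_bands)
    ratio = (R[:, 1] / (R[:, 2] + 1e-6))[:, None]            # illustrative band ratio
    X = np.hstack([np.ones((R.shape[0], 1)), R, ratio])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(sea_truth, float), rcond=None)
    return coeffs

def apply_model(coeffs, image_bands):
    """Map the fitted regression over an image of shape (rows, cols, n_bands)."""
    rows, cols, nb = image_bands.shape
    R = image_bands.reshape(-1, nb)
    ratio = (R[:, 1] / (R[:, 2] + 1e-6))[:, None]
    X = np.hstack([np.ones((R.shape[0], 1)), R, ratio])
    return (X @ coeffs).reshape(rows, cols)
```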
Imaging Spectrometry Projects II
Optimization and characterization of an imaging Hadamard spectrometer
Christine M. Wehlburg, Joseph C. Wehlburg, Stephen M. Gentry, et al.
Hadamard Transform Spectrometer (HTS) approaches share the multiplexing advantages found in Fourier transform spectrometers. Interest in Hadamard systems has been limited by data storage and computational limitations and by the inability to perform accurate high-order masking in a reasonable amount of time. Advances in digital micro-mirror array (DMA) technology have opened the door to implementing an HTS for a variety of applications, including fluorescent microscope imaging and Raman imaging. A Hadamard transform spectral imager (HTSI) for remote sensing offers a variety of unique capabilities in one package, such as variable spectral and temporal resolution, no moving parts (other than the micro-mirrors), and vibrational insensitivity. An HTSI for remote sensing using a Texas Instruments digital micro-mirror device (DMD) is being designed for use in the 1.25-2.5 micrometer spectral region. To optimize and characterize the system, an HTSI sensor system simulation has been developed concurrently. The design specifications and hardware components for the HTSI are presented together with results calculated by the HTSI simulation that include the effects of digital (vs. analog) scene data input, detector noise, DMD rejection ratios, multiple diffraction orders, and multiple Hadamard mask orders.
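A sketch of the Hadamard-multiplexing principle behind such a system: each measurement sums roughly half of the spectral channels through an S-matrix mask (here built from a Sylvester Hadamard matrix), and the spectrum is recovered by inverting the mask set. DMD-specific effects (rejection ratio, diffraction orders) modeled in the paper's simulation are not included.

```python
# Encode a toy spectrum through S-matrix masks and decode it again.
import numpy as np
from scipy.linalg import hadamard

def s_matrix(order_plus_one):
    """S-matrix of order n = order_plus_one - 1 from a Sylvester Hadamard matrix."""
    H = hadamard(order_plus_one)             # order_plus_one must be a power of two
    return (H[1:, 1:] < 0).astype(float)     # drop first row/col, map -1 -> open (1)

n = 63
S = s_matrix(n + 1)
spectrum = np.exp(-0.5 * ((np.arange(n) - 20.0) / 4.0) ** 2)   # toy emission line

# Encoding: each detector read-out is one masked sum, with a little read noise.
rng = np.random.default_rng(0)
measurements = S @ spectrum + 0.01 * rng.standard_normal(n)

# Decoding: invert the mask matrix (the multiplex advantage comes from averaging
# the read noise over ~n/2 open mirrors per measurement).
recovered = np.linalg.solve(S, measurements)
print(np.max(np.abs(recovered - spectrum)))
```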
Effects of temporally changing sources on Fourier transform spectrometers
A Michelson Fourier Transform Spectrometer senses an object/material in the time domain, producing an interferogram. To produce a spectrum, the interferogram is Fourier transformed into the spectral domain. Unless filtering is applied to the interferogram, all the time-changing (AC) components of the interferogram contribute to the resulting spectrum. Aperiodic signals are not easily removed from the interferogram and, when transformed, result in false spectral features. Possible sources of real-world aperiodic signals are discussed and their effects on the resulting transformed spectra are demonstrated. Mitigation and avoidance techniques for some of the more common real-world aperiodic signals are discussed.
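A small numerical illustration of this effect, using arbitrary numbers: a brief aperiodic change in source intensity during the scan, once Fourier transformed, spreads energy away from the true spectral line and produces false features.

```python
# Show how an aperiodic source fluctuation corrupts the transformed spectrum.
import numpy as np

n = 4096
opd = np.linspace(-1.0, 1.0, n)                  # optical path difference [cm]
sigma = 800.0                                    # source wavenumber [cm^-1]
interferogram = np.cos(2 * np.pi * sigma * opd)  # ideal monochromatic interferogram

# Aperiodic disturbance: the source brightens briefly during part of the scan.
disturbed = interferogram * (1.0 + 0.3 * ((opd > 0.2) & (opd < 0.3)))

spectrum_clean = np.abs(np.fft.rfft(interferogram))
spectrum_bad = np.abs(np.fft.rfft(disturbed))
# The difference shows energy scattered away from the true line at 800 cm^-1.
print(np.argmax(spectrum_clean), np.max(np.abs(spectrum_bad - spectrum_clean)))
```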
High-throughput dispersive imaging spectrometer for astronomy at visible wavelengths
In this paper the design of a high-throughput imaging spectrometer for use in astronomical applications is proposed and investigated. The method of spectral separation used in this new design does not rely on dispersion through a grating or on path length differences in an interferometer. Instead, the chromatic aberration of a common lens is used to process the input scene through many different spectral point-spread functions, producing a collection of broadband images. The spectral separation is achieved through a spectral/spatial recovery algorithm. Because there is no grating to block the light and no beam-splitter to reflect it back out of the telescope, a system built on this principle of spectral separation can achieve a throughput in excess of 90%. The spectral diversity in the point-spread functions is achieved by changing the distance between the lenses and the detector in the imaging system, which produces a different chromatic aberration for each measurement. From a set of measurements taken with different chromatic aberrations, a spectral/spatial recovery algorithm is developed that is capable of extracting spectral information from a set of broadband images processed through different realizations of chromatic aberration. A point design for this spectral separation concept is presented and tested through simulation. The algorithm is tested using simulated scene data from the point design for a small extended object and a point source.
Methodologies and protocols for the collection of midwave and longwave infrared emissivity spectra using a portable field spectrometer
The development of highly portable field devices for measuring midwave and longwave infrared emissivity spectra has greatly enhanced the ability of scientists to develop and verify exploitation algorithms designed to operate in these spectral regions. These data, however, need to be collected properly in order to prove useful once the scientists return from the field. Attention to the removal of environmental factors, such as reflected downwelling atmospheric and background radiance, from the measured signal is of paramount importance. Proper separation of temperature and spectral emissivity is also a key factor in obtaining spectra of accurate shape and magnitude. A complete description of the physics governing the collection of field spectral emissivity data will be presented along with the assumptions necessary to obtain useful sample signatures. A detailed look at an example field collection device will be presented, and the limitations and considerations in using such a device will be scrutinized. Attention will be drawn to the quality that can be expected from field measurements and the limitations in their use that must be endured.
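A hedged sketch of the radiometric relationship underlying such reductions, for a short measurement path: measured radiance = emissivity x Planck(T) + (1 - emissivity) x reflected downwelling radiance, which can be inverted band by band once the sample temperature and downwelling term are known. Instrument response and path effects are ignored; this is not the paper's specific reduction procedure.

```python
# Invert the short-path measurement equation for spectral emissivity.
import numpy as np

H = 6.62607015e-34; C = 2.99792458e8; KB = 1.380649e-23

def planck_radiance(wavelength_um, temp_k):
    """Blackbody spectral radiance [W / (m^2 sr um)]."""
    lam = wavelength_um * 1e-6
    B = (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * temp_k)) - 1.0)
    return B * 1e-6                           # per metre -> per micrometre

def emissivity(wavelength_um, L_measured, L_downwelling, temp_k):
    """Emissivity from measured radiance, downwelling radiance, and sample temperature."""
    B = planck_radiance(wavelength_um, temp_k)
    return (L_measured - L_downwelling) / (B - L_downwelling)
```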
Comparison of field- and laboratory-collected midwave and longwave infrared emissivity spectra/data reduction techniques
Many targets that remote sensing scientists encounter when conducting research experiments do not lend themselves to laboratory measurement of their surface optical properties. Removing these targets from the field can change their biotic condition, disturb their surface composition, and change the moisture content of the sample. These parameters, among numerous others, have a marked influence on surface optical properties such as spectral and bi-directional emissivity, which necessitates the collection of emissivity spectra in the field. Numerous devices for measuring midwave and longwave emissivity in the field have appeared in recent years. How good are these devices, and how does the accuracy of the spectra they produce compare to the tried-and-true laboratory devices that have been around for decades? A number of temperature/emissivity separation algorithms will be demonstrated on data collected with a field-portable Fourier transform infrared (FTIR) spectrometer, and their merits and resulting accuracy will be compared to laboratory spectra of these identical samples. A brief look at off-nadir viewing geometries will also be presented to alert scientists to the possible sources of error that may arise when sensing systems do not look straight down on targets or when a nadir-looking sensor views a tilted target.
Detection and Identification III
Recognizing 3D objects in hyperspectral images under unknown conditions
We present models and algorithms for recognizing 3D objects in airborne 0.4-2.5 micron hyperspectral images acquired under unknown conditions. Objects of interest exhibit complex geometries with surfaces of different materials. The DIRSIG image generation software is used to build spatial/spectral subspace models for the objects that capture a range of atmospheric and illumination conditions and viewing geometries. Since we consider scales at which multiple materials mix within a pixel, the object subspace models also account for spectral mixing. An important aspect of the work is the use of methods for partitioning object subspaces to optimize performance. The new algorithms have been evaluated using hyperspectral data synthesized for a range of conditions.
Interference subspace projection approach to subpixel target detection
A hyperspectral imaging spectrometer can reveal and uncover targets using a small range of diagnostic wavelengths. Unfortunately, it also extracts unknown signal sources such as background and natural signatures, i.e., interferers, which cannot be identified a priori. It has been shown that interference generally plays a more dominant role than noise. In order to address this issue, the standard signal/noise model is extended to a model that considers signals, interferers, and noise as three separate information sources. Since the interferers considered here can be interpreted as signal sources that are not targets of interest, they include undesired target signals and background sources. With this interpretation, the recently reported signal/background/noise model can be treated as a special case of the proposed signal/interference/noise (SIN) model. Using this SIN model, an interference subspace projection (ISP)-based detection method is developed along with generalized likelihood ratio test (GLRT) and constrained energy minimization (CEM) detectors. A comparative study is conducted to evaluate their performance.
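A minimal sketch of an interference-subspace-projection detector consistent with the SIN idea above: pixels are first projected onto the orthogonal complement of an assumed interference subspace, and an adaptive matched filter for the desired signature is applied to the projected data. This is a generic construction under those assumptions, not necessarily the exact estimator derived in the paper.

```python
# Project out an interference subspace, then apply an adaptive matched filter.
import numpy as np

def isp_matched_filter(pixels, target, interference):
    """pixels: (n, bands); target: (bands,); interference: (bands, q) column basis."""
    U = interference
    P_perp = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)    # annihilates interference
    t = P_perp @ target                                    # projected target signature
    X = pixels @ P_perp.T                                  # project every pixel
    Sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(U.shape[0])  # regularized covariance
    w = np.linalg.solve(Sigma, t)                          # adaptive matched filter
    return (X @ w) / (t @ w)                               # normalized detection scores
```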
Detection of constrained signals
The problem considered in this paper is the detection of targets in a multispectral image. One of the difficulties encountered in this problem is the fact that the abundances of the observed signals are unknown. The generalized likelihood ratio test (GLRT) is often used in detection problems such as this one. The GLRT replaces the unknown parameters, in this case the signal abundances, with maximum likelihood estimates (MLEs) of those parameters. In general, the GLRT is not an optimal test. It is argued that for the signal model in this paper, constrained least squares (CLS) estimates of the unknown parameters are more appropriate than MLEs. A hypothesis test called the constrained multirank signal detector (CMSD) is derived using CLS estimates of the signal abundances. The performance of this test is calculated and is compared to the performance of the GLRT derived for the same signal model.
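A small sketch contrasting the two abundance estimates discussed above for the linear model x = S a + noise: the unconstrained least-squares estimate (the maximum likelihood estimate under white Gaussian noise) versus a constrained least-squares estimate with non-negative abundances. The specific constraints used in the paper and the CMSD statistic itself are not reproduced here; the residual-energy score below is only illustrative.

```python
# Compare unconstrained and non-negative least-squares abundance estimates.
import numpy as np
from scipy.optimize import nnls

def abundance_estimates(x, S):
    """Return (unconstrained LS estimate, non-negative CLS estimate) of abundances."""
    a_ls, *_ = np.linalg.lstsq(S, x, rcond=None)
    a_cls, _ = nnls(S, x)
    return a_ls, a_cls

def detection_statistic(x, S, a):
    """Residual-energy score: fraction of pixel energy explained by the fitted signals."""
    r = x - S @ a
    return 1.0 - (r @ r) / (x @ x)
```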
Posters
Band sharpening technique for multiresolution spectral data sets using regression residuals
Virgil S. Lewis
This paper proposes a band sharpening technique for data sets with multiple bands of data at a fine resolution and one or more bands of data at a coarse resolution. A linear prediction model of the coarse-resolution data is calculated from the fine-resolution data, along with its associated residual data. A series of smoothing filters was applied to this residual data and added back into the output of the linear predictor for the final result, which was then compared to the original input data in a preliminary exploratory analysis. The most effective smoothing filter appears to be a median filter of order n+1 (with n being the nearest integer to the ratio of the coarse resolution to the fine resolution). Initial radiometric comparisons are also presented.
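A sketch of this band-sharpening procedure under the simplifying assumption that the coarse band has already been resampled onto the fine grid: predict the coarse band linearly from the fine-resolution bands, median-filter the regression residual, and add the filtered residual back to the prediction. The filter order follows the n+1 rule quoted in the abstract; other details are assumptions.

```python
# Band sharpening: linear prediction from fine bands plus a median-filtered residual.
import numpy as np
from scipy.ndimage import median_filter

def sharpen(coarse_band, fine_bands, resolution_ratio):
    """coarse_band: (rows, cols) resampled to fine grid; fine_bands: (rows, cols, k)."""
    rows, cols, k = fine_bands.shape
    X = np.hstack([np.ones((rows * cols, 1)), fine_bands.reshape(-1, k)])
    y = coarse_band.reshape(-1)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)     # linear prediction model
    prediction = (X @ coeffs).reshape(rows, cols)
    residual = coarse_band - prediction
    n = int(round(resolution_ratio))
    smoothed = median_filter(residual, size=n + 1)     # order n+1 median filter
    return prediction + smoothed
```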