Proceedings Volume 4480

Imaging Spectrometry VII

Michael R. Descour, Sylvia S. Shen
View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 17 January 2002
Contents: 10 Sessions, 43 Papers, 0 Presentations
Conference: International Symposium on Optical Science and Technology 2001
Volume Number: 4480

Table of Contents

  • Instrumentation Design Principles
  • Algorithms
  • Reconfigurable Computing in Imaging Spectrometry
  • Algorithms and Applications
  • Fourier-Transform Spectrometers in Spectral Sensing
  • Hyperion Imaging Spectrometer Mission
  • Multispectral Thermal Imaging
  • Acquisition and Interpretation of Hyperspectral Data
  • Advanced and Unconventional Sensors
  • Poster Session
  • Advanced and Unconventional Sensors
Instrumentation Design Principles
Infrared hyperspectral imaging Fourier transform and dispersive spectrometers: comparison of signal-to-noise-based performance
Lee W. Schumann, Terrence S. Lomheim
We compare the signal-to-noise ratio (SNR) based performance of scanning dispersive and staring (Michelson interferometer based) Fourier transform spectrometer approaches to hyperspectral infrared imaging. The information collected by the focal planes of these two spectrometer types is very different. In the scanning dispersive spectrometer, the 2-D focal plane array (FPA) collects spatial information in one direction and spectral information in the other, while the second spatial dimension is swept out in time by the scanning mechanism. The Fourier transform spectrometer 2-D focal plane array acquires 2-D spatial information while the spectrum (interferogram) is swept out in time. The formation of the signal and propagation of noise from the focal plane data collection to the final hyperspectral data cube are significantly different for each type of instrument, and distinct signal and noise models apply to each. Using the noise equivalent spectral radiance (NESR) as a figure of merit, we compare the performance of the two types of instrument with the same optical system, equal spatial resolution, image area, spectral resolution, spectral span, and imaging time. Under photon noise limited conditions (for each instrument), which are made possible by the proper design of modern infrared focal plane arrays, the traditional multiplex (Fellgett) and throughput (Jacquinot) advantages associated with non-imaging Fourier transform spectrometers vanish for reasons that will be made clear in the paper. We discuss 2-D focal plane array characteristics for each instrument, such as frame rate, dynamic range, etc., and show why they are so different (by design) between these two instruments. Finally, the consequences of the signal processing requirements implied by the two instruments are summarized.
Processing inputs to the design and development of a hyperspectral sensor
Many new imaging spectrometers have been developed over the past several years that use a two-dimensional detector array to simultaneously record the spectra for a line of points on the ground. The second spatial dimension is built up over time by motion of the sensor. The motion of the sensor can be in the direction of the platform motion or at right angles to it. Since a large number of spectra are obtained simultaneously, the instantaneous data rate can be much higher than that achieved with a flying spot scanner. The result can be both greater angular coverage and higher spatial resolution at the same signal level. There are many consequences of using this type of sensor. The two-dimensional design of the optical system and its effect on the data must be considered. In addition, since the same detector is not used to build up the image in each spectral band, there are new sources of pattern noise in the data that are not normally seen in data from a flying spot sensor. In this paper, the effects of some of these design considerations are discussed from the point of view of their impact on classification and anomaly detection. Recommendations for sensor design are made from the processing point of view.
Performance requirements for airborne imaging spectrometers
We present an approach to translate scientific requirements into instrument specifications by using a forward model for generic airborne imaging spectrometers in earth remote sensing. Based on scientific requirements, for each relevant variable detectable using imaging spectroscopy, ground reflectance spectra have been provided by specialists in their field of expertise. Relevant changes to be detected in the observed variable are used to derive critical delta reflectances. Realistic mission scenarios are subsequently combined with these delta reflectances and a radiative transfer code to determine spectral NeΔL values at the sensor level. The combination of various fields of application in terms of detectable variables and the use of realistic mission scenarios leads to various NeΔL levels at given at-sensor radiances. Using this concept, manufacturable specifications can be derived from scientific requirements.
Technique for achieving high throughput with a pushbroom imaging spectrometer
Static Fourier transform spectrometers have the ability to combine the principal advantages of the two traditional techniques used for imaging spectrometry: the throughput advantage offered by Fourier transform spectrometers, and the advantage of no moving parts offered by dispersive spectrometers. The imaging versions of these spectrometers obtain both spectral information and spatial information in one dimension in a single exposure. The second spatial dimension may be obtained by sweeping a narrow field mask across the object while acquiring successive exposures. When employed as a pushbroom sensor from an aircraft or spacecraft, no moving parts are required, since the platform itself provides this motion. But the use of this narrow field mask to obtain the second spatial dimension prevents the throughput advantage from being realized. We present a technique that allows the use of a field stop that is wide in the along-track direction, while preserving the spatial resolution, and thus enables such an instrument to actually exploit the throughput advantage when used as a pushbroom sensor. The basis of this advance is a deconvolution technique we have developed to recover the spatial resolution in data acquired with a field stop that is wide in the along-track direction. The effectiveness is demonstrated by application of this deconvolution technique to simulated data.
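The paper's deconvolution algorithm is not specified in this abstract; as a rough illustration of the underlying idea only, the sketch below removes a known rectangular along-track blur (a stand-in for a wide field stop) from a synthetic 1-D profile with a Wiener filter. The blur width and noise level are assumptions, not values from the paper.

```python
# Illustrative only (not the paper's algorithm): recover along-track detail
# after a known rectangular blur from a wide field stop, via 1-D Wiener
# deconvolution on synthetic data.
import numpy as np

rng = np.random.default_rng(6)
n, stop_width = 256, 9
truth = np.zeros(n); truth[60] = 1.0; truth[130:140] = 0.5    # along-track scene profile

kernel = np.zeros(n); kernel[:stop_width] = 1.0 / stop_width  # wide-field-stop blur
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(kernel)))
blurred += rng.normal(0, 1e-3, n)                             # sensor noise (assumed level)

H = np.fft.fft(kernel)
snr = 1e4                                                      # assumed signal-to-noise power ratio
wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)             # Wiener deconvolution filter
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))
print("residual RMS:", np.sqrt(np.mean((recovered - truth) ** 2)))
```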
Algorithms
Performance comparisons for spectral unmixing algorithms
Spectral unmixing has emerged as a key application arising from the wealth of spectral measurements in hyperspectral processing. Several communities have shown great interest in the decompositional analysis of mixed pixels. Unmixing provides the ability to decompose mixed pixels in terms of distinct, unique substances, and provides a foundation for sub-pixel material identification. We undertake this comparison of unmixing algorithm performance with the knowledge that many algorithms exist, and new methods are constantly being explored and tested. Several disciplines are participating in the attempt to perform unmixing, such as geology, geophysics, engineering, and analytical chemistry.
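For readers unfamiliar with the linear mixing model these algorithms share, the minimal sketch below estimates fractional abundances for one pixel by non-negative least squares with a sum-to-one renormalization; the endmember matrix and pixel spectrum are synthetic placeholders, not data from the paper.

```python
# Minimal sketch of linear spectral unmixing for one pixel (synthetic data).
# Solves  pixel ~= E @ a  with a >= 0, then renormalizes so abundances sum to one.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_endmembers = 50, 3

E = rng.uniform(0.05, 0.9, size=(n_bands, n_endmembers))   # endmember spectra (columns)
true_a = np.array([0.6, 0.3, 0.1])                          # true abundances
pixel = E @ true_a + rng.normal(0, 0.005, n_bands)          # mixed pixel plus noise

a, _ = nnls(E, pixel)          # non-negative least squares
a = a / a.sum()                # simple sum-to-one renormalization
print("estimated abundances:", np.round(a, 3))
```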
Stochastic compositional models applied to subpixel analysis of hyperspectral imagery
Hyperspectral data are often modeled using either a linear mixture or a statistical classification approach. The linear mixture model describes each spectral vector as a constrained linear combination of end-member spectra, whereas the classification approach models each spectrum as a realization of a random vector having one of several normal distributions. In this work we describe a stochastic compositional model that synthesizes these two viewpoints and models each spectrum as a constrained linear combination of random vectors. Maximum likelihood methods of estimating the parameters of the model, assuming normally distributed random vectors, are described, and anomaly and likelihood ratio detection statistics are defined. Detection algorithms derived from the classification, linear mixing, and stochastic compositional models are compared using data consisting of ocean hyperspectral imagery to which the signature of a personal flotation device has been added at pixel fill fractions (PFF) of five and ten percent. These results show that detection algorithms based on the stochastic compositional model may significantly improve detection performance. For example, this study shows that, at a 5% PFF and a probability of detection of 0.8, the false alarm probabilities of anomaly and likelihood detection algorithms based on the stochastic compositional model are more than an order of magnitude lower than the false alarm probabilities of comparable algorithms based on either a linear unmixing algorithm or a Gaussian mixture model.
New approach for hyperspectral mineral exploitation
This paper's objective is to present a new, computationally efficient method for automatic exploration, detection and recognition. The automatic mineral homogeneous region separation algorithm developed by A.U.G. Signals in cooperation with the Canadian Space Agency (CSA) using AVIRIS data and mineral signatures from the Nevada (U.S.) Cuprite site is described. The hyperspectral data and spectral signatures were provided by the Canada Centre for Remote Sensing (CCRS). The algorithm is able to successfully divide the image into regions where the mineral composition remains constant. Hence, it can be used for reducing the noise in estimating the abundance parameters of the minerals on a pixel-by-pixel basis, for image region selection and for hyperspectral image labeling for data storage and/or selective transmission. This may be another form of lossless hyperspectral image compression. Through the presented approach we are able to: a) divide a hyperspectral image into regions of adaptivity where pixel unmixing algorithms are able to extract the abundance parameters with a higher degree of confidence, b) increase the signal-to-noise ratio (SNR) of the spectral signatures present in a region and c) apply the proposed hyperspectral homogeneous region separation for data reduction (hyperspectral image compression). Experimental and theoretical results and comparisons/tradeoff studies are presented.
Automated clustering/segmentation of hyperspectral images based on histogram thresholding
A very simple and fast technique for clustering/segmenting hyperspectral images is described. The technique is based on the histogram of divergence images; namely, single image reductions of the hyperspectral data cube whose values reflect spectral differences. Multi-value thresholds are set from the local extrema of such a histogram. Two methods are identified for combining the information of a pair of divergence images: a dual method of combining thresholds generated from 1D histograms; and a true 2D histogram method. These histogram-based segmentations have a built-in fine to coarse clustering depending on the extent of smoothing of the histogram before determining the extrema. The technique is useful at the fine scale as a powerful single image display summary of a data cube or at the coarser scales as a quick unsupervised classification or a good starting point for an operator-controlled supervised classification. Results will be shown for visible, SWIR, and MWIR hyperspectral imagery.
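As a hedged sketch of the general idea (single divergence image, smoothed histogram, thresholds at local extrema), the code below segments a synthetic cube using the spectral angle to the scene mean as the divergence measure; the smoothing width and choice of divergence image are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of histogram-based segmentation of a single "divergence image"
# (here: spectral angle of each pixel to the scene mean spectrum).
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
cube = rng.random((64, 64, 30))                      # rows x cols x bands (synthetic)

flat = cube.reshape(-1, cube.shape[-1])
mean_spec = flat.mean(axis=0)
cos = (flat @ mean_spec) / (np.linalg.norm(flat, axis=1) * np.linalg.norm(mean_spec))
divergence = np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

hist, edges = np.histogram(divergence, bins=256)
smooth = gaussian_filter1d(hist.astype(float), sigma=5)   # more smoothing -> coarser clustering

# thresholds at local minima of the smoothed histogram
minima = [i for i in range(1, len(smooth) - 1)
          if smooth[i] < smooth[i - 1] and smooth[i] <= smooth[i + 1]]
thresholds = edges[1:][minima]

labels = np.digitize(divergence, thresholds)              # cluster/segment labels per pixel
print("number of clusters:", labels.max() + 1)
```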
Directionally constrained or constrained energy minimization adaptive matched filter: theory and practice
The constrained energy minimization (CEM) algorithm and the closely related matched filter processor have been widely used for target detection in hyperspectral data exploitation applications. In this paper, we look at the key assumptions underlying the derivation of each algorithm and the effect these assumptions have upon their performance. To illuminate and better understand their operation, we compare both algorithms to Fisher's linear discriminant and the quadratic Bayes classifier. The Bayes classifier reduces to Fisher's linear discriminant when the target and background covariances are equal. Furthermore, Fisher's linear discriminant reduces to the matched filter when we look for low-probability targets. These interrelations can be used to justify the use of the matched filter, which was developed for the detection of known signals in additive noise, for hyperspectral targets, which are not corrupted by additive noise. Finally, we investigate under what conditions the output of the matched filter follows a normal distribution.
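As a minimal sketch of the CEM detector discussed here, the code below forms the standard filter w = R^-1 d / (d^T R^-1 d), where R is the sample correlation matrix of the background and d is the target spectrum; the data are synthetic and the target signature is an assumed placeholder.

```python
# Hedged sketch of the constrained energy minimization (CEM) detector on
# synthetic data: w = R^{-1} d / (d^T R^{-1} d).
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_bands = 5000, 40
background = rng.normal(0.3, 0.05, size=(n_pixels, n_bands))  # background pixel spectra
d = rng.uniform(0.2, 0.8, n_bands)                            # assumed target signature

R = background.T @ background / n_pixels                      # sample correlation matrix
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d @ Rinv_d)                                     # CEM filter weights

scores = background @ w                                       # detector output per pixel
print("target response:", d @ w, " mean background response:", scores.mean())
```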
Reconfigurable Computing in Imaging Spectrometry
Co-design of software and hardware to implement remote sensing algorithms
James P. Theiler, Janette R. Frigo, Maya Gokhale, et al.
Both for offline searches through large data archives and for onboard computation at the sensor head, there is a growing need for ever-more rapid processing of remote sensing data. For many algorithms of use in remote sensing, the bulk of the processing takes place in an "inner loop" with a large number of simple operations. For these algorithms, dramatic speedups can often be obtained with specialized hardware. The difficulty and expense of digital design continues to limit applicability of this approach, but the development of new design tools is making this approach more feasible, and some notable successes have been reported. On the other hand, it is often the case that processing can also be accelerated by adopting a more sophisticated algorithm design. Unfortunately, a more sophisticated algorithm is much harder to implement in hardware, so these approaches are often at odds with each other. With careful planning, however, it is sometimes possible to combine software and hardware design in such a way that each complements the other, and the final implementation achieves speedup that would not have been possible with a hardware-only or a software-only solution. We will in particular discuss the co-design of software and hardware to achieve substantial speedup of algorithms for multispectral image segmentation and for endmember identification.
Applying reconfigurable hardware to the analysis of multispectral and hyperspectral imagery
Miriam E. Leeser, Pavle Belanovic, Michael Estlick, et al.
Unsupervised clustering is a powerful technique for processing multispectral and hyperspectral images. Last year, we reported on an implementation of k-means clustering for multispectral images. Our implementation in reconfigurable hardware processed 10-channel multispectral images two orders of magnitude faster than a software implementation of the same algorithm. The advantage of using reconfigurable hardware to accelerate k-means clustering is clear; the disadvantage is that the hardware implementation worked for one specific dataset. It is a non-trivial task to change this implementation to handle a dataset with a different number of spectral channels, bits per spectral channel, or number of pixels, or to change the number of clusters. These changes required knowledge of the hardware design process and could take several days of a designer's time. Since multispectral data sets come in many shapes and sizes, being able to easily change the k-means implementation for these different data sets is important. For this reason, we have developed a parameterized implementation of the k-means algorithm. Our design is parameterized by the number of pixels in an image, the number of channels per pixel, and the number of bits per channel, as well as the number of clusters. These parameters can easily be changed in a few minutes by someone not familiar with the design process. The resulting implementation is very close in performance to the original hardware implementation. It has the added advantage that the parameterized design compiles approximately three times faster than the original.
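For reference, a plain-software version of the k-means clustering that such a hardware design parameterizes (pixel count, channel count, bits per channel, cluster count) might look like the sketch below; the Manhattan distance is a common hardware-friendly choice, but whether this particular design uses it is an assumption.

```python
# Software reference sketch of k-means clustering of multispectral pixels
# (synthetic 10-channel data; L1 distance assumed as a hardware-friendly metric).
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each pixel to the nearest cluster center (L1 distance)
        dists = np.abs(pixels[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers; leave a center unchanged if its cluster is empty
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

pixels = np.random.default_rng(3).integers(0, 256, size=(10000, 10))  # 10-channel image
labels, centers = kmeans(pixels, k=8)
print(np.bincount(labels))
```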
Evolving a spatio-spectral network on reconfigurable computers for multispectral feature identification
Reid B. Porter, Maya Gokhale, Neal R. Harvey, et al.
Feature identification attempts to find algorithms that can consistently separate a feature of interest from the background in the presence of noise and uncertain conditions. This paper describes the development of a high-throughput, reconfigurable-computer-based feature identification system known as POOKA. POOKA is based on a novel spatio-spectral network, which can be optimized with an evolutionary algorithm on a problem-by-problem basis. The reconfigurable computer provides speedup in two places: 1) in the training environment, to accelerate the computationally intensive search for new feature identification algorithms, and 2) in the application of trained networks, to accelerate content-based search in large multispectral image databases. The network is applied to several broad-area features relevant to scene classification. The results are compared to those found with traditional remote sensing techniques as well as an advanced software system known as GENIE. The hardware efficiency and performance gains compared to software are also reported.
Evolving land cover classification algorithms for multispectral and multitemporal imagery
The Cerro Grande/Los Alamos forest fire devastated over 43,000 acres (17,500 ha) of forested land, and destroyed over 200 structures in the town of Los Alamos and the adjoining Los Alamos National Laboratory. The need to measure the continuing impact of the fire on the local environment has led to the application of a number of remote sensing technologies. During and after the fire, remote-sensing data was acquired from a variety of aircraft- and satellite-based sensors, including Landsat 7 Enhanced Thematic Mapper (ETM+). We now report on the application of a machine learning technique to the automated classification of land cover using multi-spectral and multi-temporal imagery. We apply a hybrid genetic programming/supervised classification technique to evolve automatic feature extraction algorithms. We use a software package we have developed at Los Alamos National Laboratory, called GENIE, to carry out this evolution. We use multispectral imagery from the Landsat 7 ETM+ instrument from before, during, and after the wildfire. Using an existing land cover classification based on a 1992 Landsat 5 TM scene for our training data, we evolve algorithms that distinguish a range of land cover categories, and an algorithm to mask out clouds and cloud shadows. We report preliminary results of combining individual classification results using a K-means clustering approach. The details of our evolved classification are compared to the manually produced land-cover classification.
Systolic array for computing the pixel purity index algorithm on hyperspectral images
Dominique Lavenier, Erwan Fabiani, Steven Derrien, et al.
The Pixel Purity Index (PPI) algorithm is used as a pre-processing step to find endmembers in a hyperspectral image. It tries to identify pure spectra by assigning a pixel purity index to each pixel in the image. The algorithm proceeds by generating a large number of random vectors through the hyperspectral image and by computing a dot product between each vector and all the pixels. Since the number of random vectors is high (a few thousand), this algorithm may require hours of computation on standard computers. We present a systolic implementation of the PPI algorithm. It is based on a linear systolic array connected to a host processor through its external I/O bus system. In this scheme, the image is stored in the host processor memory and flushed several times through the array. The performance is mainly dictated by the I/O bus bandwidth and the ability to implement large systolic arrays: the fewer the passes needed through the array, the better the performance. The hardware implementation targets Xilinx Virtex boards, but the specification is independent of the platform: no external memories are required and the architecture works whatever the size of the linear systolic array. Experiments carried out on a low-cost reconfigurable board (a single Xilinx Virtex 800) show a speed-up of two orders of magnitude compared to a software implementation.
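A plain-software sketch of the PPI computation that the systolic array accelerates is shown below: all pixels are projected onto many random unit vectors, and the count of how often each pixel is an extreme of a projection becomes its purity index. The data and the number of random vectors are synthetic placeholders.

```python
# Sketch of the pixel purity index (PPI) on synthetic data: pixels that are
# frequently extreme under random projections are candidate endmembers.
import numpy as np

rng = np.random.default_rng(4)
n_pixels, n_bands, n_skewers = 20000, 50, 2000
pixels = rng.random((n_pixels, n_bands))

ppi = np.zeros(n_pixels, dtype=int)
for _ in range(n_skewers):
    v = rng.normal(size=n_bands)
    v /= np.linalg.norm(v)
    proj = pixels @ v                      # one dot product per pixel (the systolic inner loop)
    ppi[proj.argmin()] += 1
    ppi[proj.argmax()] += 1

candidates = np.argsort(ppi)[-10:]         # most frequently extreme pixels
print("candidate endmember indices:", candidates)
```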
Algorithms and Applications
Automatic feature extraction for panchromatic Mars Global Surveyor Mars Orbiter camera imagery
Catherine S. Plesko, Steven P. Brumby, Conway B. Leovy
The Mars Global Surveyor Mars Orbiter Camera (MOC) has produced tens of thousands of images, which contain a wealth of information about the surface of the planet Mars. Current manual analysis techniques are inadequate for the comprehensive analysis of such a large dataset, while development of handwritten feature extraction algorithms is laborious and expensive. This project investigates application of an automatic feature extraction approach to analysis of the MOC narrow angle panchromatic dataset, using an evolutionary computation software package called GENIE. GENIE uses a genetic algorithm to assemble feature extraction tools from low-level image operators. Each generated tool is evaluated against training data provided by the user. The best tools in each generation are allowed to 'reproduce' to produce the next generation, and the population of tools is permitted to evolve until it converges to a solution or reaches a level of performance specified by the user. Craters are one of the most scientifically interesting and most numerous features in the MOC data set, and present a wide range of shapes at many spatial scales. We now describe preliminary results on development of a crater finder algorithm using the GENIE software.
Correlation analysis of hyperspectral absorption features with the water status of coast live oak leaves
Ruiliang Pu, Shaokui Ge, Nina Maggi Kelly, et al.
A total of 139 reflectance spectra (between 350 and 2500 nm) from coast live oak (Quercus agrifolia) leaves were measured in the laboratory with a FieldSpec Pro FR spectrometer. Correlation analysis was conducted between absorption features and three-band ratio indices derived from the spectra and the corresponding relative water content (RWC, %) of oak leaves. The experimental results indicate that there exist linear relationships between the RWC of oak leaves and absorption feature parameters: wavelength position (WAVE), absorption feature depth (DEP), width (WID) and the product of DEP and WID (AREA) at the 975 nm, 1200 nm and 1750 nm positions, and two three-band ratio indices, RATIO975 and RATIO1200, derived at 975 nm and 1200 nm. AREA has a higher and more stable correlation with RWC compared to other features. It is worth noting that the two three-band ratio indices, RATIO975 and RATIO1200, may have potential application in assessing water status in vegetation.
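A hedged sketch of how absorption-feature parameters of this kind are typically computed is given below: a straight-line continuum is fit across the feature, and the continuum-removed depth (DEP), position (WAVE), width (WID) and AREA = DEP x WID are extracted. The wavelength window, half-depth width definition and synthetic spectrum are assumptions, not the paper's exact procedure.

```python
# Sketch of continuum-removed absorption-feature parameters around 975 nm
# on a synthetic leaf-like reflectance spectrum.
import numpy as np

def absorption_feature(wl, refl, left_nm, right_nm):
    """Return WAVE, DEP, WID and AREA for the feature between two continuum endpoints."""
    sel = (wl >= left_nm) & (wl <= right_nm)
    w, r = wl[sel], refl[sel]
    continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])    # straight-line continuum
    removed = r / continuum
    depth = 1.0 - removed.min()                               # DEP
    wave = w[removed.argmin()]                                # WAVE
    width = (removed < 1.0 - 0.5 * depth).sum() * np.median(np.diff(w))  # WID at half depth
    return wave, depth, width, depth * width                  # WAVE, DEP, WID, AREA

wl = np.arange(350, 2501, 1.0)                                # nm, matching the measured range
refl = 0.5 - 0.15 * np.exp(-0.5 * ((wl - 975) / 30) ** 2)     # synthetic water absorption dip
print(absorption_feature(wl, refl, 900, 1080))
```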
New hyperspectral compression options in JPEG-2000 and their effects on exploitation
Sylvia S. Shen, James H. Kasner, Timothy S. Wilkinson
This paper describes a continuing study effort investigating the impact of hyperspectral compression on the utility of compressed and subsequently reconstructed data. The current study involved the application of new compression options in JPEG-2000 to hyperspectral data and the investigation of their effects on exploitation. Part II of the JPEG-2000 standard (ISO/IEC 15444-2) provides extensions to the baseline JPEG-2000 compression algorithm (ISO/IEC 15444-1) that allow for the compression of hyperspectral data. In this study, the Karhunen-Loeve transform (KLT) was used for spectral decorrelation along with wavelet compression and scalar quantization to encode two HYDICE scenes at five different average bit rates (4.0, 2.0, 1.0, 0.5, 0.25 bits/pixel/band). Part II of the JPEG-2000 standard also introduces the notion of component collections, which may be used to spectrally segment (and spectrally permute) hyperspectral data. Component collections were used in conjunction with the KLT to reduce computational complexity and improve numeric stability. Two exploitation tasks, anomaly detection and material identification, were performed on these compressed and reconstructed data. We report the conventional root-mean-square-error (RMSE) and peak signal-to-noise ratio (PSNR) metrics. We also report the exploitation results to facilitate the determination of an acceptable bit rate for each exploitation task and the comparison amongst different compression algorithms. Comparisons are also made with previously reported results using an earlier version of JPEG-2000 to compress the HYDICE data.
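For orientation, the sketch below shows the spectral KLT step in isolation on a synthetic cube: the eigenvectors of the band covariance matrix rotate each pixel spectrum into decorrelated components that can then be fed to a spatial wavelet coder. Component collections, quantization and the wavelet stage are omitted; this is not the paper's implementation.

```python
# Minimal sketch of Karhunen-Loeve (KLT) spectral decorrelation of a
# synthetic hyperspectral cube, with its exact inverse for verification.
import numpy as np

rng = np.random.default_rng(5)
rows, cols, bands = 64, 64, 210
cube = rng.random((rows, cols, bands))

X = cube.reshape(-1, bands)
mean = X.mean(axis=0)
cov = np.cov(X - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
klt = eigvecs[:, np.argsort(eigvals)[::-1]]       # strongest components first

components = (X - mean) @ klt                     # forward KLT (spectrally decorrelated)
reconstructed = components @ klt.T + mean         # inverse KLT (exact up to round-off)
print("lossless round trip:", np.allclose(reconstructed, X))
```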
Application of principal-components-based invariant display strategy to wide-area hyperspectral data
J. Scott Tyo, Athanasios Konsolakis, David I. Dierson, et al.
A principal components (PC)-based transformation was previously introduced for mapping high-dimensional hyperspectral imagery (HSI) into 3-dimensional colorimetric displays [Tyo, Diersen, and Olsen, SPIE vol. 4132, Descour and Shen, Eds., 2001, pp. 147-156]. In this study, the previous work is extended to examine the conical nature of HSI data in the PC-based space. Picturing the data as conical provides insight as to the location of the origin of the cone (which might not be included in the data) and the point of shade. Once the origin of the cone is located, the PC-based color transformation is more stable with respect to hue constancy. Strategies are introduced to make the method invariant, i.e. to ensure that important scene constituents appear with consistent and intuitive presentations.
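As a hedged illustration of the baseline PC-based display idea (before the conical/origin refinements introduced in this paper), the sketch below projects each pixel onto the first three principal components and stretches them into an RGB image; the cube is synthetic and the stretch is a simple min-max normalization.

```python
# Sketch of a PC-based color display mapping for a synthetic hyperspectral cube:
# first three principal components -> RGB channels.
import numpy as np

rng = np.random.default_rng(7)
rows, cols, bands = 64, 64, 120
cube = rng.random((rows, cols, bands))

X = cube.reshape(-1, bands)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:3].T                                   # first three principal components

lo, hi = pcs.min(axis=0), pcs.max(axis=0)             # stretch each component to [0, 1]
rgb = ((pcs - lo) / (hi - lo)).reshape(rows, cols, 3)
print(rgb.shape, rgb.min(), rgb.max())
```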
Stokes polarimetry at near-infrared 1.56 um for solar observation
The FeI 1.56 micrometer Zeeman-sensitive lines are important and promising for measuring the magnetic field of the deepest layer of the solar photosphere. A new-generation polarimeter has been designed and manufactured for this wavelength range. Using the polarimeter mounted on the vertical spectrograph of the 2 m solar telescope at Kitt Peak, we can observe the Stokes I, Q, U, V parameters simultaneously. The paper introduces the near-infrared polarimeter and presents polarimetry of a sunspot group.
Fourier-Transform Spectrometers in Spectral Sensing
MightySat II.1 hyperspectral imager: summary of on-orbit performance
Summer Yarbrough, Thomas R. Caudill, Eric T. Kouba, et al.
The primary payload on a small satellite, the Air Force Research Laboratory's MightySat II.1, is a spatially modulated Fourier Transform Hyperspectral Imager (FTHSI) designed for terrain classification. The heart of this instrument is a solid-block Sagnac interferometer with 85 cm-1 spectral resolution over the 475 nm to 1050 nm band and 30 m spatial resolution. Coupled with this hyperspectral imager is a Quad-C40 card, used for on-orbit processing. The satellite was launched on 19 July 2000 into a 575 km, 97.8 degree inclination, sun-synchronous orbit. The hyperspectral imager collected its first data set on 1 August 2000, and has been in continuous operation since that time. To the best of our knowledge, the MightySat II.1 sensor is the first true hyperspectral imager to be successfully operated in space. The paper will describe the satellite and instrument, pre-launch calibration results, on-orbit performance, and the calibration process used to characterize the sensor. We will also present data on the projected lifetime of the sensor along with samples of the types of data being collected.
Static Fourier-transform spectrometer based on Savart polariscope
Gao Zhan, Kazuhiko Oka, Tsuyoshi Ishigaki, et al.
To avoid the drawbacks of the conventional Fourier transform spectrometer (CFTS) based on the Michelson interferometer, which needs a scanning system to acquire the interferogram in the temporal domain, the static Fourier transform (StFT) spectrometer has been developed. A new static Fourier-transform imaging spectrometer based on the Savart polariscope is proposed. The novelty of this work is that the instrument is slitless, which means high throughput. The throughput advantage of this instrument over a StFT imaging spectrometer based on a Sagnac interferometer is presented. The principle and the system configuration are described. Several preliminary experimental results are shown.
Calibration techniques for imaging FTIR data
Full utilization of hyper-spectral imagery in the thermal infrared region requires a high-quality calibration, and the calibration quality requirement increases with the power and sophistication of the retrieval algorithms to be employed. Here we examine calibration issues associated with an airborne imaging Michelson Fourier transform infrared (FTIR) spectrometer operating in the long-wave infrared (LWIR) region. In addition to the fundamental challenge of extracting a weak signal of interest from a complex background, problems which arise in such an instrument include pointing jitter, detector non-linearity, sampling position errors and etalon effects within the focal plane array. In each case, a frequent, high-quality calibration can ameliorate these problems. We discuss several hyper-spectral data analysis techniques, and how our calibration strategy, incorporating both ground and on-board calibration systems, improves the sensitivity of the retrievals.
Using state-of-the-art hydrodynamic models to generate synthetic data cubes for imaging spectral sensing applications
Jim Kao, William S. Smith, Barham W. Smith, et al.
An airborne IR hyperspectral imaging sensor based on the Fourier transform spectrometer technique has been used for studying atmospheric gaseous plumes under the auspices of the U.S. Department of Energy. Model-generated synthetic data of the spectral intensity associated with the plume and the surface background are useful for performing trade studies as well as testing new algorithms. To cope with the highly turbulent and transient atmospheric boundary layer in which plume emissions and evolution are embedded, we have used a high-resolution (at the scale of 1 m) time-dependent Navier-Stokes atmospheric hydrodynamic code, HIGRAD, to replace the Gaussian/multi-fractal approach in the original package of the Los Alamos End-to-End Modeling of Imaging Spectral Sensing Applications (EMISSA). The output from HIGRAD is then used for calculations of the radiance reaching the sensor through the Fast Atmospheric Signature Code (FASCODE) with a spectral resolution of 1 cm-1 or less. The modeled plume structure in concentration and the associated plume images in radiance bear great resemblance to the contrast model in their capability for quantifying the column densities of chemical species. The synthetic data produced through our approach prove to be effective in evaluating our understanding of the thermal infrared imaging process.
Hyperion Imaging Spectrometer Mission
On-orbit solar radiometric calibration of the Hyperion instrument
The end-to-end calibration plan for the Hyperion EO-1 hyperspectral payload is presented. The ground calibration is traceable to a set of three high-quantum-efficiency p-n silicon photodiode trap detectors, the responsivities of which are traceable absolutely to solid-state silicon diode physical laws. An independent crosscheck of the radiance of the Calibration Panel Assembly used to flood the Hyperion instrument in field and aperture was made with a transfer radiometer developed at TRW. On-orbit measurements of the sun's irradiance as it illuminates a painted panel inside the instrument cover are compared to the radiance scale developed during pre-flight calibration. In addition, an on-orbit calibration lamp source is observed to trace the pre-flight calibration constants determined on the ground to the solar calibration determination.
Hyperion on-orbit validation of spectral calibration using atmospheric lines and an on-board system
Pamela Barry, John Shepanski, Carol Segal
The Hyperion instrument mounted on the EO-1 spacecraft was launched November 21, 2000 into an orbit following LANDSAT-7 by 1 minute. Hyperion has a 7.5 km swath width, a 30 meter ground resolution and 220 spectral bands. Its spectral bands extend from 400 nm to 2500 nm, with each band having about a 10 nm bandwidth. A unique process to validate the spectral calibration was developed. The process was based on an atmospheric limb collect and supported with a solar calibration collect. The data contained a collection of solar lines, atmospheric lines and absorption lines from the paint that coats the solar calibration reflectance panel. By correlating the positions of these lines with reference data, the center wavelength of each pixel across the field of view has been verified for the VNIR and SWIR spectral regions of the imaging spectrometer. In this paper we discuss the data collection and the technique applied to the VNIR and SWIR focal plane array.
Measurement of Hyperion MTF from on-orbit scenes
Neil R. Nelson, Pamela Barry
The Hyperion instrument was launched November 21, 2000 mounted on the EO-1 spacecraft into orbit 1 minute behind Landsat-7. Hyperion has a 7.5 km swath width, a 30 meter ground sample distance (GSD) and more than 220 spectral bands. Part of the on-orbit characterization involves MTF measurements from several ground scenes. These scenes included edges from the moon and glaciers as well as several bridges. The scenes were processed to determine the MTF for both the Visual Near InfraRed (VNIR) and Short-wave InfraRed (SWIR) imaging spectrometers and were compared to measurements made prior to launch.
Radiometric calibration validation of the Hyperion instrument using ground truth at a site in Lake Frome, Australia
Pamela Barry, Peter J. Jarecke, Jay Pearlman, et al.
The Hyperion instrument mounted on the EO-1 spacecraft was launched November 21, 2000 into an orbit following LANDSAT-7 by 1 minute. Hyperion has a 7.5 km swath width, a 30 meter ground resolution and 10 nm spectral resolution extending from 400 nm to 2500 nm. The first portion of the mission was used to measure and characterize the on-orbit radiometric, spectral, image quality and geometric performance of the instrument. Lake Frome, a dry salt lake in South Australia was chosen as a calibration site for Hyperion. Surface spectral data were collected along a transect through the center of the lake prior to the Hyperion overpass. This paper discusses the incorporation of the Lake Frome ground measurements and analysis into the performance verification of the instrument.
Initial lunar calibration observations by the EO-1 Hyperion imaging spectrometer
Hugh H. Kieffer, Peter J. Jarecke, Jay Pearlman
The Moon provides an exo-atmospheric radiance source that can be used to determine trends in instrument radiometric responsivity with high precision. Lunar observations can also be used for absolute radiometric calibration; knowledge of the radiometric scale will steadily improve through independent study of lunar spectral photometry and with sharing of the Moon as a calibration target by increasing numbers of spacecraft, each with its own calibration history. EO-1 calibration includes periodic observation of the Moon by all three of its instruments. Observations are normally made with a phase angle of about 7 degrees (or about 12 hours from the time of Full Moon). Also, SeaWiFS has been making observations at such phase angles for several years, and observations of the Moon by instrument pairs, even if at different times, can be used to transfer absolute calibration. A challenge for EO-1 is pointing to include the entire full Moon in the narrow Hyperion scan. Three Hyperion observations in early 2001 covering an order-of-magnitude difference in lunar irradiance show good agreement for responsivity; the SWIR detector has undergone some changes in responsivity. Small discrepancies of calibration with wavelength could be smoothed using the Moon as a source. Off-axis scattered light response and cross-track response variations can be assessed using the lunar image.
Aggregation of Hyperion hyperspectral spectral bands into Landsat-7 ETM+ spectral bands
Peter J. Jarecke, Pamela Barry, Jay Pearlman, et al.
The LANDSAT-7 ETM+ spectral bands centered at 479 nm, 561 nm, 661 nm and 834 nm (bands 1, 2, 3, and 4) fall nicely across the Hyperion VNIR hyperspectral response region. They have bandwidths of 67 nm, 78 nm, 60 nm and 120 nm, respectively. The Hyperion spectral bandwidth of 10.2 nm results in 10 to 15 Hyperion spectral samples across each Landsat band in the VNIR. When the Hyperion spectral responses in the 10.2 nm bands are properly weighted to aggregate to a given Landsat band, the radiometric response of the Landsat band can be reproduced by Hyperion. Landsat bands 5 and 7 centered at 1650 and 2207 nm (with bandwidths of 190 and 250 nm respectively) fall in the Hyperion SWIR spectral response region. Hyperion spectral response for one area of a scene in Railroad Valley, NV on May 13, 2001 has been binned into Landsat bands and compared with Landsat values collected at the same time.
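The weighted aggregation described here can be sketched as follows: each narrow Hyperion sample is weighted by the Landsat relative spectral response at its center wavelength and the weighted sum is normalized. The Gaussian-shaped response, the approximate Hyperion wavelength grid and the synthetic spectrum below are placeholders, not the actual ETM+ curves or scene data.

```python
# Sketch of aggregating narrow Hyperion samples into a broad Landsat ETM+ band
# using an assumed (Gaussian) relative spectral response.
import numpy as np

hyp_wl = np.arange(427, 926, 10.2)                         # approximate Hyperion VNIR centers (nm)
hyp_radiance = 50 + 10 * np.sin(hyp_wl / 100.0)            # synthetic at-sensor spectrum

def landsat_rsr(center_nm, width_nm, wl):
    """Placeholder relative spectral response: Gaussian with FWHM = width."""
    sigma = width_nm / 2.355
    return np.exp(-0.5 * ((wl - center_nm) / sigma) ** 2)

rsr = landsat_rsr(561.0, 78.0, hyp_wl)                      # stand-in for ETM+ band 2
band2_equiv = np.sum(rsr * hyp_radiance) / np.sum(rsr)      # weighted aggregation
print("Hyperion-derived band-2 radiance:", round(band2_equiv, 2))
```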
Multispectral Thermal Imaging
Analyses of MTI imagery of power plant thermal discharge
MTI images of thermal discharge from three power plants are analyzed in this paper with the aid of a 3-D hydrodynamic code. The power plants use different methods to dissipate waste heat in the environment: a cooling lake at Comanche Peak, ocean discharge at Pilgrim and cooling canals at Turkey Point. This paper shows that it is possible to reproduce the temperature distributions captured in MTI imagery with accurate code inputs, but the key parameters change from site to site. Wind direction and speed are the most important parameters at Pilgrim, whereas air temperatures are most important at Comanche Peak and Turkey Point. This paper also shows how the combination of high-resolution thermal imagery and hydrodynamic simulation leads to a better understanding of the mechanisms by which waste heat is dissipated in the environment.
Comparison of MTI satellite-derived surface water temperatures and in-situ measurements
Robert J. Kurzeja, Malcolm M. Pendergast, Eliel Villa-Aleman, et al.
Temperatures of the water surface of a cold, mid-latitude lake and the tropical Pacific Ocean were determined from MTI images and from in situ concurrent measurements. In situ measurements were obtained at the time of the MTI image with a floating, anchored platform, which measured the surface and bulk water temperatures and relevant meteorological variables, and also from a boat moving across the target area. Atmospheric profiles were obtained from concurrent radiosonde soundings. Radiances at the satellite were calculated with the Modtran radiative transfer model. The MTI infrared radiances were within 1% of the calculated values at the Pacific Ocean site but were 1-2% different over the mid-latitude lake.
IR temperatures of Mauna Loa caldera obtained with multispectral thermal imager
Malcolm M. Pendergast, Byron Lance O'Steen, Robert J. Kurzeja
A survey of surface temperatures of the Mauna Loa caldera during 7/14/00 and 7/15/00 was made by SRTC in conjunction with an MTI satellite image collection. The general variation of surface temperature appears quite predictable, responding to solar heating. The analysis of detailed time series of temperature indicates systematic variations in temperature of 5 C corresponding to time scales of 3-5 minutes and space scales of 10-20 m. The average temperature patterns are consistent with those predicted by the Regional Atmospheric Modeling System (RAMS).
MTI thermal bands calibration at Ivanpah Playa with a Fourier transform infrared spectrometer
Eliel Villa-Aleman, Robert J. Kurzeja, Malcolm M. Pendergast
The Savannah River Technology Center (SRTC) is currently calibrating the Multispectral Thermal Imager (MTI) satellite sponsored by the Department of Energy. The MTI is a research and development project with 15 wavebands in the 0.45-11.50 micrometer spectral range. The reflective bands of the MTI satellite are calibrated at desert playas such as Ivanpah Playa on the Nevada/California border. The five MTI thermal bands are calibrated with targets of known emissivity and temperature such as power plant heated lakes. In order to accomplish a full calibration at the desert playas, a Fourier transform infrared spectrometer was used to measure soil surface radiance and temperature during the satellite overpass. The results obtained with the mobile FTIR during the ground truth campaign at Ivanpah Playa will be presented.
Determination of the spatial variability of temperature and moisture near a tropical Pacific island with MTI satellite images
Robert J. Kurzeja, Byron Lance O'Steen, Malcolm M. Pendergast
The Tropical Pacific Island of Nauru is a US DOE ARM observation site that monitors tropical climate and atmospheric radiation. This observation site is ideal for validating MTI images because of the extensive deployment of continuously operating instruments. MTI images are also useful in assessing the effect of the island on the ocean climate and on the ARM data. An MTI image has been used to determine the spatial distribution of water vapor and sea-surface temperature near the island. The results are compared with a three-dimensional numerical model simulation.
Acquisition and Interpretation of Hyperspectral Data
Ozone monitoring instrument (OMI)
The Ozone Monitoring Instrument (OMI) is a UV-visible imaging spectrograph using two-dimensional CCD detectors to register both the spectrum and the swath perpendicular to the flight direction. This allows a wide swath (114 degrees) combined with a small ground pixel (nominally 13 x 24 km). The instrument is planned for launch on NASA's EOS-AURA satellite in June 2003. Currently the OMI Flight Model is being built. This shortly follows the Instrument Development Model (DM), which, in addition to engineering purposes, was built to verify the instrument performance. The paper presents measured results from this DM for optical parameters such as distortion, optical efficiency, stray light and polarization sensitivity. Distortion in the spatial direction is shown to be at the sub-pixel level, and the stray light levels are very low and almost free from ghost peaks. The polarization sensitivity is presently demonstrated to be below 10-3, but we aim to lower the detection limit by an order of magnitude to make sure that spectral residuals do not mix with trace gas absorption spectra. Critical detector parameters are presented, such as the very high UV quantum efficiency (60% at 270 nm), dark current behavior and the sensitivity to radiation.
Optical sensor package for multiangle measurements of surface reflectance
The Remote Sensing Group of the Optical Sciences Center at the University of Arizona has performed the vicarious calibration of satellite sensors since the 1980s. Ground-based measurements of atmospheric and surface properties, including the surface bidirectional reflectance distribution function (BRDF), are conducted during a satellite or airborne sensor overpass, and the at-sensor radiance is calculated using these properties as input to a radiative transfer code. Recently, the Remote Sensing Group has investigated an imaging radiometer based on an astronomical-grade 1024 x 1024-pixel silicon CCD array that was developed and calibrated for ground-based measurements of BRDF. The results of that study have been used to examine the feasibility of a lightweight instrument package for measurement of surface BRDF based on a combination of nonimaging radiometers and inexpensive digital cameras. The current work presents a preliminary design of such a system, including specifications for ground-based operations of the system to characterize the BRDF of test sites used by the Remote Sensing Group. Also included is a preliminary evaluation of a Nikon 990 digital camera coupled with a 1.7-mm focal length fisheye lens to determine the level of accuracy that can be obtained in surface BRDF.
Subpixel analysis of a double array grating spectrometer
High-resolution images can be extracted from a set of differently sampled low-resolution images. Usually such a set of images is generated by shifting a detector array device by fractions of a pixel or by moving a whole optical system in an appropriate way. Subpixel information is then encoded in the set of acquired images. Another way to generate encoded subpixel information is presented in this paper. Multiple fixed images are generated in the optical part of the detector device. The latter method is the method of choice for grating diode array spectrometers. A programmable entrance slit array (MEMS device, mechanical slit positioning system), which replaces the conventional single entrance slit, generates multiple undersampled images of the same spectrum. Every slit is imaged with a different, wavelength-dependent imaging scale ratio and a different wavelength-aberration dependency. The subpixel analysis has to take this into account. It is accomplished by a shift-variant superresolution algorithm, a representation of the spectrometer's optical properties and a calibration algorithm which estimates these properties from measured known gas emission spectra. The superresolution algorithm itself is nonlinear and therefore capable of recovering data lost by aberration and pixel integration. An algorithm for subpixel analysis is developed and tested. Theoretical and experimental approaches of the subpixel analysis are presented. The method is proven experimentally on a double array spectrometer. The resolution can be increased by up to a factor of 7 with seven entrance slits.
Advanced and Unconventional Sensors
Hyperspectral sensor test bed for real-time algorithm evaluation
Hyperspectral imaging has proved to be a valuable tool for performing material based discrimination of targets in highly cluttered backgrounds. A next step for utilizing this technology is to integrate spectral and spatial discrimination algorithms for Autonomous Target Recognition (ATR) applications. This paper describes a hardware and software testbed system for performing spectral/spatial ATR and presents initial results from a field test in the Anza-Borrego desert.
Midwave-infrared snapshot imaging spectrometer
Curtis Earl Volin, John Phillips Garcia, Eustace L. Dereniak, et al.
We report results from a demonstration of a midwave-infrared non-scanning, high-speed imaging spectrometer capable of simultaneously recording spatial and spectral data from a rapidly varying target scene. High-speed spectral imaging was demonstrated by collecting spectral and spatial snapshots of blackbody targets and combustion products. The instrument is based on computed tomography concepts and operates in a mid-wave infrared band of 3.0 to 5.0 micrometers. Raw images were recorded at a frame rate of 60 fps using a 512 x 512 InSb focal plane array. Reconstructed object cube estimates were sampled at 46 x 46 x 21 (x, y, lambda) elements, or 0.1 micrometer spectral sampling. Reconstructions of several objects are presented.
Integral field and multi-object spectrometry with MEMS
Multi-object spectrometers measure spectra of multiple objects simultaneously. Besides other approaches, e.g. fiber positioning systems, there is a class of multi-object spectrometers based on dispersing imaging optics in connection with slit masks. Two approaches considered for reconfigurable slit masks are two-dimensional MEMS arrays, such as micro-mirror or micro-shutter arrays, and slit positioning devices. After an introduction to multi-object spectrometry with dispersing imaging optics, we calculate the effective multiplex capabilities of multi-object spectrometers based on 2D MEMS and on slit positioning devices for randomly distributed objects. The observation efficiency of multi-object spectrometers based on 2D MEMS is compared to that of integral field spectrometers and of multi-object spectrometers based on slit positioning devices. We find that for typical applications the efficiency of the slit positioning approach is nearly as good as the efficiency of the 2D MEMS approach. This makes slit positioning systems a serious alternative to 2D MEMS devices as long as they remain easier to obtain.
Design of broadband-optimized computer-generated hologram dispersers for the computed-tomography imaging spectrometer
This paper describes an algorithm based on the singular-value decomposition (SVD) that converges to a solution for a computer-generated-hologram (CGH) disperser from a random-phase starting diffuser. In this paper, we report on the application of this algorithm to the design of two-dimensional, surface-relief CGH dispersers for use in the Computed-Tomography Imaging Spectrometer (CTIS). The designed CGHs produce the desired diffraction images at five wavelengths through a 1:1.67 wavelength band. Performance results are presented for a demonstration CGH designed by the SVD algorithm and fabricated in GaAs for use in the mid-wave infrared CTIS.
Poster Session
Fiber optic sensors for an in-situ monitoring of moisture and pH value in reinforced concrete
Walter Grahn, Pavel Makedonski, Juergen Wichern, et al.
Concrete structures such as social buildings and bridges are important economic goods. Thus, maintenance and preservation of these structures are of major interest. Buildings of reinforced concrete are exposed to a variety of damaging influences. In particular, moisture has an important influence on the lifetime of concrete structures. This is caused by the involvement of free water in corrosion of the steel, and the fact that water acts as a transport medium for damaging ions such as chloride, sulfate, carbonate and ammonium. Thus, we designed and developed an integrated fiber-optical sensor system, which allows in-situ non-destructive long-term monitoring of concrete structures. As moisture indicator we use a pyridinium-N-phenolate betaine dye, which shows a strong solvatochromic behavior in the ultraviolet-visible spectral range (UV-VIS). The dye is embedded in a polymer matrix, whose moderate polarity is enhanced by free water diffusing into the sensor. This leads to a continuous hypsochromic shift of the absorption spectrum according to the water concentration. Another appropriate dye is 4-amino-N-methylphthalimide, which shows a similar behavior in its fluorescence spectra, and presently we are developing its derivatives and suitable polymer matrices. The determination of the pH value of concrete is of major importance for the assessment of acidic attacks, which may lead to serious damage in reinforced concrete, as the embedded steel structures exhibit long-term stability (i.e. resistance to corrosion) only at pH values of 9 or higher. Therefore we have developed a fiber-optical sensor system for the measurement of pH values in concrete consisting of pH-indicator dyes immobilized in a highly hydrophilic polymer matrix. Any change in pH value of the wet concrete material is indicated by a color change of the dye/polymer system. The sensor system displays long-term stability even in aggressive media of pH 12-13.
Advanced and Unconventional Sensors
Multioctave spectral imaging in the infrared: a newly emerging approach
Paul D. LeVan, Tanya Diana Maestas
A new approach is described for obtaining spectral imagery over a broad range of infrared wavelengths, with high efficiency, and with a single grating element and focal plane array. The approach represents a simplification and mass reduction over the traditional approach involving multiple focal plane arrays, dispersing elements, and optical beamsplitters. The new approach has significant advantages for space-based hyperspectral imagers operating in the infrared over a broad range of wavelengths (e.g., MWIR & LWIR), where the reduction in cryo-cooled mass relative to the multi-channel approach translates into noteworthy savings in cryo-cooling requirements and launch costs. Overlapping grating orders are focused onto a multi-waveband focal plane array in order to create spectral images of a scene simultaneously in multiple wavelength regions. The blaze of the grating is chosen so that all spectral orders are dispersed with high grating efficiency. Such an approach extends the spectral range of dispersive spectrometers to several octaves of wavelength, while preserving the compact packaging and cryogenic requirements of conventional (one octave) instruments. We conclude with a description of a ground-based demonstration of a dual-octave embodiment of the concept.