Proceedings Volume 2821

Hyperspectral Remote Sensing and Applications

Sylvia S. Shen
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 6 November 1996
Contents: 7 Sessions, 27 Papers, 0 Presentations
Conference: SPIE's 1996 International Symposium on Optical Science, Engineering, and Instrumentation
Volume Number: 2821

Table of Contents

  • Nonliteral Exploitation Techniques and Applications I
  • Compression and Coding
  • Hyperspectral Sensor Characterization Techniques I
  • Hyperspectral System Analysis and Tradeoffs I
  • Hyperspectral System Analysis and Tradeoffs II
  • Nonliteral Exploitation Techniques and Applications II
  • Hyperspectral Sensor Characterization Techniques II
Nonliteral Exploitation Techniques and Applications I
Constrained energy minimization applied to apparent reflectance and single-scattering albedo spectra: a comparison
Ronald G. Resmini, William R. Graver, Mary E. Kappus, et al.
Constrained energy minimization (CEM) has been applied to the mapping of the quantitative areal distribution of the mineral alunite in an approximately 1.8 km2 area of the Cuprite mining district, Nevada. CEM is a powerful technique for rapid quantitative mineral mapping that requires only the spectrum of the mineral to be mapped; a priori knowledge of background spectral signatures is not required. Our investigation applies CEM to calibrated radiance data converted to apparent reflectance (AR) and to single-scattering albedo (SSA) spectra. The radiance data were acquired by the 210-channel, 0.4 micrometers to 2.5 micrometers airborne Hyperspectral Digital Imagery Collection Experiment sensor. CEM applied to AR spectra assumes linear mixing of the spectra of the materials exposed at the surface. This assumption is likely invalid because surface materials, which are often mixtures of particulates of different substances, are more properly modeled as intimate mixtures; spectral mixing analyses must therefore account for nonlinear effects. One technique for approximating nonlinear mixing requires the conversion of AR spectra to SSA spectra. The results of CEM applied to SSA spectra are compared to those of CEM applied to AR spectra. The mineral maps produced with the SSA and AR spectra show a similar, though not identical, distribution of alunite. Alunite is slightly more widespread based on processing with the SSA spectra. Further, fractional abundances derived from the SSA spectra are, in general, higher than those derived from AR spectra. Implications for the interpretation of quantitative mineral mapping with hyperspectral remote sensing data are discussed.
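The CEM detector has a standard closed form, w = R^-1 d / (d^T R^-1 d), where R is the sample correlation matrix of the scene and d the target spectrum; the filter's response to the target spectrum itself is constrained to exactly 1. A minimal sketch of that closed form (not the authors' code; array shapes and names are illustrative):

```python
import numpy as np

def cem_filter(cube, target):
    """Constrained energy minimization: w = R^-1 d / (d^T R^-1 d).

    cube: (pixels, bands) radiance/reflectance matrix; target: (bands,) spectrum.
    Returns an abundance-like detection score per pixel.
    """
    R = cube.T @ cube / cube.shape[0]   # sample correlation matrix of the scene
    Rinv_d = np.linalg.solve(R, target)
    w = Rinv_d / (target @ Rinv_d)      # normalized so that w . target == 1
    return cube @ w
```

By construction a pixel containing the pure target spectrum scores exactly 1, while the average background energy is minimized.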
SVD method for spectral decomposition and classification of ARES data
James J. Lisowski, Christopher A. Cook
Use of the Short/Medium Wavelength InfraRed (S/MWIR) region of the electromagnetic spectrum for Hyperspectral Imaging (HSI) applications is a relatively recent development. Unlike the visible and near-IR, the S/MWIR region is sensitive not only to reflected sunlight but also to thermal and molecular emissions. The singular value decomposition (SVD) technique, traditionally used for processing HSI data, has been employed to extract features from ARES observations of the Navajo Generating Station at Page, AZ. The results have been compared to more straightforward techniques for identifying narrow-band molecular emission features. Ground truth measurements made by the Aerospace Corporation are used to verify the extracted features.
Application of spatial resolution enhancement and spectral mixture analysis to hyperspectral images
Spatial resolution enhancement and spectral mixture analysis are two of the most extensively used image analysis algorithms. This paper presents an algorithm that merges the best aspects of these two techniques while trying to preserve the spectral and spatial radiometric integrity of the data. With spectral mixture analysis, the fraction of each material (endmember) in every pixel is determined from hyperspectral data. This paper describes an improved unmixing algorithm based upon stepwise regression. The result is a set of material maps where the intensity corresponds to the percentage of a particular endmember within the pixel. The maps are constructed at the spatial resolution of the hyperspectral sensor. The spatial resolution of the material maps is then enhanced using one or more higher spatial resolution images. Similar to the unmixing approach, different endmember contributions to the pixel digital counts are distinguished by the endmember reflectances in the sharpening band(s). After unmixing, the fraction maps are sharpened with a constrained optimization algorithm. This paper presents the results of an image fusion algorithm that combines spectral unmixing and spatial sharpening. Quantifiable results are obtained through the use of synthetically generated imagery. Without synthetic images, a large amount of ground truth would be required in order to measure the accuracy of the material maps. Multiple band sharpening is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. The analysis includes an examination of the effects of constraints and texture variation on the material maps. The results show stepwise unmixing is an improvement over traditional unmixing algorithms. The results also indicate sharpening improves the material maps. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
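A constrained unmixing step of the kind described can be sketched with ordinary least squares plus a heavily weighted sum-to-one row; this stands in for, but is not, the paper's stepwise-regression algorithm (the weight and names are illustrative):

```python
import numpy as np

def unmix_sum_to_one(pixel, endmembers, weight=1e4):
    """Least-squares abundance estimate with a soft sum-to-one constraint.

    endmembers: (bands, n_em) matrix of endmember spectra; pixel: (bands,).
    The constraint is enforced by appending a heavily weighted row of ones.
    """
    n_em = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones((1, n_em))])
    b = np.append(pixel, weight)                  # weighted row asks sum(f) ~= 1
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f
```

For a noiseless mixture the augmented system is consistent and the true fractions are recovered exactly.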
Compression and Coding
Application of video-based coding to hyperspectral imagery
Timothy S. Wilkinson, Val D. Vaughn
As sensor technology evolves into the 21st Century, the volume of data available from both airborne and spaceborne sources will increase rapidly. High resolution hyperspectral remote sensing systems may offer hundreds of bands of data. Efficient use, transmission, storage, and manipulation of such data will require some type of bandwidth compression. Current image compression standards are not specifically optimized to accommodate hyperspectral data. Nonetheless, users who require such compression will be driven toward current standards for reasons of simplicity, cost, and interoperability. This paper begins an examination of the performance of the MPEG-like video compression methodologies for hyperspectral image data. Compression fidelity is examined with an objective metric as a function of the achieved compression ratio. Comparisons are made between different coding configurations which resemble those available in the basic MPEG scheme. Observations regarding the use of standards-based compression of hyperspectral data are offered along with suggestions for future investigations.
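The paper leaves its objective fidelity metric unnamed; a common choice in such rate-distortion comparisons is peak signal-to-noise ratio, which can be sketched as follows (PSNR here is an assumption, not necessarily the authors' metric):

```python
import numpy as np

def psnr(original, reconstructed, peak=None):
    """Peak signal-to-noise ratio in dB between an image and its reconstruction."""
    peak = float(original.max()) if peak is None else peak
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)   # higher is better fidelity
```

Plotting PSNR against achieved compression ratio gives the rate-distortion curves used to compare coding configurations.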
Spectrally and spatially adaptive hyperspectral data compression
Bernard V. Brower, David H. Hadcock, Joseph P. Reitz, et al.
A hyperspectral data compression algorithm is presented that utilizes a modular approach of an adaptive spectral transform to decorrelate the spectral bands, which are then adaptively spatially compressed. The adaptivity in the spectral transform is dependent upon the spectral characteristics (spectral correlation) and the importance of the band. Correlation is very high between most bands of hyperspectral data, which suggests a large amount of redundant information. The bands with less correlation indicate either a significant amount of non-redundant information or poor signal-to-noise characteristics. These spectral characteristics have been shown to be very dependent on the imaging system and atmospheric conditions of the hyperspectral image. The importance of any given band is dependent upon the user's needs, exploitation task and the imaging system. This leads to a spatial compression technique that is selected dependent upon the expected spatial correlation.
Hyperspectral compression using spectral signature matching with error encoding
Joseph P. Reitz, Bernard V. Brower, Austin Lan
Hyperspectral image data present increasing challenges to current transmission bandwidth and storage capabilities. The large amounts of spectrally redundant information present in these data make hyperspectral compression techniques extremely attractive. This paper presents a hyperspectral compression algorithm which was designed to maintain the spectral accuracy needed for standard hyperspectral analytical techniques. Spectral accuracy is maintained through an approach that extracts and separately codes the hyperspectral signatures present in each pixel.
Hyperspectral Sensor Characterization Techniques I
HYDICE system performance: an update
Robert W. Basedow, William S. Aldrich, John E. Colwell, et al.
The HYDICE instrument began flying at the beginning of 1995. Since then a large body of data has been acquired--on the ground, from characterization flights and from operational missions. In combination with laboratory data, this has been used to conduct an evaluation of the full system. Overall, performance has matched design predictions quite closely, both with respect to technical specifications and operational characteristics. Some anomalies have been identified. Their causes, the impact they have on data quality and methods of correcting them have been assessed. This paper reports on these findings, provides an updated status of the system, and identifies possible hardware upgrades.
Full-scene subnanometer HYDICE wavelength calibration
Alexander F. H. Goetz, Kathleen B. Heidebrecht
The Hyperspectral Data and Information Collection Experiment (HYDICE) collected data during 1995 for purposes of assessing and verifying performance. Accurate wavelength calibration to +/- 0.1 nm is essential for proper application of atmospheric models in order to derive apparent surface reflectance. A method for precise wavelength calibration is described that makes use of the narrow atmospheric oxygen band absorption feature at 762 nm. Precision of +/- 0.1 nm is feasible despite variable surface cover if the signal-to-noise ratio is greater than 100. Resampling the HYDICE images to a common wavelength results in reduced spectral resolution and adversely affects the radiometric accuracy of narrow features such as the oxygen band.
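One simple way to realize such a calibration is to search for the wavelength offset that best aligns each measured spectrum with a modeled transmittance curve around the 762 nm oxygen feature. A brute-force sketch (function and variable names are illustrative, not the authors' implementation):

```python
import numpy as np

def estimate_shift(wl, measured, ref_wl, ref_spectrum, shifts):
    """Find the wavelength offset (same units as wl) minimizing the squared
    residual between the measured spectrum and a reference absorption curve."""
    best, best_err = 0.0, np.inf
    for s in shifts:
        model = np.interp(wl + s, ref_wl, ref_spectrum)  # reference at shifted wl
        err = np.sum((measured - model) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best
```

In practice the reference curve would come from an atmospheric model, and the grid search could be replaced by a sub-sample parabolic fit around the minimum.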
Detection and correction of bad pixels in hyperspectral sensors
Hugh H. Kieffer
Hyperspectral sensors may use a 2D array such that one direction across the array is spatial and the other direction is spectral. Any pixels therein having very poor signal-to-noise performance must have their values replaced. Because of the anisotropic nature of information at the array, common image processing techniques should not be used. A bad-pixel replacement algorithm has been developed which uses the information closest in both the spectral and spatial sense to obtain a value which has both the spectral and reflectance properties of the adjacent terrain in the image. A simple and fast implementation that 'repairs' individual bad pixels or clusters of bad pixels has three steps; the first two steps are done only once: (1) Pixels are flagged as 'bad' if their noise level or responsivity falls outside acceptable limits for their spectral channel. (2) For each bad pixel, the minimum-sized surrounding rectangle is determined that has good pixels at all 4 corners and at the 4 edge-points where the row/column of the bad pixel intersects the rectangle boundary (five cases are possible due to bad pixels near an edge or corner of the detector array); the specifications of this rectangle are saved. (3) After a detector data frame has been radiometrically corrected (dark subtraction and gain corrections), the spectral shapes represented by the rectangle edges extending in the dispersion direction are averaged; this shape is then interpolated through the two pixels in the other edges of the rectangle. This algorithm has been implemented for HYDICE.
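For a single isolated bad element, the spirit of the scheme can be illustrated by blending an estimate from the two spatial neighbors in the same spectral channel with one from the two spectral neighbors in the same spatial line. This is a heavily simplified sketch of the rectangle-edge algorithm, not the HYDICE implementation; the equal blend weights are an assumption:

```python
import numpy as np

def repair_pixel(frame, i, j):
    """Replace one bad detector element using both array directions.

    frame: 2D array, axis 0 spatial, axis 1 spectral; (i, j) the bad element,
    assumed interior with good neighbors on all four sides.
    """
    spatial = 0.5 * (frame[i - 1, j] + frame[i + 1, j])   # same spectral channel
    spectral = 0.5 * (frame[i, j - 1] + frame[i, j + 1])  # same spatial line
    return 0.5 * (spatial + spectral)                     # assumed equal blend
```

The full algorithm generalizes this to clusters of bad pixels via the minimum good-pixel rectangle and interpolates whole spectral shapes rather than single values.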
Characterization and reduction of stochastic and periodic anomalies in a hyperspectral imaging sensor system
Bruce V. Shetler, Hugh H. Kieffer
HYDICE, the HYperspectral Digital Imagery Collection Experiment, is an airborne hyperspectral imaging sensor operating in a pushbroom mode. HYDICE collects data simultaneously in 210 wavelength bands from 0.4 to 2.5 micrometers using a prism as the dispersing element. While the overall quality of HYDICE data is excellent, certain data stream anomalies have been identified, among which are a periodic offset in DN level related to the operation of the system cryocooler and a quasi-random variation in the spectral alignment between the dispersed image and the focal plane. In this paper we report on an investigation into the above two effects and the development of algorithms and software to correct or minimize their impact in a production data processing system. We find the periodic variation to have unexpected time- and band-dependent characteristics, which argue against the possibility of correction in post-processing, but to be relatively insensitive to signal and consequently of low impact on the operation of the system. We investigate spectral jitter through an algorithm which performs a least squares fit to several atmospheric spectral features to characterize both the time-dependent jitter motion and systematic spectral mis-registration. This method is also implemented to correct the anomalies in the production data stream. A comprehensive set of hyperspectral sensor calibration and correction algorithms is also presented.
Measurement of the HYDICE system MTF from flight imagery
Robert A. Schowengerdt, Robert W. Basedow, John E. Colwell
The sensor MTF is not only an important descriptor of image quality, but also determines the potential for spatial mixing of spectral signatures in hyperspectral systems. The HYDICE total system MTF has been measured from flight imagery of targets of opportunity, including the Mackinaw Straits Bridge in Michigan and the Chesapeake Bay Bridge at Annapolis, Maryland. The procedures used to derive the MTF are described, and the results are compared to pre-flight laboratory measurements made at Hughes-Danbury Optical Systems. The inflight MTF measurements at 0.5 cycles/pixel are in the range 0.5 to 0.65 in-track, and 0.4 cross-track, which are consistent with the pre-flight measurements.
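The usual route from a bridge (edge) target to an MTF estimate is to sample the edge-spread function, differentiate it to obtain the line-spread function, and take the normalized Fourier magnitude. A minimal sketch of that pipeline; the paper's exact procedure may differ:

```python
import numpy as np

def mtf_from_edge(edge_profile):
    """Estimate an MTF from a sampled edge-spread function (ESF).

    Differentiating the ESF gives the line-spread function (LSF); the MTF is
    the magnitude of its Fourier transform, normalized to 1 at zero frequency.
    """
    lsf = np.diff(edge_profile)          # LSF = derivative of the ESF
    otf = np.fft.rfft(lsf)
    return np.abs(otf) / np.abs(otf[0])  # frequencies 0 .. Nyquist
```

With real imagery the ESF would first be oversampled and averaged along the edge to suppress noise before differentiation.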
Hyperspectral System Analysis and Tradeoffs I
System considerations for hyper/ultra spectroradiometric sensors
Harold Gumbel
A system analysis of spectroradiometric sensors applied to the characterization of gaseous emanations from smoke stacks or fugitive leaks explores the consequences of certain phenomenological aspects and their impact on the performance of dispersive-grating and Fourier transform spectrometers. Specific design issues relating to the two technologies are explored in some detail. However, the purpose is not to resolve phenomenological issues but to highlight their impact on signal-to-noise ratio performance. It is shown that an adequate basis for a definitive optimized design is not yet available. A program to remedy this deficiency is discussed.
Effect of spectral resolution and number of wavelength bands in analysis of a hyperspectral data set using NRL's ORASIS algorithm
Jeffrey H. Bowles, Peter J. Palmadesso, John A. Antoniades, et al.
We report the results of a tradeoff study for the selection of the number of wavelength bands and resolution needed in a hyperspectral data set in order to separate a scene into its constituent features. This separation is accomplished by finding approximate endmembers using convex mixing and shrink-wrapping techniques. This and related techniques are referred to as NRL's Optical Real-time Adaptive Spectral Identification System (ORASIS). ORASIS's algorithms will be briefly described. Once endmembers are found, matched filters are calculated which can then be used to separate (or demix) the scene. We have analyzed synthetic cubes, cubes acquired by NRL's Portable Hyperspectral Imager for Low Light Spectroscopy (PHILLS) sensor, and cubes from other sensors. PHILLS consists of multiple hyperspectral sensors that operate in pushbroom mode. PHILLS has been deployed from aircraft and on the ground in a variety of terrains from the polar icecap to the Florida Keys. The majority of the data were recorded with a 16-bit thermo-electrically cooled camera which records 1024 wavelengths over the range of 200 to 1100 nm. Major features of the scene can be successfully demixed using fewer than 1024 wavelength bands. However, preliminary evidence suggests that finer features require the full wavelength range and resolution.
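Once endmembers are in hand, the matched-filter demix can be sketched as applying the pseudo-inverse of the endmember matrix to every pixel; this is a generic unconstrained linear demix for illustration, not the ORASIS code:

```python
import numpy as np

def demix(cube, endmembers):
    """Unconstrained linear demix of a scene into endmember fraction maps.

    cube: (pixels, bands); endmembers: (bands, n_em).
    The rows of pinv(endmembers) act as a bank of matched filters.
    """
    filters = np.linalg.pinv(endmembers)   # (n_em, bands)
    return cube @ filters.T                # (pixels, n_em) fraction maps
```

If the endmember matrix has full column rank, noiseless linear mixtures are separated exactly; band subsetting (as in the tradeoff study) changes the conditioning of this inversion.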
Airborne Remote Earth Sensing (ARES) Program: an operational airborne MWIR imaging spectrometer and applications
Kevin D. Bishop, Michael J. Diestel
Since 1993, the Airborne Remote Earth Sensing (ARES) Program has collected a wide variety of mid-wave infrared hyperspectral data on an interesting assortment of atmospheric, geologic, urban and chemical emission/absorption features. Flown in NASA's high altitude WB-57F aircraft, the ARES sensor is a 75 channel cryo-cooled prism spectrometer covering the 2 - 6 micrometers spectral region, and is capable of up- or down-looking measurements over a wide range of collection geometries. Sensor characteristics, pointing capabilities, and overall performance are discussed. Highlights from some of the recent data collections are presented, including the 1993 and 1995 thermal mapping of the active lava flow areas of the Kilauea volcano; the 1993 collection of direct solar specular reflection off high-altitude (ice) cloud layers over West Texas; upper-atmospheric H2O vapor sounding using the 6 micrometers solar absorption spectra; sulfur dioxide detection from a coal-burning power plant in Page, AZ (SO2 in emission) and from the Pu'u O'o vent of the Kilauea volcano (SO2 in absorption); and MWIR imagery of various terrestrial and urban background scenes, including West Los Angeles and the Capitol area of Washington, D.C. Supporting spectral analysis and radiometric modeling are presented.
Parameter impacts on hyperspectral remote sensing system performance
The design and use of hyperspectral imaging remote sensing systems involve the selection of a large number of parameters, in large part due to the richness of the data. Many of these parameters interrelate in their effect on system performance. The tasks of optimizing the parameter values one has control over and understanding the impact of those that are uncontrollable are important in system design and use. A computational model of relevant system components and parameter values for a hyperspectral remote sensing system has been developed and used to explore their relative impact on system performance. The hyperspectral remote sensing system is defined by considering the scene and all contributions to the upwelling radiance, the sensor and all effects leading to measured data, and the subsequent processing applied to extract the desired information, which is then used as the metric for system performance. The relative contribution to system performance is studied by defining a nominal configuration for the system and then perturbing individual parameters and examining the resulting impact on system performance. By considering the effect of parameter values one at a time, the relative impact can be studied. However, since the entire system is still considered in the analysis, the constraining interrelationships are still present, thus providing a more relevant indication of impact. Results are presented for a canonical scenario in which an airborne hyperspectral sensor observes a scene where an unresolved object is arbitrarily located and the performance metric is detection accuracy. Significant effects in system performance are seen to be attributable to spectral channel selection and the object spectral characteristics and size, while factors such as instrument noise and calibration error play relatively minor roles in system performance.
Hyperspectral System Analysis and Tradeoffs II
Spectrometer channel characterization for the Airborne Remote Earth Sensor (ARES)
James J. Lisowski, Michelle A. Najarian
The ARES (Airborne Remote Earth Sensing) program employs a dual-mode instrument for the collection of calibrated IR data. The instrument can be used either as an imaging radiometer (Staring Radiometer) or as an imaging spectrometer (Spatially Scanning Dispersive Spectrometer, SSDS). In the SSDS mode, the instrument is capable of resolving 75 bands in the 2 micrometers to 6 micrometers region. Spectral separation is achieved by spatial dispersion of incoming radiation across the FPA using a bi-prism. As with other instruments of this type, mechanical and optical imperfections produce errors in spectral registration and radiometric calibration; non-uniform dispersion by the optics and the finite size of the FPA elements produce imperfect bandpass characteristics, most notably channel crosstalk. The methods developed at Lockheed-Martin Advanced Technology Center, the sensor contractor, and at SciTec to characterize the spectral response of each spectrometer channel using a scanned monochromator and normalization algorithms are discussed.
Introduction to analysis of errors inherent in multispectral imaging through the sea surface 1: Target and media effects
Multispectral and hyperspectral imaging of submerged objects is a key technique in the airborne detection of environmental degradation in marine structures such as reefs, shellfish habitats, and seagrass beds. Additional applications involve detection of submerged ordnance in nearshore reef and surf zones, as well as contraband detection in drug interdiction operations. Unfortunately, the spectral response variability of targets, media (atmosphere and seawater), and camera optical components (e.g., intensifier and dielectric filter coatings) currently obviates accurate model-based detection of submerged objects from airborne imagery, based on spectral signature alone. In this paper, the first of a two-part series, we discuss and quantify various sources of target, media, and sensor errors that confound the signature-based recognition of submerged objects under realistic conditions. Analyses are based on data published primarily in the open literature, and are couched in terms of a model of signature recognition in which optical path perturbations are referred to the focal plane. Given such a model, the spatial and spectral detectability of submerged objects can be more readily predicted within the limits of accuracy incurred by standard measurements of ocean optical parameters.
Introduction to analysis of errors inherent in multispectral imaging through the sea surface 2: sensor and interfacial effects
Multispectral and hyperspectral sensing are key techniques in the airborne detection of submerged objects such as jettisoned contraband, unexploded ordnance, and sea mines. Unfortunately, the errors incurred by imaging through the sea surface (also called imaging trans-MBL or through the marine boundary layer) include spatial, spectral, and temporal distortions. In Part I of this two-part series, we showed that spatial distortions derive primarily from projection errors resulting from refraction at the air-sea interface. In practice, the variability and uncertainty of in situ measurements of aqueous refractive index pose a problem for accurate prediction of such projection errors. We also discussed problems that arise due to target variability, surface contamination, and target surface corruption due to corrosion. In this paper, we summarize spatial distortions due to imaging with intensified cameras, as well as spectral errors that result from interaction of sensor optics with the environment (e.g., thermal drift in spectral filter frequency response). We discuss the spatial effects of detector noise and error, and summarize anomalies observed in our recent study of intensified multispectral camera circuitry. We briefly review the effects of interfacial refraction and in-water scattering upon the spatial clarity and resolution of submerged target imagery. We conclude our development with an error budget that is configured for several common sensing scenarios (e.g., staring-array and pushbroom sensors), then discuss effects of decorrelation between spectral, spatial, and temporal variables on target detection accuracy.
Nonliteral Exploitation Techniques and Applications II
Multilevel lines of communication extraction
Aleksandar Zavaljevski, Atam P. Dhawan, Alok Sarwal, et al.
In this paper, a novel multi-level adaptive lines-of-communication extraction method for multispectral images is presented. The method takes into account both spectral and spatial characteristics of the data at different levels of processing. The principal background classes are obtained first using K-means clustering. Each pixel is next examined for classification using a minimum-distance classifier with the principal class signatures obtained in the previous level. In the next level, the neighborhood of each unclassified pixel is analyzed for inclusion of candidate classes for use as endmembers in a spectral unmixing model. If the list of candidate background classes is empty, the conditions for their inclusion are relaxed. The fractions of background and lines-of-communication signatures for the unclassified pixels are determined by means of a linear least-squares method. If the results of unmixing are not satisfactory, the candidate cluster list is renewed and unmixing is repeated. Lines-of-communication detection within each pixel is performed next: the line-segment detection parameters are initialized, directional confidence is calculated, and line-segment tracking is initialized. The line segments are incremented until the composite confidence becomes too low. Finally, segment connection and lines-of-communication identification are performed. The proposed method was successfully applied to both synthetic and AVIRIS hyperspectral data sets.
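The minimum-distance classification stage with a reject option, where pixels too far from every class mean remain unclassified and are passed on to unmixing, can be sketched as follows (threshold and names are illustrative):

```python
import numpy as np

def min_distance_classify(pixels, class_means, threshold):
    """Minimum-distance classifier with a reject option.

    pixels: (n, bands); class_means: (n_classes, bands).
    Pixels farther than `threshold` from every class mean get label -1
    (unclassified), to be handled by a later unmixing stage.
    """
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    labels[d.min(axis=1) > threshold] = -1
    return labels
```

In the paper's pipeline the class means would come from the K-means step, and the rejected pixels feed the candidate-endmember unmixing level.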
Relative utility of HYDICE and multispectral data for object detection, identification, and abundance estimation
Sylvia S. Shen
Land cover and land use classification, and area estimation of spatially resolved objects, have been successfully derived from remotely sensed imagery such as Landsat multispectral data for over 20 years. For subpixel objects, multispectral instruments may not provide sufficiently distinct spectral information to reliably decompose a pixel into the substances that make it up. Hyperspectral imaging instruments, with their large number of registered spectral bands, may provide a capability to produce improved detection and classification of spatially resolved objects. These instruments are also designed to allow more reliable spectral decomposition of a pixel into pure substances, therefore permitting subpixel target material detection and abundance estimation. This paper describes an ongoing study whose objective is to demonstrate the unique attributes and added contributions of hyperspectral data in comparison with simulated multispectral data sets for detection, discrimination, material identification, functional identification, and abundance estimation problems. Methodologies used to perform these nonliteral exploitation tasks are described. Also presented are results obtained from applying these exploitation techniques to Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensor data and the simulated multispectral data sets for small to subpixel targets against a desert background. Performance comparison is made in terms of detection success rate, false alarms, and the number of correctly identified targets. The study is currently being extended to data collected by the HYDICE sensor in other types of background environment, such as forests, to allow an assessment of the effect of complex backgrounds on the relative utility of hyperspectral and multispectral data for key exploitation tasks.
Material taxonomy for object identification in HYDICE imagery
Edward M. Bassett, Linda S. Kalman
This paper summarizes investigations made at The Aerospace Corporation during the past year in the field of hyperspectral material identification. A spectral feature metric that makes use of visible features in the 'graph' of a reference spectrum or acquired hyperspectral sample was developed and refined. Using this and other well-known spectral similarity metrics, a mechanism for creating material taxonomies using clustering techniques was developed. The taxonomies and metrics were combined to create an innovative means for identifying materials and objects in hyperspectral scenes from the HYDICE sensor.
Advanced techniques for interferometric synthetic aperture radar data exploitation
Paul L. Poehler, Arthur W. Mansfield, Nils N. Haag, et al.
Recent advances in the areas of phase history processing, interferometry, and radargrammetric adjustment have made possible extremely accurate data extraction from Synthetic Aperture Radar (SAR) imagery data. The potential gain from interferometric exploitation is significant since accuracy of measurements can theoretically be determined to within a resolution element of wavelength dimension. A unique combination of advanced techniques is described which has made possible more accurate extraction of metric position and elevation model data from multiple pass SAR. The unique approach outlined can be implemented on conventional stereo extraction workstations with appropriate refinements. This paper addresses the accuracy achievable from repeat passes of the ERS-1 platform. Experimental results are provided.
Hyperspectral Sensor Characterization Techniques II
In-flight radiometric stability of HYDICE for large and small uniform reflectance targets under various conditions
Philip N. Slater, Robert W. Basedow, William S. Aldrich, et al.
The in-flight radiometric stability of images formed in a single spectral band of HYDICE has been examined under various conditions. In the first, the stability of the combined response of the on-board calibrator and HYDICE was checked by comparing repeated image acquisitions over small targets, a few pixels in size, and then over a uniform, extended target. For the second condition, a new flat-field calibration of the focal plane was used which improved the radiometric stability. It is shown from these results that radiometric stability and accuracy are closely related to target contrast and size. This has important consequences for the empirical line approach to calibration or reflectance retrieval. The third condition included a pitch in the attitude of the aircraft that introduced a marked banding of the image in the vicinity of the 1.38-micrometers band, which is very sensitive to the presence of cirrus-cloud ice crystals. This is believed to be another form of the 'spectral jitter' described in other papers in these Proceedings.
In-flight radiometric calibration of HYDICE using a reflectance-based approach
Kurtis J. Thome, Christine Gustafson-Bold, Philip N. Slater, et al.
The reflectance-based method is used to determine an absolute radiometric calibration of the HYDICE sensor. Results are given for data collected at Ivanpah Playa, California on June 20, 1995. This paper describes the reflectance-based method as applied to the hyperspectral case of HYDICE. The method uses a modified version of a Gauss-Seidel radiative transfer code to predict the at-sensor radiances used to compute the calibration coefficients. Coefficients were obtained from several overflights of the target area. The results from this work show that calibration coefficients for several of the overflights agreed to better than 10% in all bands not affected by strong gaseous absorption, and better than 5% in portions of the visible and near-infrared.
Estimation of a sensor's ground sample profile with imagery collected at small and large spatial scales
Many applications require a knowledge of the two-dimensional ground spot profile or its Fourier transform, the transfer function. The two-dimensional ground spot profile is needed to analyze image data, to sharpen images and to perform data fusion. In the field of imaging spectrometry, the ground spot profiles in each spectral band are needed to analyze the results of spectral unmixing. The two-dimensional ground spot profile and transfer function carry more information about an imaging system than the modulation transfer function (MTF) in two orthogonal directions, for instance, the crosstrack and downtrack directions of a pushbroom or whiskbroom imaging system. This fact is apparent from the Nyquist sampling theorem and the projection-slice theorem (Mersereau and Oppenheim, 1974), which imply that N projections (in this instance, line spread functions) at angular increments of π/N are needed to reconstruct a two-dimensional function over an N by N grid. Therefore, unless the two-dimensional ground spot profile is a separable function in the crosstrack and downtrack directions, these two one-dimensional transfer functions carry less information than the two-dimensional ground spot profile or two-dimensional transfer function.

A method of estimating the two-dimensional ground spot profile from flight data is desired because the flight performance of a sensor is difficult to assess from laboratory measurements. A zeroth order approximation to the ground spot profile can be obtained from images of the same scene collected at sufficiently different spatial scales. The idea is to use the image of the scene collected at low altitude as a reference image, and to determine the blurring of this reference needed to match the high altitude image. The resultant blur function provides only a zeroth order approximation to the ground spot profile because the low altitude image is itself blurred.
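The basic idea in the preceding paragraph, blurring the low-altitude reference until it matches the high-altitude image, can be sketched as a simple search over trial blur widths. The Gaussian blur family, the synthetic images, and the grid of trial widths are assumptions for illustration; the paper's actual trial profiles need not be Gaussian.

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """Blur an image by multiplying its spectrum with a Gaussian
    transfer function (periodic boundaries, for illustration only)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    tf = np.exp(-2.0 * (np.pi ** 2) * sigma ** 2 * (fx ** 2 + fy ** 2))
    return np.fft.ifft2(np.fft.fft2(img) * tf).real

rng = np.random.default_rng(3)
low = rng.standard_normal((64, 64))    # stand-in for the low-altitude image
high = gaussian_blur_fft(low, 2.0)     # stand-in for the high-altitude image

# Try a grid of blur widths; keep the one whose blurred reference
# best matches the high-altitude image in a least-squares sense.
trials = np.arange(0.5, 4.01, 0.25)
errors = [np.sum((gaussian_blur_fft(low, s) - high) ** 2) for s in trials]
best_sigma = trials[int(np.argmin(errors))]
```

For this synthetic case the search recovers the blur width used to generate the high-altitude stand-in; with real imagery the recovered blur is only the zeroth order approximation described above.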
If certain conditions are satisfied by the data, this zeroth order approximation can be improved by correcting for the use of the low altitude image as the reference image. The ground spot profile is defined by the angular distribution of radiation collected by a single detector and the blurring from downtrack motion during the integration period. The ground spot profile is a function of the crosstrack and downtrack coordinates c and d, the sensor height h above the ground, and the distance m the sensor moves along the downtrack direction during the detector integration period. Using v to denote the sensor surface velocity along the downtrack direction and t to denote the detector integration time, m = vt. The contribution to the ground spot profile that originates from angular spreading depends on the geometric parameters of the collection and is represented by g(c/h, d/h). The crosstrack and downtrack coordinates are divided by h because the width of g( ) should scale linearly with the height of the sensor from the ground if the sensor is a pushbroom imager with field-of-view centered at the nadir. The list of contributions to g( ) includes the detector element dimensions, diffraction, optical aberrations and high frequency pointing jitter. The contribution from downtrack motion does not scale with height and is represented by a rect function, rect(d/m). The rect(x) function equals 1 for |x| < 1/2, and 0 otherwise. The combined spreading from the angular spread and downtrack motion is expressed as a convolution:

p(c, d, h, m) = ∫ g(c/h, ν/h) rect((d − ν)/m) dν    (1)

where p( ) denotes the ground spot profile. If the sensor is operated too close to the ground, defocusing will change the shape of the ground spot profile, and our assumption of linear scaling with h would be invalid.
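The convolution in equation (1), an angular-spread kernel g convolved with a downtrack rect of width m, can be evaluated numerically along the downtrack axis. The Gaussian choice for g, the height-proportional width, and the grid spacing below are assumptions for illustration; the paper does not specify a functional form for g.

```python
import numpy as np

h = 1500.0          # sensor height above ground (m), assumed
m = 0.8             # downtrack motion v*t during integration (m), assumed
dd = 0.05           # downtrack sample spacing (m)
d = np.arange(-5.0, 5.0, dd)

# Assumed angular-spread kernel at c = 0: a Gaussian whose width scales
# linearly with height, per the scaling argument in the text.
sigma = 1.0e-3 * h
g = np.exp(-0.5 * (d / sigma) ** 2)

# rect(d/m): 1 for |d| < m/2, 0 otherwise (downtrack motion blur).
rect = (np.abs(d) < m / 2).astype(float)

# p(0, d, h, m) = integral of g(nu/h) rect((d - nu)/m) dnu,
# approximated as a discrete convolution on the sample grid.
p = np.convolve(g, rect, mode="same") * dd
p /= p.sum() * dd   # normalize the profile to unit area
```

The resulting profile is the motion-smeared angular kernel; in two dimensions the same convolution acts only along the downtrack coordinate.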
If the image data is corrected to reflectance (Section 2.1), the reflectance samples in a given spectral band are related to the unknown scene reflectance by a convolution with the ground spot profile:

r(cᵢ, dⱼ, h, m) = ∫∫ u(ξ, ν) p(cᵢ − ξ, dⱼ − ν, h, m) dξ dν + noise(cᵢ, dⱼ)    (2)

The scene is denoted by u( ) to emphasize that the scene reflectance is unknown, except for the reflectance samples r(cᵢ, dⱼ, h, m) derived from the sensor data. The spatially discrete sample coordinates cᵢ and dⱼ are defined in terms of the sample intervals Δc and Δd in the crosstrack and downtrack directions: cᵢ = iΔc and dⱼ = jΔd, with i and j running over the pixel counts in the crosstrack and downtrack directions. Our goal is to estimate p(c, d, h, m) from an extensive set of reflectance estimates r(cᵢ, dⱼ, h, m) taken at heights h₁ and h₂, with h₁ < h₂. The data consist of HYDICE collections over mowed fields and old growth woods from altitudes of 1,500 and 6,000 m. The basic idea outlined above for obtaining a zeroth order approximation of the ground spot profile can be carried out in three different ways: one approach is iterative, the other two non-iterative. The iterative approach starts with a trial ground spot profile and uses it to blur the imagery collected at low altitude. The blurred imagery is compared to the imagery collected at the higher altitude. Different trial ground spot profiles are tried, and the ground spot profile is estimated by the trial profile that achieves the closest match between the low and high altitude images. The direct approaches apply either in the spatial domain or in the Fourier domain. The spatial domain calculation uses least-squares to estimate parameters of the ground spot profile. The Fourier domain approach estimates the Fourier transform of the ground spot profile as the ratio of the Fourier transform of the high altitude image to the Fourier transform of the low altitude image.
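The Fourier-domain variant, estimating the transfer function as the ratio of the high-altitude spectrum to the low-altitude spectrum, can be sketched as below. The synthetic scene, the Gaussian transfer function standing in for the extra spreading at high altitude, and the small stabilizing constant are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "low-altitude" reference image (stand-in for real data).
low = rng.standard_normal((128, 128))

# True (unknown in practice) extra blur between low and high altitude:
# a Gaussian transfer function applied in the Fourier domain.
fy = np.fft.fftfreq(128)[:, None]
fx = np.fft.fftfreq(128)[None, :]
true_tf = np.exp(-2.0 * (np.pi ** 2) * (3.0 ** 2) * (fx ** 2 + fy ** 2))
high = np.fft.ifft2(np.fft.fft2(low) * true_tf).real

# Fourier-domain estimate: ratio of spectra, written in the regularized
# form F_high * conj(F_low) / (|F_low|^2 + eps) so frequencies where the
# reference spectrum is near zero do not blow up.
eps = 1e-6
F_low = np.fft.fft2(low)
F_high = np.fft.fft2(high)
est_tf = (F_high * np.conj(F_low)) / (np.abs(F_low) ** 2 + eps)
```

At well-supported frequencies the estimate tracks the true transfer function; with real imagery, noise and residual distortion make this ratio far less clean, which is one motivation for the block-wise processing discussed next.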
These approaches to estimating the ground sample profile of HYDICE do not use overpasses of test targets or linear features such as bridges, and only require repeated collections of a scene at low and high altitude. On the other hand, the analysis of aerial imagery to find the two-dimensional ground spot profile involves solving new data processing problems that are not encountered in MTF estimates from images of high contrast linear features such as bridges. For example, it is necessary to cope with any geometric distortions in the imagery. HYDICE imagery contains geometric distortions from several sources:

1. pointing instability of the stabilized platform,
2. wander of the airplane ground track,
3. imperfect alignment of the pushbroom with the ground track,
4. curvature of the spectrometer slit as projected on the earth surface (the spectrometer slit is slightly curved to reduce an optical aberration in the spectrometer known as "smile"), and
5. different perspectives of the surface relief at the two different heights above the surface.

To cope with these distortions, the imagery cannot be treated as a monolithic array, but must be processed in small blocks that are nearly distortion free. The residual distortion internal to these image blocks will introduce errors, but unless there is a systematic bias in the residual distortion, the errors will average out. The HYDICE ground spot profile is a slowly varying function of the field angle and the spectral channel. The dependence on field angle is preserved by processing the images in small blocks, and the dependence on spectral channel is preserved by processing each spectral channel separately.
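The block-wise strategy, processing small, nearly distortion-free tiles and averaging the per-block estimates, might look like the following sketch. The block size, the synthetic images, and the placeholder per-block statistic (a simple energy ratio standing in for a per-block profile fit) are assumptions for illustration.

```python
import numpy as np

def blockwise_estimates(low_img, high_img, block=32):
    """Split two co-registered images into small blocks and return the
    mean and spread of a per-block estimate; errors from residual
    distortion average out across blocks unless systematically biased."""
    rows, cols = low_img.shape
    ests = []
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            lo = low_img[r:r + block, c:c + block]
            hi = high_img[r:r + block, c:c + block]
            # Placeholder statistic: ratio of block energies. A real
            # implementation would fit a blur profile per block instead.
            ests.append(np.sum(hi ** 2) / np.sum(lo ** 2))
    return np.mean(ests), np.std(ests)

rng = np.random.default_rng(1)
low = rng.standard_normal((128, 128))
high = 0.5 * low + 0.05 * rng.standard_normal((128, 128))
mean_est, spread = blockwise_estimates(low, high)
```

Averaging over many blocks is what suppresses the block-internal distortion errors, provided those errors carry no systematic bias.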
Nonliteral Exploitation Techniques and Applications II
Progress of the Wyoming Hyperspectral Imagery Pilot Project: analysis of AVIRIS data for rangeland assessment
E. Raymond Hunt Jr., M. M. Barlow, C. L. Mahelona, et al.
Management of semi-arid rangelands in the western United States for sustainability requires objective methods for monitoring large areas; the goal of the Wyoming Hyperspectral Imagery Pilot Project was to determine whether hyperspectral remote sensing can provide the capability for rangeland assessment. Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) data obtained during the Geology Group Shoot over the Nature Conservancy's Red Canyon Ranch near Lander, WY, were compared with Landsat Thematic Mapper data for classification of vegetation communities. Using the same training areas, supervised classifications from the two sensors were significantly different. The amount of vegetation cover from unconstrained linear spectral unmixing was highly correlated with the normalized difference vegetation index. Because the flight lines were east-west, vegetation on the north side of the image had significantly higher reflectances than similar vegetation on the south side, possibly due to differences in the bidirectional reflectance distribution function. These results indicate that hyperspectral imagery can provide better data on community composition, but only equivalent information on the amount of vegetation. Thus, infrequent collection of AVIRIS data combined with other sensors provides an optimal solution for monitoring rangelands.
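The comparison reported above, vegetation cover from unconstrained linear spectral unmixing against NDVI, can be sketched as follows. The endmember spectra, band layout, and pixel values are synthetic assumptions for illustration, not AVIRIS data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed endmember spectra over 4 illustrative bands
# (here band index 1 plays the red band and index 2 the near-infrared).
veg = np.array([0.30, 0.05, 0.50, 0.45])   # green vegetation
soil = np.array([0.25, 0.30, 0.35, 0.40])  # bare soil
E = np.column_stack([veg, soil])           # endmember matrix (bands x 2)

# Synthetic pixels: random vegetation fractions plus a little noise.
f_true = rng.uniform(0.0, 1.0, size=200)
pixels = np.outer(veg, f_true) + np.outer(soil, 1.0 - f_true)
pixels += 0.01 * rng.standard_normal(pixels.shape)

# Unconstrained linear unmixing: per-pixel least-squares fractions,
# with no sum-to-one or non-negativity constraints imposed.
fractions, *_ = np.linalg.lstsq(E, pixels, rcond=None)
veg_cover = fractions[0]

# NDVI from the assumed red and near-infrared bands.
red, nir = pixels[1], pixels[2]
ndvi = (nir - red) / (nir + red)

# Correlation between unmixed cover and NDVI, as the paper examines.
r = np.corrcoef(veg_cover, ndvi)[0, 1]
```

On this two-endmember synthetic scene the correlation is necessarily high; the paper's point is that the real-data correlation was also high, so unmixing added little beyond NDVI for cover amount, even though the hyperspectral data improved community classification.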