Proceedings Volume 2758

Algorithms for Multispectral and Hyperspectral Imagery II

A. Evan Iverson

Volume Details

Date Published: 17 June 1996
Contents: 7 Sessions, 37 Papers, 0 Presentations
Conference: Aerospace/Defense Sensing and Controls 1996
Volume Number: 2758

Table of Contents

  • Detection and Classification I
  • Detection and Classification II
  • Fusion and Sharpening
  • Techniques and Applications
  • Atmospheric Correction and Radiometric Calibration
  • Sensors and Sensor Data Processing
  • Poster Session
Detection and Classification I
Improving the performance of genetic algorithms for terrain categorization of multispectral images
David E. Larch
We have developed a method that uses a genetic algorithm (GA) to optimize rules for categorizing terrain depicted in multispectral data. A variety of multispectral data have been used in the work. Linear techniques did not separate terrain categories with sufficient accuracy, so genetic algorithms were applied to the problem. Genetic algorithms are a nonlinear optimization technique based on the biological ideas of natural selection and survival of the fittest. In the work presented here, the genetic algorithm optimizes rules for the categorization of terrain. The genetic algorithm produced promising results for terrain categorization; however, work continues on improving classification accuracy. As part of this effort, new rule types have been added to the genetic algorithm's repertoire: data clustering, band ratios, linear combinations of bands, boxes in spectral space, and second-order combinations of three bands. Improved performance of the rules is demonstrated.
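The selection/mutation loop the abstract describes can be illustrated with a minimal sketch. The rule type (a single band-ratio threshold), the toy pixel data, and all parameter values below are illustrative assumptions, not details from the paper:

```python
import random

random.seed(0)

# Toy two-band pixels: (band1, band2, label). Class 1 has a high band1/band2 ratio.
pixels = [(random.uniform(5, 10), random.uniform(1, 4), 1) for _ in range(50)] + \
         [(random.uniform(1, 4), random.uniform(5, 10), 0) for _ in range(50)]

def fitness(threshold):
    # Rule under optimization: classify as class 1 when band1/band2 > threshold.
    correct = sum((b1 / b2 > threshold) == bool(label) for b1, b2, label in pixels)
    return correct / len(pixels)

def evolve(generations=30, pop_size=20):
    pop = [random.uniform(0.1, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                        # survival of the fittest
        children = [random.choice(parents) + random.gauss(0, 0.2)   # mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

A real system would evolve the richer rule types the abstract lists (clusters, linear combinations, spectral-space boxes) with crossover as well as mutation; the structure of the loop is the same.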
Segmentation of multiband imagery using minimum spanning trees
James R. Lersch, A. Evan Iverson, Brian N. Webb, et al.
We present a new technique for the automatic segmentation of multiband imagery. Our approach is based on the computation of a minimum spanning tree over a graph derived from the image. We use a fast graph-search algorithm and a custom tree-splitting algorithm to provide a high level of performance. This approach captures some of the gestalt characteristics of human perceptual grouping and is easily adaptable to a variety of spectral and spatial criteria. We demonstrate our technique on multispectral and hyperspectral imagery of the earth's surface.
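The graph-based procedure can be sketched in a few lines: build a 4-connected pixel graph weighted by spectral distance, compute the minimum spanning tree with Kruskal's algorithm, and split the tree at its heaviest edges. The toy image, distance measure, and segment count below are illustrative assumptions, not the authors' custom tree-splitting criteria:

```python
import numpy as np

np.random.seed(0)

# Toy 2-band image: left half dark, right half bright, plus noise.
img = np.zeros((6, 6, 2))
img[:, 3:] = 10.0
img += np.random.normal(0, 0.1, img.shape)

h, w, _ = img.shape
edges = []                       # (weight, pixel_a, pixel_b) over 4-neighbors
for y in range(h):
    for x in range(w):
        for dy, dx in ((0, 1), (1, 0)):
            ny, nx = y + dy, x + dx
            if ny < h and nx < w:
                wgt = np.linalg.norm(img[y, x] - img[ny, nx])
                edges.append((wgt, y * w + x, ny * w + nx))

# Kruskal's algorithm with union-find builds the minimum spanning tree.
parent = list(range(h * w))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

mst = []
for wgt, a, b in sorted(edges):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb
        mst.append((wgt, a, b))

# Splitting the tree at its k-1 heaviest edges yields k segments.
k = 2
keep = sorted(mst)[:len(mst) - (k - 1)]
parent = list(range(h * w))
for wgt, a, b in keep:
    parent[find(a)] = find(b)
labels = np.array([find(i) for i in range(h * w)]).reshape(h, w)
print(len(np.unique(labels)))    # number of segments
```

Here the single heaviest MST edge is the one crossing the dark/bright boundary, so cutting it recovers the two halves of the scene.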
Spectral shape classification system for Landsat Thematic Mapper
A multispectral classification system based on an alternative spectral representation is described, and its performance is evaluated over a full Landsat Thematic Mapper (TM) scene. Spectral classes are represented by their spectral shape -- a vector of binary features that describes the relative values between spectral bands. An algorithm for segmenting or clustering TM data based on this representation is described. After classes have been assigned to a subset of spectral shapes within training areas, the remaining spectral shapes are classified according to their Hamming distance to those that have already been classified. The performance of the spectral shape classifier is compared to a maximum likelihood classifier over five sites that are fairly representative of the full Landsat scene considered. Although the performance of the two classifiers is not significantly different within a site, the performance of the spectral shape classifier is significantly better than the maximum likelihood classifier across sites. A full-scene spectral shape classifier is then described that combines spectral signature files, associating classes with spectral shapes derived over the five sites, into a single file used to classify the full scene. The classification accuracy of the full-scene spectral shape classifier is shown to be superior to that of a stratified maximum-likelihood classifier. The spectral shape classifier is implemented in C and is able to process an entire Landsat TM scene in about one hour on a single-processor Sun SPARC 10 workstation with 128 megabytes of RAM.
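The representation and classification rule can be sketched directly: encode each spectrum as a binary vector of between-band comparisons, then assign unlabeled shapes to the class with the nearest shape in Hamming distance. The six-band training spectra below are invented for illustration, not taken from the paper:

```python
import numpy as np

def spectral_shape(pixel):
    # Binary shape vector: 1 where band i+1 exceeds band i.
    return tuple(int(b) for b in np.diff(pixel) > 0)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical training signatures: TM-like 6-band spectra per class.
training = {
    "water":      [30, 25, 20, 10, 5, 3],
    "vegetation": [20, 30, 25, 90, 60, 30],
}
shapes = {cls: spectral_shape(s) for cls, s in training.items()}

def classify(pixel):
    # Assign the class whose shape is nearest in Hamming distance.
    shape = spectral_shape(pixel)
    return min(shapes, key=lambda cls: hamming(shape, shapes[cls]))

print(classify([28, 22, 19, 12, 6, 2]))   # monotonically decreasing, water-like
print(classify([18, 28, 22, 80, 55, 28])) # vegetation-like spectrum
```

Because the shape vector discards absolute radiance, spectra with the same relative band structure map to the same class regardless of brightness.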
Nonparametric classification of subpixel materials in multispectral imagery
Eric R. Boudreau, Robert L. Huguenin, Mark A. Karaska
An effective process for the automatic classification of subpixel materials in multispectral imagery has been developed. The applied analysis spectral analytical process (AASAP) isolates the contribution of specific materials of interest (MOI) within mixed pixels. AASAP consists of a suite of algorithms that perform environmental correction, signature derivation, and subpixel classification. Atmospheric and sun angle correction factors are extracted directly from imagery, allowing signatures produced from a given image to be applied to other images. AASAP signature derivation extracts the component of the pixel spectra that is most common to the training set to produce a signature spectrum and nonparametric feature space. The subpixel classifier applies a background estimation technique to a given pixel under test to produce a residual. A detection occurs when the residual falls within the signature feature space. AASAP was employed to detect stands of loblolly pine in a Landsat TM scene that contained a variety of species of southern yellow pine. An independent field evaluation indicated that 85% of the detections contained over 20% loblolly, and that 91% of the known loblolly stands were detected. For another application, a crop signature derived from a scene in Texas detected occurrences of the same crop in scenes from Kansas and Mexico. AASAP has also been used to locate subpixel occurrences of soil contamination, wetlands species, and lines of communication.
Using maps to automate the classification of remotely sensed imagery
The accurate classification of remotely sensed imagery usually requires some form of ground truth data. Maps are potentially a valuable source of ground truth but have several problems (e.g., they are usually outdated, features are generalized, and thematic categories in the map often do not correspond to distinct clusters or segments in the imagery). We describe several methods for using maps to automate the classification of remotely sensed data, specifically Landsat Thematic Mapper imagery. In each, map data are co-registered to all or a part of the image to be classified. A probability model relating spectral clusters derived from the imagery to thematic categories contained in the map is then estimated. This model is computed globally and adjusted locally based on context. By computing the probability model over a large area (e.g., the full Landsat scene), general relationships between spectral categories and clusters are captured even though there are differences between the image and the map. Then, by adjusting and applying the model locally, new features can be extracted from the image that are not contained in the map and, in certain cases, different classes can be assigned to the same cluster in different parts of the image based on context. Experimental results are presented for several Landsat scenes. Several of the methods produced results that were more accurate than the map. We show that these methods are able to enhance the spatial detail of features contained in the map, identify new features not present in the map, and fill in areas in which map coverage does not exist.
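The global probability model can be estimated by simple counting over co-registered pixels: tabulate how often each spectral cluster coincides with each map category, normalize to get P(class | cluster), and label each cluster with its most probable class. The cluster/class counts below are invented for illustration; the paper's local, context-based adjustment is not shown:

```python
from collections import Counter, defaultdict

# Hypothetical co-registered data: per pixel, (spectral cluster id, map class).
pixels = [(0, "forest")] * 80 + [(0, "water")] * 5 + \
         [(1, "water")] * 60 + [(1, "forest")] * 10 + \
         [(2, "urban")] * 40 + [(2, "forest")] * 8

# Estimate P(class | cluster) from joint counts over the whole scene.
counts = defaultdict(Counter)
for cluster, cls in pixels:
    counts[cluster][cls] += 1

model = {}
for cluster, ctr in counts.items():
    total = sum(ctr.values())
    model[cluster] = {cls: n / total for cls, n in ctr.items()}

# Label each spectral cluster with its most probable map class.
assignment = {c: max(p, key=p.get) for c, p in model.items()}
print(assignment)
```

Because the counts are accumulated scene-wide, isolated map errors (the stray "water" pixels in cluster 0 here) do not change the global cluster-to-class assignment.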
Automatic neural network-based cloud detection/classification scheme using multispectral and textural features
Mukhtiar Ahmed Shaikh, Bin Tian, Mahmood R. Azimi-Sadjadi, et al.
In this paper, efficient and robust neural network-based schemes are introduced to perform automatic cloud detection and classification exploiting textural, spectral, and temporal features. An unsupervised Kohonen neural network was used to classify the cloud contents of an image into ten different cloud classes. In the first approach, the image was segmented into small 8-by-8 blocks. Inputs to the network consisted of textural features extracted from each block using the wavelet transform (WT). To improve the detection rate and reduce the false positive rate, a multi-channel fusion system was constructed to combine the results of different optical bands. In the second approach, the inputs to the network were a vector consisting of the four values of the corresponding pixels in the four bands/channels. In order to keep track of the spectral changes over time, a temporal-based neural network adaptation scheme is also introduced. The simulation results show that the neural network with temporal adaptation can follow the variations of the spectral features and thus achieve high accuracy in the cloud detection/classification task at different times. The results using high resolution GOES 8 data show the promise of the Kohonen neural network when used in conjunction with textural and spectral features for cloud detection/classification.
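The competitive-learning core of a Kohonen network is compact enough to sketch: each unit holds a prototype vector, the best-matching unit for each input is pulled toward that input, and the learning rate decays over time. The two-class toy spectra, the two-unit map, and the fixed initial prototypes below are illustrative assumptions (the paper uses ten cloud classes and wavelet texture features):

```python
import numpy as np

np.random.seed(1)

# Toy 4-band pixel spectra drawn from two regimes (e.g., bright cloud vs. dark surface).
bright = np.random.normal(0.9, 0.02, (100, 4))
dark = np.random.normal(0.1, 0.02, (100, 4))
data = np.vstack([bright, dark])
np.random.shuffle(data)

# A tiny 1-D Kohonen map with 2 units; initial prototypes fixed for reproducibility.
weights = np.array([[0.2] * 4, [0.8] * 4])
for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                         # decaying learning rate
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    weights[bmu] += lr * (x - weights[bmu])                # move winner toward input

# After training, the two prototypes approximate the two class means.
print(np.round(weights, 2))
```

A full SOM also updates the neighbors of the winning unit with a distance-dependent kernel; with only two units that refinement is omitted here.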
Detection and Classification II
Small-target detection in multispectral imagery with cyclic overlay processing
The detection of small, hostile targets in multispectral imagery is generally complicated by sensor noise, atmospheric obscurants, and spatial distortions induced by the point-spread function (PSF). Traditional methods for multispectral detection of small targets, such as signature-based discrimination predicated upon deterministic physical models, have not proven robust in the presence of camera noise at low light levels. The additional problem of target variability also confounds a signature-based approach. Due to time-dependent variations in illuminant spectral response, targets can appear to have different spectral properties at different times of day and under various weather conditions. In this paper, we discuss computationally efficient methods for locating targets that differ spectrally from their spatially adjacent backgrounds but are similar to features located elsewhere in the source image. In particular, we note that flicker effects can be produced in which target intensity appears to vary differently from background intensity. Such effects are produced computationally by cyclic overlay processing (COP), which sequentially displays monospectral band images to achieve different perceived flicker rates of target and background. When combined with knowledge about the human visual system (HVS), COP can be successfully used in conjunction with neighborhood operations to segment target regions in highly cluttered imagery. We emphasize the role of target-background contrast in potentiating flicker effects, and discuss algorithms for computing COP. Analyses emphasize computational cost and effectiveness of various COP filter configurations for detecting targets that are similar to, or partially obscured by, surrounding cover or earth.
We also discuss the implementation of our algorithms on parallel processors, in particular, the parallel algebraic logic (PAL) architecture currently being implemented in cooperation with Lockheed Martin and USAF Wright Laboratory. Our algorithms are written in image algebra, a rigorous, concise, inherently parallel notation that unifies linear and nonlinear mathematics in the image domain and has been implemented on a variety of parallel processors. Thus, our algorithms are both feasible and portable.
Subpixel material identification by residual correlation
Pamela L. Blake, Gerald Pellegrini, Mark Richard Vriesenga
The recognition of subpixel signatures is critical to realizing the full detection potential of multispectral and hyperspectral sensors. No approach has been developed that optimizes and fully characterizes the subpixel spectral components independently for every pixel in a data set. Such a full characterization is important because a target or material of interest may appear against a variety of background types in the same scene, and will undoubtedly be more distinguishable against some background types than others. Further, characterization of ground reflectance on a pixel-by-pixel basis is important for validating the quality of the atmospheric calibration results. We have developed an approach called the residual correlation method (RCM) for performing a full decomposition of each pixel into its component spectral elements. In this paper we describe preliminary results for the application of the RCM to hyperspectral pixel data. The work reported in this paper is from the first phase of a three-phase research project. In this phase we develop the basic methodology for subpixel material identification and test it against hyperspectral data for a well-known area. The RCM determines the presence of minerals and gives a linear approximation of the abundances of the minerals in each pixel. Phase one performs a nominal atmospheric calibration using a simple normalization technique. The second phase will be to determine more precise mineral abundances using a nonlinear demixing approach based on the band shape of relevant absorption features. Phase two will also explore various methods of presenting the results of a full demixing for each pixel. Phase three of this research will be to perform a more rigorous atmospheric calibration and to include that approach as an intrinsic part of the RCM.
Automated map generation and update from high-resolution multispectral imagery
Michael E. Bullock, Scott R. Fairchild, Tim J. Patterson, et al.
This paper describes the LOCATE TNG system, which generates map products directly from multispectral imagery in an automated fashion. The LOCATE TNG system uses spectral and spatial feature information to extract various types of man-made lines of communication (LOCs) from imagery and generate them in the form of digital vector maps. The generated maps may be compared against reference digital maps to automatically find new or changed LOCs. The original LOCATE (lines of communication apparent from thematic mapper evidence) system was designed and developed to use Landsat Thematic Mapper imagery having a resolution of 30 m. LOCATE TNG (the next generation) has been redesigned to also use the high-resolution multispectral imagery that will be available from the next generation of commercial satellites. These satellites will provide multispectral and panchromatic imagery having resolutions down to 4 m and 1 m, respectively, thus dramatically improving the information available for exploitation. LOCATE TNG employs a hierarchical algorithmic approach to extracting layers of LOCs (primary roads, secondary roads, etc.) that may be used for GIS applications.
Automatic extraction and identification of lines of communication from high-resolution multispectral imagery
Karen F. West, James R. Lersch, A. Evan Iverson, et al.
Use of remotely sensed imagery to map lines of communication or revise existing maps is currently a labor-intensive process. In this paper, we present a system for automatically extracting lines of communication from high-resolution (less than 5-meter spatial resolution) multispectral imagery. Positive Systems ADAR 5500 imagery is used to demonstrate system functionality. Our system includes automatic detection and identification algorithms, a geospatial database for storage and retrieval of results, a change detection component that compares newly detected lines of communication against stored database information, and a user interface that allows operator review and editing of automatically extracted results.
Toward automation of the extraction of lines of communication from multispectral images using a spatiospectral extraction technique
Adequate imagery for automated mapping of large areas became available with the successful launch of the 30-meter 7-band Thematic Mapper (TM) on Landsat 4 in 1982. Yet an adequate approach to automated line-of-communication (LOC) extraction continues to elude the remote sensing community. Perhaps the single biggest complicating factor is the inherently subpixel nature of the problem; almost all LOCs are narrower than current commercial sensor resolutions. Other complications include: spatial and temporal variability of LOC surface spectra, proximity to and abundance of spectrally similar materials, and atmospheric effects. We describe progress towards the detection and identification of LOCs using a technique that simultaneously extracts both spatial and spectral information. The approach currently uses a linear mixture model for simultaneously decomposing the image into fractional compositions and corresponding spectra using physical constraints. The algorithm differs from other approaches in that no traditional preprocessing or prior spatial or spectral information is required to extract the LOCs and their spectra. The algorithm has been successfully applied to TM and M-7 data. Results are presented.
Fusion and Sharpening
Evaluating an image-fusion algorithm with synthetic image generation tools
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors.
Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
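The unconstrained and partially constrained (sum-to-one) mixing models have closed-form least-squares solutions, which a short sketch can show. The 4-band, 3-endmember spectra and fractions below are invented for illustration:

```python
import numpy as np

# Hypothetical endmember spectra: 4 bands x 3 materials (columns).
E = np.array([[0.10, 0.60, 0.30],
              [0.15, 0.55, 0.35],
              [0.80, 0.20, 0.40],
              [0.85, 0.15, 0.45]])
true_f = np.array([0.5, 0.3, 0.2])        # fractions summing to one
pixel = E @ true_f                         # noiseless mixed-pixel spectrum

ones = np.ones(3)
G_inv = np.linalg.inv(E.T @ E)

# Unconstrained least-squares fractions.
f_ls = G_inv @ E.T @ pixel

# Partially constrained: closed-form correction that projects the
# unconstrained solution onto the sum-to-one hyperplane.
f_sc = f_ls + G_inv @ ones * (1 - ones @ f_ls) / (ones @ G_inv @ ones)

print(np.round(f_sc, 3))                   # recovers the true fractions
```

The fully constrained model (fractions also between zero and one) has no closed form and is typically solved with nonnegative least squares or quadratic programming, which matches the abstract's use of constrained optimization in the second stage.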
Fusion of airborne hyperspectral and multispectral images
Boris Zhukov, Dieter Oertel, Peter Strobl, et al.
The multi-sensor multi-resolution technique (MMT) was applied to fuse a multispectral image obtained by the multispectral scanner DAEDALUS-1268 with the resolution of 6 m and a hyperspectral image obtained by the imaging spectrometer DAIS-7915. The spatial resolution of the DAIS- 7915 image was additionally degraded to 24 m in order to simulate multi-sensor data fusion with a very different sensor resolution, as is typical for satellite sensors. Both sensors had been operated simultaneously on one aircraft. The MMT algorithm includes: (1) (unsupervised) classification of the multispectral image and mapping the classes with the high resolution of the multispectral scanner, (2) retrieval of the hyperspectral signatures of these classes from the hyperspectral image, and (3) generation of the merged image which combines the pixel size of the multispectral scanner and the spectral bands of the imaging spectrometer. Additional low-pass correction of the merged image allowed us to increase significantly its accuracy. The minimal pixel error of 6.9% was obtained when the classification was performed with 256 spectral classes.
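The three MMT steps (classify the high-resolution multispectral image, retrieve per-class hyperspectral signatures from the co-located low-resolution pixels, then synthesize a merged image with the fine pixel size and the full band set) can be sketched on a toy scene. The single-band "multispectral" image, trivial threshold classifier, and invented signatures below are illustrative assumptions; the low-pass correction step is omitted:

```python
import numpy as np

# Toy scene: 8x8 high-res multispectral (1 band) with two materials.
hi = np.zeros((8, 8))
hi[:, 4:] = 1.0                      # right half is material B

# Toy 4x4x5 low-res hyperspectral image: each pixel is a 2x2 block mean
# of per-material signatures.
sig = {0.0: np.array([1., 2., 3., 2., 1.]),   # material A spectrum
       1.0: np.array([4., 3., 2., 3., 4.])}   # material B spectrum
lo = np.zeros((4, 4, 5))
for y in range(4):
    for x in range(4):
        block = hi[2*y:2*y+2, 2*x:2*x+2]
        lo[y, x] = np.mean([sig[v] for v in block.ravel()], axis=0)

# Step 1: classify the high-res image (trivially, by thresholding here).
classes = (hi > 0.5).astype(int)

# Step 2: retrieve each class's hyperspectral signature from low-res
# pixels whose co-located high-res block is pure.
signatures = {}
for c in (0, 1):
    sel = [lo[y, x] for y in range(4) for x in range(4)
           if np.all((hi[2*y:2*y+2, 2*x:2*x+2] > 0.5).astype(int) == c)]
    signatures[c] = np.mean(sel, axis=0)

# Step 3: merge -- high-res pixel size, hyperspectral bands.
merged = np.array([[signatures[c] for c in row] for row in classes])
print(merged.shape)                  # (8, 8, 5)
```

In the paper's version, step 2 solves for class signatures in mixed low-resolution pixels rather than requiring pure ones, and a low-pass correction reconciles the merged image with the observed hyperspectral radiances.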
Multispectral image resolution enhancement to improve efficiency of spectral-analysis algorithms
Roberto Aloisi, Yvan Grabit
The ARSIS method (ARSIS: French acronym for 'spatial resolution enhancement by injection of structures') is based on multiresolution analysis techniques, especially on the wavelet transform. Its goal is to increase the spatial resolution of an image using the geometric structures extracted from a higher resolution image, given a sufficient level of spatial correlation between these two images. The method was developed on SPOT data to produce 10-meter-resolution multispectral images. This paper presents the studies conducted for the adaptation of ARSIS to allow the fusion of SPOT XS data with high resolution panchromatic aerial imagery. The models used in the original method proved ineffective for large resolution differences between the images, and various methods were tested to obtain acceptable results.
Quantitative comparison of multispectral image-sharpening algorithms
Tim J. Patterson, Robert Stu Haxton, Michael E. Bullock, et al.
This paper presents a quantitative comparison of a multispectral sharpening algorithm, which was introduced previously, with other standard techniques. The sharpening algorithms combine a high spatial resolution panchromatic image with a lower resolution multispectral image. The combination is based on a pseudo-inverse of the image formation equations. The paper begins with an introduction motivating the technique. Previous approaches to the sharpening problem are then outlined. This is followed by a description of two new approaches: the first is an improvement on the standard intensity-hue-saturation (IHS) transform, and the second is a sharpener introduced in previous papers. After descriptions of the sharpeners, the more important results of a series of experiments to evaluate sharpener performance are presented. A full series of tests was conducted in which low resolution multispectral data were synthesized from a high resolution scene, sharpened with various techniques, and compared to the original high resolution imagery. The most significant results are presented in this paper. A second test was conducted using both high and low resolution images collected of the same area. Sharpened low resolution multispectral images were compared to actual high resolution imagery of the same area.
Improved cross-sensor resolution enhancement for landcover products
Todd A. Jamison, Ernest A. Carroll
The overall accuracy of digital landcover products can often be improved through the use of fused imagery products generated by good cross-sensor resolution enhancement algorithms. This paper describes a process for fusing medium resolution multi-spectral data, such as Landsat and SPOT, with National Aerial Photographic Program (NAPP) photographs. The NAPP has a goal of providing coverage of the 48 contiguous United States every 10 years at high spatial resolution [i.e., 2 meter ground resolving distance (GRD)]. NAPP and National High Altitude Photography (NHAP) provide a wealth of current and historic high resolution data for environmental and natural resource studies. Despite their comprehensive coverage and high spatial resolution, these images are often overlooked for use in large-scale computerized classification problems, because: (1) they are photographic 'analog' data stored on film, not digital data on magnetic media; (2) comprehensive support data (e.g., aircraft x, y, z, roll, pitch, and yaw) is lacking; (3) they are not geocoded or orthorectified and random aircraft motion combined with sensor projection make it difficult to georegister; and (4) their radiometric quality varies both within and between images. This paper describes a technique for merging NAPP/NHAP data with lower resolution satellite data such as Landsat and SPOT which results in a fused image product that has the high spatial resolution of the NAPP/NHAP data and the spectral quality of the satellite data. The technique permits the user to utilize this higher resolution data to improve the quality and accuracy of their landcover, change detection, stress analysis, or other remote sensing products. Specific published results show an improvement in the overall accuracy from 79.4% correct classification using Landsat TM (25 meter GSD) alone to over 94.2% correct classification using higher resolution (5 meter GSD) data. 
We also discuss our future plans related to these techniques and their applications.
Cross-sensor resolution enhancement of hyperspectral images using wavelet decomposition
Laurent Peytavin
In the satellite remote-sensing domain, some technological and physical constraints work against acquiring high spatial resolution hyperspectral images, so there is a need to use high resolution panchromatic images as complementary data to the hyperspectral images. If the data are taken at nearly the same time, cross-sensor resolution enhancement techniques can produce a merged image as close as possible to what a high spatial resolution hyperspectral image would be. Multiresolution wavelet decomposition is a particularly well-suited tool for this process. Merging experiments on AVIRIS images (pixel size 20 meters) and USGS digital aerial images (pixel size 1 meter) have been conducted using this technique. They produced simulated AVIRIS images having a spatial resolution of 5 meters. Although the preliminary co-registration step remains critical, global spectral statistics and qualitative visual assessment show that the result is an undistorted hyperspectral image, a good candidate for direct exploitation.
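The wavelet-substitution idea can be sketched with a single-level 2-D Haar transform: keep each low-resolution spectral band as the approximation subband and inject the panchromatic image's detail subbands to supply spatial structure. The random 8x8 pan image and 4x4 spectral band below are placeholders; real pipelines use deeper decompositions and co-registered data:

```python
import numpy as np

def haar2(img):
    # Single-level 2-D Haar transform: approximation + 3 detail subbands.
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    # Exact inverse of haar2.
    out = np.zeros((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

np.random.seed(0)
pan = np.random.rand(8, 8)           # high-resolution panchromatic image
spec_band = np.random.rand(4, 4)     # one low-resolution spectral band

# Merge: spectral band as the approximation, pan details for spatial structure.
_, h, v, d = haar2(pan)
merged = ihaar2(spec_band, h, v, d)
print(merged.shape)                  # (8, 8)
```

Because the approximation subband is taken unchanged from the spectral band, the block means of the merged image reproduce the original low-resolution radiometry, which is the property that keeps the fused product spectrally "non-disturbed."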
Techniques and Applications
Nonlinear mean-square estimation with applications in remote sensing
An approach to image modeling based on nonlinear mean-square estimation that does not assume a functional form for the model is described. The relationship between input and output images is represented in the form of a lookup table that can be efficiently computed from, and applied to, images. Three applications are presented to illustrate the utility of the technique in remote sensing. The first illustrates how the method can be used to estimate the values of physical parameters from imagery. Specifically, we estimate the topographic component (i.e., the variation in brightness caused by the shape of the surface) from multispectral imagery. The second application is a nonlinear change detection algorithm which predicts one image as a nonlinear function of another. In cases where the frequency of change is large (e.g., due to atmospheric and environmental differences), the algorithm is shown to be superior in performance to linear change detection. In the last application, a technique for removing wavelength-dependent, space-varying haze from multispectral imagery is presented. The technique uses the IR bands, which are not affected significantly by haze, to predict the visible bands. Results show a significant reduction in haze over the area considered. Additional application areas are also discussed.
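The lookup-table estimator is just the conditional mean of the output image, tabulated over quantization bins of the input, with no functional form assumed. The quadratic toy relationship and bin count below are illustrative assumptions:

```python
import numpy as np

np.random.seed(0)

# Toy pair of images related by an unknown nonlinear function plus noise.
x = np.random.uniform(0, 1, 10000)
y = x ** 2 + np.random.normal(0, 0.01, x.size)

# Build the estimator as a lookup table: the conditional mean of y
# within each quantization bin of x.
bins = 64
idx = np.minimum((x * bins).astype(int), bins - 1)
table = np.array([y[idx == i].mean() for i in range(bins)])

def predict(xq):
    # Apply the model: index the table by the quantized input value.
    i = min(int(xq * bins), bins - 1)
    return table[i]

print(predict(0.5))   # roughly 0.25, recovering x**2 without assuming its form
```

For multiband inputs the table is indexed by a joint quantization of the input bands; applying the model to a whole image is then a single table lookup per pixel.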
Water vapor retrieval over many surface types
In this paper we present a study of water vapor retrieval over many natural surface types, which could be valuable for multispectral instruments, using the existing continuum interpolated band ratio (CIBR) for the 940 nm water vapor absorption feature. An atmospheric code (6S) and 562 spectra were used to compute the top-of-the-atmosphere radiance near the 940 nm water vapor absorption feature in steps of 2.5 nm as a function of precipitable water (PW). We derive a novel technique called 'atmospheric pre-corrected differential absorption' (APDA) and show that APDA performs better than the CIBR over many surface types.
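The CIBR itself is a one-line calculation: the radiance measured in the absorption channel divided by a continuum level linearly interpolated from two window channels. The channel wavelengths are standard for this feature, but the radiance values below are invented for illustration:

```python
# Continuum interpolated band ratio (CIBR) for the 940 nm water vapor feature.
l_865, l_1025 = 100.0, 90.0      # window-channel radiances (illustrative units)
l_940 = 60.0                     # absorption-channel radiance

# Linear interpolation weights for 940 nm between the 865 and 1025 nm windows.
w = (940 - 865) / (1025 - 865)
continuum = (1 - w) * l_865 + w * l_1025

cibr = l_940 / continuum         # lower ratio -> deeper absorption -> more PW
print(round(cibr, 3))
```

The retrieval then maps the ratio to precipitable water through a calibration curve computed with a radiative transfer code such as 6S; APDA modifies the ratio by pre-correcting for atmospheric path radiance before it is formed.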
Multisensor evaluation research: enhancement techniques and sensor evaluation results
Gary A. Duncan, William H. Heidbreder, James Hammack, et al.
This paper compares imagery from four mapping sensors and evaluates the utility of the imagery to support the function of cartographic feature analysis. The four sensors examined are: Landsat TM, SPOT, 5 M Landsat (simulated), and 1 M electro-optical. The feature analysis process is described, and a proposed experiment designed to compare feature analysis utility is discussed. The proposed experiment includes the use of both monoscopic and stereo imagery, as well as application of visual image enhancement techniques and supporting algorithms that facilitate image interpretation. The described techniques represent an initial basis for study of more automated multispectral and multisensor techniques. Also, the applicability of using multiresolution and multisensor techniques is discussed.
Accurate top-of-the-atmosphere albedo determination from multiple views of the MISR instrument
Christoph C. Borel, Siegfried A. W. Gerstl, Carmen Tornow
Changes in the Earth's surface albedo impact the atmospheric and global energy budget and contribute to global climate change. It is now recognized that multi-spectral and multi-angular views of the Earth's top of the atmosphere (TOA) albedo are necessary to provide information on albedo changes. In this paper we describe four semi-empirical bidirectional reflectance factor (BRF) models which are inverted for two and three unknowns. The retrieved BRF parameters are then used to compute the TOA spectral albedo for clear sky conditions. Using this approach we find that the albedo can be computed with better than 1% error in the visible and 1.5% in the near infrared (NIR) for most surface types.
Atmospheric Correction and Radiometric Calibration
Atmospheric correction with haze removal including a haze/clear transition region
Rudolf Richter
High spatial resolution satellite imagery such as Landsat TM and SPOT HRV often contains large regions covered with haze. By applying suitable thresholds, clear areas as well as haze and cloud regions can be separated and stored as binary images. Cloud areas are identified in the method presented here, but will not be included in the haze correction, since the ground information cannot be retrieved in optically thick regions. For each spectral band, the haze removal algorithm matches the histogram of the haze areas to the histogram of the clear part of the scene. Due to the binary nature of thresholding, this algorithm often causes sharply defined edges at the borders of haze regions. Therefore, a haze boundary region is introduced to generate a smoother transition from haze to clear areas. Two transition methods with a leveled boundary are presented: with fixed weighting and with histogram-dependent weighting. Additionally, the image is atmospherically corrected to remove the remaining atmospheric influence and obtain ground reflectance data.
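The per-band core of the method, matching the haze region's histogram to the clear region's, can be sketched via the empirical CDF. The synthetic radiance distributions and the fixed transition weight below are illustrative assumptions:

```python
import numpy as np

np.random.seed(0)

# Toy single-band scene: clear pixels, plus hazy pixels whose radiances are
# raised and compressed by scattering.
clear = np.random.normal(50, 10, 5000).clip(0, 255)
haze = np.random.normal(50, 10, 2000).clip(0, 255) * 0.6 + 60

# Histogram matching: map each hazy value through its CDF rank into the
# clear region's intensity distribution.
haze_sorted = np.sort(haze)
clear_quantiles = np.quantile(clear, np.linspace(0, 1, 256))
ranks = np.searchsorted(haze_sorted, haze) / haze.size
corrected = np.interp(ranks, np.linspace(0, 1, 256), clear_quantiles)

# Haze/clear transition: in the boundary region, blend corrected and original
# values (here with a fixed weight; the paper's second variant makes the
# weight histogram-dependent).
wgt = 0.5
blended = wgt * corrected + (1 - wgt) * haze

print(round(corrected.mean(), 1), round(clear.mean(), 1))
```

Matching through ranks rather than raw values means the corrected haze region inherits the clear region's full intensity distribution, not just its mean.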
Adaptive multispectral normalization system
Mary R. Lawler-Covell, Karen F. West, Michael W. Kiefer, et al.
A multispectral normalization processing system has been developed to produce percent reflectance maps from multispectral imagery (MSI) in the 0.4 to 2.5 micron wavelength range. It is adaptive to multiple spatial resolutions, supporting resolutions in the 0.25-meter to 30-meter range. The normalization process takes advantage of known naturally occurring and man-made materials in the image to remove the effects of atmospheric haze and sensor gain contributions for each multispectral band. The output product is a percent reflectance map for each multispectral band. Although the normalization technique is well known, the MSI normalization system (MSINS) provides a simple, adaptive, robust graphical user interface for normalizing multispectral imagery from various sensor platforms. Over 130 different surface material spectra have been collected from reputable sources in the literature and other spectral material libraries and installed in the MSINS Materials Spectral Information Database (MSID). The MSID has been designed to allow the addition of new material spectra into the system via a menu interface. A neural-net-based region grower has been developed to minimize user interaction and increase the robustness and repeatability of the normalization. New multispectral sensor platforms can be introduced into the system quickly via a menu interface. The current system was developed and tested using Landsat Thematic Mapper, ERIM M-7 mapper, Positive Systems ADAR 5500, and ITRES casi multispectral imagery.
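Normalization against known in-scene materials is commonly done with the empirical line method: per band, fit a gain and offset relating library reflectance to measured digital numbers, then invert the fit for every pixel. The two-target, two-band numbers below are invented for illustration; the abstract does not specify this exact formulation:

```python
import numpy as np

# Hypothetical: two in-scene reference materials with known library
# reflectance per band, and their measured image radiances (digital numbers).
library_refl = np.array([[0.05, 0.08],     # dark target, 2 bands
                         [0.60, 0.55]])    # bright target, 2 bands
measured = np.array([[20.0, 25.0],
                     [130.0, 120.0]])

# Per band, fit DN = gain * reflectance + offset from the two targets;
# the offset absorbs atmospheric haze, the gain absorbs sensor response.
gains, offsets = [], []
for b in range(2):
    g = (measured[1, b] - measured[0, b]) / (library_refl[1, b] - library_refl[0, b])
    o = measured[0, b] - g * library_refl[0, b]
    gains.append(g)
    offsets.append(o)

def to_reflectance(dn, band):
    # Invert the fit to convert any pixel's DN to reflectance.
    return (dn - offsets[band]) / gains[band]

print(round(to_reflectance(75.0, 0), 3))
```

With more than two reference materials per band, the same fit is done by least squares, which also gives a residual-based check on the chosen references.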
Radiometric calibration archive for Landsat Thematic Mapper (TM)
For more than ten years, the Landsat Thematic Mapper (TM) has collected data of the Earth's surface. Although the instrument is equipped with an internal calibrator, routine temporal analysis of its radiometry has not been performed. Recently, a project has been initiated to recover TM radiometric calibration data from the Landsat archive to form a TM calibration archive. This archive consists of calibration data collected at several different time scales: single orbit, instrument lifetime, and an intermediate scale. Analysis of these data is providing insights into how detector response, as well as internal calibrator performance, has changed over time. Initial results are shown that indicate changes in detector gain and characterize instrument anomalies within a single orbit. Information obtained from this project will allow a more accurate calibration of data that is already in the Landsat archive. Additionally, it will directly impact development of correction algorithms for Landsat 7.
Fractal geometry for atmospheric correction and canopy simulation
Carmen Tornow
Global climate modeling needs a good parameterization of the vegetative surface. Two of the most important parameters are the leaf area index (LAI) and the fraction of absorbed photosynthetically active radiation (FPAR). In order to derive these values from spaceborne and airborne spectral radiance measurements, one needs information on the actual atmospheric state as well as good canopy models. First, we developed a retrieval method for the optical depth to perform an atmospheric correction of remote sensing data. The atmospheric influence reduces the global image contrast and acts as a low-pass filter. We found that the autocorrelation function ACF_λ(h) of the image depends on the global image contrast C and on the fractal dimension s. Using multiple regression, the spectral optical depth in the visible range can be estimated from C and s with an absolute accuracy of 0.021. This method was applied and tested for a number of rural TM scenes. Atmospheric correction allows us to calculate the canopy reflectance from the image data. The relationships between the canopy reflectance and LAI or FPAR can be determined from canopy radiative transfer modeling. Row and shadowing effects influence the bi-directional reflectance distribution function (BRDF), since the leaves and stems are real 3D objects. In order to use a ray tracer for 3D radiative transfer simulation, the canopy should be described by simple shapes (discs, cylinders) and polygons. Lindenmayer systems, which are based on the ideas of fractal geometry, allow the construction of plants and trees in this way. We have created simple artificial plants and arranged them into rows to study shadowing and row effects and to compute the BRDF in various spectral channels.
Sensors and Sensor Data Processing
In-flight refocusing of the SPOT-1 HRV cameras
Aime Meygret, Dominique Leger
A successful in-flight refocusing experiment based on image processing is described. Each of the two SPOT-1 HRV cameras was refocused with respect to the other by analyzing the spectra of images taken simultaneously by both cameras. The experiment was carried out during the autumn of 1994, and its results are presented: (1) the estimated optimal position of the rear corrective lens of the camera, (2) the MTF improvement and its quality across the field of view in terms of homogeneity and astigmatism, and (3) the validation of a theoretical geometric defocusing model that gives the change in MTF as a function of corrective lens position. We conclude by noting the high accuracy of the method (2%) and its sensitivity to the spectral content of the viewed landscapes.
POLDER level-1 processing algorithms
Olivier Hagolle, Agnes Guerry, Laurent Cunin, et al.
POLDER (polarization and directionality of the Earth reflectances) is a French instrument that will be flown on board the ADEOS (Advanced Earth Observing Satellite) polar orbiting satellite, scheduled for launch in August 1996. POLDER is a multispectral imaging radiometer/polarimeter designed to collect global and repetitive observations of the solar radiation reflected by the Earth/atmosphere system, with a wide field of view (2400 km) and a moderate geometric resolution (6 km). The instrument concept is based on telecentric optics, on a rotating wheel carrying 15 spectral filters and polarizers, and on a bidimensional CCD detector array. In addition to the classical measurement and mapping characteristics of a narrow-band imaging radiometer, POLDER has a unique ability to measure polarized reflectances at three different polarization angles (for three of its eight visible and near-infrared spectral bands) and to observe target reflectances from 14 different viewing directions during a single satellite pass. All the data transmitted by POLDER are processed in the POLDER Processing Centre. Level 1 products include geometrically and radiometrically corrected data; level 2 products are elementary geophysical products created from a single satellite pass; and level 3 products are geophysical syntheses from several passes of the satellite. This paper presents the radiometric and geometric algorithms of the level 1 processing: new algorithms developed for the removal of sensor artefacts (smearing, stray light), for the radiometric mode inversion (normalized radiance and polarization parameter extraction), and for the geometric projection of the data onto a unique grid are explained.
Real-time processing of midwave-infrared imaging spectrometer data
Richard Preston, Mark C. Norton, Robert W. Crow
A custom six-channel digital processing card has been built for the mid-wave infrared spectral imager (MIRSI) for spectrally classifying objects in the 2.9 - 4.9 micron spectral region. The card, called the real-time processor, fits in a standard ISA PC slot and operates on the 12-bit data from the MIRSI InSb camera, performing six spectrally weighted sums on each focal plane array row (i.e., the spectrum of a spatial pixel). Objects in the scene are classified according to the amplitudes of the weighted sums, where the weighting functions are obtained using a least squares technique. An optional analog interface has been added to the real-time processor to allow RS-170 input of HSI data from a video tape or a standard video camera output. A preprocessing card is being designed to provide bad pixel substitution and to apply gains and offsets independently to each pixel prior to the spectral classification, providing calibrated data to the real-time processor.
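The least-squares weighting scheme can be illustrated in a few lines. This is a sketch under assumed array shapes (the real processor evaluates six fixed weighted sums in hardware); the function names and the one-hot target encoding are assumptions, not the paper's formulation.

```python
import numpy as np

def fit_weights(train_spectra, targets):
    """Least-squares weighting functions: find W minimizing
    ||train_spectra @ W - targets||^2 (illustrative)."""
    # train_spectra: (n_samples, n_bands); targets: (n_samples, n_channels)
    W, *_ = np.linalg.lstsq(train_spectra, targets, rcond=None)
    return W  # (n_bands, n_channels)

def classify_row(row_spectra, W):
    """One spectrally weighted sum per output channel for each spatial
    pixel; the largest sum picks the class."""
    return (row_spectra @ W).argmax(axis=1)
```

Each column of W plays the role of one of the six weighting functions applied across the spectral axis of a focal-plane-array row.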
Evaluation of onboard hyperspectral-image-compression techniques for a parallel push-broom sensor
Scott D. Briles
A single hyperspectral imaging sensor can produce frames with spatially continuous rows of differing, but adjacent, spectral wavelengths. If the frame sample rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. Data partitions examined extend from operating on a single hyperspectral frame individually to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression, and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation-coefficient error. Implementation issues are considered in algorithm development.
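Of the schemes named, differential pulse code modulation is the simplest to sketch. The following is an illustrative lossless along-track variant, assuming the data cube is an integer array with the along-track axis first; it is not the paper's implementation, and the JPEG- and wavelet-based alternatives are not shown.

```python
import numpy as np

def dpcm_encode(frames):
    """DPCM along the along-track axis: keep the first frame plus
    frame-to-frame differences, which are cheap to entropy-code."""
    return frames[0].copy(), np.diff(frames, axis=0)

def dpcm_decode(first, diffs):
    """Lossless reconstruction by cumulative summation."""
    return np.concatenate([first[None], first[None] + np.cumsum(diffs, axis=0)])
```

Because successive frames are shifted by only one row, adjacent frames are highly correlated and the differences concentrate near zero, which is what makes this partition attractive at sensor data rates.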
Image reconstruction algorithms for DOIS: a diffractive optic image spectrometer
Denise M. Lyons, Kevin J. Whitcomb
The diffractive optic imaging spectrometer, DOIS, is a compact, economical, rugged, programmable multispectral imager. The design implements a conventional CCD camera and emerging diffractive optical element (DOE) technology in an elegant configuration, adding spectroscopy capabilities to current imaging systems (Lyons 1995). This paper reports on the visible prototype DOIS that was designed, fabricated, and characterized. Algorithms are presented for simulation and for post-detection processing with digital image processing techniques; the latter improves the spectral resolution by removing unwanted blurred components from the spectral images. DOIS is a practical image spectrometer that can be built to operate at ultraviolet, visible, or infrared wavelengths for applications in surveillance, remote sensing, law enforcement, environmental monitoring, laser communications, and laser counterintelligence.
HYDICE postflight data processing
William S. Aldrich, Mary E. Kappus, Ronald G. Resmini, et al.
The hyperspectral digital imagery collection experiment (HYDICE) sensor records instrument counts for scene data, in-flight spectral and radiometric calibration sequences, and dark current levels onto an AMPEX DCRsi data tape. Following flight, the HYDICE ground data processing subsystem (GDPS) transforms selected scene data from digital numbers (DN) to calibrated radiance levels at the sensor aperture. This processing includes: dark current correction, spectral and radiometric calibration, conversion to radiance, and replacement of bad detector elements. A description of the algorithms for post-flight data processing is presented. A brief analysis of the original radiometric calibration procedure is given, along with a description of the development of the modified procedure currently used. Example data collected during the 1995 flight season are shown in both uncorrected and processed form to demonstrate the removal of apparent sensor artifacts (e.g., non-uniformities in detector response over the array) by this transformation.
Detection and Classification II
Nonlinear unmixing of simulated MightySat FTHSI data for target detection limits in a humid tropical forest scene
Frederick P. Portigal, Leonard John Otten III
In this paper we demonstrate the detection limits of the Kestrel Fourier transform hyper-spectral imager (FTHSI) on the MightySat II.I for detecting target spectra in a complex natural scene. We simulate the MightySat II.I FTHSI data using a combination of Landsat TM-based endmember spectra derived from a scene of La Mosquitia, Honduras, and library spectra measured in the field at 3 nm spectral resolution. The TM endmember images define the mixing space used to produce a simulated hyper-spectral reflectance image. Fractions define how the field-measured endmember spectra are mixed in order to produce the simulated hyper-cube. The HIMP model is used to predict the radiance as observed by the FTHSI. Results indicate that this technique allows the detection of tropical camouflage in a natural tropical background with an accuracy of 95.7 percent when the camouflage is mixed at one tenth of one percent. At a one percent mixing ratio the detection accuracy rises to 99.7 percent. At five percent and beyond the detection accuracy is one hundred percent. This physically based non-linear unmixing technique is two orders of magnitude more sensitive than traditional linear unmixing or matched filtering.
Poster Session
Calibration of MODTRAN3 with PGAMS observational data for atmospheric correction applications
Stephen Schiller, Jeffery C. Luvall, Jere Justus
The portable ground-based atmospheric monitoring system (PGAMS) is a spectroradiometer system that provides a set of in situ solar and hemispherical sky irradiance, path radiance, and surface reflectance measurements. The observations provide input parameters for the calibration of atmospheric algorithms applied to multispectral and hyperspectral images in the visible and near-infrared spectrum. Presented in this paper are the results of comparing hyperspectral surface radiances calculated using MODTRAN3 with PGAMS field measurements for blue tarp and grass surface targets. Good agreement was obtained by constraining MODTRAN3 to a rural atmospheric model with a visibility and surface reflectance calibrated from PGAMS observations. This was accomplished even though the sky conditions were unsteady, as indicated by a varying aerosol extinction. Average absolute differences of 11.3 and 7.4 percent over the wavelength range from 400 to 1000 nm were obtained for the grass and blue tarp surfaces, respectively. However, transformation to at-sensor radiances requires additional constraints on the single-scattering albedo and scattering phase function so that they exhibit the specific real-time aerosol properties rather than those of a seasonal average model.
Preprocessing for the digital airborne imaging spectrometer DAIS 7915
Peter Strobl, Rudolf Richter, Frank Lehmann, et al.
The digital airborne imaging spectrometer DAIS 7915 is a new hyperspectral scanner developed for scientific and commercial applications. The design of the sensor makes dedicated preprocessing necessary prior to any data evaluation. Therefore, a facility is being developed at DLR to meet the needs of operational preprocessing. In addition, this facility is used for continuous quality control to support the hardware team in improving the performance of the instrument. The implementation of the software and the algorithms currently used are presented in this paper.
Detection and Classification II
Analysis of the computed-tomography imaging spectrometer by singular-value decomposition
Michael R. Descour, Robert A. Schowengerdt, Eustace L. Dereniak
A linear spatially variant imaging system, such as the computed-tomography imaging spectrometer (CTIS), is naturally described by means of a system matrix that represents the mapping from object space (three-dimensional, in this case) to image space (two-dimensional). Such a matrix can be analyzed to reveal a set of vectors {u} that span the object space. In addition, a spectrum of singular values is obtained that defines the contribution of each vector u to the image space. We present the results of such an analysis for two simulated CTIS systems, each with a different number of dispersed images, and an experimental CTIS. The structure of the vectors is consistent with expectations due to the central-slice theorem.
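The analysis the abstract describes can be reproduced on a toy system matrix. The matrix values below are purely illustrative, not a real CTIS matrix; the point is only the decomposition itself.

```python
import numpy as np

# Toy system matrix H for a linear spatially variant system, mapping a
# 3-element object space to a 4-element image space (values illustrative).
H = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.2, 0.2, 0.2]])

U, s, Vt = np.linalg.svd(H, full_matrices=False)
# Rows of Vt are the object-space vectors {u}; each singular value in s
# measures how strongly the corresponding vector contributes to image
# space. Near-zero singular values flag object-space components the
# system cannot reconstruct.
```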
Poster Session
Image correction and data quality control for the modular optoelectronic multispectral/stereo scanner MOMS 02
Peter Strobl
During the German D2 mission (STS 55) on the Space Shuttle in May 1993, the modular optoelectronic multispectral/stereo scanner MOMS 02 acquired more than 1000 scenes within 48 data takes. This sensor is a push-broom instrument with 8 CCD arrays for various panchromatic and multispectral bands. Because laboratory calibration was not available at that time, a statistical approach was chosen to correct for system artifacts. This paper presents the correction technique and its implementation. To account for the scene-adaptive character of the algorithm, the improvement achieved for each individual image had to be verified. Therefore, a one-page image quality protocol was developed which presents a set of relevant parameters in a combination of image, graph, and text displays. The collected pages are suitable to serve as a catalog for access and recall of the correction results at any time without saving the actual image data. The concept of automatically generated and printed data quality protocols proved to be an efficient tool in surveying the processing of large amounts of data. Therefore, it will be integrated into the data processing for the MOMS PRIRODA mission, which will start in mid-1996.
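A scene-statistics correction of push-broom striping in the spirit of the abstract can be sketched as column-statistics equalization: each detector column's mean and standard deviation are matched to the scene-global values. This is an illustrative stand-in under that assumption, not the published MOMS 02 algorithm.

```python
import numpy as np

def destripe(image):
    """Equalize per-column (per-detector) mean and standard deviation to
    the scene-global statistics (illustrative sketch)."""
    col_std = image.std(axis=0)
    col_std = np.where(col_std == 0, 1.0, col_std)  # guard dead columns
    # Standardize each column, then rescale to the global statistics
    z = (image - image.mean(axis=0)) / col_std
    return z * image.std() + image.mean()
```

The scene-adaptive character noted in the abstract follows directly: the correction depends on the statistics of each individual image, which is why the authors verify the improvement per scene.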
Atmospheric correction of DAIS hyperspectral image data
Rudolf Richter
A software package for the atmospheric correction of airborne hyperspectral and multispectral image data has been developed at DLR. Reflective and thermal spectral channels are taken into account employing the MODTRAN code to calculate the radiative transfer. A list of four airborne sensors is currently included in a menu-driven user-friendly environment. New instruments may easily be added. This paper focuses on the algorithms employed for the processing of DAIS imagery. The DAIS is a digital airborne imaging spectrometer with 79 spectral bands covering the optical spectrum from the visible to the thermal infrared region.