Proceedings Volume 9472

Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXI

Volume Details

Date Published: 11 June 2015
Contents: 13 Sessions, 56 Papers, 0 Presentations
Conference: SPIE Defense + Security 2015
Volume Number: 9472

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9472
  • Spectral Detection, Identification, and Quantification
  • Spectral Data Compression and Dimensionality Reduction
  • Spectral Signature Modeling, Measurements, and Applications I
  • SHARE 2012 Analysis Results
  • Hyperspectral Target Detection
  • Novel Mathematically-Inspired Methods of Processing Hyperspectral Airborne and Satellite Imagery: Novel Mathematics Algorithms I
  • Novel Mathematically-Inspired Methods of Processing Hyperspectral Airborne and Satellite Imagery: Novel Mathematics Algorithms II
  • Spectral Signature Modeling, Measurements, and Applications II
  • Spectral Sensor Design, Development, and Characterization
  • Data Fusion and Multiple Modality Spectral Applications
  • Multispectral Applications
  • Poster Session
Front Matter: Volume 9472
This PDF file contains the front matter associated with SPIE Proceedings Volume 9472, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Spectral Detection, Identification, and Quantification
Chemical agent resistant coating (CARC) detection using Hyper-Spectral Imager (HSI) and a newly developed Feature Transformation (FT) detection method
Hai-Wen Chen, Mike McGurr, Mark Brickhouse
Chemical Agent Resistant Coating (CARC) is the term for the paint commonly applied to military vehicles that provides protection against chemical and biological weapons. In this paper, we present results for detecting CARC in two different colors (Green and Beige). A High-Fidelity Target Insertion Method has been developed. This method allows one to insert the target radiance into any HSI sensor scene while still preserving the sensor spatio-spectral noise at all pixel positions. We show that the reduced (400-1,000nm) spectral range is sufficient for detecting Beige CARC and Green CARC Types I and II using several current state-of-the-art HSI target detection methods. Furthermore, we present a newly developed Feature Transformation (FT) algorithm. In essence, the FT method, by transforming the original features to a different feature domain (e.g., the Fourier, wavelet packet, and local cosine domains), may considerably increase the statistical separation between the target and background probability density functions, and thus may significantly improve target detection and identification performance, as evidenced by the test results in this paper. We show that by differentiating the original spectral features (an operation that can be considered first-level Haar wavelet high-pass filtering), we can completely separate Beige CARC from the background using a single band at 650nm, and completely separate Green CARC from the background using a single band at 1180nm, leading to perfect detection results. We have developed an automated best-spectral-band selection process that can rank the available spectral bands from best to worst for target detection. Finally, we have also developed an automated cross-spectrum fusion process to further improve detection performance in the lower spectral range (<1,000nm) by selecting the best spectral band pair.
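To illustrate the differencing step described above, a minimal sketch (ours, not the authors' code) of a first-difference feature transform over a hyperspectral cube, assuming a NumPy array layout of (rows, cols, bands):

import numpy as np

def first_difference_features(cube):
    # Differentiate each pixel spectrum along the band axis; this acts as a
    # first-level Haar-like high-pass filter on the spectral features.
    # cube: (rows, cols, bands) radiance or reflectance array (assumed layout).
    return np.diff(cube, axis=-1)

# A single transformed band can then be thresholded as a one-feature detector.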
Metrics for the comparative evaluation of chemical plume identification algorithms
E. Truslow, S. Golowich, D. Manolakis, et al.
The detection of chemical agents with hyperspectral longwave infrared sensors is a difficult problem with many civilian and military applications. System performance can be evaluated by comparing the detected gases in each pixel with the ground truth for each pixel using a confusion matrix. In the presence of chemical mixtures the confusion matrix becomes extremely large and difficult to interpret due to its size. We propose summarizing the confusion matrix using simple scalar metrics tailored for specific applications. Ideally, an identifier should determine exactly which chemicals are in each pixel, but in many applications it is acceptable for the output to contain additional chemicals or lack some constituent chemicals. A performance metric for identification problems should give partially correct results a lower weight than completely correct results. The metric we propose using, the Dice metric, weighs each output by its similarity with the truth for each pixel, thereby giving less importance to partially correct outputs, while still giving full scores only to exactly correct results. Using the Dice metric we evaluated the performance of two identification algorithms: an adaptive cosine estimator (ACE) detector bank approach, and Bayesian model averaging (BMA). Both algorithms were tested individually on real background data with synthetically embedded plumes; performance was evaluated using standard detection performance metrics, and then using the proposed identification metric. We show that ACE performed well as a detector but poorly as an identifier; however, BMA performed poorly as a detector but well as an identifier. Cascading the two algorithms should lead to a system with a substantially lower false alarm rate than using BMA alone, and much better identification performance than the ACE detector bank alone.
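A minimal sketch of the Dice metric as described (our illustration; the chemical labels are hypothetical): for a pixel whose identifier output is the set D and whose ground truth is the set T, the score is 2|D ∩ T| / (|D| + |T|).

def dice_score(detected, truth):
    # Dice similarity between the set of chemicals reported for a pixel and
    # the set actually present; partially correct outputs score below 1.0.
    if not detected and not truth:
        return 1.0  # both empty: trivially correct
    return 2.0 * len(detected & truth) / (len(detected) + len(truth))

# Example with hypothetical labels: one correct and one spurious chemical.
print(dice_score({"SF6", "NH3"}, {"SF6"}))  # 0.667, partially correct
print(dice_score({"SF6"}, {"SF6"}))         # 1.0, exactly correct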
Pattern recognition in hyperspectral persistent imaging
We give updates on a persistent imaging experiment dataset, being considered for public release in the foreseeable future, and present additional observations analyzing a subset of the dataset. The experiment is a long-term collaborative effort among the Army Research Laboratory, Army Armament RDEC, and Air Force Institute of Technology that focuses on the collection and exploitation of longwave infrared (LWIR) hyperspectral imagery. We emphasize the inherent challenges associated with using remotely sensed LWIR hyperspectral imagery for material recognition, and show that this data type violates key data assumptions conventionally used in the scientific community to develop detection/ID algorithms, i.e., normality, independence, identical distribution. We treat LWIR hyperspectral imagery as longitudinal data, aim at proposing a more realistic framework for material recognition as a function of spectral evolution through time, and discuss limitations. The defining characteristic of a longitudinal study is that objects are measured repeatedly through time and, as a result, data are dependent. This is in contrast to cross-sectional studies, in which the outcomes of a specific event are observed by randomly sampling from a large population of relevant objects and data are assumed independent. Researchers in the remote sensing community generally assume the problem of object recognition to be cross-sectional. But through a longitudinal analysis of a fixed site with multiple material types, we quantify and argue that, as data evolve through a full diurnal cycle, pattern recognition problems are longitudinal in nature, and that applying this knowledge may lead to better algorithms.
Hyperspectral image-based methods for spectral diversity
Alejandro Sotomayor, Ollantay Medina, J. Danilo Chinea, et al.
Hyperspectral images are an important tool to assess ecosystem biodiversity. To obtain more precise analyses of biodiversity indicators that agree with indicators obtained using field data, estimates of spectral diversity calculated from images have to be validated against field-based diversity estimates. Plant species richness is one of the most important indicators of biodiversity. This indicator can be measured in hyperspectral images under the Spectral Variation Hypothesis (SVH), which states that spectral heterogeneity is related to spatial heterogeneity and thus to species richness. The goal of this research is to capture spectral heterogeneity from hyperspectral images for a terrestrial neotropical forest site using the Vector Quantization (VQ) method and then use the result for prediction of plant species richness. The results are compared with those of Hierarchical Agglomerative Clustering (HAC). Validation is done by calculating the Pearson correlation coefficient between the Shannon entropy from actual field data and the Shannon entropy computed from the images. One of the advantages of developing more accurate analysis tools would be the extension of the analysis to larger zones. Multispectral imagery with lower spatial resolution has also been evaluated as a prospective tool for spectral diversity.
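A minimal sketch of the entropy-based validation step, assuming cluster labels from VQ (or HAC) per plot and species labels from the field survey; the per-plot entropies would then be correlated with Pearson's r:

import numpy as np
from scipy.stats import pearsonr

def shannon_entropy(labels):
    # Shannon entropy (nats) of a vector of cluster or species labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# entropy_img and entropy_field are hypothetical per-plot arrays of entropies
# computed from image clusters and from field species lists, respectively:
# r, _ = pearsonr(entropy_img, entropy_field)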
Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal
Multispectral imaging (MSI) was implemented to develop a burn diagnostic device that will assist burn surgeons in planning and performing burn debridement surgery by classifying burn tissue. In order to build a burn classification model, training data that accurately represent the burn tissue are needed. Acquiring accurate training data is difficult, in part because the labeling of raw MSI data to the appropriate tissue classes is prone to errors. We hypothesized that these difficulties could be surmounted by removing outliers from the training dataset, leading to an improvement in classification accuracy. A swine burn model was developed to build an initial MSI training database and study an algorithm's ability to classify clinically important tissues present in a burn injury. Once the ground-truth database was generated from the swine images, we developed a multi-stage method based on Z-tests and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data in wavelength space, and test accuracy improved from 63% to 76%. Establishing this simple method of conditioning the training data improved the accuracy of the algorithm to match the current standard of care in burn injury assessment. Given that there are few burn surgeons and burn care facilities in the United States, this technology is expected to improve the standard of burn care for patients with less access to specialized facilities.
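A minimal sketch of a univariate Z-score screen of the kind described, assuming training spectra stacked as rows of a matrix (the threshold is illustrative, not the authors' value):

import numpy as np

def remove_outliers_z(X, z_thresh=3.0):
    # Drop training spectra whose value in any band lies more than z_thresh
    # standard deviations from that band's mean (per-wavelength Z-test).
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0, ddof=1))
    keep = (z < z_thresh).all(axis=1)
    return X[keep], keep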
Person detection in hyperspectral images via skin segmentation using an active learning approach
Ion Marqués, Manuel Graña, Stephanie M. Sanchez, et al.
Human skin detection is a computer vision problem that has been widely researched in color images. In this article we deal with this task as an interactive segmentation problem in hyperspectral outdoor images. We have focused on the problem of skin identification in hyperspectral cameras allowing a fine sampling of the light spectrum, so that the information gathered at each pixel is a high dimensional vector. The problem is treated as a classification problem, where we make use of active learning strategies to provide an interactive robust solution reaching high accuracy in a short training/testing cycle.
Spectral Data Compression and Dimensionality Reduction
Multi-pass encoding of hyperspectral imagery with spectral quality control
Steven Wasson, William Walker
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
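A minimal sketch of the spectral angle quality function used to drive the rate-distortion curves (our illustration, not the authors' encoder code):

import numpy as np

def spectral_angle(x, y):
    # Spectral angle (radians) between an original and a reconstructed pixel
    # spectrum; smaller angles indicate better spectral fidelity.
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))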
SLIC superpixels for efficient graph-based dimensionality reduction of hyperspectral imagery
Xuewen Zhang, Selene E. Chew, Zhenlin Xu, et al.
Nonlinear graph-based dimensionality reduction algorithms such as Laplacian Eigenmaps (LE) and Schroedinger Eigenmaps (SE) have been shown to be very effective at yielding low-dimensional representations of hyperspectral image data. However, the steps of graph construction and eigenvector computation required by LE and SE can be prohibitively costly as the number of image pixels grows. In this paper, we propose pre-clustering the hyperspectral image into Simple Linear Iterative Clustering (SLIC) superpixels and then performing LE- or SE-based dimensionality reduction with the superpixels as input. We then investigate how different superpixel size and regularity choices yield trade-offs between improvements in computational efficiency and accuracy of subsequent classification using the low-dimensional representations.
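A minimal sketch of the superpixel-then-graph pipeline, assuming scikit-image SLIC and a k-nearest-neighbor graph on superpixel mean spectra (all parameter values are illustrative, not those used in the paper):

import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import kneighbors_graph
from scipy.sparse import csgraph
from scipy.sparse.linalg import eigsh

def superpixel_laplacian_eigenmaps(cube, n_segments=2000, k=10, n_dims=10):
    # Pre-cluster the (rows, cols, bands) cube into SLIC superpixels.
    labels = slic(cube, n_segments=n_segments, compactness=0.1,
                  channel_axis=-1, start_label=0)
    n_sp = labels.max() + 1
    flat = cube.reshape(-1, cube.shape[-1])
    means = np.array([flat[labels.ravel() == i].mean(axis=0)
                      for i in range(n_sp)])
    # k-NN graph on superpixel mean spectra, then the normalized Laplacian.
    W = kneighbors_graph(means, k, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T)
    L = csgraph.laplacian(W, normed=True)
    # Smallest nontrivial eigenvectors give the low-dimensional coordinates.
    _, vecs = eigsh(L, k=n_dims + 1, which='SM')
    return labels, vecs[:, 1:]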
A concept for hyperspectral imaging with compressive sampling and dictionary recovery
We postulate an optical configuration which takes a multispectral/hyperspectral scene and collects a multiplexed spectral sample on the Focal Plane Array (FPA). From such a measurement paradigm, the data is then processed with compressive imaging techniques and we recover the full multispectral cube from a single frame of imagery. We use a trained dictionary prior assumption along with a greedy reconstruction algorithm for local multispectral reconstruction.
Spectral Signature Modeling, Measurements, and Applications I
Calculation of electronic-excited-state absorption spectra of water clusters using time-dependent density functional theory
L. Huang, S. G. Lambrakos, A. Shabaev, et al.
Calculations are presented of electronic-excited-state absorption spectra for molecular clusters of H2O using time-dependent density functional theory (TD-DFT). Calculation of excited state resonance structure using TD-DFT can provide interpretation of absorption spectra with respect to molecular structure for excitation by electromagnetic waves at frequencies within the UV-visible range. The absorption spectrum corresponding to electronic excitation states of a molecular cluster consisting of a relatively small number of water molecules should be associated with response features that are intermediate between that of isolated molecules and that of a bulk lattice. TD-DFT calculated absorption spectra represent quantitative estimates that can be correlated with additional information obtained from laboratory measurements and other types of theory based calculations. The DFT software GAUSSIAN was used for the calculations of electronic excitation states presented here.
Comparison of microfacet BRDF model elements to diffraction BRDF model elements
Samuel D. Butler, Stephen E. Nauyoks, Michael A. Marciniak
A popular class of BRDF models is the microfacet model, in which geometric optics is assumed but physical optics effects, such as the accurate wavelength scaling important to hyperspectral imagery, are lost. More complex physical optics models may more accurately predict the BRDF, but the calculation is time-consuming. These seemingly disparate approaches are compared in detail. The direction cosine space of the linear systems (diffraction) approach is compared to microfacet coordinates, and the microfacet model's Fresnel reflection in microfacet coordinates is compared to diffraction theory's Fresnel-like term. Similarities and differences between these terms are highlighted to merge these two approaches to the BRDF.
Development of land surface reflectance models based on multiscale simulation
Modeling and simulation of Earth imaging sensors with large spatial coverage necessitates an understanding of how photons interact with individual land surface processes at an aggregate level. For example, the leaf angle distribution of a deciduous forest canopy has a significant impact on the path of a single photon as it is scattered among the leaves and, consequently, a significant impact on the observed bidirectional reflectance distribution function (BRDF) of the canopy as a whole. In particular, simulation of imagery of heterogeneous scenes for many multispectral/hyperspectral applications requires detailed modeling of regions of the spectrum where many orders of scattering are required due to both high reflectance and transmittance. Radiative transfer modeling based on ray tracing, hybrid Monte Carlo techniques and detailed geometric and optical models of land cover means that it is possible to build effective, aggregate optical models with parameters such as species, spatial distribution, and underlying terrain variation. This paper examines the capability of the Digital Image and Remote Sensing Image Generation (DIRSIG) model to generate BRDF data representing land surfaces at large scale from modeling at a much smaller scale. We describe robust methods for generating optical property models effectively in DIRSIG and present new tools for facilitating the process. The methods and results for forest canopies are described relative to the RAdiation transfer Model Intercomparison (RAMI) benchmark scenes, which also forms the basis for an evaluation of the approach. Additional applications and examples are presented, representing different types of land cover.
Advances in simulating radiance signatures for dynamic air/water interfaces
The air-water interface poses a number of problems for both collecting and simulating imagery. At the surface, the magnitude of observed radiance can change by multiple orders of magnitude at high spatiotemporal frequency due to glinting effects. In the volume, similarly high frequency focusing of photons by a dynamic wave surface significantly changes the reflected radiance of in-water objects and the scattered return of the volume itself. These phenomena are often manifest as saturated pixels and artifacts in collected imagery (often enhanced by time delays between neighboring pixels or interpolation between adjacent filters) and as noise and greater required computation times in simulated imagery. This paper describes recent advances made to the Digital Image and Remote Sensing Image Generation (DIRSIG) model to address the simulation issues and better facilitate an understanding of a multi/hyper-spectral collection. Glint effects are simulated using a dynamic height field that can be driven by wave frequency models and generates a sea state at arbitrary time scales. The volume scattering problem is handled by coupling the geometry representing the surface (facetization by the height field) with the single scattering contribution at any point in the water. The problem is constrained somewhat by assuming that contributions come from a Snell's window above the scattering point and by assuming a direct source (sun). Diffuse single-scattered and multiple-scattered energy contributions are handled by Monte Carlo techniques employed previously. The model is compared to existing radiative transfer codes where possible, with the objective of providing a robust model of time-dependent absolute radiance at many wavelengths.
Influence of density on hyperspectral BRDF signatures of granular materials
Douglas Scott Peck, Malachi Schultz, Charles M. Bachmann, et al.
Recent hyperspectral measurements of composite granular sediments of varying densities have revealed phenomena that contradict what radiative transfer theory would suggest. In high-density sands where the dominant constituents are translucent and supplementary darker grains are present, bidirectional reflectance distribution function (BRDF) measurements of high-density sediments showed reduced intensity compared to their lower density counterparts. It is conjectured that this is due to diminished multiple scattering from the darker particles, which more optimally fill pore space as density increases. The goal of these experiments is to further expand upon these earlier results, which were conducted primarily in the principal scattering plane and only at minimum and maximum densities. In the present study, the BRDF of granular composites is compared along a gradient of densities for optically contrasting materials. Systematic analysis of angular and material dependence will be used to develop better models for multiple scattering effects of the granular materials. The measurements in this experiment used the recently constructed, laboratory- and field-deployable Goniometer of the Rochester Institute of Technology (GRIT), which measures BRDF for geometries covering 360 degrees in azimuth and 65 degrees in zenith. In contrast to the previous studies limited to the principal scattering plane, GRIT provides a full hemispherical BRDF measurement.
Development and comparison of data reconstruction methods for chromotomographic hyperspectral imagers
Chromotomography is a form of hyperspectral imaging that uses a prism to simultaneously record spectral and spatial information, like a slitless spectrometer. The prism is rotated to provide multiple projections of the 3D data cube on the 2D detector array. Tomographic reconstruction methods are then used to estimate the hyperspectral data cube from the projections. This type of system can collect hyperspectral imagery from fast transient events, but suffers from reconstruction artifacts due to the limited-angle problem. Several algorithms have been proposed in the literature to improve reconstruction, including filtered backprojection, projection onto convex sets, subspace constraint, and split-Bregman iteration. Here we present the first direct comparison of multiple methods against a variety of simulated targets. Results are compared based on both image quality and spectral accuracy of the reconstruction, where previous literature has emphasized imaging only. In addition, new algorithms and HSI quality metrics are proposed. We find that the quality of the results depends strongly on the spatial and spectral content of the scene, and no single algorithm is consistently superior over a broad range of scenes.
SHARE 2012 Analysis Results
Target detection assessment of the SHARE 2010/2012 hyperspectral data collection campaign
It has been over four years since the first SpecTIR Hyperspectral Airborne Rochester Experiment (SHARE) was conducted in 2010. A second SHARE experiment was performed in 2012 using the same deployed target panels and HSI sensor (specifically for the target detection experiment). A suite of sensors was flown over the target areas, including multi- and hyperspectral imagers as well as a LADAR sensor. Experiments were conducted to examine topics such as pixel unmixing, subpixel detection, forest health, and in-water target detection, to name a few. This paper's focus is on detection of different colored panels deployed on different backgrounds viewed under different illumination conditions collected two years apart. Additionally, the calibration and reflectance retrieval of the data are examined. Detection is performed on the standard reflectance product provided by the acquisition company. Results are illustrated in the form of ROC curves. Analysis was performed on (many) red and blue panels on backgrounds such as grass, gravel, and roof tar paper. The targets were in the open (i.e., fully illuminated) as well as in heavy and light shadow; the shadowed targets were harder to detect than their open counterparts. Calibration of the 2012 data is good, with some issues related to the 2010 data. Adjustments and corrections are discussed. Finally, where to obtain the free HSI and co-registered LADAR data set is indicated.
An analysis task comparison of uncorrected vs. geo-registered airborne hyperspectral imagery
Geo-registration is the task of assigning geospatial coordinates to the pixels of an image and placing them in a geographic coordinate system. However, the process of geo-registration can impair the quality of the image. This paper studies this topic by applying a comparison methodology to uncorrected and geo-registered airborne hyperspectral images obtained from the RIT SHARE 2012 data set. The uncorrected image was analyzed directly as collected by the sensor without being treated, while the geo-registered image was corrected using the nearest neighbor resampling approach. A comparison of performance was done for the analysis tasks of spectral unmixing and subpixel target detection, which can represent a measure of utility. The comparison demonstrates that the geo-registration process can affect the utility of hyperspectral imagery to a limited extent.
On the effects of spatial and spectral resolution on spatial-spectral target detection in SHARE 2012 and Bobcat 2013 hyperspectral imagery
Previous work with the Bobcat 2013 data set showed that spatial-spectral feature extraction on visible to near infrared (VNIR) hyperspectral imagery (HSI) led to better target detection and discrimination than spectral-only techniques; however, that study could not consider the possible benefits of the shortwave infrared (SWIR) portion of the spectrum due to data limitations. In addition, the spatial resolution of the Bobcat 2013 imagery was fixed at 8cm, without exploring lower spatial resolutions. In this work, we evaluate the trade-offs in spatial resolution, spectral resolution, and spectral coverage for a common set of targets in terms of their effects on spatial-spectral target detection performance. We show that for our spatial-spectral target detection scheme and data sets, the adaptive cosine estimator (ACE) applied to S-DAISY and pseudo Zernike moment (PZM) spatial-spectral features can distinguish between targets better than ACE applied only to the spectral imagery. In particular, S-DAISY operating on bands uniformly selected from the SWIR portion of ProSpecTIR-VS sensor imagery, in conjunction with bands closely corresponding to the Airborne Real-time Cueing Hyperspectral Reconnaissance (ARCHER) sensor's VNIR bands (80 total), led to the best overall average performance in both target detection and discrimination.
Locating the shadow regions in LIDAR data: results on the SHARE 2012 dataset
In hyperspectral imaging, shadowed areas present a major problem, as targets in shadow show decreased or no spectral signatures. One way to mitigate this problem is the fusion of hyperspectral data with LiDAR data, since LiDAR provides elevation information that can be used to identify regions of shadow. Although there is substantial prior work on detecting shadowed areas, many approaches are restricted to specific platforms such as ArcGIS and ENVI. The purpose of this study is to (i) detect the shadow areas and (ii) assign a shadowiness scale in LiDAR data efficiently with Matlab. For this work, we designed a Line of Sight (LoS) algorithm that is optimized to run in a Matlab interface. The LoS algorithm uses the sun angles (altitude and azimuth) and the terrain elevation, and marks a pixel as "in shadow" if an object of higher elevation lies between that pixel and the sun. This is computed for all pixels in the scene and a shadow map is generated. Further, if a pixel is marked as a shadow area, the algorithm assigns a darkness level that is inversely proportional to the distance between the current pixel and the object that causes the shadow. With this shadow scale, it is both visually and computationally possible to distinguish soft shadows from dark shadows, which is important information for hyperspectral imagery. The algorithm has been tested on the SHARE 2012 Avon AM dataset. We also show the effect of the shadowiness scale on the spectral signatures.
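A minimal sketch of a line-of-sight shadow test of this kind over a gridded elevation model (our illustration in Python rather than the authors' Matlab; the stepping scheme and constants are assumptions):

import numpy as np

def line_of_sight_shadow(dem, pixel_size, sun_azimuth_deg, sun_altitude_deg,
                         max_steps=500):
    # Mark cells of a gridded DEM that are occluded along the line of sight
    # to the sun; also return the distance to the occluder, so that closer
    # occluders can be assigned darker shadow levels.
    rows, cols = dem.shape
    az = np.deg2rad(sun_azimuth_deg)
    tan_alt = np.tan(np.deg2rad(sun_altitude_deg))
    dr, dc = -np.cos(az), np.sin(az)  # step toward the sun (azimuth from north)
    shadow = np.zeros_like(dem, dtype=bool)
    dist = np.full(dem.shape, np.inf)
    for r in range(rows):
        for c in range(cols):
            for k in range(1, max_steps):
                rr, cc = int(round(r + k * dr)), int(round(c + k * dc))
                if not (0 <= rr < rows and 0 <= cc < cols):
                    break
                horiz = k * pixel_size
                if dem[rr, cc] > dem[r, c] + horiz * tan_alt:
                    shadow[r, c] = True
                    dist[r, c] = horiz
                    break
    return shadow, dist

A shadowiness level inversely proportional to dist, as described above, can then be assigned to each occluded pixel.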
Effect of endmember clustering on proportion estimation: results on the SHARE 2012 dataset
Estimating the number of endmembers and their spectra is a challenging task. For one, endmember detection algorithms may over- or underestimate the number of endmembers in a given scene. Further, even if the number of endmembers is known beforehand, the results of the endmember detection algorithms may not be accurate: they may find multiple endmembers representing the same class while completely missing some of the endmembers representing other classes. This hinders the performance of unmixing, resulting in incorrect endmember proportion estimates. In this study, the SHARE 2012 Avon data pertaining to the unmixing experiment was considered. It was cropped to include only the eight pieces of cloth and a portion of the surrounding asphalt and grass. This data was used to evaluate the performance of five endmember detection algorithms, namely PPI, VCA, N-FINDR, ICE, and SPICE, none of which found the endmember spectra correctly. All of these algorithms generated multiple endmembers corresponding to the same class or completely missed some of the endmembers. Hence, a peak-aware N-FINDR algorithm was devised to group the endmembers of the same class so as not to over- or under-estimate the true endmembers. Comparisons of the N-FINDR algorithm with and without this refinement are presented.
Hyperspectral Target Detection
Incorporating signal-dependent noise for hyperspectral target detection
Christopher J. Morman, Joseph Meola
The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
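A minimal sketch of fitting the linear signal-dependent noise model described above, variance ≈ gain × signal + offset, from per-region sample statistics (function and variable names are ours):

import numpy as np

def fit_linear_noise_model(means, variances):
    # Least-squares fit of variance = gain * signal + offset using sample
    # means and variances measured over uniform image regions.
    A = np.column_stack([means, np.ones_like(means)])
    (gain, offset), *_ = np.linalg.lstsq(A, variances, rcond=None)
    return gain, offset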
Robust chemical and chemical-resistant material detection using hyper-spectral imager and a new band interpolation and local scaling HSI sharpening method
Hai-Wen Chen, Michael McGurr, Mark Brickhouse
We present new results from our ongoing research activity on chemical threat detection using hyper-spectral imager (HSI) detection techniques, detecting nontraditional threat spectral signatures of agent usage such as protective equipment, coatings, paints, spills, and stains worn by humans or present on trucks and other objects. We have applied several current state-of-the-art HSI target detection methods, such as Matched Filter (MF), Adaptive Coherence Estimator (ACE), Constrained Energy Minimization (CEM), and Spectral Angle Mapper (SAM). We are interested in detecting several chemical-related materials: (a) Tyvek clothing, which is chemically resistant; Tyvek coveralls are one-piece garments for protecting the human body from harmful chemicals, and (b) ammonium salts in the background, which could be representative of spills from scrubbers or related to other chemical activities. The HSI dataset that we used for detection covers a chemical test field with more than 50 different kinds of chemicals, protective materials, coatings, and paints. Among them are four different kinds of Tyvek material, three types of ammonium salts, and yellow jugs. The imagery cube data were collected by an HSI sensor with a spectral range of 400-2,500nm. Preliminary testing results are promising, and very high probability of detection (Pd) and low probability of false detection are achieved with the use of the full spectral range (400-2,500nm). In the second part of this paper, we present our newly developed HSI sharpening technique. A new Band Interpolation and Local Scaling (BILS) method has been developed to improve HSI spatial resolution by 4-16 times using a low-cost, high-resolution panchromatic camera and an RGB camera. Preliminary results indicate that this new technique is promising.
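For reference, a minimal sketch of the standard Adaptive Coherence/Cosine Estimator (ACE) score used here and elsewhere in this session (our illustration; the authors' implementations may differ):

import numpy as np

def ace_scores(X, s, mu, cov):
    # ACE detection scores for pixel spectra X (N x B), target signature s,
    # and background mean mu / covariance cov (squared-cosine form).
    Sigma_inv = np.linalg.inv(cov)
    d = X - mu
    t = s - mu
    num = (d @ Sigma_inv @ t) ** 2
    den = (t @ Sigma_inv @ t) * np.einsum('ij,jk,ik->i', d, Sigma_inv, d)
    return num / den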
An adaptive locally linear embedding manifold learning approach for hyperspectral target detection
Algorithms for spectral analysis commonly use parametric or linear models of the data. Research has shown, however, that hyperspectral data -- particularly in materially cluttered scenes -- are not always well-modeled by statistical or linear methods. Here, we propose an approach to hyperspectral target detection that is based on a graph theory model of the data and a manifold learning transformation. An adaptive nearest neighbor (ANN) graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation. The artificial target manifold helps to guide the separation of the target data from the background data in the new, transformed manifold coordinates. Then, target detection is performed in the manifold space using Spectral Angle Mapper. This methodology is an improvement over previous iterations of this approach due to the incorporation of ANN, the artificial target manifold, and the choice of detector in the transformed space. We implement our approach in a spatially local way: the image is delineated into square tiles, and the detection maps are normalized across the entire image. Target detection results will be shown using laboratory-measured and scene-derived target data from the SHARE 2012 collect.
Ellipsoids for anomaly detection in remote sensing imagery
Guenchik Grosklos, James Theiler
For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density functions, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and by non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we will consider a robust variant of the Khachiyan algorithm for the minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
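For context, the conventional baseline against which such robust estimators are compared is the sample-statistics ellipsoid, i.e., thresholding the squared Mahalanobis distance; a minimal sketch (ours):

import numpy as np

def mahalanobis_anomaly(X, mu, cov):
    # Squared Mahalanobis distance of each pixel spectrum from the background
    # ellipsoid defined by (mu, cov); thresholding it yields an anomaly map.
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)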
Video rate multispectral imaging for camouflaged target detection
Sam Henry
The ability to detect and identify camouflaged targets is critical in combat environments. Hyperspectral and multispectral cameras allow a soldier to identify threats more effectively than traditional RGB cameras due to both increased color resolution and the ability to see beyond visible light. Static imagers have proven successful; however, the development of video-rate imagers allows for continuous real-time target identification and tracking. This paper presents an analysis of existing anomaly detection algorithms and how they can be adapted to video rates, and presents a general-purpose, semi-supervised, real-time anomaly detection algorithm using multiple frame sampling.
Evaluating backgrounds for subpixel target detection: when closer isn't better
N. Hasson, S. Asulin, D. Blumberg, et al.
Several different background estimators are considered when performing sub-pixel target acquisition. Although all leave noise of about the same amplitude, the difference in their N-dimensional orientation makes a big difference in the target detection performance. Metrics to evaluate the correlation of the noise are presented.
Novel Mathematically-Inspired Methods of Processing Hyperspectral Airborne and Satellite Imagery: Novel Mathematics Algorithms I
Spatial-spectral dimensionality reduction of hyperspectral imagery with partial knowledge of class labels
Nathan D. Cahill, Selene E. Chew, Paul S. Wenger
Laplacian Eigenmaps (LE) and Schroedinger Eigenmaps (SE) are effective dimensionality reduction algorithms that are capable of integrating both the spatial and spectral information inherent in a hyperspectral image. In this paper, we consider how to extend LE- and SE-based spatial-spectral dimensionality reduction algorithms to situations where partial knowledge of class labels exists, for example, when a subset of pixels has been manually labeled by an expert user. This partial knowledge is incorporated through the use of cluster potentials, turning each underlying algorithm into an instance of SE. Using publicly available data, we show that incorporating this partial knowledge improves the performance of subsequent classification algorithms.
Detecting plumes in LWIR using robust nonnegative matrix factorization with graph-based initialization
Jing Qin, Thomas Laurent, Kevin Bui, et al.
We consider the problem of identifying chemical plumes in hyperspectral imaging data, which is challenging due to the diffusivity of plumes and the presence of excessive noise. We propose a robust nonnegative matrix factorization (RNMF) method to segment hyperspectral images, exploiting the low-rank structure of the noise-free data and the sparsity of the noise. Because the optimization objective is highly non-convex, nonnegative matrix factorization is very sensitive to initialization. We address this issue by using the fast Nystrom method and a label propagation algorithm (LPA). Using the alternating direction method of multipliers (ADMM), RNMF provides high quality clustering results effectively. Experimental results on real single-frame and multiframe hyperspectral data with chemical plumes show that the proposed approach is promising in terms of clustering quality and detection accuracy.
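One common way to write a robust NMF objective of the kind described, with a low-rank nonnegative factorization plus a sparse noise term, is sketched below (a generic form; the paper's exact formulation and constraints may differ):

\min_{W \ge 0,\; H \ge 0,\; S} \;\; \tfrac{1}{2}\,\lVert X - WH - S \rVert_F^2 \;+\; \lambda\,\lVert S \rVert_1

ADMM then alternates nonnegative updates of W and H with a soft-thresholding update of the sparse term S.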
Modeling and mitigating noise in graph and manifold representations of hyperspectral imagery
Over the past decade, manifold and graph representations of hyperspectral imagery (HSI) have been explored widely in HSI applications. There are a large number of data-driven approaches to deriving manifold coordinate representations, including Isometric Mapping (ISOMAP), Local Linear Embedding (LLE), Laplacian Eigenmaps (LE), Diffusion Kernels (DK), and many related methods. Improvements to specific algorithms have been developed to ease computational burden or otherwise improve algorithm performance. For example, the best way to estimate the size of the locally linear neighborhoods used in graph construction has been addressed, as well as the best method of linking the manifold representation with classifiers in applications. However, the problem of how to model and mitigate noise in manifold representations of hyperspectral imagery has not been well studied, and it remains a challenge for graph and manifold representations of hyperspectral imagery and their application. It is relatively easy to apply standard linear methods to remove noise from the data in advance of further processing; however, these approaches by and large treat the noise model in a global sense, using statistics derived from the entire data set and applying the results globally over the data set. Graph and manifold representations by their nature attempt to find an intrinsic representation of the local data structure, so it is natural to ask how one can best represent the noise model in a local sense. In this paper, we explore approaches to modeling and mitigating noise at a local level, using manifold coordinates of local spectral subsets. The issue of landmark selection in the current landmark ISOMAP algorithm is addressed, and a workflow is proposed that makes use of manifold coordinates of local spectral subsets to make optimal landmark selections and minimize the effect of local noise.
Novel Mathematically-Inspired Methods of Processing Hyperspectral Airborne and Satellite Imagery: Novel Mathematics Algorithms II
Classification of multi-source sensor data with limited labeled data
Melba M. Crawford, Saurabh Prasad, Xiong Zhou, et al.
Classification of multi-source data has recently gained significant attention, as accuracies can often be improved by incorporating complementary information extracted in single and multi-sensor scenarios. Supervised approaches to classification of multi-source remote sensing data are dependent on the availability of representative labeled data, which are often limited relative to the dimensionality of the data for training. To address this problem, in this paper, we propose a new framework in which active learning (AL) and semi-supervised learning (SSL) strategies are combined for multi-source classification of hyperspectral images. First, the spatial-spectral features are represented via the redundant discrete wavelet transform (RDWT). Then, the spatial context provided by the hierarchical segmentation algorithm (HSEG) in conjunction with an unsupervised pruning strategy is exploited to combine AL and SSL. Finally, SVM classification is performed due to the high dimensionality of the feature space. The proposed framework is validated with two benchmark hyperspectral data sets. Higher classification accuracies are obtained by the proposed framework with respect to other state-of-the-art active learning classification approaches.
Schrodinger Eigenmaps for spectral target detection
Spectral imagery such as multispectral and hyperspectral data can be seen as a set of panchromatic images stacked as a 3D cube, with two spatial dimensions and one spectral dimension. For hyperspectral imagery, the spectral dimension is highly sampled, which implies redundant information and high spectral dimensionality. Therefore, it is necessary to apply transformations to the data, not only to reduce processing costs but also to reveal features or characteristics of the data that were hidden in the original space. Schrodinger Eigenmaps (SE) is a novel mathematical method for non-linear representation of a data set that attempts to preserve the local structure while the spectral dimension is reduced. SE can be seen as an extension of Laplacian Eigenmaps (LE), where the diffusion process can be steered in certain directions determined by a potential term. SE was initially introduced as a semi-supervised classification technique and, most recently, it has been applied to target detection, showing promising performance. In target detection, only the barrier potential has been used, so different ways to define barrier potentials and their influence on the data embedding are studied here. In this way, an experiment is proposed to assess target detection as a function of how strong the influence of the potential is and how many eigenmaps are used in the detection. The target detection is performed using a hyperspectral data set where several targets of differing complexity are present in the same scene.
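For reference, the Schrodinger Eigenmaps embedding is usually written as the generalized eigenvalue problem below (a sketch of the standard formulation; notation is ours), where setting alpha = 0 recovers Laplacian Eigenmaps and a barrier potential places large diagonal entries V_ii on selected pixels:

(L + \alpha V)\,\mathbf{f} = \lambda\, D\,\mathbf{f}, \qquad L = D - W, \quad D_{ii} = \sum_j W_{ij}

with W the graph affinity matrix and V a nonnegative diagonal (barrier) potential.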
Functions of multiple instances for sub-pixel target characterization in hyperspectral imagery
In this paper, the Multi-target Extended Function of Multiple Instances (Multi-target eFUMI) method is developed and described. The method is capable of learning multiple target spectral signatures from weakly and inaccurately labeled hyperspectral imagery. Multi-target eFUMI is a generalization of the Function of Multiple Instances (FUMI) approach. The FUMI approach differs significantly from the standard Multiple Instance Learning (MIL) approach in that it assumes each data point is a function of target and non-target "concepts." In this paper, data points that are convex combinations of multiple target and several non-target "concepts" are considered. Moreover, the method allows both "proportion-level" and "bag-level" uncertainties in the training data. Training data need only binary labels indicating whether some spatial area contains or does not contain some proportion of target; the specific target proportions for the training data are not needed. Multi-target eFUMI learns the target and non-target concepts, the number of non-target concepts, and the proportions of all the concepts for each data point. After learning the target concepts using the binary "bag-level" labeled training data, target detection can be performed on test data. Results for sub-pixel target detection on simulated and real airborne hyperspectral data are shown.
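The convex-combination data model underlying this family of methods can be sketched as follows (notation is ours; the paper's exact constraints may differ):

\mathbf{x}_n \;\approx\; \sum_{t=1}^{T} \alpha_{nt}\,\mathbf{e}^{+}_{t} \;+\; \sum_{k=1}^{K} \beta_{nk}\,\mathbf{e}^{-}_{k}, \qquad \alpha_{nt},\,\beta_{nk} \ge 0, \quad \sum_{t}\alpha_{nt} + \sum_{k}\beta_{nk} = 1

where positively labeled bags are assumed to contain at least one point with some target proportion alpha_nt > 0, and negatively labeled bags contain only non-target proportions.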
Anisotropic representations for superresolution of hyperspectral data
Edward H. Bosch, Wojciech Czaja, James M. Murphy, et al.
We develop a method for superresolution based on anisotropic harmonic analysis. Our ambition is to efficiently increase the resolution of an image without blurring or introducing artifacts, and without integrating additional information, such as sub-pixel shifts of the same image at lower resolutions or multimodal images of the same scene. The approach developed in this article is based on analysis of the directional features present in the image that is to be superresolved. The harmonic analytic technique of shearlets is implemented in order to efficiently capture the directional information present in the image, which is then used to provide smooth, accurate images at higher resolutions. Our algorithm is compared to both a recent anisotropic technique based on frame theory and circulant matrices, as well as to the standard superresolution method of bicubic interpolation. We evaluate our algorithm on synthetic test images, as well as a hyperspectral image. Our results indicate the superior performance of anisotropic methods when compared to standard bicubic interpolation.
Spectral Signature Modeling, Measurements, and Applications II
The development of a DIRSIG simulation environment to support instrument trade studies for the SOLARIS sensor
Aaron D. Gerace, Adam A. Goodenough, Matthew Montanaro, et al.
NASA Goddard’s SOLARIS (Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer) sensor is the calibration demonstration system for CLARREO (Climate Absolute Radiance and Refractivity Observatory), a mission that addresses the need to make highly accurate observations of long-term climate change trends. The SOLARIS instrument will be designed to support a primary objective of CLARREO, which is to advance the accuracy of absolute calibration for space-borne instruments in the reflected solar wavelengths. This work focuses on the development of a simulated environment to facilitate sensor trade studies to support instrument design and build for the SOLARIS sensor. Openly available data are used to generate geometrically and radiometrically realistic synthetic landscapes to serve as input to an image generation model, specifically the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. Recent enhancements to DIRSIG’s sensor model capabilities have made it an attractive option for performing sensor trade studies. This research takes advantage of these enhancements to model key sensor characteristics (e.g., sensor noise, relative spectral response, spectral coverage, etc.) and evaluate their impact on SOLARIS’s stringent 0.3% error budget for absolute calibration. A SOLARIS sensor model is developed directly from measurements provided by NASA Goddard and various synthetic landscapes generated to identify potential calibration sites once the instrument achieves orbit. The results of these experiments are presented and potential sources of error for sensor inter-calibration are identified.
Empirical measurement and model validation of infrared spectra of contaminated surfaces
Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model utilizes radiative transfer modeling to generate synthetic imagery. Within DIRSIG, a micro-scale surface property model (microDIRSIG) was used to calculate numerical bidirectional reflectance distribution functions (BRDF) of geometric surfaces with applied concentrations of liquid contamination. Simple cases, where the liquid contamination was well described by optical constants on optically flat surfaces, were first analytically evaluated by ray tracing and modeled within microDIRSIG. More complex combinations of surface geometry and contaminant application were then incorporated into the micro-scale model. The computed microDIRSIG BRDF outputs were used to describe surface material properties in the encompassing DIRSIG simulation. These DIRSIG-generated outputs were validated against empirical measurements obtained from a Design and Prototypes (D&P) Model 102 FTIR spectrometer. Infrared spectra from the synthetic imagery and the empirical measurements were iteratively compared to identify quantitative spectral similarity between the measured data and modeled outputs. Several spectral angles between the predicted and measured emissivities differed by less than 1 degree. Synthetic radiance spectra produced from the microDIRSIG/DIRSIG combination had an RMS error of 0.21-0.81 W/(m^2 sr μm) when compared to the D&P measurements. Results from this comparison will facilitate improved methods for identifying spectral features and detecting liquid contamination on a variety of natural surfaces.
Spectral analysis of water samples using modulated resonance features for monitoring of public water resources
S. G. Lambrakos, C. Yapijakis, D. Aiken, et al.
Hyperspectral analysis of water samples taken from public water resources in the New York City metro area has demonstrated the potential application of this type of analysis for water monitoring, treatment and evaluation prior to filtration. Hyperspectral monitoring of contaminants with respect to types and relative concentrations requires tracking statistical profiles of water contaminants in terms of spatial-temporal distributions of electromagnetic absorption spectra ranging from the ultraviolet to infrared, which are associated with specific water resources. To achieve this, it is necessary to establish correlation between hyperspectral signatures and types of contaminants to be found within specific water resources. Correlation between absorption spectra and changes in chemical and physical characteristics of contaminants requires sufficient sensitivity. The present study examines the sensitivity of modulated resonance features with respect to characteristics of water contaminants for hyperspectral analysis of water samples.
An accelerated line-by-line option for MODTRAN combining on-the-fly generation of line center absorption within 0.1 cm-1 bins and pre-computed line tails
A Line-By-Line (LBL) option is being developed for MODTRAN6. The motivation for this development is two-fold. Firstly, when MODTRAN is validated against an independent LBL model, it is difficult to isolate the source of discrepancies. One must verify consistency between pressure, temperature and density profiles, between column density calculations, between continuum and particulate data, between spectral convolution methods, and more. Introducing a LBL option directly within MODTRAN will ensure common elements for all calculations other than those used to compute molecular transmittances. The second motivation for the LBL upgrade is that it will enable users to compute high spectral resolution transmittances and radiances for the full range of current MODTRAN applications. In particular, introducing the LBL feature into MODTRAN will enable first-principle calculations of scattered radiances, an option that is often not readily available with LBL models. MODTRAN will compute LBL transmittances within one 0.1 cm-1 spectral bin at a time, marching through the full requested band pass. The LBL algorithm will use the highly accurate, pressure- and temperature-dependent MODTRAN Padé approximant fits of the contribution from line tails to define the absorption from all molecular transitions centered more than 0.05 cm-1 from each 0.1 cm-1 spectral bin. The beauty of this approach is that the on-the-fly computations for each 0.1 cm-1 bin will only require explicit LBL summing of transitions centered within a 0.2 cm-1 spectral region. That is, the contribution from the more distant lines will be pre-computed via the Padé approximants. The status of the LBL effort will be presented. This will include initial thermal and solar radiance calculations, validation calculations, and self-validations of the MODTRAN band model against its own LBL calculations.
Surface retrievals from Hyperion EO1 using a new, fast, 1D-Var based retrieval code
Jean-Claude Thelen, Stephan Havemann, Gerald Wong
We have developed a new algorithm for the simultaneous retrieval of atmospheric profiles (temperature, humidity, ozone, and aerosol) and surface reflectance from hyperspectral radiance measurements obtained from air- or space-borne hyperspectral imagers such as Hyperion EO-1. The new scheme proposed here consists of a fast radiative transfer code, based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme. The inclusion of an 'exact' scattering code based on spherical harmonics allows for an accurate treatment of Rayleigh scattering and scattering by aerosols, water droplets, and ice crystals, thus making it possible to also retrieve cloud and aerosol optical properties, although here we concentrate on non-cloudy scenes. We successfully tested this new approach using hyperspectral images taken by Hyperion EO-1, an experimental pushbroom imaging spectrometer operated by NASA.
Spectral Sensor Design, Development, and Characterization
Passive standoff imaging using spatial-spectral multiplexing
Ethan R. Woodard, Michael W. Kudenov
The concept of a passive far-field imaging system using a unique spatial-spectral multiplexing (SSM) technique is presented. The described SSM technique uses spectrally resolved interferometry to multiplex a scene's angular spectrum onto the power spectrum, while dispersion characteristics are implemented to heterodyne the channeled spectrum into the spectral range of visible light. In this paper, the theory of the design is detailed and an analysis of the spatial and spectral tradespace of the system is discussed. Applications for this imaging technique are primarily focused on remote sensing and far-field target identification.
Automated turbulences jitters correction with a dual ports imaging Fourier-transform spectrometer
Florent Prel, Stéphane Lantagne, Louis Moreau, et al.
When the scene observed by an imaging Fourier-transform spectrometer is not stable in amplitude or in position during the time it takes to acquire the spectrum, spectro-radiometric artifacts are generated. These artifacts reduce the radiometric accuracy and may also damage the spectral line shape. The displacements of the scene in the field of view can be due to air turbulence, platform jitter, or scene jitter. We describe an automated correction process based on the information provided by the second output port of a two-port imaging FTS. Corrected and uncorrected data will be compared.
Data Fusion and Multiple Modality Spectral Applications
Integrated visible to near infrared, short wave infrared, and long wave infrared spectral analysis for surface composition mapping near Mountain Pass, California
We have developed new methods for enhanced surface material identification and mapping that integrate visible to near infrared (VNIR, ~0.4 – 1 μm), short wave infrared (SWIR, ~1 – 2.5 μm), and long wave infrared (LWIR, ~8 – 12 μm) multispectral and hyperspectral imagery. This approach produces a single map of surface composition derived from the full spectral range. We applied these methods to a spectrally diverse region around Mountain Pass, CA. A comparison of the integrated results with those obtained from analyzing the spectral ranges individually reveals compositional information not exhibited by the VNIR, SWIR or LWIR data alone. We also evaluate the benefit of hyperspectral rather than multispectral LWIR data for this integrated approach.
Exploration of integrated visible to near-, shortwave-, and longwave-infrared (full range) hyperspectral data analysis
Shelli R. Cone, Fred A. Kruse, Meryl L. McDowell
Visible to near-, shortwave-, and longwave-infrared (VNIR, SWIR, LWIR) hyperspectral data were integrated using a variety of approaches to take advantage of complementary wavelength-specific spectral characteristics for improved material classification. The first approach applied separate minimum noise fraction (MNF) transforms to the three regions and combined only the non-noise transformed bands. A second approach integrated the VNIR, SWIR, and LWIR data before using MNF analysis to isolate linear band combinations with high signal-to-noise ratios. Spectral endmembers extracted from each integrated dataset were unmixed and spatially mapped using a partial unmixing approach. Integrated results were compared to baseline analyses of the separate spectral regions. Outcomes show that analyzing across the full VNIR-SWIR-LWIR spectrum improves material characterization and identification.
Analysis of multispectral and hyperspectral longwave infrared (LWIR) data for geologic mapping
Multispectral MODIS/ASTER Airborne Simulator (MASTER) data and Hyperspectral Thermal Emission Spectrometer (HyTES) data covering the 8 – 12 μm spectral range (longwave infrared or LWIR) were analyzed for an area near Mountain Pass, California. Decorrelation stretched images were initially used to highlight spectral differences between geologic materials. Both datasets were atmospherically corrected using the ISAC method, and the Normalized Emissivity approach was used to separate temperature and emissivity. The MASTER data had 10 LWIR spectral bands and approximately 35-meter spatial resolution and covered a larger area than the HyTES data, which were collected with 256 narrow (approximately 17 nm-wide) spectral bands at approximately 2.3-meter spatial resolution. Spectra for key spatially-coherent, spectrally-determined geologic units for overlap areas were overlain and visually compared to determine similarities and differences. Endmember spectra were extracted from both datasets using n-dimensional scatterplotting and compared to emissivity spectral libraries for identification. Endmember distributions and abundances were then mapped using Mixture-Tuned Matched Filtering (MTMF), a partial unmixing approach. Multispectral results demonstrate separation of silica-rich versus non-silicate materials, with distinct mapping of carbonate areas and general correspondence to the regional geology. Hyperspectral results illustrate refined mapping of silicates with distinction between similar units based on the position, character, and shape of high resolution emission minima near 9 μm. Calcite and dolomite were separated, identified, and mapped using HyTES based on a shift of the main carbonate emissivity minimum from approximately 11.3 μm to 11.2 μm, respectively. Both datasets demonstrate the utility of LWIR spectral remote sensing for geologic mapping.
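For context, the Normalized Emissivity temperature-emissivity separation mentioned above can be sketched as follows; the assumed maximum emissivity (0.96 here) and the per-pixel handling are illustrative defaults, not values taken from the paper.

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / wavelength_m**5 /
            (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

def brightness_temperature(wavelength_m, radiance):
    """Invert the Planck function for temperature."""
    return (H * C / (wavelength_m * KB) /
            np.log(2 * H * C**2 / (wavelength_m**5 * radiance) + 1.0))

def normalized_emissivity_tes(wavelengths_m, surface_radiance, eps_max=0.96):
    """Normalized-emissivity temperature/emissivity separation (sketch).

    Assume the band with the highest brightness temperature has emissivity
    eps_max; use that temperature to derive emissivity in every band as
    L / B(lambda, T).  Inputs are 1D per-pixel spectra over the LWIR bands.
    """
    tb = brightness_temperature(wavelengths_m, surface_radiance / eps_max)
    t_est = tb.max()                                   # per-pixel temperature
    emissivity = surface_radiance / planck_radiance(wavelengths_m, t_est)
    return t_est, emissivity
```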
Comparative analysis of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperspectral Thermal Emission Spectrometer (HyTES) longwave infrared (LWIR) hyperspectral data for geologic mapping
Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and spatially coincident Hyperspectral Thermal Emission Spectrometer (HyTES) data were used to map geology and alteration for a site in northern Death Valley, California and Nevada, USA. AVIRIS data, with 224 bands at 10 nm spectral resolution over the range 0.4 – 2.5 μm and 3-meter spatial resolution, were converted to reflectance using an atmospheric model. HyTES data, with 256 bands at approximately 17 nm spectral resolution covering the 8 – 12 μm range at 4-meter spatial resolution, were converted to emissivity using a longwave infrared (LWIR) radiative transfer atmospheric compensation model and a normalized temperature-emissivity separation approach. Key spectral endmembers were separately extracted and identified for each wavelength region, and the predominant material at each pixel was mapped for each range using Mixture-Tuned Matched Filtering (MTMF), a partial unmixing approach. AVIRIS mapped iron oxides, clays, mica, and silicification (hydrothermal alteration), and distinguished calcite from dolomite. HyTES separated and mapped several igneous phases (not possible using AVIRIS), mapped silicification, and validated the separation of calcite from dolomite. Comparison of the material maps from the different modes, however, reveals complex overlap, indicating that multiple materials/processes exist in many areas. Combined and integrated analyses were performed to compare the individual results and more completely characterize occurrences of multiple materials. Three approaches were used: 1) integrated full-range analysis, 2) combined multimode classification, and 3) directed combined analysis in geologic context. Results illustrate that, together, these two datasets provide an improved picture of the distribution of geologic units and subsequent alteration.
Multispectral Applications
Symmetrized regression for hyperspectral background estimation
We can improve the detection of targets and anomalies in a cluttered background by more effectively estimating that background. With a good estimate of what the target-free radiance or reflectance ought to be at a pixel, we have a point of comparison with what the measured value of that pixel actually happens to be. It is common to make this estimate using the mean of pixels in an annulus around the pixel of interest. But there is more information in the annulus than this mean value, and one can derive more general estimators than just the mean. The derivation pursued here is based on multivariate regression of the central pixel against the pixels in the surrounding annulus. This can be done on a band-by-band basis, or with multiple bands simultaneously. For overhead remote sensing imagery with square pixels, there is a natural eight-fold symmetry in the surrounding annulus, corresponding to reflection and right-angle rotation. We can use this symmetry to impose constraints on the estimator function, and we can use these constraints to reduce the number of regressor variables in the problem. This paper investigates the utility of regression generally, and of a variety of different symmetric regression schemes in particular, for hyperspectral background estimation in the context of generic target detection.
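A minimal, single-band sketch of this symmetrized annulus regression idea is given below; it averages annulus pixels that are equivalent under reflection and right-angle rotation into a reduced set of regressors before fitting a least-squares predictor of the central pixel. The annulus radius, grouping, and fitting details are illustrative choices, not the paper's configuration.

```python
import numpy as np

def symmetric_annulus_features(img, i, j, r=2):
    """Average annulus pixels related by reflection / 90-degree rotation
    (the eight-fold symmetry) into one regressor per symmetry orbit."""
    ring = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if max(abs(dy), abs(dx)) == r]          # square "annulus" of radius r
    feats, done = [], set()
    for (dy, dx) in ring:
        if (dy, dx) in done:
            continue
        # D4 orbit of the offset: reflections and right-angle rotations
        orbit = {(sy * a, sx * b)
                 for (a, b) in [(dy, dx), (dx, dy)]
                 for sy in (-1, 1) for sx in (-1, 1)}
        done |= orbit
        feats.append(np.mean([img[i + oy, j + ox] for (oy, ox) in orbit]))
    return np.array(feats)

def fit_background_estimator(img, r=2):
    """Least-squares regression of each interior pixel on its
    symmetry-averaged annulus features (single band, for clarity)."""
    rows, targets = [], []
    ny, nx = img.shape
    for i in range(r, ny - r):
        for j in range(r, nx - r):
            rows.append(symmetric_annulus_features(img, i, j, r))
            targets.append(img[i, j])
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(X, np.array(targets), rcond=None)
    return coef
```

For r = 2 the 16 annulus pixels collapse into 3 symmetric regressors, illustrating how the symmetry constraints shrink the regression problem.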
A comparison of directed search target detection versus in-scene target detection in Worldview-2 datasets
S. Grossman
Since the events of September 11, 2001, the intelligence focus has moved from large order-of-battle targets to small targets of opportunity. Additionally, the business community has discovered the use of remotely sensed data to anticipate demand and derive data on their competition. This requires the finer spectral and spatial fidelity now available to recognize those targets. This work hypothesizes that directed searches using calibrated data perform at least as well as manually intensive in-scene target detection searches. It uses calibrated Worldview-2 multispectral images with NEF-generated signatures and standard detection algorithms to compare bespoke directed search capabilities against ENVI™ in-scene search capabilities. Multiple execution runs are performed at increasing thresholds to generate detection rates. These rates are plotted and statistically analyzed. While individual head-to-head comparison results vary, 88% of the directed searches performed at least as well as in-scene searches, with 50% clearly outperforming in-scene methods. The results strongly support the premise that directed searches perform at least as well as comparable in-scene searches.
Evaluation techniques and metrics for assessment of pan+MSI fusion (pansharpening)
Fusion of broadband panchromatic data with narrow-band multispectral data – pansharpening – is a common and often studied problem in remote sensing. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. This study examines the output products of four commercial implementations with regard to their relative strengths and weaknesses for a set of defined image characteristics and analyst use-cases. The image characteristics used are spatial detail, spatial quality, spectral integrity, and composite color quality (hue and saturation), and the analyst use-cases include a variety of object detection and identification tasks. The imagery comes courtesy of the RIT SHARE 2012 collect. Two approaches are used to evaluate the pansharpening methods: analyst evaluation (a qualitative measure) and image quality metrics (quantitative measures). Visual analyst evaluation results are compared with metric results to determine which metrics best measure the defined image characteristics and product use-cases, and to support future rigorous characterization of the metrics' correlation with the analyst results. Because pansharpening represents a trade between adding spatial information from the panchromatic image and retaining spectral information from the MSI channels, the metrics examined are grouped into spatial improvement metrics and spectral preservation metrics. A single metric to quantify the quality of a pansharpening method would necessarily be a combination of weighted spatial and spectral metrics, based on the importance of various spatial and spectral characteristics for the primary task of interest. Appropriate metrics and weights for such a combined metric are proposed here, based on the conducted analyst evaluation. Additionally, during this work a metric was developed that is specifically focused on assessment of spatial structure improvement relative to a reference image, independent of scene content. Using analysis of Fourier transform images, a measure of high-frequency content is computed in small sub-segments of the image. The average increase in high-frequency content across the image is used as the metric, where averaging across sub-segments combats the scene-dependent nature of typical image sharpness techniques. This metric had an improved range of scores, better representing differences in the test set than other common spatial structure metrics.
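The sub-segment Fourier metric described in the final sentences might be sketched as follows: a reference image and the pansharpened product are divided into small tiles, the fraction of FFT power above a radial frequency cutoff is computed per tile, and the mean increase is reported. The tile size and cutoff below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def high_freq_fraction(tile, cutoff=0.25):
    """Fraction of FFT power beyond a normalized radial frequency cutoff."""
    f = np.fft.fftshift(np.fft.fft2(tile - tile.mean()))
    power = np.abs(f) ** 2
    ny, nx = tile.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    total = power.sum()
    return power[radius > cutoff].sum() / total if total > 0 else 0.0

def spatial_improvement_metric(reference, fused, tile=32, cutoff=0.25):
    """Average per-tile increase in high-frequency content of the fused
    image relative to the reference; tiling reduces scene dependence."""
    gains = []
    ny, nx = reference.shape
    for y in range(0, ny - tile + 1, tile):
        for x in range(0, nx - tile + 1, tile):
            ref_hf = high_freq_fraction(reference[y:y + tile, x:x + tile], cutoff)
            fus_hf = high_freq_fraction(fused[y:y + tile, x:x + tile], cutoff)
            gains.append(fus_hf - ref_hf)
    return float(np.mean(gains))
```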
Snapshot imaging Fraunhofer line discriminator for detection of plant fluorescence
Non-invasive quantification of plant health is traditionally accomplished using reflectance-based metrics, such as the normalized difference vegetative index (NDVI). However, measuring plant fluorescence (both active and passive) to determine the photochemistry of plants has gained importance. Due to better cost efficiency, lower power requirements, and simpler scanning synchronization, detecting passive fluorescence is preferred over active fluorescence. In this paper, we propose a high-speed imaging approach for measuring passive plant fluorescence, within the hydrogen alpha Fraunhofer line at ~656 nm, using a Snapshot Imaging Fraunhofer Line Discriminator (SIFOLD). For the first time, the advantage of snapshot imaging for high-throughput Fraunhofer Line Discrimination (FLD) is exploited by our system, which is based on a multiple-image Fourier transform spectrometer and a spatial heterodyne interferometer (SHI). The SHI is a Sagnac interferometer, which is dispersion compensated using blazed diffraction gratings. We present data and techniques for calibrating the SIFOLD to any particular wavelength. This technique can be applied to quantify plant fluorescence at low cost and with reduced complexity of data collection.
Assessing the impact of sub-pixel vegetation structure on imaging spectroscopy via simulation
Wei Yao, Martin van Leeuwen, Paul Romanczyk, et al.
Consistent and scalable estimation of vegetation structural parameters from imaging spectroscopy is essential to remote sensing for ecosystem studies, with applications to a wide range of biophysical assessments. To support global vegetation assessment, NASA has proposed the Hyperspectral Infrared Imager (HyspIRI) imaging spectrometer, which measures radiance from 380 to 2500 nm in 10 nm contiguous bands with 60 m ground sample distance (GSD). However, because of the large pixel size on the ground, there is uncertainty as to the effects of vegetation structure on observed radiance. This research evaluates linkages between vegetation structure and imaging spectroscopy. Specifically, we assess the impact of within-pixel vegetation density and position on large-footprint spectral radiances. To achieve this objective, three virtual forest scenes were constructed, which correspond to the actual vegetation structure of the National Ecological Observatory Network (NEON) Pacific Southwest domain (PSW; D17; Fresno, CA). These were used to simulate anticipated HyspIRI data (60 m GSD) using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, a first-principles synthetic image generation model developed by the Rochester Institute of Technology. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and NEON's high-resolution imaging spectrometer (NIS) data were used to verify the geometric parameters and physical models. Multiple simulated HyspIRI data sets were generated by varying within-pixel structural variables, such as forest density, position, and distribution of trees, in order to assess the impact of sub-pixel structural variation on observed HyspIRI data. Results indicate that HyspIRI is sensitive to sub-pixel vegetation density variation in the visible to shortwave infrared spectrum due to vegetation structural changes and associated pigment and water content variation. This has implications for improving the system's suitability for consistent global vegetation structural assessments by adapting calibration strategies to account for this sub-pixel variation.
Poster Session
Imaging white blood cells using a snapshot hyperspectral imaging system
Snapshot hyperspectral imaging systems are capable of capturing several spectral bands simultaneously, offering coregistered images of a target. With appropriate optics, these systems are potentially able to image blood cells in vivo as they flow through a vessel, eliminating the need for a blood draw and sample staining. Our group has evaluated the capability of a commercial snapshot hyperspectral imaging system, the Arrow system from Rebellion Photonics, in differentiating between white and red blood cells on unstained blood smear slides. We evaluated the imaging capabilities of this hyperspectral camera attached to a microscope at varying objective powers and illumination intensities. Hyperspectral data cubes consisting of 25 bands of 443x313 pixels with ~3 nm spacing were captured over the range of 419 to 494 nm. Open-source hyperspectral data cube analysis tools, used primarily in Geographic Information Systems (GIS) applications, indicate that white blood cell features are most prominent in the 428-442 nm band for blood samples viewed under 20x and 50x magnification over a varying range of illumination intensities. These images could potentially be used in subsequent automated white blood cell segmentation and counting algorithms for performing in vivo white blood cell counts.
Cooperative spectral and spatial feature fusion for camouflaged target detection
This paper presents a novel camouflaged target detection method using spectral and spatial feature fusion. Conventional unsupervised learning methods that use only spectral information can be feasible solutions. Such approaches, however, sometimes produce incorrect detection results because spatial information is not considered. This paper proposes a novel band feature selection method that considers both spectral distance and spatial statistics after spectral normalization for illumination invariance. The statistical distance metric generates candidate feature bands, and further analysis of the spatial grouping property trims away useless feature bands. With the proposed spectral-spatial feature fusion, camouflaged targets can be detected more accurately and with lower computational complexity.
On the response function separability of hyperspectral imaging systems
Jurij Jemec, Franjo Pernuš, Boštjan Likar, et al.
Hyperspectral imaging systems collect information across one spectral and two spatial dimensions by employing three main components: a front lens, a light-diffracting element, and a camera. Imperfections in these components introduce spectrally and spatially dependent distortions in the recorded hyperspectral image. These can be characterized by a 3D response function that is subsequently used to remove the distortions and enhance the resolution of the recorded images by deconvolution. The majority of existing characterization methods assume spatial and spectral separability of the 3D response function. In this way, the complex problem of 3D response function characterization is reduced to independent characterizations of three orthogonal response function components. However, if the 3D response function is non-separable, such characterization can lead to poor response function estimates, and hence to inaccurate and distorted results of the subsequent deconvolution-based calibration and image enhancement. In this paper, we evaluate the influence of spatial response function non-separability on the results of calibration by deconvolution. For this purpose, a novel procedure for direct measurement of the 2D spatial response function is proposed, along with a quantitative measure of the spatial response function non-separability. The quality of deconvolved images is assessed in terms of the full width at half maximum (FWHM) and the step-edge overshoot magnitude observed in deconvolved images of slanted edges, images of biological slides, and the 1951 USAF resolution test chart. Results show that there are cases in which non-separability of the system response function is significant and should be considered by deconvolution-based calibration and image enhancement methods.
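One simple, generic way to quantify such non-separability, not necessarily the measure proposed in the paper, is to compare the measured 2D spatial response with its best separable (rank-1, outer-product) approximation obtained from an SVD:

```python
import numpy as np

def nonseparability(psf2d):
    """Relative energy of the measured 2D spatial response that cannot be
    explained by a separable (outer-product) model.

    Returns 0 for a perfectly separable response and approaches 1 as the
    response becomes strongly non-separable.
    """
    u, s, vt = np.linalg.svd(psf2d, full_matrices=False)
    rank1 = s[0] * np.outer(u[:, 0], vt[0, :])   # best separable approximation
    return np.linalg.norm(psf2d - rank1) / np.linalg.norm(psf2d)
```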
Assessment of rainfall and NDVI anomalies in semi-arid regions using distributed lag models
Worku Zewdie, E. Csaplovics
The semiarid regions of Ethiopia are exposed to anthropogenic and natural calamities. In this study, we assessed the relationship between Tropical Applications of Meteorology using Satellite data (TAMSAT) rainfall estimates and MODIS Normalized Difference Vegetation Index (NDVI) data for the period 2000 to 2014 on a decadal and annual basis using multivariate distributed lag (DL) models. Decadal growing-season (June to September) values for Kafta Humera were calculated from MODIS NDVI data. The growing-season NDVI values are highly correlated with precipitation throughout the study period. A lag of up to 30 days was observed in most parts of the study region, and in some areas rainfall affected vegetation growth only after 40 days. The lag-time effects vary with the distribution of land use types and seasons. A lower correlation was observed in the woodland regions, where significant deforestation has occurred due to the expansion of croplands. This extended loss in vegetation cover contributed to low biomass production.
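For reference, a distributed lag model of the kind used above has the generic form below, relating NDVI in period t to rainfall in the current and preceding periods. The maximum lag L (for example three to four 10-day periods, consistent with the reported 30-40 day lags) is an illustrative assumption, as is reading "decadal" as the 10-day dekads commonly used with TAMSAT rainfall data.

```latex
\mathrm{NDVI}_t \;=\; \alpha \;+\; \sum_{i=0}^{L} \beta_i \, R_{t-i} \;+\; \varepsilon_t
```

Here R_{t-i} is the rainfall estimate i periods before period t, and the fitted coefficients beta_i indicate at which lag rainfall most strongly influences vegetation greenness.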
Skin detection in hyperspectral images
Hyperspectral imagers collect information about the scene being imaged in closely spaced, contiguous bands of the electromagnetic spectrum at high spectral resolution. The number of applications for these imagers has grown over the years, and they are now used in many fields. Many algorithms are described in the literature for skin detection in color imagery. However, increased detection accuracy, particularly over cluttered backgrounds, for small targets, and in low-spatial-resolution systems, can be achieved by taking advantage of the spectral information that can be collected with multi/hyperspectral imagers. The ultimate goal of our research is the development of a human presence detection system over different backgrounds using hyperspectral imaging in the 400-1000 nm region of the spectrum, for use in search and rescue operations and in surveillance for defense and security applications. The 400-1000 nm region is chosen because of the availability of low-cost imagers in this region of the spectrum. This paper presents preliminary results on the use of combinations of normalized difference indices to detect regions of interest in a scene, which can serve as a pre-processor in a human detection system. A new normalized difference ratio, the Skin Normalized Difference Index (SNDI), is proposed. Experimental results show that a combination of the NDGRI, NDVI, and SNDI yields a probability of detection similar to that of the NDGRI alone, but with a much lower probability of false alarm.
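All of the indices combined above share the generic normalized-difference form (b1 - b2)/(b1 + b2) over different band pairs; the sketch below computes such indices from a hyperspectral cube and combines them into a candidate region-of-interest mask. The band pairs (especially for the proposed SNDI, whose bands are not given in the abstract) and the thresholds are hypothetical placeholders.

```python
import numpy as np

def normalized_difference(cube, wavelengths, wl1, wl2):
    """Generic normalized difference index (b1 - b2) / (b1 + b2) using the
    cube bands nearest the two requested wavelengths (in nm)."""
    b1 = cube[..., np.argmin(np.abs(wavelengths - wl1))].astype(float)
    b2 = cube[..., np.argmin(np.abs(wavelengths - wl2))].astype(float)
    return (b1 - b2) / (b1 + b2 + 1e-9)

def candidate_skin_mask(cube, wavelengths):
    """Combine NDVI, NDGRI and a placeholder SNDI into a region-of-interest
    mask; the band choices and thresholds are illustrative only."""
    ndvi = normalized_difference(cube, wavelengths, 860.0, 660.0)    # NIR vs red
    ndgri = normalized_difference(cube, wavelengths, 550.0, 660.0)   # green vs red
    sndi = normalized_difference(cube, wavelengths, 940.0, 660.0)    # hypothetical SNDI bands
    return (ndvi < 0.3) & (np.abs(ndgri) < 0.1) & (sndi > 0.0)
```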
Can we match ultraviolet face images against their visible counterparts?
In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. However, face recognition (FR) for face images captured using different camera sensors, and under variable illumination conditions, and expressions is very challenging. In this paper, we investigate the advantages and limitations of the heterogeneous problem of matching ultra violet (from 100 nm to 400 nm in wavelength) or UV, face images against their visible (VIS) counterparts, when all face images are captured under controlled conditions. The contributions of our work are three-fold; (i) We used a camera sensor designed with the capability to acquire UV images at short-ranges, and generated a dual-band (VIS and UV) database that is composed of multiple, full frontal, face images of 50 subjects. Two sessions were collected that span over the period of 2 months. (ii) For each dataset, we determined which set of face image pre-processing algorithms are more suitable for face matching, and, finally, (iii) we determined which FR algorithm better matches cross-band face images, resulting in high rank-1 identification rates. Experimental results show that our cross spectral matching (the heterogeneous problem, where gallery and probe sets consist of face images acquired in different spectral bands) algorithms achieve sufficient identification performance. However, we also conclude that the problem under study, is very challenging, and it requires further investigation to address real-world law enforcement or military applications. To the best of our knowledge, this is first time in the open literature the problem of cross-spectral matching of UV against VIS band face images is being investigated.