Proceedings Volume 9840

Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXII



Volume Details

Date Published: 7 July 2016
Contents: 13 Sessions, 64 Papers, 0 Presentations
Conference: SPIE Defense + Security 2016
Volume Number: 9840

Table of Contents


  • Front Matter: Volume 9840
  • Classification
  • Sensor Characterization
  • Applications
  • Invited Session: Solid Target Variability I
  • Invited Session: Solid Target Variability II
  • Target Detection
  • Invited Session: Novel Mathematically Inspired Methods of Processing Hyperspectral Imagery
  • Spectral Signature Modeling, Measurements, and Applications
  • Dimensionality Reduction
  • Spectral Characterization, Detection, and Identification
  • Sensor Design and Development
  • Interactive Poster Session
Front Matter: Volume 9840
Front Matter: Volume 9840
This PDF file contains the front matter associated with SPIE Proceedings Volume 9840, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Classification
A study of neural network parameters for improvement in classification accuracy
Hyperspectral data, due to their large number of spectral bands, facilitate discrimination between a large number of classes; however, the advantage afforded by hyperspectral data often tends to get lost in the limitations of conventional classifier techniques. Artificial Neural Networks (ANN) have been shown in several studies to outperform conventional classifiers; however, there are several issues regarding the selection of parameters for achieving the best possible classification accuracy. The objectives of this study have accordingly been formulated to include an investigation of the effect of various neural network parameters on the accuracy of hyperspectral image classification. The AVIRIS hyperspectral Indian Pine Test Site 3 dataset, acquired in 220 bands on June 12, 1992, has been used in the study. Thereafter, the maximal feature extraction technique of Principal Component Analysis (PCA) is used to reduce the dataset to 10 bands preserving 99.96% of the variance. The data contain 16 major classes, of which 4 have been considered for ANN-based classification. The parameters selected for the study are the number of hidden layers, hidden nodes, training sample size, learning rate, and learning momentum. The backpropagation method of learning is adopted. The overall accuracy of the trained network has been assessed using a test sample size of 300 pixels. Although the study throws up certain distinct ranges within which higher classification accuracies can be expected, no definite relationship could be identified between the various ANN parameters under study.
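The PCA step described above, reducing a many-band cube to the fewest components that preserve a target fraction of the variance, can be sketched in a few lines of numpy; the toy cube, band count, and 99.96% threshold below are illustrative, not the paper's actual data:

```python
import numpy as np

def pca_reduce(cube, var_target=0.9996):
    """Reduce an (rows, cols, bands) cube to the fewest principal
    components whose cumulative variance fraction meets var_target."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))  # ascending
    evals, evecs = evals[::-1], evecs[:, ::-1]              # descending
    frac = np.cumsum(evals) / evals.sum()
    k = int(np.searchsorted(frac, var_target)) + 1
    return (X @ evecs[:, :k]).reshape(r, c, k), frac[k - 1]

# Toy cube: 220 correlated "bands" driven by a few latent signals.
rng = np.random.default_rng(0)
latent = rng.normal(size=(40 * 40, 5))
mixing = rng.normal(size=(5, 220))
cube = (latent @ mixing + 0.01 * rng.normal(size=(1600, 220))).reshape(40, 40, 220)
reduced, kept = pca_reduce(cube)
print(reduced.shape[-1], kept)
```

Diagonalizing the band covariance with `eigh` and thresholding the cumulative eigenvalue fraction is the standard maximal-variance construction the abstract refers to.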
Tensor subspace analysis for spatial-spectral classification of hyperspectral data
Remotely sensed data fusion aims to integrate multi-source information generated from different perspectives, acquired with different sensors, or captured at different times in order to produce fused data that contain more information than any individual data source. Recently, extended morphological attribute profiles (EMAPs) were proposed to embed contextual information, such as texture, shape, and size, into a high-dimensional feature space as an alternative data source to hyperspectral imagery (HSI). Although EMAPs provide greater capabilities in modeling both spatial and spectral information, they lead to an increase in the dimensionality of the extracted features. Conventionally, a data point in a high-dimensional feature space is represented by a vector. For HSI, this representation has one obvious shortcoming: only spectral knowledge is utilized, without the contextual relationship being exploited. Tensors provide a natural representation for HSI data by incorporating both spatial neighborhood awareness and spectral information. Moreover, tensors can be conveniently incorporated into a superpixel-based HSI processing framework. In this paper, three tensor-based dimensionality reduction (DR) approaches were generalized for high-dimensional imagery, with promising results reported. Among the tensor-based DR approaches, the Tensor Locality Preserving Projection (TLPP) algorithm utilizes the graph Laplacian to model the pairwise relationship among the tensor data points. It also demonstrated excellent performance for both pixel-wise and superpixel-wise classification on the Pavia University dataset.
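The graph Laplacian that TLPP uses to model pairwise relationships among data points can be built from pairwise affinities; a minimal numpy sketch of that ingredient (the Gaussian kernel and its bandwidth are illustrative choices, not necessarily the paper's exact construction):

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W from Gaussian pairwise
    affinities W, the object locality-preserving projections use to
    encode neighborhood structure among the n data points in X (n x d)."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)              # no self-affinity
    return np.diag(W.sum(axis=1)) - W     # D - W

X = np.random.default_rng(4).normal(size=(10, 5))
L = graph_laplacian(X)
print(np.allclose(L.sum(axis=1), 0), np.allclose(L, L.T))
```

By construction L is symmetric, its rows sum to zero, and it is positive semi-definite; TLPP generalizes this to tensor-valued data points.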
Classification performance of a block-compressive sensing algorithm for hyperspectral data processing
Fernando X. Arias, Heidy Sierra, Emmanuel Arzuaga
Compressive sensing is an area of great recent interest for efficient signal acquisition, manipulation, and reconstruction in areas where sensor utilization is a scarce and valuable resource. The current work shows that approaches based on this technology can improve the efficiency of the manipulation, analysis, and storage processes already established for hyperspectral imagery, with little discernible loss in data performance upon reconstruction. We present the results of a comparative analysis of classification performance between a hyperspectral data cube acquired by traditional means and one obtained through reconstruction from compressively sampled data points. To obtain a broad measure of the classification performance of compressively sensed cubes, we classify a scene commonly used for evaluating hyperspectral image processing algorithms, using a set of five classifiers widely applied in hyperspectral image classification. Global accuracy statistics are presented and discussed, as well as class-specific statistical properties of the evaluated data set.
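The abstract does not name its reconstruction algorithm; as an illustration of recovering a compressively sampled signal block, here is a small Orthogonal Matching Pursuit (OMP) sketch with a random Gaussian sensing matrix (all sizes and the sparsity level are hypothetical):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x
    from compressive measurements y = Phi @ x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 64, 40, 3                            # block length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)     # random Gaussian sensing matrix
x_hat = omp(Phi, Phi @ x, k)
print(np.linalg.norm(x - x_hat))
```

With m well above the sparsity level, the greedy search recovers the true support and the least-squares step then reproduces the block essentially exactly, which is the "little discernible loss upon reconstruction" regime the paper evaluates.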
Sensor Characterization
New applications of Spectral Edge image fusion
Alex E. Hayes, Roberto Montagna, Graham D. Finlayson
In this paper, we present new applications of the Spectral Edge image fusion method. The Spectral Edge image fusion algorithm creates a result which combines details from any number of multispectral input images with natural color information from a visible spectrum image. Spectral Edge image fusion is a derivative-based technique, which creates an output fused image whose gradients are an ideal combination of those of the multispectral input images and the input visible color image. This produces both maximum detail and natural colors. We present two new applications of Spectral Edge image fusion. Firstly, we fuse RGB-NIR information from a sensor with a modified Bayer pattern, which captures visible and near-infrared image information on a single CCD. We also present an example of RGB-thermal image fusion, using a thermal camera attached to a smartphone, which captures both visible and low-resolution thermal images. These new results may be useful for computational photography and surveillance applications.
Metamaterial based narrow bandwidth angle-of-incidence independent transmission filters for hyperspectral imaging
In this work, hyperbolic metamaterials are integrated within Bragg transmission filters with the purpose of eliminating the dependence of the center wavelength of a narrow-bandwidth transmission peak on the angle of incidence of the incoming TM-polarized beam. The structure is composed of a multi-layer stack of dielectric materials with an array of metal wires vertically penetrating the entire structure. Two modeling methods are used to simulate the optical properties of the structure: a coupled-wave algorithm that uses a transfer matrix method, and finite element modeling. It is shown that narrow-band transmission filters can be designed such that the center wavelength of the transmission peak for TM-polarized incident light does not change as the angle of incidence of an incoming beam changes. The method is applied to different hypothetical structures operating in the near infrared, mid-wave infrared, and long-wave infrared. A structure operating at 1.5 μm is designed.
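The transfer matrix method mentioned above is the standard way to compute the response of a layered stack; a minimal normal-incidence sketch for a quarter-wave Bragg mirror (the material indices and 1500 nm design wavelength are illustrative, not the paper's structure, and the metal-wire array is omitted entirely):

```python
import numpy as np

def bragg_reflectance(wavelength, pairs, n_h=2.3, n_l=1.45,
                      lam0=1500.0, n0=1.0, ns=1.5):
    """Normal-incidence reflectance of a Bragg stack of quarter-wave
    (at lam0) high/low index layers, via the characteristic matrix
    method. Wavelengths in nm; n0/ns are ambient/substrate indices."""
    M = np.eye(2, dtype=complex)
    for _ in range(pairs):
        for n in (n_h, n_l):
            d = lam0 / (4.0 * n)                  # quarter-wave thickness
            delta = 2.0 * np.pi * n * d / wavelength
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, ns])
    r = (n0 * B - C) / (n0 * B + C)               # amplitude reflectance
    return abs(r) ** 2

print(bragg_reflectance(1500.0, pairs=8))   # inside the stopband: near 1
print(bragg_reflectance(1100.0, pairs=8))   # outside: much lower
```

Adding a narrow defect layer to such a stack is what turns the mirror into the narrow-band transmission filter the paper studies.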
Applications
Developing a confidence metric for the Landsat land surface temperature product
Kelly G. Laraby, John R. Schott, Nina Raqueno
Land Surface Temperature (LST) is an important Earth system data record that is useful to fields such as change detection, climate research, environmental monitoring, and smaller scale applications such as agriculture. Certain Earth-observing satellites can be used to derive this metric, and it would be extremely useful if such imagery could be used to develop a global product. Through the support of the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), a LST product for the Landsat series of satellites has been developed. Currently, it has been validated for scenes in North America, with plans to expand to a trusted global product. For ideal atmospheric conditions (e.g. stable atmosphere with no clouds nearby), the LST product underestimates the surface temperature by an average of 0.26 K. When clouds are directly above or near the pixel of interest, however, errors can extend to several Kelvin. As the product approaches public release, our major goal is to develop a quality metric that will provide the user with a per-pixel map of estimated LST errors. There are several sources of error that are involved in the LST calculation process, but performing standard error propagation is a difficult task due to the complexity of the atmospheric propagation component. To circumvent this difficulty, we propose to utilize the relationship between cloud proximity and the error seen in the LST process to help develop a quality metric. This method involves calculating the distance to the nearest cloud from a pixel of interest in a scene, and recording the LST error at that location. Performing this calculation for hundreds of scenes allows us to observe the average LST error for different ranges of distances to the nearest cloud. This paper describes this process in full, and presents results for a large set of Landsat scenes.
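The per-pixel distance-to-nearest-cloud computation at the heart of the proposed quality metric can be implemented with a Euclidean distance transform on the cloud mask; a small sketch on a synthetic scene (the exponential error-decay model is illustrative only, standing in for the measured LST errors):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def error_vs_cloud_distance(cloud_mask, lst_error, bins):
    """Mean absolute LST error of clear pixels, binned by the distance
    (in pixels) to the nearest cloudy pixel."""
    dist = distance_transform_edt(~cloud_mask)   # 0 inside clouds
    clear = ~cloud_mask
    which = np.digitize(dist[clear], bins)
    return [np.abs(lst_error[clear][which == i]).mean()
            for i in range(1, len(bins))]

# Toy scene: one cloud blob; error decays with distance from it.
mask = np.zeros((50, 50), bool)
mask[20:25, 20:25] = True
d = distance_transform_edt(~mask)
err = 3.0 * np.exp(-d / 10.0)                    # kelvin, synthetic
means = error_vs_cloud_distance(mask, err, bins=[0, 5, 10, 20, 40])
print(means)
```

Averaging such binned errors over many scenes is how the paper maps cloud proximity to an expected LST error.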
Detecting red blotch disease in grape leaves using hyperspectral imaging
Mehrube Mehrubeoglu, Keith Orlebeck, Michael J. Zemlan, et al.
Red blotch disease is a viral disease that affects grapevines. Symptoms appear as irregular blotches on grape leaves with pink and red veins on the underside of the leaves. Red blotch disease causes a reduction in the accumulation of sugar in grapevines, affecting the quality of grapes and resulting in delayed harvest. Detecting and monitoring this disease early is important for grapevine management. This work focuses on the use of hyperspectral imaging for detecting and mapping red blotch disease in grape leaves. Grape leaves with known red blotch disease were imaged with a portable hyperspectral imaging system, both on and off the vine, to investigate the spectral signature of red blotch disease and to identify the diseased areas on the leaves. Modified reflectance calculated at spectral bands corresponding to 566 nm (green) and 628 nm (red), and modified reflectance ratios computed at two sets of bands (566 nm / 628 nm, 680 nm / 738 nm), were selected as effective features to differentiate red blotch from healthy-looking and dry leaf areas. These two modified reflectance values and two modified reflectance ratios were then used to train a support vector machine (SVM) classifier in a supervised learning scheme. Once the SVM classifier was defined, two-class classification was achieved for grape leaf hyperspectral images. Identification of red blotch disease on grape leaves, as well as mapping of different stages of the disease using hyperspectral imaging, are presented in this paper.
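The feature extraction step, two band reflectances plus two band ratios, reduces to picking the bands nearest the stated wavelengths and stacking the ratios; a numpy sketch (band centers and cube values are synthetic, and the "modified reflectance" preprocessing is assumed already applied):

```python
import numpy as np

def band_index(wavelengths, target_nm):
    """Index of the band whose center is closest to a target wavelength."""
    return int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))

def red_blotch_features(cube, wavelengths):
    """Per-pixel 4-feature vector: R566, R628, R566/R628, R680/R738."""
    b = {nm: cube[..., band_index(wavelengths, nm)]
         for nm in (566, 628, 680, 738)}
    eps = 1e-12                                  # guard against divide-by-zero
    return np.stack([b[566], b[628],
                     b[566] / (b[628] + eps),
                     b[680] / (b[738] + eps)], axis=-1)

wl = np.linspace(400, 1000, 120)                 # hypothetical band centers
cube = np.random.default_rng(2).uniform(0.05, 0.6, size=(8, 8, 120))
feats = red_blotch_features(cube, wl)
print(feats.shape)
```

The resulting 4-vector per pixel is what the paper feeds to the SVM for the two-class (diseased vs. not) decision.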
Spectral feature characterization methods for blood stain detection in crime scene backgrounds
Jie Yang, Jobin J. Mathew, Roger R. Dube, et al.
Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially for dark and red backgrounds. This paper measures the reflectance of liquid blood and 9 kinds of non-blood samples over the range 350 nm - 2500 nm against various crime scene backgrounds: pure samples contained in petri dishes at various thicknesses, samples mixed with fabrics of different colors and materials, and samples mixed with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood versus non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined, and spectral features such as “peaks” and “depths” of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of “depth” minus “peak” over “depth” plus “peak” within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect blood on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
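The first method's index, (depth - peak) / (depth + peak) within a chosen wavelength window, can be sketched directly; here "peak" and "depth" are taken as the window's maximum and minimum reflectance, and the window bounds are illustrative, since the paper selects its features from measured blood spectra:

```python
import numpy as np

def blood_index(spectrum, wavelengths, lo_nm, hi_nm):
    """(depth - peak) / (depth + peak) within one wavelength window,
    where 'peak' is the maximum and 'depth' the minimum reflectance.
    Strong absorption in the window drives the index toward -1."""
    win = (wavelengths >= lo_nm) & (wavelengths <= hi_nm)
    peak, depth = spectrum[win].max(), spectrum[win].min()
    return (depth - peak) / (depth + peak)

wl = np.linspace(350, 2500, 500)
flat = np.full_like(wl, 0.4)                        # featureless background
dip = 0.4 - 0.3 * np.exp(-((wl - 550) / 20) ** 2)   # absorption near 550 nm
print(blood_index(flat, wl, 500, 600), blood_index(dip, wl, 500, 600))
```

A featureless background scores near zero while a spectrum with a deep absorption feature scores strongly negative, which is the contrast the index method thresholds on.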
Invited Session: Solid Target Variability I
Ideal system morphology and reflectivity measurements for radiative-transfer model development and validation
T. J. Kulp, R. L. Sommers, K. L. Krafcik, et al.
This paper describes measurements being made on a series of material systems for the purpose of developing a radiative-transfer model that describes the reflectance of light by granular solids. It is well recognized that the reflectance spectra of granular materials depend on their intrinsic (n(λ) and k(λ)) and extrinsic (morphological) properties. There is, however, a lack of robust and proven models to relate spectra to these parameters. The described work is being conducted in parallel with a modeling effort1 to address this need. Each follows a common developmental spiral in which material properties are varied and the ability of the model to calculate the effects of the changes is tested. The parameters being varied include particle size/shape, packing density, material birefringence, optical thickness, and spectral contribution of a substrate. It is expected that the outcome of this work will be useful in interpreting reflectance data for hyperspectral imaging (HSI), and for a variety of other areas that rely on it.
Experimental effects on IR reflectance spectra: particle size and morphology
Toya N. Beiswenger, Tanya L. Myers, Carolyn S. Brauer, et al.
For geologic and extraterrestrial samples it is known that both particle size and morphology can have strong effects on a species’ infrared reflectance spectra. Due to such effects, the reflectance spectra cannot be predicted from the absorption coefficients alone. This is because reflectance is both a surface as well as a bulk phenomenon, incorporating both dispersion as well as absorption effects. The same spectral feature can even be observed as either a maximum or minimum. The complex effects depend on particle size and preparation, as well as the relative amplitudes of the optical constants n and k, i.e. the real and imaginary components of the complex refractive index. While somewhat oversimplified, upward-going amplitude in the reflectance spectrum usually results from surface scattering, i.e. rays that have been reflected from the surface without penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. While the effects are known, we report seminal measurements of reflectance along with the quantified particle sizes of the samples, the sizing obtained from optical microscopy measurements. The size measurements are correlated with the reflectance spectra in the 1.3-16 μm range for various bulk materials that have a combination of strong and weak absorption bands in order to understand the effects on the spectral features as a function of the mean grain size. We report results for both anhydrous sodium sulfate (Na2SO4) as well as ammonium sulfate ((NH4)2SO4); the optical constants have been measured for (NH4)2SO4. To go a step further from the laboratory and into the field, we explore our understanding of particle size effects on reflectance spectra using standoff detection at distances of up to 160 meters in a field experiment.
The studies have shown that particle size has a strong influence on the measured reflectance spectra of such materials; successful identification requires sufficient, representative reflectance data to include the particle sizes of interest.
A next generation field-portable goniometer system
Justin D. Harms, Charles M. Bachmann, Jason W. Faulring, et al.
Various field portable goniometers have been designed to capture in-situ measurements of a material's bi-directional reflectance distribution function (BRDF), each with a specific scientific purpose in mind.1-4 The Rochester Institute of Technology's (RIT) Chester F. Carlson Center for Imaging Science recently created a novel instrument incorporating a wide variety of features into one compact apparatus in order to obtain very high accuracy BRDFs of short vegetation and sediments, even in undesirable conditions and austere environments. This next generation system integrates a dual-view design using two VNIR/SWIR spectroradiometers to capture target reflected radiance, as well as incoming radiance, to provide for better optical accuracy when measuring in non-ideal atmospheric conditions or when background illumination effects are non-negligible. The new, fully automated device also features a laser range finder to construct a surface roughness model of the target being measured, which enables the user to include inclination information into BRDF post-processing and further allows for roughness effects to be better studied for radiative transfer modeling. The highly portable design features automatic leveling, a precision engineered frame, and a variable measurement plane that allow for BRDF measurements on rugged, uneven terrain while still maintaining true angular measurements with respect to the target, all without sacrificing measurement speed. Despite the expanded capabilities and dual sensor suite, the system weighs less than 75 kg, which allows for excellent mobility and data collection on soft, silty clay or fine sand.
Invited Session: Solid Target Variability II
NEFDS contamination model parameter estimation of powder contaminated surfaces
Timothy J. Gibbs, David W. Messinger
Hyperspectral signatures of powder-contaminated surfaces are challenging to characterize due to intimate mixing between materials. Most radiometric models have difficulties in recreating these signatures due to non-linear interactions between particles with different physical properties. The Nonconventional Exploitation Factors Data System (NEFDS) Contamination Model is capable of recreating longwave hyperspectral signatures at any contamination mixture amount, but only for a limited selection of materials currently in the database. A method has been developed to invert the NEFDS model and perform parameter estimation on emissivity measurements from a variety of powdered materials on substrates. This model was chosen for its potential to accurately determine contamination coverage density as a parameter in the inverted model. Emissivity data were measured using a Designs and Prototypes Fourier transform infrared spectrometer (Model 102) for different levels of contamination. Temperature emissivity separation was performed to convert data from measured radiance to estimated surface emissivity. Emissivity curves were then input into the inverted model and parameters were estimated for each spectral curve. A comparison of measured data with extrapolated model emissivity curves using estimated parameter values assessed performance of the inverted NEFDS contamination model. This paper will present the initial results of the experimental campaign and the estimated surface coverage parameters.
Radiative transfer modeling of surface chemical deposits
Remote detection of a surface-bound chemical relies on the recognition of a pattern, or “signature,” that is distinct from the background. Such signatures are a function of a chemical’s fundamental optical properties, but also depend upon its specific morphology. Importantly, the same chemical can exhibit vastly different signatures depending on the size of particles composing the deposit. We present a parameterized model to account for such morphological effects on surface-deposited chemical signatures. This model leverages computational tools developed within the planetary and atmospheric science communities, beginning with T-matrix and ray-tracing approaches for evaluating the scattering and extinction properties of individual particles based on their size and shape, and the complex refractive index of the material itself. These individual-particle properties then serve as input to the Ambartsumian invariant imbedding solution for the reflectance of a particulate surface composed of these particles. The inputs to the model include parameters associated with a functionalized form of the particle size distribution (PSD) as well as parameters associated with the particle packing density and surface roughness. The model is numerically inverted via Sandia’s Dakota package, optimizing agreement between modeled and measured reflectance spectra, which we demonstrate on data acquired on five size-selected silica powders over the 4-16 μm wavelength range. Agreements between modeled and measured reflectance spectra are assessed, while the optimized PSDs resulting from the spectral fitting are then compared to PSD data acquired from independent particle size measurements.
Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling
Dave W. Engel, Thomas A. Reichardt, Thomas J. Kulp, et al.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Advancing the retrievals of surface emissivity by modelling the spatial distribution of temperature in the thermal hyperspectral scene
M. Shimoni, R. Haelterman, P. Lodewyckx
Land Surface Temperature (LST) and Land Surface Emissivity (LSE) are commonly retrieved from thermal hyperspectral imaging. However, their retrieval is not a straightforward procedure because the mathematical problem is ill-posed. This procedure becomes more challenging in an urban area, where the spatial distribution of temperature varies substantially in space and time. To assess the influence of several spatial variances on the deviation of the temperature in the scene, a statistical model was created. The model was tested using several images from various times of day and was validated using in-situ measurements. The results highlight the importance of the geometry of the scene and its setting relative to the position of the sun during daytime. They also show that when the sun is at zenith, the main contribution to the thermal distribution in the scene is the thermal capacity of the landcover materials. In this paper we propose a new Temperature and Emissivity Separation (TES) method which integrates 3D surface and landcover information from LIDAR and VNIR hyperspectral imaging data in an attempt to improve the TES procedure for a thermal hyperspectral scene. The experimental results demonstrate the high accuracy of the proposed method in comparison to a conventional TES model.
Modeling and analysis of LWIR signature variability associated with 3D and BRDF effects
Steven Adler-Golden, David Less, Xuemin Jin, et al.
Algorithms for retrieval of surface reflectance, emissivity or temperature from a spectral image almost always assume uniform illumination across the scene and horizontal surfaces with Lambertian reflectance. When these algorithms are used to process real 3-D scenes, the retrieved “apparent” values contain the strong, spatially dependent variations in illumination as well as surface bidirectional reflectance distribution function (BRDF) effects. This is especially problematic with horizontal or near-horizontal viewing, where many observed surfaces are vertical, and where horizontal surfaces can show strong specularity. The goals of this study are to characterize long-wavelength infrared (LWIR) signature variability in a HSI 3-D scene and develop practical methods for estimating the true surface values. We take advantage of synthetic near-horizontal imagery generated with the high-fidelity MultiService Electro-optic Signature (MuSES) model, and compare retrievals of temperature and directional-hemispherical reflectance using standard sky downwelling illumination and MuSES-based non-uniform environmental illumination.
Solid target spectral variability in LWIR
We continue to highlight the pattern recognition challenges associated with solid target spectral variability in the longwave infrared (LWIR) region of the electromagnetic spectrum for a persistent imaging experiment. The experiment focused on the collection and exploitation of LWIR hyperspectral imagery. We propose two methods for target detection, one based on the repeated-random-sampling trial adaptation to a single-class version of support vector machine, and the other based on a longitudinal data model. The defining characteristic of a longitudinal study is that objects are measured repeatedly through time and, as a result, data are dependent. This is in contrast to cross-sectional studies in which the outcomes of a specific event are observed by randomly sampling from a large population of relevant objects in which data are assumed independent. Researchers in the remote sensing community generally assume the problem of object recognition to be cross-sectional. Performance contrast is quantified using a LWIR hyperspectral dataset acquired during three consecutive diurnal cycles, and the results reinforce the need for data models that are more realistic for LWIR spectral data.
Spectral BRDF modeling of vehicle signature observations in the VNIR-SWIR
T. Perkins, S. Adler-Golden, L. Muratov, et al.
Hyperspectral imaging (HSI) sensors have the ability to detect and identify objects within a scene based on the distinct attributes of their surface spectral signatures. Many targets of interest, such as vehicles, represent a complex arrangement of specular (non-Lambertian) materials with curved and flat surfaces oriented at varying view factors. This complexity, combined with possible changing atmospheric/illumination conditions and viewing geometries, can produce significant variations in the observed signatures from measurement to measurement, making detection and/or reacquisition challenging. This paper focuses on the characterization of visible-near infrared-short wave infrared (VNIR-SWIR) spectra for detection, identification and tracking of vehicles. Signature variations are predicted using a novel image simulation tool to calculate spectral images of complex 3D objects from a spectral material description such as the modified Beard-Maxwell BRDF model, a wireframe shape model, and a directional model of the illumination. We compare the simulations with recent VNIR-SWIR hyperspectral imagery of vehicles and panels collected at the Rochester Institute of Technology during an Autumn 2015 measurement campaign. Variations in both the simulated and measured spectra arise mainly from differences in the relative glint contribution. Implications of these variations on vehicle detection and identification are briefly discussed.
Instance influence estimation for hyperspectral target signature characterization using extended functions of multiple instances
The Extended Functions of Multiple Instances (eFUMI) algorithm1 is a generalization of Multiple Instance Learning (MIL). In eFUMI, only bag level (i.e. set level) labels are needed to estimate target signatures from mixed data. The training bags in eFUMI are labeled positive if any data point in a bag contains or represents any proportion of the target signature and are labeled as a negative bag if all data points in the bag do not represent any target. From these imprecise labels, eFUMI has been shown to be effective at estimating target signatures in hyperspectral subpixel target detection problems. One motivating scenario for the use of eFUMI is where an analyst circles objects/regions of interest in a hyperspectral scene such that the target signatures of these objects can be estimated and be used to determine whether other instances of the object appear elsewhere in the image collection. The regions highlighted by the analyst serve as the imprecise labels for eFUMI. Often, an analyst may want to iteratively refine their imprecise labels. In this paper, we present an approach for estimating the influence on the estimated target signature if the label for a particular input data point is modified. This instance influence estimation guides an analyst to focus on (re-)labeling the data points that provide the largest change in the resulting estimated target signature and, thus, reduce the amount of time an analyst needs to spend refining the labels for a hyperspectral scene. Results are shown on real hyperspectral sub-pixel target detection data sets.
Graph-based and statistical approaches for detecting spectrally variable target materials
In discriminating target materials from background clutter in hyperspectral imagery, one must contend with variability in both. Most algorithms focus on the clutter variability, but for some materials there is considerable variability in the spectral signatures of the target. This is especially the case for solid target materials, whose signatures depend on morphological properties (particle size, packing density, etc.) that are rarely known a priori. In this paper, we investigate detection algorithms that explicitly take into account the diversity of signatures for a given target. In particular, we investigate variable target detectors when applied to new representations of the hyperspectral data: a manifold learning based approach, and a residual based approach. The graph theory and manifold learning based approach incorporates multiple spectral signatures of the target material of interest; this is built upon previous work that used a single target spectrum. In this approach, we first build an adaptive nearest neighbors (ANN) graph on the data and target spectra, and use a biased locally linear embedding (LLE) transformation to perform nonlinear dimensionality reduction. This biased transformation results in a lower-dimensional representation of the data that better separates the targets from the background. The residual approach uses an annulus based computation to represent each pixel after an estimate of the local background is removed, which suppresses local backgrounds and emphasizes the target-containing pixels. We will show detection results in the original spectral space, the dimensionality-reduced space, and the residual space, all using subspace detectors: ranked spectral angle mapper (rSAM), subspace adaptive matched filter (ssAMF), and subspace adaptive cosine/coherence estimator (ssACE). Results of this exploratory study will be shown on a ground-truthed hyperspectral image with variable target spectra and both full and mixed pixel targets.
Identification of solid materials using HSI spectral oscillators
Cory L. Lanker, Milton O. Smith
Our research aims to characterize solid materials through LWIR reflectance spectra in order to improve compositional exploitation of hyperspectral imaging (HSI) sensor data cubes. Specifically, we aim to reduce false alarm rates when identifying target materials without compromising sensitivity. We employ dispersive analysis to extract the material oscillator resonances from reflectance spectra, using a stepwise fitting algorithm to estimate the Lorentz or Gaussian oscillators effectively present in the HSI spectral measurements. The proposed algorithm performs nonlinear least squares minimization via a grid search over potential oscillator resonance frequencies and widths. Experimental validation of the algorithm is performed with published values of crystalline and amorphous materials. Our aim is to use the derived oscillator parameters to characterize the materials that are present in an HSI pixel. We demonstrate that there are material-specific properties of oscillators that show subtle variability when considering changes in morphology or measurement conditions. The experimentally verified results include variability in material particle size, measurement angle, and atmospheric conditions for six mineral measurements. Once a target material’s oscillators are characterized, we apply statistical learning techniques to form a classifier based on the estimated spectral oscillators of the HSI pixels. We show that this approach yields good initial identification results that are extensible across localized experimental conditions.
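The stepwise fitting algorithm itself is not specified in the abstract, but the forward model such a fit would invert is the classical Lorentz oscillator dielectric function, with normal-incidence reflectance derived from it. A minimal sketch (all parameter values are illustrative, not from the paper):

```python
import numpy as np

def lorentz_epsilon(omega, eps_inf, oscillators):
    """Complex dielectric function from a sum of Lorentz oscillators.

    oscillators: list of (strength S, resonance w0, damping gamma) tuples.
    """
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for S, w0, gamma in oscillators:
        eps += S * w0**2 / (w0**2 - omega**2 - 1j * gamma * omega)
    return eps

def normal_reflectance(eps):
    """Fresnel reflectance at normal incidence from the complex dielectric function."""
    n = np.sqrt(eps)
    return np.abs((n - 1) / (n + 1))**2

# One oscillator with a resonance at w0 = 1000 (arbitrary wavenumber units).
omega = np.linspace(800.0, 1200.0, 401)
R = normal_reflectance(lorentz_epsilon(omega, eps_inf=2.0,
                                       oscillators=[(0.5, 1000.0, 20.0)]))
```

A grid-search fit would evaluate this model over candidate (w0, gamma) pairs and keep the least-squares best set of oscillators.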
Target Detection
icon_mobile_dropdown
Anomaly detection in hyperspectral imagery: statistics vs. graph-based algorithms
Anomaly detection (AD) algorithms are frequently applied to hyperspectral imagery, but different algorithms produce different outlier results depending on the image scene content and the assumed background model. This work provides the first comparison of anomaly score distributions between common statistics-based anomaly detection algorithms (RX and subspace-RX) and the graph-based Topological Anomaly Detector (TAD). Anomaly scores in statistical AD algorithms should theoretically approximate a chi-squared distribution; however, this is rarely the case with real hyperspectral imagery. The expected distribution of scores found with graph-based methods remains unclear. We also look for general trends in algorithm performance with varied scene content. Three separate scenes were extracted from the hyperspectral MegaScene image taken over downtown Rochester, NY with the VIS-NIR-SWIR ProSpecTIR instrument. In order of most to least cluttered, we study an urban, suburban, and rural scene. The three AD algorithms were applied to each scene, and the distributions of the most anomalous 5% of pixels were compared. We find that subspace-RX performs better than RX, because the data becomes more normal when the highest variance principal components are removed. We also see that compared to statistical detectors, anomalies detected by TAD are easier to separate from the background. Due to their different underlying assumptions, the statistical and graph-based algorithms highlighted different anomalies within the urban scene. These results will lead to a deeper understanding of these algorithms and their applicability across different types of imagery.
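For reference, the global RX statistic that the statistical detectors build on is the Mahalanobis distance of each pixel from the scene background. A minimal sketch with a toy scene (the regularization constant and data are illustrative):

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly score: Mahalanobis distance of each pixel from the scene mean.

    cube: (N, B) array of N pixel spectra with B bands.
    """
    mu = cube.mean(axis=0)
    cov = np.cov(cube, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cube.shape[1]))  # ridge for stability
    diff = cube - mu
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

# Toy scene: Gaussian background plus one injected outlier pixel.
rng = np.random.default_rng(0)
background = rng.normal(size=(500, 5))
scene = np.vstack([background, np.full((1, 5), 8.0)])  # anomaly in last row
scores = rx_scores(scene)
```

Under a truly Gaussian background these scores follow a chi-squared distribution with B degrees of freedom, which is the theoretical expectation the paper tests against real imagery.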
Target detection in hyperspectral imaging using logistic regression
Target detection is an important application of hyperspectral imaging. Conventional target detection algorithms assume that the pixels follow a multivariate normal distribution; however, pixels in most real images do not. The logistic regression model, which does not require the assumption of multivariate normality, is proposed in this paper as a target detection algorithm. Experimental results show that the logistic regression model can work well in target detection.
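The paper's exact training setup is not described; as a rough sketch of the idea, logistic regression can be fit on labeled target/background pixel spectra and then applied per pixel to yield a target probability. The hand-rolled gradient-descent fit and toy data below are hypothetical:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, iters=2000):
    """Fit logistic regression by gradient descent; y holds 0 (background) / 1 (target)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def detect(X, w):
    """Return P(target | pixel) for each pixel spectrum; threshold to declare detections."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Toy data: background clustered near 0, targets near 2 (per band).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(100, 4)),
               rng.normal(2.0, 0.3, size=(100, 4))])
y = np.concatenate([np.zeros(100), np.ones(100)])
w = train_logistic(X, y)
probs = detect(X, w)
```

Note that nothing here assumes Gaussian pixels; the model only learns a separating decision boundary in spectral space.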
Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery
Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since the blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging might be a potential method to detect blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. The potential complexity of scenes in which such vast violence occurs can be high when the range of scene material types and conditions containing blood stains at a crime scene are considered. Some stains are hard to detect by the unaided eye, especially if a conscious effort to clean the scene has occurred (we refer to these as “latent” blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering 400 nm - 700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
Biased normalized cuts for target detection in hyperspectral imagery
The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. As opposed to other target detection algorithms that typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets in interactive mode. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and the position of expert-provided target labels and the precision/recall of the remaining targets in the scene.
Methods and challenges for target detection and material identification for longwave infrared hyperspectral imagery
Blake M. Rankin, Joseph Meola, David L. Perry, et al.
Hyperspectral imaging (HSI) combined with target detection and identification algorithms require spectral signatures for target materials of interest. The longwave infrared (LWIR) region of the electromagnetic spectrum is dominated by thermal emission, and thus, estimates of target temperature are necessary for emissivity retrieval through temperature-emissivity separation or for conversion of known emissivity signatures to radiance units. Therefore, lack of accurate target temperature information poses a significant challenge for target detection and identification algorithms. Previous studies have demonstrated both LWIR target detection using signature subspaces and visible/shortwave subpixel target identification. This work compares adaptive coherence estimator (ACE) and subspace target detection algorithms for various target materials, atmospheric compensation algorithms, and imagery domains (radiance or emissivity) for several data sets. Preliminary results suggest that target detection in the radiance and emissivity domains is complementary, in the sense that certain material classes may be more easily detected using subspaces, while others require conversion to emissivity space. Furthermore, a radiance domain LWIR material identification algorithm that accounts for target temperature uncertainty is presented. The latter algorithm is shown to effectively distinguish between materials with a high degree of spectral similarity.
Invited Session: Novel Mathematically Inspired Methods of Processing Hyperspectral Imagery
icon_mobile_dropdown
Agile multi-scale decompositions for automatic image registration
James M. Murphy, Omar Navarro Leija, Jacqueline Le Moigne
In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the mixed MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
Schroedinger eigenmaps with knowledge propagation for target detection
The applicability of Laplacian Eigenmaps (LE) and Schroedinger Eigenmaps (SE) has been widely shown in the processing of hyperspectral imagery. Specifically, we have previously shown that SE has a promising performance in spectral target detection. SE, unlike LE, can include prior information or labeled data points in a barrier potential term that steers the transformation in certain directions, pulling the labeled points and similar points toward the origin in the new space. We have also noticed that the barrier potentials generated from a few labeled points may affect the dimensionality in the Schroedinger space in a brittle manner, and in turn, the target detection performance. In this paper, we show that the number of Schroedinger eigenvectors used in the detection can be increased without affecting the detection performance by adding spatial and spectral constraints on the individual labeled points and propagating this knowledge to nearby points through a modified Schroedinger matrix. We apply our algorithm to hyperspectral data sets with several target panels and different complexity in order to provide a broad assessment framework.
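As a rough sketch of the basic Schroedinger eigenmaps idea described above (not the authors' modified knowledge-propagation matrix), the embedding comes from the low eigenvectors of the graph Laplacian plus a diagonal barrier potential supported on the labeled points; the potential penalizes eigenvector magnitude at those vertices, pulling them toward the origin of the embedding. All parameter values are illustrative:

```python
import numpy as np

def schroedinger_embedding(X, labeled_idx, alpha=10.0, sigma=1.0, dim=2):
    """Embed points with eigenvectors of S = L + alpha*V, where L is the graph
    Laplacian of a Gaussian-affinity graph and V is a diagonal barrier potential
    that is nonzero at the labeled points."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))          # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                 # unnormalized graph Laplacian
    V = np.zeros(len(X))
    V[labeled_idx] = 1.0                      # barrier potential on labeled points
    S = L + alpha * np.diag(V)
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, 1:dim + 1]                 # skip the first (near-constant) eigenvector

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 3))
emb = schroedinger_embedding(X, labeled_idx=[0, 1, 2])
```

With a strong potential, the labeled vertices end up measurably closer to the origin than the rest, which is the steering effect the abstract describes.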
Building robust neighborhoods for manifold learning-based image classification and anomaly detection
We exploit manifold learning algorithms to perform image classification and anomaly detection in complex scenes involving hyperspectral land cover and broadband IR maritime data. The results of standard manifold learning techniques are improved by including spatial information. This is accomplished by creating super-pixels which are robust to affine transformations inherent in natural scenes. We utilize techniques from harmonic analysis and image processing, namely, rotation, skew, flip, and shift operators to develop a more representational graph structure which defines the data-dependent manifold.
A parametric study of unsupervised anomaly detection performance in maritime imagery using manifold learning techniques
We investigate the parameters that govern an unsupervised anomaly detection framework that uses nonlinear techniques to learn a better model of the non-anomalous data. A manifold or kernel-based model is learned from a small, uniformly sampled subset in order to reduce computational burden and under the assumption that anomalous data will have little effect on the learned model because their rarity reduces the likelihood of their inclusion in the subset. The remaining data are then projected into the learned space and their projection errors used as detection statistics. Here, kernel principal component analysis is considered for learning the background model. We consider spectral data from an 8-band multispectral sensor as well as panchromatic infrared images treated by building a data set composed of overlapping image patches. We consider detection performance as a function of patch neighborhood size as well as embedding parameters such as kernel bandwidth and dimension. ROC curves are generated over a range of parameters and compared to RX performance.
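The general subsample-learn-project workflow above can be sketched as follows, substituting plain PCA for the paper's kernel PCA for brevity; the projection-error detection statistic is the same in spirit. All data and parameters are toy:

```python
import numpy as np

def fit_background_model(sample, dim):
    """Learn a low-dimensional background subspace from a small uniform subsample.
    (The paper uses kernel PCA; plain PCA is substituted here for brevity.)"""
    mu = sample.mean(0)
    _, _, Vt = np.linalg.svd(sample - mu, full_matrices=False)
    return mu, Vt[:dim]                      # top principal directions

def projection_error(X, mu, basis):
    """Detection statistic: norm of each pixel's residual off the learned subspace."""
    Xc = X - mu
    recon = (Xc @ basis.T) @ basis
    return np.linalg.norm(Xc - recon, axis=1)

rng = np.random.default_rng(3)
# Background lives near a 2-D plane in 8-band space; one anomaly lies far off it.
basis_true = rng.normal(size=(2, 8))
background = rng.normal(size=(400, 2)) @ basis_true + 0.05 * rng.normal(size=(400, 8))
anomaly = 5.0 * rng.normal(size=(1, 8))
data = np.vstack([background, anomaly])
# Uniform subsample of the background-dominated scene; in practice rare anomalies
# are unlikely to be drawn, so they are excluded here for a deterministic demo.
subset = data[rng.choice(400, 50, replace=False)]
mu, B = fit_background_model(subset, dim=2)
scores = projection_error(data, mu, B)
```

The anomaly projects poorly onto the learned background subspace, so its residual dominates the score distribution.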
Use of high dimensional model representation in dimensionality reduction: application to hyperspectral image classification
Recently, information extraction from hyperspectral images (HI) has become an attractive research area for many practical applications in earth observation, since HI provide valuable information across a huge number of spectral bands. Traditional methods may not process such a large amount of data effectively, because they mostly do not account for the high dimensionality of the data, which causes the curse of dimensionality, also known as the Hughes phenomenon. In supervised classification, the limited availability of training samples consequently leads to poor generalization performance. Therefore, advanced methods accounting for the high dimensionality need to be developed in order to achieve good generalization capability. In this work, High Dimensional Model Representation (HDMR) was utilized for dimensionality reduction, and a novel feature selection method was introduced based on global sensitivity analysis. Several experiments were conducted with hyperspectral images in comparison to state-of-the-art feature selection algorithms in terms of classification accuracy, and the results showed that the proposed method outperforms the other feature selection methods with all considered classifiers: support vector machines, Bayes, and the J48 decision tree.
Analyzing hyperspectral images into multiple subspaces using Gaussian mixture models
Clay D. Spence
I argue that the spectra in a hyperspectral datacube will usually lie in several low-dimensional subspaces, and that these subspaces are more easily estimated from the data than the endmembers. I present an algorithm for finding the subspaces. The algorithm fits the data with a Gaussian mixture model, in which the means and covariance matrices are parameterized in terms of the subspaces. The locations of materials can be inferred from the fit of library spectra to the subspaces. The algorithm can be modified to perform material detection. This has better performance than standard algorithms such as ACE, and runs in real time.
A nonlinear modeling framework for the detection of underwater objects in hyperspectral imagery
The detection of underwater objects of interest (or targets) in hyperspectral imagery is a challenging problem, with a number of complications that are not present in land-based hyperspectral target detection. The main challenge in underwater detection is that, in contrast to land, where the observed spectrum of an associated target is largely independent of the surrounding background (e.g. the signature of a tank looks more or less the same whether it is on a road or in a field of grass), the observed spectrum of an underwater target is a highly nonlinear function of the background – that is, the optical properties of the water body that the object is submerged in, as well as the depth of the target. As a result, the same object in different types of water and/or at different depths will in general have very different observed spectral signatures. In this work, we present a general overview of the various challenges involved in underwater detection, and present a novel approach that fuses forward radiative-transfer modeling, ocean color predictions, and nonlinear mathematical techniques (manifold learning) to model both the background and target signature(s) and perform detection over a wide range of environmental conditions and depths.
Spectral Signature Modeling, Measurements, and Applications
icon_mobile_dropdown
A hyperspectral vehicle BRDF sampling experiment
Hyperspectral imagery was taken of four vehicles from a roof at the Rochester Institute of Technology (RIT) at various vehicle orientations in illumination conditions dominated by direct solar radiation in order to explore and model the in-scene bidirectional reflectance distribution functions (BRDFs) of 3D objects. The four vehicles were rotated and imaged through the span of six hours resulting in many combinations of vehicle orientation, source azimuth, and source zenith. In addition to the general sampling of vehicle BRDFs, three experiments were designed and executed in order to understand the contributions of vehicle shape, vehicle color, and background on the observed in-scene BRDFs.
Calculation of vibrational and electronic excited state absorption spectra of arsenic-water complexes using density functional theory
L. Huang, S. G. Lambrakos, A. Shabaev, et al.
Calculations are presented of vibrational and electronic excited-state absorption spectra for As-H2O complexes using density functional theory (DFT) and time-dependent density functional theory (TD-DFT). DFT and TD-DFT can provide interpretation of absorption spectra with respect to molecular structure for excitation by electromagnetic waves at frequencies within the IR and UV-visible ranges. The absorption spectrum corresponding to excitation states of As-H2O complexes consisting of relatively small numbers of water molecules should be associated with response features that are intermediate between those of isolated molecules and those of a bulk system. DFT and TD-DFT calculated absorption spectra represent quantitative estimates that can be correlated with additional information obtained from laboratory measurements and other types of theory-based calculations. The DFT software GAUSSIAN was used for the calculations of excitation states presented here.
Modeling of forest canopy BRDF using DIRSIG
Rajagopalan Rengarajan, John R. Schott
The characterization and temporal analysis of multispectral and hyperspectral data to extract the biophysical information of the Earth's surface can be significantly improved by understanding its anisotropic reflectance properties, which are best described by a Bi-directional Reflectance Distribution Function (BRDF). Advances in remote sensing techniques and instrumentation have made hyperspectral BRDF measurements in the field possible using sophisticated goniometers. However, natural surfaces such as forest canopies impose limitations on both the data collection techniques and the range of illumination angles that can be collected in the field. These limitations can be mitigated by measuring BRDF in a virtual environment. This paper presents an approach to model the spectral BRDF of a forest canopy using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. A synthetic forest canopy scene is constructed by modeling the 3D geometries of different tree species using OnyxTree software. Field-collected spectra from the Harvard forest are used to represent the optical properties of the tree elements. The canopy radiative transfer is estimated using the DIRSIG model for specific view and illumination angles to generate BRDF measurements. A full hemispherical BRDF is generated by fitting the measured BRDF to a semi-empirical BRDF model. The results from fitting the model to the measurements indicate a root mean square error of less than 5% (2 reflectance units) relative to the forest's reflectance in the VIS-NIR-SWIR region. The process can be easily extended to generate a spectral BRDF library for various biomes.
Imaging of gaseous oxygen through DFB laser illumination
L. Cocola, M. Fedel, G. Tondello, et al.
A Tunable Diode Laser Absorption Spectroscopy (TDLAS) setup with wavelength modulation has been used together with a synchronous sampling imaging sensor to obtain two-dimensional transmission-mode images of oxygen content. Modulated laser light from a 760 nm DFB source was used to illuminate a scene from the back while image frames were acquired with a high dynamic range camera. Thanks to synchronous timing between the imaging device and the laser light modulation, the traditional lock-in approach used in Wavelength Modulation Spectroscopy (WMS) was replaced by image processing techniques, and many scanning periods were averaged together to resolve small intensity variations over the already weak signals from the oxygen absorption band. After proper binning and filtering, the time-domain waveform obtained from each pixel in a set of frames representing the wavelength scan was used as the single-detector signal of a traditional TDLAS-WMS setup, and processed through a software-defined digital lock-in demodulation and a second-harmonic signal fitting routine. In this way the WMS artifacts of a gas absorption feature were obtained from each pixel together with an intensity normalization parameter, allowing a reconstruction of the oxygen distribution in a two-dimensional scene regardless of the broadband transmitted intensity. As a first demonstration of the effectiveness of this setup, oxygen absorption images of similar containers filled with either oxygen or nitrogen were acquired and processed.
Towards an improved understanding of the influence of subpixel vegetation structure on pixel-level spectra: a simulation approach
Wei Yao, Martin van Leeuwen, Paul Romanczyk, et al.
The planned NASA Hyperspectral Infrared Imager (HyspIRI) mission, equipped with an imaging spectrometer that has the capability of monitoring ecosystems globally, will provide an unprecedented opportunity to address scientific challenges related to ecosystem function and change. However, uncertainty remains around the impact of subpixel vegetation structure, in combination with the point spread function, on pixel-level imaging spectroscopy data. We estimated structural parameters, e.g., leaf area index (LAI), canopy cover, and tree location, from HyspIRI spectral data, with the goal of assessing how subpixel variation in these parameters impacts pixel-level imaging spectroscopy data. The fine-scale variability of real vegetation structure makes this a challenging endeavor. Therefore, we utilized a simulation-based approach to counter the time-consuming and often destructive sampling needs of vegetation structural analysis and to simultaneously generate synthetic HyspIRI data pre-launch. Three virtual scenes were constructed, corresponding to the actual vegetation structure in the National Ecological Observatory Network’s (NEON) Pacific Southwest Domain (Fresno, CA). These included an oak savanna, a dense coniferous forest, and a conifer-manzanita-mixed forest. Simulated spectroscopy data for these scenes were then generated using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. Simulations first were used to verify the physical model, virtual scene geometrical information, and simulation parameters. This was followed by simulations of HyspIRI data, where within-pixel structural variability was introduced, e.g., by iteratively changing per-pixel canopy cover and tree placement, tree clustering, LAI, etc., between simulation runs for the virtual scenes.
Finally, narrow-band vegetation indices (VIs) were extracted from the data in an attempt to describe the variability of the subpixel structural parameters; this was done in order to assess VI robustness to changes in structural “levels”, as well as placement of trees/canopies within the instrument’s instantaneous field-of-view (IFOV). Our ultimate goal is not only to better understand how such subpixel variability influences imaging spectroscopy outputs, but also to better estimate vegetation structural parameters using spectra. We constructed regression models for LAI (R2 = 0.92) and canopy cover (R2 = 0.97) with narrow-band VIs via this simulation approach. Our models ultimately are intended to improve the HyspIRI mission’s ability to monitor global vegetation structure.
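As one concrete example of a narrow-band VI that such regression models might use, NDVI can be computed from the bands nearest chosen red and NIR wavelengths. The band positions and toy spectra below are illustrative, not the study's:

```python
import numpy as np

def narrowband_ndvi(reflectance, wavelengths, red=670.0, nir=800.0):
    """Narrow-band NDVI from the bands nearest the chosen red and NIR wavelengths."""
    i_red = np.argmin(np.abs(wavelengths - red))
    i_nir = np.argmin(np.abs(wavelengths - nir))
    r, n = reflectance[..., i_red], reflectance[..., i_nir]
    return (n - r) / (n + r)

wl = np.arange(400.0, 1001.0, 10.0)          # 10 nm bands, 400-1000 nm
# Toy spectra: dense canopy (strong red absorption) vs. sparse cover.
dense = np.where(wl < 700, 0.05, 0.50)
sparse = np.where(wl < 700, 0.15, 0.25)
vi_dense = narrowband_ndvi(dense, wl)
vi_sparse = narrowband_ndvi(sparse, wl)
```

Denser canopies absorb more red light and scatter more NIR, so the index increases with cover — the behavior a LAI/canopy-cover regression exploits.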
Dimensionality Reduction
icon_mobile_dropdown
How many spectral bands are necessary to describe the directional reflectance of beach sands?
Katarina Z. Doctor, Steven G. Ackleson, Charles M. Bachmann, et al.
Spectral variability in the visible, near-infrared and shortwave directional reflectance factor of beach sands and freshwater sheet flow is examined using principal component and correlation matrix analysis of in situ measurements. In previous work we concluded that the hyperspectral bidirectional reflectance distribution function (BRDF) of beach sands in the absence of sheet flow exhibits weak spectral variability, the majority of which can be described with three broad spectral bands with wavelength ranges of 350-450 nm, 700-1350 nm, and 1450-2400 nm [1]. Observing sheet flow on sand we find that a thin layer of water enhances reflectance in the specular direction at all wavelengths and that spectral variability may be described using four spectral band regions of 350-450 nm, 500-950 nm, 950-1350 nm, and 1450-2400 nm. Spectral variations are more evident in sand surfaces of greater visual roughness than in smooth surfaces, regardless of sheet flow.
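The question in the title — how many bands suffice — is the kind the principal component analysis above answers: find the smallest number of components whose cumulative explained variance reaches a threshold. A minimal sketch with toy data (not the beach-sand measurements):

```python
import numpy as np

def n_components_for_variance(spectra, threshold=0.99):
    """Smallest number of principal components whose cumulative explained
    variance reaches the threshold -- a proxy for 'how many bands are needed'."""
    Xc = spectra - spectra.mean(0)
    s = np.linalg.svd(Xc, compute_uv=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(frac, threshold) + 1)

rng = np.random.default_rng(4)
# Toy reflectance: 3 broad spectral shapes mixed across 200 'measurements'.
shapes = rng.normal(size=(3, 50))
spectra = rng.uniform(size=(200, 3)) @ shapes + 0.001 * rng.normal(size=(200, 50))
k = n_components_for_variance(spectra)
```

Because the toy spectra are mixtures of three underlying shapes plus small noise, three components suffice — mirroring the paper's finding that a few broad bands capture most of the directional-reflectance variability.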
Simultaneously sparse and low-rank hyperspectral image recovery from coded aperture compressive measurements via convex optimization
A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques make it possible to capture a three-dimensional hyperspectral scene using two-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance, and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions, and exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis, solving an optimization problem that minimizes a joint ℓ2 − ℓ1 norm to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, since only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach to recover a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that minimizes the ℓ2-norm, penalized by the ℓ1-norm to force the solution to be sparse, and by the nuclear norm to force the solution to be low rank. Theoretical analysis along with a set of simulations over different data sets shows that simultaneously exploiting low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in terms of peak signal-to-noise ratio (PSNR).
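A proximal-splitting solver for such a problem alternates the proximal operators of the two penalties: elementwise soft thresholding for the ℓ1 norm and singular value thresholding for the nuclear norm. A sketch of the two operators only (the full solver and its ℓ2 data-fit step are omitted):

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: elementwise shrinkage toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Proximal operator of the nuclear norm: shrink the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-1, sparse matrix passes through both shrinkage steps largely intact.
M = np.outer([0.0, 3.0, 0.0, 3.0], [3.0, 0.0, 3.0])
M_s = soft_threshold(M, 0.5)
M_l = singular_value_threshold(M, 0.5)
```

For a rank-1 input, singular value thresholding simply rescales the matrix, while soft thresholding preserves its sparsity pattern — illustrating why signals that are both sparse and low rank survive the combined penalties well.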
Manifold alignment with Schroedinger eigenmaps
The sun-target-sensor angle can change during aerial remote sensing. In an attempt to compensate for BRDF effects in multi-angular hyperspectral images, the Semi-Supervised Manifold Alignment (SSMA) algorithm pulls data from similar classes together and pushes data from different classes apart. SSMA uses Laplacian Eigenmaps (LE) to preserve the original geometric structure of each local data set independently. In this paper, we replace LE with Spatial-Spectral Schroedinger Eigenmaps (SSSE), which was designed as a semisupervised enhancement to LE, in order to extend the SSMA methodology and improve classification of multi-angular hyperspectral images captured over Hog Island in the Virginia Coast Reserve.
Spectral Characterization, Detection, and Identification
icon_mobile_dropdown
Chemical plume detection with an iterative background estimation technique
Eric Truslow, Steven Golowich, Dimitris Manolakis
The detection of chemical vapor plumes using passive hyperspectral sensors operating in the longwave infrared is a challenging problem with many applications. For adequate performance, detection algorithms require an estimate of a scene’s background statistics, including the mean and covariance. Diffuse plumes with a large spatial extent are particularly difficult to detect in single-image schemes because of contamination of background statistics by the plume. To mitigate the effects of plume contamination, a first pass of the detector can be used to create a background mask. However, large diffuse plumes are typically not removed by a single pass. Instead, contamination can be reduced by using smoothed detection results as a background mask. In the proposed procedure, a detector bank is run on the cube, and a threshold applied to produce a binary image. The binary image can be modeled as a spatial point process consisting of high density and low density regions. By applying a spatial filter to the detection image, regions with overall higher intensity are detected as containing plume and can be removed from background statistic estimates. The key intuition is that regions with a higher density of hits are more likely to contain plume since plumes are spatially contiguous. We demonstrate with real plume data that this method can drastically improve detection performance over the single-pass method, and explore tradeoffs between different filter sizes and thresholds.
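The density-based masking step can be sketched as follows: threshold the detector output, smooth the binary image with a mean filter, and flag high-density regions as plume so they can be excluded from background statistics. Window size, thresholds, and the toy scene are illustrative:

```python
import numpy as np

def hit_density(binary, r=1):
    """Fraction of detector hits in the (2r+1)x(2r+1) neighborhood of each pixel."""
    padded = np.pad(binary.astype(float), r)
    h, w = binary.shape
    out = np.zeros((h, w))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1)**2

def plume_mask(detection_scores, score_thresh, density_thresh, r=1):
    """Smooth the thresholded detection image and flag high-density regions as
    plume, so they can be excluded from background mean/covariance estimates."""
    hits = detection_scores > score_thresh
    return hit_density(hits, r) > density_thresh

# Toy 8x8 scene: a contiguous 3x3 'plume' of hits plus one isolated false alarm.
scores = np.zeros((8, 8))
scores[1:4, 1:4] = 1.0     # diffuse, spatially contiguous plume region
scores[6, 6] = 1.0         # isolated speckle
mask = plume_mask(scores, score_thresh=0.5, density_thresh=0.5, r=1)
```

The isolated hit is suppressed while the core of the contiguous region survives, matching the intuition that plumes are spatially contiguous while isolated hits are noise.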
Flag-based detection of weak gas signatures in long-wave infrared hyperspectral image sequences
Timothy Marrinan, J. Ross Beveridge, Bruce Draper, et al.
We present a flag manifold based method for detecting chemical plumes in long-wave infrared hyperspectral movies. The method encodes temporal and spatial information related to a hyperspectral pixel into a flag, or nested sequence of linear subspaces. The technique used to create the flags pushes information about the background clutter, ambient conditions, and potential chemical agents into the leading elements of the flags. Exploiting this temporal information allows for a detection algorithm that is sensitive to the presence of weak signals. This method is compared to existing techniques qualitatively on real data and quantitatively on synthetic data to show that the flag-based algorithm consistently performs better on data when the SINRdB is low, and beats the ACE and MF algorithms in probability of detection for low probabilities of false alarm even when the SINRdB is high.
Temperature-emissivity separation for LWIR sensing using MCMC
Signal processing for long-wave infrared (LWIR) sensing is made complicated by unknown surface temperatures in a scene which impact measured radiance through temperature-dependent black-body radiation of in-scene objects. The unknown radiation levels give rise to the temperature-emissivity separation (TES) problem describing the intrinsic ambiguity between an object’s temperature and emissivity. In this paper we present a novel Bayesian TES algorithm that produces a probabilistic posterior estimate of a material’s unknown temperature and emissivity. The statistical uncertainty characterization provided by the algorithm is important for subsequent signal processing tasks such as classification and sensor fusion. The algorithm is based on Markov chain Monte Carlo (MCMC) methods and exploits conditional linearity to achieve efficient block-wise Gibbs sampling for rapid inference. In contrast to existing work, the algorithm optimally incorporates prior knowledge about in-scene materials via Bayesian priors which may optionally be learned using training data and a material database. Examples demonstrate up to an order of magnitude reduction in error compared to classical filter-based TES methods.
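The temperature-emissivity ambiguity the algorithm resolves can be illustrated at a single wavelength with the Planck function: distinct (T, emissivity) pairs reproduce the same measured radiance. This is only the ambiguity demo, not the paper's MCMC inference:

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def planck_radiance(wavelength_m, T):
    """Spectral blackbody radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / (np.exp(H * C / (wavelength_m * K * T)) - 1.0)

# Ambiguity at a single LWIR wavelength (10 micron): a 300 K surface with
# emissivity 0.95 and a 305 K surface with a lower emissivity are
# indistinguishable from one radiance measurement alone.
lam = 10e-6
L_meas = 0.95 * planck_radiance(lam, 300.0)
eps_alt = L_meas / planck_radiance(lam, 305.0)   # emissivity the hotter surface needs
```

Breaking this degeneracy requires extra information — spectral smoothness constraints in classical TES, or the material priors used by the Bayesian algorithm here.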
Polarimetric assist to HSI atmospheric compensation and material identification
Mark Gibney
In this effort, we investigated how polarimetric HyperSpectral Imaging (pHSI) data might benefit specified Material Identification of diffuse materials in the VNIR. The experiment compared paint reflectivities extracted from polarimetric hyperspectral data acquired in the field to a database of truth reflectivities measured in the lab. Both the polarimetric hyperspectral data and the reflectivities were acquired using an Ocean Optics spectrometer which was polarized using a fast filter wheel loaded with high extinction polarizers. During the experiment, we discovered that the polarized spectra from the polarimetric hyperspectral data could be used to estimate the relative spectral character of the field source (the exo-atmospheric sun plus the atmosphere). This benefit, which strongly parallels the QUAC atmospheric correction method, relies on the natural spectral flatness of the polarized spectrum, which originates in the spectral flatness of the index of refraction in the reflective regime. Using this estimate of the field source, excellent estimates of the paint reflectivities (matching 10 paint reflectivities to ≤ 0.5% RSS) were obtained. The impact of atmospheric upwell on performance was then investigated using these ground-based polarimetric hyperspectral data in conjunction with modeled atmospheric path effects. The path effects were modeled using the high fidelity Polarimetry Phenomenology Simulation (PPS) plate model developed by AFRL, which includes polarized Modtran. We conclude with a discussion of actual and potential applications of this method, and how best to convert an existing VNIR HSI sensor into a pHSI sensor for an airborne proof-of-concept experiment.
A spectral climatology for atmospheric compensation of hyperspectral imagery
John H. Powell, Ronald G. Resmini
Most Earth observation hyperspectral imagery (HSI) detection and identification algorithms depend critically upon a robust atmospheric compensation capability to correct for the effects of the atmosphere on the radiance signal. Atmospheric compensation methods typically perform optimally when ancillary ground truth data are available, e.g., high fidelity in situ radiometric observations or atmospheric profile measurements. When ground truth is incomplete or not available, additional assumptions must be made to perform the compensation. Meteorological climatologies are available to provide climatological norms for input into the radiative transfer models; however, no such climatologies exist for empirical methods. The success of atmospheric compensation methods such as the empirical line method suggests that remotely sensed HSI scenes contain comprehensive sets of atmospheric state information within the spectral data itself. It is argued that large collections of empirically-derived atmospheric coefficients collected over a range of climatic and atmospheric conditions comprise a resource that can be applied to prospective atmospheric compensation problems. A previous study introduced a new climatological approach to atmospheric compensation in which empirically derived spectral information, rather than sensible atmospheric state variables, is the fundamental datum. The current work expands the approach across an experimental archive of 127 airborne HSI datasets spanning nine physical sites to represent varying climatological conditions. The representative atmospheric compensation coefficients are assembled in a scientific database of spectral observations and modeled data. Improvements to the modeling methods used to standardize the coefficients across varying collection and illumination geometries and the resulting comparisons of adjusted coefficients are presented.
The climatological database is analyzed to show that common spectral similarity metrics can be used to separate the climatological classes to a degree of detail commensurate with the modest size and range of the imaging conditions comprising the study.
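The empirical line method that motivates this climatology can be sketched in a few lines: given the at-sensor radiance of a dark and a bright reference panel of known reflectance, each band gets a gain and offset mapping radiance to reflectance. This is a minimal illustration with invented function names, not the authors' archive tooling:

```python
def empirical_line(radiance_dark, radiance_bright, refl_dark, refl_bright):
    """Per-band gain/offset of the empirical line method, fit from two
    reference panels so that reflectance = gain * radiance + offset."""
    gain = (refl_bright - refl_dark) / (radiance_bright - radiance_dark)
    offset = refl_dark - gain * radiance_dark
    return gain, offset

def apply_elm(radiance_spectrum, gains, offsets):
    """Convert an at-sensor radiance spectrum to surface reflectance,
    band by band, using previously fit coefficients."""
    return [g * r + o for r, g, o in zip(radiance_spectrum, gains, offsets)]
```

The per-band gain/offset pairs produced this way are exactly the kind of empirically derived coefficients a spectral climatology would archive and reuse when no in-scene reference panels are available.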
Generation of remotely sensed reference data using low altitude, high spatial resolution hyperspectral imagery
McKay D. Williams, Jan van Aardt, John P. Kerekes
Exploitation of imaging spectroscopy (hyperspectral) data using classification and spectral unmixing algorithms is a major research area in remote sensing, with reference data required to assess algorithm performance. However, we are limited by our inability to generate rapid, accurate, and consistent reference data, thus making quantitative algorithm analysis difficult. As a result, many investigators present either limited quantitative results, use synthetic imagery, or provide qualitative results using real imagery. Existing reference data typically classify large swaths of imagery pixel-by-pixel, per cover type. While this type of mapping provides a first order understanding of scene composition, it is not detailed enough to include complexities such as mixed pixels, intra-end-member variability, and scene anomalies. The creation of more detailed ground reference data based on field work, on the other hand, is complicated by the spatial scale of common hyperspectral data sets. This research presents a solution to this challenge via classification of low altitude, high spatial resolution (1m GSD) National Ecological Observatory Network (NEON) hyperspectral imagery, on a pixel-by-pixel basis, to produce sub-pixel reference data for high altitude, lower spatial resolution (15m GSD) AVIRIS imagery. This classification is performed using traditional classification techniques, augmented by (0.3m GSD) NEON RGB data. This paper provides a methodology for generating large scale, sub-pixel reference data for AVIRIS imagery using NEON imagery. It also addresses challenges related to the fusion of multiple remote sensing modalities (e.g., different sensors, sensor look angles, spatial registration, varying scene illumination, etc.). A new algorithm for spatial registration of hyperspectral imagery with disparate resolutions is presented. Several versions of reference data results are compared to each other and to direct spectral unmixing of AVIRIS data. 
Initial results are promising, with ground based surveying required to quantify the accuracy of remotely sensed reference data.
Sensor Design and Development
icon_mobile_dropdown
An imaging spectro-polarimeter for measuring hemispherical spectrally resolved down-welling sky polarization
A full sky imaging spectro-polarimeter has been developed that measures spectrally resolved (~2.5 nm resolution) radiance and polarization (s0, s1, s2 Stokes elements) of natural sky down-welling over approximately 2π sr between 400 nm and 1000 nm. The sensor is based on a scanning push broom hyperspectral imager configured with a continuously rotating polarizer (sequential measurement in time polarimeter). Sensor control and processing software (based on Polaris Sensor Technologies Grave’ camera control software) has a straightforward and intuitive user interface that provides real-time updated sky down-welling spectral radiance/polarization maps and statistical analysis tools.
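For readers unfamiliar with the Stokes elements the sensor reports, the standard derived quantities are the degree and angle of linear polarization. A minimal sketch (our naming, not the sensor software):

```python
import math

def dolp(s0, s1, s2):
    """Degree of linear polarization from the first three Stokes elements:
    the linearly polarized fraction of the total radiance s0."""
    return math.sqrt(s1 * s1 + s2 * s2) / s0

def aolp_deg(s1, s2):
    """Angle of linear polarization in degrees, from the s1/s2 plane."""
    return 0.5 * math.degrees(math.atan2(s2, s1))
```

An unpolarized sky pixel gives dolp = 0, while a fully linearly polarized one gives dolp = 1; the per-band maps the software displays are these quantities evaluated at every pixel and wavelength.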
Compact hyperspectral camera in the mid-infrared for small UAVs
Armande Pola Fossi, Yann Ferrec, Christophe Coudrain, et al.
Hyperspectral imaging from small unmanned aerial vehicles (UAVs) is attracting growing interest for military applications as well as for civilian applications such as agriculture management, pollution monitoring, or mining. This paper gives a quick state of the art of cameras whose capabilities in small-UAV embedded campaigns have been demonstrated. We also introduce a novel compact hyperspectral camera operating in the mid-infrared spectral range and embeddable on small UAVs. This camera combines a birefringent interferometer for size reduction with cooled imaging optics for a better signal-to-noise ratio. The design of a first prototype and first results from a ground-based measurement campaign, whose goal was validation of the optical concept, are also discussed. Finally, we present the design modifications for the small-UAV-embeddable version.
Compact multispectral multi-camera imaging system for small UAVs
Hans Erling Torkildsen, Trym Haavardsholm, Thomas Opsahl, et al.
Cameras with filters in the focal plane provide the most compact solution for multispectral imaging. A small UAV can carry multiple such cameras, providing large area coverage rate at high spatial resolution. We investigate a camera concept where a patterned bandpass filter with six bands provides multiple interspersed recordings of all bands, enabling consistency checks for improved spectral integrity. A compact sensor payload has been built with multiple cameras and a data acquisition computer. Recorded imagery demonstrates the potential for large area coverage with good spectral integrity.
Software defined multi-spectral imaging for Arctic sensor networks
Sam Siewert, Vivek Angoth, Ramnarayan Krishnamurthy, et al.
Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. 
The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
Interactive Poster Session
icon_mobile_dropdown
Lossless compression of hyperspectral images based on the prediction error block
Yongjun Li, Yunsong Li, Juan Song, et al.
A lossless compression algorithm for hyperspectral images based on distributed source coding is proposed, which compresses spaceborne hyperspectral data effectively. In order to make full use of the intra-frame correlation and inter-frame correlation, a prediction error block scheme is introduced. Compared with the scalar coset based distributed compression method (s-DSC) proposed by E. Magli et al., in which the bitrate of the whole block is determined by its maximum prediction error, and with the s-DSC-classify scheme proposed by Song Juan, which is based on classification and coset coding, the prediction error block scheme reduces the bitrate efficiently. Experimental results on hyperspectral images show that the proposed scheme offers both high compression performance and low encoder and decoder complexity, which makes it suitable for on-board compression of hyperspectral images.
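The bitrate-setting idea attributed to s-DSC, that a whole block is coded at a rate fixed by its maximum prediction error, can be sketched as follows. This is an illustrative simplification with a previous-band predictor on integer pixel values, not the proposed coder:

```python
def prediction_errors(band, prev_band):
    """Inter-band prediction residuals: predict each pixel from the
    co-located pixel in the previous spectral band."""
    return [x - p for x, p in zip(band, prev_band)]

def bits_for_block(errors):
    """Bits per pixel when the block's rate is set by its maximum absolute
    prediction error m, i.e. enough bits for the signed range [-m, m]."""
    m = max(abs(e) for e in errors)
    bits = 0
    while (1 << bits) <= 2 * m:  # need 2m + 1 representable symbols
        bits += 1
    return bits
```

Splitting a frame into smaller prediction error blocks helps precisely because one outlier residual then inflates the rate of its own small block only, rather than the whole frame.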
Minimum removal and maximum normalization of VNIR hyperspectral image for shade and specular invariance
Sungho Kim, Heekang Kim
A novel hyperspectral image normalization method was developed to make the hyperspectral profile invariant to illumination. The well-known band-ratio method yields unstable spectral profiles under shade and highlights. The proposed minimum removal and maximum normalization method is simple but effectively reduces the spectral variations caused by shade and highlights, which leads to enhanced abnormal-region detection performance in VNIR hyperspectral images.
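One plausible reading of the minimum removal and maximum normalization step, offered here only as an illustrative sketch of the general idea rather than the authors' exact formulation, is to subtract the per-pixel spectral minimum (an additive shade term) and then divide by the maximum of the residual (a multiplicative highlight term), making the profile invariant to affine illumination changes:

```python
def min_removal_max_norm(spectrum):
    """Illustrative shade/highlight normalization of one pixel's spectrum:
    remove the spectral minimum, then scale by the residual maximum."""
    lo = min(spectrum)
    shifted = [v - lo for v in spectrum]   # minimum removal (shade offset)
    hi = max(shifted)
    if hi == 0:  # flat spectrum: nothing left to normalize
        return shifted
    return [v / hi for v in shifted]       # maximum normalization (highlight gain)
```

Under this sketch, a spectrum and any scaled-and-offset copy of it (as produced by shade or specular gain) map to the same normalized profile.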
A generalized representation-based approach for hyperspectral image classification
Sparse representation-based classifier (SRC) has recently attracted great interest for hyperspectral image classification. It is assumed that a testing pixel is linearly combined with atoms of a dictionary. Under this circumstance, the dictionary includes all the training samples. The objective is to find a weight vector that yields a minimum L2 representation error with the constraint that the weight vector is sparse with a minimum L1 norm. The pixel is assigned to the class whose training samples yield the minimum error. In addition, a collaborative representation-based classifier (CRC) has also been proposed, where the weight vector has a minimum L2 norm. The CRC has a closed-form solution; when using class-specific representation it can yield even better performance than the SRC. Compared to traditional classifiers such as support vector machine (SVM), SRC and CRC do not have a traditional training-testing fashion as in supervised learning, while their performance is similar to or even better than SVM. In this paper, we investigate a generalized representation-based classifier which uses Lq representation error, Lp weight norm, and adaptive regularization. The classification performance of Lq and Lp combinations is evaluated with several real hyperspectral datasets. Based on these experiments, recommendations are provided for practical implementation.
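The CRC closed-form solution mentioned above is the ridge-regularized least-squares weight vector w = (D^T D + lam * I)^(-1) D^T y. A self-contained sketch for a two-atom dictionary, solving the 2x2 normal equations explicitly (illustrative naming, not the paper's code):

```python
def crc_weights_2atom(d1, d2, y, lam):
    """Closed-form collaborative-representation weights for a two-atom
    dictionary: w = (D^T D + lam*I)^(-1) D^T y, solved explicitly."""
    dot = lambda a, b: sum(x * z for x, z in zip(a, b))
    # 2x2 normal matrix with the ridge (minimum-L2-norm) term on the diagonal
    a11 = dot(d1, d1) + lam
    a12 = dot(d1, d2)
    a22 = dot(d2, d2) + lam
    b1, b2 = dot(d1, y), dot(d2, y)
    det = a11 * a22 - a12 * a12
    w1 = (a22 * b1 - a12 * b2) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return w1, w2

def residual(d1, d2, y, w1, w2):
    """L2 representation error used to assign the class label."""
    return sum((yi - w1 * x1 - w2 * x2) ** 2
               for yi, x1, x2 in zip(y, d1, d2)) ** 0.5
```

In class-specific CRC, this fit is repeated with each class's training atoms as the dictionary, and the pixel is assigned to the class giving the smallest residual.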
Multispectral image fusion based on diffusion morphology for enhanced vision applications
Vladimir A. Knyaz, Oleg V. Vygolov, Yury V. Vizilter, et al.
Existing image fusion methods based on morphological image analysis, which expresses the geometrical idea of image shape as a label image, are quite sensitive to the quality of image segmentation and, therefore, not sufficiently robust to noise and high-frequency distortions. On the other hand, a number of methods in the field of dimensionality reduction and data comparison make it possible to avoid the image segmentation step by using diffusion maps techniques. The paper proposes a new approach for multispectral image fusion based on the combination of morphological image analysis and diffusion maps theory (i.e., diffusion morphology). A new image fusion algorithm is described that uses a matched diffusion filtering procedure instead of morphological projection. The algorithm is implemented for a three-channel Enhanced Vision System prototype. Comparative results of image fusion are shown on real images acquired in flight experiments.
Compressive hyperspectral and multispectral imaging fusion
Óscar Espitia, Sergio Castillo, Henry Arguello
Image fusion is a valuable framework which combines two or more images of the same scene from one or multiple sensors, making it possible to improve the resolution of the images and increase the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involves large amounts of redundant data when the highly correlated structure of the datacube along the spatial and spectral dimensions is ignored. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing the data redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach which uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as the reliable reconstruction of a high spectral and spatial resolution image can be achieved by using as little as 50% of the datacube.
On validating remote sensing simulations using coincident real data
The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. As scene rendering technology and software have advanced, complex simulation of vegetation environments has become possible. This in turn has created questions related to the validity of such complex models, with potential multiple scattering, bidirectional reflectance distribution function (BRDF), etc. phenomena that could impact results in the case of complex vegetation scenes. We selected three sites, located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites then were generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1m spatial resolution; 180 pixels/scene). These tests were performed using a distribution-comparison approach for select spectral statistics, e.g., those establishing the spectra’s shape, for each simulated-versus-real distribution pair. The initial comparison results of the spectral distributions indicated that the shapes of spectra between the virtual and real sites were closely matched.
Spectral signature verification using statistical analysis and text mining
Mallory E. DeCoster, Alexe H. Firpi, Samantha K. Jacobs, et al.
In the spectral science community, numerous spectral signatures are stored in databases representative of many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database1 (SigDB) are proposed. The numerical data comprising a sample material’s spectrum is validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum is qualitatively analyzed using lexical-analysis text mining. This technique analyzes the syntax of the meta-data to uncover local patterns and trends within the spectral data indicative of the test spectrum’s quality. Text mining applications have successfully been implemented for security2 (text encryption/decryption), biomedical3, and marketing4 applications. The text mining lexical analysis algorithm is trained on the meta-data patterns of a subset of high- and low-quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need of an expert user.
This method has been compared to other validation methods accepted by the spectral science community, and has provided promising results when a baseline spectral signature is present for comparison. The spectral validation method proposed is described from a practical application and analytical perspective.
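The spectral angle mapper used to rank test spectra against the population mean is a standard metric: the angle between two spectra viewed as vectors, insensitive to overall illumination scaling. A minimal sketch:

```python
import math

def spectral_angle(a, b):
    """Spectral angle mapper (SAM): angle in radians between two spectra,
    invariant to an overall scale factor on either spectrum."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # clamp the cosine to guard against floating-point overshoot
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
```

A test spectrum that is merely a brighter or dimmer copy of the population mean scores an angle near zero, while shape differences drive the angle up, which is what makes SAM a natural quality ranking here.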
Toward prediction of hyperspectral target detection performance after lossy image compression
Hyperspectral imagery (HSI) offers numerous advantages over traditional sensing modalities with its high spectral content that allows for classification, anomaly detection, target discrimination, and change detection. However, this imaging modality produces a huge amount of data, which requires transmission, processing, and storage resources; hyperspectral compression is a viable solution to these challenges. It is well known that lossy compression of hyperspectral imagery can impact hyperspectral target detection. Here we examine lossy compressed hyperspectral imagery from data-centric and target-centric perspectives. The compression ratio (CR), root mean square error (RMSE), the signal to noise ratio (SNR), and the correlation coefficient are computed directly from the imagery and provide insight to how the imagery has been affected by the lossy compression process. With targets present in the imagery, we perform target detection with the spectral angle mapper (SAM) and adaptive coherence estimator (ACE) and evaluate the change in target detection performance by examining receiver operating characteristic (ROC) curves and the target signal-to-clutter ratio (SCR). Finally, we observe relationships between the data- and target-centric metrics for selected visible/near-infrared to shortwave infrared (VNIR/SWIR) HSI data, targets, and backgrounds that motivate potential prediction of change in target detection performance as a function of compression ratio.
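Two of the data-centric metrics named above, RMSE and SNR, can be computed directly from an original/decompressed pair. A minimal sketch over flattened cubes (naming is ours):

```python
import math

def rmse(original, compressed):
    """Root mean square error between original and decompressed data
    (cubes flattened to 1-D sequences for this sketch)."""
    n = len(original)
    return math.sqrt(sum((o - c) ** 2 for o, c in zip(original, compressed)) / n)

def snr_db(original, compressed):
    """Signal-to-noise ratio in dB, treating the compression residual as noise."""
    sig = sum(o * o for o in original)
    noise = sum((o - c) ** 2 for o, c in zip(original, compressed))
    return 10.0 * math.log10(sig / noise)
```

Tracking these data-centric numbers against the target-centric ROC and SCR results is what would allow the change in detection performance to be predicted from compression ratio alone.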
Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures
Travis R. Gault, Melissa E. Jansen, Mallory E. DeCoster, et al.
Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor’s field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring-pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory’s Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures and a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra from which it was drawn. Simulated spectra are created with three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90% with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures.
The spectral similarity of the inputs to the output component signatures is calculated using the spectral angle mapper. Results show that iterative methods significantly outperform the traditional methods under the given test conditions.
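For the simplest non-trivial case, two endmembers with abundances constrained to sum to one, the linear mixing model has a closed-form inverse: project the mixed spectrum onto the line joining the endmembers. A hedged sketch of that baseline (not one of the iterative methods the paper compares):

```python
def unmix_two_endmembers(y, e1, e2):
    """Closed-form abundance of endmember e1 in mixed spectrum y under the
    two-endmember linear model y = a*e1 + (1 - a)*e2, clipped to [0, 1]."""
    diff = [p - q for p, q in zip(e1, e2)]
    # least-squares projection of (y - e2) onto (e1 - e2)
    num = sum((yi - q) * d for yi, q, d in zip(y, e2, diff))
    den = sum(d * d for d in diff)
    a = num / den
    return max(0.0, min(1.0, a))
```

With three or four endmembers the same least-squares idea becomes a small constrained optimization with no such one-line solution, which is where the iterative methods evaluated in the paper come in.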
Middle infrared (wavelength range: 8 µm-14 µm) 2-dimensional spectroscopy (total weight with electrical controller: 1.7 kg, total cost: less than 10,000 USD) so-called hyper-spectral camera for unmanned air vehicles like drones
Naoyuki Yamamoto, Tsubasa Saito, Satoru Ogawa, et al.
We developed a palm-size (optical unit: 73[mm]×102[mm]×66[mm]), lightweight (total weight with electrical controller: 1.7[kg]) middle-infrared (wavelength range: 8[μm]-14[μm]) 2-dimensional spectrometer for UAVs (unmanned air vehicles) such as drones, and we successfully demonstrated flights with the developed hyperspectral camera mounted on a multi-copter drone on 15 September 2015 in Kagawa prefecture, Japan. We had previously proposed 2-dimensional imaging-type Fourier spectroscopy as a near-common-path temporal phase-shift interferometer. A variable phase shifter is installed on the optical Fourier transform plane of an infinity-corrected imaging optical system. The variable phase shifter is configured with a movable mirror and a fixed mirror; the movable mirror is actuated by an impact-drive piezoelectric device (stroke: 4.5[mm], resolution: 0.01[μm], maker: Technohands Co., Ltd., type: XDT50-45, price: around 1,000 USD). This realizes wavefront-division, near-common-path interferometry with strong robustness against mechanical vibrations, so the palm-size Fourier spectrometer requires no anti-vibration system. We were also able to use a small, low-cost middle-infrared camera based on an uncooled VOx microbolometer array (pixel array: 336×256, pixel pitch: 17[μm], frame rate: 60[Hz], maker: FLIR, type: Quark 336, price: around 5,000 USD), and the apparatus can be operated by a single-board computer (Raspberry Pi). The total cost was thus less than 10,000 USD. We collaborated with the KAMOME-PJ (Kanagawa Advanced MOdule for Material Evaluation Project) together with DRONE FACTORY Corp., KUUSATSU Corp., and Fuji Imvac Inc., and successfully obtained middle-infrared spectroscopic imaging from a multi-copter drone.
Tracking the on-orbit spatial performance of MODIS using ground targets
Nearly-identical MODIS instruments are operating onboard both the NASA EOS Terra and Aqua spacecraft. Each instrument records earth-scene data using 490 detectors divided among 36 spectral bands. These bands range in center wavelength from 0.4 μm to 14.2 μm to benefit studies of the entire earth system including land, atmosphere, and ocean disciplines. Many of the resultant science data products are the result of multiple bands used in combination. Any mis-registration between the bands would adversely affect subsequent data products. The relative registration between MODIS bands was measured pre-launch and continues to be monitored on-orbit via the Spectro-radiometric Calibration Assembly (SRCA), an on-board calibrator. Analysis has not only shown registration differences pre-launch, but also long-term and seasonal changes. While the ability to determine registration changes on-orbit using the SRCA is unique to MODIS, the use of ground targets to determine relative registration has been used for other instruments. This paper evaluates a ground target for MODIS spatial characterization using the MODIS calibrated data product. Results are compared against previously reported findings using MODIS data and the operational on-board characterization using the SRCA.
Monitoring of urban heat island over Shenzhen, China using remotely sensed measurements
Weimin Wang, Liang Hong, Lijun Yang, et al.
In the past three decades, the city of Shenzhen, located in the south of China, has experienced a rapid urbanization process characterized by a sharp decrease in farmland and an increase in urban area. This rapid urbanization is one of the main causes of many environmental and ecological problems, including the urban heat island (UHI). Therefore, the monitoring of rapidly urbanizing regions and their environment is of critical importance for sustainable development. In this study, Landsat-8 OLI and TIR images acquired in 2013 are used to monitor the urban heat island. After radiometric calibration and atmospheric correction with a simplified method for the atmospheric correction (SMAC) are applied to the OLI image, an index-based built-up index (IBI), which is based on the soil adjusted vegetation index (SAVI), the modified normalized difference water index (MNDWI), and the normalized difference built-up index (NDBI), is employed to extract the built-up land features with given thresholds. A single-channel algorithm is used to retrieve land surface temperature, while the land surface emissivity is derived from a normalized difference vegetation index (NDVI) thresholds method. The surface urban heat island index (SUHII) and urban heat island ratio index (URI) are computed for ten districts of Shenzhen based on the built-up land distribution and land surface temperature data. A correlation analysis is conducted between the heat island indices (SUHII and URI) and socio-economic statistics (total population and population density). The results show that only a weak relationship between the urban heat island and the socio-economic statistics is found.
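The normalized-difference indices the study combines into the IBI all share one algebraic form, (x − y)/(x + y), over a band pair. A minimal sketch of three of them (band inputs are reflectances; function naming is ours):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: vegetation reflects NIR
    strongly and absorbs red, pushing the index toward +1."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized difference built-up index: built-up land raises SWIR
    reflectance relative to NIR, pushing the index positive."""
    return (swir - nir) / (swir + nir)

def mndwi(green, swir):
    """Modified normalized difference water index: water is bright in
    green and dark in SWIR, pushing the index positive over water."""
    return (green - swir) / (green + swir)
```

Thresholding combinations of these indices per pixel is what lets the built-up mask be extracted before the land surface temperature statistics are aggregated by district.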