Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
Author(s):
Jie Yang;
David W. Messinger;
Roger R. Dube;
Emmett J. Ientilucci
Filtered multispectral imaging is a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce correlated spatial-spectral errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance-to-Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain G(i,j) and Dark Signal Non-Uniformity (DSNU) Z(i,j) are calculated. The conversion gain is further decomposed into four components: an FPN row component, an FPN column component, a defects component, and an effective photo response signal component. The conversion gain is then corrected by averaging out the FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be improved by a factor of up to seven relative to the raw image, and the larger the image DC value within its dynamic range, the greater the improvement.
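The pixel-wise model above amounts to DC(i,j) = G(i,j)·L + Z(i,j). The sketch below is a minimal illustration with hypothetical calibration arrays (not the authors' code): it inverts that model to estimate radiance and re-renders the image with a single uniform gain.

```python
import numpy as np

def correct_fpn(dc, gain, dsnu):
    """Estimate incident radiance and re-render a uniform-gain image.

    dc   : raw digital counts, shape (rows, cols)
    gain : pixel-wise conversion gain G(i, j) (hypothetical calibration product)
    dsnu : pixel-wise dark signal non-uniformity Z(i, j)
    """
    # Invert the pixel-wise linear model DC = G * L + Z to estimate radiance.
    radiance = (dc - dsnu) / gain
    # Re-apply a single, uniform gain (here the mean gain) so that the
    # corrected image is free of row/column fixed pattern structure.
    uniform_gain = gain.mean()
    return uniform_gain * radiance

# Toy example with synthetic calibration data.
rng = np.random.default_rng(0)
gain = 1.0 + 0.05 * rng.standard_normal((4, 4))   # simulated non-uniform gain
dsnu = 2.0 + 0.5 * rng.standard_normal((4, 4))    # simulated dark offsets
true_radiance = np.full((4, 4), 100.0)
raw = gain * true_radiance + dsnu
print(correct_fpn(raw, gain, dsnu))               # ~uniform values near 100
```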
Designing manufacturable filters for a 16-band plenoptic camera using differential evolution
Author(s):
Timothy Doster;
Colin C. Olson;
Erin Fleet;
Michael Yetzbacher;
Andrey Kanaev;
Paul Lebow;
Robert Leathers
A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the lens's front aperture. This ability to change out filters allows an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced flat-topped filters. Knowing the operating theater or the likely targets of interest, it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the different possible filters and differential evolution (DE) to search over the space of possible filter designs. The modified beta distribution allows us to jointly optimize the width, taper, and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions which are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task and a set of known targets, we train the filters to optimize the separation of the background and target signatures. We compare our results to the default 16 flat-topped non-overlapping filter set which comes with the plenoptic camera and to full hyperspectral resolution data which was previously acquired.
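As a loose sketch of the optimization machinery (the paper's exact modified-beta parameterization and CS/target-detection objectives are not reproduced here), one could drive SciPy's differential evolution with a beta-shaped transmission curve and a toy spectral objective:

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import beta

wavelengths = np.linspace(400, 1000, 301)          # nm, assumed VNIR grid

def filter_curve(params):
    """Single-pass filter transmission shaped by a rescaled beta pdf.

    params = (a, b, center, width): a and b control the taper, center/width
    place the passband on the wavelength axis. Purely illustrative.
    """
    a, b, center, width = params
    x = (wavelengths - (center - width / 2)) / width
    pdf = beta.pdf(np.clip(x, 1e-6, 1 - 1e-6), a, b)
    curve = np.where((x > 0) & (x < 1), pdf, 0.0)
    return curve / (curve.max() + 1e-12)

def objective(params, target):
    """Toy objective: mismatch between the filter and a desired passband."""
    return np.sum((filter_curve(params) - target) ** 2)

# Desired flat-topped passband centered at 650 nm, 60 nm wide (illustrative target).
target = ((wavelengths > 620) & (wavelengths < 680)).astype(float)
bounds = [(1.1, 10.0), (1.1, 10.0), (450.0, 950.0), (20.0, 200.0)]
result = differential_evolution(objective, bounds, args=(target,), seed=1, maxiter=50)
print(result.x, result.fun)
```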
Fresnel zone plate light field spectral imaging simulation
Author(s):
Francis D. Hallada;
Anthony L. Franz;
Michael R. Hawks
Through numerical simulation, we have demonstrated a novel snapshot spectral imaging concept using binary diffractive optics. Binary diffractive optics, such as Fresnel zone plates (FZP) or photon sieves, can be used as the single optical element in a spectral imager that conducts both imaging and dispersion. In previous demonstrations of spectral imaging with diffractive optics, the detector array was physically translated along the optic axis to measure different image formation planes. In this new concept the wavelength-dependent images are constructed synthetically, by using integral photography concepts commonly applied to light field (plenoptic) cameras. Light field cameras use computational digital refocusing methods after exposure to make images at different object distances. Our concept refocuses to make images at different wavelengths instead of different object distances. The simulations in this study demonstrate this concept for an imager designed with a FZP. Monochromatic light from planar sources is propagated through the system to a measurement plane using wave optics in the Fresnel approximation. Simple images, placed at optical infinity, are illuminated by monochromatic sources and then digitally refocused to show different spectral bins. We show the formation of distinct images from different objects, illuminated by monochromatic sources in the VIS/NIR spectrum. Additionally, this concept could easily be applied to imaging in the MWIR and LWIR ranges. In conclusion, this new type of imager offers a rugged and simple optical design for snapshot spectral imaging and warrants further development.
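A minimal wave-optics building block for this kind of simulation, assuming the standard Fresnel transfer-function propagator and placeholder grid and wavelength values (not the paper's actual parameters), might look like:

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the Fresnel transfer function."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequency grid
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel-approximation transfer function (constant phase factor omitted).
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative parameters: 1 mm circular aperture, 512x512 grid, 550 nm light.
n, dx, wavelength = 512, 4e-6, 550e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (np.sqrt(X ** 2 + Y ** 2) < 0.5e-3).astype(complex)
intensity = np.abs(fresnel_propagate(aperture, wavelength, dx, z=0.05)) ** 2
print(intensity.max())
```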
Three-dimensional hyperspectral imaging technique
Author(s):
Jörgen Ahlberg;
Ingmar G. Renhorn;
Tomas R. Chevalier;
Joakim Rydell;
David Bergström
Hyperspectral remote sensing based on unmanned airborne vehicles is a field increasing in importance. The combined functionality of simultaneous hyperspectral and geometric modeling is less developed. A configuration has been developed that enables the reconstruction of the hyperspectral three-dimensional (3D) environment. The hyperspectral camera is based on a linear variable filter and a high frame rate, high resolution camera enabling point-to-point matching and 3D reconstruction. This allows the information to be combined into a single and complete 3D hyperspectral model. In this paper, we describe the camera and illustrate capabilities and difficulties through real-world experiments.
Spectrally resolved longitudinal spatial coherence interferometry
Author(s):
Ethan R. Woodard;
Michael W. Kudenov
We present an alternative imaging technique using spectrally resolved longitudinal spatial coherence interferometry to encode a scene’s angular information onto the source’s power spectrum. Fourier transformation of the spectrally resolved channeled spectrum output yields a measurement of the incident scene’s angular spectrum. Theory for the spectrally resolved interferometric technique is detailed, demonstrating analogies to conventional Fourier transform spectroscopy. An experimental proof-of-concept system and results are presented using an angularly dependent Fabry-Perot interferometer-based optical design for the successful reconstruction of one-dimensional sinusoidal angular spectra. A potential future application of the technique, in which polarization information is encoded onto the source’s power spectrum, is also discussed.
Real-time hyperspectral image processing for UAV applications, using HySpex Mjolnir-1024
Author(s):
Pesal Koirala;
Trond Løke;
Ivar Baarstad;
Andrei Fridman;
Julio Hernandez
The HySpex Mjolnir-1024 hyperspectral camera provides a unique combination of small form factor and low mass combined with high performance and scientific-grade data quality. The camera has a spatial resolution of 1024 pixels, a spectral resolution of 200 bands within the 400 nm to 1000 nm wavelength range, and F1.8 optics that ensure high light throughput. A rugged design with good thermal and mechanical stability makes the Mjolnir-1024 an excellent option for a wide range of scientific applications in airborne UAV operations and field work. The optical architecture is based on the high-end ODIN-1024 system and features a total FOV of 20 degrees with approximately 0.1 pixel residual keystone effect and an even smaller residual smile effect after resampling. With a total mass of less than 4 kg including the hyperspectral camera, data acquisition unit, IMU, and GPS, the system is suitable for even relatively small UAVs. The system is generic and can be deployed on a wide range of UAVs with various downlink capabilities. The ground station software enables full control of the sensor settings and has the capability to show in real time the location of the UAV, plot the flight path of the UAV, and display a georeferenced waterfall preview image in order to give instant feedback on spatial coverage. The system can be triggered automatically by the UAV’s flight management system, but can also be controlled manually. The Mjolnir-1024 housing contains both the camera hardware and a high-performance onboard computer. The computer enables advanced processing capabilities such as real-time georeferencing based on the data streams from the camera and INS. The system is also capable of performing real-time image analysis such as anomaly detection, NDVI, and SAM. The data products can be overlaid on top of various background maps and images in real time. The real-time processing results can also be downlinked and displayed directly on the monitor of the ground station.
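Two of the real-time products named above, NDVI and the spectral angle mapper (SAM), have simple per-pixel forms. A minimal sketch follows; the band indices and reference spectrum are placeholders, not Mjolnir-specific values.

```python
import numpy as np

def ndvi(cube, red_band, nir_band):
    """Normalized difference vegetation index for a (rows, cols, bands) cube."""
    red = cube[..., red_band].astype(float)
    nir = cube[..., nir_band].astype(float)
    return (nir - red) / (nir + red + 1e-12)

def spectral_angle(cube, reference):
    """Spectral Angle Mapper: angle (radians) between each pixel and a reference spectrum."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    cos = flat @ reference / (np.linalg.norm(flat, axis=1) * np.linalg.norm(reference) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

# Toy 3x3 scene with 200 bands; band indices here are placeholders.
cube = np.random.rand(3, 3, 200)
print(ndvi(cube, red_band=80, nir_band=150))
print(spectral_angle(cube, reference=cube[0, 0]))
```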
Intelligent detection algorithm of hazardous gases for FTIR-based hyperspectral imaging system using SVM classifier
Author(s):
Hyeong-Geun Yu;
Jae-Hoon Lee;
Yong-Chan Kim;
Dong-Jo Park
A hyperspectral imaging system (HSI) with a Fourier transform infrared (FTIR) spectrometer is an excellent method for the detection and identification of gaseous fumes. Various detection algorithms can remove background spectra from measured spectra and determine the degree of spectral similarity between the extracted signature and reference signatures of target compounds. However, given the interference signatures caused by FTIR instruments, it is impossible to extract the spectral signatures of target gases perfectly. Such interference signatures degrade detection performance. In this paper, a detection algorithm for gaseous fumes using a multiclass support vector machine (SVM) classifier is proposed. The proposed algorithm has a training step and a test step. In the training step, spectral signatures are extracted from labeled measured spectra. Hyperplanes that separate the gas spectra are then trained, and the multiclass SVM classifier outcomes are computed from these hyperplanes. In the test step, spectral signatures extracted from unknown measured spectra are fed into the SVM classifier, after which the detection result is obtained. The multiclass SVM classifier is robust to the performance degradation caused by unremoved interference signatures because it is trained not only on gas signatures but also on the related interference signatures. The experimental results verify that the algorithm can effectively detect hazardous clouds.
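A minimal multiclass SVM stage of the kind described, sketched with synthetic stand-in features and scikit-learn (the paper's actual signature extraction is not reproduced), could look like:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder data: rows are spectral signatures extracted from measured spectra,
# labels index the class (0 = background/interferent, 1..K = target gases).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))          # 64 spectral features per sample (assumed)
y = rng.integers(0, 4, size=300)        # four classes, purely synthetic

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# scikit-learn's SVC handles the multiclass case via one-vs-one hyperplanes.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```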
Characterizing sensitivity of longwave infrared hyperspectral target detection with respect to signature mismatch and dimensionality reduction
Author(s):
Joseph Meola
Hyperspectral target detection typically relies upon libraries of material reflectance and emissivity signatures. Application to real-world, airborne data requires estimation of atmospheric properties in order to convert reflectance/emissivity signatures to the sensor data domain. In the longwave infrared, an additional nuisance parameter of surface temperature exists that further complicates the signature conversion process. A significant amount of work has been done in atmospheric compensation and temperature-emissivity-separation techniques. This work examines the sensitivity of target detection performance for various materials with respect to target signature mismatch introduced from atmospheric compensation error or target temperature mismatch. Additionally, the impact of dimensionality reduction via principal components analysis is assessed.
Total electron count variability and stratospheric ozone effects on solar backscatter and LWIR emissions
Author(s):
John S. Ross;
Steven T. Fiorino
The development of an accurate ionospheric Total Electron Count (TEC) model is of critical importance to high frequency (HF) radio propagation and satellite communications. However, the TEC is highly variable and is continually influenced by geomagnetic storms, extreme UV radiation, and planetary waves. Being able to capture this variability is essential to improve current TEC models. The growing body of data involving ionospheric fluctuations and stratospheric variations has revealed a correlation. In particular, there is a marked and persistent association between increases in stratospheric ozone and variability of the TEC. The spectral properties of ozone show that it is a greenhouse gas that alters longwave emissions from Earth and interacts with the UV spectrum coming from the sun. This study uses the Laser Environment Effects Definition and Reference (LEEDR) radiative transfer and atmospheric characterization code to model the effects of changes in stratospheric ozone on solar backscatter and longwave infrared (LWIR) terrestrial emissions, and to infer TEC and TEC variability.
Improved atmospheric characterization for hyperspectral exploitation
Author(s):
Nathan P. Wurst;
Joseph Meola;
Steven T. Fiorino
Airborne hyperspectral imaging (HSI) has shown utility in material detection and identification. Recent interest in longwave infrared (LWIR) HSI systems operating in the 7-14 micron range has developed due to strong spectral features of minerals, chemicals, and gaseous effluents. LWIR HSI has an advantage over other spectral bands in that it operates in day or night scenarios, because emitted/reflected thermal radiation rather than reflected sunlight is measured. This research seeks to determine the most effective methods to perform model-based atmospheric compensation (AC) of LWIR HSI data using two existing atmospheric radiative transfer (RT) models, MODTRAN and LEEDR. MODTRAN is the more established RT model, but it lacks LEEDR's robust capability to generate realistic atmospheric profiles from probabilistic climatology or from observations and forecasts from numerical weather prediction (NWP) models. The advantage of LEEDR's ability to generate atmospheres is tested by using LEEDR atmospheres, a MODTRAN standard model, and radiosonde data to perform AC on an airborne hyperspectral datacube with nadir-looking geometry. This work investigates the potential benefit of LEEDR's weather/climatology tools for improving and/or expediting the AC process for LWIR HSI.
Mid-infrared hyperspectral simulator for laser-based detection of trace chemicals on surfaces
Author(s):
Travis Myers;
Derek Wood;
Anish K. Goyal;
David Kelley;
Petros Kotidis;
Gil Raz;
Cara Murphy;
Chelsea Georgan
Laser-based, mid-infrared (MIR) hyperspectral imaging (HSI) has the potential to detect a wide range of trace chemicals on a variety of surfaces under standoff conditions. The major challenge of MIR reflection spectroscopy is that the reflection signatures for surface chemicals can be complex and exhibit significant spectral variability. This paper describes a MIR Hyperspectral Simulator that is being developed to model the reflectance signatures from surfaces including the effects of speckle and other sources of spectral variability. Simulated hypercubes will be compared with experiments.
Novel trace chemical detection algorithms: a comparative study
Author(s):
Gil Raz;
Cara Murphy;
Chelsea Georgan;
Ross Greenwood;
R. K. Prasanth;
Travis Myers;
Anish Goyal;
David Kelley;
Derek Wood;
Petros Kotidis
Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component for a variety of applications relevant to law-enforcement and the intelligence communities. Performance of these methods is impacted by the spectral signature variability due to presence of contaminants, surface roughness, nonlinear dependence on abundances as well as operational limitations on the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include adaptive cosine estimator (ACE and modifications to it), compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near optimal as well as their built-in limitations and failure modes.
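Of the algorithm classes compared, the adaptive cosine estimator has a compact closed form. The sketch below uses scene-estimated background statistics on synthetic data; it is a generic ACE illustration, not the authors' implementation.

```python
import numpy as np

def ace(pixels, target, mean, cov_inv):
    """Adaptive cosine estimator score for each pixel (rows are spectra)."""
    x = pixels - mean
    t = target - mean
    xt = x @ cov_inv @ t
    return (xt ** 2) / ((t @ cov_inv @ t) * np.einsum("ij,jk,ik->i", x, cov_inv, x))

# Synthetic scene: 1000 background pixels, 50 bands, one known target signature.
rng = np.random.default_rng(0)
background = rng.normal(size=(1000, 50))
target = rng.normal(size=50) + 2.0
mean = background.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(background, rowvar=False))
scores = ace(background, target, mean, cov_inv)
print(scores.max())
```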
Deep learning over diurnal and other environmental effects
Author(s):
Dalton Rosario;
Patrick Rauss
We study the transfer learning behavior of a Hybrid Deep Network (HDN) applied to a challenging longwave infrared hyperspectral dataset, consisting of radiance from several manmade and natural materials within a fixed site located 500 m from an observation tower, over multiple full diurnal cycles and different atmospheric conditions. The HDN architecture adopted in this study stacks a number of Restricted Boltzmann Machines to form a deep belief network for generative pre-training, or initialization of weight parameters, and then combines this with a discriminative learning procedure that fine-tunes all of the weights jointly to improve the network’s performance. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of spectral data and their labels, despite significant data variability observed between and within classes due to environmental and temperature variation occurring within full diurnal cycles. We argue, however, that more questions are raised than answered regarding the generalization capacity of these deep nets through experiments aimed at investigating their training and transfer learning behavior in the longwave infrared region of the electromagnetic spectrum.
Experiments with Simplex ACE: dealing with highly variable targets
Author(s):
Amanda Ziemann;
James Theiler;
Emmett Ientilucci
We investigate a constrained subspace detector that models the target spectrum as a positive linear combination of multiple reference spectra. This construction permits the input of a large number of target reference spectra, which enables us to go after even highly variable targets without being overwhelmed by false alarms. This constrained basis approach led to the derivation of both the simplex adaptive matched filter (Simplex AMF) and simplex adaptive cosine estimator (Simplex ACE) detectors. Our primary interest is in Simplex ACE, and as such, the experiments in this paper focus on evaluating the robustness of Simplex ACE (with Simplex AMF included for comparison). We present results using large spectral libraries implanted into real hyperspectral data, and compare the performance of our simplex detectors against their traditional subspace detector counterparts. In addition to a large (i.e., several hundred spectra) target library, we induce further target variability by implanting subpixel targets with both added noise and scaled illumination. As a corollary, we also show that in the limit as the target subspace approaches the image space, Subspace AMF becomes the RX anomaly detector.
Crop classification using temporal stacks of multispectral satellite imagery
Author(s):
Daniela I. Moody;
Steven P. Brumby;
Rick Chartrand;
Ryan Keisler;
Nathan Longbotham;
Carly Mertes;
Samuel W. Skillman;
Michael S. Warren
The increase in performance, availability, and coverage of multispectral satellite sensor constellations has led to a drastic increase in data volume and data rate. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes/year of daily high-resolution global coverage imagery. The data analysis capability, however, has lagged behind storage and compute developments, and has traditionally focused on individual scene processing. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and can scale with the high rate and dimensionality of imagery being collected. We investigate and compare the performance of pixel-level crop identification using tree-based classifiers and its dependence on both temporal and spectral features. Classification performance is assessed using as ground truth the Cropland Data Layer (CDL) crop masks generated by the US Department of Agriculture (USDA). The CDL maps provide pixel-level labels at 30 m spatial resolution for around 200 categories of land cover, but are only available post-growing season. The analysis focuses on McCook county in South Dakota and shows crop classification using a temporal stack of Landsat 8 (L8) imagery over the growing season, from April through October. Specifically, we consider the temporal L8 stack depth, as well as different normalized band difference indices, and evaluate their contribution to crop identification. We also show an extension of our algorithm to map corn and soy crops in the state of Mato Grosso, Brazil.
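A minimal version of this kind of pixel-level, tree-based classification over a temporal feature stack is sketched below; the synthetic features stand in for per-date band indices, and the class names are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a temporal stack of per-pixel features: for each pixel,
# an NDVI-like index at several dates through the growing season.
rng = np.random.default_rng(0)
n_pixels, n_dates = 5000, 7
features = rng.random((n_pixels, n_dates))              # temporal feature stack
labels = rng.integers(0, 3, size=n_pixels)              # e.g. corn / soy / other

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[:4000], labels[:4000])
print("held-out accuracy:", clf.score(features[4000:], labels[4000:]))
# Feature importances indicate which dates contribute most to crop separation.
print("per-date importance:", clf.feature_importances_)
```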
Invariance concepts in spectral analysis
Author(s):
Alan Schaum
Methods are developed for ensuring robust discrimination performance in detection problems with epistemic unknowns. The problem is first solved for the class of problems exhibiting some symmetry, as expressed by invariances to some group of feature space transformations. The determination of whether a problem admits a uniformly most powerful invariant (UMPI) solution (and how to derive it) is solved with a new and simple procedure. This motivates an approach for solving problems where a symmetry is gracefully broken, which leads in turn to a general approach for producing robust detectors. This introduces a new category of detector, the UMPIC (UMPI constrained). Finally, principles of UMPIC construction are shown to apply to problems exhibiting no invariances.
Spectral and spatial variability of undisturbed and disturbed grass under different view and illumination directions
Author(s):
Christoph C. Borel-Donohue;
Sarah Wells Shivers;
Damon Conover
It is well known that disturbed grass-covered surfaces show variability with view and illumination conditions. A good example is a grass field in a soccer stadium that shows stripes indicating in which direction the grass was mowed. These spatial variations are due to a complex interplay of the spectral characteristics of grass blades, their density, length, and orientations. Viewing a grass surface from nadir or near-horizontal directions results in observing different components. Views from a vertical direction show more variation due to reflections from the randomly oriented grass blades and their shadows. Views from near horizontal show a mixture of reflected and transmitted light from grass blades. An experiment was performed on a mowed grass surface on which paths of simulated heavy foot traffic were laid down in different directions. High spatial resolution hyperspectral data cubes were taken by an imaging spectrometer covering the visible through near infrared over a period of several hours. Ground truth grass reflectance spectra of undisturbed and disturbed areas were obtained with a hand-held spectrometer. Close-range images of selected areas were taken with a hand-held camera and then used to reconstruct the 3D geometry of the grass using structure-from-motion algorithms. Computer graphics rendering using raytracing of reconstructed and procedurally created grass surfaces was used to compute BRDF models. In this paper, we discuss differences between observed and simulated spectral and spatial variability. Based on the measurements and/or simulations, we derive simple spectral index methods to detect spatial disturbances and apply scattering models.
Measurement of optical constants for spectral modeling: n and k values for ammonium sulfate via single-angle and ellipsometric methods
Author(s):
Thomas A. Blake;
Carolyn S. Brauer;
Molly Rose Kelly-Gorham;
Sarah D. Burton;
Mary Bliss;
Tanya L. Myers;
Timothy J. Johnson;
Thomas E. Tiwald
The complex index of refraction, ñ = n + ik, has two components, n(ν) and k(ν), both a function of frequency, ν. Here n is the real component, and k is the imaginary component, proportional to the absorption. In combination with other parameters, n and k can be used to model infrared spectra. However, obtaining reliable n/k values for solid materials is often difficult. In the past, the best results for n and k have been obtained from bulk, polished, homogeneous materials free of defects, i.e., materials for which the Fresnel equations are valid and there is no appreciable light scattering. Since it is often not possible to obtain such pure macroscopic samples, the alternative is to press the powder form of the material into a uniform disk. Recently, we have pressed such pellets from ammonium sulfate powder, and have measured the pellets’ n and k values via two independent methods: 1) ellipsometry, which measures the changes in amplitude and phase of light reflected from the material of interest as a function of wavelength and angle of incidence, and 2) single-angle reflectance using a specular reflectance device within a Fourier transform infrared spectrometer, which measures the change in amplitude of light reflected from the material of interest as a function of wavelength over a wide spectral domain. The optical constants are determined from the single-angle measurements using the Kramers-Kronig relationship, whereas an oscillator model is used to analyze the ellipsometric measurements. The n(ν) and k(ν) values determined by the two methods were compared to previous values determined from single-crystal samples, for which transmittance and reflectance measurements were made and converted to n(ν) and k(ν) using a simple dispersion model [Toon et al., Journal of Geophysical Research, 81, 5733–5748, (1976)]. Comparison with the literature values shows good agreement, indicating that these are promising techniques for measuring the optical constants of other materials.
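The single-angle analysis mentioned above rests on the Kramers-Kronig relation between n(ν) and k(ν). A crude numerical sketch of that transform is given below, with an illustrative Lorentzian band and naive principal-value handling; it is not the authors' analysis code.

```python
import numpy as np

def kk_n_from_k(nu, k, n_inf=1.0):
    """Estimate n(nu) from k(nu) via a discretized Kramers-Kronig transform.

    nu : evenly spaced frequencies (e.g. wavenumbers), k : absorption index.
    The principal value is approximated by zeroing the singular sample.
    """
    dnu = nu[1] - nu[0]
    n = np.empty_like(k)
    for i, nu_i in enumerate(nu):
        integrand = nu * k / (nu ** 2 - nu_i ** 2 + 1e-30)
        integrand[i] = 0.0                      # crude principal-value handling
        n[i] = n_inf + (2.0 / np.pi) * np.sum(integrand) * dnu
    return n

# Toy example: a single Lorentzian absorption band (illustrative only).
nu = np.linspace(400.0, 4000.0, 2000)
k = 0.3 / (1.0 + ((nu - 1100.0) / 30.0) ** 2)
print(kk_n_from_k(nu, k)[:5])
```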
Characterizing the temporal and spatial variability of longwave infrared spectral images of targets and backgrounds
Author(s):
Nirmalan Jeganathan;
John Kerekes;
Dalton Rosario
Following the public release of the Spectral and Polarimetric Imagery Collection Experiment (SPICE) dataset, a persistent imaging experiment dataset collected by the Army Research Laboratory (ARL), the data were analyzed and materials in the scene were characterized temporally and spatially using radiance data. The noise equivalent spectral radiance provided by the sensor manufacturer was compared with instrument noise calculated from in-scene information, and found to be comparable given the differences between laboratory settings and real-life conditions. The processed dataset has recurring "inconsistent cubes," specifically for data collected immediately after blackbody measurements, which were automatically executed approximately at each hour mark. Omitting these erroneous data, three target detection algorithms (adaptive coherence/cosine estimator, spectral angle mapper, and spectral matched filter) were tested on the temporal data using two target spectra (noon and midnight). The spectral matched filter produced the best detection rate for both the noon and midnight target spectra over a 24-hour period.
Spatial-spectral signature modeling for solid targets in hyperspectral imagery
Author(s):
Jason R. Kaufman;
Joseph Meola
Spatial-spectral feature extraction algorithms – such as those based on spatial descriptors applied to selected spectral bands within a hyperspectral image – can provide additional discrimination capability beyond traditional spectral-only approaches. However, when attempting to detect a target with such algorithms, an exemplar target signature is often manually derived from the hyperspectral image's representation in the spatial-spectral feature space. This requires a reference image in which the target's location is known. Additionally, the scene-based signature captures only the representation of the target under certain collection conditions from a specific sensor, namely, illumination level and atmospheric composition, look angle, and target pose against a specific background. A detection algorithm utilizing this spatial-spectral signature (or the spatial descriptor itself) that is sensitive to changes in these collection conditions could suffer a loss in performance should the new conditions deviate significantly from the exemplar's case. To begin to overcome these limitations, we formulate and evaluate the effectiveness of a modeling technique for synthesizing exemplar spatial-spectral signatures for solid targets, particularly when the spatial structure of the target of interest varies due to pose or obscuration by the background and, when applicable, the target temperature varies. We assess the impact of these changes on a group of spatial descriptors' responses to guide the modeling process for a set of two-dimensional targets specifically designed for this study. The sources of variability that most affect each descriptor are captured in target subspaces, which then form the basis of new spatial-spectral target detection algorithms.
Contaminant mass estimation of powder contaminated surfaces
Author(s):
Timothy J. Gibbs;
David W. Messinger
How can we determine the physical characteristics of a mixture of multiple materials within a single pixel? Intimate mixing occurs when different materials within the region encompassed by a pixel interact with each other prior to reaching the sensor. For powder-contaminated surfaces, nonlinear mixing is unavoidable. The Nonconventional Exploitation Factors Data System (NEFDS) Contamination Model can generate longwave hyperspectral mixture signatures, but only for a small subset of its spectral library. In addition, the model uses percent coverage as its only physical property input, despite this not being informative of the contaminant's physical properties. Through a complex parameter inversion, the NEFDS Contamination Model can be used to derive various physical properties. These physical characteristics were estimated from empirically measured data of varying contaminant amounts using a Designs and Prototypes Fourier transform infrared spectrometer. Once the estimated parameters are found, the mixture spectrum is recreated and compared to the measured data. The estimated areal coverage density is used to derive a total deposited mass on the surface based on the area of the contaminated surface. This is compared to the known deposited amount measured during the experimental campaign. This paper presents some results of those measurements and model estimates.
Improvements to an earth observing statistical performance model with applications to LWIR spectral variability
Author(s):
Runchen Zhao;
Emmett J. Ientilucci
Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Spectral remote sensing systems can be used to identify targets, for example, without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model.
FASSP contains three basic modules: a scene model, a sensor model, and a processing model. Instead of using only the mean surface reflectance as input to the model, FASSP transfers user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model MODTRAN is used to simulate the radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, or temperature/emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently does not have a LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.
Piecewise flat embeddings for hyperspectral image analysis
Author(s):
Tyler L. Hayes;
Renee T. Meinhold;
John F. Hamilton Jr.;
Nathan D. Cahill
Graph-based dimensionality reduction techniques such as Laplacian Eigenmaps (LE), Local Linear Embedding (LLE), Isometric Feature Mapping (ISOMAP), and Kernel Principal Components Analysis (KPCA) have been used in a variety of hyperspectral image analysis applications for generating smooth data embeddings. Recently, Piecewise Flat Embeddings (PFE) were introduced in the computer vision community as a technique for generating piecewise constant embeddings that make data clustering / image segmentation a straightforward process. In this paper, we show how PFE arises by modifying LE, yielding a constrained ℓ1-minimization problem that can be solved iteratively. Using publicly available data, we carry out experiments to illustrate the implications of applying PFE to pixel-based hyperspectral image clustering and classification.
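Since PFE is presented as a modification of Laplacian Eigenmaps, a minimal LE baseline may help fix ideas; the sketch below uses a k-nearest-neighbor graph with heat-kernel weights on synthetic spectra, and the constrained ℓ1 iteration itself is not reproduced.

```python
import numpy as np
from scipy.sparse import csgraph
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, n_components=2, n_neighbors=10, sigma=1.0):
    """Classic LE embedding of the rows of X (e.g. pixel spectra)."""
    # Symmetric k-NN affinity graph with heat-kernel weights.
    dist = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    W = dist.copy()
    W.data = np.exp(-W.data ** 2 / (2 * sigma ** 2))
    W = 0.5 * (W + W.T)
    L = csgraph.laplacian(W, normed=True).toarray()
    # Smallest eigenvectors (skipping the trivial constant one) give the embedding.
    vals, vecs = eigh(L)
    return vecs[:, 1:n_components + 1]

X = np.random.rand(200, 30)          # 200 pixels, 30 bands (synthetic)
print(laplacian_eigenmaps(X).shape)  # (200, 2)
```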
Supervised non-negative tensor factorization for automatic hyperspectral feature extraction and target discrimination
Author(s):
Dylan Anderson;
Aleksander Bapst;
Joshua Coon;
Aaron Pung;
Michael Kudenov
Hyperspectral imaging provides a highly discriminative and powerful signature for target detection and discrimination. Recent literature has shown that considering additional target characteristics, such as spatial or temporal profiles, simultaneously with spectral content can greatly increase classifier performance. Considering these additional characteristics in a traditional discriminative algorithm requires a feature extraction step be performed first. An example of such a pipeline is computing a filter bank response to extract spatial features followed by a support vector machine (SVM) to discriminate between targets. This decoupling between feature extraction and target discrimination yields features that are suboptimal for discrimination, reducing performance. This performance reduction is especially pronounced when the number of features or available data is limited. In this paper, we propose the use of Supervised Nonnegative Tensor Factorization (SNTF) to jointly perform feature extraction and target discrimination over hyperspectral data products. SNTF learns a tensor factorization and a classification boundary from labeled training data simultaneously. This ensures that the features learned via tensor factorization are optimal for both summarizing the input data and separating the targets of interest. Practical considerations for applying SNTF to hyperspectral data are presented, and results from this framework are compared to decoupled feature extraction/target discrimination pipelines.
Band selection for hyperspectral image classification using extreme learning machine
Author(s):
Jiaojiao Li;
Benjamin Kingsdorf;
Qian Du
Extreme learning machine (ELM) is a feedforward neural network with one hidden layer, similar to a multilayer perceptron (MLP). To reduce the complexity of training an MLP with the traditional backpropagation algorithm, the weights in an ELM between the input and hidden layers are random variables. The output layer in the ELM is linear, as in a radial basis function neural network (RBFNN), so the output weights can be easily estimated with a least squares solution. It has been demonstrated in our previous work that the computational cost of ELM is much lower than that of the standard support vector machine (SVM), and a kernel version of ELM can offer performance comparable to SVM. In our previous work, we also investigated the impact of the number of hidden neurons on the performance of ELM. Basically, more hidden neurons are needed if the number of training samples and the data dimensionality are large, which results in a very large matrix inversion problem. To avoid handling such a large matrix, we propose to conduct band selection to reduce data dimensionality (i.e., the number of input neurons), thereby reducing network complexity. Experimental results show that ELM using selected bands can yield similar or even better classification accuracy than using all the original bands.
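The ELM recipe described above (random input weights, linear least-squares output layer) is short enough to sketch directly; the example below uses synthetic pixels in place of band-selected hyperspectral data.

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine with random input weights."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def fit(self, X, y):
        # Random, untrained weights between input and hidden layer.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        T = np.eye(y.max() + 1)[y]                             # one-hot targets
        # Linear output layer: least-squares solution via the pseudoinverse.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Synthetic stand-in for pixels (rows) from a band-selected hyperspectral image.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = ELM(n_hidden=200).fit(X[:400], y[:400])
print("accuracy:", (model.predict(X[400:]) == y[400:]).mean())
```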
A comparison of column subset selection methods for hyperspectral band subset selection (Conference Presentation)
Author(s):
Maher Aldeghlawi;
Miguel Velez-Reyes
Observations from hyperspectral imaging sensors lead to high dimensional data sets of hundreds of images taken at closely spaced narrow spectral bands. High storage and transmission requirements, computational complexity, and statistical modeling problems, combined with physical insight, motivate the idea of hyperspectral dimensionality reduction using band subset selection. Many algorithms are described in the literature to solve supervised and unsupervised band subset selection problems. This paper explores the use of unsupervised band subset selection methods based on column subset selection (CSS). The column subset selection problem (CSSP) is that of selecting the most independent columns of a matrix. A recent variant is the positive column subset selection problem (pCSSP), which restricts column subset selection to consider only positive linear combinations. Many algorithms have been proposed in the literature for the solution of the CSSP; the pCSSP is less studied. This paper presents a comparison of different algorithms to solve the CSSP and the pCSSP for band subset selection. The performance of classifiers using the algorithms as a dimensionality reduction stage is used to evaluate the usefulness of these algorithms in hyperspectral image exploitation.
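One classical CSSP heuristic is QR factorization with column pivoting, which greedily picks the most linearly independent columns (bands). A small sketch on synthetic, highly redundant bands follows; it is illustrative and not one of the specific algorithms compared in the paper.

```python
import numpy as np
from scipy.linalg import qr

def select_bands_qr(cube_2d, n_bands):
    """Greedy column subset selection via QR with column pivoting.

    cube_2d : (n_pixels, n_total_bands) matrix of spectra; returns indices of
    the n_bands most linearly independent columns (bands).
    """
    _, _, piv = qr(cube_2d, mode="economic", pivoting=True)
    return np.sort(piv[:n_bands])

# Synthetic cube: 100 bands, most of which are near-duplicates of a few sources.
rng = np.random.default_rng(0)
sources = rng.normal(size=(2000, 5))
mixing = rng.random((5, 100))
cube_2d = sources @ mixing + 0.01 * rng.normal(size=(2000, 100))
print(select_bands_qr(cube_2d, n_bands=5))
```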
Band selection for change detection from hyperspectral images
Author(s):
Sicong Liu;
Qian Du;
Xiaohua Tong
In this paper, we propose to apply unsupervised band selection to improve the performance of change detection in multitemporal hyperspectral images (HSI-CD). By reducing data dimensionality through finding the most distinctive and informative bands in the difference image, foreground changes may be better detected. The band selection-based dimensionality reduction (BS-DR) technique is investigated in detail for the following sub-problems in HSI-CD: 1) the estimated number of multi-class changes; 2) binary CD; 3) multiple CD; 4) change discriminability; and 5) the optimal number of selected bands. This contributes, for the first time, a quantitative analysis of the impact of the BS-DR approach on HSI-CD performance. Due to the difficulty of obtaining training samples in an unknown environment, unsupervised band selection and change detection are considered. A pair of real multitemporal hyperspectral Hyperion data sets has been used to validate the proposed approach. Experimental results confirmed the effectiveness of selecting a band subset to obtain a satisfactory CD result, compared with using the original full set of bands. In addition, the results also demonstrated that the reduced feature space is capable of maintaining sufficient information for detecting the spectrally significant changes that occurred. CD performance is enhanced as the change representation and discrimination capabilities increase.
Method of sensitivity analysis in anomaly detection algorithms for hyperspectral images
Author(s):
Adam J. Messer;
Kenneth W. Bauer Jr.
Anomaly detection within hyperspectral images often relies on the critical step of thresholding to declare specific pixels anomalous based on their anomaly scores. When the detector is built upon sound statistical assumptions, this threshold is often probabilistically based, as with the RX detector and the chi-squared threshold. However, when either the detector lacks a statistical framework or the background pixels of the image violate the required assumptions, the approach to thresholding is complicated and can result in performance instability. We present a method to test the sensitivity of thresholding to small changes in the characteristics of the anomalies, based on their Mahalanobis distance to the background class. In doing so, we highlight issues in detector thresholding techniques, comparing statistical approaches against heuristic methods of thresholding.
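The statistically grounded case mentioned above, the RX detector with a chi-squared threshold, can be sketched in a few lines; the example below uses global background statistics on a synthetic multivariate-normal scene.

```python
import numpy as np
from scipy.stats import chi2

def rx_scores(pixels):
    """Global RX anomaly scores (squared Mahalanobis distance to the scene mean)."""
    mean = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    diff = pixels - mean
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Under a multivariate normal background, RX scores follow a chi-squared
# distribution with (number of bands) degrees of freedom, which fixes a threshold.
rng = np.random.default_rng(0)
bands = 30
background = rng.normal(size=(5000, bands))
scores = rx_scores(background)
threshold = chi2.ppf(0.999, df=bands)              # 0.1% false-alarm design point
print("declared anomalies:", np.sum(scores > threshold))
```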
Local background estimation and the replacement target model
Author(s):
James Theiler;
Amanda Ziemann
We investigate the detection of opaque targets in cluttered multi-/hyper-spectral imagery using a local background estimation model. Unlike transparent "additive-model" targets (like gas-phase plumes), these are solid "replacement-model" targets, which means that the observed spectrum is a linear combination of the target signature and the background signature. Pixels with stronger targets are associated with correspondingly weaker backgrounds, and background estimators can over-estimate the background in a target pixel. In particular, "subtracting the background" (which generalizes the usual notion of subtracting the mean) to produce a residual image can actually have a deleterious effect. We examine an adaptive partial background subtraction scheme and evaluate its utility for the detection of replacement-model targets.
A study of anomaly detection performance as a function of relative spectral abundances for graph- and statistics-based detection algorithms
Author(s):
C. C. Olson;
M. Coyle;
T. Doster
We investigate an anomaly detection framework that leverages manifold learning techniques to learn a background model. A manifold is learned from a small, uniformly sampled subset under the assumption that any anomalous samples will have little effect on the learned model. The remaining data are then projected into the manifold space and their projection errors used as detection statistics. We study detection performance as a function of the interplay between sub-sampling percentage and the abundance of anomalous spectra relative to background class abundances using synthetic data derived from field collects. Results are compared against both graph-based and traditional statistical models.
Transformation for target detection in hyperspectral imaging
Author(s):
Edisanter Lo;
Emmett Ientilucci
Conventional algorithms for target detection in hyperspectral imaging usually require multivariate normal distributions for the background and target pixels. Significant deviation from the assumed distributions can lead to incorrect detection. It is possible to make non-normal pixels look more normal by applying a transformation to the pixels. A multivariate transformation based on maximum likelihood is proposed in this paper to improve target detection in hyperspectral imaging. Experimental results show that the distribution of the transformed pixels becomes closer to a multivariate normal distribution and that the performance of the detection algorithms improves after the transformation.
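The abstract does not give the transformation's functional form. Purely as a stand-in for the idea of a maximum-likelihood transformation toward normality, the Box-Cox power transform (fit band by band) behaves as follows; it is not the paper's transformation.

```python
import numpy as np
from scipy.stats import boxcox

# Skewed, non-normal synthetic "band" of pixel values (strictly positive,
# as Box-Cox requires).
rng = np.random.default_rng(0)
band = rng.lognormal(mean=0.0, sigma=0.8, size=10000)

# Box-Cox chooses the power-transform parameter lambda by maximum likelihood.
transformed, lam = boxcox(band)
print("fitted lambda:", lam)
print("skewness before/after:",
      float(((band - band.mean()) ** 3).mean() / band.std() ** 3),
      float(((transformed - transformed.mean()) ** 3).mean() / transformed.std() ** 3))
```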
Terrestrial hyperspectral image shadow restoration through fusion with terrestrial lidar
Author(s):
Preston J. Hartzell;
Craig L. Glennie;
David C. Finnegan;
Darren L. Hauser
Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from exclusively airborne observations to include terrestrial modalities. In contrast to airborne collection geometry, hyperspectral imagery captured from terrestrial cameras is prone to extensive solar shadowing on vertical surfaces leading to reductions in pixel classification accuracies or outright removal of shadowed areas from subsequent analysis tasks. We demonstrate the use of lidar spatial information for sub-pixel HSI shadow detection and the restoration of shadowed pixel spectra via empirical methods that utilize sunlit and shadowed pixels of similar material composition. We examine the effectiveness of radiometrically calibrated lidar intensity in identifying these similar materials in sun and shade conditions and further evaluate a restoration technique that leverages ratios derived from the overlapping lidar laser and HSI wavelengths. Simulations of multiple lidar wavelengths, i.e., multispectral lidar, indicate the potential for HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance of shadowed HSI pixels is quantified for imagery of a geologic outcrop through improvements in spectral shape, spectral scale, and HSI band correlation.
Mutual information registration of multi-spectral and multi-resolution images of DigitalGlobe's WorldView-3 imaging satellite
Author(s):
Grzegorz Miecznik;
Jeff Shafer;
William M. Baugh;
Brett Bader;
Milan Karspeck;
Fabio Pacifici
WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
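The core MI computation described above reduces to a joint intensity histogram whose bin widths are supplied by the caller; tying those widths to a shot-noise model, as discussed, is only represented notionally here. A small sketch:

```python
import numpy as np

def mutual_information(img_a, img_b, bin_width_a, bin_width_b):
    """MI between two co-sized images, with caller-supplied intensity bin widths.

    Making the bin widths proportional to instrument shot noise simply means
    choosing bin_width_a / bin_width_b from the sensor noise model.
    """
    bins_a = np.arange(img_a.min(), img_a.max() + bin_width_a, bin_width_a)
    bins_b = np.arange(img_b.min(), img_b.max() + bin_width_b, bin_width_b)
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=[bins_a, bins_b])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# MI is larger when a shifted copy is brought back into alignment (toy check).
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, 3, axis=1) + 0.05 * rng.random((128, 128))
print(mutual_information(a, np.roll(b, -3, axis=1), 0.05, 0.05),
      mutual_information(a, b, 0.05, 0.05))
```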
A reconstruction algorithm for three-dimensional object-space data using spatial-spectral multiplexing
Author(s):
Zhejun Wu;
Michael W. Kudenov
This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that the non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, leading to an underdetermined linear system that is hard to recover uniquely. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of the B-spline coefficients with additional spatial smoothness regularization. The ground truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results is within 5.15%.
Target-driven selection of lossy hyperspectral image compression ratios
Author(s):
Jason R. Kaufman;
Christopher D. McGuinness
A common problem in applying lossy compression to a hyperspectral image is predicting its effect on spectral target detection performance. Recent work has shown that light amounts of lossy compression can remove noise in hyperspectral imagery that would otherwise bias a covariance-based spectral target detection algorithm’s background-normalized response to target samples. However, the detection performance of such an algorithm is a function of both the specific target of interest as well as the background, among other factors, and therefore sometimes lossy compression operating at a particular compression ratio (CR) will not negatively affect the detection of one target, while it will negatively affect the detection of another. To account for the variability in this behavior, we have developed a target-centric metric that guides the selection of a lossy compression algorithm’s CR without knowledge of whether or not the targets of interest are present in an image. Further, we show that this metric is correlated with the adaptive coherence estimator’s (ACE’s) signal to clutter ratio when targets are present in an image.
On the creation of high spatial resolution imaging spectroscopy data from multi-temporal low spatial resolution imagery
Author(s):
Wei Yao;
Jan van Aardt;
David Messinger
The Hyperspectral Infrared Imager (HyspIRI) mission aims to provide global imaging spectroscopy data to the benefit of especially ecosystem studies. The onboard spectrometer will collect radiance spectra from the visible to short wave infrared (VSWIR) regions (400-2500 nm). The mission calls for fine spectral resolution (10 nm band width) and as such will enable scientists to perform material characterization, species classification, and even sub-pixel mapping. However, the global coverage requirement results in a relatively low spatial resolution (GSD 30m), which restricts applications to objects of similar scales. We therefore have focused on the assessment of sub-pixel vegetation structure from spectroscopy data in past studies. In this study, we investigate the development or reconstruction of higher spatial resolution imaging spectroscopy data via fusion of multi-temporal data sets to address the drawbacks implicit in low spatial resolution imagery.
The projected temporal resolution of the HyspIRI VSWIR instrument is 15 days, which implies that we have access to as many as six data sets for an area over the course of a growth season. Previous studies have shown that select vegetation structural parameters, e.g., leaf area index (LAI) and gross ecosystem production (GEP), are relatively constant in summer and winter for temperate forests; we therefore consider the data sets collected in summer to be from a similar, stable forest structure. The first step, prior to fusion, involves registration of the multi-temporal data. A data fusion algorithm then can be applied to the pre-processed data sets. The approach hinges on an algorithm that has been widely applied to fuse RGB images. Ideally, if we have four images of a scene which all meet the following requirements - i) they are captured with the same camera configurations; ii) the pixel size of each image is x; and iii) at least r² images are aligned on a grid of x/r - then a high-resolution image, with a pixel size of x/r, can be reconstructed from the multi-temporal set.
The algorithm was applied to data from NASA's classic Airborne Visible and Infrared Imaging Spectrometer (AVIRIS-C; GSD 18m), collected between 2013-2015 (summer and fall) over our study area (NEON's Southwest Pacific Domain; Fresno, CA) to generate higher spatial resolution imagery (GSD 9m). The reconstructed data set was validated via comparison to NEON's imaging spectrometer (NIS) data (GSD 1m). The results showed that the algorithm worked well with the AVIRIS-C data and could be applied to HyspIRI data.
Globally scalable generation of high-resolution land cover from multispectral imagery
Author(s):
S. Craig Stutts;
Benjamin L. Raskob;
Eric J. Wenger
We present an automated method of generating high resolution (~ 2 meter) land cover using a pattern recognition neural network trained on spatial and spectral features obtained from over 9000 WorldView multispectral images (MSI) in six distinct world regions. At this resolution, the network can classify small-scale objects such as individual buildings, roads, and irrigation ponds. This paper focuses on three key areas. First, we describe our land cover generation process, which involves the co-registration and aggregation of multiple spatially overlapping MSI, post-aggregation processing, and the registration of land cover to OpenStreetMap (OSM) road vectors using feature correspondence. Second, we discuss the generation of land cover derivative products and their impact in the areas of region reduction and object detection. Finally, we discuss the process of globally scaling land cover generation using cloud computing via Amazon Web Services (AWS).
Genetic algorithm for flood detection and evacuation route planning
Author(s):
Rahul Gomes;
Jeremy Straub
A genetic-type algorithm is presented that uses satellite geospatial data to determine the most probable path to safety for individuals in a disaster area, where a traditional routing system cannot be used. The algorithm uses geological features and disaster information to determine the shortest safe path. It predicts how a flood can change a landform over time and uses this data to predict alternate routes. It also predicts safe routes in rural locations where GPS/map-based routing data is unavailable or inaccurate. Reflectance and a supervised classification algorithm are used and the output is compared with RFPI and PCR-GLOBWB data.
Application of a neural network for reflectance spectrum classification
Author(s):
Gefei Yang;
Michael Gartley
Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultra-violet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNN) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional BRDF into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods that utilize spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, most popular neural networks such as VGG, GoogLeNet, and AlexNet are trained on RGB spatial image data. Our approach aims to build a neural network based on the directional reflectance spectrum to help us understand the classification problem from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.
Subsurface classification of objects under turbid waters by means of regularization techniques applied to real hyperspectral data
Author(s):
Emmanuel Carpena;
Luis O. Jiménez;
Emmanuel Arzuaga;
Sujeily Fonseca;
Ernesto Reyes;
Juan Figueroa
Show Abstract
Improved benthic habitat mapping is needed to monitor coral reefs around the world and to assist coastal zone management programs. A fundamental challenge to remotely sensed mapping of coastal shallow waters is the significant disparity in the optical properties of the water column caused by the interaction between the coast and the sea. The objects to be classified have weak signals that interact with turbid waters that include sediments. In real scenarios, the absorption and backscattering coefficients are unknown and have different sources of variability (river discharges and coastal interactions). Under normal circumstances, another unknown variable is the depth of shallow waters. This paper presents the development of algorithms for retrieving information and their application to the classification and mapping of objects under coastal shallow waters with different unknown concentrations of sediments. A mathematical model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium and the sensor. The retrieval of information requires the development of mathematical models and processing tools in the areas of inversion, image reconstruction and classification of hyperspectral data. The algorithms developed were applied to a set of real hyperspectral imagery taken in a tank filled with water and TiO2 that emulates turbid coastal shallow waters. The Tikhonov method of regularization was used in the inversion process to estimate the bottom albedo of the water tank, using a priori information in the form of previously measured, stored spectral signatures of objects of interest.
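As a minimal illustration of Tikhonov-regularized inversion with a priori information, the following Python sketch solves a small linear problem in closed form; the forward matrix, noise level, regularization weight, and prior are hypothetical stand-ins, not the water-column model or spectral library used by the authors.

    # Minimal sketch of Tikhonov-regularized inversion for a bottom-albedo
    # estimate; A, y, and rho0 are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 30))            # assumed linearized forward operator
    rho_true = rng.uniform(0, 1, size=30)    # "true" bottom albedo (demo only)
    y = A @ rho_true + 0.01 * rng.normal(size=50)   # noisy measurement
    rho0 = rho_true + 0.05 * rng.normal(size=30)    # a priori spectral signature

    lam = 0.1                                 # regularization weight (assumed)
    # Solve min ||A rho - y||^2 + lam * ||rho - rho0||^2 in closed form.
    lhs = A.T @ A + lam * np.eye(30)
    rhs = A.T @ y + lam * rho0
    rho_hat = np.linalg.solve(lhs, rhs)
    print(np.linalg.norm(rho_hat - rho_true))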
Improving the detection of cocoa bean fermentation-related changes using image fusion
Author(s):
Daniel Ochoa;
Ronald Criollo;
Wenzhi Liao;
Juan Cevallos-Cevallos;
Rodrigo Castro;
Oswaldo Bayona
Show Abstract
Complex chemical processes occur during cocoa bean fermentation. To select well-fermented beans, experts take a sample of beans, cut them in half and visually check their color. Farmers often mix high- and low-quality beans; therefore, chocolate properties are difficult to control. In this paper, we explore how close-range hyperspectral (HS) data can be used to characterize the fermentation process of two types of cocoa beans (CCN51 and National). Our aim is to find spectral differences that allow bean classification. The main issue is extracting reliable spectral data, as openings resulting from the loss of water during fermentation can cover up to 40% of the bean surface. We exploit HS pan-sharpening techniques to increase the spatial resolution of HS images and filter out uneven surface regions. In particular, we use the guided filter PCA approach, which has proved suitable for using high-resolution RGB data as the guide image. Our preliminary results show that this pre-processing step improves the separability of the classes corresponding to each fermentation stage compared to using the average spectrum of the bean surface.
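The sketch below illustrates only the guided-filter step that the guided filter PCA approach builds on, applied to one upsampled principal component with a high-resolution grayscale guide; the window radius, epsilon, and synthetic data are assumptions and do not reproduce the authors' pipeline.

    # Minimal sketch of a guided filter (He et al. style) applied to one
    # principal component of an HS image; parameters are hypothetical.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, radius=4, eps=1e-3):
        """Edge-preserving smoothing of src, steered by the guide image."""
        size = 2 * radius + 1
        mean_g = uniform_filter(guide, size)
        mean_s = uniform_filter(src, size)
        corr_gg = uniform_filter(guide * guide, size)
        corr_gs = uniform_filter(guide * src, size)
        var_g = corr_gg - mean_g * mean_g
        cov_gs = corr_gs - mean_g * mean_s
        a = cov_gs / (var_g + eps)
        b = mean_s - a * mean_g
        return uniform_filter(a, size) * guide + uniform_filter(b, size)

    guide = np.random.rand(128, 128)      # high-resolution RGB reduced to gray
    pc1 = np.random.rand(128, 128)        # upsampled first principal component
    sharpened = guided_filter(guide, pc1)
    print(sharpened.shape)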
A pigment analysis tool for hyperspectral images of cultural heritage artifacts
Author(s):
Di Bai;
David W. Messinger;
David Howell
Show Abstract
The Gough Map, in the collection at the Bodleian Library, Oxford University, is one of the earliest surviving maps of Britain. Previous research determined that it was likely created in the 15th century and was afterwards extensively revised more than once. In 2015, the Gough Map was imaged using a hyperspectral imaging system at the Bodleian Library. The collection of the hyperspectral image (HSI) data was aimed at enhancing faded text for reading and at pigment analysis to assess the material diversity of its composition and, potentially, the timeline of its creation. In this research, we introduce several methods to analyze the green pigments in the Gough Map, especially the number and spatial distribution of distinct green pigments. One approach, called the Gram Matrix, has been used to estimate the material diversity in a scene (i.e., endmember selection and dimensionality estimation). Here, we use the Gram Matrix technique to study the within-material differences of pigments in the Gough Map that share a common visual color. We develop a pigment analysis tool that extracts visually common pixels, green pigments in this case, from the Gough Map and estimates their material diversity. It reveals that the Gough Map contains at least six kinds of dominant green pigments. Both historical geographers and cartographic historians will benefit from this work to analyze pigment diversity using HSI of cultural heritage artifacts.
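A minimal Python sketch of the Gram Matrix diversity idea: compute pairwise inner products of visually similar spectra and count the eigenvalues that carry non-negligible energy; the synthetic spectra and the 1% energy threshold are assumptions for illustration, not the authors' settings.

    # Minimal sketch of a Gram-matrix diversity estimate for a set of
    # visually similar (e.g., green) pixel spectra; data are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    pixels = rng.random((200, 120))               # 200 green pixels x 120 bands
    pixels /= np.linalg.norm(pixels, axis=1, keepdims=True)   # unit-normalize

    gram = pixels @ pixels.T                      # pairwise inner products
    eigvals = np.linalg.eigvalsh(gram)[::-1]      # sorted, largest first

    # Count eigenvalues that carry a non-negligible share of the energy as a
    # rough estimate of how many distinct pigments are present.
    share = eigvals / eigvals.sum()
    n_materials = int(np.sum(share > 0.01))
    print("estimated number of distinct pigments:", n_materials)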
Image denoising and deblurring using multispectral data
Author(s):
E. A. Semenishchev;
V. V. Voronin;
V. I. Marchuk
Show Abstract
Decision-making systems are currently becoming widespread. These systems are based on the analysis of video sequences as well as additional data, such as object volume, changes in size, the behavior of a single object or a group of objects, temperature gradients, the presence of local areas with strong differences, and others. Security and control systems are the main areas of application. Noise in the images strongly influences subsequent processing and decision making. This paper considers the problem of primary signal processing for solving the tasks of image denoising and deblurring of multispectral data. The additional information from multispectral channels can improve the efficiency of object classification. In this paper we use a method that combines information about the objects obtained by cameras in different frequency bands. We apply a method based on the simultaneous minimization of an L2 data term and the squared first-order differences of the sequence of estimates to denoise the image and restore blur on the edges. In case of information loss, an approach based on interpolation is applied, using data taken from the analysis of objects located in other areas and information obtained from the multispectral camera. The effectiveness of the proposed approach is shown on a set of test images.
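As a minimal illustration of jointly minimizing an L2 data term and squared first-order differences, the sketch below denoises a synthetic 1-D signal with a closed-form solve; the smoothing weight and signal are hypothetical, and the authors' multispectral fusion step is not reproduced.

    # Minimal sketch of denoising by minimizing ||x - y||^2 + lam * ||D x||^2,
    # where D is the first-order difference operator; data are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    clean = np.sin(np.linspace(0, 4 * np.pi, 200))
    noisy = clean + 0.2 * rng.normal(size=200)

    lam = 5.0                                      # assumed smoothing weight
    n = len(noisy)
    D = np.diff(np.eye(n), axis=0)                 # first-order difference operator
    # Closed-form minimizer of the combined objective.
    x_hat = np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)
    print(np.linalg.norm(x_hat - clean), np.linalg.norm(noisy - clean))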
Dimensionality reduction using superpixel segmentation for hyperspectral unmixing using the cNMF
Author(s):
Jiarui Yi;
Miguel Velez-Reyes
Show Abstract
This paper presents an approach to reduce dimensionality for hyperspectral unmixing using superpixel segmentation. The dimensionality reduction is achieved by over-segmenting the hyperspectral image into superpixels that are used as a reduced subset of representative pixels for the full hyperspectral image. Once superpixels are extracted, endmember extraction methods are applied to the reduced spectral data set with clear computational advantages. The proposed method is illustrated on an AVIRIS image captured over Fort AP Hill, Virginia. A comparison of the method with standard unmixing techniques is also included.
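The following Python sketch illustrates the superpixel reduction idea: over-segment the cube with SLIC (assuming a recent scikit-image) and keep one mean spectrum per superpixel as the reduced set passed to endmember extraction; the segment count, compactness, and synthetic cube are illustrative assumptions.

    # Minimal sketch of superpixel-based dimensionality reduction for unmixing;
    # the data and parameters below are hypothetical.
    import numpy as np
    from skimage.segmentation import slic

    cube = np.random.rand(100, 100, 50)            # rows x cols x bands (synthetic)
    labels = slic(cube, n_segments=500, compactness=0.1, channel_axis=-1)

    # One representative spectrum per superpixel (the mean over its pixels).
    reps = np.array([cube[labels == k].mean(axis=0) for k in np.unique(labels)])
    print(reps.shape)    # roughly (500, 50): the reduced set for endmember extraction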
Ensemble learning and model averaging for material identification in hyperspectral imagery
Author(s):
William F. Basener
Show Abstract
In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally by comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging.
Using probabilities has the advantage that they can be summed over the spectra of any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of all probabilities for fabric spectra in the library. We can do the same to determine the probability for a specific type of fabric, or any level of specificity contained in our library. Probabilities not only tell us which material is most likely, they also tell us how confident we can be in the material's presence; a probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know whether the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm.
In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
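A minimal Python sketch of the class-probability idea described above: Gaussian likelihoods for each library spectrum are normalized into posteriors and then summed over the spectra belonging to a class; the three-spectrum library, class labels, and noise level are hypothetical, not the paper's identification library.

    # Minimal sketch of per-material posteriors and class-level probabilities;
    # the library, class assignments, and noise model are placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    library = {"cotton": rng.random(159), "nylon": rng.random(159),
               "grass": rng.random(159)}
    classes = {"cotton": "fabric", "nylon": "fabric", "grass": "vegetation"}
    sigma = 0.05                                    # assumed per-band noise std

    observed = library["cotton"] + sigma * rng.normal(size=159)

    # Gaussian log-likelihood of the observed spectrum for each library entry.
    log_like = {name: -0.5 * np.sum(((observed - mu) / sigma) ** 2)
                for name, mu in library.items()}
    m = max(log_like.values())
    post = {name: np.exp(v - m) for name, v in log_like.items()}
    total = sum(post.values())
    post = {name: p / total for name, p in post.items()}   # material posteriors

    # Class probability = sum of posteriors of its member spectra.
    fabric_prob = sum(p for name, p in post.items() if classes[name] == "fabric")
    print(post, fabric_prob)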