Proceedings Volume 8743

Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIX

Sylvia S. Shen, Paul E. Lewis
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 14 June 2013
Contents: 16 Sessions, 71 Papers, 0 Presentations
Conference: SPIE Defense, Security, and Sensing 2013
Volume Number: 8743

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8743
  • Detection, Identification, and Quantification I
  • Spectral Methodologies and Applications I
  • Spectral Methodologies and Applications II
  • Spectral Data Collections and Experiments
  • Spectral Data Analysis Methodologies I
  • Multisensor Data Fusion
  • Spectral Data Analysis Methodologies II
  • Spectral Methodologies and Applications III
  • Detection, Identification, and Quantification II
  • Spectral Sensor Development and Characterization
  • Spectral Data Analysis Methodologies III
  • Spectral Signature Measurements and Applications
  • Spectral Data Enhancement Technologies and Techniques
  • Clustering and Classification
  • Poster Session
Front Matter: Volume 8743
Front Matter: Volume 8743
This PDF file contains the front matter associated with SPIE Proceedings Volume 8743, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Detection, Identification, and Quantification I
The remarkable success of adaptive cosine estimator in hyperspectral target detection
D. Manolakis, M. Pieper, E. Truslow, et al.
A challenging problem of major importance in hyperspectral imaging applications is the detection of subpixel targets of military and civilian interest. The background clutter surrounding the target acts as an interference source that simultaneously distorts the target spectrum and reduces its strength. Two additional limiting factors are the spectral variability of the background clutter and the spectral variability of the target. Since a result in applied statistics is only as reliable as the assumptions from which it is derived, it is important to investigate whether the basic assumptions used to derive the matched filter and adaptive cosine estimator algorithms are a reasonable description of the physical situation. Careful examination of the linear signal model used to derive these algorithms and of the replacement signal model, which is more realistic for subpixel targets, reveals a serious discrepancy between the modeling assumptions and the physical world. Despite this discrepancy, and additional mismatches between assumed and actual signal and clutter models, the adaptive cosine estimator remains remarkably effective in practical target detection applications. The objective of this paper is to explain this surprising effectiveness using a combination of classical statistical detection theory, geometric interpretations, and a novel, realistic performance prediction model for the adaptive cosine estimator.
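As a concrete reference for the statistic discussed above, the adaptive cosine estimator can be sketched in a few lines of numpy. This is a simplified illustration with invented names, not the authors' implementation:

```python
import numpy as np

def ace(pixels, target, mean, cov):
    """Adaptive cosine estimator: squared cosine of the angle between
    each demeaned, whitened pixel and the whitened target signature."""
    icov = np.linalg.inv(cov)
    s = target - mean                  # demeaned target signature
    X = pixels - mean                  # demeaned pixels, shape (N, bands)
    num = (X @ icov @ s) ** 2
    den = (s @ icov @ s) * np.einsum('ij,jk,ik->i', X, icov, X)
    return num / den                   # in [0, 1]; 1 = perfect match
```

Because the score is the squared cosine of an angle in whitened space, it is invariant to the pixel's overall strength, which is one reason for its robustness to subpixel fill factor.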
A hyperspectral anomaly detector based on partitioning pixel into adjacent components
Detection of anomalous objects in a large scene is an important application of hyperspectral imaging in remote sensing. Current algorithms for anomaly detection are based on partialling out the main background structure from each spectral component of a pixel from a hyperspectral image. The Maximized Subspace Model (MSM) detector has the best probability of detection in comparison with the other anomaly detectors that are based on this model. This paper proposes an anomaly detection algorithm that is based on a more general model than the MSM detector. The anomaly detector is also defined as the Mahalanobis distance of the resulting residual. Experimental results show that the anomaly detector has a substantial improvement in detection over the conventional anomaly detectors.
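The Mahalanobis-distance anomaly score shared by the detectors discussed above can be sketched as a global RX-style detector. This is a minimal illustration with invented names; the paper's detector scores residuals after partialling out a background subspace, which this sketch omits:

```python
import numpy as np

def rx_anomaly(cube, eps=1e-6):
    """Global RX / Mahalanobis anomaly score: squared distance of each
    pixel from the scene mean under the background covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + eps * np.eye(b)  # regularized covariance
    icov = np.linalg.inv(cov)
    R = X - mu
    return np.einsum('ij,jk,ik->i', R, icov, R).reshape(h, w)
```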
False alarm mitigation techniques for hyperspectral target detection
M. L. Pieper, D. Manolakis, E. Truslow, et al.
A challenging problem of major importance in hyperspectral imaging applications is the detection of subpixel objects of military and civilian interest. Detecting subpixel objects against the large amount of surrounding background clutter requires low detection thresholds, and the resulting high false alarm rates are unacceptable for military purposes, motivating false alarm mitigation (FAM) techniques that separate the objects of interest from false alarms. The objective of this paper is to provide a comparison of the implementation of these FAM techniques and their inherent benefits in the whitened detection space. The widely utilized matched filter (MF) and adaptive cosine estimator (ACE) are both based on a linear mixing model (LMM) between a background class and an object class. The matched filter approximates the object abundance, and ACE measures the model error. Each of these measurements alone provides inadequate object separation, but by using both the object abundance and the model error, the objects can be separated from the false alarms.
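A hedged sketch of the two whitened-space measurements the abstract pairs, matched-filter abundance and ACE model fit, might look as follows (illustrative names, not the authors' code):

```python
import numpy as np

def mf_ace_features(pixels, target, mean, cov):
    """Per-pixel feature pair in whitened space: matched-filter output
    (approximate target abundance) and ACE squared-cosine (model fit)."""
    L = np.linalg.cholesky(np.linalg.inv(cov))  # whitening transform
    s = L.T @ (target - mean)                   # whitened target
    X = (pixels - mean) @ L                     # whitened pixels (rows)
    proj = X @ s
    mf = proj / (s @ s)                         # abundance estimate
    ace = proj ** 2 / ((s @ s) * np.sum(X * X, axis=1))
    return mf, ace
```

Thresholding jointly on both coordinates (rather than either alone) is the separation idea the abstract describes.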
Image change detection via ensemble learning
The concept of geographic change detection is relevant in many areas. Changes in geography can reveal much information about a particular location. For example, analysis of changes in geography can identify regions of population growth, change in land use, and potential environmental disturbance. A common way to perform change detection is to use a simple method such as differencing to detect regions of change. Though these techniques are simple, their applicability is often very limited. Recently, use of machine learning methods such as neural networks for change detection has been explored with great success. In this work, we explore the use of ensemble learning methodologies for detecting changes in bitemporal synthetic aperture radar (SAR) images. Ensemble learning uses a collection of weak machine learning classifiers to create a stronger classifier which has higher accuracy than the individual classifiers in the ensemble. The strength of the ensemble lies in the fact that the individual classifiers form a "mixture of experts" in which the final classification made by the ensemble classifier is calculated from the outputs of the individual classifiers. Our methodology leverages this aspect of ensemble learning by training collections of weak decision-tree-based classifiers to identify regions of change in SAR images collected over the Staten Island, New York area during Hurricane Sandy. Preliminary studies show that the ensemble method has approximately 11.5% higher change detection accuracy than an individual classifier.
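As a minimal stand-in for the decision-tree ensembles described above, bootstrap-aggregated threshold "stumps" with a majority vote can be sketched as follows (illustrative only; the paper's weak learners are decision trees):

```python
import numpy as np

def train_stumps(X, y, n_stumps=25, seed=0):
    """Bagged threshold 'stumps': each weak classifier is trained on a
    bootstrap sample and votes change / no-change on one feature."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    stumps = []
    for _ in range(n_stumps):
        idx = rng.integers(0, n, n)          # bootstrap sample
        f = int(rng.integers(0, d))          # random feature
        thr = float(np.median(X[idx, f]))
        # orient the stump so it agrees with the labels on its sample
        sign = 1 if ((X[idx, f] > thr) == y[idx]).mean() >= 0.5 else -1
        stumps.append((f, thr, sign))
    return stumps

def ensemble_predict(stumps, X):
    """Majority vote over all weak classifiers."""
    votes = np.zeros(len(X))
    for f, thr, sign in stumps:
        p = X[:, f] > thr
        votes += p if sign == 1 else ~p
    return votes > len(stumps) / 2
```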
LWIR hyperspectral change detection for target acquisition and situation awareness in urban areas
Rob J. Dekker, Piet B. W. Schwering, Koen W. Benoist, et al.
This paper studies change detection of LWIR (Long Wave Infrared) hyperspectral imagery. The goal is to improve target acquisition and situation awareness in urban areas with respect to conventional techniques. Hyperspectral and conventional broadband high-spatial-resolution data were collected during the DUCAS trials in Zeebrugge, Belgium, in June 2011. LWIR data were acquired using the ITRES Thermal Airborne Spectrographic Imager TASI-600 that operates in the spectral range of 8.0-11.5 μm (32-band configuration). Broadband data were acquired using two aeroplane-mounted FLIR SC7000 MWIR cameras. Acquisition of the images was around noon. To limit the number of false alarms due to atmospheric changes, the time interval between the images is less than 2 hours. Local co-registration adjustment was applied to compensate for misregistration errors in the order of a few pixels. The targets in the data that will be analysed in this paper are different kinds of vehicles. Change detection algorithms that were applied and evaluated are Euclidean distance, Mahalanobis distance, Chronochrome (CC), Covariance Equalisation (CE), and Hyperbolic Anomalous Change Detection (HACD). Based on Receiver Operating Characteristics (ROC) we conclude that LWIR hyperspectral has an advantage over MWIR broadband change detection. The best hyperspectral detector is HACD because it is most robust to noise. The high-spatial-resolution MWIR broadband results show that it helps to apply a false alarm reduction strategy based on spatial processing.
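Of the detectors evaluated above, Chronochrome is compact enough to sketch: predict the time-2 spectrum from the time-1 spectrum by least squares and score each pixel by its prediction residual. This is an illustrative version with invented names, not the authors' implementation:

```python
import numpy as np

def chronochrome_score(x1, x2):
    """Chronochrome change score for co-registered pixel lists x1, x2 of
    shape (N, bands): fit the linear predictor x2_hat = L @ x1 over the
    whole scene, then score each pixel by its squared residual."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    X1, X2 = x1 - m1, x2 - m2
    C11 = X1.T @ X1 / len(X1)              # time-1 covariance
    C21 = X2.T @ X1 / len(X1)              # cross-covariance
    L = C21 @ np.linalg.inv(C11)           # least-squares predictor
    resid = X2 - X1 @ L.T
    return np.sum(resid ** 2, axis=1)
```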
Spectral Methodologies and Applications I
Hyperspectral imaging of the crime scene for detection and identification of blood stains
Blood stains are an important source of information in forensic investigations. Extraction of DNA may lead to the identification of victims or suspects, while the blood stain pattern may reveal useful information for the reconstruction of a crime. Consequently, techniques for the detection and identification of blood stains are ideally non-destructive in order not to hamper both DNA and the blood stain pattern analysis. Currently, forensic investigators mainly detect and identify blood stains using chemical or optical methods, which are often either destructive or subject to human interpretation.

We demonstrated the feasibility of hyperspectral imaging of the crime scene to detect and identify blood stains remotely. Blood stains outside the human body comprise the main chromophores oxy-hemoglobin, methemoglobin and hemichrome. Consequently, the reflectance spectra of blood stains are influenced by the composite of the optical properties of the individual chromophores and the substrate. Using the coefficient of determination between a non-linear least squares multi-component fit and the measured spectra, blood stains were successfully distinguished from other substances visually resembling blood (e.g. ketchup, red wine and lipstick) with a sensitivity of 100% and a specificity of 85%. The practical applicability of this technique was demonstrated at a mock crime scene, where blood stains were successfully identified automatically.
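The goodness-of-fit test described above can be illustrated with a linear least-squares stand-in (the paper uses a non-linear multi-component fit that also models the substrate; all names here are invented):

```python
import numpy as np

def fit_r2(measured, components):
    """Least-squares multi-component fit of a measured reflectance
    spectrum and its coefficient of determination R^2.
    components: shape (n_components, bands) of chromophore spectra."""
    A = components.T
    coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
    fit = A @ coef
    ss_res = np.sum((measured - fit) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Thresholding R^2 is then the decision rule: spectra well explained by the chromophore set are accepted as blood, look-alikes are rejected.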
Spectral Methodologies and Applications II
Undercomplete learned dictionaries for land cover classification in multispectral imagery of Arctic landscapes using CoSA: clustering of sparse approximations
Daniela I. Moody, Steven P. Brumby, Joel C. Rowland, et al.
Techniques for automated feature extraction, including neuroscience-inspired machine vision, are of great interest for landscape characterization and change detection in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methodologies to the environmental sciences, using state-of-the-art adaptive signal processing, combined with compressive sensing and machine learning techniques. We use a Hebbian learning rule to build undercomplete spectral-textural dictionaries that are adapted to the data. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using our CoSA algorithm: unsupervised Clustering of Sparse Approximations. We demonstrate our method using multispectral Worldview-2 data from three Arctic study areas: Barrow, Alaska; the Selawik River, Alaska; and a watershed near the Mackenzie River delta in northwest Canada. Our goal is to develop a robust classification methodology that will allow for the automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and geomorphic characteristics. To interpret and assign land cover categories to the clusters we both evaluate the spectral properties of the clusters and compare the clusters to both field- and remote sensing-derived classifications of landscape attributes. Our work suggests that neuroscience-based models are a promising approach to practical pattern recognition problems in remote sensing.
Security inspection in ports by anomaly detection using hyperspectral imaging technology
Javier Rivera, Fernando Valverde, Manuel Saldaña, et al.
Applying hyperspectral imaging technology in port security is crucial for the detection of possible threats or illegal activities. One of the most common problems that cargo suffers is tampering. This represents a danger to society because it creates a channel to smuggle illegal and hazardous products. If a cargo is altered, security inspections on that cargo should contain anomalies that reveal the nature of the tampering. Hyperspectral images can detect anomalies by gathering information through multiple electromagnetic bands. The spectra extracted from these bands can be used to detect surface anomalies from different materials. Based on this technology, a scenario was built in which a hyperspectral camera was used to inspect the cargo for any surface anomalies and a user interface shows the results. The spectrum of items, altered by different materials that can be used to conceal illegal products, is analyzed and classified in order to provide information about the tampered cargo. The image is analyzed with a variety of techniques such as multiple feature extraction algorithms, autonomous anomaly detection, and target spectrum detection. The results will be exported to a workstation or mobile device in order to show them in an easy-to-use interface. This process could enhance the current capabilities of security systems that are already implemented, providing a more complete approach to detect threats and illegal cargo.
Extending continuum fusion to create unbeatable detectors
We develop an extension of continuum fusion methods that allows the generation of unbeatable decision rules for discrete binary composite hypothesis testing problems. Background: Among the many flavors of continuum fusion (CF) algorithms, one can always be found that will produce the uniformly most powerful (UMP) solution to any composite hypothesis (CH) testing problem, when such a solution exists [1]. This optimality property, combined with the flexibility in design afforded by CF principles, led to the prospect that with any reasonably defined optimality metric, any detection problem could be solved with some CF-based decision rule (DR). Doubt was cast on this possibility in a paper by Theiler [2], who showed that applying continuum fusion logical rules to a particular discrete (as opposed to continuum) problem could not produce the best algorithm. Theiler’s example requires creation of a CH test, and for these no generally optimal form exists. However, Theiler’s problem also obeyed an invariance principle, and if solutions are restricted to obey the same invariance, then a uniformly most powerful invariant (UMPI) solution does exist. This solution cannot be generated by applying current CF principles to this discrete parameter problem. In short, standard CF logic cannot produce a highly desirable answer. The UMPI solution exemplifies Bayesian solutions to discrete parameter CH problems, and it is shown below why standard CF solutions cannot always produce them, in agreement with Theiler’s result. Bayesian solutions feature prominently in statistical decision theory, because they form the class of unbeatable decision rules, as defined below. Thus, standard CF principles cannot produce an important class of solutions to discrete CH problems. Here we extend the CF methodology in a way that converts any discrete parameter fusion problem into a continuous one. Continuum fusion solutions to the converted problem then generate the entire class of unbeatable detectors.
A multistage framework for dismount spectral verification in the VNIR
A multistage algorithm suite is proposed for a specific target detection/verification scenario, where a visible/near infrared hyperspectral (HS) sample is assumed to be available as the only cue from a reference image frame. The target is a suspicious dismount. The suite first applies a biometric-based human skin detector to focus the attention of the search. Using as reference all of the bands in the spectral cue, the suite follows with a Bayesian Lasso inference stage designed to isolate pixels representing the specific material type cued by the user and worn by the human target (e.g., hat, jacket). In essence, the search focuses on testing material types near skin pixels. The third stage imposes an additional constraint through RGB color quantization and distance metric checking, limiting even further the search to material types in the scene having visible color similar to the target visible color. The proposed cumulative-evidence strategy produced encouraging range-invariant results on real HS imagery, dramatically reducing the false alarm rate to zero on the example dataset. These results were in contrast to the results independently produced by each one of the suite’s stages, as the spatial areas of each stage’s high false alarm outcome were mutually exclusive in the imagery. These conclusions also apply to results produced by other standard methods, in particular the kernel SVDD (support vector data description) and matched filter, as shown in the paper.
Spectral Data Collections and Experiments
The SHARE 2012 data campaign
A multi-modal (hyperspectral, multispectral, and LIDAR) imaging data collection campaign was conducted just south of Rochester, New York, in Avon, NY, on September 20, 2012 by the Rochester Institute of Technology (RIT) in conjunction with SpecTIR, LLC, the Air Force Research Lab (AFRL), the Naval Research Lab (NRL), United Technologies Aerospace Systems (UTAS), and MITRE. The campaign was a follow-on to the SpecTIR Hyperspectral Airborne Rochester Experiment (SHARE) from 2010. Data were collected in support of the eleven simultaneous experiments described here. The airborne imagery was collected over four different sites with hyperspectral, multispectral, and LIDAR sensors: Avon, NY; Conesus Lake; Hemlock Lake and forest; and a nearby quarry. Experiments included topics such as target unmixing, subpixel detection, material identification, impacts of illumination on materials, forest health, and in-water target detection. An extensive ground-truthing effort was conducted in addition to the collection of the airborne imagery. The ultimate goal of the data collection campaign is to provide the remote sensing community with a shareable resource to support future research. This paper details the experiments conducted and the data collected during this campaign.
SHARE 2012: large edge targets for hyperspectral imaging applications
Kelly Canham, Daniel Goldberg, John Kerekes, et al.
Spectral unmixing is a type of hyperspectral imagery (HSI) sub-pixel analysis where the constituent spectra and abundances within the pixel are identified. However, validating the results obtained from spectral unmixing is very difficult due to a lack of real-world data and ground-truth information associated with these real-world images. Real HSI data is preferred for validating spectral unmixing, but when there is no HSI truth-data available, validation of spectral unmixing algorithms relies on user-defined synthetic images which can be generated to exploit the benefits (or hide the flaws) of new unmixing approaches. Here we introduce a new dataset (SHARE 2012: large edge targets) for the validation of spectral unmixing algorithms. The SHARE 2012 large edge targets are uniform 9 m by 9 m square regions of a single material (grass, sand, black felt, or white Tyvek). The spectral profile and the GPS coordinates of the corners of the materials were recorded so that the heading of the edge separating any two materials can be determined from the imagery. An estimate for the abundance of two neighboring materials along a common edge can be calculated geometrically by identifying the edge, which spans multiple pixels. These geometrically calculated abundances can then be used as validation of spectral unmixing algorithms. The size, shape, and spectral profiles of these targets also make them useful for radiometric calibration, atmospheric adjacency effects, and sensor MTF calculations. The imagery and ground-truth information are presented here.
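The geometric abundance of an edge pixel can be estimated, for example, by supersampling the pixel against the GPS-derived edge line. This is an illustrative approximation with invented names, not necessarily the geometric calculation used by the authors:

```python
import numpy as np

def edge_abundance(p, q, pixel_origin, pixel_size=1.0, n=64):
    """Fraction of a square pixel lying on the left side of the edge
    line p -> q, estimated by supersampling the pixel area with an
    n x n grid of sample points."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    u = (np.arange(n) + 0.5) / n                     # cell-center offsets
    gx, gy = np.meshgrid(u, u)
    pts = np.asarray(pixel_origin, float) + pixel_size * np.stack(
        [gx.ravel(), gy.ravel()], axis=1)
    e = q - p
    # sign of the 2D cross product tells which side of the line a point is on
    d = e[0] * (pts[:, 1] - p[1]) - e[1] * (pts[:, 0] - p[0])
    return float(np.mean(d > 0))
```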
SHARE 2012: subpixel detection and unmixing experiments
John P. Kerekes, Kyle Ludgate, AnneMarie Giannandrea, et al.
The quantitative evaluation of algorithms applied to remotely sensed hyperspectral imagery requires data sets with known ground truth. A recent data collection known as SHARE 2012, conducted by scientists in the Digital Imaging and Remote Sensing Laboratory at the Rochester Institute of Technology together with several outside collaborators, acquired hyperspectral data with this goal in mind. Several experiments were designed, deployed, and ground truth collected to support algorithm evaluation. In this paper, we describe two experiments that addressed the particular needs for the evaluation of subpixel detection and unmixing algorithms. The subpixel detection experiment involved the deployment of dozens of nearly identical subpixel targets in a random spatial array. The subpixel targets were pieces of wood painted either green or yellow. They were sized to occupy about 5% to 20% of the 1 m pixels. The unmixing experiment used novel targets with prescribed fractions of different materials based on a geometric arrangement of subpixel patterns. These targets were made up of different fabrics with various colors. Whole-pixel swatches of the same materials were also deployed in the scene to provide in-scene endmembers. Alternatively, researchers can use the unmixing targets alone to derive endmembers from the mixed pixels. Field reflectance spectra were collected for all targets and adjacent background areas. While efforts are just now underway to evaluate the detection performance using the subpixel targets, initial results for the unmixing targets have demonstrated retrieved fractions that are close approximations to the geometric fractions. These data, together with the ground truth, are planned to be made available to the remote sensing research community for evaluation and development of detection and unmixing algorithms.
SHARE 2012: analysis of illumination differences on targets in hyperspectral imagery
This paper looks at a new data set that has been designed to analyze the various impacts of illumination change on targets. Similar targets were placed on different backgrounds, and their spectral signatures were analyzed to determine the impact of background type. Targets were also placed next to tree lines where they were fully illuminated but potentially subject to tree shine. Hyperspectral, multispectral, and LiDAR modalities were used to image the targets in the above-mentioned scenarios. Target detection results are used to assess the difficulty of finding such targets and the impact of illumination.
Spectral Data Analysis Methodologies I
Detection and tracking of gas plumes in LWIR hyperspectral video sequence data
Torin Gerhart, Justin Sunu, Lauren Lieu, et al.
Automated detection of chemical plumes presents a segmentation challenge. The segmentation problem for gas plumes is difficult due to the diffusive nature of the cloud. The advantage of considering hyperspectral images in the gas plume detection problem over the conventional RGB imagery is the presence of non-visual data, allowing for a richer representation of information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Components Analysis (PCA) is used to reduce the dimension of the entire video sequence. This is done by projecting each pixel onto the first few Principal Components resulting in a type of spectral filter. Next, a Midway method for histogram equalization is used. These methods redistribute the intensity values in order to reduce flicker between frames. This properly prepares these high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume. These include K-means, spectral clustering, and the Ginzburg-Landau functional.
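The PCA spectral-filter step described above amounts to projecting every pixel spectrum onto the leading right singular vectors of the centered data matrix; a minimal sketch with invented names:

```python
import numpy as np

def pca_filter(frames, k=3):
    """Project every pixel spectrum of a hyperspectral video sequence
    onto the first k principal components (a learned spectral filter).
    frames: shape (T, H, W, bands)."""
    t, h, w, b = frames.shape
    X = frames.reshape(-1, b)
    Xc = X - X.mean(axis=0)
    # rows of Vt are the principal axes of the spectral covariance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:k].T).reshape(t, h, w, k)
```

Fitting the components on the whole sequence (rather than frame by frame) keeps the projection consistent over time, which is what makes the subsequent histogram equalization and segmentation meaningful.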
Geometrical interpretation of the adaptive coherence estimator for hyperspectral target detection
Shahar Bar, Ori Bass, Alon Volfman, et al.
A hyperspectral cube consists of a set of images taken at numerous wavelengths. Hyperspectral image data analysis uses each material’s distinctive patterns of reflection, absorption and emission of electromagnetic energy at specific wavelengths for classification or detection tasks. Because of the size of the hyperspectral cube, data reduction is definitely advantageous; when doing this, one wishes to maintain high performance with the smallest number of bands. Obviously in such a case, the choice of the bands will be critical. In this paper, we will consider one particular algorithm, the adaptive coherence estimator (ACE), for the detection of point targets. We give a quantitative interpretation of the dependence of the algorithm on the number and identity of the bands that have been chosen. Results on simulated data will be presented.
A novel automated object identification approach using key spectral components
Bart Kahler, Todd Noble
Spectral remote sensing provides solutions to a wide range of commercial, civil, agricultural, atmospheric, security, and defense problems. Technological advances have expanded multispectral (MSI) and hyperspectral (HSI) sensing capabilities from airborne and spaceborne sensors. The greater spectral and spatial sensitivity has vastly increased the available content for analysis. The amount of information in the data cubes obtained from today’s sensors enables material identification via complex processing techniques. With sufficient sensor resolution, multiple pixels on target are obtained, and by exploiting the key spectral features of a material signature among a group of target pixels and associating the features with neighboring pixels, object identification is possible. The authors propose a novel automated approach to object classification with HSI data by focusing on the key components of an HSI signature and the relevant areas of the spectrum (bands) of surrounding pixels to identify an object. The proposed technique may be applied to spectral data from any region of the spectrum to provide object identification. The effort will focus on HSI data from the visible, near-infrared and short-wave infrared to prove the algorithm concept.
Target detection using the background model from the topological anomaly detection algorithm
The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is an algorithm based on graph theory that constructs a topological model of the background in a scene, and computes an anomalousness ranking for all of the pixels in the image with respect to the background in order to identify pixels with uncommon or strange spectral signatures. The pixels that are modeled as background are clustered into groups or connected components, which could be representative of spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD in target detection is explored in this paper. These connected components are characterized in three different approaches, where the mean signature and endmembers for each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and Adaptive Subspace Detector (ASD). Likewise, the covariance matrix of those connected components is estimated and used in the Constrained Energy Minimization (CEM) and Adaptive Coherence Estimator (ACE) detectors. The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind test target detection project are shown.
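Of the detectors listed, OSP is the most direct use of the background basis vectors. A minimal sketch, assuming the TAD component means or endmembers have already been collected into a matrix of column vectors (names are illustrative):

```python
import numpy as np

def osp_detector(pixels, target, background):
    """Orthogonal subspace projection: null out the background component
    signatures, then correlate the residual with the target signature.
    background: shape (bands, n_components) of basis vectors."""
    P = np.eye(background.shape[0]) - background @ np.linalg.pinv(background)
    return pixels @ P @ target            # one score per pixel
```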
Low-rank decomposition-based anomaly detection
Shih-Yu Chen, Shiming Yang, Konstantinos Kalpakis, et al.
With high spectral resolution, hyperspectral imaging is capable of uncovering many subtle signal sources which cannot be known a priori or visually inspected. Such signal sources generally appear as anomalies in the data. Due to high correlation among spectral bands and the sparsity of anomalies, a hyperspectral image can be decomposed into two subspaces: a background subspace specified by a matrix with low rank dimensionality and an anomaly subspace specified by a sparse matrix with high rank dimensionality. This paper develops an approach to finding such a low-high rank decomposition to identify the anomaly subspace. Its idea is to formulate a convex constrained optimization problem that minimizes the nuclear norm of the background subspace and the ℓ1 norm of the anomaly subspace subject to a decomposition of the data space into background and anomaly subspaces. By virtue of such a background-anomaly decomposition, the commonly used RX detector can be implemented in the sense that anomalies can be separated in the anomaly subspace specified by a sparse matrix. Experimental results demonstrate that the background-anomaly subspace decomposition can actually improve and enhance the performance of the RX detector.
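One simple, generic way to compute such a low-rank-plus-sparse split is to alternate singular-value thresholding for the background with soft thresholding for the anomalies. This sketch is a stand-in to make the decomposition concrete, not necessarily the convex optimization scheme used in the paper (parameters and names are invented):

```python
import numpy as np

def lowrank_sparse(M, tau=5.0, lam=0.5, n_iter=25):
    """Alternating proximal steps for M ~ L + S: singular-value
    thresholding (nuclear-norm prox) gives the low-rank background L,
    soft thresholding (l1 prox) gives the sparse anomaly matrix S."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sv - tau, 0.0)) @ Vt           # SVT step
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # soft threshold
    return L, S
```

Anomaly scoring (e.g., RX) can then be run on S alone, where the background has been stripped away.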
Improved target recognition with live atmospheric correction
Hyperspectral airborne sensing systems frequently employ spectral signature databases to detect materials. To achieve high detection and low false alarm rates, it is critical to retrieve accurate reflectance values from the camera’s digital number (dn) output. A one-time camera calibration converts dn values to reflectance. However, changes in solar angle and atmospheric conditions distort the reflected energy, reducing detection performance of the system.

Changes in solar angle and atmospheric conditions introduce both additive (offset) and multiplicative (gain) effects for each waveband. A gain and offset correction can mitigate these effects. Correction methods based on radiative transfer models require equipment to measure solar angle and atmospheric conditions. Other methods use known reference materials in the scene to calculate the correction, but require an operator to identify the location of these materials. Our unmanned airborne vehicle application can neither use additional equipment nor require operator intervention. Applicable automated correction approaches typically analyze gross scene statistics to find the gain and offset values. Airborne hyperspectral systems have high ground resolution but limited fields of view, so an individual frame does not include all the variation necessary to accurately calculate global statistics.

In this work we present a novel approach to automatically estimating atmospheric and solar effects from the hyperspectral data. Our approach is based on Hough transform matching of background spectral signatures with materials extracted from the scene. Scene materials are identified with low-complexity agglomerative clustering. Detection results with data gathered from recent field tests are shown.
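For contrast with the automated approach above, the per-band gain/offset model itself can be made concrete with an empirical-line-style fit against known in-scene references, i.e., exactly the operator-dependent method the authors avoid (names invented):

```python
import numpy as np

def gain_offset(dn_refs, refl_refs):
    """Per-band gain/offset via least squares: fit known reference
    reflectances against their observed digital numbers.
    dn_refs, refl_refs: shape (n_refs, bands). Returns per-band
    (gains, offsets) so that reflectance ~ gains * dn + offsets."""
    n, b = dn_refs.shape
    gains, offsets = np.empty(b), np.empty(b)
    for k in range(b):
        A = np.stack([dn_refs[:, k], np.ones(n)], axis=1)
        (g, o), *_ = np.linalg.lstsq(A, refl_refs[:, k], rcond=None)
        gains[k], offsets[k] = g, o
    return gains, offsets
```

Applying `gains * dn + offsets` to every pixel then converts the whole frame to approximate reflectance.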
Multisensor Data Fusion
Multimodal detection of man-made objects in simulated aerial images
Matthew S. Baran, Richard L. Tutwiler, Donald J. Natale, et al.
This paper presents an approach to multi-modal detection of man-made objects from aerial imagery. Detections are made in polarization imagery, hyperspectral imagery, and LIDAR point clouds, then fused into a single confidence map. The detections are based on reflective, spectral, and geometric features of man-made objects in airborne images. The polarization imagery detector uses the Stokes parameters and the degree of linear polarization to find highly polarizing objects. The hyperspectral detector matches scene spectra to a library of man-made materials using a combination of the spectral gradient angle and the generalized likelihood ratio test. The LIDAR detector clusters 3D points into objects using principal component analysis and prunes the detections by size and shape. Once the three channels are mapped into detection images, the information can be fused without some of the problems of multi-modal fusion, such as edge reversal. The imagery used in this system was simulated with a first-principles ray tracing image generator known as DIRSIG.
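As a sketch of the polarization channel described above (hypothetical function names, assuming a polarimeter that measures intensities through linear polarizers at four angles; not the authors' implementation), the Stokes parameters and degree of linear polarization can be computed as:

```python
import numpy as np

def stokes_from_polarizer_images(i0, i45, i90, i135):
    """Stokes parameters from intensity images taken through linear
    polarizers at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 vs -45 degrees
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2):
    """DoLP from the first three Stokes parameters; values near 1
    indicate strongly polarizing (often man-made) surfaces."""
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
```

Thresholding the DoLP image then yields candidate detections of smooth, highly polarizing surfaces.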
A method to generate sub-pixel classification maps for use in DIRSIG three-dimensional models
Ryan N. Givens, Karl C. Walli, Michael T. Eismann
Developing new remote sensing instruments is a costly and time consuming process. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model gives users the ability to create synthetic images for a proposed sensor before building it. However, to produce synthetic images, DIRSIG requires facetized, three-dimensional models attributed with spectral and texture information which can themselves be costly and time consuming to produce. Recent work has been successful in generating these scenes using an automated method when coincident HyperSpectral Imagery (HSI), LIght Detection and Ranging (LIDAR), and high-resolution imagery of a site are available. An important step in this process is attributing the three-dimensional information gained from the LIDAR with spectral information gained from the HSI. Previous work was able to do this attribution at the resolution of the HSI, but the HSI is generally at the lowest resolution of the three modalities. Due to the highly accurate method used to register the HSI, LIDAR, and high-resolution imagery, the potential for bringing additional information into the classification process exists. This paper will present a method to generate classification maps at or near the resolution of the high-resolution imagery component of the fused imagery. Initial results using this new method are provided and are promising in terms of their ability to ultimately help produce higher fidelity DIRSIG models.
Snapshot spectral and polarimetric imaging: target identification with multispectral video
Brent D. Bartlett, Mikel D. Rodriguez
As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.
Blind analysis of multispectral and polarimetric data via canonical correlation analysis
Özgür Murat Polat, Yakup Özkazanç
In blind scene analysis, the aim is to obtain information about background and targets without any prior information. Blind methods can be considered as pre-processing steps for scene understanding. By means of blind signal separation methodologies, anomalies can be detected and exploited for target detection. There are many imaging sensor systems which use different properties of the emittance or reflectance characteristics of the scene components. Spectral reflectance properties are related to the material composition, and these multispectral characteristics can be exploited for detection, identification and classification of the scene components. As the light scattered from the scene elements shows polarization, polarized measurements can be used as extra features. Multispectral and polarimetric images of a scene each provide partial information, which can be combined to gain further insight into the scene and to facilitate detection. In this study, spectral and polarimetric images of a scene are analyzed via Canonical Correlation Analysis (CCA), a powerful multivariate statistical methodology. Multispectral and polarimetric data (spectro-polarimetric data) are treated as two different sets. Canonical variates obtained by CCA reveal different scene components such as background elements and some man-made objects. The linear relationship of the polarimetric and multispectral data of the same scene is also obtained by CCA.
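A minimal numpy-only sketch of CCA as used above (not the authors' code; a small regularization term is added for numerical stability) computes canonical variates by an SVD of the whitened cross-covariance:

```python
import numpy as np

def cca(x, y, n_components=2, reg=1e-8):
    """Canonical correlation analysis via SVD of the whitened
    cross-covariance. x, y: (n_samples, n_features) arrays.
    Returns canonical variates (u, v) and canonical correlations."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    n = len(x)
    cxx = x.T @ x / n + reg * np.eye(x.shape[1])
    cyy = y.T @ y / n + reg * np.eye(y.shape[1])
    cxy = x.T @ y / n

    def inv_sqrt(c):
        # Inverse matrix square root via eigendecomposition.
        w, q = np.linalg.eigh(c)
        return q @ np.diag(1.0 / np.sqrt(w)) @ q.T

    wx, wy = inv_sqrt(cxx), inv_sqrt(cyy)
    u_, s, vt = np.linalg.svd(wx @ cxy @ wy)
    a = wx @ u_[:, :n_components]          # canonical directions for x
    b = wy @ vt.T[:, :n_components]        # canonical directions for y
    return x @ a, y @ b, s[:n_components]
```

Applied per pixel, the leading variate pair highlights scene components shared by the multispectral and polarimetric sets.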
Spectral Data Analysis Methodologies II
icon_mobile_dropdown
Lossless to lossy compression for hyperspectral imagery based on wavelet and integer KLT transforms with 3D binary EZW
Kai-jen Cheng, Jeffrey Dill
In this paper, a lossless-to-lossy compression scheme for hyperspectral images based on the Integer Karhunen-Loève Transform (IKLT) and Integer Discrete Wavelet Transform (IDWT) is proposed. Integer transforms are used to accomplish reversibility. The IKLT is used as a spectral decorrelator and the 2D-IDWT is used as a spatial decorrelator. The three-dimensional Binary Embedded Zerotree Wavelet (3D-BEZW) algorithm efficiently encodes the hyperspectral volumetric image by implementing progressive bitplane coding. The signs and magnitudes of transform coefficients are encoded separately. Lossy and lossless compression of signs is implemented by the conventional EZW algorithm and arithmetic coding, respectively. The efficient 3D-BEZW algorithm is applied to code magnitudes. Further compression can be achieved using arithmetic coding. The lossless and lossy compression performance is compared with other state-of-the-art predictive and transform-based image compression methods on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Results show that the 3D-BEZW performance is comparable to predictive algorithms, while its computational cost is comparable to transform-based algorithms.
Analytical and comparative analysis of lossy ultraspectral image compression
Rolando Herrero, Vinay K. Ingle
AIRS (Atmospheric Infrared Sounder) images are a type of ultraspectral data cube that are good candidates for compression, as they include several thousand bands that account for well over 40 MB of information per image. In this paper we describe and mathematically model an improved architecture to accomplish lossy compression of AIRS images by presenting a sequence of techniques executed under the context of preprocessing and compression stages. Specifically, we describe both a reversible preprocessing stage that rearranges the AIRS data cube and a linear-prediction-based compression stage that improves the compression rate when compared to other state-of-the-art ultraspectral data compression techniques. After defining a distortion measure as well as its effect on real applications (i.e. AIRS Level 2 products), we present a mathematical model to approximate the rate-distortion behavior of the architecture and compare it against the experimental performance of the algorithm. The analysis relies on the vector quantization of the prediction error and assumes that the individual samples follow a Laplacian distribution that is the only source of distortion. In general, under an open-loop encoding scheme, the distortion caused by the quantization of linear-prediction coefficients is masked by the distortion introduced by the prediction error itself. The effect of the preprocessing stage on the analytical model is accounted for by different values of the Laplacian distribution parameter, such that the curve obtained by parametrically plotting rate against distortion is a close approximation of the experimental one.
Supervised method for optimum hyperspectral band selection
Much effort has been devoted to development of methods to reduce hyperspectral image dimensionality by locating and retaining data relevant for image interpretation while discarding that which is irrelevant. Irrelevance can result from an absence of information that could contribute to the classification, or from the presence of information that could contribute to the classification but is redundant with other information already selected for inclusion in the classification process.

We describe a new supervised method that uses mutual information to incrementally determine the most relevant combination of available bands and/or derived pseudo bands to differentiate a specified set of classes. We refer to this as relevance spectroscopy. The method identifies a specific optimum band combination and provides estimates of classification accuracy for data interpretation using a complementary, also information theoretic, classification procedure.

When modest numbers of classes are involved the number of relevant bands to achieve good classification accuracy is typically three or fewer. Time required to determine the optimum band combination is of the order of a minute on a personal computer. Automated interpretation of intermediate images derived from the optimum band set can often keep pace with data acquisition speeds.
Second order statistics target-specified virtual dimensionality
Virtual dimensionality (VD) has received considerable interest in its use of specifying the number of spectrally distinct signatures. So far all techniques are decomposition approaches which use eigenvalues, eigenvectors or singular vectors to estimate the virtual dimensionality. However, when eigenvalues are used to estimate VD, as in Harsanyi-Farrand-Chang's method or hyperspectral signal subspace identification by minimum error (HySime), there is no way to find what the spectrally distinct signatures are. On the other hand, if eigenvectors/singular vectors are used to estimate VD, as in the maximal orthogonal complement algorithm (MOCA), the eigenvectors/singular vectors do not represent real signal sources. In this paper we introduce a new concept, referred to as target-specified VD (TSVD), which operates on the signal sources themselves to both determine the number of distinct sources and identify their signatures. The underlying idea of TSVD was derived from that used to develop high-order statistics (HOS) VD, whose applicability to second order statistics (2OS) was not explored. In this paper we investigate a 2OS-based target finding algorithm, called the automatic target generation process (ATGP), to determine VD. Experiments are conducted in comparison with well-known and widely used eigen-based approaches.
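The ATGP algorithm investigated here has a simple, well-known form: starting from the brightest pixel, it repeatedly selects the pixel with the largest component orthogonal to the subspace spanned by the targets found so far. A minimal sketch (hypothetical function name, not the paper's code) is:

```python
import numpy as np

def atgp(pixels, n_targets):
    """Automatic target generation process.
    pixels: (n_pixels, n_bands) array; returns indices of selected pixels."""
    x = np.asarray(pixels, dtype=float)
    # Start with the pixel of maximum norm.
    idx = [int(np.argmax(np.sum(x**2, axis=1)))]
    for _ in range(n_targets - 1):
        u = x[idx].T                      # (bands, k) matrix of found targets
        # Projector onto the orthogonal complement: P = I - U (U^T U)^-1 U^T
        p = np.eye(x.shape[1]) - u @ np.linalg.pinv(u)
        resid = x @ p.T
        # Next target: pixel with largest orthogonal residual.
        idx.append(int(np.argmax(np.sum(resid**2, axis=1))))
    return idx
```

In the TSVD setting, the residual norms of successive ATGP targets can be monitored to decide how many distinct signal sources are present.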
Hyperspectral image unmixing via bilinear generalized approximate message passing
Jeremy Vila, Philip Schniter, Joseph Meola
In hyperspectral unmixing, the objective is to decompose an electromagnetic spectral dataset measured over M spectral bands and T pixels, into N constituent material spectra (or “endmembers”) with corresponding spatial abundances. In this paper, we propose a novel approach to hyperspectral unmixing (i.e., joint estimation of endmembers and abundances) based on loopy belief propagation. In particular, we employ the bilinear generalized approximate message passing algorithm (BiG-AMP), a recently proposed belief-propagation-based approach to matrix factorization, in a “turbo” framework that enables the exploitation of spectral coherence in the endmembers, as well as spatial coherence in the abundances. In conjunction, we propose an expectation-maximization (EM) technique that can be used to automatically tune the prior statistics assumed by turbo BiG-AMP. Numerical experiments on synthetic and real-world data confirm the state-of-the-art performance of our approach.
Comparing quadtree region partitioning metrics for hyperspectral unmixing
An approach for unsupervised unmixing using quadtree region partitioning is studied. Images are partitioned into spectrally homogeneous regions using quadtree region partitioning. Unmixing is performed in each individual region using positive matrix factorization, and the extracted endmembers are then clustered into endmember classes which account for the variability of spectral endmembers across the scene. The proposed method lends itself to an unsupervised approach. In the paper, the effect of different spectral variability metrics on the splitting of the image using quadtree partitioning is studied. Experimental results using the AVIRIS AP Hill image show that the Shannon entropy produces the image partitioning that best agrees with the published ground truth.
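Quadtree splitting driven by a homogeneity metric such as Shannon entropy can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the entropy here is computed on the histogram of a single quantized band, which is an assumption, not necessarily the paper's exact spectral-variability metric.

```python
import numpy as np

def shannon_entropy(block, bins=32):
    """Shannon entropy (bits) of a (rows, cols) block's intensity histogram."""
    hist, _ = np.histogram(block, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def quadtree(block, metric, threshold, min_size=4, origin=(0, 0)):
    """Recursively split a square block into quadrants while the
    homogeneity metric exceeds the threshold.
    Returns a list of (row, col, size) leaf regions."""
    size = block.shape[0]
    if size <= min_size or metric(block) <= threshold:
        return [(origin[0], origin[1], size)]
    h = size // 2
    leaves = []
    for dr in (0, h):
        for dc in (0, h):
            leaves += quadtree(block[dr:dr+h, dc:dc+h], metric, threshold,
                               min_size, (origin[0]+dr, origin[1]+dc))
    return leaves
```

Each leaf region would then be unmixed independently before the endmember-clustering step.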
Spectral Methodologies and Applications III
icon_mobile_dropdown
Impact of specular reflection on bottom type retrieved from WorldView-2 images
Karen W. Patterson, Gia Lamela
The Naval Research Laboratory (NRL) has been developing the Coastal Water Spectral Toolkit (CWST) to estimate water depth, bottom type and water column constituents such as chlorophyll, suspended sediments and chromophoric dissolved organic matter from hyperspectral imagery. The CWST uses a look-up table approach, comparing remote sensing reflectance spectra observed in an image to a database of modeled spectra for pre-determined water column constituents, depth and bottom type. Recently the CWST was modified to process multi-spectral WorldView-2 imagery. Generally imagery processed through the CWST has been collected under optimal sun and viewing conditions so as to minimize surface effects such as specular reflection. As such, in our standard atmospheric correction process we do not include a specular reflection correction. In June 2010 a series of 7 WorldView-2 images was collected within 2 minutes over Moreton Bay, Australia. The images clearly contain varying amounts of surface specular reflection. Each of the 7 images was processed through the CWST using identical processing to evaluate the impact of ignoring specular reflection on coverage and consistency of bottom types retrieved.
Using multi-angle WorldView-2 imagery to determine bathymetry near Oahu, Hawaii
Multispectral imaging (MSI) data collected at multiple angles over shallow water provide analysts with a unique perspective of bathymetry in coastal areas. Observations taken by DigitalGlobe’s WorldView-2 (WV-2) sensor acquired at 39 different view angles on 30 July 2011 were used to determine the effect of acquisition angle on bathymetry derivation. The site used for this study was Kailua Bay (on the windward side of the island of Oahu). Satellite azimuth and elevation for these data ranged from 18.8 to 185.8 degrees and 24.9 (forward-looking) to 24.5 (backward-looking) degrees (respectively), with 90 degrees representing a nadir view. Bathymetry was derived directly from the WV-2 radiance data using a band ratio approach. Comparison of results to LiDAR-derived bathymetry showed that varying view angle impacts the quality of the inferred bathymetry. Derived and reference bathymetry have a higher correlation as images are acquired closer to nadir. The band combination utilized for depth derivation also has an effect on derived bathymetry. Four band combinations were compared, and the Blue and Green combination provided the best results.
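Band-ratio depth retrieval is commonly done with a log-ratio model in the style of Stumpf et al., in which the ratio of log-transformed blue and green radiances grows with depth and two coefficients are tuned against reference depths. A sketch under that assumption (hypothetical function names; the paper's exact formulation may differ):

```python
import numpy as np

def ratio_depth(blue, green, m1, m0, n=1000.0):
    """Log-ratio depth estimator: deeper water attenuates the blue band
    less than the green band, so the ratio grows with depth.
    m1, m0 are tuned against reference (e.g. LiDAR) depths."""
    return m1 * np.log(n * blue) / np.log(n * green) - m0

def fit_ratio_coefficients(blue, green, ref_depth, n=1000.0):
    """Least-squares fit of m1, m0 to reference depths."""
    ratio = np.log(n * blue) / np.log(n * green)
    A = np.column_stack([ratio, -np.ones_like(ratio)])
    (m1, m0), *_ = np.linalg.lstsq(A, ref_depth, rcond=None)
    return m1, m0
```

Repeating the fit per view angle, and comparing residuals against the LiDAR reference, mirrors the angle-sensitivity comparison described in the abstract.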
Automatic ship detection from commercial multispectral satellite imagery
Brian J. Daniel, Alan P. Schaum, Eric C. Allman, et al.
Commercial multispectral satellite sensors spend much of their time over the oceans. NRL has demonstrated an automatic processing system for finding ships at sea using commercially available multispectral data. To distinguish ships from whitecaps and clouds, a water/cloud clutter subspace is estimated and a continuum fusion derived anomaly detection algorithm is applied. This provides a maritime awareness capability with an acceptable detection rate while maintaining a low rate of false alarms. The system also provides a confidence metric, which can be used to further limit the false alarm rate.
A decade of measured greenhouse forcings from AIRS
D. Chapman, P. Nguyen, M. Halem
Increased greenhouse gasses reduce the transmission of Outgoing Longwave Radiation (OLR) to space along spectral absorption lines, eventually causing the Earth’s temperature to rise in order to preserve energy equilibrium. This greenhouse forcing effect can be directly observed in the Outgoing Longwave Spectra (OLS) from space-borne infrared instruments with sufficiently high resolving power. In 2001, Harries et al. observed significant increases in greenhouse forcings by direct inter-comparison of the IRIS spectra of 1970 and the IMG spectra of 1997. We have extended this effort by measuring the annual rate of change of AIRS all-sky Outgoing Longwave Spectra (OLS) with respect to greenhouse forcings. Our calculations make use of a 2°×2° monthly gridded Brightness Temperature (BT) product. Decadal trends for AIRS spectra from 2002-2012 indicate a continued decrease of -0.06 K/yr in the trend of CO2 BT (700 cm-1 and 2250 cm-1), a decrease of -0.04 K/yr of O3 BT (1050 cm-1), and a decrease of -0.03 K/yr of the CH4 BT (1300 cm-1). Observed decreases in BT trends are expected due to ten years of increased greenhouse gasses even though global surface temperatures have not risen substantially over the last decade.
Initial validation of atmospheric compensation for a Landsat land surface temperature product
Monica J. Cook, John R. Schott
The Landsat series of satellites is the longest set of continuously acquired moderate resolution multispectral satellite imagery collected on a single maintained family of instruments. The data are very attractive because the entire archive has been radiometrically calibrated and characterized so that sensor-reaching radiance values are well known. Because of the spatial and temporal coverage provided by Landsat, it is an intriguing candidate for a land surface temperature (LST) product, an important earth system data record for fields including numerical weather prediction, climate research, and a number of agricultural applications. Using the Landsat long-wave infrared thermal band, LST can be derived with a well-characterized atmosphere and a known surface emissivity. This work integrates the North America Regional Reanalysis dataset (atmospheric profile data) with ASTER-derived emissivity data to perform LST retrievals. This paper emphasizes progress toward atmospheric compensation at each Landsat pixel. Due to differences in temporal and spatial sampling, a number of interpolations are required to compute the radiance due to temperature at each pixel. Radiosonde data and water temperatures derived from buoys are used as ground truth data to explore the error in the final predicted temperature. Preliminary results show consistent errors of less than 1 K in clear atmospheres but higher errors in hotter and more humid atmospheres. Future work will analyze results to predict error in the final retrieved temperatures using atmospheric conditions. The final goal is to report both a predicted LST and a confidence in this value.
Detection, Identification, and Quantification II
icon_mobile_dropdown
Detection of unknown gas-phase chemical plumes in hyperspectral imagery
Gas-phase chemical plumes exhibit, particularly in the infrared, distinctive emission signatures as a function of wavelength. Hyperspectral imagery can exploit this distinctiveness to detect specific chemicals, even at low concentrations, using matched filters that are tailored both to the specific structure of the chemical signature and to the statistics of the background clutter. But what if the chemical species is unknown? One can apply matched filters to a long list of candidate chemicals (or chemical mixtures), or one can treat the problem as one of anomaly detection. In this case, however, the anomalous signals of interest are not completely unknown. Gas spectra are generically sparse (absorbing or emitting at only a few wavelengths), and this property can be exploited to enhance the sensitivity of anomaly detection algorithms. This paper investigates the utility of sparse signal anomaly detection for the problem of finding plumes of gas with unknown chemistry in hyperspectral imagery.
Hyperspectral chemical plume quantification via background radiance estimation
Sidi Niu, Steven E. Golowich, Vinay K. Ingle, et al.
Existing chemical plume quantification algorithms assume that the off-plume radiance of a pixel containing the plume signal is unobservable. When the problem is limited to a single gas, the off-plume radiance may be estimated from the bands in which the gas absorption is nearly zero. It is then possible to compute the difference between the on- and off-plume radiances and solve for the plume strength from Beer's Law. The major advantage of this proposed method is that the gas strength can be resolved from the radiance difference so that the estimation error remains small for thick plumes.
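Under the single-gas Beer's law model described above, the plume strength (concentration-pathlength) follows from the log ratio of on- and off-plume radiances. A minimal sketch of both steps (hypothetical function names; the interpolation-based off-plume estimate is a simplification of the idea of using bands where gas absorption is nearly zero):

```python
import numpy as np

def estimate_off_plume(l_on, absorb, window_thresh=1e-3):
    """Approximate the off-plume radiance in absorbing bands by
    interpolating from bands where the gas absorption is near zero."""
    bands = np.arange(len(l_on))
    clear = absorb < window_thresh
    return np.interp(bands, bands[clear], l_on[clear])

def plume_strength(l_on, l_off, absorb):
    """Estimate concentration-pathlength gamma from Beer's law,
    L_on = L_off * exp(-gamma * absorb), by least squares on the
    log radiance ratio across bands."""
    y = np.log(l_on / l_off)          # = -gamma * absorb (+ noise)
    return -float(y @ absorb) / float(absorb @ absorb)
```

Because the strength is solved from the radiance difference directly, the estimate degrades gracefully for optically thick plumes, consistent with the abstract's claim.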
Detection and tracking of gas clouds in an urban area by imaging infrared spectroscopy
Samer Sabbah, Peter Rusch, Jörn-Hinnrich Gerhard, et al.
The release of toxic industrial compounds in urban areas is a threat for the population and the environment. In order to supply emergency response forces with information about the released compounds after accidents or terrorist attacks, monitoring systems such as the scanning imaging spectrometer SIGIS 2 or the hyperspectral imager HI 90 were developed. Both systems are based on the method of infrared spectroscopy. The systems were deployed to monitor gas clouds released in the harbor area of Hamburg. The gas clouds were identified, visualized and quantified from a distance in real time. Using data of two systems it was possible to identify contaminated areas and to determine the source location.
Spectral target detection using a physical model and a manifold learning technique
Identification of materials from calibrated radiance data collected by an airborne imaging spectrometer depends strongly on the atmospheric and illumination conditions at the time of collection. This paper presents a methodology for identifying material spectra using the assumption that each unique material class forms a lower-dimensional manifold (surface) in the higher-dimensional spectral radiance space and that all image spectra reside on, or near, these theoretic manifolds. Using a physical model, a manifold characteristic of the target material exposed to varying illumination and atmospheric conditions is formed. A graph-based model is then applied to the radiance data to capture the intricate structure of each material manifold, followed by the application of the commute time distance (CTD) transformation to separate the target manifold from the background. Detection algorithms are then applied in the CTD subspace. This nonlinear transformation is based on a Markov-chain model of a random walk on a graph and is derived from an eigendecomposition of the pseudoinverse of the graph Laplacian matrix. This paper discusses the properties of the CTD transformation and the atmospheric and illumination parameters varied in the physics-based model, and demonstrates the influence the target manifold samples have on the orientation of the coordinate axes in the transformed space. A comparison between detection performance in the CTD subspace and the spectral radiance space is also given for two hyperspectral images.
Target detection performed on manifold approximations recovered from hyperspectral data
In high dimensional data, manifold learning seeks to identify the embedded lower-dimensional, non-linear manifold upon which the data lie. This is particularly useful in hyperspectral imagery where inherently m-dimensional data is often sparsely distributed throughout the d-dimensional spectral space, with m << d. By recovering the manifold, inherent structures and relationships within the data – which are not typically apparent otherwise – may be identified and exploited. The sparsity of data within the spectral space can prove challenging for many types of analysis, and in particular with target detection. In this paper, we propose using manifold recovery as a preprocessing step for spectral target detection algorithms. A graph structure is first built upon the data and the transformation into the manifold space is based upon that graph structure. Then, the Adaptive Cosine/Coherence Estimator (ACE) algorithm is applied. We present an analysis of target detection performance in the manifold space using scene-derived target spectra from two different hyperspectral images.
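For reference, the ACE statistic applied after the manifold transformation is the squared cosine between the demeaned pixel and target in the background-whitened space. A standard numpy sketch (not the authors' code; diagonal loading is added for numerical stability):

```python
import numpy as np

def ace(pixels, target, reg=1e-6):
    """Adaptive Cosine/Coherence Estimator.
    pixels: (n, bands) array; target: (bands,) spectrum.
    Returns per-pixel scores in [0, 1]."""
    mu = pixels.mean(axis=0)
    x = pixels - mu
    s = target - mu
    cov = np.cov(x, rowvar=False) + reg * np.eye(x.shape[1])
    cinv = np.linalg.inv(cov)
    # Squared whitened inner product over product of whitened norms.
    num = (x @ cinv @ s) ** 2
    den = (s @ cinv @ s) * np.einsum('ij,jk,ik->i', x, cinv, x)
    return num / den
```

The same statistic can be evaluated in the recovered manifold coordinates by passing the transformed pixels and target in place of the radiance vectors.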
Target detection in inhomogeneous non-Gaussian hyperspectral data based on nonparametric density estimation
Performance of algorithms for target signal detection in Hyperspectral Imagery (HSI) often deteriorates when the data are neither statistically homogeneous nor Gaussian, or when their Joint Probability Density (JPD) does not match any presumed parametric model. In this paper we propose a novel detection algorithm which first attempts to divide the data domain into mostly Gaussian and mostly Non-Gaussian (NG) subspaces, and then estimates the JPD of the NG subspace with a non-parametric graph-based estimator. It then combines commonly used detection algorithms operating on the mostly Gaussian subspace with an LRT calculated directly from the estimated JPD of the NG subspace, to detect anomalies and known additive-type target signals. The algorithm's performance is compared to commonly used algorithms and is found to be superior in some important cases.
Spectral Sensor Development and Characterization
icon_mobile_dropdown
On super-resolved coded aperture spectral imaging
Hoover F. Rueda, Henry Arguello, Gonzalo R. Arce
The Coded Aperture Snapshot Spectral Imager (CASSI) senses three-dimensional scenes using a single focal plane array (FPA) snapshot. CASSI measurements are random projections of the underlying scene which can be recovered using the theory of compressive sensing (CS). This work generalizes the CASSI architecture to permit sensing high-resolution hyperspectral scenes using low-resolution FPAs. We present a multiple-shot extension of CASSI in which spectral super-resolution can be attained. In the proposed system, a second high-resolution coded aperture is introduced in CASSI to encode both the spatial and spectral dimensions of the data cube. This approach allows the reconstruction of super-resolved hyperspectral data cubes, where the number of spectral bands is significantly increased. Simulations show an improvement of up to 6 dB in PSNR image reconstruction, and a four-fold increase in spectral resolution.
Modeling, development, and testing of a shortwave infrared supercontinuum laser source for use in active hyperspectral imaging
Joseph Meola, Anthony Absi, James D. Leonard, et al.
A fundamental limitation of current visible through shortwave infrared hyperspectral imaging systems is the dependence on solar illumination. This reliance limits the operability of such systems to small windows during which the sun provides enough solar radiation to achieve adequate signal levels. Similarly, nighttime collection is infeasible. This work discusses the development and testing of a high-powered supercontinuum laser for potential use as an on-board illumination source coupled with a hyperspectral receiver to allow for day/night operability. A 5-watt shortwave infrared supercontinuum laser was developed, characterized in the lab, and tower-tested along a 1.6 km slant path to demonstrate propagation capability as a spectral light source.
Low-complexity image processing for a high-throughput low-latency snapshot multispectral imager with integrated tiled filters
Bert Geelen, Murali Jayapala, Nicolaas Tack, et al.
Traditional spectral imaging cameras typically operate as pushbroom cameras by scanning a scene. This approach makes such cameras well-suited for high spatial and spectral resolution scanning applications, such as remote sensing and machine vision, but ill-suited for 2D scenes with free movement. This limitation can be overcome by single frame, multispectral (here called snapshot) acquisition, where an entire three-dimensional multispectral data cube is sensed at one discrete point in time and multiplexed on a 2D sensor. Our snapshot multispectral imager is based on optical filters monolithically integrated on CMOS image sensors with large layout flexibility. Using this flexibility, the filters are positioned on the sensor in a tiled layout, allowing trade-offs between spatial and spectral resolution. At system-level, the filter layout is complemented by an optical sub-system which duplicates the scene onto each filter tile. This optical sub-system and the tiled filter layout lead to a simple mapping of 3D spectral cube data on the sensor, facilitating simple cube assembly. Therefore, the required image processing consists of simple and highly parallelizable algorithms for reflectance and cube assembly, enabling real-time acquisition of dynamic 2D scenes at low latencies. Moreover, through the use of monolithically integrated optical filters the multispectral imager achieves the qualities of compactness, low cost and high acquisition speed, further differentiating it from other snapshot spectral cameras. Our prototype camera can acquire multispectral image cubes of 256x256 pixels over 32 bands in the spectral range of 600-1000nm at 340 cubes per second for normal illumination levels.
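The "simple cube assembly" enabled by the tiled layout amounts to cropping each duplicated tile out of the sensor frame and stacking the tiles along the spectral axis. A minimal sketch under the assumption of an exact, gap-free tile grid (hypothetical function name, not the vendor's pipeline):

```python
import numpy as np

def assemble_cube(frame, tile_rows, tile_cols):
    """Assemble a multispectral cube from a single sensor frame on which
    the scene is duplicated across a grid of filter tiles, one spectral
    band per tile. frame: (H, W); returns (h, w, tile_rows*tile_cols)."""
    h = frame.shape[0] // tile_rows
    w = frame.shape[1] // tile_cols
    cube = np.empty((h, w, tile_rows * tile_cols), dtype=frame.dtype)
    for r in range(tile_rows):
        for c in range(tile_cols):
            # Band index follows the row-major tile order.
            cube[:, :, r * tile_cols + c] = frame[r*h:(r+1)*h, c*w:(c+1)*w]
    return cube
```

Because each band is a plain crop, the per-frame work is a fixed set of copies, which is why the assembly parallelizes trivially and sustains video-rate acquisition.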
Spectral Data Analysis Methodologies III
icon_mobile_dropdown
Learning to merge: a new tool for interactive mapping
Reid B. Porter, Sheng Lundquist, Christy Ruggiero
The task of turning raw imagery into semantically meaningful maps and overlays is a key area of remote sensing activity. Image analysts, in applications ranging from environmental monitoring to intelligence, use imagery to generate and update maps of terrain, vegetation, road networks, buildings and other relevant features. Often these tasks can be cast as a pixel labeling problem, and several interactive pixel labeling tools have been developed. These tools exploit training data, which is generated by analysts using simple and intuitive paint-program annotation tools, in order to tailor the labeling algorithm for the particular dataset and task. In other cases, the task is best cast as a pixel segmentation problem. Interactive pixel segmentation tools have also been developed, but these tools typically do not learn from training data like the pixel labeling tools do. In this paper we investigate tools for interactive pixel segmentation that also learn from user input. The input has the form of segment merging (or grouping). Merging examples are 1) easily obtained from analysts using vector annotation tools, and 2) more challenging to exploit than traditional labels. We outline the key issues in developing these interactive merging tools, and describe their application to remote sensing.
Enhancement of hyperspectral imagery using spectrally weighted tensor anisotropic nonlinear diffusion for classification
Tensor Anisotropic Nonlinear Diffusion (TAND) is a divergence PDE-based diffusion technique that is “guided” by an edge descriptor, such as the structure tensor, which steers the diffusion. The structure tensor for vector-valued images such as HSI is most often defined as the average of the scalar structure tensors for each band. The problem with this definition is the assumption that all bands provide the same amount of edge information, giving them the same weights. As a result, non-edge pixels can be reinforced and edges can be weakened, resulting in poor performance by processes that depend on the structure tensor. Iterative processes such as TAND, in particular, are vulnerable to this phenomenon. Recently a weighted structure tensor based on the heat operator has been proposed [1]. This tensor takes advantage of the fact that, in HSI, neighboring spectral bands are highly correlated, as are the bands of its gradient. By taking advantage of local spectral information, the proposed scheme gives higher weighting to local spectral features that could be related to edge information, allowing the diffusion process to better enhance edges while smoothing out uniform regions, facilitating the process of classification. This article presents how classification results are affected by using TAND based on the heat-weighted structure tensor as an image enhancement step in a classification system.
Pan-sharpening of spectral image with anisotropic diffusion for fine feature extraction using GPU
Feature extraction from satellite imagery is a challenging topic. Commercial multispectral satellite data sets, such as WorldView-2 images, are often delivered with a high spatial resolution panchromatic image (PAN) as well as a corresponding low-resolution multispectral image (MSI). Certain fine features are only visible on the PAN but difficult to discern on the MSI. To fully utilize the high spatial resolution of the PAN and the rich spectral information from the MSI, a pan-sharpening process can be carried out. In this paper, we propose a novel and fast pan-sharpening process based on anisotropic diffusion with the aim of aiding feature extraction by enhancing salient spatial features. Our approach assumes that each pixel spectrum in the pan-sharpened image is a weighted linear mixture of the spectra of its immediate neighboring superpixels; it treats the spectrum as its smallest element of operation, which is different from most existing algorithms that process each band separately. Our approach is shown to be capable of preserving salient features. In addition, the process is highly parallel with intensive neighbor operations and is implemented on a general purpose GPU card with NVIDIA CUDA architecture, achieving approximately 25 times speedup for our setup. We expect this algorithm to facilitate fine feature extraction from satellite images.
An analysis of the probability distribution of spectral angle and Euclidean distance in hyperspectral remote sensing using microspectroscopy
Ronald G. Resmini, Christopher J. Deloye, David W. Allen
Determining the probability distribution of hyperspectral imagery (HSI) data, and of the results of algorithms applied to those data, is critical to understanding algorithm performance and for establishing performance metrics such as probability of detection, false alarm rate, and minimum detectable and identifiable quantities. The results of analyses of visible/near-infrared (VNIR; 400 nm to 900 nm) HSI microscopy data of small fragments (~1.25 cm in size) of minerals are presented. HSI microscopy, also known as microspectroscopy, is the acquisition of HSI data cubes of fields of view ranging from centimeters to millimeters in size. It is imaging spectrometry but at a small spatial scale. With HSI microspectroscopy, several thousand spectral signatures may be easily acquired of individual target materials—samples of which may be quite small. With such data, probability distributions may be very precisely determined. For faceted/irregularly shaped samples and mixtures (checkerboard, intimate, or microscopic), HSI microscopy data readily facilitate a detailed assessment of the contribution of the materials, their morphology, spectral mixing interactions, radiative transfer processes, view/illumination geometry contributions, etc., to the observed probability distribution(s) of the HSI data and of algorithm output. Here, spectral angle, the individual components of spectral angle (e.g., the inner product or numerator of the spectral angle equation), Euclidean distance, and L1 norm values are calculated. Regions of interest (ROIs) on the fragments are easily defined that contain thousands of spectra far from the fragments' edges, though translucency sometimes remains a factor impacting spectral signatures.
The resulting probability distributions of the various populations of the metrics are decidedly non-Gaussian, though the precise probability distribution is difficult to determine. Spectral angle values appear to be best approximated by a beta distribution. The HSI microscopy method is described, as are the results of the analyses applied to the data of the mineral fragments. The interpretation of the microspectroscopy data is considered within the ongoing investigation into determining how the spectral variability on the ~10 micrometer spatial scale relates to the spectral variability on larger scales, such as those acquired by airborne remote sensing systems.
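For reference, the three similarity metrics named in the abstract can be sketched in a few lines of numpy; this is a generic illustration of spectral angle, Euclidean distance, and the L1 norm for spectra stored as 1-D arrays, not the authors' analysis code.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; the cosine is the
    normalized inner product (the numerator referenced in the abstract)."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def euclidean_distance(a, b):
    """L2 distance between two spectra."""
    return np.linalg.norm(a - b)

def l1_distance(a, b):
    """L1 (sum of absolute differences) distance between two spectra."""
    return np.sum(np.abs(a - b))
```

Note that the spectral angle is invariant to overall brightness (a scaled copy of a spectrum has zero angle to the original), while the Euclidean and L1 distances are not; this difference is one reason the metrics yield different distributions over an ROI.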
Advanced spectral signature discrimination algorithm
Sumit Chakravarty, Wenjie Cao, Alim Samat
This paper presents a novel approach to hyperspectral signature analysis. Hyperspectral signature analysis has been widely studied, and many algorithms have been developed to discriminate between hyperspectral signatures. Binary coding approaches such as SPAM and SFBC use basic statistical thresholding operations to binarize a signature, which is then compared using the Hamming distance. This framework has been extended in techniques such as SDFC, wherein a set of primitive structures is used to characterize local variations in a signature together with overall statistical measures such as the mean. Such structures capture only local variations and do not exploit any covariation between spectrally distant parts of the signature. The approach of this research is to harvest such information using a technique similar to circular convolution. We consider the signature as cyclic by joining its two ends, and then create two copies of the spectral signature. These three signatures can be placed next to each other like the rotating discs of a combination lock. We then find local structures at different circular shifts between the three cyclic spectral signatures. Texture features, as in SDFC, can be used to study the local structural variation at each circular shift. Different measures can then be created by building histograms from the shifts and applying different information-extraction techniques to the histograms; depending on the technique used, different variants of the proposed algorithm are obtained. Experiments show the viability of the proposed methods and their performance compared to current binary signature coding techniques.
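The abstract does not fully specify the shift-comparison step, but the cyclic-signature idea can be illustrated with a minimal, hypothetical proxy: standardize a signature and compute its agreement with circularly rotated copies, one value per shift. The choice of correlation as the agreement measure is an assumption for illustration only.

```python
import numpy as np

def shift_similarity_profile(sig, max_shift):
    """Correlation of a standardized signature with its circularly rotated
    copies (np.roll), one value per shift — a simple proxy for comparing
    the 'combination lock' discs described in the abstract (hypothetical)."""
    sig = (sig - sig.mean()) / sig.std()
    n = len(sig)
    return np.array([np.dot(sig, np.roll(sig, k)) / n for k in range(max_shift)])
```

A histogram of such per-shift values is one way covariation between spectrally distant parts of the signature could be summarized, in the spirit of the histogram-based measures the paper describes.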
Blind source separation of the HyMap hyperspectral data via canonical correlation analysis
Özgür Murat Polat, Yakup Özkazanç
In hyperspectral data analysis, blind separation of the target and background can be considered as a pre-processing step in the target detection process. Blind Source Separation (BSS) techniques can be used when there is no prior information on the scene for image understanding. Principal Component Analysis and Independent Component Analysis have previously been used as blind techniques for the analysis of hyperspectral data. In this study, we propose a blind analysis methodology based on Canonical Correlation Analysis (CCA) for the analysis of hyperspectral data sets. CCA is a multivariate method of analysis for the exploration of the data structures which extremize the correlations between two data sets. The hyperspectral data analyzed in this study is the HyMap sensor data available on the Target Detection Blind Test website. We produce two data sets out of the HyMap data cube, which are later subjected to CCA. In the creation of these data sets, two different approaches are used. In the first case, the HyMap data cube is simply divided into two sub-cubes by spectral separation. In the second approach, the second data cube is derived from the HyMap data by spatial filtering. In both cases, the two data sets are analyzed via CCA and the canonical variates of these data sets are obtained. The scene components are obtained from images expressed by the canonical variates. The CCA methodology and its use as a blind analysis tool are presented on the HyMap data.
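To make the CCA step concrete, here is a minimal numpy sketch that extracts the first pair of canonical variates from two sample matrices via whitening and an SVD of the cross-covariance. This is a generic textbook formulation, not the authors' implementation; the small ridge term is an assumption added for numerical stability.

```python
import numpy as np

def cca_first_pair(X, Y, reg=1e-6):
    """First pair of canonical variates of row-sample matrices X, Y.
    Returns (u, v, r): the two variates and their canonical correlation."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten both blocks (Cholesky factors) and take the leading singular
    # vectors of the whitened cross-covariance.
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    a = np.linalg.solve(Lx.T, U[:, 0])   # canonical weight vector for X
    b = np.linalg.solve(Ly.T, Vt[0])     # canonical weight vector for Y
    return X @ a, Y @ b, s[0]
```

In the paper's setting, the rows of X and Y would be pixel spectra of the two sub-cubes (spectrally split or spatially filtered), and the canonical variate images are what reveal the scene components.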
Spectral Signature Measurements and Applications
icon_mobile_dropdown
Intensity offset and correction of solid spectral library samples measured behind glass
Bruce E. Bernacki, Rebecca L. Redding, Yin-Fong Su, et al.
Accurate and calibrated diffuse reflectance spectral libraries of solids are becoming more important for hyperspectral and multispectral remote sensing exploitation. Many solids are in the form of powders or granules, and in order to measure their diffuse reflectance spectra in the laboratory, it is often necessary to place the samples behind a transparent medium such as glass or quartz for the ultraviolet (UV), visible or near-infrared spectral regions to prevent their unwanted dispersal into the instrument or laboratory environment. Using both experimental and theoretical methods, we have found that for the case of fused quartz this leads to an intensity offset in the reflectance values. Although the expected dispersive effects were observed for the fused quartz window in the UV, the measured hemispherical reflectance values are predominantly vertically shifted by the reflectance from the air-quartz and sample-quartz interfaces, with intensity-dependent offsets leading to measured values up to ∼6% too high for a 2% reflectance surface, ∼3.8% too high for 10% reflecting materials, approximately correct (to within experimental error) for 40% to 60% diffuse reflecting surfaces, and ∼2% too low for 99% reflecting Spectralon surfaces. For the diffuse reflectance case, the measured values are uniformly too low due to the glass, with deviations approaching 6% for reflectance values near 99%. The deviations arise from the added reflections from the quartz surfaces, as verified by theory, modeling and experiment. Empirical correction factors were implemented in post-processing software to redress the artifact for hemispherical and diffuse reflectance data across the 300 nm to 2300 nm range.
A microscene approach to the evaluation of hyperspectral system level performance
David W. Allen, Ronald G. Resmini, Christopher J. Deloye, et al.
Assessing the ability of a hyperspectral imaging (HSI) system to detect the presence of a substance or to quantify abundance requires an understanding of the many factors in the end-to-end remote sensing scenario from scene to sensor to data exploitation. While there are methods which attempt to model such an overall scenario, they are necessarily implemented with assumptions and approximations that do not completely capture the true complexity of the actual radiative transfer processes nor do they capture the range of variability that materials display in a natural setting. We propose one alternative to numerical data models that generate hyperspectral image cubes for system trade studies and for algorithm development and testing. This approach makes use of compact hyperspectral imagers that can be used in the laboratory to measure materials in a 'microscene' specific to one’s application. The key to acceptance of this approach is quantifying the distributions of spectra as points in n-D space so that one can compare the spectral complexity of laboratory generated microscene data to that of an earth remote sensing scene. The spectral complexity of the microscene generated in the lab is thus compared to airborne remotely sensed HSI. We produce and measure a microscene, estimate its data dimensionality, and compare that to similar estimates of dimensionality of airborne HSI data sets. Signal-to-clutter ratios (SCR) of the microscene are also compared to those derived from airborne HSI data. The results suggest the microscene is capable of producing a scene that is as complex, if not more so, than that of a hyperspectral scene collected from an airborne sensor. A scene classification analysis and a system trade study are conducted to illustrate the utility of the microscene for assessing system-level performance. 
This simple, low-cost method can provide proxy data with a distribution of points in n-dimensional (n-D) hyperspace that are indistinguishable from an earth remote sensing scene.
Spectral variability constraints on multispectral and hyperspectral mapping performance
F. A. Kruse, K. G. Fairbarn
Common approaches to multispectral imagery (MSI) and hyperspectral imagery (HSI) data analysis often utilize key image endmember spectra as proxies for ground measurements to classify imagery based on their spectral signatures. Most of these, however, take an average spectral signature approach and do not consider spectral variability. Multiple spectral measurements, whether from imagery data or utilizing a field spectrometer, demonstrate high variability linked not only to inherent material variability, but to acquisition parameters such as spatial and spectral resolution, and spectral mixing. This research explores causes and characteristics of spectral variability in remotely sensed data and its effect on spectral classification and mapping. WorldView-2 multispectral imagery and Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral (HSI) data at similar spatial resolutions were corrected to reflectance using a model-based approach supplemented by field spectral measurements. A second AVIRIS dataset at lower spatial resolution was also used. These data were then analyzed using ground and image spectral endmember spectra. Endmember spectra were assessed in terms of their spectral variability, and statistical and spectral-feature-based classification approaches were tested and compared. Results illustrate that improved mapping can be achieved when spectral variability of individual endmembers is taken into account in the classification.
Multispectral and hyperspectral advanced characterization of soldier's camouflage equipment
The requirements for soldier camouflage in the context of modern warfare are becoming more complex and challenging given the emergence of novel infrared sensors. There is a pressing need for the development of adapted fabrics and soldier camouflage devices that provide efficient camouflage in both the visible and infrared spectral ranges. The Military University of Technology has conducted an intensive project to develop new materials and fabrics to further improve the camouflage efficiency of soldiers. The developed materials shall feature visible and infrared properties that make them unique and adapted to various military context needs. This paper presents the details of an advanced measurement campaign on those unique materials, in which the correlation between multispectral and hyperspectral infrared measurements is performed.
Spectral Data Enhancement Technologies and Techniques
icon_mobile_dropdown
Spectral image destriping using a low-dimensional model
Striping effects, i.e., artifacts that vary systematically with the image column or row, may arise in hyperspectral or multispectral imagery from a variety of sources. One potential source of striping is a physical effect inherent in the measurement, such as a variation in viewing geometry or illumination across the image. More common sources are instrumental artifacts, such as a variation in spectral resolution, wavelength calibration or radiometric calibration, which can result from imperfect corrections for spectral “smile” or detector array nonuniformity. This paper describes a general method of suppressing striping effects in spectral imagery by referencing the image to a spectrally low-dimensional model. The destriping transform for a given column or row is taken to be affine, i.e., specified by a gain and offset. The image cube model is derived from a subset of spectral bands or principal components thereof. The general approach is effective for all types of striping, including broad or narrow, sharp or graduated, and is applicable to radiance data at all optical wavelengths and to reflectance data in the solar (visible through short-wave infrared) wavelength region. Some specific implementations are described, including a method for suppressing effects of viewing angle variation in VNIR-SWIR imagery.
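The per-column affine correction can be sketched directly: given a band and a spectrally low-dimensional reference image (e.g., reconstructed from a few principal components of other bands, as the abstract suggests), fit a gain and offset for each column by least squares and apply them. How the reference is built is left abstract here; this is an illustration of the affine step, not the paper's full method.

```python
import numpy as np

def destripe_band(band, reference):
    """Per-column affine destriping: for each image column, fit gain/offset
    mapping the striped column onto the corresponding column of a
    low-dimensional reference image, then apply the fit."""
    out = np.empty_like(band, dtype=float)
    for j in range(band.shape[1]):
        x = band[:, j].astype(float)
        y = reference[:, j].astype(float)
        gain, offset = np.polyfit(x, y, 1)   # least-squares affine fit
        out[:, j] = gain * x + offset
    return out
```

Because the transform is affine per column, it can remove both multiplicative (gain) and additive (offset) stripes, whether broad or narrow, without altering the within-column spatial structure.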
Fully automatic destriping of Hyperion hyperspectral satellite imagery using local window statistics
NASA’s EO-1 satellite, well into its second decade of operation, continues to provide multispectral and hyperspectral data to the remote sensing community. The Hyperion pushbroom hyperspectral spectrometer aboard EO-1 can be a rich and useful source of high temporal resolution hyperspectral data. Unfortunately, the Hyperion sensor suffers from several issues, including a low signal-to-noise ratio in many band regions as well as imaging artifacts. One artifact is the presence of vertical striping which, if uncorrected, limits the value of the Hyperion imagery. The detector array reads all spectral bands one spatial dimension (cross-track) at a time; the second spatial dimension (in-track) arises from the motion of the satellite. The striping is caused by calibration errors in the detector array that appear as a vertical striping pattern in the in-track direction. Because of the layout of the sensor array, each spectral band exhibits its own characteristic striping pattern, each of which must be corrected independently. Many current Hyperion destriping algorithms focus on correcting stripes by analyzing the column means and standard deviations of each band; the more effective algorithms use windowing of the column means and interband correlation of these window means. The approach taken in this paper achieves greater accuracy and effectiveness by using local windowing not only in the cross-track dimension but also along the in-track dimension. This allows detection of the striping patterns in radiometrically homogeneous areas, providing improved detection accuracy.
Accurate accommodation of scan-mirror distortion in the registration of hyperspectral image cubes
To improve the spatial sampling of scanning hyperspectral cameras, it is often necessary to capture numerous overlapping image cubes and later mosaic them to form the overall image cube. For hyperspectral camera systems having broad-area coverage, whisk-broom scanning using an external mirror is often employed. Creating the final image cube mosaic requires sub-pixel correction of the scan-mirror distortion, as well as alignment of the individual image cubes. For systems lacking geo-positional information that relates sensor to scene, alignment of the image scans is nontrivial. Here we present a novel algorithm that removes scan distortion and aligns hyperspectral image cubes based on correlation of the cubes’ image content with a reference image.

The algorithm is able to provide robust results by recognizing that the cubes’ image content will not always match identically with that of the reference image. For example, in cultural heritage applications, the reference color image of the finished painting need not match the under-painting seen in the SWIR. Our approach is to identify a corresponding set of points between the cubes and the reference image, using a subset of wavelet scales, and then filtering out matches that are inconsistent with a map of the distortion. The filtering is performed by removing points iteratively according to their proximity to a function fit to their disparity (distance between the matched points). Our method will be demonstrated and our results validated using hyperspectral image cubes (976-1680 nm) and visible reference images from the fields of remote sensing and cultural heritage preservation.
Automated geo/ortho registered aerial imagery product generation using the mapping system interface card (MSIC)
Tim Bratcher, Robert Kroutil, André Lanouette, et al.
The development concept paper for the MSIC system was first introduced in August 2012 by these authors. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3kg MSIC is a self-contained, compact variable configuration, low cost real-time precision metadata annotator with embedded INS/GPS designed specifically for use in small aircraft. The MSIC was specifically designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable for use by emergency responders and scientists. The MSIC will significantly enhance the capability of emergency responders and scientists by providing a direct transformation of these important COTS sensor tools into low-cost real-time aerial mapping systems.
Estimating the pixel footprint distribution for image fusion by ray tracing lines of sight in a Monte Carlo scheme
T. Opsahl, T. V. Haavardsholm
Images from airborne cameras can be a valuable resource for data fusion, but this typically requires them to be georeferenced. This usually implies that the information of every pixel should be accompanied by a single geographical position describing where the center of the pixel is located in the scene. This geospatial information is well suited for tasks like target positioning and orthorectification. But when it comes to fusion, a detailed description of the area on the ground contributing to the pixel signal would be preferable over a single position. In this paper we present a method for estimating these regions. Simple Monte Carlo simulations are used to combine the influences of the main geometrical aspects of the imaging process, such as the point spread function, the camera’s motion and the topography in the scene. Since estimates of the camera motion are uncertain to some degree, this is incorporated in the simulations as well. For every simulation, a pixel’s sampling point in the scene is estimated by intersecting a randomly sampled line of sight with a 3D-model of the scene. Based on the results of numerous simulations, the pixel’s sampling region can be represented by a suitable probability distribution. This will be referred to as the pixel’s footprint distribution (PFD). We present results for high resolution hyperspectral pushbroom images of an urban scene.
Evaluation of the CASSI-DD hyperspectral compressive sensing imaging system
Maria Busuioceanu, David W. Messinger, John B. Greer, et al.
Compressive Sensing (CS) systems capture data with fewer measurements than traditional sensors, assuming that imagery is redundant and compressible in the spatial and spectral dimensions. We utilize a model of the Coded Aperture Snapshot Spectral Imager-Dual Disperser (CASSI-DD) CS system to simulate CS measurements from HyMap images. Flake et al.'s novel reconstruction algorithm, which combines a spectral smoothing parameter and spatial total variation (TV), is used to create high-resolution hyperspectral imagery [1]. We examine the effect of the number of measurements, which corresponds to the percentage of physical data sampled, on the fidelity of the simulated data. The impacts of the CS sensor model and reconstruction on the data cloud, and the utility for various hyperspectral applications, are described to identify the strengths and limitations of CS.
Modeling satellite imaging sensors over optically complex bodies of water
Robert Nevins, Aaron Gerace
Although several currently operating remote sensing satellites can take effective data from Case 1 waters, which are dominated by phytoplankton, few instruments have the appropriate spatial and radiometric resolution for taking effective data from Case 2 waters, which contain significant levels of chlorophyll, suspended material, and colored dissolved organic matter. The Operational Land Imager, which was launched on February 11th, 2013, should have sufficient spatial and radiometric resolution to take useful data from Case 2 waters as well as the continental Earth. The purpose of this study was to compare the constituent retrieval accuracy of the Operational Land Imager over these waters to that of existing sensors. The models used to evaluate the sensors were based on signal-to-noise ratios calculated from image data, spectral response functions, and bit depths of each satellite. The sensor models were used to sample radiance spectra from different Hydrolight simulations, which were calculated based on user-specified levels of the Case 2 constituents. Then, the concentrations were retrieved for each satellite based on the sensor models, and the error was found with respect to the known levels for each spectral curve. Thus, we present an approximation of how effective the Operational Land Imager will be for monitoring Case 2 waters, compared to existing sensors.
Clustering and Classification
icon_mobile_dropdown
Spectral dependence of texture features integrated with hyperspectral data for area target classification improvement
Hyperspectral data were assessed to determine the effect of integrating spectral data and extracted texture feature data on classification accuracy. Four separate spectral ranges (hundreds of spectral bands total) were used from the Visible and Near Infrared (VNIR) and Shortwave Infrared (SWIR) portions of the electromagnetic spectrum. Haralick texture features (contrast, entropy, and correlation) were extracted from the average gray-level image for each of the four spectral ranges studied. A maximum likelihood classifier was trained using a set of ground truth regions of interest (ROIs) and applied separately to the spectral data, texture data, and a fused dataset containing both. Classification accuracy was measured by comparison of results to a separate verification set of test ROIs. Analysis indicates that the spectral range (source of the gray-level image) used to extract the texture feature data has a significant effect on the classification accuracy. This result applies to texture-only classifications as well as the classification of integrated spectral data and texture feature data sets. Overall classification improvement for the integrated data sets was near 1%. Individual improvement for integrated spectral and texture classification of the “Urban” class showed approximately 9% accuracy increase over spectral-only classification. Texture-only classification accuracy was highest for the “Dirt Path” class at approximately 92% for the spectral range from 947 to 1343 nm. This research demonstrates the effectiveness of texture feature data for more accurate analysis of hyperspectral data and the importance of selecting the correct spectral range to be used for the gray-level image source to extract these features.
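The three Haralick features named above are all derived from a gray-level co-occurrence matrix (GLCM). Here is a minimal numpy sketch; the pixel-pair offset, quantization level count, and symmetrization are generic textbook choices, not necessarily the paper's settings.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for an integer
    image quantized to `levels` gray levels, with pixel-pair offset (dy, dx)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1
    P = P + P.T                     # make the matrix symmetric
    return P / P.sum()

def haralick(P):
    """Contrast, entropy, and correlation of a normalized GLCM."""
    idx = np.arange(P.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    contrast = np.sum(P * (i - j) ** 2)
    nz = P > 0
    entropy = -np.sum(P[nz] * np.log2(P[nz]))
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sd_i = np.sqrt(np.sum(P * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(P * (j - mu_j) ** 2))
    correlation = np.sum(P * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return contrast, entropy, correlation
```

In the paper's pipeline, `img` would be the average gray-level image of one spectral range, so the choice of range directly changes the GLCM and hence all three features.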
A semi-supervised classification algorithm using the TAD-derived background as training data
In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters so that all other pixels can then be classified into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMap sensor over the Cooke City, MT area and the University of Pavia scene.
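Of the two supervised back-ends named, Minimum Distance to the Mean is the simpler; a minimal sketch follows, assuming the TAD-derived ROIs are already given as per-class spectrum arrays (the TAD graph construction itself is not reproduced here).

```python
import numpy as np

def mdm_train(rois):
    """Compute the mean spectrum of each class ROI.
    `rois` maps class label -> (n_samples, n_bands) array of spectra."""
    return {c: X.mean(axis=0) for c, X in rois.items()}

def mdm_classify(pixel, means):
    """Assign the pixel to the class whose mean spectrum is nearest (L2)."""
    return min(means, key=lambda c: np.linalg.norm(pixel - means[c]))
```

The semi-supervised aspect is entirely in where `rois` comes from: instead of analyst-drawn training regions, the largest TAD background components supply the labels automatically.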
Scale profile as feature for quick satellite image object-based classification
David Dubois, Richard Lepage
With the increasing precision of recent spaceborne sensors, remotely sensed images have become exceedingly large. These images are being used more and more often in the preparation of emergency maps when a disaster occurs. Visual interpretation of these images is slow, and automatic pixel-based methods require a lot of memory, processing power and time. In this paper, we propose to use a fast level-set image transformation in order to obtain a hierarchical representation of the image's objects. A scale profile is then extracted and included as a relevant feature for land-use classification in urban areas. The main contribution of this paper is the analysis of the scale profile for remote sensing applications. The data set from the earthquake that occurred on 12 January 2010 in Haiti is used.
Multi-scale vector tunnel classification algorithm for hyperspectral images
S. Demirci, I. Erer, Nu. Unaldi
Hyperspectral image (HSI) classification encompasses a variety of algorithms, either supervised or unsupervised. Supervised classification uses reference (training) data; unsupervised methods do not. The type of classification algorithm depends on the nature of the input and reference data.

Spectral matching, statistical, and kernel-based methods are the most widely known classification approaches for hyperspectral imaging. Spectral matching algorithms identify the similarity of the unknown spectral signature of a test pixel to an expected signature. Even though most spectra in real applications are random, the amount of training data relative to the dimensionality substantially affects the performance of statistical classifiers.

In this study, an efficient spectral similarity method employing a Multi-Scale Vector Tunnel Algorithm (MS-VTA) for supervised classification of materials in hyperspectral imagery is introduced. With the proposed algorithm, a simple spectral-similarity-based decision rule using a limited amount of reference data or spectral signatures is formed and compared with the Euclidean Distance (ED) and Spectral Angle Mapper (SAM) classifiers. The prediction of multi-level upper and lower spectral boundaries of the spectral signatures for all classes across spectral bands constitutes the basic principle of the proposed algorithm.
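The "tunnel" idea — per-band upper and lower boundaries learned from reference spectra — can be illustrated with a deliberately simplified, single-scale sketch; the multi-scale boundary prediction of the actual MS-VTA is not reproduced, and the fraction-of-bands-inside score is an assumption for illustration.

```python
import numpy as np

def tunnel_fit(train):
    """Per-band lower/upper boundaries of a class's training spectra.
    `train` is an (n_samples, n_bands) array."""
    return train.min(axis=0), train.max(axis=0)

def tunnel_score(pixel, lo, hi, margin=0.0):
    """Fraction of bands where the pixel falls inside the [lo, hi] tunnel."""
    inside = (pixel >= lo - margin) & (pixel <= hi + margin)
    return inside.mean()

def classify(pixel, tunnels):
    """Assign the pixel to the class whose tunnel it fits best."""
    scores = {c: tunnel_score(pixel, lo, hi) for c, (lo, hi) in tunnels.items()}
    return max(scores, key=scores.get)
```

Unlike ED or SAM, which compare against a single representative spectrum, this rule compares against a band-wise envelope of the reference data, which is what lets it use very limited reference data.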
Poster Session
icon_mobile_dropdown
Progressive constrained energy minimization for subpixel detection
Constrained energy minimization (CEM) has been widely used for subpixel detection. It makes use of the sample correlation matrix R to suppress the background and thereby enhance detection of targets of interest. In many real-world problems, implementing target detection on a timely basis is crucial, especially for moving targets. However, since calculation of the sample correlation matrix R requires the complete data set prior to its use in detection, CEM cannot be implemented as a real-time processing algorithm. To resolve this dilemma, the sample correlation matrix R must be replaced with a causal sample correlation matrix formed only from the data samples already visited and the sample currently being processed. This causality is a prerequisite for real-time processing, and by virtue of it, designing and developing a real-time version of CEM becomes feasible. This paper presents a progressive CEM (PCEM) in which the causal sample correlation matrix is updated sample by sample. Accordingly, PCEM allows CEM to be implemented as a causal CEM (C-CEM) as well as a real-time (RT) CEM via a recursive update equation.
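The causal, sample-by-sample update can be sketched with the standard Sherman-Morrison rank-one identity, so that the inverse correlation matrix never has to be recomputed from scratch. This is a generic illustration of the idea, not the paper's specific recursive equation; the small diagonal initializer `eps` is an assumption added so the first inverses exist.

```python
import numpy as np

def progressive_cem(pixels, d, eps=1e-3):
    """Causal CEM: maintain the inverse of S = eps*I + sum_i x_i x_i^T with
    Sherman-Morrison rank-one updates, scoring each pixel as it arrives.
    The 1/n normalization of R = S/n cancels inside the CEM filter output."""
    L = len(d)
    Sinv = np.eye(L) / eps
    scores = []
    for x in pixels:
        # rank-one update: S <- S + x x^T
        Sx = Sinv @ x
        Sinv = Sinv - np.outer(Sx, Sx) / (1.0 + x @ Sx)
        # CEM filter with the causal correlation matrix seen so far
        w = Sinv @ d / (d @ Sinv @ d)
        scores.append(w @ x)
    return np.array(scores)
```

Each update and filter evaluation costs O(L^2) in the number of bands L, independent of how many pixels have been processed — the property that makes a real-time implementation plausible.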
GPUs for parallel on-board hyperspectral image radiometric normalization
Yuanfeng Wu, Bing Zhang, Haina Zhao, et al.
This paper proposes a GPU-based implementation of radiometric normalization algorithms as a representative case study of on-board data processing techniques for hyperspectral imagery. Three radiometric normalization algorithms based on the column average and standard deviation of the raw image statistics were implemented and applied to real hyperspectral images to evaluate their performance. The algorithms were implemented using the Compute Unified Device Architecture (CUDA) and tested on the NVIDIA Tesla C2075 architecture. The airborne Pushbroom Hyperspectral Imager (PHI) was flown to acquire spectrally contiguous images as experimental datasets. The results show that MN worked best among the three methods, and the speedups achieved by the GPU implementations over their CPU counterparts are substantial.
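The abstract does not specify the three algorithms (or which method "MN" denotes), but column-statistics normalization for pushbroom imagery commonly means moment matching each detector column to common reference statistics. A CPU-side numpy sketch of that variant, offered as an assumption rather than the paper's method:

```python
import numpy as np

def moment_matching(band):
    """Column-wise moment matching: rescale every detector column so its
    mean and standard deviation match the band's global statistics,
    removing column-to-column (detector) response differences."""
    col_mu = band.mean(axis=0)
    col_sd = band.std(axis=0)
    g_mu, g_sd = band.mean(), band.std()
    gain = np.where(col_sd > 0, g_sd / col_sd, 1.0)
    return (band - col_mu) * gain + g_mu
```

Because every column is processed independently with only two statistics, the operation maps naturally onto one GPU thread block per column, which is what makes it attractive for on-board CUDA implementations.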
Impact of spatial complexity preprocessing on hyperspectral data unmixing
Stefan A. Robila, Kimberly Pirate, Terrance Hall
Much of the success of hyperspectral image processing techniques has its origins in multidimensional signal processing, with a special emphasis on optimization of objective functions. Many of these techniques (ICA, PCA, NMF, OSP, etc.) operate on collections of one-dimensional spectra and do not take into consideration any spatially based characteristics (such as the shapes of objects in the scene). Recently, in an effort to improve processing results, several approaches that characterize spatial complexity (based on neighborhood information) have been introduced.

Our goal is to investigate how spatial complexity based approaches can be employed as preprocessing techniques for other previously established methods. First, for each spatial complexity based technique, we designed a step that generates a hyperspectral cube scaled according to spatial information. Next, we fed the new cubes to a group of processing techniques such as ICA and PCA, and compared the results obtained on the scaled data with those obtained on the original, full data.

We built upon these initial results by employing additional spatial complexity approaches, and we introduced new hybrid approaches that embed the spatial complexity step directly into the main processing stage.
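The two-stage pipeline described above (scale the cube by a spatial-complexity measure, then run a spectral method on the result) can be sketched as follows. The abstract does not define the complexity measure, so local neighborhood variance is used here purely as one plausible stand-in, and PCA is implemented via SVD; all names are assumptions for the example.

```python
import numpy as np

def local_variance(img, size=3):
    """Per-pixel variance over a size x size neighborhood (edge-padded)."""
    r = size // 2
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    windows = np.stack([pad[i:i + h, j:j + w]
                        for i in range(size) for j in range(size)])
    return windows.var(axis=0)

def spatial_complexity_scale(cube, size=3):
    """Scale a hyperspectral cube (rows, cols, bands) by a per-pixel
    spatial-complexity weight -- here, local variance averaged over
    bands. One plausible realization of the scaling step; the paper's
    exact measure is not given in the abstract."""
    weight = np.mean([local_variance(cube[:, :, b], size)
                      for b in range(cube.shape[2])], axis=0)
    weight = weight / (weight.max() + 1e-12)      # normalize to [0, 1]
    return cube * (1.0 + weight[:, :, None])      # emphasize complex regions

def pca_components(cube, n=3):
    """PCA on the (scaled) cube, treating each pixel spectrum as a sample."""
    X = cube.reshape(-1, cube.shape[2])
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:n].T).reshape(cube.shape[0], cube.shape[1], n)
```

Comparing `pca_components(cube)` with `pca_components(spatial_complexity_scale(cube))` mirrors the original-versus-scaled comparison the paper performs.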
Concealed target detection using hyperspectral imagers based on intersection kernel of SVM
This paper presents a concealed target detection method based on the intersection kernel Support Vector Machine (SVM). Hyperspectral imagers are widely used for target detection and material analysis. In military applications, they can be used for border protection, concealed target detection, reconnaissance, and surveillance. If disguised enemies are not detected in advance, the damage to allied forces from an unexpected attack can be catastrophic. Concealed object detection using radar and terahertz methods is widespread; however, these active techniques are easily exposed to the enemy. Electro-Optical Counter-Countermeasures (EOCCM) using hyperspectral imagers can be a feasible solution. We use the selected feature bands directly together with an intersection kernel based SVM. Different materials exhibit different spectra even though they look similar to a CCD camera. We propose a novel concealed target detection method consisting of four steps: feature band selection, feature extraction, SVM learning, and target detection.
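The intersection kernel mentioned above is the histogram intersection kernel, K(x, y) = Σᵢ min(xᵢ, yᵢ), defined on non-negative feature vectors such as selected reflectance bands. A minimal sketch of the Gram-matrix computation (the function name is an assumption; the paper's band selection and training details are not given in the abstract):

```python
import numpy as np

def intersection_kernel(A, B):
    """Histogram intersection kernel: K(x, y) = sum_i min(x_i, y_i).
    A: (n, L) spectra, B: (m, L) spectra -> (n, m) Gram matrix.
    Spectra are assumed non-negative (e.g. selected reflectance bands)."""
    # broadcast to (n, m, L) and take elementwise minima, then sum bands
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)
```

With scikit-learn, for example, such a Gram matrix could be passed to `SVC(kernel='precomputed')` for the SVM learning step, training on `intersection_kernel(X_train, X_train)` and scoring test pixels with `intersection_kernel(X_test, X_train)`.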
Fusion and quality analysis for remote sensing images using contourlet transform
Yoonsuk Choi, Ershad Sharifahmadian, Shahram Latifi
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting the spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving both the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) the image fusion method and ii) quality analysis of the fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. The fusion framework and scheme used in the study are discussed in detail. Second, quality analysis of the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than conventional wavelet-based fusion methods.
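The abstract evaluates fusion results both spatially and spectrally but does not name the metrics used. Two widely used choices are the spectral angle mapper (SAM), which measures per-pixel spectral shape distortion, and the Pearson correlation coefficient as a simple spatial-agreement measure; the sketch below is illustrative, not the paper's metric suite:

```python
import numpy as np

def spectral_angle(ref, fused):
    """Mean spectral angle (radians) between reference and fused cubes
    of shape (rows, cols, bands); 0 means identical spectral shape.
    SAM is invariant to per-pixel scaling, so it isolates spectral
    distortion from brightness changes."""
    a = ref.reshape(-1, ref.shape[2])
    b = fused.reshape(-1, fused.shape[2])
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1)
                            * np.linalg.norm(b, axis=1) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)).mean())

def correlation_coefficient(ref, fused):
    """Pearson correlation between flattened images: a simple spatial
    quality metric (1.0 = perfect linear agreement)."""
    return float(np.corrcoef(ref.ravel(), fused.ravel())[0, 1])
```

A fusion method that scores a small spectral angle against the original multispectral image while correlating strongly with the panchromatic detail is preserving both kinds of information, which is the trade-off such quality analyses quantify.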