Proceedings Volume 8537

Image and Signal Processing for Remote Sensing XVIII


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 16 November 2012
Contents: 11 Sessions, 49 Papers, 0 Presentations
Conference: SPIE Remote Sensing 2012
Volume Number: 8537

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8537
  • Multiresolution Fusion
  • Techniques for Data Pre-Processing
  • Image Segmentation
  • Target Detection and Spectral Unmixing
  • Classification, Object Detection and Regression
  • Image Registration and Analysis of Temporal Data
  • 3D Processing and DEM Extraction
  • SAR Data Analysis I: Joint Session with Conferences 8536 and 8537
  • SAR Data Analysis II: Joint Session with Conferences 8536 and 8537
  • Poster Session
Front Matter: Volume 8537
This PDF file contains the front matter associated with SPIE Proceedings Volume 8537, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Multiresolution Fusion
Color and spatial distortions of pan-sharpening methods in real and synthetic images
A. Medina, J. Marcello, F. Eugenio, et al.
Image fusion is the process of combining information from two or more images into a single composite image that is more informative for visual perception or additional processing. Pan-sharpening algorithms work either in the spatial or in the transform domain, and the most popular and effective methods include arithmetic combinations (Brovey transform), the intensity-hue-saturation transform (IHS), principal component analysis (PCA) and different multiresolution analysis-based methods, typically wavelet transforms. In recent years, a number of image fusion quality assessment metrics have been proposed. Automatic quality assessment is necessary to evaluate the possible benefits of fusion, to determine an optimal setting of parameters, and to compare results obtained with different algorithms, checking the improvement in spatial resolution while preserving the spectral content of the data. This work addresses the challenging topic of the quality evaluation of pan-sharpening methods. In particular, a database with a synthetic image and real GeoEye satellite data was created, and several pan-sharpening methods were implemented and tested. Results on the color and spatial distortions of each method are presented, demonstrating that some color bands are more affected than others depending on the fusion technique. After the evaluation of these fusion algorithms, we can conclude that, in general, the à trous wavelet-based methods achieve the best spectral performance while the IHS-based techniques attain the best spatial accuracy.
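As an illustration of the arithmetic-combination family cited above, the following is a minimal sketch of the Brovey transform (the function name, array layout and epsilon guard are my own assumptions, not the authors' implementation):

```python
import numpy as np

def brovey_sharpen(ms, pan, eps=1e-12):
    """Brovey-transform pan-sharpening sketch.

    ms  : (H, W, B) multispectral image, pre-interpolated to the Pan grid.
    pan : (H, W) panchromatic image.

    Each band is multiplied by the ratio of the Pan value to a synthetic
    intensity (the band mean), injecting spatial detail multiplicatively.
    """
    intensity = ms.mean(axis=2)             # synthetic intensity image
    ratio = pan / (intensity + eps)         # per-pixel injection gain
    return ms * ratio[:, :, None]           # rescale every band
```

A spectrally flat pixel is simply rescaled to the Pan level, which is one reason Brovey-type methods tend to distort color more than multiresolution methods.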
Advantages of Laplacian pyramids over "à trous" wavelet transforms for pansharpening of multispectral images
The advantages provided by the generalized Laplacian pyramid (GLP) over the widespread "à trous" wavelet (ATW) transform for multispectral (MS) pansharpening based on multiresolution analysis (MRA) are investigated. The most notable difference lies in the way GLP and ATW deal with aliasing possibly occurring in the MS data, which originates from an insufficient sampling step size, or equivalently from a too-high amplitude of the modulation transfer function (MTF) at the Nyquist frequency, and may generate annoying jagged patterns that survive in the sharpened image. In this paper, it is proven that GLP is capable of compensating for the aliasing of the MS data, unlike ATW, and analogously to component substitution (CS) fusion methods, thanks to the decimation and interpolation stages present in its flowchart. Experimental results are presented in terms of global quality/distortion score indexes (SAM, ERGAS and Q4) for increasing amounts of aliasing, measured by the amplitude at the Nyquist frequency of the Gaussian-like lowpass filter simulating the average MTF of the individual spectral channels of the instrument. GLP- and ATW-based methods, both using the same MTF filters and the same global injection gain, are compared to show the advantages of GLP over ATW in the presence of aliasing of the MS bands.
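Of the score indexes mentioned (SAM, ERGAS, Q4), the Spectral Angle Mapper has the simplest closed form; a minimal sketch, assuming an (H, W, B) array layout (names are illustrative):

```python
import numpy as np

def sam_degrees(ref, test, eps=1e-12):
    """Mean Spectral Angle Mapper (SAM) between two (H, W, B) images,
    in degrees: the angle between the per-pixel spectral vectors,
    averaged over the scene (0 = identical spectral directions)."""
    dot = (ref * test).sum(axis=2)
    norms = np.linalg.norm(ref, axis=2) * np.linalg.norm(test, axis=2)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```

Because SAM depends only on spectral direction, not magnitude, it isolates spectral distortion from the intensity changes a sharpening method introduces.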
Multiresolution image fusion using compressive sensing and graph cuts
V. Harikumar, Manjunath V. Joshi, Mehul S. Raval, et al.
Multiresolution fusion refers to the enhancement of the low spatial resolution (LR) of multispectral (MS) images to that of the panchromatic (Pan) image without compromising the spectral details. Many present-day methods for multiresolution fusion require that the Pan and MS images be registered. In this paper we propose a new approach for multiresolution fusion based on the theory of compressive sensing and graph cuts. We first estimate a close approximation to the fused image by using the sparseness in the given Pan and MS images. Assuming that they have the same sparseness, the initial estimate of the fused image is obtained as a linear combination of the Pan blocks. The weights in the linear combination are estimated using l1 minimization, making use of the MS and the downsampled Pan image. The final solution is obtained by using a model-based approach. The low-resolution MS image is modeled as a degraded and noisy version of the fused image, in which the degradation matrix entries are estimated by using the initial estimate and the MS image. Since MS fusion is an ill-posed inverse problem, we use a regularization-based approach to obtain the final solution. A truncated quadratic smoothness prior is used for the preservation of discontinuities in the fused image. A suitable energy function, consisting of a data-fitting term and the prior term, is then formed and minimized using a graph-cuts-based approach in order to obtain the fused image. The advantage of the proposed method is that it does not require the registration of Pan and MS data. The spectral characteristics are well preserved in the fused image since we do not operate directly on the Pan digital numbers. The effectiveness of the proposed method is illustrated by conducting experiments on synthetic as well as real satellite images.
Quantitative comparison of the proposed method in terms of Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), Correlation Coefficient (CC), Relative Average Spectral Error (RASE) and Spectral Angle Mapper (SAM) with state-of-the-art approaches indicates the superiority of our approach.
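The ERGAS index used in the comparison has a standard definition; a minimal sketch (the function name, array layout and default resolution ratio are illustrative assumptions):

```python
import numpy as np

def ergas(ref, fused, ratio=0.25):
    """ERGAS (relative dimensionless global error in synthesis);
    lower is better, 0 for a perfect result.

    ref, fused : (H, W, B) reference and fused images.
    ratio      : spatial resolution ratio h/l of Pan to MS pixel size
                 (e.g. 0.25 for 1 m Pan against 4 m MS).
    """
    rmse = np.sqrt(((ref - fused) ** 2).mean(axis=(0, 1)))  # per band
    mean = ref.mean(axis=(0, 1))                            # per band
    return 100.0 * ratio * np.sqrt(((rmse / mean) ** 2).mean())
```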
Techniques for Data Pre-Processing
Multitemporal evaluation of topographic correction algorithms using synthetic images
I. Sola, M. González de Audícana, J. Álvarez-Mozos, et al.
Land cover classification and quantitative analysis of multispectral data in mountainous regions is considerably hampered by the influence of topography on the spectral response pattern. In recent years, different topographic correction (TOC) algorithms have been proposed to correct illumination differences between sunny and shaded areas observed by optical remote sensors. Although the number of available TOC methods is high, the evaluation of their performance usually relies on the existence of precise land cover information, and a standardised and objective evaluation procedure has not been proposed yet. Moreover, previous TOC assessment studies only considered a limited set of illumination conditions, normally assuming favourable illumination conditions. This paper presents a multitemporal evaluation of TOC methods based on synthetically generated images in order to evaluate the influence of solar angles on the performance of TOC methods. These synthetic images represent the radiance an optical sensor would receive under specific geometric and temporal acquisition conditions and assuming a certain land-cover type. A method for creating synthetic images using state-of-the-art irradiance models has been tested for different periods of the year, which entails a variety of solar angles. Considering the real topography of a specific area, a Synthetic Real (SR) image is obtained; considering the relief of the area as completely flat, a Synthetic Horizontal (SH) image is obtained. The comparison between the corrected image obtained by applying a TOC method to an SR image and the SH image of the same area, i.e. the ideal correction, allows assessing the performance of each TOC algorithm. This performance is quantitatively measured through the widely accepted Structural Similarity Index (SSIM) on four selected TOC methods, assessing their behaviour over the year.
Among them, C-correction ranked first, giving satisfactory results in the majority of cases, while other algorithms showed good performance in summer but worse results in winter.
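The C-correction referred to above has a well-known form: the band is rescaled by (cos θz + c)/(cos i + c), where c = b/m comes from a linear regression of radiance against the cosine of the local incidence angle. A hedged single-band sketch (names are illustrative):

```python
import numpy as np

def c_correction(radiance, cos_i, sun_zenith_deg):
    """C-correction topographic normalization of one band.

    radiance : observed band values.
    cos_i    : cosine of the local solar incidence angle, same shape.

    Fits radiance = m * cos_i + b over the scene, sets c = b / m, and
    rescales each pixel as if it were on horizontal terrain.
    """
    m, b = np.polyfit(cos_i.ravel(), radiance.ravel(), 1)
    c = b / m
    cos_sz = np.cos(np.radians(sun_zenith_deg))
    return radiance * (cos_sz + c) / (cos_i + c)
```

If the radiance really is linear in cos i, every pixel maps to the same horizontal-terrain value m·cos θz + b, which is exactly the idealized behaviour the SR/SH synthetic image pair is designed to test.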
An automated method for relative radiometric correction performed through spectral library based classification and comparison
C. D'Elia, S. Ruscino
In this paper we propose a method to perform automated radiometric correction of remotely sensed multispectral and hyperspectral images. The effects of the atmosphere, as well as the calibration errors which satellite sensors may present, can be compensated by performing a radiometric correction in order to achieve good performance in different applications, such as classification and change detection. As far as change detection is concerned, relative radiometric correction is particularly interesting since it deals with images which have to be compared, and since in this context an absolute correction may be characterized by a high complexity. One method for performing radiometric correction of multispectral images is based on a least-squares approach: considering one image as the reference and the other as a linearly scaled version of it, the linear coefficients can be calculated by using a set of conveniently chosen control points. Unfortunately, the choice of control points is a tricky operation, strictly connected to the specific application. In this paper we propose an automated method for performing relative radiometric correction of multispectral remotely sensed images, in which the choice of the control points is based on a comparison of the spectral content of those images to the spectral response of known materials. Specifically, we perform a vector quantization of the images separately, considering N quantization levels represented by N properly selected known materials' signatures. Then the quantized images are compared in order to identify the areas classified as belonging to the same class, i.e. identified by the same quantization index, which constitute the subset of control points to be used for the relative radiometric correction.
Experimental results showed that choosing points with a homogeneous spectral content for radiometric correction improves the performance of specific image processing algorithms, such as change detection and classification algorithms.
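The per-band least-squares correction, with control points taken where the two quantized label maps agree, can be sketched roughly as follows (a simplified stand-in for the authors' pipeline; names and array layout are assumptions):

```python
import numpy as np

def relative_correction(subject, reference, labels_s, labels_r):
    """Relative radiometric correction of `subject` toward `reference`.

    subject, reference : (H, W, B) co-registered images.
    labels_s, labels_r : (H, W) quantization index maps; pixels where
    the two maps agree serve as control points.

    Fits reference ≈ gain * subject + offset per band over the control
    points and applies the fit to the whole subject image.
    """
    mask = labels_s == labels_r
    out = np.empty_like(subject, dtype=float)
    for b in range(subject.shape[2]):
        x = subject[:, :, b][mask]
        y = reference[:, :, b][mask]
        gain, offset = np.polyfit(x, y, 1)   # least-squares linear fit
        out[:, :, b] = gain * subject[:, :, b] + offset
    return out
```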
A linear approach for radiometric calibration of full-waveform Lidar data
Andreas Roncat, Norbert Pfeifer, Christian Briese
During the past decade, small-footprint full-waveform lidar systems have become increasingly available, especially airborne ones. The primary output of these systems is high-resolution topographic information in the form of three-dimensional point clouds over large areas. Recording the temporal profile of the transmitted laser pulse and of its echoes makes it possible to detect more echoes per pulse than with discrete-return lidar systems, resulting in a higher point density over complex terrain. Furthermore, full-waveform instruments also allow retrieving radiometric information of the scanned surfaces, commonly as an amplitude value and an echo width stored together with the 3D coordinates of the single points. However, the radiometric information needs to be calibrated in order to merge datasets acquired at different altitudes and/or with different instruments, so that it becomes an object property independent of the flight mission and instrument parameters. State-of-the-art radiometric calibration techniques for full-waveform lidar data are based on Gaussian decomposition to overcome the ill-posedness of the inherent inversion problem, i.e. deconvolution. However, these approaches make strong assumptions on the temporal profile of the transmitted laser pulse and on the physical properties of the scanned surfaces, represented by the differential backscatter cross-section. In this paper, we present a novel approach to radiometric calibration using uniform B-splines. This type of function allows linear inversion without constraining the temporal shape of the modeled signals. The theoretical derivation is illustrated by examples recorded with a Riegl LMS-Q560 and an Optech ALTM 3100 system, respectively.
Spectral discrimination based on the optimal informative parts of the spectrum
S. E. Hosseini Aria, M. Menenti, B. Gorte
Developments in sensor technology boost the information content of imagery collected by space- and airborne hyperspectral sensors. These sensors have narrow bands close to each other that may be highly correlated, which leads to data redundancy. This paper first presents a newly developed method to identify the most informative spectral regions of the spectrum with minimum mutual dependency, and second evaluates the land-cover class separability on the given scenes using the constructed spectral bands. The method selects the most informative spectral regions of the spectrum with a defined accuracy. It is applied to hyperspectral images collected over three different types of land cover: vegetation, water and bare soil. The method gives a different band combination for each land cover, showing the most informative spectral regions; a discrimination analysis of the available classes in each scene is then carried out. Different separability measures based on the distribution of the classes and on scatter matrices were calculated. The results show that the produced bands separate the given classes well.
A stripe noise removal method of interference hyperspectral imagery based on interferogram correction
A Spatially Modulated Fourier Transform Hyperspectral Imager (HSI) aboard the Chinese Huan Jing-1A (HJ-1A) satellite has a spectral coverage of 0.459-0.956 μm with 115 spectral bands. In practice, periodic and directional stripe noise was found throughout the HSI imagery, especially in the first twenty shortwave bands. To fully utilize the information contained in the hyperspectral images, this stripe noise must be eliminated. This paper presents a new method to deal with this problem. Firstly, possible sources of HSI stripe noise are analyzed based on the interference imaging mechanism. Traditional noise sources, e.g. device position changes due to launch, non-uniformity in the instrument itself and aging degradation, are directly recorded at the focal plane array and thereafter in the interferogram. After an inverse Fourier transform is applied to the interferogram, the spatial dimension of the interference hyperspectral image is restored with complicated and untraceable stripe noise. Traditional image processing methods operating on the spectral image will therefore not be effective for removing the HSI stripe noise, and a stripe noise removal method based on interferogram correction is necessary. The implementation of the interferogram correction method is then presented, which mainly contains three steps: 1) establish a relative radiometric correction model of the interferogram based on ground scenes that are as homogeneous as possible; 2) correct the response inconsistency of the CCD array by carrying out relative radiometric correction on the interferogram; 3) convert the corrected interferogram to obtain the revised hyperspectral images. An experiment is conducted and the new method is compared with several traditional methods.
The results show that the stripe noise in HSI images can be removed more effectively by the proposed method, while the texture detail of the original image and the correlation among different bands are well preserved.
Image Segmentation
Active contours with edges: combining hyperspectral and grayscale segmentation
In this work, we introduce a method to segment hyperspectral images using a Chan-Vese framework. We utilize a modified l2 distance especially well-suited for hyperspectral classification problems. This distance considers spectral signal shape rather than illumination for the classification of objects. The practicality of multiple phase segmentation in this application is also demonstrated. We then use a high spatial resolution grayscale or color image and a high spectral, but low spatial resolution hyperspectral image to produce a fused segmentation result that is more accurate than segmentation on either image alone. Lastly, we show that the algorithm also gives a natural method for end member selection and apply this result to anomaly detection.
Automatic segmentation of textures on a database of remote-sensing images and classification by neural network
Philippe Durand, Luan Jaupi, Dariush Ghorbanzdeh
Analysis and automatic segmentation of texture is always a delicate problem. Objectively, one can opt, quite naturally, for a statistical approach. Based on higher-order moments, these techniques are very reliable and accurate but experimentally expensive. We propose in this paper a well-proven approach for texture analysis in remote sensing, based on geostatistics. The labeling of different textures such as ice, clouds, water and forest on a sample test image is learned by a neural network. The texture parameters are extracted from the shape of the autocorrelation function, calculated on window sizes appropriate for the optimal characterization of each texture. A mathematical model from fractal geometry is particularly well suited to characterizing cloud texture, and provides a very fine separation between the cloud texture and the ice. The geostatistical parameters are assembled into a feature vector characterizing the textures. A robust multilayer neural network is then trained to classify all the images in the database from a correctly selected learning set. In the design phase, several alternatives were considered, and it turns out that a three-layer network, with an input layer, an intermediate layer and an output layer, is very suitable for the proposed classification. In the learning phase the classification results are very good. This approach can bring precious information to a geographic information system, for instance exploiting (or discarding) the cloud texture when focusing on other themes such as deforestation or changes in the ice.
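The autocorrelation-shape descriptor described above can be sketched as follows (a row-lag variant on a single window; the normalization and lag range are my assumptions, not the authors' parameters):

```python
import numpy as np

def autocorrelation_feature(window, max_lag):
    """Normalized autocorrelation of a texture window along rows.

    Returns one value per lag; the decay of this curve with lag is the
    kind of shape descriptor used to separate textures such as clouds,
    ice, water and forest.
    """
    w = np.asarray(window, dtype=float)
    w = w - w.mean()               # remove the window mean
    var = (w * w).mean()           # lag-0 normalizer
    return np.array([(w[:, lag:] * w[:, :-lag]).mean() / var
                     for lag in range(1, max_lag + 1)])
```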
Improved boundary tracking by off-boundary detection
This work discusses an improvement to the boundary tracking algorithm introduced by Chen et al. (2011). This method samples points in an image locally and utilizes the CUSUM algorithm to reduce tracking problems due to noise or texture. However, when tracking problems do arise, the local nature of the algorithm offers no mechanism for recovery. This work introduces a second CUSUM algorithm to detect off-boundary movement, compensating for such movement by backtracking. Boundary tracking results comparing the two algorithms are presented, including both image data and a numerical comparison of the effectiveness of the algorithms.
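The CUSUM building block that both the original tracker and the proposed off-boundary detector rely on is standard; a minimal one-sided sketch (parameter names are illustrative):

```python
def cusum_alarm(samples, target_mean, drift, threshold):
    """One-sided CUSUM change detector.

    Accumulates deviations of `samples` above `target_mean` (less a
    `drift` allowance that absorbs noise); returns the index of the
    first sample at which the statistic exceeds `threshold`, or None
    if no change is detected.
    """
    g = 0.0
    for i, x in enumerate(samples):
        g = max(0.0, g + (x - target_mean - drift))
        if g > threshold:
            return i
    return None
```

In the tracking setting described above, a second detector of this form watches a statistic of off-boundary movement; once it fires, the tracker backtracks to the last trusted position.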
Extending the fractional order Darwinian particle swarm optimization to segmentation of hyperspectral images
Pedram Ghamisi, Micael S. Couceiro, Jon Atli Benediktsson
Hyperspectral sensors generate detailed information about the earth's surface and climate in numerous contiguous narrow spectral bands, and are widely used in resource management, agriculture, environmental monitoring, and other fields. However, due to the high dimensionality of hyperspectral data, it is difficult to design accurate and efficient image segmentation algorithms for hyperspectral imagery. In this paper a new multilevel thresholding method for segmentation of hyperspectral images into different homogeneous regions is proposed. The new method is based on the Fractional-Order Darwinian Particle Swarm Optimization (FODPSO), which exploits the many swarms of test solutions that may exist at any time. In addition, the concept of the fractional derivative is used to control the convergence rate of particles. The FODPSO is used to solve the so-called Otsu problem for each channel of the hyperspectral data as a grayscale image that indicates the spectral response to a particular frequency in the electromagnetic spectrum. In other words, the problem of n-level thresholding is reduced to an optimization problem that searches for the thresholds maximizing the between-class variance. Experimental results compare the FODPSO favourably with traditional PSO for multi-level segmentation of hyperspectral images. The FODPSO performs better than the other method in terms of both CPU time and fitness, being able to find the optimal set of thresholds, with a larger between-class variance, in less computational time.
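The Otsu criterion that the FODPSO particles maximize can be evaluated directly from its definition; a sketch of the objective alone (the swarm optimizer itself is not shown, and names are illustrative):

```python
import numpy as np

def between_class_variance(gray, thresholds):
    """Otsu's between-class variance of a grayscale array for a given
    set of n-level thresholds; multilevel thresholding searches for
    the thresholds that maximize this value."""
    vals = np.asarray(gray, dtype=float).ravel()
    edges = [-np.inf] + sorted(thresholds) + [np.inf]
    total_mean = vals.mean()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        cls = vals[(vals > lo) & (vals <= hi)]
        if cls.size:
            weight = cls.size / vals.size            # class probability
            var += weight * (cls.mean() - total_mean) ** 2
    return var
```

A swarm-based optimizer such as FODPSO treats the threshold vector as a particle position and this function as its fitness.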
Target Detection and Spectral Unmixing
Target attribute-based false alarm rejection in small infrared target detection
Infrared search and track is an important research area in military applications. Although much work exists on small infrared target detection methods, these methods are hard to apply in the field due to the high false alarm rates caused by clutter. This paper presents a novel target attribute extraction and machine learning-based target discrimination method. Eight kinds of target features are extracted and analyzed statistically. Learning-based classifiers such as SVM and Adaboost are developed and compared with conventional classifiers on real infrared images. In addition, their generalization capability is inspected for various infrared clutters.
Computationally efficient strategies to perform anomaly detection in hyperspectral images
In remote sensing, hyperspectral sensors are effectively used for target detection and recognition because of their high spectral resolution, which allows discrimination of different materials in the sensed scene. When a priori information about the spectrum of the targets of interest is not available, target detection turns into anomaly detection (AD), i.e. searching for objects that are anomalous with respect to the scene background. In the field of AD, anomalies can generally be associated with observations that statistically deviate from the background clutter, where the latter is intended either as a local neighborhood surrounding the observed pixel or as a large part of the image. In this context, much effort has been devoted to reducing the computational load of AD algorithms so as to furnish information for real-time decision making. In this work, a sub-class of AD methods is considered that aims at detecting small rare objects that are anomalous with respect to their local background. Such techniques are not only characterized by mathematical tractability but also allow the design of real-time strategies for AD. Within these methods, one of the most established anomaly detectors is the RX algorithm, which is based on a local Gaussian model of the background. In the literature, the RX decision rule has been employed to develop computationally efficient algorithms implemented in real-time systems. In this work, a survey of computationally efficient methods to implement the RX detector is presented, where advanced algebraic strategies are exploited to speed up the estimation of the covariance matrix and of its inverse. The overall number of operations required by the different implementations of the RX algorithm is compared and discussed while varying the RX parameters, in order to show the computational improvements achieved with the introduced algebraic strategies.
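For reference, a naive global-statistics sketch of the RX decision rule (the squared Mahalanobis distance from the background mean); the survey's point is precisely that the covariance estimate and its inverse below are the terms worth accelerating:

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly scores for an (H, W, B) hyperspectral cube.

    Models the background as Gaussian with the scene mean and
    covariance, and scores each pixel by its squared Mahalanobis
    distance; high scores flag spectral anomalies.
    """
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.inv(cov)        # the costly step in practice
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(H, W)
```

Local variants replace the scene statistics with those of a sliding window around each pixel, which multiplies the cost and motivates the fast update strategies surveyed in the paper.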
Concentration measurements of complex mixtures of broadband absorbers by widely tunable optical parametric oscillator laser spectroscopy
K. Ruxton, N. A. Macleod, D. Weidmann, et al.
The ability to obtain accurate vapour parameters from a compound's absorption spectrum is essential for quantifying the presence of an absorber. Concentration measurements are required for a variety of applications including environmental monitoring, pipeline leak detection, surface contamination and breath analysis. This work demonstrates sensitive concentration measurements of complex mixtures of volatile organic compounds (VOCs) using broadly tunable mid-wave infrared (MWIR) laser spectroscopy. Due to the high absorption cross-sections, the MWIR spectral region is ideal for carrying out sensitive concentration measurements of VOCs by tunable laser absorption spectroscopy (TLAS) methods. Absorption spectra of mixtures of VOCs were recorded using a MWIR optical parametric oscillator (OPO) with a tuning range covering 2.5 μm to 3.7 μm. The output of the MWIR OPO was coupled to a multi-pass astigmatic Herriott gas cell, maintained at atmospheric pressure, which can provide up to 210 m of absorption path length, with the transmission output from the cell monitored by a detector. The resulting spectra were processed by a concentration retrieval algorithm derived from the optimal estimation method, taking into account both multiple broadband absorbers and interfering molecules that exhibit narrow multi-line absorption features. In order to demonstrate the feasibility of the concentration measurements and assess the capability of the spectral processor, experiments were conducted on calibrated VOC vapour mixtures flowing through the spectroscopic cell, with concentrations ranging from parts per billion (ppb) to parts per million (ppm). This work represents a first step in an effort to develop and apply a similar concentration fitting algorithm to hyperspectral images in order to provide concentration maps of the spatial distribution of multi-species vapours.
The reported functionality of the novel fitting algorithm makes it a valuable addition to the existing data processing tools for parameter information recovery from recorded absorption data.
A regularization based method for spectral unmixing of imaging spectrometer data
Jignesh S. Bhatt, Manjunath V. Joshi, Mehul S. Raval
In spectral unmixing, imaging spectrometer data are unmixed to yield the underlying proportions (abundance maps) of the constituent materials after extracting (estimating) their spectral signatures (endmembers). Under the linear mixing model, we consider an unmixing problem wherein, given the extracted endmembers, the task is to estimate the abundances. This is a severely ill-posed problem, as the hyperspectral signatures are strongly correlated, resulting in an ill-conditioned signature matrix that makes the estimation highly sensitive to noise. Further, the acquired data often do not fully satisfy the simplex requirement imposed by linearity, resulting in inaccurate extraction of endmembers. This in turn can lead to an unstable solution in the subsequent estimation of abundances. In this paper, we adopt a regularization-based alternative to achieve a stable solution by improving the conditioning of the problem. For this purpose, we propose to use Tikhonov regularization within the total least squares (TLS) estimation framework. The problem is formulated with a sum of the TLS as its data term, which takes care of possible modeling errors in both abundances and endmembers, and the Tikhonov prior, which imposes smoothness constraints. The resultant energy function, being convex, is minimized by a gradient-based optimization technique wherein the solution space is restricted to yield nonnegative abundances. We analyze the regularized solution and compare it with a TLS-based direct inversion. Experiments are conducted with different noise levels on simulated data. The results are compared with state-of-the-art approaches using different quantitative measures and by observing the consistency within spatial patterns of the estimated abundances. Finally, the proposed approach is applied to AVIRIS data to obtain abundance maps of the constituent materials.
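As a simplified stand-in for the paper's TLS-plus-Tikhonov formulation, the sketch below shows an ordinary Tikhonov (ridge) abundance estimate under the linear mixing model, with a plain nonnegativity clip in place of the constrained gradient optimization; it illustrates how the regularizer improves the conditioning of a correlated signature matrix:

```python
import numpy as np

def tikhonov_abundances(E, y, lam=1e-2):
    """Tikhonov-regularized abundance estimate for one pixel.

    E   : (B, M) endmember signature matrix (B bands, M materials).
    y   : (B,) observed spectrum, assumed y ≈ E @ a.
    lam : regularization weight; larger values stabilize the solution
          when the columns of E are strongly correlated.

    Solves (E'E + lam * I) a = E'y, then clips to nonnegative values.
    """
    M = E.shape[1]
    a = np.linalg.solve(E.T @ E + lam * np.eye(M), E.T @ y)
    return np.clip(a, 0.0, None)
```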
Classification, Object Detection and Regression
A novel active learning method for support vector regression to estimate biophysical parameters from remotely sensed images
This paper presents a novel active learning (AL) technique in the context of ε-insensitive support vector regression (SVR) to estimate biophysical parameters from remotely sensed images. The proposed AL method aims at selecting the most informative and representative unlabeled samples which have maximum uncertainty, diversity and density assessed according to the SVR estimation rule. This is achieved on the basis of two consecutive steps that rely on kernel k-means clustering. In the first step the most uncertain unlabeled samples are selected by removing the most certain ones from a pool of unlabeled samples. In SVR problems, the most uncertain samples are located outside or on the boundary of the ε-tube of the SVR, as their target values have the lowest confidence of being correctly estimated. In order to select these samples, kernel k-means clustering is applied to all unlabeled samples together with the training samples that are not SVs, i.e., those that are inside the ε-tube (non-SVs). Then, clusters containing non-SVs are rejected, whereas the unlabeled samples contained in the remaining clusters are selected as the most uncertain samples. In the second step, the samples located in high-density regions of the kernel space and as diverse as possible from each other are chosen among the uncertain samples. The density and diversity of the unlabeled samples are evaluated on the basis of their cluster information. To this end, the density of each cluster is first measured by the ratio of the number of samples in the cluster to the distance between its two furthest samples. Then, the highest-density clusters are chosen and the medoid samples closest to the centers of the selected clusters are selected as the most informative ones. Diversity among the samples is ensured by selecting only one sample from each selected cluster.
Experiments applied to the estimation of single-tree parameters, i.e., tree stem volume and tree stem diameter, show the effectiveness of the proposed technique.
Reduction of training costs using active classification in fused hyperspectral and LiDAR data
Sebastian Wuttke, Hendrik Schilling, Wolfgang Middelmann
This paper presents a novel approach for the reduction of training costs in classification with co-registered hyperspectral (HS) and Light Detection and Ranging (LiDAR) data using an active classification framework. Fully automatic classification can be achieved by unsupervised learning, which is not suited for adjustment to specific classes. On the other hand, supervised classification with predefined classes needs a lot of training examples, which need to be labeled with the ground truth, usually at a significant cost. The concept of active classification alleviates these problems by the use of a selection strategy: only selected samples are ground-truth labeled and used as training data. One common selection strategy is to incorporate in a first step the current state of the classification algorithm and choose only the examples for which the expected information gain is maximized. In the second step a conventional classification algorithm is trained using this data. By alternating between these two steps the algorithm reaches high classification accuracy with fewer training samples and therefore lower training costs. The approach presented in this paper involves the user in the active selection strategy, and the k-NN algorithm is chosen for classification. The results further benefit from fusing the heterogeneous information of HS and LiDAR data within the classification algorithm. For this purpose, several HS features, such as vegetation indices, and LiDAR features, such as relative height and roughness, are extracted. This increases the separability between different classes and reduces the dimensionality of the HS data. The practicability and performance of this framework are shown for the detection and separation of different kinds of vegetation, e.g. trees and grass, in an urban area of Berlin. The HS data was obtained by the SPECIM AISA Eagle 2 sensor, the LiDAR data by a Riegl LMS-Q560.
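Two of the ingredients above, an HS vegetation index and the stacking of HS and LiDAR features into one vector per pixel, can be sketched as follows (NDVI is the standard index; the feature layout is an assumption, not the authors' exact feature set):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index: close to +1 over dense
    vegetation, near 0 or negative over bare ground and water."""
    return (nir - red) / (nir + red + eps)

def fuse_features(hs_features, lidar_features):
    """Concatenate per-pixel HS features (e.g. vegetation indices)
    with LiDAR features (e.g. relative height, roughness) into a
    single feature vector per pixel for a k-NN classifier."""
    return np.concatenate([hs_features, lidar_features], axis=-1)
```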
Detection of built-up area expansion in ASTER and SAR images using conditional random fields
Benson Kipkemboi Kenduiywo, Valentyn A. Tolpekin, Alfred Stein
The heterogeneous land-cover structure in built-up areas challenges existing classification methods. This study developed a method for detecting such areas from SAR and ASTER images using conditional random fields (CRFs). A feature selection approach and a novel data-dependent term were designed and used to classify image blocks. A new approach of discriminating classes using variogram features was introduced. Mean, standard deviation and variogram slope features were used to characterize training areas, including the spatial dependencies of classes. The association potential was designed using support vector machines (SVMs), and the inverse of transformed Euclidean distance was used as the data-dependent term of the interaction potential. The latter maintained a stable accuracy when subjected to variation of a smoothness parameter while preserving class boundaries and aggregating similar labels during classification. In this way, a discontinuity-adaptive model that moderated smoothing given data evidence was obtained. The accuracy of detecting built-up areas using CRF exceeded that of Markov Random Fields (MRF), SVM and maximum likelihood classification (MLC) by 1.13%, 2.22% and 8.23% respectively. It also had the lowest fraction of false positives. Application of the method showed that built-up areas increased by 98.9 ha while 26.7 ha were converted from built-up to non-built-up areas. We conclude that the new procedure can be used to detect and monitor built-up area expansion; in this way it provides timely spatial information to urban planners and other relevant professionals.
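The variogram features mentioned above can be sketched as follows: the empirical semivariogram gamma(h) = 0.5 E[(z(x+h) − z(x))²] is computed along image rows, and the slope of its first lags distinguishes smooth surfaces from spatially uncorrelated ones. Illustrative code, not the authors' implementation:

```python
import numpy as np

def variogram_slope(img, max_lag=5):
    """Empirical semivariogram along image rows and the slope of its
    first `max_lag` lags (a simple spatial-dependence feature)."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((img[:, h:] - img[:, :-h]) ** 2)
                      for h in lags])
    slope = np.polyfit(lags, gamma, 1)[0]
    return gamma, slope

ramp = np.tile(np.arange(20.0), (20, 1))     # smooth gradient surface
rng = np.random.default_rng(0)
noise = rng.standard_normal((20, 20))        # spatially uncorrelated surface

_, slope_ramp = variogram_slope(ramp)        # rises steadily with lag
_, slope_noise = variogram_slope(noise)      # essentially flat (pure nugget)
```

In the paper these statistics, together with the mean and standard deviation, characterize training areas per image block.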
A new approach to automatic road extraction from satellite images using boosted classifiers
Umut Çinar, Ersin Karaman, Ekin Gedik, et al.
In this study, a supervised method for automatic road detection based on spectral indices and structural properties is proposed. The need to generalize the spectral features for images captured by different kinds of devices is investigated. The mean-shift segmentation algorithm is employed to partition the input multi-spectral image, with k-means used as a complementary method for structural feature generation. The Adaboost learning algorithm is applied to the extracted features to distinguish roads from non-road regions in the satellite images. The proposed algorithm is tested on an image database containing both IKONOS and GEOEYE images to verify the achieved generalization. The empirical results show that the proposed road extraction method is promising and capable of finding the majority of the road network.
Image Registration and Analysis of Temporal Data
Automatic registration of multimodal views on large aerial images
F. Uccheddu, A. Pelagotti, P. Ferrara
A new automatic method capable of registering multimodal images, such as terrain maps and multispectral data, is presented. To speed up the processing, given the large amount of data typical of such settings, the method exploits a multi-resolution approach, which may select different similarity measures depending on image resolution and size. The performances of cross correlation and maximization of mutual information (MMI) on images of different resolutions and sizes have been evaluated and are described. The adaptive strategy adopted is designed to exploit the strengths and to overcome the limitations of the similarity criteria employed. When multimodal views are to be registered on 3D models, MMI is to be preferred; strategies to improve its performance on smaller images are also presented.
Unsupervised mis-registration noise estimation in multi-temporal hyperspectral images
In this work, we focus on Anomalous Change Detection (ACD), whose goal is the detection of small changes that occurred between two hyperspectral images (HSI) of the same scene. When data are collected by airborne platforms, perfect registration between images is very difficult to achieve, and therefore a residual mis-registration (RMR) error should be taken into account in developing ACD techniques. Recently, the Local Co-Registration Adjustment (LCRA) approach has been proposed to deal with the performance reduction due to the RMR, providing excellent performance in ACD tasks. In this paper, we propose a method to estimate the first- and second-order statistics of the RMR. The RMR is modeled as a unimodal bivariate random variable whose mean value and covariance matrix have to be estimated from the data. To estimate the RMR statistics, a feature description of each image is provided in terms of interest points by extending the Scale Invariant Feature Transform (SIFT) algorithm to hyperspectral images, and false matches between descriptors belonging to different features are filtered out by means of a highly robust estimator of multivariate location based on the Minimum Covariance Determinant (MCD) algorithm. To assess the performance of the method, an experimental analysis has been carried out on a real hyperspectral dataset with high spatial resolution. The results highlight the effectiveness of the proposed approach, which provides reliable and very accurate estimates of the RMR statistics.
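The robust estimation step can be sketched with scikit-learn's Minimum Covariance Determinant estimator applied to the displacement vectors of matched keypoints. The numbers below are synthetic and the 10% contamination rate is an assumption for illustration; this is not the authors' code or data:

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
# displacements between matched keypoints: true RMR mean (1.5, -0.8) px
good = rng.multivariate_normal([1.5, -0.8], 0.04 * np.eye(2), size=180)
# false matches between different features behave as gross outliers
bad = rng.uniform(-20.0, 20.0, size=(20, 2))
shifts = np.vstack([good, bad])

# robust first/second-order statistics, insensitive to the false matches
mcd = MinCovDet(random_state=0).fit(shifts)
rmr_mean, rmr_cov = mcd.location_, mcd.covariance_
```

A plain sample mean over the same vectors would be pulled toward the outliers, which is why a high-breakdown estimator is used.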
Short-term change detection for UAV video
Günter Saur, Wolfgang Krüger
In the last years, there has been an increased use of unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken days, weeks, or even years apart. Examples of relevant changes we are looking for are recently parked or moved vehicles. As a prerequisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed lengths of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and the multivariate alteration detection.
The algorithms are adapted for use in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer IOSB; see Heinze et al. (2010). In a further step we plan to incorporate more information from the video sequences into the change detection input images, e.g., by image enhancement or by along-track stereo, both of which are available in the ABUL system.
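The local neighborhood search added to plain image differencing can be sketched as follows: each pixel of one image is compared with the best-matching pixel of the other image within a small window, so residual registration errors no longer trigger false changes. The toy images and the window radius are illustrative, not the paper's data:

```python
import numpy as np

def local_diff(a, b, radius=2):
    """Difference image where each pixel of `a` is compared against the
    best-matching pixel of `b` within a (2*radius+1)^2 neighbourhood,
    suppressing residual misalignment after registration."""
    h, w = a.shape
    pb = np.pad(b, radius, mode='edge')
    best = np.full((h, w), np.inf)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            shifted = pb[dy:dy + h, dx:dx + w]
            best = np.minimum(best, np.abs(a - shifted))
    return best

a = np.zeros((20, 20)); a[5:10, 5:10] = 1.0   # building in image 1
b = np.roll(a, 1, axis=1)                     # 1 px residual misregistration
b[12:19, 12:19] = 1.0                         # a genuine new object

plain = np.abs(a - b)              # flags the misregistered building edges
local = local_diff(a, b, radius=2) # suppresses them, keeps the new object
```

Note that changes smaller than the search window can also be suppressed, which is why the window must stay small relative to the objects of interest.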
3D Processing and DEM Extraction
A rooftop extraction method using color feature, height map information and road information
Yongzhou Xiang, Ying Sun, Chao Li
This paper presents a new method for rooftop extraction that integrates color features, a height map, and road information in a level set based segmentation framework. The proposed method consists of two steps: rooftop detection and rooftop segmentation. The first step requires the user to provide a few example rooftops from which the color distribution of rooftop pixels is estimated. For better robustness, we obtain superpixels of the input satellite image, and then classify each superpixel as rooftop or non-rooftop based on its color features. Using the height map, we can remove those detected rooftop candidates with small height values. Level set based segmentation of each detected rooftop is then performed based on color and height information, by incorporating a shape-prior term that allows the evolving contour to take on the desired rectangular shape. This requires fitting a rectangle to the evolving contour, which can be guided by the road information to improve the fitting accuracy. The performance of the proposed method has been evaluated on a satellite image of 1 km×1 km in area, with a resolution of one meter per pixel. The method achieves a detection rate of 88.0% and a false alarm rate of 9.5%. The average Dice's coefficient over 433 detected rooftops is 73.4%. These results demonstrate that by integrating the height map in rooftop detection and by incorporating road information and rectangle fitting in a level set based segmentation framework, the proposed method provides an effective and useful tool for rooftop extraction from satellite images.
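Dice's coefficient used in the evaluation measures the overlap between a detected rooftop mask and its ground-truth mask (1.0 = identical masks). A minimal sketch with made-up masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice's coefficient between two binary masks (1.0 = identical)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

truth = np.zeros((10, 10), dtype=bool); truth[2:6, 2:6] = True  # ground truth
found = np.zeros((10, 10), dtype=bool); found[2:6, 4:8] = True  # detection
score = dice(truth, found)   # 16 px each, 8 px overlap -> 0.5
```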
Integration of photogrammetric DSM and advanced image analysis for the classification of urban areas
Mauro Dalla Mura, Francesco Nex, Fabio Remondino, et al.
In this paper, a technique for the integration of images and point clouds for urban area classification is presented. A set of overlapping aerial RGB images is used as input. A photogrammetric Digital Surface Model (DSM) is first generated using advanced matching techniques. Subsequently, a thematic classification of the surveyed areas is performed considering simultaneously the surface reflectance in the visible spectrum of the image sequence, the altitude information (provided by the generated DSM) and additional spatial features (Attribute Profiles). Exploiting the geometrical constraints provided by the collinearity condition and the epipolar geometry between the images, the thematic classification of the land cover can be improved by considering simultaneously the height information and the reflectance values of the DSM. Examples and comments on the proposed classification algorithm are given using a set of aerial images over a dense urban area.
Performance evaluation of DTM area-based matching reconstruction of Moon and Mars
Cristina Re, Gabriele Cremonese, Elisa Dall'Asta, et al.
High resolution DTMs, suitable for geomorphological studies of planets and asteroids, are today among the main scientific goals of space missions. In the framework of the BepiColombo mission, we are experimenting with different matching algorithms as well as different geometric transformation models between stereo pairs, assessing their performance in terms of accuracy and computational effort. Results obtained with our matching software are compared with those of established software. Since the comparison of image matching performance is the main objective of this work, all other steps of the DTM generation procedure have been made independent of the matching software by using a common framework. Tests with different transformation models have been performed using computer generated images as well as real HiRISE and LROC NAC images. The matching accuracy for real images has been checked in terms of reconstruction error against DTMs of Mars and the Moon published online and produced by the University of Arizona.
Automatic generation of digital terrain models from LiDAR and hyperspectral data using Bayesian networks
Dominik Perpeet, Wolfgang Gross, Wolfgang Middelmann
Various tasks such as urban development, terrain mapping or waterway and drainage modeling depend on digital terrain models (DTM) derived from large scale remote sensing data. Usually, DTM generation requires extensive manual intervention. Previous attempts at automation are mostly based on determining the non-ground regions via fixed thresholds followed by smoothing operations. Thus, we propose a novel approach to automatically deduce a DTM from a digital surface model (DSM) with the aid of hyperspectral data. For this, the advantages of a line scanning LiDAR system and a pushbroom hyperspectral sensor are combined to improve the result. We construct a hybrid Bayesian network (HBN), where modeled nodes can be discrete or continuous, and incorporate our already determined features. Using this network we determine probability estimates of whether each point is part of a terrain obstruction. While the use of two different sensor types supplies robust features, Bayesian networks can be automatically trained and adapted to specific scenarios such as mountainous or urban regions.
SAR Data Analysis I: Joint Session with Conferences 8536 and 8537
Classification of polarimetric SAR data using dictionary learning
Jacob S. Vestergaard, Anders L. Dahl, Rasmus Larsen, et al.
This contribution deals with classification of multilook fully polarimetric synthetic aperture radar (SAR) data by learning a dictionary of the crop types present in the Foulum test site. The Foulum test site contains a large number of agricultural fields, as well as lakes, wooded areas, natural vegetation, grasslands and urban areas, which makes it ideally suited for the evaluation of classification algorithms. Dictionary learning centers around building a collection of image patches typical of the classification problem at hand. This requires initial manual labeling of the classes present in the data and is thus a method for supervised classification. The method aims to maintain a sufficient number of typical patches and associated labels. Data are subsequently classified by a nearest neighbor search of the dictionary elements and labeled with probabilities of each class. Each dictionary element consists of one or more features, such as spectral measurements, in a neighborhood around each pixel. For polarimetric SAR data these features are the elements of the complex covariance matrix for each pixel. We quantitatively compare the effect of using different representations of the covariance matrix as the dictionary element features. Furthermore, we compare the method of dictionary learning, in the context of classifying polarimetric SAR data, with standard classification methods based on single-pixel measurements.
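At its core, the dictionary-based classification reduces to a nearest-neighbour search over labeled feature vectors; for polarimetric SAR, these vectors are derived from each pixel's complex covariance matrix (e.g. log-diagonal terms). A minimal sketch with made-up feature values, not the authors' representation or data:

```python
import numpy as np

def nn_classify(dictionary, labels, samples):
    """Assign each sample the label of its nearest dictionary element."""
    d = np.linalg.norm(samples[:, None, :] - dictionary[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

# toy dictionary of [log C11, log C22] features, one element per class
dictionary = np.array([[0.0, -0.7],    # class 0: low backscatter power
                       [2.3, 1.6]])    # class 1: high backscatter power
labels = np.array([0, 1])
samples = np.array([[0.1, -0.6], [2.2, 1.5], [0.2, -0.8]])
pred = nn_classify(dictionary, labels, samples)
```

In practice a dictionary holds many patches per class, and distances over patch neighbourhoods rather than single pixels; this sketch only shows the search step.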
A novel approach to building change detection in very high resolution SAR images
This paper proposes a novel approach to building change detection in Very High Resolution (VHR) Synthetic Aperture Radar (SAR) images. The proposed approach is based on three concepts: i) the selection of the proper scale of representation; ii) the extraction of information on changes associated with increases and decreases of backscattering at the selected building scale (hot-spots); and iii) the exploitation of the expected backscattering properties of buildings to detect new and fully destroyed buildings. Experimental results obtained on a data set made up of two COSMO-SkyMed (CSK®) spotlight images acquired in 2009 over the city of L'Aquila (Italy), before and after the earthquake that hit the region, demonstrate that the proposed approach allows an accurate identification of destroyed buildings while presenting a low rate of false alarms.
Blind whitening of correlated speckle to enforce despeckling of single-look high-resolution SAR images
Alessandro Lapini, Tiziano Bianchi, Fabrizio Argenti, et al.
During the last three decades, several methods have been developed for the despeckling of synthetic aperture radar (SAR) imagery. While some of them are totally empirical, the majority of those relying on signal and noise models have been derived under the assumption of a fully developed speckle model, in which the multiplicative fading term is supposed to be a white process. Unfortunately, the frequency response of the SAR system may introduce a statistical correlation, which decreases the speckle-reduction capability of filters that assume a white speckle model. In this work, an unsupervised method for whitening single-look complex (SLC) images produced by very high resolution (VHR) SAR systems is proposed. Using the statistical properties of the SLC image and some likely assumptions, the frequency response of the SAR system is estimated. A decorrelation stage is then applied to the complex image in order to yield uncorrelated speckle in the intensity/amplitude component. Strong scatterers are automatically detected and left unprocessed. After the whitening step, the complex image is detected and the resulting intensity/amplitude may be despeckled. Experiments have been carried out both on optical images corrupted by synthetic correlated complex speckle and on true SLC images acquired by the COSMO-SkyMed SAR satellite constellation. Both advanced and classical despeckling filters achieve significantly better performance when they are preceded by the proposed whitening step. On homogeneous areas the equivalent number of looks (ENL) increases by four to five times. The sharpness of edges and strong textures is negligibly diminished by the whitening step; this effect is even less noticeable after the despeckling step has been performed. The radiometric characteristics of the images are preserved by the whitening process to a large extent.
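The whitening idea can be sketched as follows: assuming the underlying speckle is white, the block-averaged periodogram of the SLC image estimates the system's |H(f)|² up to a constant, and dividing each block's spectrum by the estimated |H(f)| decorrelates the speckle. This is a simplified sketch on synthetic data (block-wise processing only, no strong-scatterer masking), not the authors' algorithm:

```python
import numpy as np

def estimate_psd(slc, block=16):
    """Average periodogram over non-overlapping blocks: an estimate of
    |H(f)|^2 up to a constant, assuming white underlying speckle."""
    acc, n = np.zeros((block, block)), 0
    for i in range(0, slc.shape[0] - block + 1, block):
        for j in range(0, slc.shape[1] - block + 1, block):
            acc += np.abs(np.fft.fft2(slc[i:i + block, j:j + block])) ** 2
            n += 1
    return acc / n

def whiten(slc, block=16):
    """Divide each block's spectrum by the estimated |H(f)|."""
    H = np.sqrt(estimate_psd(slc, block))
    H /= np.sqrt(np.mean(H ** 2))          # roughly preserve total power
    out = np.zeros_like(slc)
    for i in range(0, slc.shape[0] - block + 1, block):
        for j in range(0, slc.shape[1] - block + 1, block):
            B = np.fft.fft2(slc[i:i + block, j:j + block])
            out[i:i + block, j:j + block] = np.fft.ifft2(B / H)
    return out

def lag1_corr(x):
    """Magnitude of the normalised lag-1 correlation along columns."""
    return abs(np.mean(np.conj(x[:, :-1]) * x[:, 1:])) / np.mean(np.abs(x) ** 2)

rng = np.random.default_rng(2)
n = 128
white = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
f = np.fft.fftfreq(n)
G = np.exp(-(f[:, None] ** 2 + f[None, :] ** 2) / (2 * 0.25 ** 2))
speckle = np.fft.ifft2(np.fft.fft2(white) * G)   # correlated complex speckle
whitened = whiten(speckle)                       # lag-1 correlation drops
```

The low-pass filter `G` plays the role of the SAR system response here; its width is an arbitrary choice for the demonstration.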
SAR Data Analysis II: Joint Session with Conferences 8536 and 8537
Maritime surveillance with synthetic aperture radar (SAR) and automatic identification system (AIS) onboard a microsatellite constellation
E. H. Peterson, R. E. Zee, G. Fotopoulos
New developments in small spacecraft capabilities will soon enable formation-flying constellations of small satellites, performing cooperative distributed remote sensing at a fraction of the cost of traditional large spacecraft missions. As part of ongoing research into applications of formation-flight technology, recent work has developed a mission concept based on combining synthetic aperture radar (SAR) with automatic identification system (AIS) data. Two or more microsatellites would trail a large SAR transmitter in orbit, each carrying a SAR receiver antenna and one carrying an AIS antenna. Spaceborne AIS can receive and decode AIS data from a large area, but accurate decoding is limited in high traffic areas, and the technology relies on voluntary vessel compliance. Furthermore, vessel detection amidst speckle in SAR imagery can be challenging. In this constellation, AIS broadcasts of position and velocity are received and decoded, and used in combination with SAR observations to form a more complete picture of maritime traffic and identify potentially non-cooperative vessels. Due to the limited transmit power and ground station downlink time of the microsatellite platform, data will be processed onboard the spacecraft. Herein we present the onboard data processing portion of the mission concept, including methods for automated SAR image registration, vessel detection, and fusion with AIS data. Georeferencing in combination with a spatial frequency domain method is used for image registration. Wavelet-based speckle reduction facilitates vessel detection using a standard CFAR algorithm, while leaving sufficient detail for registration of the filtered and compressed imagery. Moving targets appear displaced from their actual position in SAR imagery, depending on their velocity and the image acquisition geometry; multiple SAR images acquired from different locations are used to determine the actual positions of these targets. 
Finally, a probabilistic inference model combines the SAR target data with transmitted AIS data, taking into account nearest-neighbor position matches and uncertainty models of each observation.
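The standard CFAR detection step mentioned above can be sketched as cell-averaging CFAR: each cell is compared against a threshold proportional to the mean clutter level in a training band surrounding a guard window. The parameters and the exponential clutter model below are illustrative, not those of the mission study:

```python
import numpy as np

def ca_cfar(intensity, guard=2, train=8, scale=8.0):
    """Cell-averaging CFAR: flag a cell when it exceeds `scale` times the
    mean intensity of the training band around its guard window."""
    h, w = intensity.shape
    r = guard + train
    det = np.zeros((h, w), dtype=bool)
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = intensity[i - r:i + r + 1, j - r:j + r + 1].copy()
            # blank out the cell under test and its guard cells
            win[train:train + 2 * guard + 1,
                train:train + 2 * guard + 1] = np.nan
            det[i, j] = intensity[i, j] > scale * np.nanmean(win)
    return det

rng = np.random.default_rng(5)
clutter = rng.exponential(1.0, (64, 64))   # heavy-tailed sea clutter model
clutter[32, 32] = 50.0                     # bright point target (vessel)
det = ca_cfar(clutter)
```

Because the threshold adapts to the local clutter estimate, the false alarm rate stays approximately constant over regions of different clutter intensity, which is the point of CFAR.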
GLRT-entropy joint location of low-RCS target in heavy sea clutter
Estimation accuracy of sea clutter plays an important role in target detection and location. In this paper, a generalized likelihood ratio test (GLRT)-entropy joint location approach for low radar cross section (RCS) targets in heavy sea clutter is proposed, which makes use of both the estimated probability and the localized entropy of the range image. After performing GLRT-based target detection, the proposed approach identifies the target range bin by comparing localized entropies obtained before and after clutter suppression. Simulation results demonstrate the performance advantages of the proposed approach over one using only probability or only entropy.
Poster Session
Detection of hedges based on attribute filters
Gabrielle Cavallaro, Benoit Arbelot, Mathieu Fauvel, et al.
The detection of hedges is a very important task for the monitoring of a rural environment and for aiding the management of the related natural resources. Hedges are narrow vegetated areas composed of shrubs and/or trees that are usually present at the boundaries of adjacent agricultural fields. In this paper, a technique for detecting hedges is presented. It exploits the spectral and spatial characteristics of hedges. In detail, spatial features are extracted with attribute filters, which are connected operators defined in the mathematical morphology framework. Attribute filters are flexible operators that can perform a simplification of a grayscale image driven by an arbitrary measure. Such a measure can be related to characteristics of regions in the scene, such as scale, shape, or contrast. Attribute filters can be computed on tree representations of an image (such as the component tree), which represent either bright or dark regions (with respect to the gray levels of their surroundings). In this work, it is proposed to compute attribute filters on the inclusion tree, a hierarchical dual representation of an image in which the nodes of the tree correspond to both bright and dark regions. Specifically, attribute filters are employed to aid the detection of woody elements in the image, a step in the process aimed at detecting hedges. In order to characterize the spatial information of the hedges in the image, different attributes have been considered in the analysis. The final decision is obtained by fusing the results of different detectors applied to the filtered image.
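The simplest attribute filter is the binary area opening: remove all connected components whose area attribute falls below a threshold, which keeps elongated hedge-like structures while discarding small isolated responses. A sketch with scipy for the binary case only (the paper's filters operate on grayscale tree representations):

```python
import numpy as np
from scipy import ndimage

def binary_area_opening(mask, min_area):
    """Attribute filter with the area attribute: delete every connected
    component smaller than `min_area` pixels."""
    lab, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, lab, index=range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_area
    return keep[lab]

mask = np.zeros((20, 20), dtype=bool)
mask[2, 2] = True            # isolated spurious response (1 px)
mask[10:13, 2:14] = True     # elongated, hedge-like structure (36 px)
filtered = binary_area_opening(mask, min_area=10)
```

Replacing the area measure with an elongation or contrast measure yields the other attribute filters the abstract mentions; only the per-component criterion changes.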
A FPGA-based automatic bridge over water recognition in high-resolution satellite images
Sebastian Beulig, Maria von Schönermark, Felix Huber
In this paper a novel algorithm for recognizing bridges over water is presented. The algorithm is designed to run on a small reconfigurable microchip, a so-called field-programmable gate array (FPGA). Hence, the algorithm is computationally lightweight and high processing speeds can be reached. Furthermore, no a priori knowledge about a bridge is necessary; even bridges with an irregular shape, e.g. with balconies, can be detected. As a result, the center point of the bridge is marked. Due to the low power consumption of the FPGA and the autonomous performance of the algorithm, it is suitable for image analysis directly on board satellites. Meta-data such as the coordinates of recognized bridges are immediately available. This could be useful, e.g., in case of a natural hazard, when quick information about the infrastructure is desired by disaster management. The algorithm as well as experimental results on real satellite images are presented and discussed.
Web-based data acquisition and management system for GOSAT validation Lidar data analysis
Hiroshi Okumura, Shoichiro Takubo, Takeru Kawasaki, et al.
A web-based data acquisition and management system for GOSAT (Greenhouse gases Observing SATellite) validation lidar data analysis is developed. The system consists of a data acquisition sub-system (DAS) and a data management sub-system (DMS). The DAS, written in Perl, acquires AMeDAS ground-level meteorological data, rawinsonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data and GOSAT validation lidar data. The DMS, written in PHP, displays satellite pass dates and all acquired data.
A new coastline extraction in remote sensing images
When executing tasks such as ocean pollution monitoring, maritime rescue, geographic mapping, and automatic navigation using remote sensing images, the coastline must first be determined. Traditional methods do not extract coastlines satisfactorily from high-resolution panchromatic remote sensing images. Active contour models, also called snakes, have proven useful for interactive specification of image contours, so they are used here as an effective coastline extraction technique. First, coastlines are detected by water segmentation and boundary tracking; these serve as initial contours to be optimized through the active contour model. As better energy functions are designed, the snake becomes more effective. A new internal energy term reduces problems caused by convergence to local minima, and a new external energy term greatly enlarges the capture region around features of interest. After normalization, the energies are iterated using a greedy algorithm to accelerate convergence. The experimental results on several images demonstrate the capability and efficiency of the improvement.
De-striping algorithm in ALOS satellite imagery based on adaptive frequency filter
Yutian Cao, Dongmei Yan, Gang Wang, et al.
In this paper, a de-striping algorithm based on an adaptive frequency filter is proposed. It defines an angle mapping in Log-Polar space for the satellite image to compute the approximate image rotation angle, develops an automatic effective-area interception method, and realizes fully automatic image processing. The paper compares the algorithm's results with the official processing results through qualitative and quantitative evaluations. The results show that the algorithm effectively removes oblique stripe noise in ALOS imagery and that image quality improves with the developed de-striping algorithm.
Segmentation of vegetation scenes: the SIEMS method
This paper presents an unsupervised segmentation method dedicated to vegetation scenes with decametric or metric spatial resolutions. The proposed algorithm, named SIEMS, is based on the iterative use of the Expectation–Maximization algorithm and offers a good trade-off between oversegmentation and undersegmentation. Moreover, the choice of its input parameters is not image-dependent, unlike existing techniques, and its performance is not critically determined by these parameters. SIEMS creates a coarse segmentation of the image by applying an edge detection method (typically the Canny–Deriche algorithm) and iteratively splits the undersegmented areas with the Expectation–Maximization algorithm. The method has been applied to two images and shows satisfactory results. It notably distinguishes segments with slight radiometric variations without leading to oversegmentation.
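The splitting step, fitting a mixture to an undersegmented area with Expectation–Maximization, can be sketched with scikit-learn's GaussianMixture applied to the region's radiometry. The data are synthetic and the split criterion is an illustrative assumption, not the SIEMS rule:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# radiometry of an undersegmented area mixing two vegetation types
region = np.concatenate([rng.normal(0.2, 0.05, 300),
                         rng.normal(0.8, 0.05, 300)])

# EM fit of a two-component Gaussian mixture to the region's values
gm = GaussianMixture(n_components=2, random_state=0).fit(region.reshape(-1, 1))
means = np.sort(gm.means_.ravel())
# split the area only if EM finds two well-separated radiometric modes
should_split = (means[1] - means[0]) > 3 * np.sqrt(gm.covariances_.max())
```

Iterating this test-and-split over the coarse edge-based segments is what drives the refinement loop described in the abstract.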
Junction extraction on road masks by pruned skeletons
Umut Çinar, Ersin Karaman, Ekin Gedik, et al.
This study proposes a new method to detect road junctions from existing road masks obtained from geospatial databases. Moreover, this method can be used to extract junction points from the road masks generated by automatic or semiautomatic road extraction algorithms. The algorithm is intended to lower the false detection rate by refining the input road mask. Vector space analysis of the pruned road skeleton provides a simple yet robust detection and classification strategy. Empirical results demonstrate the success of the proposed junction extraction model.
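On a pruned, one-pixel-wide skeleton, junction candidates are simply the skeleton pixels with three or more 8-connected skeleton neighbours; the refinement described above is then needed because a small cluster of candidates appears around each crossing. A sketch on a toy T-junction (illustrative, not the paper's implementation):

```python
import numpy as np

def junction_candidates(skel):
    """Skeleton pixels with >= 3 eight-connected skeleton neighbours."""
    s = skel.astype(int)
    p = np.pad(s, 1)
    nb = sum(p[1 + dy:1 + dy + s.shape[0], 1 + dx:1 + dx + s.shape[1]]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return (s == 1) & (nb >= 3)

skel = np.zeros((11, 11), dtype=bool)
skel[5, 1:10] = True     # horizontal road axis
skel[5:10, 5] = True     # branch heading south: a T-junction at (5, 5)
cand = junction_candidates(skel)   # a small cluster around (5, 5)
```

Collapsing each candidate cluster to a single point (e.g. its centroid) gives one junction per crossing.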
The study of optical fiber communication technology for space optical remote sensing
Jun Zheng, Sheng-quan Yu, Xiao-hong Zhang, et al.
The latest trends in space optical remote sensing are high resolution, multispectral acquisition, and wide-swath detection, so high-speed digital image data transmission is becoming increasingly important. At present, the data output interface of a space optical remote sensing instrument, after image data compression and formatting, transfers the image data to the spacecraft's data storage unit through LVDS circuit cables. This method is not recommended for high-speed digital image data transmission: such source-synchronous links perform poorly for high-speed digital signals, and cable installation and system testing are difficult in the limited space of the vehicle. To resolve these issues, this paper describes a high-speed interconnection device between the space optical remote sensing instrument and the spacecraft. The device comprises a Virtex-5 FPGA with embedded high-speed serial, power-efficient transceivers, a fiber-optic transceiver module, a fiber-optic connection unit, and single-mode optical fiber. A dedicated communication protocol is implemented for the image data transfer system, and the fiber-optic connection unit provides high reliability and flexibility for transferring high-speed serial data over optical fiber. This method provides several advantages for space optical remote sensing: 1. it improves the speed of image data transfer; 2. it enhances the reliability and safety of image data transfer; 3. the instrument can be reduced significantly in size and weight; 4. system installation and testing become easier.
Elastic band-to-band registration for airborne multispectral scanners with large field of view
Feng Li, ChuanRong Li, LingLi Tang, et al.
Multispectral line scanners with a large field of view improve efficiency in Earth observation. However, the short focal lengths imposed by the instruments' small volume may introduce different non-linear warping and local transformations between bands. Band alignment accuracy is a critical factor affecting remote sensing product quality. In this paper, a new elastic band-to-band image registration method is proposed to solve this problem. Rather than registering the bands directly, feature images of each band are constructed and used to conduct an intensity-based elastic image registration. In this method, the idea of the inverse compositional algorithm is borrowed and extended to deal with local warping, and a smoothness constraint is added to the procedure. Experimental results show that the proposed band-to-band registration method works well both visually and quantitatively.
Remote sensing image classification by mean shift and color quantization
Hind Taud, Stéphane Couturier, José Joel Carrillo-Rivera
Remote sensing imagery involves large amounts of data acquired by several kinds of airborne sensors at different wavelengths, spatial resolutions, and temporal frequencies. To extract thematic information from these data, many segmentation and classification algorithms and techniques have been proposed. The representation of the different multispectral bands as true- or false-color imagery has been widely employed for visual interpretation and classification. On the other hand, color quantization, a well-known method for data compression, has been used for color image segmentation and classification in computer vision applications. The number of colors in the original image is reduced by minimizing the distortion between the quantized and the original image, with the aim of preserving the pattern representation. Considering density estimation in the color or feature space, similar samples are grouped together to identify patterns by clustering techniques. The mean shift algorithm has been successfully applied in many applications as the basis for nonparametric unsupervised clustering. In an iterative manner, mean shift detects modes in a probability density function. The contribution of this article is an unsupervised color quantization method for image classification based on mean shift. To avoid its high computational cost, the integral image is used. The method is evaluated on Landsat satellite imagery as a case study focused on forest mapping. A comparison between the proposed method and plain mean shift is carried out. The results show that the proposed method is useful for multispectral remote sensing image classification.
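Mean shift mode seeking, the basis of the proposed quantization, can be sketched with a flat kernel: each color vector is repeatedly moved to the mean of its neighbours within a bandwidth until it settles on a density mode, and colors sharing a mode form one quantization cell. An illustrative sketch without the integral-image acceleration used in the paper:

```python
import numpy as np

def mean_shift_modes(points, bandwidth, iters=20):
    """Flat-kernel mean shift: shift every point to the mean of the points
    within `bandwidth` of it until it settles on a density mode."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i in range(len(modes)):
            near = points[np.linalg.norm(points - modes[i], axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    return modes

rng = np.random.default_rng(4)
# RGB samples drawn around two dominant colors of an image
colors = np.vstack([rng.normal([0.1, 0.1, 0.1], 0.03, (50, 3)),
                    rng.normal([0.8, 0.8, 0.8], 0.03, (50, 3))])
modes = mean_shift_modes(colors, bandwidth=0.3)
# each sample collapses onto its cluster's mode -> two quantized colors
```

The naive neighbour search here is O(n²) per iteration, which is exactly the cost the paper's integral-image formulation is designed to avoid.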
Object-based image analysis and data mining for building ontology of informal urban settlements
Dejrriri Khelifa, Malki Mimoun
During recent decades, unplanned settlements have appeared around the big cities of most developing countries and, as a consequence, numerous problems have emerged. The identification of different kinds of settlements is thus a major concern and challenge for the authorities of many countries. Very High Resolution (VHR) remotely sensed imagery has proved to be a very promising way to detect different kinds of settlements, especially through the use of object-based image analysis (OBIA). The key lies in understanding what characteristics make unplanned settlements differ from planned ones: most experts characterize unplanned urban areas by small building sizes at high densities, no orderly road arrangement, and a lack of green spaces. Knowledge about different kinds of settlements can be captured as a domain ontology, which has the potential to organize knowledge in a formal, understandable, and sharable way. In this work we focus on extracting knowledge from VHR images and expert knowledge. We use an object-based strategy, segmenting a VHR image taken over an urban area into regions of homogeneous pixels at an adequate scale level and then computing spectral, spatial, and textural attributes for each region to create objects. Genetic-based data mining is applied to generate highly predictive and comprehensible classification rules from samples selected in the OBIA result. Optimized intervals of relevant attributes are found and linked with land use types to form classification rules. The unplanned areas are separated from the planned ones by analyzing the line segments detected in the input image. Finally, a simple ontology is built based on the previous processing steps. The approach has been tested on VHR images of one of the biggest Algerian cities, which has grown considerably in recent decades.
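The interval rules that the genetic mining step produces can be imagined as shown below. The attribute names, thresholds, and rule structure are invented for illustration only; the paper does not publish its rule set in the abstract.

```python
# Hypothetical interval rules of the kind genetic data mining could
# output for OBIA region objects (names and thresholds are invented).
RULES = [
    # (class label, {attribute: (low, high)})
    ("unplanned", {"mean_building_area_m2": (0, 60),   "density": (0.6, 1.0)}),
    ("planned",   {"mean_building_area_m2": (60, 1e9), "density": (0.0, 0.6)}),
]

def classify_region(attributes, rules=RULES, default="unknown"):
    """Return the label of the first rule whose every attribute
    interval contains the region's attribute value."""
    for label, intervals in rules:
        if all(low <= attributes.get(name, float("nan")) <= high
               for name, (low, high) in intervals.items()):
            return label
    return default

print(classify_region({"mean_building_area_m2": 35.0, "density": 0.8}))  # -> unplanned
```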
A parametric statistical model over spectral space for the unmixing of imaging spectrometer data
Jignesh S. Bhatt, Manjunath V. Joshi, Mehul S. Raval
The imaging spectrometer acquires hundreds of contiguous spectral measurements of an area. In many applications, the measured data are considered to be linear combinations of the signatures of constituent materials, called endmembers. In practice the number of endmembers is usually much smaller than the total number of available spectral bands. One aim of the unmixing problem is to obtain the underlying fractional abundance of every endmember at each location in the acquired scene. It should be noted that at every location, the nature of the mixing among the endmembers is governed by the scene content, resulting in nonnegative abundance proportions for each endmember. Due to the presence of noise in the system, the unmixing problem becomes ill-posed. In this paper, we consider the variability of the mixing of endmembers over the acquired scene when estimating the abundance vectors given the data (measurements) and the endmembers. The relatively coarse instantaneous field of view (IFOV) covered by the hyperspectral imager is unmixed by using the spectral details available at each location in the scene. A Huber-Markov random field (HMRF) is considered across the contiguous spectral space, where a Huber function is defined to incorporate the abundance dependencies within the solution. As the Huber function imposes both quadratic and linear penalties, the HMRF preserves smooth as well as sudden variations in the abundances. The problem is solved by maximum a posteriori (MAP) estimation, incorporating the HMRF as the prior distribution on the abundances along with the likelihood function. The solution space is restricted during optimization to yield physically constrained abundances. We conducted experiments on simulated data, constructed with regions containing different classes of mixtures of the endmember signatures, to evaluate the proposed approach. The experiments are then repeated with noise added to the data, and the results are compared with a state-of-the-art approach. Finally, abundance maps are obtained for the well-known AVIRIS Cuprite mining site. The results validate the effectiveness of the proposed approach.
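The two ingredients of the approach, a Huber penalty and physically constrained MAP estimation, can be sketched in a toy one-pixel form. This is a simplified stand-in, not the paper's algorithm: the HMRF prior, the optimizer, and the endmember matrix below are our assumptions.

```python
import numpy as np

def huber(x, delta):
    """Huber penalty: quadratic for |x| <= delta, linear beyond; this
    is what lets the prior keep smooth abundance variation while still
    allowing sudden jumps."""
    a = np.abs(x)
    return np.where(a <= delta, x ** 2, 2 * delta * a - delta ** 2)

def unmix_pixel(y, E, lam=0.0, delta=0.1, n_iter=500):
    """Projected-gradient sketch of MAP unmixing for one pixel:
    minimize ||y - E a||^2 + lam * sum_i huber(a_i - a_{i-1})
    subject to a >= 0 and sum(a) = 1 (crude simplex projection)."""
    n = E.shape[1]
    a = np.full(n, 1.0 / n)
    step = 1.0 / np.linalg.norm(E.T @ E, 2)
    for _ in range(n_iter):
        grad = 2 * E.T @ (E @ a - y)
        if lam > 0:
            g = 2 * np.clip(np.diff(a), -delta, delta)   # huber'(a_{i+1}-a_i)
            grad[:-1] -= lam * g
            grad[1:] += lam * g
        a = np.clip(a - step * grad, 0, None)            # nonnegativity
        a /= a.sum() + 1e-12                             # sum-to-one
    return a

# two endmembers, noiseless 60/40 mixture -> abundances recovered
E = np.array([[1.0, 0.0], [0.8, 0.4], [0.2, 0.9], [0.0, 1.0]])
y = E @ np.array([0.6, 0.4])
print(np.round(unmix_pixel(y, E), 2))   # abundances close to [0.6, 0.4]
```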
The inpainting of hyperspectral images: a survey and adaptation to hyperspectral data
In this work, we survey image reconstruction methods for hyperspectral imagery. First, a review of image interpolation methods, both linear and nonlinear, is given. Second, image inpainting methods, especially from the variational perspective, are analyzed with respect to their suitability for hyperspectral inpainting. The ability to connect edges through occlusions and the structure of the space in which the hyperspectral data lies are especially considered when propagating data into unknown regions. Finally, a general method for adapting image reconstruction methods to the hyperspectral case is presented.
Unsupervised classification of hyperspectral images using an adaptive vector tunnel classifier
S. Demirci, I. Erer
Hyperspectral image classification is one of the most popular information extraction methods in remote sensing applications, encompassing a variety of supervised, unsupervised, and fuzzy algorithms. In supervised classification, reference data providing a priori class information are used. In unsupervised classification, on the other hand, clustering algorithms group pixels that have similar spectral characteristics according to some statistical criterion. Among the techniques for hyperspectral image clustering, K-Means is one of the most widely used iterative approaches. It is a simple though computationally expensive algorithm, particularly for clustering large hyperspectral images into many categories; during its application, the Euclidean distance (ED) measure is used to calculate the distances between pixels and local class centers. In this study, a new adaptive unsupervised classification technique is presented. It builds a kind of vector tunnel around randomly selected pixel spectra whose width changes according to the spectral variation across the hyperspectral bands. Although the vector tunnel classifier needs neither training data nor intensive mathematical calculation, its classification results are comparable to those of the K-Means algorithm.
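One plausible reading of the vector tunnel idea, with details that are ours rather than the paper's, is a per-band tolerance tube around a seed spectrum: a pixel joins the class if every band stays inside the tube, in contrast to the single Euclidean distance used by K-Means.

```python
import numpy as np

def in_tunnel(pixel, seed, tolerance):
    """True if the pixel spectrum stays inside the per-band tolerance
    'tube' around the seed spectrum. Unlike the single Euclidean
    distance of K-Means, the test is applied band by band, so the
    tunnel can widen where the spectra naturally vary more."""
    return bool(np.all(np.abs(pixel - seed) <= tolerance))

seed = np.array([0.30, 0.45, 0.60, 0.55])   # seed pixel spectrum
tol  = np.array([0.05, 0.05, 0.10, 0.10])   # per-band tunnel width

print(in_tunnel(np.array([0.32, 0.43, 0.65, 0.50]), seed, tol))  # True
print(in_tunnel(np.array([0.32, 0.55, 0.65, 0.50]), seed, tol))  # False
```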
A quaternion-based method for satellite images pan-sharpening
Chahira Serief, Habib Mahi
To date, several pan-sharpening methods have been proposed in the literature. However, conventional pan-sharpening processes are based on color separation (RGB decomposition, for example) of color (spectral) images, resulting in a significant loss of color information. The aim of this work is to investigate the potential of hypercomplex representations of color images, in particular the quaternion model, for satellite image pan-sharpening. The quaternion model allows color images to be handled as a whole, rather than as color-separated components: the values of the color components (R, G, and B, for example) of each pixel are represented as a single pure-quaternion-valued pixel. The proposed fusion method is based on the quaternion intensity-hue-saturation (QIHS) transform and consists of three steps. First, the color (spectral) image is represented as a pure quaternion image to take its particular nature into account. Then, the IHS transform is applied to the pure quaternion image resampled at the scale of the panchromatic (Pan) image. Finally, the image fusion is performed by IHS-based component substitution. The efficiency of the proposed method is tested on a Quickbird dataset. A comparative evaluation of the proposed technique against the standard IHS-based method available in the commercial software ENVI is then conducted.
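The quaternion algebra is the paper's contribution; as background, only the classical IHS component-substitution step is sketched below in its fast additive form, which is the baseline the QIHS method replaces. The toy data are random for illustration.

```python
import numpy as np

def ihs_fuse(rgb, pan):
    """Classical IHS component substitution (fast additive form):
    replace the intensity I = (R+G+B)/3 of the upsampled multispectral
    image with the panchromatic band. The paper performs the analogous
    substitution on a pure-quaternion image instead."""
    intensity = rgb.mean(axis=-1, keepdims=True)
    return rgb + (pan[..., None] - intensity)

rgb = np.random.default_rng(2).random((4, 4, 3))   # upsampled MS image
pan = np.random.default_rng(3).random((4, 4))      # panchromatic band
fused = ihs_fuse(rgb, pan)
# after substitution the fused intensity equals the Pan band exactly
print(np.allclose(fused.mean(axis=-1), pan))   # True
```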
Hierarchical watershed segmentation based on gradient image simplification
François Cokelaer, Mauro Dalla Mura, Jocelyn Chanussot
Watershed is one of the most widely used algorithms for segmenting remote sensing images. This segmentation technique can be thought of as a flooding performed on a topographic relief in which the water catchment basins, separated by watershed lines, are the regions of the resulting segmentation. A popular way of performing a watershed relies on flooding the gradient image, in which high values correspond to watershed lines and regional minima to the bottoms of the catchment basins. Here we refer to as a hierarchical segmentation a decomposition of the segmentation map respecting the nesting property from a finer to a coarser scale, i.e., the set of partition lines at a coarser scale should be included in that of the finer scale. Starting from the watershed (partition) lines of the gradient image, we propose to perform a simplification using novel mathematical morphology operators for filtering thin and oriented features. By lowering the smallest edges, one can reach a coarser partition of the image. Then, by applying a sequence of progressively more aggressive filters, it is possible to generate a hierarchy of segmentations.
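The nesting property can be illustrated on a 1-D toy profile, with our own simplification standing in for the paper's morphological filters: local maxima of a "gradient" act as edges, and raising a threshold (i.e. lowering the smallest edges away) merges adjacent basins, so each coarser partition is nested in the finer ones.

```python
import numpy as np

def count_basins(gradient, threshold):
    """Count catchment basins of a 1-D 'gradient' profile after all
    edges below 'threshold' have been lowered away: basins are maximal
    runs separated by the remaining above-threshold samples."""
    basins = 0
    in_basin = False
    for is_edge in (gradient >= threshold):
        if not is_edge and not in_basin:
            basins += 1          # entering a new catchment basin
        in_basin = not is_edge
    return basins

# a gradient profile with edges of height 5, 2 and 8 between flat basins
profile = np.array([0, 0, 5, 0, 0, 2, 0, 0, 8, 0, 0])
counts = [count_basins(profile, t) for t in (1, 3, 6, 9)]
print(counts)   # [4, 3, 2, 1]: partitions get coarser, never finer
```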