A theoretical evaluation of aliasing and misregistration effects on pansharpening methods
Author(s):
Bruno Aiazzi;
Luciano Alparone;
Stefano Baronti;
Andrea Garzelli;
Massimo Selva
Abstract:
The characteristics of multispectral (MS) and panchromatic (P) image fusion, or pansharpening, methods are
investigated. Depending on the way spatial details are extracted from P, such methods can be broadly grouped into
two main classes, roughly corresponding to component substitution (CS), also known as projection substitution,
and methods based on multiresolution analysis (MRA), i.e. on digital filtering. Theoretical and experimental
results on QuickBird and Ikonos data sets show that CS-based fusion is far less sensitive than MRA-based fusion
to registration errors, i.e. spatial misalignments between MS and P images possibly originating from cartographic
projection and resampling of the individual data sets, and to aliasing in the MS bands, which arises when the
modulation transfer function (MTF) of an MS channel is excessively broad relative to the spatial sampling interval.
Misalignments simulated at full scale by means of a suitable quality evaluation protocol highlight the quality-shift
tradeoff between the two classes: MRA methods yield slightly superior quality in the absence of misalignments but
are penalized more heavily whenever shifts between MS and P are present, whereas CS methods produce slightly
lower quality in the ideal case but are intrinsically more shift tolerant.
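As a rough illustration of the two families (not the authors' specific algorithms), the sketch below contrasts a component-substitution step, which injects the detail obtained by swapping a synthetic intensity with the histogram-matched P band, with an MRA-style step that injects only the high-pass detail of P obtained by digital low-pass filtering; the equal band weights, the box filter, and the assumption that the MS image is already resampled to the P grid are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cs_pansharpen(ms, pan, weights=None):
    """Component substitution: swap a synthetic intensity with histogram-matched P.
    ms: (H, W, B) multispectral image already resampled to the P grid; pan: (H, W)."""
    b = ms.shape[2]
    w = np.full(b, 1.0 / b) if weights is None else np.asarray(weights)
    intensity = np.tensordot(ms, w, axes=([2], [0]))            # synthetic intensity
    pan_eq = (pan - pan.mean()) * intensity.std() / pan.std() + intensity.mean()
    detail = pan_eq - intensity                                  # substituted component
    return ms + detail[..., None]                                # inject into every band

def mra_pansharpen(ms, pan, size=5):
    """Multiresolution analysis: inject only the high-pass detail of P, extracted here
    with a simple box low-pass filter (an MTF-matched filter would be used in practice)."""
    detail = pan - uniform_filter(pan, size=size)
    return ms + detail[..., None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pan = rng.random((64, 64))
    ms = rng.random((64, 64, 4))
    print(cs_pansharpen(ms, pan).shape, mra_pansharpen(ms, pan).shape)
```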
Super-resolution mapping using multiple observations and Hopfield neural network
Author(s):
Anuar M. Muad;
Giles M. Foody
Abstract:
Super-resolution mapping is used to produce thematic maps at a scale finer than that of the source images. This paper presents
a new super-resolution mapping approach that exploits the typically fine temporal resolution of coarse spatial resolution
images as its input and adopts an active threshold surface based on a Hopfield neural network as a means to map land
cover at a sub-pixel scale. The results demonstrate that the proposed technique is slightly more accurate than the
existing technique in terms of site-specific accuracy and produces better visualization of the individual land cover maps.
Review of low-baseline stereo algorithms and benchmarks
Author(s):
N. Sabater;
G. Blanchet;
L. Moisan;
A. Almansa;
J.-M. Morel
Abstract:
The purpose of this work is to review and evaluate the performance of several algorithms which have been
designed for satellite imagery in a geographic context. In particular, we are interested in their performance with
low-baseline image pairs like those which will be produced by the Pleiades satellite. In this study, local and global
state-of-the-art algorithms have been considered and compared: CARMEN, MARC, MARC2 and MICMAC.
This paper also proposes a new benchmark to compare stereo algorithms. A set of simulated stereo
images for which the ground truth is perfectly known will be presented. The ground truth is accurate to better
than a hundredth of a pixel. The existence of an accurate ground truth is a major improvement for
the community, making it possible to quantify the disparity error very precisely in a realistic setting.
Alternating sequential filters with morphological attribute operators for the analysis of remote sensing images
Author(s):
Mauro Dalla Mura;
Jon Atli Benediktsson;
Lorenzo Bruzzone
Abstract:
In this paper we propose Alternating Sequential Attribute Filters, which are Alternating Sequential Filters (ASFs)
computed with attribute filters. ASFs are obtained by the iterative, alternating application of morphological
opening and closing transformations, and they process an image by filtering both bright and dark structures. ASFs
are widely used for achieving a simplification of a scene and for the removal of noisy structures. However, ASFs
are not suitable for the analysis of very high geometrical resolution remote sensing images since they do not
preserve the geometrical characteristics of the objects in the image. For this reason, instead of the conventional
morphological operators, we propose to use attribute filters, which are morphological connected filters and process
an image only by merging flat regions. Thus, they are suitable for the analysis of very high resolution images.
Since the attribute selected for the analysis largely determines the effect of the morphological
filter, applying attribute filters in an alternating composition (as in ASFs) makes it possible to obtain a different
image simplification according to the attribute considered. For example, if one considers area as the attribute,
an input image will be processed by progressively removing dark and bright structures of increasing area. When using an
attribute that measures the homogeneity of the regions (e.g., the standard deviation of the values of the pixels)
the scene can be simplified by merging progressively more homogeneous zones. Moreover, the computation of
the ASF with attribute filters can be performed with a reduced computational load by taking advantage of the
efficient representation of the image as min- and max-trees. The proposed alternating sequential attribute filters
are qualitatively evaluated on a panchromatic GeoEye-1 image.
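A minimal sketch of an area-attribute alternating sequential filter, built from scikit-image's connected area opening and closing applied with increasing area thresholds; the threshold schedule is an illustrative assumption rather than the configuration evaluated in the paper.

```python
import numpy as np
from skimage.morphology import area_opening, area_closing

def alternating_sequential_attribute_filter(image, area_thresholds=(16, 64, 256)):
    """Alternately remove bright then dark connected components whose area is below
    progressively larger thresholds; flat zones are merged, object edges are preserved."""
    out = image.copy()
    for a in area_thresholds:
        out = area_opening(out, area_threshold=a)   # filters bright structures
        out = area_closing(out, area_threshold=a)   # filters dark structures
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(12)
    img = (rng.random((128, 128)) * 255).astype(np.uint8)
    print(alternating_sequential_attribute_filter(img).shape)
```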
A new generic method for semi-automatic extraction of river and road networks in low- and mid-resolution satellite images
Author(s):
Jacopo Grazzini;
Scott Dillard;
Pierre Soille
Abstract:
This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite
images. For that purpose, we propose an approach combining concepts arising from mathematical morphology
and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their
tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following
two general assumptions, which are the minimum conditions for a road/river network to be identifiable and
are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the
network have a similar spectral signature that is distinguishable from that of most of the surrounding areas; (ii) geometric
constraint: a line is a region that is relatively long and narrow, compared with other objects in the image. While
this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature
in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks,
further directional information about the image structures is incorporated. Namely, an appropriate anisotropic
metric is designed by using both the characteristic features of the target network and the eigen-decomposition
of the gradient structure tensor of the image. Then, the geodesic propagation from a given network seed
with this metric is combined with hydrological operators for overland flow simulation to extract the paths which
contain the most line evidence and to identify them with the target network.
Parameter free image artifacts detection: a compression based approach
Author(s):
Avid Roman-Gonzalez;
Mihai Datcu
Abstract:
Earth Observation (EO) images, even when qualified, may sometimes present unexpected artifacts. These perturbations and
distortions can make the analysis of the images more difficult and may decrease the efficiency of interpretation algorithms
because the information is distorted. It is thus necessary to implement methods able to detect these artifacts regardless of
the model from which they originate, i.e. parameter free. In this article, we propose and present a method based on data
compression, either lossy or lossless, for detecting aliasing, striping, saturation, etc.
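One way to make the compression-based idea concrete is to score image patches by their lossless compression ratio and flag statistical outliers as artifact candidates; the patch size, the zlib codec, and the 3-sigma rule below are illustrative assumptions, not the authors' detector.

```python
import zlib
import numpy as np

def patch_compression_scores(image, patch=32):
    """Compression ratio of each non-overlapping patch: patches whose ratio deviates
    strongly from the image-wide statistics are flagged as artifact candidates."""
    h, w = image.shape
    scores = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            raw = image[i:i + patch, j:j + patch].astype(np.uint8).tobytes()
            scores.append(len(zlib.compress(raw)) / len(raw))
    scores = np.array(scores)
    z = np.abs(scores - scores.mean()) / scores.std()
    return scores, z > 3.0                          # candidate artifact patches

if __name__ == "__main__":
    rng = np.random.default_rng(13)
    img = (rng.random((256, 256)) * 255).astype(np.uint8)
    img[0:32, 0:32] = 255                           # simulated saturated block
    print(patch_compression_scores(img)[1].sum())
```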
Object oriented image segmentation by means of multichannel mathematical morphology
Author(s):
Wuilian Torres;
Ramiro Salcedo
Abstract:
The generation of agro-statistics in tropical regions using remote sensing data is a difficult task due to the presence of
a high variety of crops over the same area at a given time, not restricted by a crop calendar. The proposed
segmentation is developed in two phases: spatial and spectral. The spatial segmentation is based on multi-dimensional
mathematical morphology for the delimitation of the objects that define each of the production segments or
plots. The basic multi-dimensional morphological operators (dilation, erosion and gradient) were defined and adapted
to the study of agricultural areas using multi-spectral imagery. The spectral segmentation is then performed by
assigning the dominant spectral signature to each spatial segment and reducing the number of segments using a
hierarchical grouping method.
Hyperspectral unmixing: geometrical, statistical, and sparse regression-based approaches
Author(s):
José M. Bioucas-Dias;
Antonio Plaza
Abstract:
Hyperspectral instruments acquire electromagnetic energy scattered within their ground instantaneous field of view
in hundreds of spectral channels with high spectral resolution. Very often, however, owing to the low spatial resolution
of the scanner or to the presence of intimate mixtures (mixing of the materials at a very small scale) in the scene,
the spectral vectors (collection of signals acquired at different spectral bands from a given pixel) acquired by the
hyperspectral scanners are actually mixtures of the spectral signatures of the materials present in the scene.
Given a set of mixed spectral vectors, spectral mixture analysis (or spectral unmixing) aims at estimating the
number of reference materials, also called endmembers, their spectral signatures, and their fractional abundances.
Spectral unmixing is, thus, a source separation problem where, under a linear mixing model, the sources are the
fractional abundances and the endmember spectral signatures are the columns of the mixing matrix. As such,
the independent component analysis (ICA) framework came naturally to mind to unmix spectral data. However,
the ICA crux assumption of source statistical independence is not satisfied in spectral applications, since the
sources are fractions and, thus, non-negative and sum to one. As a consequence, ICA-based algorithms have
severe limitations in the area of spectral unmixing, and this has fostered new unmixing research directions taking
into account geometric and statistical characteristics of hyperspectral sources.
This paper presents an overview of the principal research directions in hyperspectral unmixing. The presentation
is organized into five main topics: i) mixing models, ii) signal subspace identification, iii) geometrical-based
spectral unmixing, iv) statistical-based spectral unmixing, and v) sparse regression-based unmixing. For each
topic, we describe the physical or mathematical problems involved and summarize state-of-the-art algorithms
to address these problems.
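As a concrete illustration of the linear mixing model underlying the surveyed approaches, the sketch below estimates per-pixel abundances by non-negative least squares against a known endmember matrix; the sum-to-one constraint is imposed by simple renormalization, which is a simplifying assumption (fully constrained solvers enforce it exactly).

```python
import numpy as np
from scipy.optimize import nnls

def unmix_nnls(pixels, endmembers):
    """pixels: (N, L) spectra; endmembers: (L, p) mixing matrix (columns = signatures).
    Returns (N, p) abundance estimates, non-negative and renormalized to sum to one."""
    abundances = np.array([nnls(endmembers, y)[0] for y in pixels])
    s = abundances.sum(axis=1, keepdims=True)
    return abundances / np.where(s > 0, s, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    E = rng.random((200, 3))                     # 200 bands, 3 endmembers
    A = rng.dirichlet(np.ones(3), size=50)       # true fractional abundances
    Y = A @ E.T + 0.001 * rng.standard_normal((50, 200))
    A_hat = unmix_nnls(Y, E)
    print(np.abs(A_hat - A).max())
```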
A non-parametric approach to anomaly detection in hyperspectral images
Author(s):
Tiziana Veracini;
Stefania Matteoli;
Marco Diani;
Giovanni Corsini;
Sergio U. de Ceglie
Abstract:
In the past few years, spectral analysis of data collected by hyperspectral sensors aimed at automatic anomaly detection
has become an interesting area of research. In this paper, we are interested in an Anomaly Detection (AD) scheme for
hyperspectral images in which spectral anomalies are defined with respect to a statistical model of the background Probability
Density Function (PDF). The characterization of the PDF of hyperspectral imagery is not trivial. We approach the
background PDF estimation through the Parzen Windowing PDF estimator (PW). PW is a flexible and valuable tool for
accurately modeling unknown PDFs in a non-parametric fashion. Although such an approach is well known and has been
widely employed, its use within an AD scheme has not been investigated yet. In practice, the PW ability to
estimate PDFs is strongly influenced by the choice of the bandwidth matrix, which controls the degree of smoothing of
the resulting PDF approximation. Here, a Bayesian approach is employed to carry out the bandwidth selection. The resulting
estimated background PDF is then used to detect spectral anomalies within a detection scheme based on the
Neyman-Pearson approach. Real hyperspectral imagery is used for an experimental evaluation of the proposed strategy.
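A compact sketch of such a detection scheme, using scipy's Gaussian kernel density estimator as the Parzen-window background model and flagging pixels whose background likelihood falls below a threshold tied to a desired false-alarm rate; the default bandwidth rule and the quantile threshold are illustrative assumptions, not the Bayesian bandwidth selection used in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def parzen_anomaly_detector(background, test, pfa=0.01):
    """background: (L, N) spectra used to fit the background PDF;
    test: (L, M) spectra to score. Returns a boolean anomaly mask of length M."""
    kde = gaussian_kde(background)                   # Parzen window, Scott's-rule bandwidth
    threshold = np.quantile(kde(background), pfa)    # Neyman-Pearson style threshold
    return kde(test) < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    bg = rng.multivariate_normal(np.zeros(4), np.eye(4), size=2000).T
    anomalies = rng.multivariate_normal(5 * np.ones(4), np.eye(4), size=10).T
    mask = parzen_anomaly_detector(bg, np.hstack([bg[:, :20], anomalies]))
    print(mask.sum(), "pixels flagged")
```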
Unmixing hyperspectral intimate mixtures
Author(s):
José M. P. Nascimento;
José M. Bioucas-Dias
Abstract:
This paper addresses the unmixing of hyperspectral images, when intimate mixtures are present. In these
scenarios the light suffers multiple interactions among distinct endmembers, which is not accounted for by the
linear mixing model.
A two-step method to unmix hyperspectral intimate mixtures is proposed: first, based on the Hapke intimate
mixture model, the reflectance is converted into the average single-scattering albedo. Second, the mass fractions of
the endmembers are estimated by a recently proposed method termed simplex identification via split augmented
Lagrangian (SISAL). The proposed method is evaluated on a well known intimate mixture data set.
Gas plume quantification in downlooking hyperspectral longwave infrared images
Author(s):
Caroline S. Turcotte;
Michael R. Davenport
Abstract:
Algorithms have been developed to support quantitative analysis of a gas plume using down-looking airborne
hyperspectral long-wave infrared (LWIR) imagery. The resulting gas quantification "GQ" tool estimates the quantity of
one or more gases at each pixel, and estimates uncertainty based on factors such as atmospheric transmittance,
background clutter, and plume temperature contrast. GQ uses gas-insensitive segmentation algorithms to classify the
background very precisely so that it can infer gas quantities from the differences between plume-bearing pixels and
similar non-plume pixels. It also includes MODTRAN-based algorithms to iteratively assess various profiles of air
temperature, water vapour, and ozone, and select the one that implies smooth emissivity curves for the (unknown)
materials on the ground. GQ then uses a generalized least-squares (GLS) algorithm to simultaneously estimate the most
likely mixture of background (terrain) material and foreground plume gases. Cross-linking of plume temperature to the
estimated gas quantity is very non-linear, so the GLS solution was iteratively assessed over a range of plume
temperatures to find the best fit to the observed spectrum. Quantification errors due to local variations in the
camera-to-pixel distance were suppressed using a subspace projection operator.
Lacking detailed depth-maps for real plumes, the GQ algorithm was tested on synthetic scenes generated by the Digital
Imaging and Remote Sensing Image Generation (DIRSIG) software. Initial results showed pixel-by-pixel gas
quantification errors of less than 15% for a Freon 134a plume.
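The core estimation step, a generalized least-squares fit of background and gas signature spectra to an observed pixel spectrum, can be sketched as below; the simple grid over plume temperature, the toy radiance model, and the diagonal noise covariance stand in for the iterative assessment described above and are assumptions for illustration.

```python
import numpy as np

def gls_fit(A, y, cov):
    """Generalized least squares: argmin (y - A b)^T cov^-1 (y - A b)."""
    ci = np.linalg.inv(cov)
    return np.linalg.solve(A.T @ ci @ A, A.T @ ci @ y)

def quantify_plume(y, background, gas_signature, cov, plume_temps, radiance_of):
    """Grid over candidate plume temperatures; at each one, build the gas column from a
    temperature-dependent radiance model and keep the GLS solution with the smallest
    weighted residual (the nonlinearity is handled by the grid search)."""
    best = None
    for t in plume_temps:
        A = np.column_stack([background, radiance_of(gas_signature, t)])
        b = gls_fit(A, y, cov)
        r = y - A @ b
        score = r @ np.linalg.inv(cov) @ r
        if best is None or score < best[0]:
            best = (score, t, b)
    return best   # (residual, plume temperature, [background weight, gas quantity])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    bands = 60
    bg, gas = rng.random(bands), rng.random(bands)
    model = lambda g, t: g * (t / 300.0)             # toy temperature dependence
    y = 0.8 * bg + 0.3 * model(gas, 310.0) + 0.01 * rng.standard_normal(bands)
    print(quantify_plume(y, bg, gas, 0.01**2 * np.eye(bands),
                         np.linspace(290, 330, 9), model)[1])
```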
Segmentation of very high spatial resolution panchromatic images based on wavelets and evidence theory
Author(s):
Antoine Lefebvre;
Thomas Corpetti;
Laurence Hubert-Moy
Abstract:
This paper is concerned with the segmentation of very high spatial resolution panchromatic images. We
propose a method for unsupervised segmentation of remotely sensed images based on texture information
and evidence theory. We first perform a segmentation of the image using a watershed on coefficients
derived from a wavelet decomposition of the initial image. This yields an over-segmented map in which
objects that are similar from a textural point of view are subsequently aggregated. The information of
texture is obtained by analyzing the wavelet coefficients of the original image. At each band of the wavelet
decomposition, we compute an indicator of similarity between two objects. All the indicators are then fused
using some rules of evidence theory to derive a unique criterion of similarity between two objects.
Simultaneous hierarchical segmentation and vectorization of satellite images through combined non-uniform data sampling and anisotropic triangulation
Author(s):
Jacopo Grazzini;
Scott Dillard;
Lakshman Prasad
Abstract:
The automatic detection, recognition, and segmentation of object classes in remote sensed images is of crucial
importance for scene interpretation and understanding. However, it is a difficult task because of the high
variability of satellite data. Indeed, the observed scenes usually exhibit a high degree of complexity, where
complexity refers to the large variety of pictorial representations of objects with the same semantic meaning and
also to the extensive amount of available details. Therefore, there is still a strong demand for robust techniques for
automatic information extraction and interpretation of satellite images. In parallel, there is a growing interest in
techniques that can extract vector features directly from such imagery. In this paper, we investigate the problem
of automatic hierarchical segmentation and vectorization of multispectral satellite images. We propose a new
algorithm composed of the following steps: (i) a non-uniform sampling scheme extracting most salient pixels in
the image, (ii) an anisotropic triangulation constrained by the sampled pixels taking into account both strength
and directionality of local structures present in the image, (iii) a polygonal grouping scheme merging, through
techniques based on perceptual information, the obtained segments into a smaller number of higher-level vector
objects. Besides its computational efficiency, this approach provides a meaningful polygonal representation for
subsequent image analysis and/or interpretation.
A waterfall segmentation algorithm for coastline detection in SAR images
Author(s):
Fernando Soares;
Giovanni Nico
Abstract:
This work presents a morphology-based segmentation approach for coastline detection based on a waterfall
hierarchical scheme. The hierarchical waterfall is constrained by markers at each step of the hierarchical tree. In
this manner, a map of the waterfall minimum persistency is created to identify the coastline. The proposed
algorithm was tested on Envisat-ASAR and TerraSAR-X images acquired over the Lisbon region.
Object based and geospatial image analysis: a semi-automatic pre-operational system
Author(s):
Julien Michel;
Jordi Inglada;
Julien Malik
Abstract:
High resolution optical remote sensing images make it possible to produce accurate land-cover maps. This is usually
achieved using an ad-hoc mixture of image segmentation and supervised classification. The main drawback of
this approach is that it does not scale to complete real-world scenes. In this paper we present a framework
which makes it possible to implement this kind of image analysis without scaling issues.
Stochastic band selection method based on a spectral angle class separability criterion
Author(s):
Ph. Déliot;
M. Kervella
Abstract:
Band selection methods often assume a normal distribution or, at least, a significant number of samples per class to
compute statistical parameters. In this paper, we propose a band selection technique that needs very few training samples
per class to be effective. To take into account the spectral variability inside the classes and to be independent of statistical
parameters, we propose to use a criterion which measures the separability between two classes. This criterion is based on
the extension of the Spectral Angle Mapper (SAM) to within-class and between-class SAM measures.
The proposed selection method consists in eliminating spectral bands from the original set according to this
criterion, which checks the increase of the class-separability measure on the remaining band subset. We use a
stochastic algorithm to choose, at each step, which band to eliminate. We proceed by successive eliminations until we
reach the desired number of bands or the maximum of the criterion. This top-down method allows all the interesting
bands to be taken into account simultaneously during the whole process, instead of selecting them one by one.
At the end, the method provides the selected spectral bands for a pair of classes. We extend this two-class selection
technique to multiclass band selection. We further improve the method by adding a pre-selection of interesting bands
based on a measure of the spectral signal-to-noise ratio.
Some examples are given to show the effectiveness of the method: as one main application is classification, we compare
the classification results achieved after the data reduction performed by different methods, and we check the efficiency
according to the number of training samples.
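A sketch of the kind of spectral-angle separability criterion and stochastic backward elimination described above; the ratio of between-class to within-class mean angles and the number of random candidates per step are illustrative assumptions, not the exact criterion of the paper.

```python
import numpy as np

def sam(a, b):
    """Spectral Angle Mapper between two spectra (radians)."""
    return np.arccos(np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0))

def separability(class_a, class_b, bands):
    """Ratio of mean between-class angle to mean within-class angle on a band subset."""
    A, B = class_a[:, bands], class_b[:, bands]
    between = np.mean([sam(x, y) for x in A for y in B])
    within_pairs = ([(A[i], A[j]) for i in range(len(A)) for j in range(len(A)) if i != j] +
                    [(B[i], B[j]) for i in range(len(B)) for j in range(len(B)) if i != j])
    within = np.mean([sam(x, y) for x, y in within_pairs])
    return between / max(within, 1e-12)

def backward_band_selection(class_a, class_b, n_keep, n_candidates=5, seed=0):
    """Stochastic backward elimination: drop, among a few random candidates, the band
    whose removal most increases the separability of the remaining subset."""
    rng = np.random.default_rng(seed)
    bands = list(range(class_a.shape[1]))
    while len(bands) > n_keep:
        candidates = rng.choice(bands, size=min(n_candidates, len(bands)), replace=False)
        scores = {b: separability(class_a, class_b, [x for x in bands if x != b])
                  for b in candidates}
        bands.remove(max(scores, key=scores.get))
    return bands

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    a = rng.random((6, 30)) + np.linspace(0, 1, 30)   # few training samples per class
    b = rng.random((6, 30))
    print(backward_band_selection(a, b, n_keep=10))
```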
Supervised super-resolution to improve the resolution of hyperspectral images classification maps
Author(s):
Alberto Villa;
Jocelyn Chanussot;
Jon Atli Benediktsson;
Christian Jutten
Abstract:
Hyperspectral imaging is a continuously growing area of remote sensing. Hyperspectral data provide a wide
spectral range, coupled with a very high spectral resolution, and are suitable for detection and classification of
surfaces and chemical elements in the observed image. The main problem with hyperspectral data for these
applications is the (relatively) low spatial resolution, which can vary from a few to tens of meters. In the
case of classification purposes, the major problem caused by low spatial resolution is related to mixed pixels,
i.e., pixels in the image where more than one land cover class is within the same pixel. In such a case, the
pixel cannot be considered as belonging to just one class, and the assignment of the pixel to a single class
will inevitably lead to a loss of information, no matter what class is chosen. In this paper, a new supervised
technique exploiting the advantages of both probabilistic classifiers and spectral unmixing algorithms is proposed,
in order to produce land cover maps of improved spatial resolution. The method is in three steps. In a first
step, a coarse classification is performed, based on the probabilistic output of a Support Vector Machine (SVM).
Every pixel can be assigned to a class, if the probability value obtained in the classification process is greater
than a chosen threshold, or unclassified. In the proposed approach it is assumed that the pixels with a low
probabilistic output are mixed pixels and thus their classification is addressed in a second step. In the second
step, spectral unmixing is performed on the mixed pixels by considering the preliminary results of the coarse
classification step and applying a Fully Constrained Least Squares (FCLS) method to every unlabeled pixel, in
order to obtain the abundance fractions of each land cover type. Finally, in the third step, spatial regularization
by Simulated Annealing is performed to obtain the resolution improvement. Experiments were carried out on
a real hyperspectral data set. The results are good both visually and numerically and show that the proposed
method clearly outperforms common hard classification methods when the data contain mixed pixels.
Unbiased query-by-bagging active learning for VHR image classification
Author(s):
Loris Copa;
Devis Tuia;
Michele Volpi;
Mikhail Kanevski
Abstract:
A key factor for the success of supervised remote sensing image classification is the definition of an efficient training
set. Suboptimality in the selection of the training samples can lead to low classification performance. Active
learning algorithms aim at building the training set in a smart and efficient way, by finding the most relevant
samples for model improvement and thus iteratively improving the classification performance. In uncertainty-based
approaches, a user-defined heuristic ranks the unlabeled samples according to the classifier's uncertainty
about their class membership. Finally, the user is asked to define the labels of the pixels scoring maximum
uncertainty. In the present work, an unbiased uncertainty scoring function encouraging sampling diversity is
investigated. A modified version of the Entropy Query by Bagging (EQB) approach is presented and tested
on very high resolution imagery using both SVM and LDA classifiers. The advantages of favoring diversity in the
heuristic are discussed. Thanks to the diverse sampling it encourages, the proposed unbiased approach leads to higher
convergence rates in the first iterations for both models considered.
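A sketch of an entropy-query-by-bagging heuristic of this kind: several SVMs are trained on bootstrap replicates of the current training set and unlabeled pixels are ranked by the entropy of the committee votes, normalized here by the number of predicted classes as one possible unbiasing choice; the ensemble size and classifier settings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

def neqb_scores(X_train, y_train, X_pool, n_bags=8, seed=0):
    """Normalized entropy of committee votes for every sample in the unlabeled pool."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_bags):
        Xb, yb = resample(X_train, y_train, random_state=int(rng.integers(10**6)))
        votes.append(SVC(kernel="rbf", gamma="scale").fit(Xb, yb).predict(X_pool))
    votes = np.array(votes)                          # (n_bags, n_pool)
    scores = np.zeros(votes.shape[1])
    for i in range(votes.shape[1]):
        _, counts = np.unique(votes[:, i], return_counts=True)
        p = counts / counts.sum()
        h = -(p * np.log(p)).sum()
        # divide by the entropy ceiling of the classes actually predicted (unbiasing)
        scores[i] = h / np.log(len(counts)) if len(counts) > 1 else 0.0
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.random((300, 4)); y = (X[:, 0] + X[:, 1] > 1).astype(int)
    s = neqb_scores(X[:40], y[:40], X[40:])
    print(np.argsort(s)[-5:])                        # pixels to submit to the user
```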
Multitask SVM learning for remote sensing data classification
Author(s):
Jose M. Leiva-Murillo;
Luis Gómez-Chova;
Gustavo Camps-Valls
Abstract:
Many remote sensing data processing problems are inherently constituted by several tasks that can be solved
either individually or jointly. For instance, each image in a multitemporal classification setting could be taken
as an individual task, but its relation to previous acquisitions should be properly considered. In such problems,
different modalities of the data (temporal, spatial, angular) give rise to changes between the training and
test distributions, which constitutes a difficult learning problem known as covariate shift. Multitask learning
methods aim at jointly solving a set of prediction problems in an efficient way by sharing information across
tasks. This paper presents a novel kernel method for multitask learning in remote sensing data classification. The
proposed method alleviates the dataset shift problem by imposing cross-information in the classifiers through
matrix regularization. We consider the support vector machine (SVM) as core learner and two regularization
schemes are introduced: 1) the Euclidean distance of the predictors in the Hilbert space; and 2) the inclusion
of relational operators between tasks. Experiments are conducted on the challenging remote sensing problems of
cloud screening from multispectral MERIS images and landmine detection.
Classification of filtered multichannel images
Author(s):
Dmitriy V. Fevralev;
Vladimir V. Lukin;
Nikolay N. Ponomarenko;
Benoit Vozel;
Kacem Chehdi;
Andriy Kurekin;
Lik-Kwan Shark
Abstract:
A typical tendency in modern remote sensing (RS) is to apply multichannel systems. Images formed by such systems are
noisy to a greater or lesser degree. Thus, their pre-filtering can serve different purposes, in particular, to improve
classification. In this paper, we consider methods of multichannel image denoising based on the discrete cosine transform
(DCT) and analyze how the parameters of these methods affect classification. Both component-wise and 3D denoising are
studied for a three-channel Landsat test image. It is shown that, for better discrimination of the different classes, DCT-based
filters are efficient in both their component-wise and 3D variants, but with different tuning of the involved parameters. The
parameters can be optimized with respect to either standard MSE or metrics that characterize image visual quality. Best
results are obtained with 3D denoising. Although the main conclusions basically coincide for both considered
classifiers, Radial Basis Function Neural Network (RBF NN) and Support Vector Machine (SVM), the classification
results appear slightly better with RBF NN for the experiment carried out in this paper.
Classification of very high resolution SAR images of urban areas by dictionary-based mixture models, copulas, and Markov random fields using textural features
Author(s):
Aurélie Voisin;
Gabriele Moser;
Vladimir A. Krylov;
Sebastiano B. Serpico;
Josiane Zerubia
Abstract:
This paper addresses the problem of the classification of very high resolution (VHR) SAR amplitude images of
urban areas. The proposed supervised method combines a finite mixture technique to estimate class-conditional
probability density functions, Bayesian classification, and Markov random fields (MRFs). Textural features, such
as those extracted by the grey-level co-occurrence method, are also integrated in the technique, as they improve
the discrimination of urban areas. Copulas are applied to estimate bivariate joint class-conditional
statistics, merging the marginal distributions of both textural and SAR amplitude features. The resulting joint
distribution estimates are plugged into a hidden MRF model, endowed with a modified Metropolis dynamics
scheme for energy minimization. Experimental results with COSMO-SkyMed and TerraSAR-X images point out
the accuracy of the proposed method, also as compared with previous contextual classifiers.
A novel approach to land-cover maps updating in complex scenarios based on multitemporal remote sensing images
Author(s):
K. Bahirat;
F. Bovolo;
L. Bruzzone;
S. Chaudhuri
Abstract:
Nowadays, an ever increasing number of multi-temporal images is available, giving the possibility of obtaining information
about the land-cover evolution on the ground with high temporal frequency. In general, the production of accurate
land-cover maps requires the availability of reliable ground truth information on the considered area for each image to be
classified. Unfortunately, the rate of ground truth information collection will never equal the remote sensing image
acquisition rate, making supervised classification unfeasible for land-cover map updating. This problem has been faced
with domain adaptation methods that update land-cover maps under the assumptions that: i) training data are
available for one of the considered multi-temporal acquisitions while they are not for the others; and ii) the set of land-cover
classes is the same for all considered acquisitions. In real applications, the latter assumption represents a constraint which is
often not satisfied due to possible changes that occurred on the ground and are associated with the presence of new classes or the
absence of old classes in the new images. In this work, we propose an approach that removes this constraint by
automatically identifying whether there exist differences between classes in multi-temporal images and properly
handling these differences in the updating process. Experimental results on a real multi-temporal remote sensing data set
confirm the effectiveness and the reliability of the proposed approach.
Gaussian process classification using automatic relevance determination for SAR target recognition
Author(s):
Xiangrong Zhang;
Limin Gou;
Biao Hou;
Licheng Jiao
Abstract:
In this paper, a Synthetic Aperture Radar Automatic Target Recognition approach based on Gaussian process (GP)
classification is proposed. It adopts kernel principal component analysis to extract sample features and implements target
recognition by using GP classification with an automatic relevance determination (ARD) function. Compared with
k-Nearest Neighbor, Naïve Bayes classifier and Support Vector Machine, GP with ARD has the advantage of automatic
model selection and hyper-parameter optimization. The experiments on UCI datasets and MSTAR database show that
our algorithm is self-tuning and has better recognition accuracy as well.
Linear and kernel methods for multi- and hypervariate change detection
Author(s):
Allan A. Nielsen;
Morton J. Canty
Abstract:
The iteratively re-weighted multivariate alteration detection (IR-MAD) algorithm may be used both for unsupervised
change detection in multi- and hyperspectral remote sensing imagery and for automatic radiometric
normalization of multi- or hypervariate multitemporal image sequences. Principal component analysis (PCA) as
well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images,
both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change
background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the
data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products
of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature
space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in
turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel
function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component
analysis (PCA), kernel MAF and kernel MNF analyses handle nonlinearities by implicitly transforming data into
high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in
that space.
In image analysis the Gram matrix is often prohibitively large (its size is the number of pixels in the image
squared). In this case we may sub-sample the image and carry out the kernel eigenvalue analysis on a set of
training data samples only. To obtain a transformed version of the entire image we then project all pixels, which
we call the test data, mapped nonlinearly onto the primal eigenvectors.
IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and
kernel PCA/MAF/MNF transformations have been written which function as transparent and fully integrated
extensions of the ENVI remote sensing image analysis environment. Also, Matlab code exists which allows for
fast data exploration and experimentation with smaller datasets. Computationally demanding kernelization of
test data with training data and kernel image projections have been programmed to run on massively parallel
CUDA-enabled graphics processors, when available, giving a tenfold speed enhancement. The software will be
available from the authors' websites in the near future.
A data example shows the application to bi-temporal RapidEye data covering the Garzweiler open pit mine
in the Ruhr area in Germany.
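The sub-sampling strategy can be sketched with scikit-learn's kernel PCA: the eigenproblem is solved on a random sample of pixels (the training data) and the whole image (the test data) is then projected onto the resulting eigenvectors; the sample size and RBF kernel settings are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kernel_pca_image(image, n_components=3, n_train=1000, gamma=None, seed=0):
    """image: (H, W, B). Fit kernel PCA on a random subset of pixels (training data),
    then project every pixel (test data) to obtain transformed band images."""
    h, w, b = image.shape
    pixels = image.reshape(-1, b)
    rng = np.random.default_rng(seed)
    train = pixels[rng.choice(pixels.shape[0], size=n_train, replace=False)]
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma).fit(train)
    return kpca.transform(pixels).reshape(h, w, n_components)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    img = rng.random((64, 64, 5))
    print(kernel_pca_image(img).shape)               # (64, 64, 3)
```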
An automatic approach to the unsupervised detection of multiple changes in multispectral images
Author(s):
F. Bovolo;
S. Marchesi;
L. Bruzzone
Abstract:
In this paper we present a technique for the detection of multiple changes in multitemporal and multispectral remote
sensing images. The technique is based on: i) the representation of the change detection problem in polar coordinates;
and ii) a two-step decision strategy. First of all, the change information present in the multitemporal dataset is represented
taking advantage of the framework for change detection in polar coordinates. Within this representation the Bayesian
decision theory is applied twice: the first time for distinguishing changed from unchanged pixels; and the second one for
discriminating different kinds of change within changed pixels. The procedure exploits the Expectation-Maximization
algorithm and is completely automatic and unsupervised. Experiments carried out on high and very high resolution
multispectral and multitemporal datasets confirmed the effectiveness of the proposed approach.
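A minimal sketch of a two-step decision strategy of this kind in polar coordinates: an EM-fitted two-component Gaussian mixture on the magnitude separates changed from unchanged pixels, and a second mixture on the direction of the changed pixels discriminates kinds of change; fixing the number of change kinds is an illustrative assumption rather than the automatic estimation used in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def polar_change_detection(img1, img2, n_change_kinds=2, seed=0):
    """img1, img2: (H, W, 2) co-registered bi-temporal images (two bands for clarity)."""
    d = (img2 - img1).reshape(-1, 2)
    rho = np.linalg.norm(d, axis=1)                          # magnitude
    theta = np.arctan2(d[:, 1], d[:, 0])                     # direction
    gm = GaussianMixture(2, random_state=seed).fit(rho.reshape(-1, 1))   # EM, step 1
    changed_label = np.argmax(gm.means_.ravel())             # cluster with larger magnitude
    changed = gm.predict(rho.reshape(-1, 1)) == changed_label
    kinds = np.full(rho.shape, -1)
    if changed.any():
        gk = GaussianMixture(n_change_kinds, random_state=seed)
        kinds[changed] = gk.fit_predict(theta[changed].reshape(-1, 1))   # EM, step 2
    return changed.reshape(img1.shape[:2]), kinds.reshape(img1.shape[:2])

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    a = rng.random((32, 32, 2)); b = a.copy()
    b[:8, :8] += 0.8; b[-8:, -8:] -= 0.8                     # two kinds of change
    cmask, ckinds = polar_change_detection(a, b)
    print(cmask.sum(), np.unique(ckinds))
```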
Multiresolution segmentation adapted for object-based change detection
Author(s):
Clemens Listner;
Irmgard Niemeyer
Abstract:
In object-based change detection approaches using specified object features as change measures, segmentation
is the crucial step, especially when shape changes are also considered. In this paper we present an enhanced
segmentation procedure based on multiresolution segmentation. The procedure segments the first image
using the multiresolution segmentation. The segmentation is then applied to the second image and checked for
its consistency. If a segment is found to be inconsistent with the second image, it is split up. The performance
of the proposed procedure is demonstrated based on simulated and real image data.
Unsupervised change detection by kernel clustering
Author(s):
Michele Volpi;
Devis Tuia;
Gustavo Camps-Valls;
Mikhail Kanevski
Abstract:
This paper presents a novel unsupervised clustering scheme to find changes in two or more coregistered remote
sensing images acquired at different times. This method is able to find nonlinear boundaries to the change
detection problem by exploiting a kernel-based clustering algorithm. The kernel k-means algorithm is used in
order to cluster the two groups of pixels belonging to the 'change' and 'no change' classes (binary mapping). In
this paper, we provide an effective way to solve the two main challenges of such approaches: i) the initialization
of the clustering scheme and ii) a way to estimate the kernel function hyperparameter(s) without an explicit
training set. The former is solved by initializing the algorithm on the basis of the Spectral Change Vector (SCV)
magnitude and the latter is optimized by minimizing a cost function inspired by the geometrical properties of
the clustering algorithm. Experiments on VHR optical imagery prove the consistency of the proposed approach.
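A compact kernel k-means sketch for the binary change/no-change problem, initialized from the spectral change vector (SCV) magnitude as described above, with feature-space distances computed from the RBF Gram matrix only; the fixed bandwidth is an illustrative assumption in place of the cost-function-based hyperparameter estimation proposed in the paper.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans_change(img1, img2, gamma=1.0, n_iter=20):
    """img1, img2: (H, W, B). Returns a binary map from two-cluster kernel k-means."""
    scv = (img2 - img1).reshape(-1, img1.shape[-1])
    K = rbf_kernel(scv, gamma=gamma)                 # Gram matrix (small images only)
    mag = np.linalg.norm(scv, axis=1)
    labels = (mag > np.median(mag)).astype(int)      # SCV-magnitude initialization
    for _ in range(n_iter):
        dist = np.zeros((scv.shape[0], 2))
        for c in (0, 1):
            idx = labels == c
            n = int(idx.sum())
            if n == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x) - mu_c||^2 up to a constant, using kernel evaluations only
            dist[:, c] = -2.0 * K[:, idx].sum(axis=1) / n + K[np.ix_(idx, idx)].sum() / n**2
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    # cluster 1 was initialized on high SCV magnitude, so it is taken as 'change'
    return (labels == 1).reshape(img1.shape[:2])

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    a = rng.random((24, 24, 3)); b = a.copy()
    b[:10, :10] += 0.7                               # simulated change
    print(kernel_kmeans_change(a, b, gamma=2.0).sum())
```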
Analysing multitemporal SAR images for forest mapping
Author(s):
Yasser Maghsoudi;
Michael J. Collins;
Donald G. Leckie
Abstract:
The objective of this paper is twofold: first, to present a generic approach for the analysis of Radarsat-1
multitemporal data and, second, to present a multi-classifier scheme for the classification of multitemporal
images. The general approach consists of a preprocessing step and a classification step. In the preprocessing stage, the
images are calibrated, registered, and then temporally filtered. The resulting multitemporally filtered images
are subsequently used as the input images in the classification step. The first step in the classifier design is to
select the most informative features from a series of multitemporal SAR images. Most feature selection
algorithms seek a single set of features that distinguishes among all the classes simultaneously, which limits the
achievable classification accuracy. In this paper, a class-based feature selection (CBFS) scheme is proposed. In this
scheme, instead of selecting features for all classes at once, the features are selected for each class separately.
The selection is based on the calculation of the Jeffries-Matusita (JM) distance of each class from the rest of the classes. Afterwards,
a maximum likelihood classifier is trained on each of the selected feature subsets. Finally, the outputs of the
classifiers are combined through a combination mechanism. Experiments are performed on a set of 34 Radarsat-1
images acquired from August 1996 to February 2007. A set of 9 classes in a forest area is used in this study.
Classification results confirm the effectiveness of the proposed approach compared with the case of single feature
selection. Moreover, the proposed process is generic and hence applicable to different mapping purposes for
which a multitemporal set of SAR images is available.
A new type of remote sensors which allow directly forming certain statistical estimates of images
Author(s):
Boris Podlaskin;
Elena Guk;
Andrey Karpenko
Abstract:
A new approach to the problems of statistical and structural pattern recognition and to signal processing and image analysis
techniques is considered. These problems are extremely important for tasks solved by airborne and space
borne remote sensing systems.
The development of new remote sensors for image and signal processing is inherently connected with the possibility of
statistical processing of images. Fundamentally new optoelectronic "Multiscan" sensors are suggested in the
present paper. Such sensors make it possible to directly form certain statistical estimates, which describe the different
types of images completely enough. The sensors under discussion perform Lebesgue-Stieltjes signal integration
rather than Cauchy-Riemann integration. This makes it possible to create integral functionals for determining statistical features of
images. The use of these integral functionals for image processing provides good agreement between the obtained statistical
estimates and the required image information features.
The Multiscan remote sensor makes it possible to create a set of integral moments of an input image, right up to high-order integral
moments, to form a quantile representation of the input image, which provides a count-number-limited texture description, and to form a
median, which provides the localisation of a low-contrast horizon line in fog, the localisation of a water flow boundary, etc.
This work presents both the description of the design concept of the new remote sensor and the mathematical apparatus
that makes it possible to create input image statistical features and integral functionals.
Real-time orthorectification by FPGA-based hardware acceleration
Author(s):
David Kuo;
Don Gordon
Abstract:
Orthorectification, which corrects the perspective distortion of remote sensing imagery, providing accurate geolocation and
ease of correlation to other images, is a valuable first step in image processing for information extraction. However, the
large amount of metadata and the floating-point matrix transformations required to operate on each pixel make this a
computation- and I/O (Input/Output)-intensive process. As a result, much imagery is either left unprocessed or loses
time-sensitive value in the long processing cycle.
However, the computation on each pixel can be reduced substantially by using the computational results of neighboring
pixels, and it can be accelerated by one to two orders of magnitude with a special pipelined hardware architecture. A specialized
coprocessor that is implemented inside an FPGA (Field Programmable Gate Array) chip and surrounded by vendor-supported
hardware IP (Intellectual Property) shares the computation workload with the CPU through a PCI-Express
interface. The ultimate speed of one pixel per clock (125 MHz) is achieved by the pipelined systolic array architecture.
The optimal partition between software and hardware, the timing profile among image I/O and computation, and the
highly automated GUI (Graphical User Interface) that fully exploits this speed increase to maximize overall image
production throughput will also be discussed. The software that runs on a workstation with the acceleration hardware
orthorectifies 16 Megapixels per second, which is 16 times faster than without the hardware. It turns the production time
from months to days. A real-life successful story of an imaging satellite company that adopted such workstations for
their orthorectified imagery production will be presented. The potential candidacy of the image processing computation
that can be accelerated more efficiently by the same approach will also be analyzed.
Nonlinear retrieval of atmospheric profiles from MetOp-IASI and MTG-IRS data
Author(s):
Gustavo Camps-Valls;
Luis Guanter;
Jordi Muñoz-Marí;
Luis Gómez-Chova;
Xavier Calbet
Abstract:
This paper evaluates the potential use of nonlinear retrieval methods to derive cloud, surface and atmospheric
properties from hyperspectral MetOp-IASI and MTG-IRS spectra. The methods are compared in terms of both
accuracy and speed with the current IASI and IRS L2 PPFP implementation, which consists of a principal component
extraction, typically referred to as Empirical Orthogonal Functions (EOF), followed by a canonical
linear regression. This research proposes the evaluation of some other methodological advances considering 1)
other linear feature extraction methods instead of EOF, such as (orthonormalized) partial least squares, and
2) the linear combination of nonlinear regression models in the form of committee of experts. The nonlinear
regression models considered in this work are artificial neural networks (NN) and kernel ridge regression (KRR)
as powerful nonparametric multioutput regression tools. Results show that, in general, nonlinear models outperform
the linear retrieval in both noisy and noise-free settings, and for both IASI and IRS
synthetic and real data. The combination of models makes the retrieval more robust, improves the accuracy,
and decreases the estimated bias. These results confirm the validity of the proposed approach for retrieval of
atmospheric profiles.
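A sketch of one of the nonlinear regressors considered above, kernel ridge regression preceded by a linear feature extraction step; PCA stands in here for the EOF/partial-least-squares projections, and the synthetic spectra, profile targets, and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline

def train_retrieval(radiances, profiles, n_features=20, alpha=1e-3):
    """radiances: (N, channels) spectra; profiles: (N, levels) atmospheric targets.
    Returns a fitted feature-extraction + KRR pipeline (KRR is natively multioutput)."""
    model = make_pipeline(PCA(n_components=n_features),
                          KernelRidge(kernel="rbf", alpha=alpha))
    return model.fit(radiances, profiles)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    X = rng.random((500, 300))                       # 300 spectral channels
    Y = X[:, :50] @ rng.random((50, 40)) + 0.01 * rng.standard_normal((500, 40))
    model = train_retrieval(X[:400], Y[:400])
    rmse = np.sqrt(((model.predict(X[400:]) - Y[400:]) ** 2).mean())
    print(round(rmse, 3))
```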
Assessment of soil surface BRDF using an imaging spectrometer
Author(s):
Z. Wang;
C. A. Coburn;
X. Ren;
D. Mazumdar;
S. Myshak;
A. Mullin;
P. M. Teillet
Abstract:
Ground reference data are important for understanding and characterizing angular effects on the images acquired by
satellite sensors with off-nadir capability. However, very few studies have considered image-based soil reference data for
that purpose. Compared to non-imaging instruments, imaging spectrometers can provide detailed information to
investigate the influence of spatial components on the bidirectional reflectance distribution function (BRDF) of a mixed
target. The research reported in this paper investigated soil spectral reflectance changes as a function of surface
roughness, scene components, and viewing geometry, as well as wavelength. Soil spectral reflectance is of particular
interest because it is an essential factor in interpreting the angular effects on images of vegetation canopies. BRDF data
of both rough and smooth soil surfaces were acquired in the laboratory at 30° illumination angle using a Specim V10E
imaging spectrometer mounted on the University of Lethbridge Goniometer System version 2.5 (ULGS-2.5).
The BRDF results showed that the BRDF of the smooth soil surface was dominated by illuminated pixels, whereas the
shaded pixels were a larger component of the BRDF of the rough surface. In the blue, green, red, and near-infrared
(NIR), greater BRDF variation was observed for the rough than for the smooth soil surface. For both soil surface
roughness categories, the BRDF exhibited a greater range of values in the NIR than in the blue, green, or red. The
imaging approach allows the characterization of the impact of spatial components on soil BRDF and leads to an
improved understanding of soil reflectance compared to non-imaging BRDF approaches. The imaging spectrometer is an
important sensor for BRDF investigations where the effects of individual spatial components need to be identified.
A PolSAR image despeckle filter based on evidence theory
Author(s):
Saïd Kharbouche
Abstract:
Images produced by a SAR (Synthetic Aperture Radar) sensor are affected by a specific noise called speckle;
therefore, many studies have been dedicated to modelling this noise with the aim of reducing its
effects. However, studies in the area of polarimetric SAR (PolSAR) image despeckling are still scarce and do not take
proper advantage of the polarimetric information. In this context, this paper describes an original and efficient method
for despeckling PolSAR images in order to improve the visualization and the extraction of planimetric features.
The proposed filter takes all polarization modes into account when despeckling each polarization mode. So, for a
pixel in a single polarization mode, the modification of its radiometric value is supervised by its adjacent
pixels in the same polarization mode and also by their equivalent pixels in the other polarization modes. Furthermore,
to avoid error propagation, the filter is very cautious in modifying radiometric values: it runs over many iterations,
modifying the least ambiguous pixels first and leaving the rest of the pixels to the
next iterations for possible modification. To combine the information resulting from each polarization mode
and make a decision, the proposed filter relies on combination rules of evidence theory. The experimentation was done
on Radarsat-2 images of the Arctic and Quebec regions of Canada, and the results clearly show the benefit and
the high performance of this despeckling approach.
Waterline extraction in optical images and InSAR coherence maps based on the geodesic time concept
Author(s):
Fernando Soares;
Giovanni Nico
Abstract:
An algorithm for waterline extraction from SAR images is presented, based on the estimation of the geodesic path,
or minimal path (MP), between two pixels on the waterline. For two given pixels, the geodesic time between them
is determined as the time of the shortest path connecting them, where the local time increment between
neighbouring pixels is given by the mean value of the pair. The MP is computed from the sum of the two geodesic
time functions propagated from the two end pixels. In general, an MP is therefore obtained with the knowledge
of two end pixels. Based on the 2-dimensional spreading of the estimated geodesic time function, the concepts
of propagation energy and strong pixels are introduced and tested for the waterline extraction by marking only
one pixel in the image.
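A sketch of geodesic-time propagation on an image by Dijkstra's algorithm, with the minimal path recovered by backtracking on the resulting time surface; the 4-connected grid, the mean-of-endpoints time increment, and the toy cost image are illustrative assumptions.

```python
import heapq
import numpy as np

def geodesic_time(cost, seed):
    """Dijkstra propagation of geodesic time from a seed pixel over a cost image."""
    h, w = cost.shape
    t = np.full((h, w), np.inf)
    t[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > t[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + 0.5 * (cost[i, j] + cost[ni, nj])   # mean-value time increment
                if nd < t[ni, nj]:
                    t[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return t

def minimal_path(cost, start, end):
    """Recover a minimal path between two end pixels by steepest descent on the
    geodesic time surface propagated from the start pixel."""
    t = geodesic_time(cost, start)
    path, p = [end], end
    while p != start:
        i, j = p
        nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < cost.shape[0] and 0 <= j + dj < cost.shape[1]]
        p = min(nbrs, key=lambda q: t[q])
        path.append(p)
    return path[::-1]

if __name__ == "__main__":
    c = np.ones((20, 20)); c[10, :] = 0.05           # cheap row mimicking a waterline
    print(len(minimal_path(c, (10, 0), (10, 19))))
```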
Site-specific land clutter modelling based on radar remote sensing images and digital terrain data
Author(s):
Andriy Kurekin;
Lik-Kwan Shark;
Kenneth Lever;
Darren Radford;
Dave Marshall
Abstract:
This paper extends the range of radar remote sensing applications by considering the application of remote sensing radar
images for site-specific land clutter modelling. Data fusion plays a central role in our approach, and enables effective
combination of remote sensing radar measurements with incomplete information about the Earth's surface provided by
optical sensors and digital terrain maps. The approach uses airborne remote sensing radar measurements to predict clutter
intensity for different terrain coordinates and utilises an empirical backscattering model to interpolate the radar
measurements to the grazing angles employed by a land-based radar sensor. The practical aspects of applying the
methodology to real-life remote sensing data and of generating an X-band land clutter map of the test site are discussed.
Fringe detection in SAR interferograms
Author(s):
Fernando Soares;
Giovanni Nico
Abstract:
In the last decades, many interferometric Synthetic Aperture Radar (SAR) applications have been developed aiming
at measuring terrain morphology or deformations. The geodetic information is carried by the interferometric
phase. However, this can be observed only in the principal interval, giving the so-called wrapped phase. To extract
the information of interest, e.g., surface height or terrain deformation, the absolute phase should be estimated
from the wrapped phase observations. In this work we present an approach to solve the phase unwrapping (PU)
problem relying on the local analysis of the wrapped phase gradients to recover fringes and phase jumps. The
proposed fringe detection algorithm is tested on both synthetic and real data. Synthetic phase surfaces characterized
by different signal-to-noise ratios are generated from a real topographic scene. Real interferograms obtained by processing
ENVISAT and TerraSAR-X SAR images are also used to test the above algorithm. First results show that the
proposed approach is able to recognize and reconstruct fringes in noisy interferograms.
Integration between calibrated time-of-flight camera data and multi-image matching approach for architectural survey
Author(s):
F. Chiabrando;
F. Nex;
D. Piatti;
F. Rinaudo
Abstract:
In this work, the integration between data provided by Time-of-Flight cameras and a multi-image matching technique for
metric surveys of architectural elements is presented. The main advantage is given by the quickness in the data
acquisition (a few minutes) and the reduced cost of the instruments. The goal of this approach is the automatic extraction
of the object breaklines in a 3D environment using a photogrammetric process, which meets the final user's need to
reduce the time required for drawing production. The results of the tests performed on some
architectural elements are reported in this paper.
Study on the capabilities of morphological attribute profiles in change detection on VHR images
Author(s):
Nicola Falco;
Mauro Dalla Mura;
Francesca Bovolo;
Jon Atli Benediktsson;
Lorenzo Bruzzone
Abstract:
The analysis of changes occurred in multi-temporal images acquired by the same sensor on the same geographical
area at different dates is usually done by performing a comparison of the two images after co-registration. When
one considers very high resolution (VHR) remote sensing images, the spatial information of the pixels becomes
very important and should be included in the analysis. However, taking into account spatial features for change
detection in VHR images is far from straightforward, due to effects such as seasonal variations, differences
in illumination conditions, residual mis-registration, different acquisition angles, etc., which make the comparison
of the structures in the scene difficult to achieve from a spatial perspective. In this paper we propose a change
detection technique based on morphological Attribute Profiles (APs) suitable for the analysis of VHR images.
In greater detail, this work aims at detecting the changes occurred on the ground between the two acquisitions
by comparing the APs computed on the image of each date. The experimental analysis has been carried out on
two VHR multi-temporal images acquired by the Quickbird sensor on the city of Bam, Iran, before and after
the earthquake occurred on Dec. 26, 2003. The experiments confirm that the APs computed at different dates
show different behaviors for changed and unchanged areas. The change detection maps obtained by the proposed
technique are able to detect changes in the morphology of the corresponding regions at different dates regardless
of their spectral variations.
Infrared stationary object acquisition and moving object tracking
Author(s):
Sengvieng Amphay;
David Gray
Abstract:
Currently, there is much interest in developing electro-optic and infrared stationary and moving object
acquisition and tracking algorithms for Intelligence, Surveillance, and Reconnaissance (ISR) and other
applications. Many of the existing EO/IR object acquisition and tracking techniques work well for good-quality
images, when object parameters such as size are well known. However, when dealing with noisy
and distorted imagery, many techniques are unable to acquire stationary objects or to acquire and track
moving objects.
This paper will discuss two inter-related problems: (1) stationary object detection and segmentation
and (2) moving object acquisition and tracking in a sequence of images that are acquired via an IR sensor
mounted on both stationary and moving platforms.
1. A stationary object detection and segmentation algorithm called "Weighted Adaptive Iterative
Statistical Threshold (WAIST)" will be described. The WAIST algorithm takes any intensity image and
separates object pixels from the background or clutter pixels. Two common image processing techniques
are nearest neighbors clustering and statistical thresholding. The WAIST algorithm uses both techniques
iteratively, making best use of both techniques. Statistical threshold takes advantage of the fact that object
pixels will exist above a threshold based on the statistical properties of the known noise pixels in the image.
The nearest neighbor technique takes advantage of the fact that when many neighboring pixels are known
object pixels, the pixel in question is more likely to be an object pixel. The WAIST algorithm initializes the
nearest neighbor parameters and statistical threshold parameters and adjusts them iteratively to converge to
an optimal solution. At each iteration, the algorithm conservatively declares pixels to be noise as the
statistical threshold is raised. This algorithm has proven able to segment objects of interest from noisy
backgrounds and clutter. Results of the effort are presented.
2. For moving object detection and tracking we identify the challenges that the user faces in this
problem; in particular, blind geo-registration of the acquired spatially-warped imagery and their calibration.
For moving object acquisition and tracking we present an adaptive signal/image processing approach that
utilizes multiple frames of the acquired imagery for geo-registration and sensor calibration. Our method
utilizes a cost function to associate detected moving objects in adjacent frames and these results are used to
identify the motion track of each moving object in the imaging scene. Results are presented using a
ground-based panning IR camera.
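A simplified sketch of the iterative threshold-plus-neighborhood idea behind WAIST: pixels exceeding a statistical threshold computed from the current noise pixels are kept only if enough of their neighbors are also candidates, and the threshold is raised conservatively at each pass; the factor schedule and the 3x3 neighbor rule are illustrative assumptions, not the exact WAIST parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def waist_like_segmentation(image, k_start=1.0, k_step=0.5, n_iter=5, min_neighbors=3):
    """Iterative statistical thresholding combined with a nearest-neighbor test.
    Returns a boolean object mask separating object pixels from background/clutter."""
    obj = np.zeros(image.shape, dtype=bool)
    k = k_start
    for _ in range(n_iter):
        noise = image[~obj] if (~obj).any() else image.ravel()
        thr = noise.mean() + k * noise.std()            # threshold from current noise stats
        candidate = image > thr
        # count candidate pixels in each 3x3 window, excluding the center pixel
        neigh = np.rint(uniform_filter(candidate.astype(float), size=3) * 9) - candidate
        obj = candidate & ((neigh >= min_neighbors) | obj)
        k += k_step                                      # raise the threshold conservatively
    return obj

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    img = rng.normal(0.0, 1.0, (64, 64))
    img[20:28, 30:40] += 6.0                             # bright stationary object
    print(waist_like_segmentation(img).sum())
```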
Total variation restoration of the defocus image based on spectral priors
Author(s):
Peng Liu;
Dingsheng Liu;
Zhiwen Liu
Abstract:
In this article, we deblur one out-of-focus image among several multispectral (MS) remote sensing images using a
total variation method. The unblurred images are used as priors in the restoration of the out-of-focus image. Although the
pixel-intensity distributions of the multimodal images from different CCD sensors differ greatly from one another,
the directions of their edges are very similar. These similar structures and this edge information are therefore used as
important priors, or constraints, in the total variation image restoration. The steps are as follows: first, the PAN (panchromatic)
image is approximated as a weighted sum of all the bands of the MS images, and the weight parameters relating
the PAN image to the MS images are computed by the least squares method; second, using this
relationship and the weight parameters, an initial estimate of the out-of-focus image is calculated; third, the total
variation image restoration is locally linearized by a fixed-point iterative method; fourth, the initial estimate of the
out-of-focus image is used to start the fixed-point iteration. Finally, by introducing the new priors obtained from the
relationship between the MS and PAN images, a new total variation image restoration framework is constructed. The edge and
gradient information from the unblurred images of the other channels help the total variation regularization better suppress
noise in the deconvolution. Comprehensive experiments are performed using different images with different levels of
noise. The proposed method achieves a higher PSNR than several other state-of-the-art methods.
Experiments confirm that the algorithm is particularly effective when the noise in the blurred remote sensing image is
relatively large.
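A minimal sketch of the first two steps described above, under the stated PAN ≈ Σ w_i·MS_i model: the weights come from a least-squares fit, and an initial estimate of the defocused band is derived from the residual. The variable names are ours, and the TV fixed-point iteration itself is omitted.

```python
import numpy as np

def pan_weights(pan, ms_bands):
    """Least-squares weights w such that PAN is approximated by sum_i w_i * MS_i."""
    A = np.stack([b.ravel() for b in ms_bands], axis=1)   # one column per MS band
    w, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
    return w

def initial_estimate(pan, sharp_bands, sharp_weights, blurred_weight):
    """Rough initial guess of the defocused band:
    MS_blur ~ (PAN - sum_i w_i * MS_i_sharp) / w_blur."""
    residual = pan - sum(w * b for w, b in zip(sharp_weights, sharp_bands))
    return residual / blurred_weight
```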
Comparison and evaluation of correspondence finding methods in 3D measurement systems using fringe projection
Author(s):
Christian Bräuer-Burchardt;
Max Möller;
Christoph Munkelt;
Peter Kühmstedt;
Gunther Notni
Show Abstract
Three different methods to realize point correspondences in 3D measurement systems based on fringe projection are
described and compared concerning accuracy, sensitivity, and handling. Advantages and disadvantages of the three
techniques are discussed. A suggestion is made to combine the principles in order to achieve an improved completeness
of the measurements.
The principle of a virtual image point raster, which is the basis for the combination of the methods, is explained. A model
to describe the random error of a 3D point measurement for the three methods is established and described. Simulations
and real measurements confirm this error model. Experiments are described and results are presented.
Modeling and simulation of high-resolution SAR clutter data
Author(s):
Hong Zhang;
Yanzhao Wu;
Chao Wang;
Xiaoyang Wen
Show Abstract
The modeling and simulation of SAR clutter is important for radar system design and signal processing. This paper
addresses the problem for high-resolution ground clutter. Based on the theory of clutter modeling, different probability
density function (PDF) models and autocorrelation function (ACF) models are compared, and the
best model is identified, taking TerraSAR-X data as an example. Finally, a simulation is performed with the Memoryless Non-Linear
Transformation (MNLT) method based on this best-fitting model. The results reveal that a mixture of Rayleigh and
log-normal distributions can fit homogeneous, heterogeneous, and extremely heterogeneous clutter regions. The
simulation shows that the MNLT method works well and that the PDF and ACF models can represent SAR clutter well according to the
homogeneity of the texture.
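To illustrate the MNLT step, the sketch below maps spatially correlated Gaussian noise through a target inverse CDF, so the output keeps the desired marginal PDF while inheriting the spatial correlation. For brevity a single Rayleigh marginal is used instead of the best-fitting Rayleigh/log-normal mixture reported in the paper, and all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm, rayleigh

def mnlt_clutter(shape, corr_sigma=3.0, scale=1.0, seed=0):
    """Memoryless Non-Linear Transformation sketch with a Rayleigh marginal."""
    rng = np.random.default_rng(seed)
    g = gaussian_filter(rng.standard_normal(shape), corr_sigma)  # impose an ACF
    g = (g - g.mean()) / g.std()                                 # back to ~N(0, 1)
    u = norm.cdf(g)                                              # uniform marginal
    return rayleigh.ppf(u, scale=scale)                          # target marginal
```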
Semantic structure tree with application to remote sensing image segmentation
Author(s):
Xiangrong Zhang;
Xian Pan;
Biao Hou;
Licheng Jiao
Show Abstract
This paper presents a new method based on a Semantic Structure Tree (SST) for remote sensing image segmentation, in
which semantic image analysis is used to construct the SST of the image. The leaves of the SST represent the
semantics of the image and correspond to a human semantic understanding of it, while the root of the tree is the whole image.
The SST uses grammar rules to construct a hierarchical structure of the image and gives a complete high-level
description of its semantic content. Experimental results show that the tree gives an efficient description of the semantic
content of a remote sensing image and can be used effectively for remote sensing image segmentation.
Experimental research on image motion measurement using optical joint transform correlator for space camera application
Author(s):
Hui Zhao;
Hongwei Yi;
Yingcai Li;
Chicheng Che;
Xiao Xiao
Show Abstract
The optical joint transform correlator (JTC) is an effective way to measure the image motion that is difficult to eliminate
in space cameras. In this manuscript, the principle of the JTC is briefly introduced, and a static experiment is designed to
demonstrate the suitability of the JTC for space camera applications. The results demonstrate that the RMS (root-mean-square)
error of the motion determination can be kept below 0.15 pixels in both the horizontal and vertical directions, which is
good enough to satisfy the space camera requirements. In addition, the performance of two methods used to locate the
cross-correlation peak is evaluated. Compared with the traditional centroid computation method, Intensity Weighting
Centroiding (IWC) is superior because it not only reduces the sensitivity of the cross-correlation peak location to the
window-size selection but also improves the measurement accuracy.
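A small sketch of an intensity-weighted centroid peak locator of the kind described above; the window size and weighting power are assumptions, not the paper's values, and the peak is assumed to lie away from the image border.

```python
import numpy as np

def intensity_weighted_centroid(corr, window=5, power=2):
    """Sub-pixel location of the cross-correlation peak: pixel intensities
    (raised to a power) weight the centroid inside a small window."""
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    half = window // 2
    patch = corr[py - half:py + half + 1, px - half:px + half + 1]
    w = np.clip(patch, 0.0, None) ** power
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    dy = (w * ys).sum() / w.sum()
    dx = (w * xs).sum() / w.sum()
    return py + dy, px + dx
```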
Post-earthquake road damage assessment using region-based algorithms from high-resolution satellite images
Author(s):
A. Haghighattalab;
A. Mohammadzadeh;
M. J. Valadan Zoej;
M. Taleai
Show Abstract
Accurate and comprehensive knowledge of the condition of roads after an earthquake strikes is crucial for
finding optimal paths and coordinating rescue missions. Continuous coverage of the disaster region and rapid access to
high-resolution satellite images make this technology a useful and powerful resource for post-earthquake damage
assessment and evaluation. Along with this improved technology, object-oriented classification has become a
promising alternative for classifying high-resolution remote sensing imagery such as QuickBird and Ikonos. Thus, in this
study, a novel approach is proposed for the automatic detection and assessment of damaged roads in urban areas based
on object-based classification techniques using a post-event satellite image and a vector map. The most challenging phase of
the proposed region-based algorithm is the segmentation procedure. The extracted regions are then classified using a
nearest-neighbor classifier that makes use of textural parameters. An appropriate fuzzy inference system (FIS) is then
proposed for road damage assessment. Finally, the roads are labeled as 'Blocked road' or 'Unblocked road' in
the road damage assessment step. The proposed method was tested on a QuickBird pan-sharpened image of Bam, Iran,
acquired after the devastating earthquake of December 2003. The visual investigation of the obtained results
demonstrates the efficiency of the proposed approach.
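As a minimal illustration of the region classification step only (the segmentation and the fuzzy inference system are not reproduced), the extracted regions could be classified from their textural features with a nearest-neighbor rule, for example:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_regions(train_features, train_labels, region_features):
    """Nearest-neighbor classification of segmented regions described by
    textural feature vectors (the feature set here is left abstract)."""
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(np.asarray(train_features), np.asarray(train_labels))
    return clf.predict(np.asarray(region_features))
```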
House damage assessment based on supervised learning method: case study on Haiti
Author(s):
Yoriko Kazama;
Tao Guo
Show Abstract
Assessing the damage caused by natural disasters requires fast and reliable information. Satellite imagery, especially
high-resolution imagery, is recognized as an important source for wide-range and immediate data acquisition. Disaster
assessment using satellite imagery is required worldwide. To assess damage caused by an earthquake, house changes or
landslides are detected by comparing images taken before and after the earthquake. We have developed a method that
performs this comparison using vector data instead of raster data. The proposed method can detect house changes
regardless of varying image acquisition conditions and the shapes of house shadows. We also developed a house-position
detection method based on machine learning. It uses local features including not only pixel-by-pixel differences
but also the shape information of the object area. The result of the house-position detection method indicates the
likelihood of a house existing in that area, and changes to this likelihood between multi-temporal images indicate the
damaged house area.
We evaluated our method by testing it on two WorldView-2 panchromatic images taken before and after the 2010
earthquake in Haiti. The highly accurate results demonstrate the effectiveness of the proposed method.
An automatic stain removal algorithm of series aerial photograph based on flat-field correction
Author(s):
Gang Wang;
Dongmei Yan;
Yang Yang
Show Abstract
Dust on the camera lens leaves dark stains on the image. Calibrating and compensating the intensity of the
stained pixels plays an important role in airborne image processing. This article introduces an automatic compensation
algorithm for the dark stains based on the theory of flat-field correction. We produce a whiteboard reference image
by aggregating hundreds of images recorded in one flight and use their average pixel values to simulate uniform
white-light irradiation. We then construct a look-up table function based on this whiteboard image to calibrate the
stained image. The experimental results show that the proposed procedure can remove lens stains effectively and
automatically.
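A compact sketch of the flat-field idea: frames from one flight are averaged into a white reference, and a per-pixel gain derived from that reference compensates the stained pixels. The clipping constant is an assumption, and the paper's actual look-up table construction is not reproduced.

```python
import numpy as np

def flat_field_correct(image, flight_images):
    """Average many frames to approximate uniform white illumination, then
    apply the resulting per-pixel gain to the stained image."""
    reference = np.mean(np.stack(flight_images, axis=0), axis=0)
    gain = reference.mean() / np.clip(reference, 1e-6, None)   # per-pixel gain table
    return image * gain
```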
Improvement of urban land use and land cover classification approach in arid areas
Author(s):
Jing Qian;
Qiming Zhou;
Xi Chen
Show Abstract
Extraction of urban land-use information is a basic step in urban change detection. However, challenges remain in the
automatic delineation of urban areas and the differentiation of finer inner-city land cover types, and the extraction accuracy of
built-up areas is still unsatisfactory. This is mainly due to the heterogeneous nature of urban areas, where continuous and
discrete elements occur side by side. Another reason is the mixed-pixel problem, which is particularly serious in an urban
environment. Built-up areas in arid regions may be confused with nearby bare soil and stony desert, which have spectral
characteristics very similar to those of construction materials such as concrete, and they are often surrounded by farmland.
This study focuses on improving the urban land use and land cover classification approach in a typical city of China's western
arid region using multi-sensor data. Pixel-based classification using the NDBI and Maximum Likelihood Classification
(MLC), as well as object-oriented image classification, were applied; the classification dataset includes Landsat
ETM (1999), CBERS (2005), and Beijing-1 (2006) imagery. The accuracy is assessed using high-resolution images, aerial
photographs, and field investigation data. The traditional pixel-based classification approach typically yields large
uncertainty in the classification results, and object-oriented processing techniques are becoming more popular than
traditional pixel-based image analysis.
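For reference, the NDBI used in the pixel-based step is the standard normalized difference of the SWIR and NIR bands; a minimal implementation might look like this, with the band names as assumptions about the input arrays:

```python
import numpy as np

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: (SWIR - NIR) / (SWIR + NIR);
    built-up pixels tend toward positive values."""
    swir = swir.astype(float)
    nir = nir.astype(float)
    return (swir - nir) / np.clip(swir + nir, 1e-6, None)
```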
A methodology for the detection of land cover changes: application to the Toulouse southwestern region
Author(s):
Danielle Ducrot;
Antoine Masse;
Eric Ceschia;
Claire Marais-Sicre;
Daniel Krystof
Show Abstract
A methodology to highlight changes in the landscape based on satellite image classification has been developed
involving unsupervised and supervised approaches.
With past acquisitions, ground truth data are in general not known; therefore, the classification can only be unsupervised.
These classifications provide labels but not surface types. The main difficulty lies in the interpretation of these classes.
An automatic interpretation method has been developed to allocate semantics to classes thanks to a radiometric value
catalogue. However, it requires radiometrically comparable images. After radiometric correction, the images are not free
from defects; this is why a normalization method has been developed.
We propose a specific methodology to evaluate changes, consisting of regrouping classes of the same theme, smoothing
and eroding contours so that "mixels" are not taken into account, and comparing the classified images to provide statistics and
change images. The different steps of the process are essential to avoid false changes and to quantify land cover change
with a high degree of accuracy. Various statistical results are given: changes or no changes, types of changes, and crop
rotations over several years.
Land use/cover change (LUCC) can provide an estimate of carbon capture and storage. Reforestation, changing land use
and best practices can increase carbon sequestration in biomass and soils for a period of several decades, which may
constitute a significant contribution to the fight against the greenhouse effect. Deforestation, conversely, can lead to
significant levels of CO2 emission.
Applied to the southwestern region of Toulouse, the methodology reveals significant land cover changes over 11 years (1991-
2002). The crop rotations are given year by year for four years (2002-2005).
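A rough sketch of the comparison step under stated assumptions: the two dates are already regrouped into the same themes, a small erosion discards border "mixels", and simple change statistics are tallied; the erosion depth and the returned quantities are illustrative only.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def change_statistics(theme_t1, theme_t2, erosion_iter=1):
    """Compare two thematic maps, ignoring pixels near theme boundaries."""
    interior = np.zeros(theme_t1.shape, dtype=bool)
    for theme in np.unique(theme_t1):
        interior |= binary_erosion(theme_t1 == theme, iterations=erosion_iter)
    changed = (theme_t1 != theme_t2) & interior
    total = max(int(interior.sum()), 1)
    return {"changed_fraction": changed.sum() / total,
            "unchanged_fraction": 1.0 - changed.sum() / total}
```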
Characteristic analysis of IR signatures for different optical surface properties by computer modeling and field measurement
Author(s):
Jun-Hyuk Choi;
Tae-Kuk Kim
Show Abstract
This paper is part of an effort to develop a program that generates IR images by considering the surface-emitted radiance from
objects. The spectral radiance received by a remote sensor consists of the component self-emitted directly from the
target surface, the component of the solar irradiance reflected at the target surface, and the component scattered by the
atmosphere without ever reaching the object surface. The radiance self-emitted from a surface can be calculated using
the temperature and optical properties of the surface together with the spectral atmospheric transmittance. Thermal
modeling is essential for identifying objects in scenes obtained from satellites, and the temperature distribution
on an object is necessary to obtain its infrared image in contrast to the background. We consider the composite
heat transfer modes, including conduction, convection, and spectral solar radiation, for objects within a scene to calculate
the surface temperature distribution. The object considered is assumed to consist of several different materials with
different properties, such as conductivity, absorptivity, density, and specific heat. We obtain the weather conditions (air
temperature, wind direction, wind velocity, relative humidity, and atmospheric pressure), the solar irradiance, and the surface
temperatures of the test plates. The measured diurnal emitted radiance from test plates with several different surface
properties compares fairly well with the modeled results obtained from the software developed in this study.
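As a hedged illustration of the three-component radiance model stated above, the sketch below sums an emissivity-weighted Planck term and a reflected solar term, attenuates both by the atmospheric transmittance, and adds the path radiance. The function signatures are ours, not those of the paper's software.

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
K = 1.381e-23   # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temperature_k):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    num = 2.0 * H * C**2 / wavelength_m**5
    return num / (np.exp(H * C / (wavelength_m * K * temperature_k)) - 1.0)

def at_sensor_radiance(wavelength_m, surface_temp_k, emissivity,
                       reflected_solar, path_radiance, transmittance):
    """Self-emitted plus reflected components attenuated by the atmosphere,
    plus the scattered path radiance (all inputs are spectral values)."""
    emitted = emissivity * planck_radiance(wavelength_m, surface_temp_k)
    return transmittance * (emitted + reflected_solar) + path_radiance
```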
Vegetation cover estimation from CASI and AHS image sensors
Author(s):
Tomás J. Arnau;
Filiberto Pla;
J. M. Sotoca
Show Abstract
The fraction of surface covered by green vegetation is an essential biophysical parameter for addressing land surface
processes in the terrestrial climate system. This parameter has been used in many agronomic, ecological and
meteorological applications, e.g. for modelling weather predictions, estimation of urban vegetation abundance,
etc. In this work, we collect several measures of vegetation cover indices and relate them to the reflected light
signal. The aim is to estimate vegetation cover from hyperspectral images using Support Vector Machine regression,
producing a map in which every pixel carries an estimate of this measure derived from the hyperspectral data.
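A minimal sketch of the regression step, assuming per-pixel spectra and reference cover fractions are available as arrays; the kernel and hyperparameters are illustrative, not those used with the CASI and AHS data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_cover_regressor(spectra, cover_fractions):
    """Support Vector Regression from hyperspectral signatures to a
    vegetation-cover fraction per pixel."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    model.fit(np.asarray(spectra), np.asarray(cover_fractions))
    return model
```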
Comparative study of multi-data fusion techniques in mapping geological features: Wadi Ghoweiba, Northwest Gulf of Suez, Egypt
Author(s):
S. M. Hassan;
B. M. El Leithy
Show Abstract
In this study, a SPOT panchromatic image with 10 m spatial resolution was fused with ASTER band-ratio images with 30
m spatial resolution. The fusion of the SPOT image with the ASTER band-ratio data using PC, Brovey, HPF, and IHS
transform techniques proved to be excellent for both lithological and structural mapping, as it preserves the spectral
information of the ASTER and SPOT data. In a visual comparison of these fusion results, the IHS and CNT methods produce
strong color distortion with respect to the original image, although they preserve the spatial resolution very well. The PCA fusion
method produces very little color distortion but does not preserve all the spatial information. The HPF fusion method
produces very little color distortion and preserves all the spatial information, so its result looks sharper than the other
images. This study revealed that the HPF fusion method is the best of the compared methods in
terms of the quality of the spectral and spatial information. In the quantitative analysis using the correlation coefficient (CC)
between the multispectral input data and the fused output image, the CC ranges from 0.406 to 0.455 for the IHS fusion method,
while for the Brovey transform it ranges from 0.955 to 0.988. The CC ranges from 0.988 to 0.996 for the automatic PCA fusion
technique and from 0.978 to 0.997 for the manual PCA fusion technique, so there is no large difference between the automatic
and manual PCA methods. The best CC, ranging from 0.989 to 0.999, is obtained with the HPF fusion technique.
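The correlation coefficient used to rank the fusion methods can be computed per band as the Pearson correlation between a multispectral input band and the corresponding fused band, for example:

```python
import numpy as np

def band_correlation(ms_band, fused_band):
    """Pearson correlation coefficient between an input band and its fused counterpart."""
    return np.corrcoef(ms_band.ravel(), fused_band.ravel())[0, 1]
```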
SAR signature analysis for TerraSAR-X-based ship monitoring
Author(s):
Günter Saur;
Michael Teutsch
Show Abstract
It is expected that ship detection and classification in SAR satellite imagery will be part of future downstream
services for various applications, e.g. surveillance of fishery zones or tracking of cargo ships. Due to the
requirements of operational services and the potential of high-resolution SAR (e.g. TerraSAR-X), there
is a need for the composition, optimization, and validation of specific, fully automated image processing chains.
The presented processing chain covers all steps from land masking, screening, object segmentation, and feature
extraction to classification and parameter estimation. The chain is the basis for experiments with both open-sea
and harbor scenes for ship detection and monitoring. Within this chain, a classification component for the SAR
ship/non-ship decision is investigated. Based on many extracted image features and numerous image chips
for training and testing, some promising results are presented and discussed. Since the classification can reduce
the false alarms of the screening component, the processing chain is expected to work on images with poorer
weather and signal conditions and to extract ships with weaker reflections.
Selection of regularization parameter based on generalized cross-validation in total variation remote sensing image restoration
Author(s):
Peng Liu;
Dingsheng Liu
Show Abstract
In this article, we apply the total variation method to remote sensing image restoration. A new way of calculating the
regularization parameter using an improved Generalized Cross-Validation (GCV) method is proposed. Classical GCV
cannot be used directly in total variation regularization because of the nonlinearity of the total variation term. In our method, the
GCV method and the fixed-point iterative method are combined: in order to use GCV, we extract a
linear regularization operator from the definition of the fixed-point iteration. Based on this linear regularization
operator, we modify the classical GCV function so that it becomes suitable for total variation regularization. With the new
GCV function, the regularization parameter is adapted automatically during total variation remote sensing image restoration,
and a higher signal-to-noise ratio is obtained. Experiments confirm that the adaptability and stability of total variation
remote sensing image restoration are improved.
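For context, the classical GCV criterion for a linear regularization operator can be evaluated as in the sketch below; the paper's contribution is to extract such a linear operator from the fixed-point (lagged-diffusivity) linearization of total variation, which this Tikhonov-only illustration does not reproduce.

```python
import numpy as np

def gcv_curve(A, b, lambdas):
    """Classical GCV scores for Tikhonov regularization of A x = b,
    GCV(lam) = n * ||A x_lam - b||^2 / trace(I - A_lam)^2, via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b
    n = b.size
    scores = []
    for lam in lambdas:
        filt = s**2 / (s**2 + lam)                   # filter factors of the influence matrix
        x_lam = Vt.T @ ((s / (s**2 + lam)) * Utb)    # regularized solution
        rss = np.sum((A @ x_lam - b)**2)             # residual sum of squares
        denom = (n - np.sum(filt))**2                # trace(I - A_lam), squared
        scores.append(n * rss / denom)
    return np.array(scores)                          # pick the lambda that minimizes this
```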