Geocoding uncertainty analysis for the automated processing of Sentinel-1 data using Sentinel-1 Toolbox software
Author(s):
Alena Dostálová;
Vahid Naeimi;
Wolfgang Wagner;
Stefano Elefante;
Senmao Cao;
Henrik Persson
One of the major advantages of Sentinel-1 data is the very high spatio-temporal coverage, which allows the mapping of large areas as well as the creation of dense time series of Sentinel-1 acquisitions. The SGRT software developed at TU Wien aims at automated processing of Sentinel-1 data for global and regional products. The first step of the processing is the geocoding of the Sentinel-1 data with the help of the S1TBX software and their resampling to a common grid. These resampled images serve as input for the product derivation. It is therefore very important to select the most reliable processing settings and to assess the geocoding uncertainty for both the backscatter and the projected local incidence angle images. Within this study, a selection of Sentinel-1 acquisitions over three test areas in Europe was processed manually in the S1TBX software, testing multiple software versions, processing settings and digital elevation models (DEM), and the accuracy of the resulting geocoded images was assessed. Secondly, all available Sentinel-1 data over the areas were processed using the selected settings and a detailed quality check was performed. Overall, a strong influence of the DEM used on the geocoding quality was confirmed, with differences of up to 80 m in areas with higher terrain variation. In flat areas, the geocoding accuracy of the backscatter images was generally good, with observed shifts between 0 and 30 m. Larger systematic shifts were identified for the projected local incidence angle images. These results encourage the automated processing of large volumes of Sentinel-1 data.
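Geocoding shifts of the kind reported above can be quantified by correlating a geocoded image against a reference. As an illustrative sketch (not part of SGRT or S1TBX), FFT phase correlation recovers the integer pixel shift between two co-registered rasters; multiplying by the grid spacing converts it to metres.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (row, col) shift between two equally sized
    images via FFT phase correlation. For img = np.roll(ref, (dy, dx)),
    the function returns (dy, dx)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint correspond to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

At a 10 m grid spacing, a recovered shift of (3, 0) would correspond to the 30 m worst-case offset mentioned above.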
Statistical power of intensity- and feature-based similarity measures for registration of multimodal remote sensing images
Author(s):
M. Uss;
B. Vozel;
V. Lukin;
K. Chehdi
This paper investigates the performance characteristics of similarity measures (SM) used in the image registration domain to discriminate between aligned and non-aligned fragments of a reference image (RI) and a template image (TI). The study emphasizes the registration of multimodal remote sensing images, including optical-to-radar, optical-to-DEM, and radar-to-DEM scenarios. We compare well-known area-based SMs such as Mutual Information (MI), the Normalized Correlation Coefficient, and Phase Correlation, and feature-based SMs using SIFT and SIFT-OCT descriptors. In addition, a new SM called logLR is proposed, based on a log-likelihood ratio test and parametric modeling of a pair of RI and TI fragments by the Fractional Brownian Motion model. While this new measure is restricted to a linear intensity change between RI and TI (an assumption somewhat restrictive for multimodal registration), it explicitly takes into account the noise properties of RI and TI and the multivariate joint distribution of RI and TI pixels. Unlike the other SMs, the distribution of the logLR measure under the null hypothesis does not depend on the registration scenario or fragment size and, according to Wilks's theorem, closely follows a chi-squared distribution. We demonstrate that the utility of an SM for image registration can be naturally represented in (True Positive Rate, Positive Likelihood Rate) coordinates. Experiments on real images show that overall the logLR SM outperforms the other SMs in terms of the area under the ROC curve (AUC). It also provides the highest Positive Likelihood Rate for True Positive Rate values below 0.4-0.6. For certain types of registration problems, however, logLR can be second or third best after the MI or SIFT SMs.
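The AUC comparison described above can be reproduced for any scalar similarity measure from its scores on aligned and non-aligned fragment pairs. A minimal sketch, using the rank-sum (Mann-Whitney) identity rather than explicit ROC-curve integration; the function name and inputs are illustrative:

```python
import numpy as np

def roc_auc(aligned_scores, nonaligned_scores):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(score_aligned > score_nonaligned), ties counted as 1/2."""
    a = np.asarray(aligned_scores, float)[:, None]
    n = np.asarray(nonaligned_scores, float)[None, :]
    return float(np.mean(a > n) + 0.5 * np.mean(a == n))
```

An AUC of 1.0 means the measure separates aligned from non-aligned fragments perfectly; 0.5 means it is no better than chance.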
Evaluation of georeferencing methods with respect to their suitability to address unsimilarity between the image to be referenced and the reference image
Author(s):
Stefan Brüstle;
Bastian Erdnüß
In recent years, operational costs of unmanned aircraft systems (UAS) have been massively decreasing. New sensors satisfying weight and size restrictions of even small UAS cover many different spectral ranges and spatial resolutions. This results in airborne imagery having become more and more available. Such imagery is used to address many different tasks in various fields of application. For many of those tasks, not only the content of the imagery itself is of interest, but also its spatial location. This requires the imagery to be properly georeferenced.
Many UAS have an integrated GPS receiver together with some kind of INS device acquiring the sensor orientation to provide the georeference. However, both GPS and INS data can easily become unavailable for a period of time during a flight, e.g. due to sensor malfunction, transmission problems or jamming. Imagery gathered during such times lacks georeference. Moreover, even in datasets not affected by such problems, GPS and INS inaccuracies together with a potentially poor knowledge of ground elevation can render location information accuracy less than sufficient for a given task.
To provide or improve the georeference of an image affected by this, an image to reference registration can be performed if a suitable reference is available, e.g. a georeferenced orthophoto covering the area of the image to be georeferenced. Registration and thus georeferencing is achieved by determining a transformation between the image to be referenced and the reference which maximizes the coincidence of relevant structures present both in the former and the latter.
Many methods have been developed to accomplish this task. Regardless of their differences, they usually perform better the more similar the image and the reference are in appearance. This contribution evaluates a selection of such methods, all differing in the type of structure they use to assess coincidence, with respect to their ability to tolerate unsimilarity in appearance. Similarity in appearance mainly depends on the following aspects:
the similarity of abstraction levels (Is the reference e.g. an orthophoto or a topographical map?),
the similarity of sensor types and spectral bands (Is the image e.g. a SAR image and the reference a passively sensed one? Was e.g. a NIR sensor used capturing the image while a VIS sensor was used in the reference?),
the similarity of resolutions (Is the ground sampling distance of the reference comparable to the one of the image?),
the similarity of capture parameters (Are e.g. the viewing angles comparable in the image and in the reference?) and
the similarity concerning the image content (Was there e.g. snow coverage present when the image was captured while this was not the case when the reference was captured?).
The evaluation is done by determining the performance of each method on a set of image/reference pairs representing various degrees of unsimilarity with respect to each of the above-mentioned aspects of similarity.
Single band atmospheric correction tool for thermal infrared data: application to Landsat 7 ETM+
Author(s):
Joan Miquel Galve;
César Coll;
Juan Manuel Sánchez;
Enric Valor;
Raquel Niclòs;
Lluís Pérez-Planells;
Carolina Doña;
Vicente Caselles
Atmospheric correction of Thermal Infrared (TIR) remote sensing data is a key process for obtaining accurate land surface temperatures (LST). Single-band atmospheric correction methods are used for sensors provided with a single TIR band; they employ a radiative transfer model, with atmospheric profiles over the study area as inputs, to estimate the atmospheric transmittance and emitted radiances.
Currently, TIR data from Landsat 5-TM, Landsat 7-ETM+ and Landsat 8-TIRS can be atmospherically corrected using the on-line Atmospheric Correction Parameter Calculator (ACPC, http://atmcorr.gsfc.nasa.gov). For specific geographical coordinates and observation time, the ACPC provides the atmospheric transmittance, and both upwelling and downwelling radiances, which are calculated from MODTRAN4 radiative transfer simulations with NCEP atmospheric profiles as inputs. Since the ACPC provides the atmospheric parameters for a single location, it does not account for their eventual variability within the full Landsat scene.
The new Single Band Atmospheric Correction (SBAC) tool provides the geolocated atmospheric parameters for every pixel, taking into account its altitude. SBAC defines a three-dimensional grid with 1°×1° latitude/longitude spatial resolution, corresponding to the locations of the NCEP profiles, and 13 altitudes from sea level to 5000 m. These profiles are entered into MODTRAN5 to calculate the atmospheric parameters at the grid nodes; the parameters corresponding to a given pixel are then obtained by weighted spatial interpolation in the horizontal dimensions and linear interpolation in the vertical dimension.
To compare the SBAC and ACPC tools, we evaluated the Landsat-7/ETM+ LSTs obtained with both against ground measurements over the Valencia ground validation site.
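Both ACPC and SBAC supply the transmittance and the up- and downwelling radiances that enter the standard single-channel radiative transfer equation, which is then inverted for LST. The sketch below shows that inversion; the effective wavelength for ETM+ band 6 and the test numbers are illustrative assumptions, and a real implementation would use the band-effective Planck function rather than the monochromatic one used here.

```python
import numpy as np

# Radiation constants for Planck's law with wavelength in micrometres
C1 = 1.19104e8   # W um^4 m^-2 sr^-1
C2 = 1.43877e4   # um K

def planck(T, lam):
    """Spectral radiance B(T) [W m^-2 sr^-1 um^-1] at wavelength lam [um]."""
    return C1 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

def inverse_planck(L, lam):
    """Brightness temperature for radiance L at wavelength lam."""
    return C2 / (lam * np.log(C1 / (lam**5 * L) + 1.0))

def lst_single_channel(L_sensor, tau, L_up, L_down, emis, lam=11.27):
    """Invert L_sensor = tau*(emis*B(T) + (1-emis)*L_down) + L_up for T.
    lam ~ effective wavelength assumed for Landsat 7 ETM+ band 6."""
    B = (L_sensor - L_up - tau * (1.0 - emis) * L_down) / (tau * emis)
    return inverse_planck(B, lam)
```

SBAC's contribution is to make `tau`, `L_up` and `L_down` vary per pixel (via the interpolated 3-D grid) instead of being single scene-wide values.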
Efficiency analysis for 3D filtering of multichannel images
Author(s):
Ruslan A. Kozhemiakin;
Oleksii Rubel;
Sergey K. Abramov;
Vladimir V. Lukin;
Benoit Vozel;
Kacem Chehdi
Modern remote sensing systems typically acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency when there is essential correlation between the multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of the processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of the component images combined into the 3D data array influences the filtering efficiency, and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
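A minimal example of the 3D (vectorial) branch of the comparison: hard thresholding of the 3D DCT spectrum of a band stack. The threshold value and the use of SciPy's `dctn` are illustrative choices, not the authors' filter.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct3d_hard_threshold(cube, thr):
    """Denoise a (bands, rows, cols) array by 3-D DCT hard thresholding:
    transform the whole cube at once, zero coefficients with magnitude
    below thr, and invert. The DC coefficient is kept so the mean level
    is preserved. Component-wise processing would instead call a 2-D DCT
    on each band separately, ignoring inter-band correlation."""
    C = dctn(cube, norm='ortho')
    mask = np.abs(C) >= thr
    mask.flat[0] = True          # always keep the DC term
    return idctn(C * mask, norm='ortho')
```

Because `norm='ortho'` makes the transform orthonormal, thresholding can only remove energy, never add it.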
Pansharpening remotely sensed data by using nonnegative matrix factorization and spectral-spatial degradation models
Author(s):
Nezha Farhi;
Moussa Sofiane Karoui;
Khelifa Djerriri;
Issam Boukerch
In this paper, a new pansharpening method based on nonnegative matrix factorization is proposed to enhance the spatial resolution of remote sensing multispectral images. This method, built on the linear spectral unmixing concept and called joint spatial-spectral variables nonnegative matrix factorization, optimizes, through new iterative multiplicative update rules, a joint-variables criterion that exploits spatial and spectral degradation models between the considered images. The criterion involves only two unknown high spatial-spectral resolution variables. The proposed method is tested on synthetic and real datasets and its effectiveness, in the spatial and spectral domains, is evaluated with established performance criteria. Results show the good performance of the proposed approach in comparison with standard methods from the literature.
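For reference, the classical multiplicative update rules that this family of NMF-based methods builds on can be sketched as follows; the paper's joint spatial-spectral criterion and its specific update rules are not reproduced here.

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0, eps=1e-9):
    """Plain Lee-Seung multiplicative updates minimizing ||V - W H||_F^2
    with nonnegative factors. The pansharpening method described above
    augments this criterion with spatial/spectral degradation models;
    this sketch shows only the core multiplicative-update machinery."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update abundances
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update endmembers
    return W, H
```

The multiplicative form guarantees the factors stay nonnegative and the Frobenius criterion is non-increasing at every step.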
Scatter-plot-based method for noise characteristics evaluation in remote sensing images using adaptive image clustering procedure
Author(s):
Victoriya V. Abramova;
Sergey K. Abramov;
Vladimir V. Lukin;
Benoit Vozel;
Kacem Chehdi
Several modifications of a scatter-plot-based method for mixed noise parameter estimation are proposed. The modifications concern the image segmentation stage; they are intended to adaptively separate image blocks into clusters, taking image peculiarities into account, and to choose the required number of clusters. A comparative performance analysis of the proposed modifications is carried out for images from the TID2008 database. It is shown that the best estimation accuracy is provided by the method with automatic determination of the required number of clusters followed by block separation into clusters using the k-means method. This modification improves the accuracy of noise characteristic estimation by up to 5% for both signal-independent and signal-dependent noise components in comparison with the basic method. Results for real-life data are also presented.
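The core scatter-plot step, regressing per-block variances on per-block means to recover the parameters of a mixed signal-independent/signal-dependent noise model, can be sketched without the clustering stage that the paper refines. Using every block, as done here, is roughly the baseline behaviour that the proposed modifications improve on for textured images; the block size and model form are illustrative.

```python
import numpy as np

def fit_mixed_noise(image, block=8):
    """Estimate (a, b) in the mixed noise model  var = a + b * mean
    from per-block sample means and variances via ordinary least squares
    on the scatter-plot points. No cluster-based rejection of textured
    blocks is performed in this sketch."""
    h, w = image.shape
    means, variances = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = image[i:i + block, j:j + block]
            means.append(blk.mean())
            variances.append(blk.var(ddof=1))
    A = np.stack([np.ones(len(means)), np.array(means)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, np.array(variances), rcond=None)
    return a, b
```

On homogeneous blocks this recovers the noise law well; textured blocks inflate the variance estimates, which is exactly why the paper clusters blocks first.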
Resolution enhancement of tri-stereo remote sensing images by super resolution methods
Author(s):
Caglayan Tuna;
Alper Akoguz;
Gozde Unal;
Elif Sertel
Super resolution (SR) refers to the generation of a high-resolution (HR) image from a decimated, blurred, low-resolution (LR) image set, which can be either a single frame or a multi-frame collection of several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo remote sensing (RS) satellite images to the super resolution problem. Since tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these images are well suited to an SR application. We first estimate the registration between the chosen reference LR image and the other LR images to calculate the sub-pixel shifts among them. Then, the warping, blurring and downsampling operators are created as sparse matrices to avoid the high memory and computational requirements that would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, constructed from the obtained operator matrices, is used to obtain the estimated HR image in a single step in each iteration of the SR algorithm. Both Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results demonstrate improved quantitative performance against the standard interpolation method as well as improved qualitative results according to expert evaluations.
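The sparse operator construction mentioned above can be illustrated with the simplest member of the chain, a decimation matrix; warp and blur matrices are assembled the same way and composed by sparse multiplication into the system matrix. The function below is a toy stand-in, not the authors' code.

```python
import numpy as np
from scipy.sparse import lil_matrix

def decimation_matrix(n, f):
    """Sparse operator mapping a vectorized n x n HR image to an
    (n//f) x (n//f) LR grid by keeping every f-th pixel. Stored sparsely:
    only one nonzero per row, so memory stays O(LR pixels) instead of
    the O(HR*LR) a dense matrix would need."""
    m = n // f
    D = lil_matrix((m * m, n * n))
    for r in range(m):
        for c in range(m):
            D[r * m + c, (r * f) * n + c * f] = 1.0
    return D.tocsr()
```

In the full pipeline one would form `A = D @ B @ W` (decimation, blur, warp) per LR frame and stack the `A` blocks into the overall system matrix.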
A novel method to detect shadows on multispectral images
Author(s):
Hazan Dağlayan Sevim;
Yasemin Yardımcı Çetin;
Didem Özışık Başkurt
Shadowing occurs when the direct light coming from a light source is obstructed by tall man-made structures, mountains, or clouds. Since shadow regions are illuminated only by scattered light, the true spectral properties of the objects are not observed in such regions. Therefore, many object classification and change detection approaches use shadow detection as a preprocessing step. Shadows are also useful for deriving 3D information about objects, such as estimating the height of buildings. With the growing pervasiveness of remote sensing images, shadow detection is ever more important. This study aims to develop a shadow detection method for multispectral images based on a transformation of the C1C2C3 space and the contribution of NIR bands. The proposed method is tested on WorldView-2 images of Ankara, Turkey, acquired at different times. The new index is applied to these 8-band multispectral images with two NIR bands, and the method is compared with existing methods from the literature.
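A toy version of a C1C2C3-plus-NIR shadow test is sketched below. The thresholds and the exact combination are assumptions for illustration; the index proposed in the paper is not reproduced here.

```python
import numpy as np

def shadow_mask(R, G, B, NIR, c3_thr=0.5, nir_thr=0.2):
    """Toy shadow mask on reflectances in [0, 1]: combines the C3
    component of the C1C2C3 colour space (high for bluish, weakly lit
    pixels, since scattered skylight is blue-rich) with a low-NIR test
    (shadows are dark in NIR). Thresholds are illustrative choices."""
    c3 = np.arctan2(B, np.maximum(R, G))   # C3 = atan(B / max(R, G))
    return (c3 > c3_thr) & (NIR < nir_thr)
```

The NIR condition is what the extra WorldView-2 bands make possible: it rejects dark-but-sunlit surfaces such as asphalt that a colour-only test would flag.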
Tree detection in orchards from VHR satellite images using scale-space theory
Author(s):
Milad Mahour;
Valentyn Tolpekin;
Alfred Stein
This study focused on extracting reliable and detailed information from very high resolution (VHR) satellite images for the detection of individual trees in orchards. The images contain detailed information on the spectral and geometrical properties of trees. Their scale level, however, is insufficient for the spectral properties of individual trees, because adjacent tree canopies interlock. We modeled trees using a bell-shaped spectral profile. Identifying the brightest peak was challenging due to sun illumination effects caused by differences in the positions of the sun and the satellite sensor. Crown boundary detection was solved by using the NDVI from the same image. We used Gaussian scale-space methods that search for extrema in the scale-space domain. The procedures were tested on two orchards in Iran with different tree types, tree sizes and tree observation patterns. Validation was done using reference data derived from an UltraCam digital aerial photo. Local extrema of the determinant of the Hessian corresponded well to the geographical coordinates and the sizes of individual trees. False detections arising from a slight asymmetry of trees were distinguished from multiple detections of the same tree with different extents. An uncertainty assessment was carried out on the presence and spatial extents of individual trees. The study demonstrated how the suggested approach can be used for image segmentation of orchards with different types of trees. We concluded that Gaussian scale-space theory can be applied to extract information from VHR satellite images for individual tree detection. This may lead to improved decision making for irrigation and crop water requirement purposes in future studies.
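The determinant-of-Hessian search in Gaussian scale-space can be sketched as follows; the grid of scales and the single-maximum output are simplifications of the full extrema search described above, and the bell-shaped (Gaussian) blob in the example stands in for a tree crown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_strongest_blob(img, sigmas):
    """Scale-normalized determinant-of-Hessian response over a set of
    Gaussian scales; returns (row, col, sigma) of the strongest response.
    A full detector would keep all local extrema in (x, y, scale)."""
    best_val, best = -np.inf, None
    for s in sigmas:
        Irr = gaussian_filter(img, s, order=(2, 0))   # d2/drow2
        Icc = gaussian_filter(img, s, order=(0, 2))   # d2/dcol2
        Irc = gaussian_filter(img, s, order=(1, 1))   # mixed derivative
        det = (s**4) * (Irr * Icc - Irc**2)           # t^2-normalized, t = s^2
        peak = np.unravel_index(np.argmax(det), det.shape)
        if det[peak] > best_val:
            best_val, best = det[peak], (peak[0], peak[1], s)
    return best
```

For a Gaussian blob of width sigma0, the normalized response peaks at s = sigma0, so the selected scale also estimates the crown size.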
Clairvoyant fusion: a new methodology for designing robust detection algorithms
Author(s):
Alan Schaum
Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently, many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed, and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for all discrete parameter problems.
Demonstration of multispectral target locator using collocated RF antenna/LWIR joint sensor system and datacube
Author(s):
Woo-Yong Jang;
James Park;
George Kakas;
Michael Noyola
Recently, we configured RF antennas and an LWIR camera connected to an actuator system to form a collocated sensor system. We also developed a GUI which directly controls both the RF and IR systems and the azimuth motion, and performs post-processing for data integration and location finding. RF range data and LWIR images were collected simultaneously with the configured sensor system as the azimuth was varied from 0 to 70°. The series of collected RF data was transformed into a single 2-D radar image showing the range profile of targets against azimuth. For LWIR, the data were aligned into a single panoramic image as a function of azimuth by incorporating shift parameters observed in the measurements. Both RF/IR images were then arranged into a 3-D datacube with azimuth as the common domain, and this datacube directly provided the locational information of targets. For demonstration, we successfully located objects such as a corner reflector and a blackbody source against a dark background. In addition, we highlight some additional features of our sensor system, including target classification using both Euclidean and SVM-based multi-classifier techniques, and a tracking capability for regions of interest on moving targets. Future work will improve the current system for outdoor measurements to locate distant targets.
Iterative matched filtering for detection of non-rare target materials in hyperspectral imagery
Author(s):
Kwang-Eun Kim;
Sung-Soon Lee;
Hyun-Seob Baik
The matched filter, which models background variability using the statistics of the entire image under the assumption of rare and small targets, often fails when the target materials are frequently present in the image. In this study, an iterative matched filtering technique is proposed that can effectively reduce the contamination of the background statistics by the target signal without any complicated spectral or spatial pre-processing. It applies the matched filter iteratively, gradually excluding target-like pixels from the background characterization based on the matched filter score. Experimental results using real airborne hyperspectral image data and simulated data with artificial mineral targets show that the proposed method can dramatically improve detection performance. Although the statistical complexity of the background materials is not investigated, the method is expected to serve as a simple and practical technique for improving the detection performance of the matched filter by reducing the target leakage effect when target materials are frequently present in the image. The technique can also be directly adopted by other extensions of the matched filter, such as constrained energy minimization (CEM) and the adaptive cosine estimator (ACE).
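The iterative exclusion scheme can be sketched in a few lines. The exclusion fraction, iteration count, and score normalization below are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def iterative_matched_filter(X, t, n_iter=3, excl_frac=0.1):
    """Matched filter with iterative exclusion of target-like pixels from
    the background statistics. X: (n_pixels, n_bands) spectra; t: target
    signature. On each pass, the top excl_frac of current scores is
    dropped from the mean/covariance estimate, reducing target leakage."""
    keep = np.ones(len(X), bool)
    for _ in range(n_iter):
        mu = X[keep].mean(axis=0)
        cov = np.cov(X[keep].T)
        w = np.linalg.solve(cov, t - mu)          # MF weights Sigma^-1 (t - mu)
        scores = (X - mu) @ w / ((t - mu) @ w)    # normalized so score(t) = 1
        keep = scores < np.quantile(scores, 1.0 - excl_frac)
    return scores
```

When targets cover a large image fraction, the first pass already ranks them highest, and the later passes sharpen the separation by cleaning the background statistics.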
A rule-based classification from a region-growing segmentation of airborne lidar
Author(s):
Jorge Martínez;
Francisco F. Rivera;
José C. Cabaleiro;
David L. Vilariño;
Tomás F. Pena;
David Miranda B.
Light Detection and Ranging (LiDAR) has attracted the interest of the research community in many fields, including object classification of the Earth's surface. In this paper we present an object-based classification method for airborne LiDAR that distinguishes three main classes (buildings, vegetation and ground) based only on LiDAR information. The key components of our proposal are the following. First, the LiDAR point cloud is stored in an octree for efficient processing, and the normal vector of each point is estimated using an adaptive neighborhood algorithm. Then, the points are segmented using a two-phase region growing algorithm in which planar and non-planar objects are handled differently. An epicenter point is introduced to allow regions to expand without losing homogeneity. Finally, a rule-based procedure is performed to classify the segmented clusters. To evaluate our approach, building detection was carried out and results were obtained in terms of accuracy and computational time.
A novel feature extraction methodology for region classification in lidar data
Author(s):
Nina M. Varney;
Vijayan K. Asari;
Garrett C. Sargent
LiDAR is a remote sensing method used to produce precise point clouds with millions of geo-spatially located 3D data points. The challenge comes when trying to accurately and efficiently segment and classify objects, especially in instances of occlusion and where objects are in close local proximity. The goal of this paper is to propose a more accurate and efficient way of performing segmentation and extracting features of objects in point clouds. Normal Octree Region Merging (NORM) is a segmentation technique based on surface normal similarities, and it subdivides the object points into clusters. The idea behind the surface normal calculation is that, for a given neighborhood around each point, the normal of the plane which best fits that set of points can be considered the surface normal at that particular point. Next, an octree-based segmentation approach is applied by dividing the entire scene into eight bins, 2 x 2 x 2 in the X, Y, and Z directions. For each of these bins, the variance of the elevation angles of the surface normals within the bin is calculated, and if this variance exceeds a certain threshold, the bin is divided into eight more bins. This process is repeated until the entire scene consists of different-sized bins, all containing surface normals with elevation variances below the given threshold. However, the octree-based segmentation process produces obvious over-segmentation of most of the objects. To correct for this over-segmentation, a region merging approach is applied. It works much like the well-known automatic seeded region growing technique, except that a histogram signature, rather than height, is used to measure similarity. Each cluster generated by the NORM segmentation technique is then run through a Shape-based Eigen Local Feature (SELF) algorithm, which calculates normalized histograms describing the local shape and curvature of the points and uses Principal Component Analysis (PCA), primarily through eigenvalues, to determine meaningful relationships between points. These extracted features are then applied as the input to a cascade of classifiers, where an object is classified and the results are compared to manually ground-truthed datasets. The NORM segmentation technique was implemented on two datasets and outperformed other state-of-the-art algorithms, such as automatic region growing and strip histogram grid methods. The proposed SELF method is performed on each of the segmented clusters and combines previous research by extracting the global features of each cluster while simultaneously collecting information about each point at a local level. Together, the two novel algorithms, NORM and SELF, prove their effectiveness in classifying five classes of objects in the scenes. Future work involves improving the feature vector to help distinguish between subclasses such as vehicles of various types, buildings with different roof structures, and vegetation.
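The variance-driven octree subdivision stage of NORM can be sketched recursively as below; the variance threshold, minimum bin population, and synthetic data in the example are assumptions, and the merging and SELF stages are not reproduced.

```python
import numpy as np

def subdivide(points, elev, lo, hi, thr=0.05, min_pts=10):
    """Recursively split the axis-aligned box [lo, hi) into 2x2x2 octants
    until the variance of the surface-normal elevation angles in each bin
    drops below thr (or too few points remain). Returns a list of index
    arrays, one per leaf bin."""
    idx = np.flatnonzero(np.all((points >= lo) & (points < hi), axis=1))
    if len(idx) <= min_pts or np.var(elev[idx]) <= thr:
        return [idx]                       # homogeneous (or tiny) leaf
    mid = (lo + hi) / 2.0
    leaves = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                off = np.array([dx, dy, dz])
                nlo = np.where(off == 0, lo, mid)
                nhi = np.where(off == 0, mid, hi)
                leaves += subdivide(points, elev, nlo, nhi, thr, min_pts)
    return leaves
```

Flat surfaces (near-constant elevation angle) stop subdividing early and form large bins; rough regions such as vegetation keep splitting, which is exactly the over-segmentation the merging stage then repairs.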
A novel approach to internal crown characterization for coniferous tree species classification
Author(s):
A. Harikumar;
F. Bovolo;
L. Bruzzone
Knowledge about individual trees in a forest is highly beneficial in forest management. High-density small foot-print multi-return airborne Light Detection and Ranging (LiDAR) data can provide very accurate information about the structural properties of individual trees in forests. Every tree species has a unique set of crown structural characteristics that can be used for tree species classification. In this paper, we use both the internal and external crown structural information of a conifer tree crown, derived from a high-density small foot-print multi-return LiDAR acquisition, for species classification. Considering that branches are the major building blocks of a conifer tree crown, we obtain the internal crown structural information using a branch-level analysis. The structure of each conifer branch is represented by clusters in the LiDAR point cloud. We propose the joint use of k-means clustering and geometric shape fitting, on the LiDAR data projected onto a novel 3-dimensional space, to identify branch clusters. After mapping the identified clusters back to the original space, six internal geometric features are estimated using a branch-level analysis. The external crown characteristics are modeled using six least-correlated features based on cone fitting and the convex hull. Species classification is performed using a sparse Support Vector Machine (sparse SVM) classifier.
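The branch-clustering step relies on k-means; a minimal Lloyd-iteration version is sketched below. The paper's novel 3-D projection space and the geometric shape-fitting stage are not reproduced, and the deterministic seeding is an illustrative choice.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal Lloyd's k-means on point coordinates X (n, d), used here
    as a stand-in for the branch-cluster identification step.
    Deterministic init: seeds spread along the first coordinate."""
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)                  # assign to nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

In the paper's pipeline each recovered cluster would then be validated by fitting a geometric shape before branch-level features are computed.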
Graph-based segmentation of airborne lidar point clouds
Author(s):
David L. Vilariño;
Jorge Martínez;
Francisco F. Rivera;
José C. Cabaleiro;
Tomás F. Pena
In this paper, a graph-based technique originally intended for image processing is tailored to the segmentation of airborne LiDAR points, which are irregularly distributed. Every LiDAR point becomes a node of a graph defined in a 4D feature space (x, y, z, and the reflection intensity) and connected to its neighborhood. The connections between pairs of neighboring nodes are weighted by their distance in the feature space. The segmentation consists of an iterative process that groups nodes into homogeneous clusters based on their similarity. This approach is intended to be part of a complete system for the classification of structures from LiDAR point clouds in applications requiring fast response times. In this regard, a study of the performance/accuracy trade-off has been performed, drawing some conclusions about the benefits of the proposed solution.
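A minimal graph segmentation over the 4D feature space can be sketched with a k-nearest-neighbour graph and union-find merging. The fixed distance threshold stands in for the adaptive merging criterion of the image-domain method the paper adapts, and the brute-force distance matrix is for brevity only (a real system would use a spatial index).

```python
import numpy as np

def segment(features, k_neighbors=5, thr=1.0):
    """Connect each point to its k nearest neighbours in the 4-D feature
    space (x, y, z, intensity) and merge every edge shorter than thr with
    a union-find. Returns one integer segment label per point."""
    n = len(features)
    parent = np.arange(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    d = np.linalg.norm(features[:, None] - features[None], axis=2)
    np.fill_diagonal(d, np.inf)
    for i in range(n):
        for j in np.argsort(d[i])[:k_neighbors]:
            if d[i, j] < thr:
                parent[find(i)] = find(j)   # merge the two segments
    roots = np.array([find(i) for i in range(n)])
    _, labels = np.unique(roots, return_inverse=True)
    return labels
```

Points that are close in position and similar in intensity end up in one segment; edges crossing large feature-space gaps are never merged.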
Building footprint extraction from digital surface models using neural networks
Author(s):
Ksenia Davydova;
Shiyong Cui;
Peter Reinartz
Two-dimensional building footprints are a basis for many applications, from cartography to the generation of three-dimensional building models. Although many methodologies have been proposed for building footprint extraction, the topic remains an open research area. Neural networks are able to model complex relationships between a multivariate input vector and a target vector. Based on these abilities, we propose a methodology using neural networks and Markov Random Fields (MRF) for automatic building footprint extraction from a normalized Digital Surface Model (nDSM) and satellite images within urban areas. The proposed approach has two main steps. In the first step, the unary terms of the MRF energy function are learned by a four-layer neural network. The network is trained on a large set of patches consisting of both nDSM and Normalized Difference Vegetation Index (NDVI) data. Prediction is then performed to calculate the unary terms used in the MRF. In the second step, the energy function is minimized using a maxflow algorithm, which leads to a binary building mask. The building extraction results are compared with available ground truth. The comparison illustrates the efficiency of the proposed algorithm, which can extract approximately 80% of the buildings from the nDSM with high accuracy.
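The second step minimizes a unary-plus-Potts energy, which a maxflow (graph-cut) solver handles exactly as the abstract states. The sketch below substitutes iterated conditional modes (ICM), a greedy minimizer, to keep the example dependency-free; the unary costs would come from the network's predictions.

```python
import numpy as np

def icm_binary(unary, beta=1.0, n_iter=10):
    """Greedily minimize E(x) = sum_i unary[i, x_i]
    + beta * sum_{i~j} [x_i != x_j] over a binary label image with
    4-connected neighbours. unary: (H, W, 2) per-pixel label costs.
    Each site update can only lower the total energy."""
    labels = unary.argmin(axis=2)          # start from unary-only decision
    H, W, _ = unary.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        cost[1 - labels[ni, nj]] += beta  # disagreement pays beta
                labels[i, j] = cost.argmin()
    return labels
```

ICM can stop in a local minimum, which is why the paper uses maxflow; for this submodular binary energy, maxflow returns the global optimum.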
Domain adaptation based on deep denoising auto-encoders for classification of remote sensing images
Author(s):
Emanuele Riz;
Begüm Demir;
Lorenzo Bruzzone
This paper investigates the effectiveness of deep learning (DL) for domain adaptation (DA) problems in the classification of remote sensing images to generate land-cover maps. To this end, we introduce two different DL architectures: 1) a single-stage domain adaptation (SS-DA) architecture; and 2) a hierarchical domain adaptation (H-DA) architecture. Both architectures assume that a reliable training set is available only for one of the images (i.e., the source domain) from a previous analysis, whereas none is available for the other image to be classified (i.e., the target domain). To classify the target domain image, the proposed architectures aim to learn a shared feature representation that is invariant across the source and target domains in a completely unsupervised fashion. Both architectures are defined based on stacked denoising auto-encoders (SDAEs) due to their high capability to define high-level feature representations. The SS-DA architecture leads to a common feature space by: 1) initially unifying the samples in the source and target domains; and 2) then feeding them simultaneously into the SDAE. To further increase the robustness of the shared representations, the H-DA architecture employs: 1) two SDAEs for independently learning the high-level representations of the source and target domains; and 2) a consensus SDAE to learn the domain-invariant high-level features. After obtaining the domain-invariant features through the proposed architectures, the classifier is trained on the domain-invariant labeled samples of the source domain, and the domain-invariant samples of the target domain are then classified to generate the related classification map. Experimental results obtained for the classification of very high resolution images confirm the effectiveness of the proposed DL architectures.
Classification of remote sensed images using random forests and deep learning framework
Author(s):
S. Piramanayagam;
W. Schwartzkopf;
F. W. Koehler;
E. Saber
Show Abstract
In this paper, we explore the use of two machine learning algorithms for the land cover classification of multi-sensor remote sensed images: (a) random forests for structured labels and (b) fully convolutional neural networks. In the random forest algorithm, individual decision trees are trained on features obtained from image patches and the corresponding patch labels. The structural information present in the image patches improves the classification performance compared with using pixel features alone. The random forest method was trained and evaluated on the ISPRS Vaihingen dataset, which consists of true orthophoto (TOP: near IR, R, G) and Digital Surface Model (DSM) data. The method achieves an overall accuracy of 86.3% on the test dataset. We also show qualitative results on a SAR image. In addition, we employ a fully convolutional neural network (FCN) framework to perform pixel-wise classification of the above multi-sensor image. The TOP and DSM data have individual convolutional layers, with features fused before the fully convolutional layers. When evaluated on the Vaihingen dataset, the network achieves an overall classification accuracy of 88%.
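The patch-based training data described above can be prepared with a routine like the following minimal sketch; the function name and window size are illustrative, and the resulting `(features, labels)` pairs would then feed a random forest, e.g. scikit-learn's `RandomForestClassifier` (not shown here).

```python
import numpy as np

def extract_patches(image, labels, size=5):
    """Slide a size x size window over an (H, W, C) image and return each
    flattened patch together with the label of its centre pixel."""
    h, w, c = image.shape
    r = size // 2
    feats, targets = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = image[i - r:i + r + 1, j - r:j + r + 1]
            feats.append(patch.reshape(-1))   # structural context as features
            targets.append(labels[i, j])
    return np.asarray(feats), np.asarray(targets)
```

The flattened patch carries the spatial context that, per the abstract, lifts accuracy over per-pixel features.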
The true false ground truths: What interest?
Author(s):
K. Chehdi;
C. Cariou
Show Abstract
The existence of a few unreliable ground truth (GT) data sets that are often used as references by the remote sensing community for the assessment and comparison of classification results is genuinely problematic and raises a number of questions. Two such ground truth data sets can be cited: "Pavia University" and "Indian Pine". A rigorous analysis of the spectral signatures of the pixels in these images shows that some classes considered homogeneous by the ground truth are clearly not: pixels assigned to the same class have different spectral signatures and probably do not belong to the same category.
Persisting in the use of data sets with a biased ground truth prevents objective comparisons between classification methods and does not help explain the physical phenomena that the images are supposed to reflect.
In this communication, we present a detailed and complete analysis of the spectral signatures of the pixels within each class of the two ground truth data sets mentioned above. The metrics used reveal inconsistencies and inaccuracies in these data, which nevertheless serve as references in several comparative classification studies.
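One simple way to probe the claimed homogeneity of a ground-truth class is to compute the spectral angle of every pixel to the class mean and flag classes whose angular spread is large. This is a generic check, not the specific metrics of the paper; the function names and the 0.1 rad threshold are illustrative assumptions.

```python
import numpy as np

def spectral_angles(pixels, reference=None):
    """Spectral angle (radians) of each pixel spectrum to the class mean
    (or to a given reference spectrum)."""
    pixels = np.asarray(pixels, dtype=float)
    ref = pixels.mean(axis=0) if reference is None else np.asarray(reference, float)
    cos = (pixels @ ref) / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def looks_homogeneous(pixels, max_angle=0.1):
    """Flag a ground-truth class as suspect when its intra-class
    spectral-angle spread exceeds the threshold."""
    return float(spectral_angles(pixels).max()) <= max_angle
```

The spectral angle ignores overall brightness, so pixels of one material under different illumination still count as homogeneous, while genuinely different materials do not.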
Regions-of-interest extraction from remote sensing imageries using visual attention modelling
Author(s):
Hui Li Tan;
Jiayuan Fan;
Maria Toomik;
Shijian Lu
Show Abstract
Processing and analysing large volumes of remote sensing data is both labour intensive and time consuming. There is therefore a need to effectively and efficiently identify meaningful regions in these remote sensing data for timely resource management. In this paper, we propose a visual attention model for identifying regions-of-interest in remote sensing data. The proposed model incorporates both bottom-up spatial saliency and top-down objectness by fusing a co-occurrence histogram saliency model with the BING objectness model. The co-occurrence histogram saliency model is constructed by first building a 2D co-occurrence histogram that captures the co-occurrence and occurrence of image intensities, and then using this histogram to model local and global saliency. The BING objectness model, in turn, is constructed by resizing image intensities in variable-sized windows to 8x8 windows, and then using the norms of the gradients in the 8x8 windows as features to train a generic objectness measure. Our experimental results show that the proposed model can effectively and efficiently identify regions-of-interest in remote sensing data. The proposed model may be applied in various remote sensing applications such as anomaly detection, urban area detection, target detection, or land use classification.
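The 2D co-occurrence histogram at the heart of the saliency model can be computed as below. This is a minimal single-offset sketch; the paper does not specify the quantisation or the offsets, so `n_bins` and `offset` are assumptions.

```python
import numpy as np

def cooccurrence_histogram(gray, n_bins=8, offset=(0, 1)):
    """2-D histogram counting how often the intensity pair (a, b) occurs at the
    given pixel offset; the diagonal carries plain occurrence information."""
    q = np.clip(gray.astype(np.int64) * n_bins // 256, 0, n_bins - 1)  # quantise 0..255
    di, dj = offset
    h, w = q.shape
    a = q[max(-di, 0):h + min(-di, 0), max(-dj, 0):w + min(-dj, 0)]    # pixel (i, j)
    b = q[max(di, 0):h + min(di, 0), max(dj, 0):w + min(dj, 0)]        # pixel (i+di, j+dj)
    hist = np.zeros((n_bins, n_bins), dtype=np.int64)
    np.add.at(hist, (a.ravel(), b.ravel()), 1)                          # accumulate pairs
    return hist
```

Summing histograms over several offsets (and both orders of each pair) would approximate the full co-occurrence statistics the abstract refers to.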
Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)
Author(s):
Huan Xie;
Xin Luo;
Xiong Xu;
Chen Wang;
Haiyan Pan;
Xiaohua Tong;
Shijie Liu
Show Abstract
Water bodies are fundamental elements of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach to urban areas remains challenging, because urban water bodies are mostly small and spectral confusion between water and the complex features of the urban environment is widespread. The water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing the urban environment at the subpixel level.
In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels using the water index; (2) deriving the most representative water and land endmembers from neighboring water pixels and from an adaptive iterative selection of the optimal neighboring land pixel, respectively; and (3) applying a linear unmixing model for subpixel water fraction estimation.
Specifically, to automatically extract land-water pixels, locally weighted scatterplot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then derived as a starting point for selecting land-water pixels from the histogram of the WI image, with the land and water thresholds determined from the slopes of the histogram curve.
Based on the previous pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the land-water mixed pixels for water fraction estimation at the subpixel level. Under the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure water or pure land pixels within a given distance. To obtain the most representative endmembers in the SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity in a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window.
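With the two endmembers fixed for a given mixed pixel, the linear unmixing step reduces to a one-parameter least-squares fit. A minimal sketch, with illustrative function and variable names:

```python
import numpy as np

def water_fraction(pixel, water_em, land_em):
    """Least-squares water fraction f under the two-endmember linear model
    pixel ~= f * water_em + (1 - f) * land_em, clipped to [0, 1]."""
    d = water_em - land_em
    # Projection of (pixel - land_em) onto the water-land direction
    f = float(np.dot(pixel - land_em, d) / np.dot(d, d))
    return min(max(f, 0.0), 1.0)
```

The clip to [0, 1] enforces the physical abundance constraint of the mixing model.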
The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas and evaluated for reliability using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. Results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and that WISMA achieved the best performance in water mapping under a comprehensive analysis of different accuracy measures (RMSE and SE).
Accuracy assessment of blind and semi-blind restoration methods for hyperspectral images
Author(s):
Mo Zhang;
Benoit Vozel;
Kacem Chehdi;
Mykhail Uss;
Sergey Abramov;
Vladimir Lukin
Show Abstract
Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present or the original image, blind restoration methods must be considered; otherwise, when partial information is available, semi-blind restoration methods can be considered. Numerous semi-blind and quite advanced methods are available in the literature. To gain better insight and feedback on the applicability and potential efficiency of a representative set of four recently proposed semi-blind methods, we have performed a comparative study of these methods in terms of the objective accuracy of blur filter and original image estimation. In particular, we have paid special attention to the accurate recovery of the original spectral signatures in the spectral dimension. We have analyzed peculiarities and factors restricting the applicability of these methods. Our tests were performed on a synthetic hyperspectral image degraded with various synthetic blurs (out-of-focus, Gaussian, motion) and with signal-independent noise of typical levels, such as those encountered in real hyperspectral images. This synthetic image was built from samples of classified areas of a real-life hyperspectral image, in order to benefit from realistic reference spectral signatures to recover after the synthetic degradation. Conclusions, practical recommendations and perspectives are drawn from the experimentally obtained results.
Unsupervised component reduction of hyperspectral images and clustering without performance loss: application to marine algae identification
Author(s):
B. Chen;
K. Chehdi;
E. De Oliveira;
C. Cariou;
B. Charbonnier
Show Abstract
In this communication, we propose a classification method adapted to the grouping of spectral components that provide similar information. After this step, a single band is automatically selected from every band class in order to cluster the pixels of the images. The method is completely unsupervised.
The proposed reduction approach is deterministic and iterative. It includes a connectivity criterion between bands based on the Manhattan distance. This criterion allows the automatic partitioning of the M spectral bands, leading to an identification of the most relevant spectral bands to keep in the subsequent pixel classification process. Moreover, the use of this criterion avoids classes with only one band. The spectral band selected to represent a given class is the one closest to all the other bands of that class, with respect to the metric used.
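The representative-band rule in the last sentence is a medoid selection under the Manhattan (L1) distance. A small sketch, assuming the band classes have already been found by the connectivity criterion; all names are illustrative.

```python
import numpy as np

def representative_bands(bands, band_classes):
    """For each class of spectral bands, keep the band whose summed Manhattan
    distance to the other bands of the class is smallest (the medoid)."""
    bands = np.asarray(bands, dtype=float)            # shape (M, n_pixels)
    band_classes = np.asarray(band_classes)
    kept = {}
    for c in np.unique(band_classes):
        idx = np.flatnonzero(band_classes == c)
        sub = bands[idx]
        # Pairwise L1 distances within the class
        d = np.abs(sub[:, None, :] - sub[None, :, :]).sum(axis=2)
        kept[int(c)] = int(idx[np.argmin(d.sum(axis=1))])
    return kept
```

The returned dictionary maps each band class to the index of its medoid band, i.e. the single band carried into the pixel classification step.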
The developed spectral band reduction has been evaluated and validated with our unsupervised descending hierarchical classification method (UDHC), with the addition of a regularization step. A real hyperspectral image composed of 100 spectral bands was used for the experimental study.
Exploring the impact of wavelet-based denoising in the classification of remote sensing hyperspectral images
Author(s):
Pablo Quesada-Barriuso;
Dora B. Heras;
Francisco Argüello
Show Abstract
The classification of remote sensing hyperspectral images for land cover applications is a very active topic. In supervised classification, Support Vector Machines (SVMs) play a dominant role, and recently the Extreme Learning Machine (ELM) algorithm has been extensively used. The classification scheme previously published by the authors, called WT-EMP, introduces spatial information into the classification process by means of an Extended Morphological Profile (EMP) created from features extracted by wavelets. In addition, the hyperspectral image is denoised in the 2-D spatial domain, also using wavelets, and is joined to the EMP via a stacked vector. In this paper, the scheme is improved to achieve two goals. The first is to reduce the classification time while preserving the accuracy of the classification by using ELM instead of SVM. The second is to improve the accuracy by performing not only a 2-D denoising of every spectral band, but also an additional prior 1-D denoising of the spectral signature of each pixel vector of the image. For each denoising, the image is transformed with a 1-D or 2-D wavelet transform, and a NeighShrink thresholding is then applied. Improvements in classification accuracy are obtained, especially for images with adjoining regions in the classification reference map, because in these cases the accuracy of the classification at the edges between classes is more relevant.
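The 1-D denoising step follows the usual transform-threshold-invert pattern. The sketch below uses a one-level Haar transform with plain soft thresholding as a stand-in for the NeighShrink rule used in the paper; the names and threshold value are illustrative assumptions.

```python
import numpy as np

def haar_denoise_1d(signal, threshold):
    """One-level Haar transform, soft-threshold the detail coefficients,
    then invert (plain soft thresholding stands in for NeighShrink here)."""
    x = np.asarray(signal, dtype=float)
    assert x.size % 2 == 0, "one-level Haar needs an even-length signal"
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)        # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)        # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (s + d) / np.sqrt(2.0)            # inverse Haar transform
    out[1::2] = (s - d) / np.sqrt(2.0)
    return out
```

NeighShrink differs in that each coefficient's shrinkage also depends on its neighbouring coefficients, but the overall pipeline is the same.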
Shadow extraction for urban area based on hyperspherical color sharpening information distortion
Author(s):
Qing Guo;
Qu Wang;
Hongqun Zhang
Show Abstract
A shadow extraction method for urban areas is presented based on the information distortion of the hyperspherical color transform (HCT) fusion. We use the near-infrared band of WorldView-2 data to detect shadow, because the near-infrared band, being the longer-wavelength band, is more sensitive to shadow than the shorter-wavelength bands. In hyperspherical color sharpening (HCS), n input bands are transformed from an n-dimensional Cartesian space to an n-dimensional hyperspherical color space to generate a single intensity component and n-1 angles, and the intensity component is then replaced with the adjusted panchromatic (Pan) image. After the HCT, the information content of the intensity is larger than that of the Pan band, so when the Pan band replaces the intensity to produce the fused multispectral (MS) image, information is lost. In assessing the information distortion of the fusion result, we found that shadow is sensitive to the difference index. Hence, a relative difference index is constructed to enhance the shadow information. More specifically, the relative difference index values are high for shadow areas and low for non-shadow areas, whereas in the original MS image the digital number values are low for shadow areas and high for non-shadow areas. Then, by thresholding, the possible shadow area is separated from the non-shadow area. The experimental results show that this shadow extraction method is simple and accurate; not only the shadows of tall buildings but also the small shadows of low trees and between buildings are all detected.
Spectral-spatial classification of hyperspectral images with semi-supervised graph learning
Author(s):
Renbo Luo;
Wenzhi Liao;
Hongyan Zhang;
Youguo Pi;
Wilfried Philips
Show Abstract
In this paper, we propose a novel semi-supervised graph learning method that fuses spectral information (from the original hyperspectral (HS) image) and spatial information (from morphological features) for the classification of HS images. In our proposed semi-supervised graph, samples are connected according to either label information (labeled samples) or their k-nearest spectral and spatial neighbors (unlabeled samples). Furthermore, we link each unlabeled sample with all labeled samples of the one class that is closest to this unlabeled sample in both the spectral and spatial feature spaces. The connected samples thus have similar characteristics in both the spectral and spatial domains, and have a high probability of belonging to the same class. By exploiting the fused semi-supervised graph, we then obtain transformation matrices that project the high-dimensional HS image and the morphological features to their lower-dimensional subspaces. The final classification map is obtained by concatenating the lower-dimensional features as the input of an SVM classifier. Experimental results on real hyperspectral data demonstrate the efficiency of the proposed semi-supervised fusion method. Compared to methods using unsupervised or supervised fusion, the proposed semi-supervised fusion method improves classification performance. Moreover, the classification performance remains stable even when only a small number of labeled training samples is available.
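The two connection rules can be sketched as an adjacency-matrix construction. This simplified version connects labeled samples that share a label and unlabeled samples to their k nearest neighbours in a single feature space, rather than the paper's joint spectral/spatial rule; all names are illustrative.

```python
import numpy as np

def semisupervised_graph(features, labels, k=2):
    """Adjacency matrix: labeled samples (label >= 0) connect when they share
    a label; unlabeled samples (label < 0) connect to their k nearest neighbours."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    n = features.shape[0]
    A = np.zeros((n, n))
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    for i in range(n):
        if labels[i] >= 0:
            A[i, labels == labels[i]] = 1.0       # label-driven edges
        else:
            order = np.argsort(dist[i])
            A[i, order[1:k + 1]] = 1.0            # k-NN edges (skip self at order[0])
    A = np.maximum(A, A.T)                        # symmetrise
    np.fill_diagonal(A, 0.0)                      # no self-loops
    return A
```

Such a graph is what the subsequent projection step would consume to derive the transformation matrices.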
Ship classification in terrestrial hyperspectral data
Author(s):
Göksu Keskin;
Hendrik Schilling;
Andreas Lenz;
Wolfgang Groß;
Wolfgang Middelmann
Show Abstract
This work analyzes the applicability of hyperspectral data to ship classification in coastal or harbor environments. An approach to hyperspectral feature selection based on the bag-of-words method was developed. Nearest neighbor and random forest classifiers were used to evaluate the hyperspectral bag-of-words features. The evaluation dataset was self-acquired at Kiel Harbor in Germany using the Aisa Eagle (VNIR) and Aisa Hawk (SWIR) sensors. The dataset included 547 samples of 72 objects, ranging from passenger ferries to sailing boats, under different illumination conditions. An object library was created from the dataset and bag-of-words features were extracted. Two different strategies for separating training and test sets were selected: random subsets and chronologically separated subsets. Chronological separation was more challenging than random separation for both classifiers. In order to allow a future sliding-window operation for object detection, training and classification were additionally performed on rectangular windows including background pixels. The performance of the nearest neighbor classifier dropped, whereas the performance of the random forest classifier slightly improved. The overall performance of the random forest classifier is better than that of the nearest neighbor classifier; however, it requires a more comprehensive dataset for training. The evaluations indicated that bag-of-words feature selection is feasible for the given application.
M-estimation for robust sparse unmixing of hyperspectral images
Author(s):
Maria Toomik;
Shijian Lu;
James D. B. Nelson
Show Abstract
Hyperspectral unmixing methods often use a conventional least squares based lasso, which assumes that the data follow a Gaussian distribution. The normality assumption is an approximation that is generally invalid for real imagery data. We consider a robust (non-Gaussian) approach to sparse spectral unmixing of remotely sensed imagery which reduces the sensitivity of the estimator to outliers and relaxes the linearity assumption. The method combines several appropriate penalties. We propose to use an ℓp norm with 0 < p < 1 in the sparse regression problem, which induces more sparsity in the results but makes the problem non-convex. The problem, though non-convex, can nevertheless be solved quite straightforwardly with an extensible algorithm based on iteratively reweighted least squares. To deal with the huge size of modern spectral libraries, we introduce a library reduction step, similar to the multiple signal classification (MUSIC) array processing algorithm, which not only speeds up unmixing but also yields superior results. In the hyperspectral setting we extend the traditional least squares method to the robust heavy-tailed case and propose a generalised M-lasso solution. M-estimation replaces the Gaussian likelihood with a fixed function ρ(e) that restrains outliers; the M-estimate function reduces the effect of errors with large amplitudes or even assigns the outliers zero weights. Our experimental results on real hyperspectral data show that noise with large amplitudes (outliers) often exists in the data. The ability to mitigate the influence of such outliers can therefore offer greater robustness. Qualitative hyperspectral unmixing results on real hyperspectral image data corroborate the efficacy of the proposed method.
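The iteratively reweighted least squares idea for the non-convex ℓp penalty can be sketched as follows: each iteration replaces the ℓp term with a weighted quadratic and solves a ridge-like system. This is a generic IRLS sketch, not the paper's full M-lasso (no robust ρ on the residual, no MUSIC library reduction, no abundance constraints); all names and parameter values are illustrative.

```python
import numpy as np

def irls_lp(A, y, lam=0.01, p=0.5, n_iters=50, eps=1e-6):
    """Iteratively reweighted least squares for the non-convex problem
    min_x ||y - A x||^2 + lam * sum_i |x_i|^p, with 0 < p < 1."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # unpenalised warm start
    for _ in range(n_iters):
        # Weight w_i ~ p * |x_i|^(p-2) turns the l_p penalty into a quadratic;
        # eps guards the singularity at x_i = 0
        w = p * (np.abs(x) + eps) ** (p - 2)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x
```

Near-zero coefficients receive enormous weights and are driven further toward zero, which is exactly the extra sparsity the ℓp penalty is meant to induce.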
Spectral-spatial hyperspectral image classification using super-pixel-based spatial pyramid representation
Author(s):
Jiayuan Fan;
Hui Li Tan;
Maria Toomik;
Shijian Lu
Show Abstract
Spatial pyramid matching has demonstrated its power for image recognition tasks by pooling features from spatially increasingly fine sub-regions. Motivated by the concept of feature pooling at multiple pyramid levels, we propose a novel spectral-spatial hyperspectral image classification approach using superpixel-based spatial pyramid representation. This technique first generates multiple superpixel maps by gradually decreasing the superpixel number along with the increasing spatial regions for labelled samples. Using every superpixel map, a sparse representation of the pixels within every spatial region is then computed through local max pooling. Finally, features learned from training samples are aggregated and trained by a support vector machine (SVM) classifier. The proposed spectral-spatial hyperspectral image classification technique has been evaluated on two public hyperspectral datasets: the Indian Pines image, containing 16 different agricultural scene categories at a 20 m resolution, acquired by AVIRIS, and the University of Pavia image, containing 9 land-use categories at a 1.3 m spatial resolution, acquired by the ROSIS-03 sensor. Experimental results show significantly improved performance compared with state-of-the-art works. The major contributions of the proposed technique are: (1) a new spectral-spatial classification approach to generate feature representations for hyperspectral images; (2) a complementary yet effective feature pooling approach, i.e. the superpixel-based spatial pyramid representation used for the spatial correlation study; and (3) evaluation on two public hyperspectral image datasets with superior image classification performance.
A novel multi-temporal approach to wet snow retrieval with Sentinel-1 images (Conference Presentation)
Author(s):
Carlo Marin;
Mattia Callegari;
Claudia Notarnicola
Show Abstract
Snow is one of the most relevant natural water resources present in nature. It stores water in winter and releases it in spring during the melting season. Monitoring snow cover and its variability is thus of great importance for proactive water resource management. Of particular interest is the identification of snowmelt processes, which could significantly support water administration, flood prediction and prevention.
In the past years, remote sensing has proven to be an essential tool for providing accurate inputs to hydrological models concerning the spatial and temporal variability of snow. Even though the analysis of the snow pack can be conducted in the visible, near-infrared and short-wave infrared spectrum, the presence of clouds during the melting season, which may be pervasive in some parts of the world (e.g., polar regions), makes the regular acquisition of the information needed for operational purposes impossible.
Therefore, microwave sensors, whose signal can penetrate clouds, can be an asset for the detection of snow properties. In particular, SAR images have proven to be effective and robust measurements for identifying wet snow. Among the several methods presented in the literature, the best results in wet snow mapping have been achieved by the bi-temporal change detection approach proposed by Nagler and Rott [1], or the slight improvements of it presented afterwards (e.g., [2]). Nonetheless, with the introduction of Sentinel-1 by ESA, which provides free-of-charge SAR images every 6 days over the same geographical area at a resolution of 20 m, scientists have the opportunity to better investigate and improve the state-of-the-art methods for wet snow detection.
In this work, we propose a novel method based on a supervised learning approach that exploits both the experience of the state-of-the-art algorithms and the rich multi-temporal information provided by the Sentinel-1 data. In detail, this is done by training the proposed method with examples extracted by [1] and refining this information by deriving additional training samples for the complex cases where the state-of-the-art algorithm fails. In addition, the multi-temporal information is fully exploited by modelling it as a series of statistical moments. Indeed, with a proper time sampling, statistical moments can describe the shape of the probability density function (pdf) of the backscattering time series ([3-4]). Given the description of the shape of the multi-temporal VV and VH backscattering pdfs, it is not necessary to explicitly identify which time instants in the time series are to be assigned to the reference image, as done in the bi-temporal approach. This information is implicit in the shape of the pdf and is used in the training procedure to solve the wet snow detection problem from the available training samples.
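Summarising a backscatter time series by the shape of its pdf amounts to computing per-pixel statistical moments. A minimal sketch; the choice of exactly the first four moments is an assumption, since the abstract only speaks of "a series of statistical moments".

```python
import numpy as np

def backscatter_moments(ts_db):
    """First four statistical moments of a backscatter time series (dB):
    they summarise the shape of its pdf without picking a reference date."""
    x = np.asarray(ts_db, dtype=float)
    mu = x.mean()                                     # 1st moment: mean level
    sd = x.std()                                      # 2nd moment: spread
    skew = np.mean(((x - mu) / sd) ** 3) if sd > 0 else 0.0   # asymmetry
    kurt = np.mean(((x - mu) / sd) ** 4) - 3.0 if sd > 0 else 0.0  # tailedness
    return np.array([mu, sd, skew, kurt])
```

Computed separately for the VV and VH series, these moments form the feature vector fed to the supervised classifier.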
The proposed approach is designed to work in an alpine environment and it is validated considering ground truth measurements provided by automatic weather stations that record snow depth and snow temperature over 10 sites deployed in the South Tyrol region in northern Italy.
References:
[1] Nagler, T. and Rott, H., "Retrieval of wet snow by means of multitemporal SAR data," IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 2, pp. 754-765, Mar. 2000.
[2] Storvold, R., Malnes, E., and Lauknes, I., "Using ENVISAT ASAR wideswath data to retrieve snow covered area in mountainous regions," EARSeL eProceedings, vol. 4, no. 2, 2006.
[3] Inglada, J. and Mercier, G., "A New Statistical Similarity Measure for Change Detection in Multitemporal SAR Images and Its Extension to Multiscale Change Analysis," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 5, pp. 1432-1445, May 2007.
[4] Bujor, F., Trouve, E., Valet, L., Nicolas, J. M., and Rudant, J. P., "Application of log-cumulants to the detection of spatiotemporal discontinuities in multitemporal SAR images," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 10, pp. 2073-2084, Oct. 2004.
An end-user-oriented framework for RGB representation of multitemporal SAR images and visual data mining
Author(s):
Donato Amitrano;
Francesca Cecinati;
Gerardo Di Martino;
Antonio Iodice;
Pierre-Philippe Mathieu;
Daniele Riccio;
Giuseppe Ruello
Show Abstract
In this paper, we present a new framework for the generation of two new classes of RGB products derived from multitemporal SAR data. The aim of our processing chain is to provide products characterized by a high degree of interpretability (thanks to a consistent rendering of the underlying electromagnetic scattering mechanisms) and by their suitability for use in combination with simple information-extraction algorithms. The physical rationale of the proposed RGB products is presented through examples highlighting their principal properties. Finally, the suitability of these products for applications is demonstrated through two examples dealing with feature extraction and classification.
A novel framework for change detection in bi-temporal polarimetric SAR images
Author(s):
Davide Pirrone;
Francesca Bovolo;
Lorenzo Bruzzone
Show Abstract
Recent years have seen a relevant increase in the availability of polarimetric Synthetic Aperture Radar (SAR) data, thanks to satellite sensors like Sentinel-1 or ALOS-2 PALSAR-2. The augmented information carried by the additional polarimetric channels offers the possibility of better discriminating different classes of change in change detection (CD) applications. This work proposes a framework for CD in multi-temporal multi-polarization SAR data. The framework includes both a tool for an effective visual representation of the change information and a method for extracting multiple-change information; both components are designed to effectively handle the multi-dimensionality of polarimetric data. In the novel representation, multi-temporal intensity SAR data are employed to compute a polarimetric log-ratio. The multitemporal information of the polarimetric log-ratio image is represented in a multi-dimensional feature space, where changes are highlighted in terms of magnitude and direction. This representation is employed to design a novel unsupervised multi-class CD approach, which performs a sequential two-step analysis of the magnitude and direction information to separate non-changed and changed samples. The proposed approach has been validated on a pair of Sentinel-1 images acquired before and after the 2015 flood in Tamil Nadu. Preliminary results demonstrate that the representation tool is effective and that the use of polarimetric SAR data is promising in multi-class change detection applications.
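The magnitude/direction view of the per-channel log-ratio can be sketched as below. This is a minimal reading of the abstract, not the paper's actual implementation; the dB scaling and all names are assumptions.

```python
import numpy as np

def logratio_magnitude_direction(intensity_t1, intensity_t2):
    """Per-channel log-ratio of two multi-polarisation intensity images
    (channels on the last axis), summarised as change magnitude and a
    unit direction vector in the multi-dimensional feature space."""
    lr = 10.0 * np.log10(intensity_t2 / intensity_t1)   # log-ratio per channel, in dB
    mag = np.linalg.norm(lr, axis=-1)                   # change magnitude per pixel
    direction = lr / np.maximum(mag[..., None], 1e-12)  # unit vector; guards mag == 0
    return lr, mag, direction
```

Thresholding the magnitude separates changed from non-changed pixels; clustering the direction then distinguishes the different classes of change.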
A segmentation-based approach to SAR change detection and mapping
Author(s):
Andrea Garzelli;
Claudia Zoppetti
Show Abstract
The potential of SAR sensors in change detection applications has recently been strengthened by the high spatial resolution and the short revisit time provided by the new generation of SAR-based missions, such as COSMO-SkyMed, TerraSAR-X, and RadarSat 3. Classical pixel-based change detection methods exploit first-order statistical variations in multitemporal acquisitions. Higher-order statistics may improve the reliability of the results, while plain object-based change detection methods are rarely applied to SAR images due to the low signal-to-noise ratio that characterizes 1-look VHR SAR image products. The paper presents a hybrid approach combining a pixel-based selection of likely-changed pixels with a segmentation-driven step based on the assumption that structural changes correspond to clusters in a multiscale amplitude/texture representation. Experiments on simulated and real SAR image pairs demonstrate the advantages of the proposed approach.
Change detection in a time series of polarimetric SAR data by an omnibus test statistic and its factorization (Conference Presentation)
Author(s):
Allan A. Nielsen;
Knut Conradsen;
Henning Skriver
Show Abstract
Test statistics for comparison of real (as opposed to complex) variance-covariance matrices exist in the statistics literature [1].
In earlier publications we have described a test statistic for the equality of two variance-covariance matrices following the complex Wishart distribution, with an associated p-value [2]. We showed their application to bitemporal change detection and to edge detection [3] in multilook, polarimetric synthetic aperture radar (SAR) data in the covariance matrix representation [4]. The test statistic and the associated p-value are also described in [5]. In [6] we focussed on the block-diagonal case, elaborated on some computer implementation issues, and gave examples of the application to change detection in both full and dual polarization bitemporal, bifrequency, multilook SAR data.
In [7] we described an omnibus test statistic Q for the equality of k variance-covariance matrices following the complex Wishart distribution. We also described a factorization of Q = R2 R3 … Rk where Q and Rj determine if and when a difference occurs. Additionally, we gave p-values for Q and Rj. Finally, we demonstrated the use of Q and Rj and the p-values to change detection in truly multitemporal, full polarization SAR data.
Here we illustrate the methods by means of airborne L-band SAR data (EMISAR) [8,9]. The methods may be applied to other polarimetric SAR data also such as data from Sentinel-1, COSMO-SkyMed, TerraSAR-X, ALOS, and RadarSat-2 and also to single-pol data.
The account given here closely follows that given in our recent IEEE TGRS paper [7].
Selected References
[1] Anderson, T. W., An Introduction to Multivariate Statistical Analysis, John Wiley, New York, third ed. (2003).
[2] Conradsen, K., Nielsen, A. A., Schou, J., and Skriver, H., "A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data," IEEE Transactions on Geoscience and Remote Sensing 41(1): 4-19, 2003.
[3] Schou, J., Skriver, H., Nielsen, A. A., and Conradsen, K., "CFAR edge detector for polarimetric SAR images," IEEE Transactions on Geoscience and Remote Sensing 41(1): 20-32, 2003.
[4] van Zyl, J. J. and Ulaby, F. T., "Scattering matrix representation for simple targets," in Radar Polarimetry for Geoscience Applications, Ulaby, F. T. and Elachi, C., eds., Artech, Norwood, MA (1990).
[5] Canty, M. J., Image Analysis, Classification and Change Detection in Remote Sensing, with Algorithms for ENVI/IDL and Python, Taylor & Francis, CRC Press, third revised ed. (2014).
[6] Nielsen, A. A., Conradsen, K., and Skriver, H., "Change detection in full and dual polarization, single- and multi-frequency SAR data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8(8): 4041-4048, 2015.
[7] Conradsen, K., Nielsen, A. A., and Skriver, H., "Determining the points of change in time series of polarimetric SAR data," IEEE Transactions on Geoscience and Remote Sensing 54(5): 3007-3024, 2016.
[9] Christensen, E. L., Skou, N., Dall, J., Woelders, K., Jørgensen, J. H., Granholm, J., and Madsen, S. N., "EMISAR: An absolutely calibrated polarimetric L- and C-band SAR," IEEE Transactions on Geoscience and Remote Sensing 36: 1852-1865 (1998).
A real-time focused SAR algorithm on the Jetson TK1 board
Author(s):
K. Radecki;
P. Samczynski;
K. Kulpa;
J. Drozdowicz
Show Abstract
In this paper the authors present a solution based on a small, lightweight computing platform equipped with a graphics processing unit (GPU), which makes it possible to run a fully focused SAR algorithm in real time. The presented system is intended for airborne SAR applications, including small SAR systems for medium-sized unmanned aerial vehicle (UAV) platforms. The proposed solution also reduces the need for a storage system. The paper presents real SAR results obtained using a Frequency Modulated Continuous Wave (FMCW) radar demonstrator operating at a 35 GHz carrier frequency with 1 GHz bandwidth, carried on an airborne platform. The presented SAR radar demonstrator was developed by the Warsaw University of Technology in cooperation with the Air Force Institute of Technology, Warsaw, Poland.
A simulation-based approach towards automatic target recognition of high resolution space borne radar signatures
Author(s):
H. Anglberger;
T. Kempf
Show Abstract
Specific imaging effects, caused mainly by the range measurement principle of a radar device, its much lower frequency range compared to the optical spectrum, the slanted imaging geometry, and of course the limited spatial resolution, complicate the interpretation of radar signatures considerably. In particular, the coherent image formation, which causes unwanted speckle noise, aggravates the problem of visually recognizing target objects. Fully automatic approaches with acceptable false alarm rates are therefore an even harder challenge.
At the Microwaves and Radar Institute of the German Aerospace Center (DLR), the development of methods for a robust overall processing workflow for automatic target recognition (ATR) from high resolution synthetic aperture radar (SAR) image data is in progress. The heart of the general approach is to use time-series exploitation for the initial detection step and simulation-based signature matching for the subsequent recognition. This paper shows the overall ATR chain as a proof of concept for the special case of airplane recognition in image data from the space borne SAR sensor TerraSAR-X.
Oil spill characterization in the hybrid polarity SAR domain using log-cumulants
Author(s):
Martine M. Espeseth;
Stine Skrunes;
Camilla Brekke;
Arnt-Børre Salberg;
Cathleen E. Jones;
Benjamin Holt
Show Abstract
Log-cumulants have proven to be an interesting tool for evaluating the statistical properties of potential oil spills in polarimetric Synthetic Aperture Radar (SAR) data within the common horizontal (H) and vertical (V) polarization basis. The use of first, second, and third order sample log-cumulants has shown potential for evaluating the texture and the statistical distributions, as well as discriminating oil from look-alikes. Log-cumulants are cumulants derived in the log-domain and can be applied to both single-polarization and multipolarization SAR data. This study is the first to investigate the differences between hybrid-polarity (HP) and full-polarimetric (FP) modes based on the sample log-cumulants of various oil slicks and open water from nine Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) scenes acquired off the coast of Norway in 2015.
The sample log-cumulants calculated from the HP intensities show statistical behavior similar to that of the FP ones, resulting in a similar interpretation of the sample log-cumulants from HP and FP. Approximately eight hours after release, the sample log-cumulants representing the emulsion slicks have become more similar to those of open water than the plant oil has. We find that the sample log-cumulants of the various oil slicks and open water vary between the scenes as well as between the slicks and open water. This might be due to changes in ocean and wind conditions, the initial slick properties, and/or differences in the weathering processes of the oil slicks.
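As a sketch of the basic quantities involved, the first three sample log-cumulants of intensity data reduce to the central moments of the log-transformed samples. This covers single-channel intensity only; the multipolarization matrix log-cumulants used in the paper are more involved.

```python
import numpy as np

def sample_log_cumulants(intensity):
    """First three sample log-cumulants of SAR intensity data.

    For orders one to three, the cumulants of ln(z) coincide with the
    central moments of ln(z), so the estimators are simple averages.
    """
    logz = np.log(intensity)
    k1 = np.mean(logz)            # first log-cumulant: mean of ln z
    d = logz - k1
    k2 = np.mean(d ** 2)          # second: variance of ln z
    k3 = np.mean(d ** 3)          # third: skewness-related moment
    return k1, k2, k3
```

For single-look intensity with exponential statistics the theoretical values are k1 = ln(mean) - 0.5772 (Euler's constant), k2 = pi^2/6 and k3 = -2*zeta(3) (about -2.40); deviations of the sample values from these points indicate texture or a different underlying distribution.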
Capability of geometric features to classify ships in SAR imagery
Author(s):
Haitao Lang;
Siwen Wu;
Quan Lai;
Li Ma
Show Abstract
Ship classification in synthetic aperture radar (SAR) imagery has become a new hotspot in the remote sensing community for its valuable potential in many maritime applications. Several kinds of ship features, such as geometric, polarimetric, and scattering features, have been widely applied to ship classification tasks. Compared with polarimetric and scattering features, which are subject to SAR parameters (e.g., sensor type, incidence angle, polarization) and environmental factors (e.g., sea state, wind, wave, current), geometric features are relatively independent of SAR and environmental factors and are easy to extract stably from SAR imagery. In this paper, the capability of geometric features to classify ships in SAR imagery at various resolutions is investigated. Firstly, the relationship between geometric feature extraction accuracy and SAR image resolution is analyzed. It shows that the minimum bounding rectangle (MBR) of a ship can be extracted accurately by the proposed automatic ship-sea segmentation method. Next, six simple but effective geometric features are extracted to build a ship representation for the subsequent classification task: length (f1), width (f2), area (f3), perimeter (f4), elongatedness (f5) and compactness (f6). Among them, the two basic features, length (f1) and width (f2), are extracted directly from the MBR of the ship; the other four are derived from these two basic features. The capability of the utilized geometric features to classify ships is validated on two data sets with different image resolutions. The results show that the performance of ship classification by geometric features alone is close to that of state-of-the-art methods, which combine multiple kinds of features, including scattering and geometric features, after a complex feature selection process.
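The feature set f1 through f6 can be sketched as follows. The abstract does not give the formulas for the four derived features, so the definitions below are common textbook choices (MBR area and perimeter, aspect ratio, and isoperimetric compactness), not necessarily the paper's exact ones.

```python
import math

def ship_geometric_features(length, width):
    """Six geometric features derived from a ship's minimum bounding
    rectangle (MBR), with length and width as the two basic features."""
    f1 = length                          # f1: MBR length
    f2 = width                           # f2: MBR width
    f3 = length * width                  # f3: area of the MBR
    f4 = 2.0 * (length + width)          # f4: perimeter of the MBR
    f5 = length / width                  # f5: elongatedness (aspect ratio)
    f6 = 4.0 * math.pi * f3 / f4 ** 2    # f6: compactness, in (0, 1]
    return [f1, f2, f3, f4, f5, f6]
```

A long, narrow ship thus gets a large f5 and a small f6, which is the kind of separation a downstream classifier can exploit.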
Estimation of ice sheet attenuation by using radar sounder and ice core data
Author(s):
Ana-Maria Ilisei;
Jilu Li;
Sivaprasad Gogineni;
Lorenzo Bruzzone
Show Abstract
Due to their great impact on the environment and society, the study of the ice sheets has become a major concern of the scientific community. In particular, the estimation of ice attenuation is crucial since it enables a more precise characterization of the ice and basal conditions. Although this problem has often been addressed in the literature, the assessment of ice attenuation is subject to several hypotheses and uncertainties, resulting in a wide range of possible interpretations of the properties of the ice. In this paper, we propose a method for constraining the ice attenuation profiles in the vicinity of an ice core by jointly using coincident radar sounder (RS) data (radargrams) and dielectric profile (DEP) data. Radargrams contain measurements of radar power reflected from ice subsurface dielectric discontinuities (layers) over wide areas. DEP data contain ice dielectric permittivity measurements collected at an ice core. The method relies on the detection of ice layers in the radargrams, the estimation of their depth and reflectivity from the DEP data, and the use of the radar equation for the estimation of ice attenuation through the whole ice column and locally at each layer position. The method has been applied to RS and DEP data acquired at the NEEM core site in Greenland. Experimental results confirm the effectiveness of the proposed method.
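The radar-equation step can be illustrated very roughly as follows. This is a strongly simplified sketch, not the authors' estimator: it assumes a specular layer, two-way spreading over a refraction-corrected range, all powers in dB relative to the same reference, and an assumed real ice permittivity `eps_r`.

```python
import numpy as np

def two_way_attenuation_db(p_meas_db, gamma, depth, h, eps_r=3.15):
    """Attribute to attenuation the part of the power budget not explained
    by layer reflectivity and geometric spreading (simplified sketch).

    p_meas_db : measured relative layer echo power [dB]
    gamma     : layer power reflectivity from the DEP data (linear)
    depth     : layer depth in ice [m]
    h         : platform height above the ice surface [m]
    eps_r     : assumed ice permittivity (refraction-corrected range)
    """
    r = h + depth / np.sqrt(eps_r)           # effective one-way range
    spreading_db = 20.0 * np.log10(2.0 * r)  # two-way spherical spreading
    reflect_db = 10.0 * np.log10(gamma)
    # whatever power is still missing is attributed to two-way attenuation
    return reflect_db - spreading_db - p_meas_db
```

Repeating this for every detected layer, with reflectivities taken from the DEP profile, yields attenuation as a function of depth; the sketch ignores system gains, which cancel only if all powers share a common reference.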
A bat inspired technique for clutter reduction in radar sounder systems
Author(s):
L. Carrer;
L. Bruzzone
Show Abstract
Radar sounders are valuable instruments for subsurface investigation. They are widely employed for the study of planetary bodies around the solar system. Due to their wide antenna beam pattern, off-nadir surface reflections (i.e., clutter) of the transmitted signal can compete with echoes coming from the subsurface, thus masking them. Different strategies have been adopted for clutter mitigation. However, none of them has proved to be the final solution for this specific problem. Bats are very well known for their ability to discriminate between a prey and unwanted clutter (e.g., foliage) by effectively employing their sonar. According to recent studies, big brown bats can discriminate clutter by transmitting two different carrier frequencies. Most interestingly, there are many striking analogies between the characteristics of the bat sonar and those of a radar sounder. Among the most important ones, they share the same nadir acquisition geometry and transmitted signal type (i.e., linear frequency modulation). In this paper, we explore the feasibility of exploiting frequency diversity for the purpose of clutter discrimination in radar sounding by mimicking unique bat signal processing strategies. Accordingly, we propose a frequency diversity clutter reduction method based on specific mathematical conditions that, if verified, allow the clutter and the subsurface signal to be disambiguated. These analytic conditions depend on factors such as the difference in central carrier frequencies, surface roughness and subsurface material properties. The method's performance has been evaluated on simulations of several meaningful acquisition scenarios, which confirm its clutter reduction effectiveness.
An approach for SLAR images denoising based on removing regions with low visual quality for oil spill detection
Author(s):
Beatriz Alacid;
Pablo Gil
Show Abstract
This paper presents an approach to removing SLAR (Side-Looking Airborne Radar) image regions with low visual quality for the automatic detection of oil slicks by an on-board system. The approach focuses on detecting and labelling SLAR image regions degraded by poor acquisition from the two antennas located on either side of an aircraft. Thereby, the method distinguishes ineligible regions which are not suitable for use in the steps of an automatic oil slick detection process, because they have a high probability of causing false positives. To do this, the method uses a hybrid approach based on edge-based segmentation aided by Gabor filters for texture detection, combined with a search for significant grey-level changes to fit the boundary lines of each ineligible region. Afterwards, a statistical analysis is done to label the set of pixels which should be used for the recognition of oil slicks. The results show a successful detection of the ineligible regions and, consequently, how the image is partitioned into sub-regions of interest for oil slick detection, improving the accuracy and reliability of the detection.
Full-aspect 3D target reconstruction of interferometric circular SAR
Author(s):
Yun Lin;
Qian Bao;
Liying Hou;
Lingjuan Yu;
Wen Hong
Show Abstract
Circular SAR has several attractive features, such as full-aspect observation, high resolution, and 3D target reconstruction capability; thus it has important potential for fine feature description of typical targets. However, the 3D reconstruction capability relies on the scattering persistence of the target. For a target with a highly directive scattering property, the resolution in the direction perpendicular to the instantaneous slant plane is very low compared to the range and azimuth resolutions, and the 3D structure of the target can hardly be obtained. In this paper, an Interferometric Circular SAR (InCSAR) method is proposed to reconstruct the full-aspect 3D structure of typical targets. InCSAR uses two sensors with a small incidence angle difference to collect data along a circular trajectory. The proposed method calculates the interferometric phase difference (IPD) of the image pair at equally spaced height slices and masks the original image with an IPD threshold. The main principle is that when a scatterer is imaged at a wrong height, the image pair has an offset, which results in a nonzero IPD; only when the scatterer is imaged at its true height is the IPD near zero. The IPD threshold is used to retain scatterers that are correctly imaged at the right height while eliminating scatterers imaged at a wrong height, so that the 3D target structure can be retrieved. The proposed method is validated with real data, both data collected in a microwave chamber and the GOTCHA airborne data.
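The per-slice masking step can be sketched as below. This is a minimal illustration of the IPD thresholding idea only; image formation at each trial height and the choice of threshold are outside its scope.

```python
import numpy as np

def ipd_mask(img1, img2, threshold):
    """Mask a complex image pair focused at the same trial height slice.

    Scatterers focused at (approximately) the correct height have
    near-zero interferometric phase difference and are retained;
    all other pixels are suppressed to zero.
    """
    ipd = np.angle(img1 * np.conj(img2))   # interferometric phase difference
    mask = np.abs(ipd) < threshold
    return np.where(mask, img1, 0), mask
```

Sweeping the pair of images over a stack of height slices and keeping, for each pixel, the slice that survives the mask gives the height estimate that underlies the 3D reconstruction.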
Adaptive sidelobe reduction in SAR and INSAR COSMO-SkyMed image processing
Author(s):
Rino Lorusso;
Nunzia Lombardi;
Giovanni Milillo
Show Abstract
The main lobe and the sidelobes of strong scatterers are sometimes clearly visible in SAR images. Sidelobe reduction is of particular importance when imaged scenes contain objects such as ships and buildings with very large radar cross sections. Amplitude weighting is usually used to suppress image sidelobes at the expense of mainlobe broadening, loss of resolution and degradation of the SAR image. Spatially Variant Apodization (SVA) is an Adaptive SideLobe Reduction (ASLR) technique that provides highly effective suppression of sidelobes without broadening the mainlobe. In this paper, we apply SVA to process COSMO-SkyMed (CSK) StripMap and Spotlight X-band data and compare the images with the standard products obtained via Hamming window processing. Different test sites with installed corner reflectors have been selected in Italy, Argentina, California and Germany. Experimental results clearly show a resolution improvement (20%) while sidelobes are kept at a low level when SVA processing is applied, compared with Hamming windowing. The SVA technique is then applied to interferometric SAR (INSAR) processing using a CSK StripMap interferometric tandem-like data pair acquired over eastern California. The interferometric coherence of the image pair obtained without sidelobe reduction (SCS_U) and with sidelobe reduction performed via the Hamming window and via SVA are compared. High resolution interferometric products have been obtained with small variations of mean coherence when using ASLR products with respect to the Hamming-windowed and unwindowed ones.
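One textbook form of SVA on Nyquist-sampled data can be sketched in 1-D as follows; this is an illustration of the principle, not the implementation used for the CSK products. Per output sample, the raised-cosine weighting between uniform and Hanning that minimizes the output magnitude is chosen, with real and imaginary channels processed independently and the array edges left untouched.

```python
import numpy as np

def sva_1d(x):
    """1-D Spatially Variant Apodization of a complex, Nyquist-sampled
    signal (simplified sketch of the classic three-point algorithm)."""
    x = np.asarray(x, dtype=complex)

    def apodize(u):
        y = u.copy()
        s = u[:-2] + u[2:]            # sum of the two neighbours
        centre = u[1:-1]
        with np.errstate(divide='ignore', invalid='ignore'):
            w = np.where(s != 0, -centre / s, 0.0)  # unconstrained weight
        # clamp the weight to [0, 1/2]: keep, null, or apply Hanning
        out = np.where(w <= 0, centre,
              np.where(w >= 0.5, centre + 0.5 * s, 0.0))
        y[1:-1] = out
        return y

    return apodize(x.real) + 1j * apodize(x.imag)
```

Because the weight is chosen pixel by pixel, sidelobes are driven toward zero while isolated mainlobe peaks (whose neighbours are small) are left essentially intact, which is the behaviour the resolution/sidelobe comparison above exploits.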
Analysis of the electronic crosstalk effect in Terra MODIS long-wave infrared photovoltaic bands using lunar images
Author(s):
Truman Wilson;
Aisheng Wu;
Xu Geng;
Zhipeng Wang;
Xiaoxiong Xiong
Show Abstract
The Moderate Resolution Imaging Spectroradiometer (MODIS) is one of the key sensors among the suite of remote sensing instruments on board the Earth Observing System Terra and Aqua spacecraft. For each MODIS spectral band, the sensor degradation has been measured using a set of on-board calibrators. MODIS also uses lunar observations from nearly monthly spacecraft maneuvers, which bring the Moon into view through the space-view port, helping to characterize the scan mirror degradation at different angles of incidence. Throughout the Terra mission, contamination of the long-wave infrared photovoltaic band (LWIR PV, bands 27-30) signals has been observed in the form of electronic crosstalk, where the signal from each of the detectors among the LWIR PV bands can leak to the other detectors, producing a false signal contribution. This contamination has had a noticeable effect on the MODIS science products since 2010 for band 27, and since 2012 for bands 28 and 29. Images of the Moon have been used effectively for determining the contaminating bands, and have also been used to derive correction coefficients for the crosstalk contamination. In this paper, we introduce an updated technique for characterizing the crosstalk contamination among the LWIR PV bands using data from lunar calibration events. This approach takes into account both the "in-band" and "out-of-band" contributions to the signal contamination for each detector in bands 27-30, which were not considered in previous works. The crosstalk coefficients can be derived for each lunar calibration event, providing the time dependence of the crosstalk contamination. Application of these coefficients to Earth-view image data results in a significant reduction in image contamination and a correction of the scene radiance for bands 27-30. This correction also shows a significant improvement in certain threshold tests of the MODIS Level-2 Cloud Mask.
In this paper, we will detail the methodology used to identify and correct the crosstalk contamination for the LWIR PV bands in Terra MODIS. The derived time-dependent crosstalk coefficients will also be discussed. Finally, the impact of the correction on the downstream data products will be analyzed.
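In its simplest linear form, the correction idea amounts to subtracting scaled copies of the contaminating detectors' signals from each receiving detector. The toy sketch below uses a hypothetical coefficient matrix; the operational correction is per-detector and time-dependent, as described above.

```python
import numpy as np

def correct_crosstalk(dn, coeffs):
    """Remove linear electronic crosstalk among detector signals.

    dn     : (nbands, npix) measured signals
    coeffs : (nbands, nbands) matrix; coeffs[i, j] is the fraction of
             band j's signal leaking into band i (illustrative values)
    """
    leak = coeffs @ dn        # predicted contamination in each band
    return dn - leak          # corrected signals
```

In practice the coefficients are re-derived at each lunar calibration event, so the subtraction tracks the temporal drift of the contamination.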
Processing of high spatial resolution information obtained from satellites of Resource-P series according to the level 1
Author(s):
V. Eremeev;
A. Kuznetcov;
V. Poshekhonov;
O. Presniakov;
V. Zenin;
P. Svetelkin;
A. Kochergin
Show Abstract
This paper describes the main functioning principles of the high spatial resolution imaging instruments of the Russian “Resource-P” satellites. Level 1 processing of images obtained from these instruments includes: relative radiometric correction, stitching of video data obtained from separate CCD matrices, geometric matching of multitemporal multispectral images from the optoelectronic converters (OEC), pansharpening, and saving of the results in distribution formats. The stages of constructing the high-precision Earth surface imaging model on which the processing is based are considered. The algorithms realizing the mentioned processing steps, examples of their practical use, and the accuracy characteristics of the outputs are described.
Investigating the performance of a low-cost thermal imager for forestry applications
Author(s):
M. Smigaj;
R. Gaulton;
S. L. Barr;
J. C. Suarez
Show Abstract
Thermography can be used for monitoring changes in the physiological state of plants. This is because stress factors influence emissions in the thermal infrared part of the electromagnetic spectrum and in effect change the thermal properties of plants. However, there has been limited research into the use of thermal remote sensing approaches for tree health monitoring in the UK. This is due to a need for high spatial resolution data, which is usually obtained with low temporal frequency. Newly emerging technologies, such as unmanned aerial vehicles (UAVs), could supplement aerial data acquisition, but sensor development is still at an early stage. This paper investigates the performance of a low-cost microbolometer thermal infrared camera intended for deployment on a UAV platform. First, the camera was tested in a laboratory environment to investigate whether changes in camera temperature have a significant impact on image quality. The tests suggested that a rapid change in camera temperature is reflected in subsequent images, but that the temperature change rate expected during UAV launch and altitude gain would not have a significant effect on the quality of the thermal imagery. A further field-based experiment showed that absolute temperatures of non-blackbody objects can be obtained accurately with such a camera, provided the emissivity of the surfaces is accurately known. The variation in the target's surface temperature over time was also well captured.
A star identification algorithm for large FOV observations
Author(s):
Yu Duan;
Zhaodong Niu;
Zengping Chen
Show Abstract
Due to their broader extent of observation and higher detection probability for space targets, large FOV (field of view) optical instruments are widely used in astronomical applications. However, the high density of observed stars and the distortion of the optical system often introduce inaccuracy in star locations, so many conventional star identification algorithms do not perform very well in large FOV observations. In this paper, we propose a star identification method with a low requirement for observation accuracy that is thus suitable for large FOV circumstances. The proposed method includes two stages. The first is based on the match group algorithm, in addition to which we exploit the information of differential angles of inclination for verification; the inclinations of satellite stars are computed with reference to the selected pole stars. This yields a set of identified stars for further recognition. The second stage involves four steps. First, we derive the relationship between the rectangular coordinates of catalog stars and sensor stars from the identified locations. Second, we transform the sensor coordinates to the catalog coordinates and find the catalog stars at close range as candidates. Third, we calculate the angle of inclination of each unidentified sensor star in relation to the nearest previously identified one, as well as the angular separation between them, and compare these with those of the candidates. Finally, candidates satisfying the limitations are considered the appropriate correspondences. The experimental results show that in large FOV observations the proposed method performs better than several typical star identification methods from the open literature.
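The angular-separation comparisons used throughout both stages rest on two small primitives, sketched here as generic helpers (these are standard celestial-geometry formulas, not the paper's specific matching logic):

```python
import numpy as np

def radec_to_unit(ra, dec):
    """Catalog right ascension / declination (radians) to a unit vector."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def angular_separation(v1, v2):
    """Angular separation between two unit direction vectors, in radians.
    The dot product is clipped to guard against round-off outside [-1, 1]."""
    return np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))
```

Pairwise separations computed this way between sensor stars are matched against the corresponding catalog separations, with the allowed mismatch tolerance set by the expected centroiding error of the large-FOV optics.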
Towards real-time change detection in videos based on existing 3D models
Author(s):
Boitumelo Ruf;
Tobias Schuchert
Show Abstract
Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3D objects, such as buildings, may lead to parallax artifacts in case of inaccurate or missing 3D information, which may distort the results of the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detecting changes in the 3D structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which a 3D change detection can be performed against an existing 3D model. Our approach is capable of change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps by an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3D model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
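The final comparison step can be sketched as a per-pixel depth difference test; this is a minimal illustration (the tolerance `tau` and the invalid-depth convention are assumptions, and the paper's pipeline additionally handles pose and depth-estimation noise):

```python
import numpy as np

def depth_change_mask(depth_image, depth_model, tau, invalid=0.0):
    """Flag pixels where the image-based depth map disagrees with the
    depth map rendered from the existing 3D model by more than tau.

    Pixels with an invalid depth (here encoded as 0) in either map are
    ignored, since no reliable comparison is possible there.
    """
    valid = (depth_image != invalid) & (depth_model != invalid)
    diff = np.abs(depth_image - depth_model)
    return valid & (diff > tau)
```

Comparing depths rather than intensities is what makes the parallax problem disappear: a building present in both the frames and the model produces matching depths, not a false change.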
Remote sensing imagery classification using multi-objective gravitational search algorithm
Author(s):
Aizhu Zhang;
Genyun Sun;
Zhenjie Wang
Show Abstract
Simultaneous optimization of different validity measures can capture different data characteristics of remote sensing imagery (RSI) and thereby achieve high quality classification results. In this paper, two conflicting cluster validity indices, the Xie-Beni (XB) index and the fuzzy C-means (FCM) (Jm) measure, are integrated with a diversity-enhanced and memory-based multi-objective gravitational search algorithm (DMMOGSA) to present a novel multi-objective optimization based RSI classification method. In this method, the Gabor filter method is first applied to extract texture features of the RSI. Then, the texture features are merged with the spectral features to construct the spatial-spectral feature set of the RSI. Afterwards, clustering of the spectral-spatial feature set is carried out on the basis of the proposed method: cluster centers are randomly generated initially and then updated and optimized adaptively by the DMMOGSA. Accordingly, a set of non-dominated cluster centers is obtained, so that a number of classification results are produced and users can pick the most promising one according to their problem requirements. To quantitatively and qualitatively validate its effectiveness, the proposed classification method was applied to classify two aerial high-resolution remote sensing images. The obtained classification results are compared with those produced using two single cluster validity indices and two state-of-the-art multi-objective optimization algorithms. The comparison shows that the proposed method achieves more accurate RSI classification.
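The Gabor texture-feature step can be sketched as a small filter bank; the kernel form, frequencies, orientations and the mean-magnitude summary below are generic choices, not the paper's exact parameters:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma, size=21):
    """Real-valued Gabor kernel: a sinusoid windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return env * np.cos(2.0 * np.pi * freq * xr)

def gabor_texture_features(image, freqs=(0.1, 0.2), n_theta=4, sigma=3.0):
    """Mean response magnitude over a Gabor filter bank: one texture
    feature per (frequency, orientation) pair."""
    feats = []
    for f in freqs:
        for i in range(n_theta):
            k = gabor_kernel(f, np.pi * i / n_theta, sigma)
            resp = fftconvolve(image, k, mode='same')
            feats.append(np.mean(np.abs(resp)))
    return np.array(feats)
```

Computed per pixel neighbourhood instead of per image, these responses form the texture channels that are then stacked with the spectral bands into the spatial-spectral feature set.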
Water bodies extraction from high resolution satellite images using water indices and optimal threshold
Author(s):
Alya AlMaazmi
Show Abstract
Over the past years, remote sensing imagery has made Earth monitoring more effective and valuable through the development of different feature extraction algorithms. One of the significant features is water surfaces. The extraction of water features such as pools, lakes and gulfs has gained considerable attention over the past years, as water plays a critical role in survival and in planning and protecting water resources. Past efforts in water extraction from remotely sensed images mainly faced the challenge of misclassification, especially with shadows. Shadows are typical noise objects for water extraction, as they have almost identical spectral characteristics, which makes it difficult to discriminate between water and shadows in a remote sensing image, especially in urban regions such as Dubai.
Therefore, a water extraction algorithm is developed in order to extract water surfaces accurately while eliminating shadows. The detection is based on spectral information, namely water indices (WIs), and on morphological operations. Water indices are used to discriminate water surfaces from land by combining two or more indices, such as the Normalized Difference Water Index (NDWI), the Modified Normalized Difference Water Index (MNDWI), and the Normalized Saturation-value Difference Index (NSVDI), applied at an optimal threshold. The morphological operations use opening by reconstruction to discriminate between water and shadows, again at an optimal threshold. The water index and morphological operation results are then fused into a single binary image of water objects.
The algorithm and final results are compared with a ground truth image for accuracy assessment. The results were satisfactory, with an accuracy of 95% or higher and only negligible residual shadows. Moreover, the resulting image is transformed into vector features in order to create a shape file that can be used and viewed in Google Earth and GIS software.
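The index-and-threshold core of such a pipeline can be sketched with McFeeters' NDWI, which is the standard green/NIR formulation; the default threshold of 0.0 below is only a common starting point, not the optimal scene-dependent value the abstract refers to:

```python
import numpy as np

def ndwi(green, nir, eps=1e-12):
    """McFeeters' Normalized Difference Water Index:
    (Green - NIR) / (Green + NIR), with eps guarding division by zero."""
    return (green - nir) / (green + nir + eps)

def water_mask(green, nir, threshold=0.0):
    """Binary water mask by thresholding NDWI; water reflects strongly
    in green and very weakly in NIR, so it yields positive NDWI."""
    return ndwi(green, nir) > threshold
```

In the full method this raw mask would still contain shadow pixels, which is exactly what the morphological opening-by-reconstruction step and the fusion with the other indices are there to remove.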
Vegetation extraction from high-resolution satellite imagery using the Normalized Difference Vegetation Index (NDVI)
Author(s):
Meera R. AlShamsi
Show Abstract
Over the past years, there has been extensive urban development all over the UAE. Dubai is one of the cities that has experienced rapid growth in both development and population. That growth can have a negative effect on the surrounding environment; hence, there has been a necessity to protect the environment from these fast-paced changes. One of the major impacts this growth can have is on vegetation. As technology evolves, it is possible to monitor changes happening in different areas of the world using satellite imagery. The data from these images can be used to identify vegetation in different areas of an image through a process called vegetation detection. Being able to detect and monitor vegetation is very beneficial for municipal planning and management and for environment authorities. Through this, analysts can monitor vegetation growth in various areas and analyze the changes. By utilizing satellite imagery with the necessary data, different types of vegetation can be studied and analyzed, such as parks, farms, and artificial grass in sports fields. In this paper, vegetation features are detected and extracted through the SAFIY system (the Smart Application for Feature extraction and 3D modeling using high resolution satellite ImagerY) using high-resolution satellite imagery from the DubaiSat-2 and DEIMOS-2 satellites, which provide panchromatic images at 1 m resolution and spectral bands (red, green, blue and near infrared) at 4 m resolution. The SAFIY system is a joint collaboration between MBRSC and DEIMOS Space UK. It uses image-processing algorithms to extract different features (roads, water, vegetation, and buildings) and generate vector map data. The extraction of green areas (vegetation) utilizes spectral information, such as the red and near infrared bands, from the satellite images.
These detected vegetation features are extracted as vector data in the SAFIY system and can be updated and edited by end-users, such as governmental entities and municipalities.
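The NDVI computation named in the title is the standard ratio of the near-infrared and red bands; a minimal sketch (the threshold of 0.2 is a common rule of thumb, not a value from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in NIR and absorbs red light,
    so it yields high positive NDVI."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, threshold=0.2):
    """Vegetation wherever NDVI exceeds a scene-dependent threshold."""
    return ndvi(nir, red) > threshold
```

Applied to the 4 m multispectral bands, the thresholded NDVI map is the raster layer from which the vector vegetation features are then traced.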
Image navigation and registration for the geostationary lightning mapper (GLM)
Author(s):
Roel W. H. van Bezooijen;
Howard Demroff;
Gregory Burton;
Donald Chu;
Shu S. Yang
Show Abstract
The Geostationary Lightning Mappers (GLM) for the Geostationary Operational Environmental Satellite (GOES) GOES-R series will, for the first time, provide hemispherical lightning information 24 hours a day from longitudes of 75 and 137 degrees west. The first GLM of a series of four is planned for launch in November 2016. Observation of lightning patterns by GLM holds promise to improve tornado warning lead times to greater than 20 minutes while halving present false alarm rates. In addition, GLM will improve airline traffic flow management and provide climatology data allowing us to understand the Earth's evolving climate.
The paper describes the method used for translating the pixel position of a lightning event to its corresponding geodetic longitude and latitude, using the J2000 attitude of the GLM mount frame reported by the spacecraft, the position of the spacecraft, and the alignment of the GLM coordinate frame relative to its mount frame. Because the latter alignment experiences seasonal variation, it is determined daily using GLM background images collected over the previous 7 days. The process involves identification of coastlines in the background images and determination of the alignment change necessary to match the detected coastline with the coastline predicted using the GSHHS database.
Registration is achieved using a variation of the Lucas-Kanade algorithm where we added a dither and average technique to improve performance significantly. An innovative water mask technique was conceived to enable self-contained detection of clear coastline sections usable for registration. Extensive simulations using accurate visible images from GOES-13 and GOES-15 have been used to demonstrate the performance of the coastline registration method, the results of which are presented in the paper.
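The core of a Lucas-Kanade registration can be sketched as one Gauss-Newton step for a pure translation; this is the generic algorithm only, without the paper's dither-and-average refinement or the coastline masking:

```python
import numpy as np

def lk_translation(ref, img):
    """One Gauss-Newton step of translation-only Lucas-Kanade.

    Returns (dx, dy) such that img(x + dx, y + dy) approximately equals
    ref(x, y) to first order; iterate with warping for larger shifts.
    """
    img = img.astype(float)
    gy, gx = np.gradient(img)            # image gradients (rows=y, cols=x)
    err = ref.astype(float) - img        # brightness difference
    # normal equations of the linearized least-squares problem
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = np.array([np.sum(gx * err), np.sum(gy * err)])
    return np.linalg.solve(A, b)
```

Because the update is a least-squares fit over all pixels, sub-pixel shifts are recovered directly, which is what makes the method suitable for fine alignment of detected coastlines against the predicted ones.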
A particle filter for multi-target tracking in track before detect context
Author(s):
Naima Amrouche;
Ali Khenchaf;
Daoud Berkani
Show Abstract
The track-before-detect (TBD) approach can be used to track a single target in a highly noisy radar scene, because it makes use of unthresholded observations and, when implemented as a particle filter (PF), incorporates a binary target existence variable into its target state estimation process. This paper proposes a recursive PF-TBD approach to detect multiple targets at low signal-to-noise ratios (SNR). The algorithm's successful performance is demonstrated on a simulated two-target example.
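The PF-TBD recursion can be sketched in a deliberately simplified single-target, 1-D form; the random-walk dynamics, Gaussian point-target likelihood ratio, uniform birth density, and all parameter values below are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_tbd_step(particles, exist, frame,
                p_birth=0.05, p_death=0.05, sigma_q=0.5, sigma_n=1.0,
                amp=1.0):
    """One recursion of a simplified 1-D track-before-detect particle
    filter: each particle carries a position and a binary existence flag,
    and is weighted by a likelihood ratio on the unthresholded frame."""
    N = len(particles)
    # 1. existence transition (two-state birth/death Markov chain)
    u = rng.random(N)
    exist = np.where(exist, u > p_death, u < p_birth)
    # 2. motion: random walk for existing particles; newborn particles
    #    are drawn uniformly over the surveillance region
    particles = np.where(exist,
                         particles + sigma_q * rng.standard_normal(N),
                         rng.uniform(0.0, len(frame), N))
    # 3. likelihood ratio of "target of amplitude amp in this cell"
    #    vs noise only (Gaussian noise); non-existing particles get 1
    cells = np.clip(particles.astype(int), 0, len(frame) - 1)
    z = frame[cells]
    loglr = (z * amp - 0.5 * amp ** 2) / sigma_n ** 2
    w = np.where(exist, np.exp(loglr), 1.0)
    w /= w.sum()
    # 4. multinomial resampling
    idx = rng.choice(N, size=N, p=w)
    return particles[idx], exist[idx]
```

Run over successive frames, the fraction of existing particles approximates the posterior probability of target presence, and a detection is declared when it crosses a threshold; the multi-target extension of the paper augments this with per-target existence variables.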
Monitoring of surface movement in a large area of the open pit iron mines (Carajás, Brazil) based on A-DInSAR techniques using TerraSAR-X data
Author(s):
José C. Mura;
Waldir R. Paradella;
Fabio F. Gama;
Guilherme G. Silva
Show Abstract
PSI (Persistent Scatterer Interferometry) analysis of large areas is always a challenging task with regard to the removal of the atmospheric phase component. This work presents an investigation of ground deformation measurements based on a combination of DInSAR Time-Series (DTS) and PSI techniques, applied to a large area of open pit iron mines located in Carajás (Brazilian Amazon Region), aiming to detect high rates of linear and nonlinear ground deformation. These mines have presented a history of instability, and surface monitoring measurements over sectors of the mines (pit walls) have been carried out based on ground-based radar and total station (prisms). By using a priori information regarding the topographic phase error and the phase displacement model derived from DTS, temporal phase unwrapping in the PSI processing and the removal of the atmospheric phases can be performed more efficiently. A set of 33 TerraSAR-X-1 images, acquired during the period from March 2012 to April 2013, was used to perform this investigation. The DTS analysis was carried out on a stack of multi-look unwrapped interferograms using an extension of SVD to obtain the least-squares solution. The height errors and deformation rates provided by the DTS approach were subtracted from the stack of interferograms to perform the PSI analysis. This procedure improved the capability of the PSI analysis to detect high rates of deformation and increased the point density of the final results. The proposed methodology showed good results for monitoring surface displacement in a large mining area located in a rain forest environment, providing very useful information about ground movement for planning and risk control.
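The SVD-based least-squares inversion of an interferogram stack into a displacement time series (the core of the DTS step) can be illustrated for a single pixel. This is a generic SBAS-style sketch under simplifying assumptions (no atmosphere, no topographic error, piecewise-constant velocity), not the authors' extended algorithm:

```python
import numpy as np

def dts_timeseries(pairs, dates, unwrapped):
    """Invert a stack of unwrapped interferometric phases (converted to
    displacement) into a displacement time series for one pixel.
    `pairs` lists (i, j) acquisition indices with j > i; the design
    matrix sums interval velocities over the spanned intervals, and the
    SVD-based pseudo-inverse gives the minimum-norm least-squares fit."""
    dt = np.diff(dates)                       # interval lengths
    A = np.zeros((len(pairs), len(dates) - 1))
    for k, (i, j) in enumerate(pairs):
        A[k, i:j] = dt[i:j]
    v = np.linalg.pinv(A) @ unwrapped         # interval velocities via SVD
    return np.concatenate([[0.0], np.cumsum(v * dt)])
```

Applied per pixel over the multi-look stack, the recovered height errors and deformation rates would then be subtracted before the PSI analysis, as the abstract describes.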
An experimental comparison of standard stereo matching algorithms applied to cloud top height estimation from satellite IR images
Author(s):
Anna Anzalone;
Francesco Isgrò
Show Abstract
The JEM-EUSO (Japanese Experiment Module-Extreme Universe Space Observatory) telescope will measure Ultra High Energy Cosmic Ray properties by detecting the UV fluorescent light generated in the interaction between cosmic rays and the atmosphere. Cloud information is crucial for a proper interpretation of these data. The problem of recovering the cloud-top height from satellite images in the infrared has attracted attention over the last few decades as a valuable tool for atmospheric monitoring. A number of radiative methods do exist, such as the CO2 slicing and Split Window algorithms, using one or more infrared bands. A different way to tackle the problem is, when possible, to exploit the availability of multiple views and recover the cloud-top height through stereo imaging and triangulation. A crucial step in the 3D reconstruction is the process that attempts to match a characteristic point or feature selected in one image with one of those detected in the second image. In this article the performance of a group of matching algorithms, including both area-based and global techniques, has been tested. They are applied to stereo pairs of satellite IR images with the final aim of evaluating the cloud-top height. Cloudy images from SEVIRI on the geostationary Meteosat Second Generation 9 and 10 (MSG-2, MSG-3) satellites have been selected. After applying the stereo matching algorithms to the cloudy scenes, the resulting disparity maps are transformed into depth maps according to the geometry of the reference data system. As ground truth we have used the height maps provided by the database of MODIS (Moderate Resolution Imaging Spectroradiometer) on board the Terra/Aqua polar satellites, which contains images quasi-synchronous with the imaging provided by MSG.
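The disparity-to-height conversion at the end of that pipeline reduces, in the simplest co-planar two-view geometry, to the classical parallax relation h = p / (tan θ₁ − tan θ₂), where p is the ground-projected parallax and θ₁, θ₂ are the two viewing zenith angles. The sketch below assumes that idealized geometry (the operational MSG/MODIS geometry is more involved):

```python
import numpy as np

def cloud_top_height(disparity_px, pixel_size_km, vza1_deg, vza2_deg):
    """Convert a stereo disparity (pixels) between two satellite views
    into a cloud-top height (km) via the co-planar parallax relation
    h = p / (tan(vza1) - tan(vza2))."""
    p = disparity_px * pixel_size_km          # ground-projected parallax
    return p / (np.tan(np.radians(vza1_deg)) - np.tan(np.radians(vza2_deg)))
```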
Modeling the coupling effect of jitter and attitude control on TDICCD camera imaging
Author(s):
Yulun Li;
Zhen Yang;
Xiaoshan Ma;
Wei Ni
Show Abstract
Vibration has an important influence on space-borne TDICCD imaging quality. It is generally aroused by an interaction between satellite jitter and attitude control. Previous modeling of this coupling relation has mainly concentrated on accurate modal analysis, transfer paths, damping design, etc. Nevertheless, when controlling attitude, the coupling terms among the three body axes are usually ignored. This is what we study in this manuscript. First, a simplified formulation dedicated to this problem is established. Second, we use Dymola 2016 to execute the simulation model, profiting from the Modelica synchronous feature proposed in recent years. The results demonstrate that the studied effect can introduce additional oscillatory modes and slow the attitude stabilization process. In addition, once fully stabilized, there appears to be no statistical difference over time, but the effect still intensifies the motion blur by a tiny amount. We state that this effect might be worth considering in image restoration.
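The inter-axis coupling terms in question are the gyroscopic terms of Euler's rigid-body equations, dω/dt = I⁻¹(τ − ω × Iω): the cross product mixes the three body axes whenever the angular rate is not aligned with a principal axis. A minimal numerical sketch (the simulation in the paper is a full Modelica model, not this):

```python
import numpy as np

def euler_rates(omega, inertia, torque):
    """Body angular acceleration from Euler's rigid-body equations,
    d(omega)/dt = I^-1 (tau - omega x I omega). The omega x I omega
    term is the inter-axis coupling often dropped in attitude control."""
    omega = np.asarray(omega, float)
    coupling = np.cross(omega, inertia @ omega)
    return np.linalg.solve(inertia, np.asarray(torque, float) - coupling)
```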
Development of image processing method to detect noise in geostationary imagery
Author(s):
Konstantin V. Khlopenkov;
David R. Doelling
Show Abstract
The Clouds and the Earth’s Radiant Energy System (CERES) has incorporated imagery from 16 individual geostationary (GEO) satellites across five contiguous domains since March 2000. In order to derive broadband fluxes uniform across satellite platforms it is important to ensure a good quality of the input raw count data. GEO data obtained by older imagers (such as those on MTSAT-1, Meteosat-5, Meteosat-7, GMS-5, and GOES-9) are known to frequently contain various types of noise caused by transmission errors, sync errors, stray light contamination, and others. This work presents an image processing methodology designed to detect most kinds of noise and corrupt data in all bands of raw imagery from modern and historic GEO satellites. The algorithm is based on a set of different approaches to detect abnormal image patterns, including inter-line and inter-pixel differences within a scanline, correlation between scanlines, analysis of spatial variance, and a 2D Fourier analysis of the image spatial frequencies. Despite its computational complexity, the described method is highly optimized for performance to facilitate volume processing of multi-year data and runs in fully automated mode. The reliability of this noise detection technique has been assessed by human supervision for each GEO dataset obtained during selected time periods in 2005 and 2006. This assessment has demonstrated an overall detection accuracy of over 99.5% and a false alarm rate of under 0.3%. The described noise detection routine is currently used in volume processing of historical GEO imagery for subsequent production of global gridded data products and for cross-platform calibration.
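Two of the cues listed above (inter-line correlation and anomalous spatial variance) can be sketched in a compact scanline screener. Thresholds, the neighbour rule, and the function name are illustrative assumptions, not the CERES operational values:

```python
import numpy as np

def flag_bad_scanlines(img, corr_thresh=0.5, var_ratio=10.0):
    """Flag corrupt scanlines using two cues: poor correlation with
    both neighbouring lines, or spatial variance far above the image's
    median line variance."""
    img = img.astype(float)
    n = img.shape[0]
    var = img.var(axis=1)
    med_var = np.median(var)
    bad = np.zeros(n, bool)
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        # take the best neighbour so one bad line doesn't taint its neighbours
        c = max(np.corrcoef(img[i], img[j])[0, 1] for j in nbrs)
        bad[i] = (c < corr_thresh) or (var[i] > var_ratio * med_var)
    return bad
```

The full method additionally uses inter-pixel differences and a 2D Fourier screen; this sketch only shows the line-level logic.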
Data mining tools for Sentinel 1 and Sentinel 2 data exploitation
Author(s):
Daniela Espinoza Molina;
Mihai Datcu
Show Abstract
With the newly planned Sentinel missions, the availability of Earth Observation data is increasing every day, offering a larger number of applications that can be created using these data. Currently, three of the five missions have been launched and are delivering a wealth of data and imagery of the Earth's surface: for example, Sentinel-1 carries an advanced radar instrument to provide an all-weather, day-and-night supply of Earth imagery, while the second mission, Sentinel-2, carries an optical instrument payload that samples 13 spectral bands at different resolutions. Even though we can count on tools for automated loading and visual exploration of the Sentinel data, we still face the problem of extracting relevant structures from the images, finding similar patterns in a scene, exploiting the data, and creating final user applications based on these processed data. In this paper, we present our approach for processing radar and multi-spectral Sentinel data. Our approach is mainly composed of three steps: 1) the generation of a data model that explains the information contained in a Sentinel product, formed by primitive descriptors and metadata entries; 2) the storage of this model in a database system; and 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback methods.
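Steps 1 and 2 of such a pipeline (primitive descriptors plus database storage) can be sketched with a toy patch indexer. The descriptor choice (mean, std), schema, and function name are our illustrative assumptions and not the authors' data model:

```python
import sqlite3
import numpy as np

def index_patches(image, patch=32, db_path=":memory:"):
    """Cut a scene into non-overlapping patches, compute simple
    primitive descriptors, and store them in a database for later
    ML-based semantic labelling (illustrative schema only)."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS patches "
                "(row INT, col INT, mean REAL, std REAL)")
    h, w = image.shape
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            p = image[r:r + patch, c:c + patch]
            con.execute("INSERT INTO patches VALUES (?,?,?,?)",
                        (r, c, float(p.mean()), float(p.std())))
    con.commit()
    return con
```

A real system would use richer descriptors (texture, radar statistics) and product metadata, but the store-then-query pattern is the same.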
Pansharpening in coastal ecosystems using Worldview-2 imagery
Author(s):
Edurne Ibarrola-Ulzurrun;
Javier Marcello-Ruiz;
Consuelo Gonzalo-Martin
Show Abstract
Both climate change and anthropogenic pressure are producing a decline in ecosystems' natural resources. In this work, a vulnerable coastal ecosystem, the Maspalomas Natural Reserve (Canary Islands, Spain), is analyzed. The development of advanced image processing techniques, applied to new satellites with very high resolution (VHR) sensors, is essential to obtain accurate and systematic information about such natural areas. Thus, remote sensing offers a practical and cost-effective means for good environmental management, although some improvements are needed through the application of pansharpening techniques. A preliminary assessment was performed selecting classical and new algorithms that could achieve good performance with WorldView-2 imagery. Moreover, different quality indices were used in order to assess which pansharpening technique gives a better fused image. A total of 7 pansharpening algorithms were analyzed using 6 spectral and spatial quality indices. The quality assessment was implemented for the whole set of multispectral bands, and separately for the bands covered by the wavelength range of the panchromatic image and those outside of it. After an extensive evaluation, the most suitable algorithm was the Weighted Wavelet ‘à trous’ through Fractal Dimension Maps technique, which provided the best compromise between the spectral and spatial quality of the image. Finally, a Quality Map Analysis was performed in order to study the fusion in each band at the local level. In conclusion, a novel analysis has been conducted covering the evaluation of fusion methods in shallow water areas. Hence, the excellent results provided by this study have been applied to the generation of challenging thematic maps of coastal and dune protected areas.
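One widely used spectral quality index for pansharpening assessment is ERGAS, 100·(h/l)·sqrt(mean_k (RMSE_k/μ_k)²), where h/l is the PAN/MS resolution ratio. The abstract does not list its six indices, so this is only a plausible example of the kind of measure involved:

```python
import numpy as np

def ergas(fused, reference, ratio=4):
    """ERGAS index (lower is better) between a fused image and a
    reference multispectral image, bands along axis 0. `ratio` is the
    PAN/MS resolution ratio (4 for WorldView-2)."""
    rmse = np.sqrt(((fused - reference) ** 2).mean(axis=(1, 2)))  # per band
    means = reference.mean(axis=(1, 2))
    return 100.0 / ratio * np.sqrt(((rmse / means) ** 2).mean())
```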
Group sparsity based airborne wide angle SAR imaging
Author(s):
Zhonghao Wei;
Bingchen Zhang;
Hui Bi;
Yun Lin;
Yirong Wu
Show Abstract
In this paper, we develop a group sparsity based wide angle synthetic aperture radar (WASAR) imaging model and propose a novel algorithm, called backprojection based group complex approximate message passing (GCAMP-BP), to recover the anisotropic scene. Compared to the conventional backprojection based complex approximate message passing (CAMP-BP) algorithm for the recovery of isotropic scenes, the proposed method better accommodates aspect-dependent scattering behavior and can produce better imagery. Simulated and experimental results are presented to demonstrate the validity of the proposed algorithm.
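The building block that distinguishes group-sparse recovery from ordinary sparse recovery is the group soft-thresholding (proximal) step: coefficients belonging to one group, here one scatterer's responses across aspect angles, are shrunk jointly by their joint l2 norm. A minimal real-valued sketch of that step (not the full GCAMP-BP iteration):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Joint shrinkage: each group of coefficients is scaled by
    max(0, 1 - lam/||x_g||_2), so a group survives or dies together."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = x[g] * (1.0 - lam / norm)   # shrink whole group
    return out
```

Embedding this operator in an AMP-style iteration over backprojected data is, in outline, how GCAMP-type algorithms exploit aspect-dependent scattering.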
Spectral curvature correction method based on inverse distance weighted interpolation
Author(s):
Juanjuan Jing;
Jinsong Zhou;
Yacan Li;
Lei Feng
Show Abstract
Spectral curvature (the smile effect) universally exists in dispersive imaging spectrometers. Since most image processing systems assume that all spatial pixels have the same wavelength, spectral curvature destroys the response consistency of the radiation energy in the spatial dimension, so it is necessary to correct the spectral curvature based on the spectral calibration data of the imaging spectrometer. Interpolation is widely used for resampling the measured spectra at the non-offset wavelength, but it is not versatile because its accuracy varies with the spectral resolution. In this paper, we introduce the inverse distance weighted (IDW) method for spectrum resampling. First, calculate the Euclidean distance between the non-offset wavelength and the points near it; the number of points can be two, three, four, five, or as many as you define. Then use the Euclidean distances to calculate the weight values of these points. Finally, calculate the radiation at the non-offset wavelength using the weight values and the corresponding radiation measurements. The method proved effective on practical data acquired by the instrument, and it has the characteristics of versatility, simplicity, and speed.
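The three steps above map directly to a few lines of code. This is a sketch of the stated procedure with an assumed function name and an assumed inverse-distance power of 1 (the abstract does not specify the power):

```python
import numpy as np

def idw_resample(wavelengths, radiance, target_wl, k=3, power=1):
    """Resample a measured spectrum onto a non-offset target wavelength
    by inverse-distance weighting over its k nearest sample points."""
    d = np.abs(wavelengths - target_wl)       # step 1: distances
    if np.any(d == 0):                        # exact hit: no interpolation
        return radiance[np.argmin(d)]
    idx = np.argsort(d)[:k]                   # k nearest wavelengths
    w = 1.0 / d[idx] ** power                 # step 2: inverse-distance weights
    return np.sum(w * radiance[idx]) / np.sum(w)  # step 3: weighted radiance
```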
Gravitational self-organizing map-based seismic image classification with an adaptive spectral-textural descriptor
Author(s):
Yanling Hao;
Genyun Sun
Show Abstract
Seismic image classification is of vital importance for extracting damage information and evaluating disaster losses. With the increasing availability of high resolution remote sensing images, automatic image classification offers a unique opportunity to accommodate rapid damage mapping requirements. However, the diversity of disaster types and the lack of uniform statistical characteristics in seismic images increase the complexity of automated image classification. This paper presents a novel automatic seismic image classification approach that integrates an adaptive spectral-textural descriptor into a gravitational self-organizing map (gSOM). In this approach, the seismic image is first segmented into several objects based on the mean shift (MS) method. These objects are then characterized explicitly by spectral and textural feature quantization histograms. To make the delineation of image objects adapt to various disaster types, an adaptive spectral-textural descriptor is developed by integrating the histograms automatically. Subsequently, these objects, as classification units, are represented by neurons in a self-organizing map and clustered by adjacency gravitation. By moving the neurons around the gravitational space and merging them according to the gravitation, the object-based gSOM is able to find clusters of arbitrary shape and determine the class number automatically. Taking advantage of the diversity of gSOM results, a consensus function is then applied to discover the most suitable classification result. To confirm the validity of the presented approach, three aerial seismic images of Wenchuan covering several disaster types are utilized. The quantitative and qualitative experimental results demonstrate the feasibility and accuracy of the proposed seismic image classification method.
Downscaling soil moisture using multisource data in China
Author(s):
Ru An;
Hui-Lin Wang;
Jia-jun You;
Ying Wang;
Xiao-ji Shen;
Wei Gao;
Yi-nan Wang;
Yu Zhang;
Zhe Wang;
Jonathan Arthur Quaye-Ballard;
Yuehong Chen
Show Abstract
Soil moisture plays an important role in the water cycle within the surface ecosystem and is the basic condition for the growth and development of plants. Currently, the spatial resolution of most soil moisture data from remote sensing ranges from ten to several tens of kilometres, whilst the resolutions required for in situ observation and for simulations in watershed hydrology, ecology, agriculture, weather and drought research are generally finer than 1 kilometre. Therefore, the existing coarse-resolution remotely sensed soil moisture data need to be downscaled. In this paper, a universal soil moisture downscaling model based on stepwise regression with a moving window, suitable for large areas and multiple time periods, has been established. The datasets comprise land surface temperature, brightness temperature, precipitation, soil and topographic parameters from high resolution data, and active/passive microwave remotely sensed soil moisture data from the Essential Climate Variables product (ECV_SM) with 25 km spatial resolution. With this model, a total of 288 soil moisture maps at 1 km resolution, from the first ten-day period of January 2003 to the last ten-day period of December 2010, were derived. The in situ observations were used to validate the downscaled ECV_SM for different land cover and land use types and seasons. In addition, a comparative error analysis was carried out between the downscaled ECV_SM and the original product. In general, the downscaled soil moisture for different land cover and land use types is consistent with the in situ observations. The accuracy is relatively high in autumn and winter. The validation results show that the downscaled soil moisture is improved not only in spatial resolution but also in estimation accuracy.
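The regression-downscaling principle (fit coarse soil moisture against coarse predictors, then apply the fitted coefficients to fine-resolution predictors) can be sketched with a single global linear fit. The paper uses stepwise regression inside a moving window; this simplified sketch with assumed names only illustrates the idea:

```python
import numpy as np

def regression_downscale(coarse_sm, coarse_preds, fine_preds):
    """Fit coarse-scale soil moisture as a linear function of coarse
    predictors (e.g. LST, NDVI, precipitation), then evaluate the model
    at the fine-resolution predictor values."""
    X = np.column_stack([np.ones(len(coarse_sm))] + list(coarse_preds))
    beta, *_ = np.linalg.lstsq(X, coarse_sm, rcond=None)
    Xf = np.column_stack([np.ones(len(fine_preds[0]))] + list(fine_preds))
    return Xf @ beta
```

In the full method, the fit is repeated per moving window with stepwise predictor selection, so the coefficients vary smoothly in space.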
Prediction of object detection, recognition, and identification [DRI] ranges at color scene images based on quantifying human color contrast perception
Author(s):
Ephi Pinsky;
Ilia Levin;
Ofer Yaron
Show Abstract
We propose a novel approach to predict, for a specified color imaging system and for objects with known characteristics, their detection, recognition, and identification (DRI) ranges in a colored dynamic scene, based on quantifying human color contrast perception.
The method refers to the well-established L*a*b* 3D color space. The nonlinear relations of this space are intended to mimic the nonlinear response of the human eye. The metric of the L*a*b* color space is such that the Euclidean distance between any two colors in this space is approximately proportional to the color contrast as perceived by the human eye and brain. A consequence of this metric is that the color contrast of any two points is always greater than (or equal to) their equivalent gray-scale contrast. This matches our intuition that, when looking at a colored image, its contrast is superior to the gray-scale contrast of the same image. Yet, color loss by scattering at very long ranges should be considered as well.
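The distance in question is the classical CIE76 color difference, and the "greater or equal" property follows directly from the metric: the full 3D distance can never be smaller than its L* (luminance) component alone. A minimal sketch:

```python
import numpy as np

def delta_e_lab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space,
    used here as the perceived contrast between two colors."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def gray_equivalent_contrast(lab1, lab2):
    """Luminance-only (gray-scale) contrast is |dL*|; by the metric the
    full color distance is always >= this value."""
    return abs(lab1[0] - lab2[0])
```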
The color contrast, derived from the distance between the colored object pixels and the nearby colored background pixels according to the L*a*b* color space metric, is expressed in terms of gray-scale contrast. This contrast replaces the original standard gray-scale contrast component of the image. As expected, the resulting DRI ranges are, in most cases, larger than those predicted from the standard gray-scale image. Upon further elaboration and validation of this method, it may be combined with future versions of the well-accepted TRM codes for DRI predictions.
Consistent prediction of DRI ranges implies a careful evaluation of the object and background color contrast reduction along the range. Clearly, additional processing for reconstructing the objects and background true colors and hence the color contrast along the range, will further increase the DRI ranges.
Region of interest extraction based on saliency detection and contrast analysis for remote sensing images
Author(s):
Jing Lv;
Libao Zhang;
Shuang Wang
Show Abstract
Region of interest (ROI) extraction is an important component of remote sensing image processing and is useful for further practical applications such as image compression, image fusion, image segmentation and image registration. Traditional ROI extraction methods are usually based on prior knowledge and depend on a global search, which is time-consuming and computationally complex. Saliency detection, which has been widely used for ROI extraction from natural scene images in recent years, can effectively solve the problem of high computational complexity in ROI extraction for remote sensing images while retaining accuracy. In this paper, a new computational model is proposed to improve the accuracy of ROI extraction in remote sensing images. Considering the characteristics of remote sensing images, we first use a lifting wavelet transform based on adaptive direction evaluation (ADE) to obtain a multi-scale orientation contrast feature map (MF). Secondly, the color features are exploited using information content analysis to provide a color information map (CIM). Thirdly, feature fusion is used to integrate the multi-scale orientation contrast features and the color information, generating a saliency map. Finally, an adaptive threshold segmentation algorithm is employed to obtain the ROI. Compared with existing models, our method can not only effectively extract the detail of the ROIs, but also effectively remove false detections within the inner parts of the ROIs.
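The final step, adaptive threshold segmentation of the saliency map, is often realized with a data-driven threshold such as mean + k·std. The paper does not specify its rule, so the following is a common stand-in rather than the authors' algorithm:

```python
import numpy as np

def extract_roi_mask(saliency, k=1.0):
    """Binarize a saliency map with an adaptive threshold derived from
    the map's own statistics: keep pixels above mean + k*std."""
    t = saliency.mean() + k * saliency.std()
    return saliency > t
```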
Variable size small targets detection using density-based clustering combined with backtracking strategy
Author(s):
Haiying Zhang;
Yonggui Lin;
Fangxiong Xiao
Show Abstract
The sequential detection of infrared small targets in heavy clutter is a challenging problem in active vision. Across different imaging environments, the size and gray intensity of a target keep changing, which leads to unstable detection. Focusing on mining more robust features of small targets and following the sequential detection framework, we propose a novel research scheme based on density-based clustering and a backtracking strategy in this paper. First, points of interest are extracted by the speeded-up robust features (SURF) detector for its good performance in extracting features invariant to uniform scaling, orientation and illumination changes. Second, owing to the local aggregation property of target trajectories in space, a newly proposed density-based clustering method is introduced to segment the target trajectory, so that the target detection problem is transformed into the extraction of the target trajectory. Two factors (percent and a second granularity parameter) are exploited to help decide the clustering granularity, so as to keep the trace as integral and independent as possible. Then, the backtracking strategy with a pruning function is adopted to search for the target trajectory, on the basis of the consistency and continuity of the short-time target trajectory in the temporal-spatial domain. Extensive experiments show the validity of our method. Compared with data association methods executed on the huge candidate trajectory space, the computation time is reduced markedly. Additionally, the feature detection is more stable owing to the use of SURF, and the false alarm suppression rate is superior to most baseline and state-of-the-art methods.
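The trajectory-segmentation idea, grouping candidate detections by local density so that a target's temporally aggregated hits form a cluster while scattered clutter remains noise, can be illustrated with a minimal DBSCAN-style routine. This is a generic sketch of density-based clustering, not the paper's specific method:

```python
import numpy as np

def dbscan(points, eps=2.0, min_pts=3):
    """Minimal density-based clustering of candidate detections (e.g.
    (x, y) or (x, y, t) points). Returns one integer label per point,
    with -1 marking noise."""
    pts = np.asarray(points, float)
    n = len(pts)
    labels = np.full(n, -1)
    visited = np.zeros(n, bool)
    dists = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    cid = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = list(np.flatnonzero(dists[i] <= eps))
        if len(seeds) < min_pts:
            continue                      # noise (may be claimed later)
        labels[i] = cid
        queue = seeds
        while queue:                      # expand the cluster
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid
            if not visited[j]:
                visited[j] = True
                nbrs = np.flatnonzero(dists[j] <= eps)
                if len(nbrs) >= min_pts:  # core point: keep expanding
                    queue.extend(nbrs)
        cid += 1
    return labels
```

Clusters found this way would then feed the backtracking search, which checks temporal-spatial consistency along each candidate trajectory.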