Proceedings Volume 5573

Image and Signal Processing for Remote Sensing X


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 10 November 2004
Contents: 11 Sessions, 49 Papers, 0 Presentations
Conference: Remote Sensing 2004
Volume Number: 5573

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Analysis of High Geometrical Resolution Images
  • Multiresolution Fusion and Super-Resolution
  • Image Analysis
  • Classification and Clustering
  • Hyperspectral Image Analysis I
  • Hyperspectral Image Analysis II
  • Target Tracking, Data Compression, and Watermarking
  • SAR and GPR
  • Poster Session
  • Image Registration and Data Interpolation
  • Change Detection and Multitemporal Data Analysis
Analysis of High Geometrical Resolution Images
Classification of high spatial resolution images by means of a Gabor wavelet decomposition and a support vector machine
Andrea Baraldi, Lorenzo Bruzzone
Very high spatial resolution satellite images, acquired by third-generation commercial remote sensing (RS) satellites (such as Ikonos and QuickBird), are characterized by tremendous spatial complexity, i.e., surface objects are described by a combination of spectral, textural, and shape information. Context-sensitive data mapping systems, e.g., those employing filter sets designed for texture feature analysis/synthesis, are potentially capable of dealing with the spatial complexity of such images and have been extensively studied in the pattern recognition literature in recent years. In this work, four implementations of a two-stage classification scheme for the analysis of high spatial resolution images are compared. Competing first-stage (feature extraction) implementations, in order of increasing complexity, are: 1) a standard multi-scale dyadic Gaussian pyramid image decomposition, and 2) an original almost complete (near-orthogonal) basis for the Gabor wavelet transform of an input image at selected spatial frequencies (i.e., band-pass filter central frequency and filter orientation pairs). The second stage of the classification scheme consists of: a) an ensemble of pixel-based two-class support vector machines (SVMs) applied to the multi-class classification problem according to the one-against-one strategy, exploiting the well-known capability of SVMs to deal with high-dimensional mapping problems; and b) a traditional two-phase supervised-learning pixel-based radial basis function (RBF) network. In a badly posed Ikonos image classification experiment, the SVM combined with the two filter sets provides an interesting compromise among ease of use (i.e., simple free-parameter selection), classification accuracy, robustness to changes in surface properties, and the capability of detecting genuine but small image details as well as linear structures.
Qualitatively and quantitatively, the multi-scale multi-orientation almost complete Gabor wavelet transform appears superior to the dyadic multi-scale Gaussian pyramid image decomposition, in line with theoretical expectations. Further experiments confirm that the novel implementation of a sample-based SVM classifier combined with the multi-scale Gabor wavelet transform provides a viable strategy for dealing with the spatial complexity of high spatial resolution RS image mapping problems.
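The feature-extraction stage described above can be illustrated with a toy Gabor filter bank; the kernel form, window size, and frequencies below are generic illustrative choices, not the paper's near-orthogonal basis:

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Real part of a Gabor kernel: a Gaussian envelope modulating a cosine
    carrier at spatial frequency `freq`, oriented at angle `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

def gabor_bank(freqs, thetas, size=9, sigma=2.0):
    """One kernel per (frequency, orientation) pair; convolving an image with
    each kernel yields one texture-feature map per pair."""
    return [gabor_kernel(size, f, t, sigma) for f in freqs for t in thetas]
```

A kernel tuned to a grating's frequency and orientation responds far more strongly than one rotated by 90 degrees, which is what makes the responses usable as texture features for the per-pixel SVM ensemble.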
Multiresolution Fusion and Super-Resolution
Pansharp vs. wavelet vs. PCA fusion technique for use with Landsat ETM panchromatic and multispectral data
In this study we compare the efficiency of the PCA, Pansharp, and wavelet fusion techniques for the fusion of Landsat ETM data. A cloud-free Landsat 7 ETM subscene was used in this comparative study. The nearest neighbor method was used for resampling, and the fused images have a 15 m pixel size. For each merged image we examined: a) the qualitative optical result, using an ASTER VNIR image with 15 m resolution for comparison, and b) the statistical parameters of the histograms of the various frequency bands, especially the standard deviation. All the fusion techniques improve the resolution and the optical result. The PCA merging technique seems better at discriminating between the coastal zone, the urban area, and the rural area, and maintains the natural colors, but corrupts the statistical parameters. The Pansharp and wavelet merging techniques give the best results while leaving the statistical parameters of the original images essentially unchanged.
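The PCA merging step compared above (substituting the panchromatic band for the first principal component) can be sketched as follows; the function name and the simple mean/std histogram match are our own illustrative choices, not the authors' exact procedure:

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """PCA fusion sketch: project the multispectral (MS) bands onto their
    principal components, swap the first component for the histogram-matched
    panchromatic band, and back-project.
    ms: (bands, H, W); pan: (H, W) resampled to the MS grid."""
    b, h, w = ms.shape
    X = ms.reshape(b, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    vecs = vecs[:, ::-1]                  # reorder: largest component first
    pcs = vecs.T @ Xc                     # principal-component images
    p = pan.reshape(-1).astype(float)
    # match the pan band to PC1's mean/std before substitution
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p
    fused = vecs @ pcs + mean             # back-project to band space
    return fused.reshape(b, h, w)
```

Because the back-projection inverts the forward projection exactly, substituting PC1 with itself recovers the original bands, which is a convenient sanity check on the implementation.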
Multitemporal and multiresolution fusion of wide field of view and high spatial resolution images through morphological pyramid
To study vegetation from space, images with both high spatial resolution and high temporal frequency are needed. However, for technological reasons, no single satellite sensor can provide such images. Merging several kinds of images coming from several sensors makes it possible to overcome this problem. In this article, we propose a fusion method based on pyramid algorithms and on morphological filtering to create synthetic images having both high spatial resolution and high temporal frequency. The process is validated against high-resolution reference images. The study uses the ADAM database, kindly provided by the Centre National d'Études Spatiales (CNES, the French space agency).
Information measure for assessing pixel-level fusion methods
An objective measure for evaluating the performance of pixel-level fusion methods is introduced in this work. The proposed measure employs mutual information and conditional mutual information in order to assess and represent the amount of information transferred from the source images to the final fused greyscale image. Accordingly, the common information contained in the source images is considered only once in the formation of the final image. The measure can be used regardless of the number of source images or of assumptions about the intensity values, and there is no need for an ideal or test image. The experimental results demonstrate the usefulness of the proposed measure.
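A basic mutual-information estimate of the kind underlying such a measure can be computed from a joint grey-level histogram; this is a generic sketch, not the authors' full conditional-information formulation:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information (in bits) between two images, estimated from
    their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

An image shares maximal information with itself (its own entropy) and little with an independent image, so `MI(fused, source)` quantifies how much of each source survived in the fused result.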
Passive millimeter-wave imaging with superresolution
Yury A. Pirogov, Valeriy V. Gladun M.D., Dmitriy A. Tischenko M.D., et al.
In this paper we present an overview of problems and difficulties that are common in passive millimeter-wave imaging, or radiovision. The central part of the article is dedicated to mathematical resolution enhancement methods: superresolution. We consider several algorithms and discuss their benefits and drawbacks. The performance of the algorithms is demonstrated on a set of test images, both simulated and real. Advanced topics such as subpixel techniques and artifact suppression using the wavelet transform are also covered. Illustrations are included to demonstrate the significant improvement in resolution, along with the artifact suppression, achieved on real observed images. The influence of the side lobes of the point spread function (PSF) on image quality is also considered.
Image Analysis
Survey and assessment of new trends in image processing for Earth observation
Arthur E. C. Pece, Peter Johansen, Michael Schultz Rasmussen, et al.
As more and more Earth Observation (EO) data becomes available, the need to automate at least some aspects of data processing is apparent. The SURF project was funded by the European Space Agency (ESA) to provide a survey of image-processing methods for EO and an in-depth analysis and prototyping of some of the most promising methods. The survey has included (1) a list of application areas within EO; (2) the development of criteria for the evaluation of methods; (3) a classification of image processing tasks within EO, independent of the applications; (4) single-page descriptions of a wide range of methods. Based on this background work, a dozen methods were selected for further analysis and considered for prototyping. The next stage of the project consists in prototyping four of the methods subjected to in-depth analysis. This paper presents the results of the survey and a brief review of the methods selected for prototyping.
Evaluation of thresholding techniques applied to oceanographic remote sensing imagery
In many image processing applications, the gray levels of pixels belonging to the object are quite different from those belonging to the background. Thresholding then becomes a simple but effective tool to separate objects from the background. This segmentation tool is used in many research and operational applications, so attempts to automate thresholding have been a long-standing area of interest. However, several difficulties prevent the desired results from being achieved in all situations, so for any specific problem the different techniques have to be tested in order to select those providing the best performance. In this paper we have conducted a survey of image thresholding methods with a view to assessing their performance when applied to remote sensing images, especially in oceanographic applications. The algorithms have been categorized into two groups, local and global thresholding techniques, with the global ones further classified according to the information they exploit. This classification has led to histogram shape-based methods, clustering-based methods, entropy-based methods, object attribute-based methods, and spatial methods. After the application of a total of 36 techniques to visible, IR, and microwave (synthetic aperture radar) remote sensing images, the optimum methods for each have been selected.
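As one representative of the clustering-based global methods in such surveys, Otsu's threshold can be sketched in a few lines (a generic implementation, not tied to any particular method evaluated in the paper):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's global threshold: pick the grey level that maximizes the
    between-class variance of the histogram."""
    hist, edges = np.histogram(np.asarray(image).ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # background class weight
    mu = np.cumsum(p * centers)        # cumulative first moment
    mu_t = mu[-1]                      # global mean
    w1 = 1.0 - w0                      # foreground class weight
    valid = (w0 > 1e-12) & (w1 > 1e-12)
    sigma_b = np.zeros(bins)           # between-class variance per cut
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid])**2 / (w0[valid] * w1[valid])
    return float(centers[np.argmax(sigma_b)])
```

On a clearly bimodal histogram the selected threshold falls in the valley between the two modes, which is the behavior the clustering-based family relies on.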
Design of a new sharpening filter
Georgios Aim. Skianis, Dimitrios A. Vaiopoulos, Konstantinos G. Nikolakopoulos
In the present paper a new sharpening filter is designed, whose response is controlled by two characteristic positive parameters, b and k. Using the Fourier transform, the effect of this filter on two-dimensional signals of radial symmetry is studied. It is observed that certain deformations may be produced if the signal has the shape of a rapidly changing pulse; these deformations are more obvious for small b values. If the signal is smooth (Gaussian type), no deformations are observed. Convolution masks are then constructed in the image domain that simulate the filter response in the frequency domain. The masks are applied to a satellite image and their effect is compared to that of other frequently used sharpening filters. It is observed that for proper b and k values the proposed filter produces images with enhanced tonality contrast and no high-frequency noise. The general conclusion is that the proposed filter is successful in enhancing tonality differences between adjacent pixels. For different values of the parameters k and b, different tonality contrasts may be obtained. The potential user is encouraged to try various k and b values in order to obtain the optimum result for the area under study.
Analysis of remotely sensed imagery using the level-crossing statistics texture descriptor
Carlos Santamaria, Miroslaw Bober, Wieslaw Szajnowski, et al.
In this paper, we present a novel approach for the extraction of the Level-crossing Statistics (LCS) texture descriptor and the application of this descriptor to the processing of remote sensing data. The LCS is a recently presented statistical texture descriptor that first maps the images into 1D signals using space-filling curves, then applies a signal-dependent sampling, and finally extracts texture parameters (such as crossing rate, crossing slope, and sojourn time) from the 1D signal. In the new extraction approach introduced in this paper, a pyramidal decomposition is employed to extract texture features of different spatial resolution. Despite the simplicity of the texture features used, our approach offers state-of-the-art performance in texture classification and texture segmentation tasks, outperforming the other tested algorithms. In the remote sensing field, the LCS descriptor has been tested in segmentation and classification scenarios. A land-use/land-cover analysis system has been designed, and the new texture descriptor has shown very good results in the supervised segmentation of satellite images, even when very few training samples are provided to the system.
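The core idea — mapping the image to 1-D and extracting crossing statistics — can be illustrated as follows; the simple boustrophedon scan below stands in for a true space-filling curve, and the statistics are simplified relative to the full LCS descriptor:

```python
def zigzag_scan(image):
    """Map a 2-D image (list of rows) to 1-D, reversing every other row:
    a simple locality-preserving scan standing in for a space-filling curve."""
    out = []
    for i, row in enumerate(image):
        out.extend(row[::-1] if i % 2 else row)
    return out

def crossing_stats(signal, level):
    """Number of level crossings and mean sojourn time (run length) above
    `level` for a 1-D signal."""
    above = [s > level for s in signal]
    crossings = sum(1 for i in range(1, len(above)) if above[i] != above[i - 1])
    runs, run = [], 0
    for a in above:
        if a:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    mean_sojourn = sum(runs) / len(runs) if runs else 0.0
    return crossings, mean_sojourn
```

Textures with fine detail cross a given level often with short sojourns, while smooth regions cross rarely with long sojourns, which is what makes these cheap statistics discriminative.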
Classification and Clustering
Automatic partially supervised classification of multitemporal remotely sensed images
Gabriele Moser, Sebastiano Bruno Serpico, Michaela De Martino, et al.
The use of remotely sensed imagery for environmental monitoring naturally involves operating with multitemporal images of the geographical area of interest. In order to generate thematic maps for all acquisition dates, an unsupervised classification algorithm is not effective, due to the lack of knowledge about the thematic classes. On the other hand, a detailed analysis of all the land-cover transitions is naturally accomplished in a completely supervised context, but the ground-data requirement involved by this approach is not realistic in the case of short revisit times. An interesting trade-off is represented by the partially supervised approach, which exploits ground truth only for a subset of the acquisition dates. In this context, a multitemporal classification scheme was previously proposed by the authors that deals with a pair of images of the same area, assuming ground truth to be available only at the first date. In the present paper, several modifications of this system are proposed in order to automate it and to improve its detection performance. Specifically, a preprocessing algorithm is developed that addresses the problem of mismatches in the dynamics of images acquired at different times over the same area, both by automatically correcting strong differences in dynamics and by detecting cloud areas. In addition, the clustering procedures integrated in the system are fully automated by optimizing the selection of the number of clusters according to Bayesian estimates of the probability of correct classification. Experimental results on multitemporal Landsat-5 TM and ERS-1 SAR data are presented.
Partially supervised hierarchical clustering of SAR and multispectral imagery for urban areas monitoring
In some key operational domains, users are not especially interested in obtaining an exhaustive map of all the thematic classes present in an area of interest, but rather in accurately identifying a single class of interest. In this paper, we present a novel partially supervised classification technique that addresses this practically and methodologically interesting problem. We adopt a two-stage classification scheme based on an unsupervised approach, which allows us to introduce supervised information about the class of interest without additional sample labeling. The first stage of the process consists of an initial clustering of the image using the Self-Organizing Map algorithm. The second stage consists of a partially supervised hierarchical merging of clusters. We modify the employed similarity criterion by introducing fuzzy membership functions that make use of the supervised information. The method is tested on urban monitoring, where the objective is to produce an automatic 'Urban/Non-Urban' classification using optical and radar data (Landsat TM and 35-day interferometric pairs of ERS2 SAR). We compare the classification accuracy of the proposed method to that of its parametric version, which uses the Expectation-Maximization algorithm. The good performance confirms the validity of the proposed approach: 90% classification accuracy using supervised information only in the coherence map.
Hyperspectral Image Analysis I
Robust automatic clustering of hyperspectral imagery using non-Gaussian mixtures
Michael D. Farrell Jr., Russell M. Mersereau
This paper addresses the utility of robust automatic clustering of hyperspectral image data. Such clustering is possible only when the background in a scene is accurately modeled. Mixtures of non-Gaussian densities have been discussed recently, and here we move further down this path. We derive a t mixture model for the background in hyperspectral images, using two techniques for estimating parameters based on the Expectation-Maximization algorithm. Visual and statistical evaluations of these techniques are made with AVIRIS data. Dealing with the data's inhomogeneity by developing proper models of the background (i.e., clutter) in a hyperspectral image is important in target detection applications, especially for accurate performance prediction and detector analysis.
A neural adaptive model for hyperspectral data classification under minimal training conditions
Elisabetta Binaghi, Ignazio Gallo, Mirco Boschetti, et al.
Hyperspectral imaging is becoming an important analytical tool for generating land-use maps. On the one hand, the high dimensionality of hyperspectral remote sensing data provides more potential discriminative power for classification tasks. On the other hand, classification performance improves only up to a point as additional features are added, and then deteriorates due to the limited number of training samples. Proceeding from these considerations, the present work aims to systematically evaluate the robustness of novel classification techniques in classifying hyperspectral data under the twofold condition of high dimensionality and minimal training. We consider a neural adaptive model based on the multilayer perceptron (MLP). Accuracy has been evaluated experimentally by classifying MIVIS hyperspectral data to identify different vegetation types in Ticino Regional Park. A performance analysis has been conducted comparing the novel approach with support vector machines and with conventional statistical and neural techniques. The adaptive model shows advantages especially when mixed data are presented to the classifiers in combination with minimal training conditions.
Real-time software compression and classification of hyperspectral images
Giovanni Motta, Francesco Rizzo, James A. Storer, et al.
Recent years have seen a growing interest in the compression of hyperspectral imagery. In a scenario anticipated by NOAA for the next generation of GOES satellites, the remote acquisition platform should be able to acquire, compress, and broadcast processed data to final users, all in real time and with limited interaction with a ground station. Here we show how LPVQ, a vector quantization algorithm previously introduced by the authors, may fit this paradigm when its arithmetic encoder is replaced with the CCSDS lossless data compressor. Besides competitive compression, this algorithm has several other interesting properties: it can easily be implemented in parallel, a number of entropy coding schemes can be used to achieve different complexity/performance tradeoffs, and the compressed stream can be used directly to perform nearest-neighbor pixel search without full decompression.
Hyperspectral Image Analysis II
Anomalies detection in hyperspectral imagery using projection pursuit algorithm
Veronique Achard, Anthony Landrevie, Jean Claude Fort
Hyperspectral imagery provides detailed spectral information on the observed scene, which enhances detection possibilities, in particular for subpixel targets. In this context, we have developed and compared several anomaly detection algorithms based on a projection pursuit approach. The projection pursuit is performed either on the PCA or on the MNF (Minimum Noise Fraction) components. Depending on the method, the best axes of the eigenvector basis are selected directly, or a genetic algorithm is used to optimize the projections. Two projection indices (PIs) have been tested: the kurtosis and the skewness. These approaches have been tested on AVIRIS and HyMap hyperspectral images, in which subpixel targets have been included by simulation. The proportion of target within a pixel varies from 50% to 10% of the surface. The results are presented and discussed. The performance of our detection algorithm is very satisfactory for target fractions down to 10% of a pixel.
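A minimal projection pursuit loop with a kurtosis index might look like the following; the random search here is a crude stand-in for the genetic optimization, and all names and parameters are illustrative:

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis of a 1-D sample; zero in expectation for a Gaussian,
    large when a few anomalous values fatten the tails."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    return float((z**4).mean() - 3.0)

def best_projection(pixels, n_trials=200, seed=0):
    """Random-search projection pursuit sketch: among random unit vectors,
    keep the one whose 1-D projection of the pixel spectra is most
    non-Gaussian (largest |excess kurtosis|)."""
    rng = np.random.default_rng(seed)
    best, best_idx = None, -np.inf
    for _ in range(n_trials):
        v = rng.normal(size=pixels.shape[1])
        v /= np.linalg.norm(v)
        idx = abs(kurtosis(pixels @ v))
        if idx > best_idx:
            best, best_idx = v, idx
    return best, best_idx
```

Projections that isolate a few anomalous pixels produce strongly non-Gaussian 1-D distributions, so maximizing the index steers the search toward directions that reveal anomalies.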
Physical subspace models for invariant material identification: subspace composition and detection performance
We study material identification in a forest scene under strongly varying illumination conditions, ranging from open sunlit conditions to shaded conditions between dense tree-lines. The algorithm used is a physical subspace model, where the pixel spectrum is modelled by a subspace of physically predicted radiance spectra. We show that a pure sunlight and skylight model is not sufficient to detect shaded targets. However, by expanding the model to also represent reflected light from the surrounding vegetation, the performance of the algorithm is improved significantly. We also show that a model based on a standardized set of simulated conditions gives results equivalent to those obtained from a model based on measured ground truth spectra. Detection performance is characterized as a function of subspace dimensionality, and we find an optimum at around four dimensions. This result is consistent with what is expected from the signal-to-noise ratio in the data set. The imagery used was recorded using a new hyperspectral sensor, the Airborne Spectral Imager (ASI). The present data were obtained using the visible and near-infrared module of ASI, covering the 0.4-1.0 μm region with 160 bands. The spatial resolution is about 0.2 mrad so that the studied targets are resolved into pure pixels.
Statistical detection algorithms in fat-tailed hyperspectral background clutter
This paper explores three related themes: the statistical nature of hyperspectral background clutter; why it should be like this; and how to exploit it in algorithms. We begin by reviewing the evidence for the non-Gaussian, and in particular fat-tailed, nature of hyperspectral background distributions. Following this we develop a simple statistical model that gives some insight into why the observed fat tails occur. We demonstrate that this model fits the background data for some hyperspectral data sets. Finally, we make use of the model to develop hyperspectral detection algorithms and compare them to traditional algorithms on some real-world data sets.
Target Tracking, Data Compression, and Watermarking
IMM techniques for dual-band infrared target tracking
This paper describes an application of the IMM (Interacting Multiple Model) technique in a multiple-target tracking system for an IRST (Infrared Search and Track) system operating in the mid-wave and long-wave infrared bands. The use of the two IR bands allows better performance in terms of detection probability, fewer false tracks, and shorter track initiation times. To properly merge data from the two sensors, an enhancement of the PDA (Probabilistic Data Association) technique is introduced into the process. The approach has been shown to operate properly with a very high number of possible targets in the two IR bands. Good results have also been obtained in the case of clustered detections, as well as with uniformly spatially distributed detections.
Lossless compression of hyperspectral imagery: a real-time approach
Francesco Rizzo, Giovanni Motta, Bruno Carpentieri, et al.
We present an algorithm for hyperspectral image compression that uses linear prediction in the spectral domain. In particular, we use a least-squares-optimized linear prediction method with spatial and spectral support. The performance of the predictor is competitive with the state of the art, even when the size of the prediction context is kept to a minimum; the proposed method is therefore suitable for on-board spacecraft implementation, where limited hardware and low power consumption are key requirements. With one-band look-ahead capability, the overall compression of the proposed algorithm improves significantly with marginal use of additional memory. Experiments on data cubes acquired by NASA JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are presented. In the second part of the paper, we review some ongoing research that aims at coupling linear prediction with polynomial fitting, exponential fitting, or interpolation. Current simulations show that further improvement is possible. Furthermore, the two-tier prediction allows progressive encoding and decoding. This research is promising, but still at a preliminary stage.
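The spectral linear prediction step can be sketched as a per-band least-squares fit; this toy version uses only spectral context, whereas the paper's predictor also has spatial support:

```python
import numpy as np

def predict_band(cube, b, n_prev=2):
    """Least-squares linear prediction of band b of a hyperspectral cube
    from the previous n_prev bands (affine, purely spectral context).
    Returns the predicted band and the residual to be entropy-coded."""
    ctx = cube[b - n_prev:b].reshape(n_prev, -1).T       # (pixels, n_prev)
    target = cube[b].reshape(-1)
    A = np.hstack([ctx, np.ones((ctx.shape[0], 1))])     # add intercept
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)    # fit predictor
    pred = A @ coef
    resid = target - pred                                # small-entropy residual
    return pred.reshape(cube[b].shape), resid.reshape(cube[b].shape)
```

When adjacent bands are strongly correlated, the residual has far lower entropy than the raw band, which is what makes lossless spectral prediction pay off.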
Evaluation of 1D, 2D, and 3D SPIHT coding technique for remote sensing
Joan Serra-Sagrista, Jorge Gonzalez-Conejero, Pere Guitart-Colom, et al.
The Set Partitioning in Hierarchical Trees (SPIHT) algorithm is a well-known lossy-to-lossless, high-performance, embedded bitplane image coding algorithm that uses scalar quantization and zero-trees of transformed two-dimensional (2-D) images, and bases its performance on the redundancy of the significance of the coefficients in these subband hierarchical trees. In this paper, we evaluate the possibility of replacing the 2-D process with a 1-D adaptation of SPIHT, which may be performed independently on each line, followed by a post-compression process to construct the embedded bitstream for the image. Several strategies to construct this bitstream, based on both a bitplane ordering and a precise rate-distortion computation, are suggested. The computational requirements of these methods are significantly lower than those of SPIHT. Comparative results on remote sensing volumetric data show the difficulty of reducing the distortion gap with SPIHT by means of a post-compression step. Especially remarkable are the marginal differences that the optimal rate-distortion strategies achieve when compared to simple strategies such as a sequential bitplane ordering of the bitstream.
SAR and GPR
High-resolution vegetation index as measured by radar and its validation with spectrometer
Changes in vegetation can affect our health, the environment, and the economy. Understanding this, scientists began twenty years ago to use satellite remote sensors to monitor major fluctuations in vegetation and to understand how they affect the environment. The pixel accuracy of some Synthetic Aperture Radar (SAR) satellites is now at near one-meter resolution. A new formulation of a vegetation index using such active sensors would greatly improve the accuracy of vegetation health assessment. An attempt was made by M. Tokunaga to relate ERS SAR satellite sensor data of vegetation canopies to LANDSAT TM satellite sensor measurements, both at 30-meter resolution. A correlation was observed above a Normalized Difference Vegetation Index (NDVI) of 0.4, but that experiment was not based on data taken by the two satellite sensors in the same time period. In this research, a correlation is determined between the active and passive measurements of the vegetation index at very high resolution. The measurements take place near ground level over vegetation of varied health, using a Ground Penetrating Radar (GPR) and a handheld spectrometer. The GPR and the handheld spectrometer have the same field of view, so it is possible to compare data over the whole range of NDVI. Both measurements take place one right after the other, to allow an accurate comparison. The goal of this research is to define a new vegetation index using active sensors. The GPR, operating at 1.5 GHz, produces images that contain backscatter signals obtained from vegetation. These images are processed by a filter to eliminate clutter and noise. The Fourier amplitude and phase characteristics of the vegetation health are extracted from the backscatter signal. The same vegetation is subjected to the spectrometer measurements. Our results show a linear correlation between the power of the GPR backscatter signal and the NDVI as calculated from the spectrometer data.
As a continuation of this work, the ground validation will be compared to active/passive satellite sensor measurements of vegetation health.
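The passive side of the comparison rests on the standard NDVI formula, which can be stated directly (the `eps` guard is our addition to avoid division by zero on dark pixels):

```python
def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectances; values near +1 indicate dense, healthy vegetation,
    values near 0 bare soil, and negative values water or clouds."""
    return (nir - red) / (nir + red + eps)
```

Healthy vegetation reflects strongly in the near-infrared and absorbs red light for photosynthesis, so the normalized difference rises with vegetation vigor, which is the quantity the GPR backscatter power is correlated against.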
Information-theoretic textural features of SAR images: an assessment for land cover classification
In this work, two features, computable from SAR images on a per-pixel basis but relying on global image statistics, are described and discussed. The rationale is that spatial heterogeneity is regarded as uncertainty, that is, the unpredictability of one sample feature, e.g., the square root of the local variance, from another pixel feature, such as the local mean. Thus, such uncertainty can be measured by resorting to Shannon's information theory in a mathematically rigorous and physically consistent manner. Starting from the multiplicative noise and texture models peculiar to SAR imagery, the conditional information of the square root of the estimated local variance given the local mean has been found to be a powerful heterogeneity measurement, very little affected by the noise, and thus capable of capturing subtle variations of backscatter and texture whenever they are embedded in heavy speckle. On the other hand, the joint information of standard deviation and mean, although not strictly a heterogeneity feature, can be used as a textural feature for automated segmentation and classification, thanks to its noise insensitivity and its capability of highlighting man-made structures. Experimental results on C-band SIR-C and X-band X-SAR data of the city of Pavia, Italy, demonstrate that the proposed features are useful for automated segmentation and classification tasks. Promising results are obtained in terms of discrimination of urban and suburban areas with different degrees of building density. Furthermore, the additional capabilities stemming from the joint use of X-band data, analogous to those that will be available after the launch of the upcoming COSMO-SkyMed mission, are highlighted and discussed.
Subsurface material type determination from ground-penetrating radar signatures
A ground penetrating radar (GPR, at 1.5 GHz) has been used to help determine the material type of different subsurface layers. Based on the incident and reflected electromagnetic waves, a new method was devised that determines Material Characteristics in the Fourier Domain (MCFD), which can be used for material identification. The MCFD is calculated at every reflection; each reflection is caused by a sufficient change in the dielectric constant between two different soil layers. Working in the frequency domain, the effect of the medium is separated by using the wavelet of the electromagnetic signal before and after it is reflected from the medium. An algorithm is developed that obtains the MCFD and identifies the material type using a two-layer back-propagation neural network (NN). The material type can be determined regardless of the layer at which the object is buried (limited by the GPR reflection intensity level), the object size, or the presence of an extended subsurface layer. In this method, GPR images for different material types at different layers were obtained; up to two levels of mixed material types, such as sand, clay, loam, rock, broken jar, etc., were considered.
SAR amplitude probability density function estimation based on a generalized Gaussian scattering model
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on Synthetic Aperture Radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In the present paper, an innovative parametric estimation methodology for SAR amplitude data is proposed, that takes into account the physical nature of the scattering phenomena generating a SAR image by adopting a generalized Gaussian (GG) model for the backscattering phenomena. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions, and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude probability density function better than several previously proposed parametric models for backscattering phenomena.
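The first step of MoLC — estimating second-kind statistics from the data — can be illustrated as follows; the log-normal check in the test is a textbook case where the MoLC equations are closed-form, not the paper's GG-based amplitude model:

```python
import numpy as np

def log_cumulants(samples):
    """First two sample log-cumulants (second-kind statistics): the mean
    and variance of ln(x). MoLC equates these to their analytic
    expressions for the chosen PDF family and solves for the parameters."""
    lx = np.log(np.asarray(samples, float))
    k1 = lx.mean()                 # first log-cumulant
    k2 = ((lx - k1)**2).mean()     # second log-cumulant
    return float(k1), float(k2)
```

For a log-normal sample with parameters (mu, sigma), the log-cumulants converge to mu and sigma squared, so inverting them recovers the parameters directly; for heavier-tailed families such as the generalized-Gaussian-based model, the corresponding equations must be solved numerically.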
Poster Session
Improvements in object recognition by radar and ladar data fusion
Karlheinz Bers, Thorsten Brehm, Helmut Essen, et al.
Remote sensing using unmanned aerial vehicles is gaining more and more importance for military reconnaissance during peacekeeping missions. Such applications nowadays have to take into account that, under civil-war conditions, a mix of sensors over sensitive urban terrain may be useful. These tasks typically have to be fulfilled under adverse weather conditions as well, which can mainly be met by airborne imaging radar sensors. Advanced radar sensors are able to deliver highly resolved images with considerable information content, such as polarimetry and 3D features, and with robustness against changing environmental and operational conditions. By extending the knowledge base for an object through fusion of radar data with ladar or IR information, reliable detection and even identification of objects becomes feasible, allowing optimized signal processing by distributing the assignments among the combined sensors. The contribution describes the different sensors and gives an overview of the image data for the sample scenes. The methods of object discrimination are discussed and representative results are shown.
Image Registration and Data Interpolation
Irregularly sampled scenes
Maria Petrou, Roberta Piroddi, Sunil Chandra
In this paper, we present a review of some commonly used methods for signal interpolation and/or estimation from a set of randomly chosen samples. Most of these methods were originally devised for 1D signals; we first extend them to 2D and then perform a comparative study. Our experimental results show good interpolation/reconstruction performance for some methods at sampling ratios as small as 5% of the original number of pixels.
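As a baseline for the problem setting (reconstructing an image from a small fraction of randomly placed samples), a simple inverse-distance-weighting estimator can be sketched as follows; it is not one of the reviewed methods, just an illustration:

```python
import numpy as np

def idw_interpolate(coords, values, grid_shape, p=2, eps=1e-9):
    # Inverse-distance-weighted estimate at every pixel of a regular grid
    # from scattered samples; a simple baseline, not one of the reviewed
    # methods.
    yy, xx = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    d = np.linalg.norm(pts[:, None, :] - coords[None, :, :], axis=2)
    w = 1.0 / (d ** p + eps)
    return ((w * values).sum(axis=1) / w.sum(axis=1)).reshape(grid_shape)

# Keep 5% of a smooth ramp image and reconstruct the rest.
rng = np.random.default_rng(1)
img = np.add.outer(np.arange(20.0), np.arange(20.0))
idx = rng.choice(img.size, size=img.size // 20, replace=False)
coords = np.stack(np.unravel_index(idx, img.shape), axis=1).astype(float)
recon = idw_interpolate(coords, img.ravel()[idx], img.shape)
```

At the sample positions the estimate reproduces the known values; between samples it is a smooth weighted average.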
Constrained image restoration applied to passive millimeter-wave images
Passive millimeter-wave imaging has excellent all-weather capability but is severely diffraction limited and requires large apertures to give adequate spatial resolution. Linear restoration can enhance the resolution by a factor of two, while under favorable conditions non-linear restoration can enhance it by a factor of four. The amount of enhancement possible is generally limited by the amount of noise present in the original observed image; preprocessing can reduce the effect of this noise. In many non-linear restoration techniques the amount of high-spatial-frequency content introduced into the restored image is uncontrolled. This problem has been overcome through the use of the Lorentzian algorithm, which imposes a statistical constraint on the distribution of gradients within the restored image. Another way of applying a constraint is to restore an image selectively. The high-spatial-frequency content of an image exists largely at edges and sharp features, which need to be restored, while the smoother background between features contains fewer high frequencies and needs less restoration. Adaptive non-linear restoration techniques have been investigated whereby the amount of restoration applied to an image is a function of the first and second derivatives of the image intensity. Images are presented to demonstrate the effectiveness of these methods.
Elastic image registration for landslide deformation detection
Siti Khairunniza-Bejo, Maria Petrou, Vassili A. Kovalev
In this paper, a new method of 2D inhomogeneous and non-parametric image registration with sub-pixel accuracy is proposed. The method is based on the invocation of deformation operators which imitate the deformations expected to be observed when a landslide occurs. The similarity between two images is measured by a similarity function which takes into consideration grey level value correlation and geometric deformation. The geometric deformation term ensures that the minimum necessary deformation compatible with the two images is employed. An extra term, ensuring maximum overlap between the two images is also incorporated, in order to avoid the pitfall of finding maximum correlation coefficient with minimum overlap. Sub-pixel accuracy is achieved by manipulating lists of pixels (real valued positions and corresponding grey values) rather than integer grid positions conventionally used to represent images. Landsat 5 TM images of southern Italy are used for the experiments.
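A similarity function of the kind described above, combining a grey-level correlation term, a deformation penalty and an overlap reward, might look as follows. The weights `lam` and `mu` are hypothetical, since the abstract does not state the exact functional form:

```python
import numpy as np

def similarity(f, g, deformation_cost, overlap_frac, lam=0.5, mu=0.5):
    # Correlation reward minus a deformation penalty plus an overlap
    # reward; lam and mu are hypothetical weights, since the abstract does
    # not state the exact functional form.
    fc = f - f.mean()
    gc = g - g.mean()
    corr = (fc * gc).sum() / (np.sqrt((fc ** 2).sum() * (gc ** 2).sum()) + 1e-12)
    return corr - lam * deformation_cost + mu * overlap_frac

f = np.arange(9.0).reshape(3, 3)
s_identity = similarity(f, f, deformation_cost=0.0, overlap_frac=1.0)
s_inverted = similarity(f, -f, deformation_cost=0.0, overlap_frac=1.0)
```

The overlap term rewards registrations that keep the images overlapping, avoiding the degenerate high-correlation, low-overlap solutions mentioned in the abstract.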
Change Detection and Multitemporal Data Analysis
Unsupervised classification of changes in multispectral satellite imagery
The statistical techniques of multivariate alteration detection, maximum autocorrelation factor transformation, expectation maximization, fuzzy maximum likelihood estimation and probabilistic label relaxation are combined in a unified scheme to classify changes in multispectral satellite data. An example involving bitemporal LANDSAT TM imagery is given.
Change detection in multitemporal SAR images based on generalized Gaussian distribution and EM algorithm
In this paper, we propose a novel automatic and unsupervised change-detection approach specifically oriented to the analysis of multitemporal single-channel, single-polarization SAR images. The approach is based on a closed-loop process composed of three main steps: 1) pre-processing based on controlled adaptive iterative filtering; 2) comparison between the multitemporal images according to a standard log-ratio operator; 3) automatic analysis of the log-ratio image to generate the change-detection map. The first step aims at reducing the speckle noise in a controlled way in order to maximize the separability between the changed and unchanged classes. The second step compares the two filtered images in order to generate a log-ratio image. Finally, the third step deals with the automatic selection of the decision threshold to be applied to the log-ratio image. This selection is carried out according to a novel formulation of the Expectation Maximization (EM) algorithm under the assumption that the changed and unchanged classes follow Generalized Gaussian (GG) distributions. Experimental results on real ERS-2 SAR images confirmed the effectiveness of the proposed approach.
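Steps 2 and 3 can be illustrated with a log-ratio operator and a two-component EM fit. Plain Gaussian components are used below instead of the paper's generalized Gaussians to keep the sketch short; the data are synthetic:

```python
import numpy as np

def log_ratio(x1, x2, eps=1e-6):
    # Standard log-ratio comparison of two co-registered SAR intensities.
    return np.log((x2 + eps) / (x1 + eps))

def em_two_gaussians(x, iters=50):
    # Two-component 1-D Gaussian mixture fitted by EM; the paper's
    # components are generalized Gaussians, which would add shape
    # parameters to the M-step.
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        dens = (pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
                / np.sqrt(2 * np.pi * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n = resp.sum(axis=0)
        pi = n / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return pi, mu, var

# Toy log-ratio data: unchanged pixels near 0, changed pixels near 2.
rng = np.random.default_rng(2)
samples = np.concatenate([rng.normal(0.0, 0.2, 3000),
                          rng.normal(2.0, 0.2, 1000)])
pi, mu, var = em_two_gaussians(samples)
```

The decision threshold would then be placed where the two fitted class densities intersect.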
Change detection of man-induced landslide causal factors
Cristina Tarantino, Palma N. Blonda, Guido Pasquariello
In the framework of the EU project Landslide Early Warning Integrated project (LEWIS), optical RS data have been periodically processed to detect surface-feature changes that can be correlated with the development of slope-instability mechanisms. Attention is focused on surface-feature changes induced by human activity, such as deforestation and ploughing, which affect slope equilibrium conditions by decreasing the effective slope shear strength and increasing the slope shear stress, respectively. Fourteen optical Landsat TM images (two per year) have been analysed over the Caramanico test site in Regione Abruzzo, Southern Italy. The main objective of the work was to verify the advantages and limitations of conventional space-borne RS data for the prevention of landslide events. The data were analysed by a supervised classifier based on neural network techniques. Four classes and their transitions were considered in the analysis. Supervised techniques were preferred to unsupervised techniques because the former provide useful information not only on where a transition occurred, but also on the specific classes involved in the transition between two dates. The results indicate that in the years 1987-2000 the following surface class changes, potentially related to landslide phenomena, occurred: i) a strong conversion of arboreous land into agricultural land and an increase of barren land, mainly in the area affected by landslide events; ii) an increase of artificial structures, mainly stemming from a transformation of cultivated areas.
Noise modeling and estimation in image sequences from thermal infrared cameras
Luciano Alparone, Giovanni Corsini, Marco Diani
In this paper we present an automated procedure devised to measure noise variance and correlation from a sequence, either temporal or spectral, of digitized images acquired by an incoherent imaging detector. The fundamental assumption is that the noise is signal-independent and stationary in each frame, but may be non-stationary across the sequence of frames. The idea is to detect areas within bivariate scatterplots of local statistics that correspond to statistically homogeneous pixels. After that, the noise PDF, modeled as a parametric generalized Gaussian function, is estimated from the homogeneous pixels. Results obtained by applying the noise model to images taken by an IR camera operated in different environmental conditions are presented and discussed. They demonstrate that the noise is heavy-tailed (tails longer than those of a Gaussian PDF) and spatially autocorrelated. Temporal correlation has been investigated as well and found to depend on the frame rate and, to a small extent, on the wavelength of the thermal radiation.
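The idea of estimating noise variance from statistically homogeneous pixels can be sketched crudely by averaging the smallest local variances on a block grid; this is a simplification of the scatterplot-based detection described above, and all numbers are synthetic:

```python
import numpy as np

def noise_variance_estimate(frame, block=8, frac=0.2):
    # Average the lowest `frac` of block variances: a crude stand-in for
    # picking statistically homogeneous pixels off the scatterplot of
    # local statistics.
    h, w = frame.shape
    local_vars = np.sort(np.array(
        [frame[i:i + block, j:j + block].var()
         for i in range(0, h - block + 1, block)
         for j in range(0, w - block + 1, block)]))
    k = max(1, int(frac * len(local_vars)))
    return local_vars[:k].mean()

# Two flat regions plus unit-variance Gaussian noise (signal-independent).
rng = np.random.default_rng(3)
clean = np.zeros((64, 64))
clean[:, 32:] = 10.0
est = noise_variance_estimate(clean + rng.normal(0.0, 1.0, clean.shape))
```

Taking the lowest variances biases the estimate slightly low; the paper's scatterplot clustering avoids that by identifying homogeneous regions explicitly.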
Poster Session
An analytical nonephemeris algorithm for MODIS bowtie removal
In order to remove the MODIS bowtie effect, an analytical algorithm is proposed that is based on solid-geometry projection and requires no ephemeris information. The geometric projection model is established from the parameters of the MODIS platform, and the number of overlapping pixels is quantified as a function of the instantaneous scanning angle. A lookup table is used to guide the deletion of overlapping pixels and improve efficiency, and cubic spline interpolation is applied to restore the data at sub-pixel accuracy following their profile. Resampling is then performed to generate integral pixel coordinates. The border-discontinuity problem that occurs due to the gap between different swaths is solved by introducing a special blocking method. The validity of our algorithm is verified by comparison with three other non-ephemeris algorithms; the results show that not only is the bowtie effect within a single swath effectively removed, but the discontinuity caused by the conventional pixel-grouping method is also largely eliminated.
Multiresolution Fusion and Super-Resolution
Experimental performance analysis of hyperspectral anomaly detectors
Nicola Acito, Giovanni Corsini, Marco Diani, et al.
Anomaly detectors are used to reveal the presence of objects having a spectral signature that differs from that of the surrounding background area. Since the advent of the early hyperspectral sensors, anomaly detection has gained ever-increasing attention from the user community because it is of interest in both military and civilian applications. The feature that makes anomaly detection attractive is that it does not require the difficult step of atmospheric correction, which is instead needed by spectral-signature-based detectors to compare the received signal with the target reflectance. The aim of this paper is to investigate different anomaly detection strategies and validate their effectiveness on a set of real hyperspectral data. Namely, data acquired during an ad-hoc measurement campaign have been used to make a comparative analysis of the performance achieved by four anomaly detectors. The detectors considered in this analysis are denoted by the acronyms RX-LOCAL, RX-GLOBAL, OSP-RX, and LGMRX. In the paper, we first review the statistical models used to characterize both the background and the target contributions, then we introduce the four anomaly detectors mentioned above and summarise the hypotheses under which they have been derived. Finally, we describe the methodology used for comparing the algorithm performance and present the experimental results.
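Of the detectors listed, the global RX detector has the simplest closed form: the squared Mahalanobis distance of each pixel spectrum from the scene mean under the scene covariance. A minimal sketch on a synthetic cube:

```python
import numpy as np

def rx_global(cube):
    # Global RX: squared Mahalanobis distance of every pixel spectrum from
    # the scene mean under the scene covariance.
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    Ci = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(b))
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, Ci, d).reshape(h, w)

# Toy cube: Gaussian background with one spectrally anomalous pixel.
rng = np.random.default_rng(4)
cube = rng.normal(0.0, 1.0, (16, 16, 5))
cube[8, 8] += 10.0
scores = rx_global(cube)
```

The local variant replaces the scene-wide mean and covariance with statistics estimated in a sliding window around each pixel.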
Poster Session
Ultraresolution of microwave, color, and synthetic color images
Evgeni N. Terentiev, Nikolai E. Terentiev, Fedor V. Shugaev
Modern digital image-forming systems are, as a rule, multi-sensor or multi-ray. The modulation transfer function (MTF) M_O of a point spread function (PSF) O can be measured with the aid of a special transparent image; the PSF O of a passive radio-vision system can be measured from a point source. If Y is an orthonormal system of Fourier harmonics in a small domain, then the PSF O and the MTF M_O are connected by an eigenvalue problem relating the convolution and multiplication operations: O * Y = M_O Y. We may likewise introduce the MTF M_R of a resolving function R, R * Y = M_R Y, and the MTF M_RO of the composition R * O, R * O * Y = M_RO Y. We have the equality M_RO = M_R M_O in the small frequency domain [1]. The ultra-resolution method gives pointwise resolution results and is at present the most effective and stable method for increasing resolution. Examples of PSFs O; MTFs M_O, M_R and M_RO; and numerous applications of the ultra-resolution method are considered.
Implementation of a parallel registration algorithm for registration of InSAR complex images
Lijun Lu, Mingsheng Liao, Lu Zhang, et al.
Registration of two or more images of the same scene is an important procedure in InSAR image processing, which seeks to extract the differential phase information between two images exactly. Meanwhile, efficiency in large-volume data processing is also a key point in the operational InSAR data processing chain. In this paper, some conventional registration methods are analyzed in detail and a parallel algorithm for registration is investigated. Combining a parallel computing model with the intrinsic properties of InSAR data, the authors put forward an image parallel registration scheme over a distributed cluster of PCs. A preliminary experiment was carried out, and the results demonstrate the feasibility and effectiveness of the proposed scheme.
Extraction and analysis of LUCC information based on DTCs
Ping Wang, Jixian Zhang, Yongguo Zheng, et al.
With the development of remote sensing and computer technology, the quantity of data and information has increased greatly. Long-term research has shown that it is impracticable to manage and handle such information by hand; a decision-making system is needed, and the decision tree is an important method for the classification of land use and land cover (LUCC). This paper describes the process from building a synthetic database to designing the decision-tree model. A knowledge base provides several forms of data for extracting LUCC information, covering three kinds of data: 1) raster data, such as TM and SPOT remote sensing imagery; 2) ground-measured data, such as DEMs; and 3) thematic vector data, such as land-use maps. The rule base consists of all the transformation rules from the source state to the destination state of the problem; each node includes at least one rule, and this foundation makes it possible to recognize where land area and type have changed. Finally, according to the level of complexity of the LUCC changes, two kinds of decision-tree models are given: classified comparison between single-date results, and synchronic analysis with multi-temporal images. (1) Classified comparison between single-date results: taking the extraction of ice-change information as an example, the results are very good. (2) Synchronic analysis with multi-temporal images: a decision tree was constructed for the Hebei study area, with conditions including the grey value and other features such as slope gradient and supporting GIS thematic data; the results show that the largest change type is the conversion of other land into forest. The area precision exceeds 85%, and the classification precision 90%.
Target Tracking, Data Compression, and Watermarking
Review of CCSDS-ILDC and JPEG2000 coding techniques for remote sensing
Joan Serra-Sagrista, Francesc Auli-Llinas, Fernando Garcia-Vilchez, et al.
High resolution images are becoming a natural source of data for many different applications, for instance, remote sensing (RS) and geographic information systems (GIS). High resolution is to be understood as a combination of increasing spectral size, increasing spatial resolution per pixel, increasing bit-depth resolution per pixel, and larger areas captured at once by the sensors. These images therefore place increasing demands on both storage and transmission scenarios, so that there is a need for compression. Lossless coding, achieving at most 4:1 compression ratios, is seldom enough for applications without a great demand for visual detail. Lossy coding, which may well achieve over 200:1 compression ratios, may still be useful for some final user applications. We are interested in those lossy coding techniques that may fulfill the particular requirements of RS and GIS applications, i.e.: 1) availability of compression of both mono-band and multi-band (either multi- or hyperspectral) images; 2) high speed of data recovery (from the encoded bit stream) in all image regions, considering also embedded transmission; 3) zoom and lateral shift capability; 4) respect of no-data or meta-data regions, which should be maintained at any compression ratio; 5) in the case of lossy compression, lossless encoding of some physical parameters such as temperature, radiance, elevation, etc.; 6) reaching high compression ratios while maintaining the image quality. In this paper we review two such lossy coding techniques, namely the CCSDS-ILDC Recommendation and the recent JPEG2000 Standard.
Poster Session
Efficient methodology for endmembers selection by field radiometry: an application to multispectral mixture model
Jose Manuel Vazquez, Agueda Arquero, Estibaliz Martinez, et al.
Spectral mixture analysis provides an efficient mechanism for the interpretation and classification of remotely sensed multispectral imagery. It aims to identify a set of reference spectra, named endmembers, that can be used to model the spectral response of each pixel of the remote image. Thus, the modelling is usually carried out as a linear combination of a finite number of ground components. Although spectral mixture models have proved appropriate for subpixel analysis of large hyperspectral datasets, few methods are available in the literature on the optimal selection of endmembers through field spectroscopy, or on the regression analysis techniques applied to the resulting model. The main objective of this work is to deal with these aspects. With regard to the first subject, and in order to determine not only specific conditions of the covers (health, contamination, geographic and geologic characteristics, etc.) but also to assure an efficient sampling method, ground-truth data collection and description remains an essential task. In particular, one very important question needs improvement: the determination of the number of samples to collect for each vegetation type. Along these lines, a useful statistic based on the Student's t distribution will be discussed in this paper.
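In its simplest large-sample form, the t-based determination of sample numbers reduces to n = (t·s/E)², with s the sample standard deviation and E the tolerated error in the mean. A sketch, with purely illustrative reflectance numbers (not from the paper):

```python
import math

def sample_size(std_dev, margin, t_value=1.96):
    # n = (t * s / E)^2, rounded up: samples needed so that the mean of a
    # field-measured quantity is within +/- margin at the confidence level
    # implied by t_value (1.96 approximates the large-sample 95% case).
    return math.ceil((t_value * std_dev / margin) ** 2)

# Illustrative numbers: reflectance std 0.08, tolerated error 0.02.
n = sample_size(std_dev=0.08, margin=0.02)
```

For small pilot samples, t_value would instead be taken from the t distribution with n-1 degrees of freedom and the computation iterated.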
Classification and Clustering
Use of spatial information after segmentation for very high spatial resolution satellite data classification
Alexandre P. Carleer, Eleonore Wolff
Since 1999, very high spatial resolution satellite data (IKONOS, QuickBird, OrbView-3) have represented the surface of the earth in greater detail. However, these data do not necessarily provide better land cover/use classification. The incongruous results of earlier studies were attributed to the increase of internal variability within homogeneous land cover units and to the weakness of the spectral resolution. To overcome these problems, a region-based procedure can be used. Image segmentation before classification is successful at removing much of the structural clutter and allows easy use of spatial information for classification. This information, on top of the spectral information, can include the surface, the perimeter, the compactness (area/perimeter^2), and the degree and kind of texture. In this study, a feature selection method is used to show which features are useful for which classes, and these features are used to improve the land cover/use classification of very high spatial resolution satellite imagery. The feature selection is preceded by an analysis of the visual interpretation parameters useful for the identification of each class of the legend, in order to guide the choice of the features, whose combinations can be numerous.
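Some of the spatial features mentioned (area, perimeter, compactness) can be computed directly from a binary segment mask; a minimal sketch, counting 4-connected boundary edges for the perimeter:

```python
import numpy as np

def region_features(mask):
    # Area, 4-connected edge perimeter and compactness (area/perimeter^2)
    # of a binary segment mask.
    area = int(mask.sum())
    p = np.pad(mask.astype(int), 1)
    perimeter = 0
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        neighbour = np.roll(p, shift, axis=axis)[1:-1, 1:-1]
        # a foreground pixel with a background neighbour exposes one edge
        perimeter += int(((p[1:-1, 1:-1] == 1) & (neighbour == 0)).sum())
    return area, perimeter, area / perimeter ** 2

# A 4x4 square segment: area 16, perimeter 16 edge units.
mask = np.zeros((8, 8), dtype=int)
mask[2:6, 2:6] = 1
area, perimeter, compactness = region_features(mask)
```

More elongated or ragged segments of the same area have larger perimeters, hence lower compactness.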
Poster Session
Customized fusion of satellite images based on the à trous algorithm
Lately, different methods for the fusion of multispectral and panchromatic images based on the wavelet transform have been proposed. Although most of them provide satisfactory results, one of them, the à trous algorithm, presents some advantages over the other wavelet-based fusion methods: its computation is very simple, involving only elementary algebraic operations such as products, differences and convolutions, and it yields better spatial and spectral quality than the others. On the other hand, it is well known that standard fusion methods do not allow control of the spatial and spectral quality of the fused image: very high spectral quality implies low spatial quality and vice versa. In this sense, a new version of a wavelet-based fusion method, computed through the à trous algorithm, is proposed here that allows the trade-off between the spectral and spatial quality of the fused image to be customized through the evaluation of two quality indices: a spectral index, the ERGAS index, and a spatial one. For the latter, a new spatial index based on the ERGAS concept, translated to the spatial domain, has been defined. Moreover, several different architectures for the computation of the investigated fusion method have been evaluated, in order to determine the optimal degradation level of the source image required to perform the fusion process. The performance of the proposed fusion method has been compared with that of a fusion method based on the hierarchical wavelet transform.
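The ERGAS index used on the spectral side of the trade-off has the standard closed form ERGAS = 100·(h/l)·sqrt(mean_k (RMSE_k/μ_k)²). A sketch, with a QuickBird-like resolution ratio h/l = 1/4 as an assumed example:

```python
import numpy as np

def ergas(reference, fused, ratio):
    # ERGAS = 100 * (h/l) * sqrt(mean_k (RMSE_k / mu_k)^2); ratio = h/l is
    # the Pan-to-MS resolution ratio, and lower values mean better
    # spectral quality.
    terms = []
    for ref_band, fus_band in zip(reference, fused):
        rmse = np.sqrt(np.mean((ref_band - fus_band) ** 2))
        terms.append((rmse / ref_band.mean()) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))

band = np.full((8, 8), 50.0)
perfect = ergas([band], [band], ratio=0.25)       # identical bands
biased = ergas([band], [band + 1.0], ratio=0.25)  # constant 1-unit error
```

The paper's spatial counterpart applies the same formula after translating the comparison to the spatial domain.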
Analysis of High Geometrical Resolution Images
Simulated analysis of dependency of vegetation index on spatial resolution of sensors by QuickBird and ASTER
Through the analysis of variograms for entire scenes and small local areas, it is revealed that the ground spatial correlation length in the NDVI map and the sensor spatial resolution show a linear relation in a log-log plot. In order to examine the dependency of NDVI on sensor spatial resolution, the Haar discrete wavelet transform is exploited to derive resolution-reduced coarse data from the finest-resolution data, acquired by QuickBird with a sensor resolution of 2.8 m. To confirm the dependency of NDVI observed in the simulated data at different resolutions, ASTER and MODIS data are incorporated alongside the QuickBird data. It is shown that the decreasing tendency of the standard deviation of NDVI values is proportional to the log of the sensor resolution. It is also found that the "range" of the variogram is larger for areas containing many natural objects, such as vegetation, than for areas such as urban areas containing many artifacts. The ground spatial scale that fits the sensor spatial resolution should therefore be taken into account, depending on the cover types and the scale of the areas to be investigated.
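The two building blocks of the simulation, NDVI and one level of Haar approximation (2x2 block averaging), can be sketched as follows; the reflectances are synthetic stand-ins for the QuickBird bands:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalised Difference Vegetation Index.
    return (nir - red) / (nir + red + eps)

def haar_approx(img):
    # One level of the Haar wavelet approximation: average each
    # non-overlapping 2x2 block, halving the spatial resolution.
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2]
                   + img[1::2, 0::2] + img[1::2, 1::2])

# Synthetic reflectances standing in for the QuickBird bands.
rng = np.random.default_rng(5)
nir = rng.uniform(0.3, 0.8, (32, 32))
red = rng.uniform(0.05, 0.2, (32, 32))
fine_ndvi = ndvi(nir, red)
coarse_ndvi = ndvi(haar_approx(nir), haar_approx(red))
```

Iterating `haar_approx` simulates progressively coarser sensors, and the standard deviation of the NDVI map shrinks as the resolution is degraded, which is the tendency the paper quantifies.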
Poster Session
Study on the integration of GIS and remote sensing data in grouping-interpretation system for remote sensing image
In this paper, we present a new approach to integrating geographic information systems (GIS) and remote sensing. Its implementation environment is the Grouping Interpretation System (GrIS). GrIS was developed on the basis of application task requirements, the visual interpretation procedure and manner, and multi-technique integration. GrIS can operate in both single-computer mode and multi-computer mode, with a client/server structure, in LAN and WAN environments. The system was designed to integrate GIS, remote sensing processing and image interpretation functions. Moreover, it allows raster and vector formats to be combined for image interpretation, in automatic and semi-automatic interpretation modes respectively. The result of integrating image interpretation into GrIS is demonstrated. The use of this integration technology and the relevant information from GIS leads to enhanced information extraction and effective analysis of remote sensing images.
Interpolation in multispectral data using neural networks
Vassilis Tsagaris, Antigoni Panagiotopoulou, Vassilis Anastassopoulos
A novel procedure which aims at increasing the spatial resolution of multispectral data while simultaneously creating a high-quality RGB fused representation is proposed in this paper. For this purpose, neural networks are employed and a successive training procedure is applied in order to incorporate into the network structure knowledge about recovering lost frequencies, thus giving fine-resolution output color images. MERIS multispectral data are employed to demonstrate the performance of the proposed method.
Improvement of unsupervised texture classification based on genetic algorithms
Hiroshi Okumura, Yuuki Togami, Kohei Arai
At the previous conference, the authors proposed a new unsupervised texture classification method based on genetic algorithms (GA). In the method, the GA is employed to determine the location and size of the typical textures in the target image. The proposed method consists of the following procedures: 1) determine the number of classification categories; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training-area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated as the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the selection operation, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and the two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. However, this method has not been fully automated, because it requires not only the target image but also the number of categories for classification. In this paper, we describe some improvements towards an automated implementation of texture classification. Experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and provide reasonable classification results.
Use of multiresolution analysis to detect chemical vapors in passive Fourier transform infrared (FTIR) spectroscopy
Detection and identification of chemical vapors using Fourier Transform InfraRed (FTIR) sensors is an area of interest in many communities and is an active area of research. The use of multi-resolution analysis to detect and identify chemicals of interest in open path applications has seen limited application. In this study, we examine the use of multi-resolution analysis to detect chemicals of interest in both the laboratory and under real world conditions. Real world data was collected using a stationary FTIR sensor located at standoff ranges from the point of dissemination. The experimental results show promising detection and identification capabilities of this analysis technique.
Cascaded RM-filter for remote sensing imaging
In this paper, we present the robust cascaded RM-filter, which is able to remove a mixture of impulsive and multiplicative noise in remote sensing imagery. The designed filter uses combined R- and M-estimators, called RM-estimators. The cascaded RM-filter is the sequential connection of two filters. The first filter employs one of the proposed RM-KNN (MM-KNN, WM-KNN, ABSTM-KNN or MoodM-KNN) filters to provide impulsive noise rejection and detail preservation; the second uses an M-filter to realize multiplicative noise suppression. We apply the simple cut, Hampel three-part redescending, Andrews sine, Tukey biweight, and Bernoulli influence functions in the designed filter. Extensive simulations have demonstrated that the cascaded RM-filter consistently outperforms other filters by balancing the trade-off between noise suppression and fine-detail preservation. Finally, we present an implementation of the proposed filter on the DSP TMS320C6701, demonstrating that it can potentially provide a real-time solution for the processing of SAR images.
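A cascade of this general shape (an impulse-rejecting rank filter followed by a multiplicative-noise filter) can be sketched with standard stand-ins: a 3x3 median in place of the RM-KNN stage and a Lee filter in place of the M-filter. Neither is the paper's actual estimator:

```python
import numpy as np

def median3(img):
    # 3x3 median: rejects impulsive noise (stand-in for the RM-KNN stage).
    p = np.pad(img, 1, mode='edge')
    shifts = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

def lee_filter(img, win=3, noise_var=0.05):
    # Lee filter for multiplicative, speckle-like noise (stand-in for the
    # M-filter stage); noise_var is the assumed relative noise variance.
    p = np.pad(img, win // 2, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = p[i:i + win, j:j + win]
            m, v = w.mean(), w.var()
            k = max(v - noise_var * m * m, 0.0) / (v + 1e-12)
            out[i, j] = m + k * (img[i, j] - m)
    return out

def cascaded_filter(img):
    # Sequential connection: impulse rejection first, then speckle smoothing.
    return lee_filter(median3(img))

noisy = np.ones((9, 9))
noisy[4, 4] = 100.0          # a single impulse on a flat background
restored = cascaded_filter(noisy)
```

Running the rank stage first matters: a linear or Lee-type stage alone would smear the impulse into its neighbourhood instead of removing it.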
Hyperspectral Image Analysis II
Regularized methods for hyperspectral image classification
In this paper, we analyze regularized non-linear methods in the context of hyperspectral image classification. For this purpose, we compare regularized radial basis function neural networks (Reg-RBFNN), standard support vector machines (SVM), and kernel Fisher discriminant (KFD) analysis both theoretically and experimentally. We focus on the accuracy of the methods when working in noisy environments, with high input dimension, and with a limited number of training samples. In addition, some other important issues are discussed, such as the sparsity of the solutions, the computational burden, and the capability of the methods to provide probabilistic outputs. Although in general all methods yielded satisfactory results, SVMs proved more effective than KFD and Reg-RBFNN in standard situations regarding accuracy, robustness, sparsity, and computational cost.
Analysis of High Geometrical Resolution Images
Spectral information extraction from very high resolution images through multiresolution fusion
This paper critically reviews state-of-the-art and advanced methods for multispectral (MS) and panchromatic (Pan) image fusion based on either the intensity-hue-saturation (IHS) transformation or redundant multiresolution analysis (MRA). In either case, lower-resolution MS bands are sharpened by injecting details taken from the higher-resolution Pan image. A crucial point is modeling the relationships between the detail coefficients of a generic MS band and the Pan image at the same resolution. Once calculated at the coarser resolution, where both types of data are available, such a model is extended to the finer resolution to weight the Pan details to be injected. Two injection models embedded in an à trous wavelet decomposition are compared on a test set of very high resolution QuickBird MS+Pan data. One works on approximations and provides a partial unmixing of coarse MS pixels via the high-resolution Pan. The other is based on spectral fidelity between the original and merged MS data. Fusion comparisons on spatially degraded data, whose high-resolution MS originals are available for reference, show that the former performs better than the latter, in terms of both spatial and spectral fidelity.
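The detail-injection scheme common to both models can be sketched generically: the MS band is sharpened by adding the Pan image minus its low-pass approximation, scaled by an injection gain. Here a box blur stands in for the à trous approximation, and the gain is a free parameter rather than either of the paper's fitted models:

```python
import numpy as np

def box_blur(img, k=5):
    # Crude low-pass stand-in for the 'a trous' approximation level.
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def inject_details(ms_band, pan, gain=1.0):
    # Sharpen an (already upsampled) MS band with the Pan detail plane,
    # weighted by an injection gain; in the paper the gain comes from a
    # model fitted at the coarse scale, here it is a free parameter.
    return ms_band + gain * (pan - box_blur(pan))

rng = np.random.default_rng(6)
pan = rng.uniform(0.0, 1.0, (16, 16))
ms_band = box_blur(pan)                # toy MS band: a blurred Pan
fused = inject_details(ms_band, pan, gain=1.0)
```

In the degenerate case where the MS band is exactly the low-pass of Pan, unit-gain injection recovers Pan itself; real injection models choose the gain per band so that the injected details match the MS spectral statistics.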