- Front Matter: Volume 7810
- Satellite Data Compression I
- Satellite Data Communications I
- Satellite Data Processing I
- Satellite Data Processing II
- Satellite Data Compression II
- Satellite Data Communications II
- Satellite Data Processing III
- Satellite Data Compression III
- Satellite Data Processing IV
- Satellite Data Compression IV
Front Matter: Volume 7810
This PDF file contains the front matter associated with SPIE
Proceedings Volume 7810, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Satellite Data Compression I
Rate allocation method for the fast transmission of pre-encoded meteorological data over JPIP
This work addresses the transmission of pre-encoded video containing meteorological data over JPIP. The primary requirement for the rate allocation algorithm deployed in the JPIP server is that it meet the real-time processing demands of the application.
A secondary requirement for the proposed algorithm is that it should be able to either minimize the mean squared error
(MMSE) of the video sequence, or minimize the maximum distortion (MMAX). The MMSE criterion considers the
minimization of the overall distortion, whereas MMAX achieves pseudo-constant quality for all frames.
The proposed rate allocation method employs the FAst rate allocation through STeepest descent (FAST) method that
was initially developed for video-on-demand applications. The adaptation of FAST in the proposed remote sensing scenario
considers meteorological data captured by the European meteorological satellites (Meteosat). Experimental results suggest
that FAST can be successfully adopted in remote sensing scenarios.
Hyperspectral compressive sensing
Compressive sensing (CS) is a new technique for reconstructing essentially sparse signals from fewer measurements than the Nyquist-Shannon criterion requires. The application of CS to hyperspectral imaging has the
potential for significantly reducing the sampling rate and hence the cost of the analog-to-digital sensors. In this paper a
novel approach for hyperspectral compressive sensing is proposed where each band of hyperspectral imagery is sampled
under the same measurement matrix. It is shown that the correlation between the compressive samples of two neighboring bands is consistent with that between the pixel values of two neighboring bands. Our hyperspectral compressive sensing
experimental results show that the proposed joint reconstruction method yields smaller reconstruction errors than the
individual reconstruction method at various sampling rates.
CNES studies for on-board implementation via HLS tools of a cloud-detection module for selective compression
Future CNES high resolution instruments for remote sensing missions will lead to higher data-rates because of the
increase in resolution and dynamic range. For example, the ground resolution improvement has multiplied the data rate by 8 from SPOT4 to SPOT5 [1] and by 28 for PLEIADES-HR [2].
Innovative "smart" compression techniques will then be required, performing different types of compression inside a scene in order to reach higher global compression ratios while complying with image quality requirements. This so-called "selective compression" allows substantial compression gains by detecting and then differently compressing the regions of interest (ROI) and of non-interest in the image (e.g., higher compression ratios are assigned to the non-interesting data).
Given that most CNES high-resolution images are cloudy [1], significant mass-memory and transmission gains could be achieved simply by detecting and suppressing (or strongly compressing) the areas covered by clouds.
Since 2007, CNES has been working on a cloud detection module [3], a simplification for on-board implementation of an already
existing module used on-ground for PLEIADES-HR album images [4]. The different steps of this Support Vector
Machine classifier have already been analyzed, for simplification and optimization, during this on-board implementation
study: reflectance computation, characteristic vector computation (based on multispectral criteria) and computation of
the SVM output.
In order to speed up the hardware design phase, a new approach based on HLS [5] tools is being tested for the VHDL description stage. The aim is to obtain a bit-true VHDL design directly from a high-level description language such as C or Matlab/Simulink [6].
Hyperspectral image compression algorithm using wavelet transform and independent component analysis
A lossy hyperspectral image compression algorithm based on discrete wavelet transform (DWT) and segmented
independent component analysis is presented in this paper. Firstly, bands are divided into different groups based on the
correlation coefficient. Secondly, maximum noise fraction (MNF) method and maximum likelihood estimation are used
to estimate dimensionality of data in each group. Based on the result of dimension estimation, ICA and DWT are
deployed in spectral and spatial directions respectively. Finally, SPIHT and arithmetic coding are applied to the
transformation coefficients respectively, achieving quantization and entropy coding. Experimental results on 220 band
AVIRIS hyperspectral data show that the proposed method achieves higher compression ratio and better analysis
capability as compared with PCA and SPIHT algorithms.
Satellite Data Communications I
Robust video multicast scheme over wireless network with priority transmission
Nowadays, it is still a challenge to offer high-quality and reliable real-time video services over wireless networks. In this paper, we study a new practical scalable video coding (SVC) multicast scheme based on an unequal error protection (UEP) framework with priority transmission. Firstly, we propose a 3D layer-based priority ordering method to determine the importance of the scalable video bit stream. Then we present an adaptive packetization algorithm to allocate the source and channel code bit rates for the ordered layers and guarantee that the layers with the highest importance can be recovered first. Simulation results show that the proposed scheme offers better performance than existing methods in preserving video quality for different kinds of sequences under various channel conditions.
High-throughput GPU-based LDPC decoding
Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems, such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results show that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
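To illustrate parity-check decoding in miniature (this is a hard-decision bit-flipping decoder on a toy matrix, not the soft sum-product GPU decoder of the paper), one might write:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, used as a tiny stand-in
# for a real LDPC matrix (which would be large and sparse).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iters=20):
    """Hard-decision bit-flipping: repeatedly flip the bit involved in the
    most unsatisfied parity checks until the syndrome is zero."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            break                          # all parity checks satisfied
        counts = H.T @ syndrome            # unsatisfied checks touching each bit
        r[np.argmax(counts)] ^= 1
    return r

codeword = np.zeros(7, dtype=int)          # the all-zero codeword is always valid
received = codeword.copy()
received[2] ^= 1                           # single bit error on the channel
decoded = bit_flip_decode(received, H)
print(decoded)
```

The sum-product algorithm replaces these hard flips with iterative exchanges of log-likelihood messages between bit and check nodes, which is what the GPU implementation parallelizes.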
Code-aided carrier synchronization based LDPC-Hadamard code
Wanchun Gong, Zhiping Shi, Fajian Tang, et al.
Several recent papers have addressed the issue of synchronization in communication systems. In this paper, a method of code-aided carrier synchronization is proposed, based on an LDPC-Hadamard code at extremely low signal-to-noise ratios (SNRs). The system consists of two parts: a low-rate LDPC-Hadamard code that is close to the Shannon limit at extremely low SNR, and an EM-based carrier synchronizer that makes use of the posterior probabilities of the data symbols provided by the iterative decoder. The performance of the proposed synchronizer is illustrated by simulations in terms of the BER and the root-mean-square error (RMSE). Simulation results show that the RMSE of the estimated parameters is very close to the modified Cramer-Rao bound.
Satellite Data Processing I
Watermarking scheme for tampering detection in remote sensing images using variable size tiling and DWT
In this paper, a semi-fragile watermarking scheme specifically developed for remote sensing images is presented.
The method can be tuned to embed the mark depending on the content and the signature to be protected. The
suggested method is based on tiling the original three-dimensional images into blocks of different sizes according
to the relevance of the area to protect (bigger blocks are used for less relevant areas). For each of these blocks,
the discrete wavelet transform (DWT) is applied to each selected spectral band and the obtained LL DWT
sub-bands are used to build a Tree-Structured Vector Quantization tree. This tree is then modified using an
iterative algorithm until it satisfies some criterion. Once the target value is reached, the marked block is obtained
using the new LL DWT sub-band together with the other original sub-bands (LH, HL and HH) of the block.
A secret key produces a different criterion for each block in order to avoid copy-and-replace attacks. The use
of the LL DWT sub-band for each spectral band makes it possible to obtain robustness against near-lossless
compression attacks and, at the same time, relatively strong modifications are detected as tampering.
Comparison of support vector machine-based processing chains for hyperspectral image classification
Many different approaches have been proposed in recent years for remotely sensed hyperspectral image classification.
Despite the variety of techniques designed to tackle the aforementioned problem, the definition of
standardized processing chains for hyperspectral image classification is a difficult objective, which may ultimately
depend on the application being addressed. Generally speaking, a hyperspectral image classification
chain may be defined from two perspectives: 1) the provider's viewpoint, and 2) the user's viewpoint, where the
first part of the chain comprises activities such as data calibration and geo-correction aspects, while the second
part of the chain comprises information extraction processes from the collected data. The modules in the second
part of the chain (which constitutes our main focus in this paper) should ideally be flexible enough to accommodate not only different application scenarios, but also different hyperspectral imaging instruments with varying characteristics and spatial and spectral resolutions. In this paper, we evaluate the performance of
different processing chains resulting from combinations of modules for dimensionality reduction, feature extraction/
selection, image classification, and spatial post-processing. The support vector machine (SVM) classifier
is adopted as a baseline due to its ability to classify hyperspectral data sets using limited training samples.
A specific classification scenario is investigated, using a reference hyperspectral data set collected by NASA's
Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the Indian Pines region in Indiana, USA.
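A drastically simplified, self-contained sketch of such a chain (synthetic two-class "spectra", PCA for dimensionality reduction, and a linear SVM trained with the Pegasos subgradient method, rather than the paper's kernel SVM on AVIRIS data) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n, bands = 80, 120
# Two synthetic "spectral classes" with a small per-band mean offset.
X = np.vstack([rng.normal(0.0, 1.0, (n, bands)),
               rng.normal(0.6, 1.0, (n, bands))])
y = np.r_[-np.ones(n), np.ones(n)]

# Step 1: dimensionality reduction with PCA (top 5 principal components).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# Step 2: linear SVM trained with the Pegasos subgradient method.
def pegasos(Z, y, lam=0.01, epochs=200):
    w = np.zeros(Z.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ Z[i]) < 1:        # margin violated: hinge-loss step
                w = (1 - eta * lam) * w + eta * y[i] * Z[i]
            else:                            # only the regularizer shrinks w
                w = (1 - eta * lam) * w
    return w

w = pegasos(Z, y)
acc = np.mean(np.sign(Z @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

A full chain as evaluated in the paper would add feature selection and spatial post-processing modules around this core.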
Fast multi-spectral image registration based on a statistical learning technique
Statistical learning techniques have been used to dramatically speed up keypoint matching and image registration.
However, they are rarely applied to multi-spectral images. Statistical learning techniques regard various intensities as
distinctive patterns. Thus, corresponding features extracted from multi-spectral images are recognized as different
patterns, because the features have different intensity characteristics. In order to overcome this problem, we propose a
novel statistical learning method that can be extended to multi-spectral images. The proposed approach obtains responses
from multiple classifiers that are trained with well-registered multi-spectral images, in contrast to earlier approaches
using one classifier. The responses of corresponding features can be similarly characterized as being of the same class
even though the intensities of the corresponding features are quite different. The experimental results show that our
method provides good performance on multi-spectral image registration compared to current methods.
High-order statistics Harsanyi-Farrand-Chang method for estimation of virtual dimensionality
Virtual dimensionality (VD) was introduced as a definition of the number of spectrally distinct signatures in hyperspectral data, where a method developed by Harsanyi, Farrand, and Chang, referred to as the HFC method, was used to estimate the VD. Unfortunately, some controversial issues have arisen due to misinterpretation of the VD. Since the non-literal
(spectral) information is the most important and critical for hyperspectral data to be preserved, the VD is particularly
defined to address this issue as the number of spectrally distinct signatures present in the data where each spectral
dimension is used to accommodate one specific signature. With this interpretation the VD is actually defined as the
minimum number of spectral dimensions used to characterize the hyperspectral data. In addition, since hyperspectral
targets of interest are generally insignificant and their occurrences have low probabilities with small populations, their
contributions to 2nd order statistics are usually very limited. Consequently, the HFC method using eigenvalues to
determine the VD may not be applicable for this purpose. Therefore, this paper revisits the VD and extends the HFC method to a high-order-statistics HFC method for estimating the VD for this type of hyperspectral target.
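For reference, the 2nd-order HFC idea that the paper extends can be sketched as follows: compare the eigenvalues of the sample correlation and covariance matrices, and count the band indices where the former exceeds the latter by more than a threshold. The data, threshold, and sizes below are synthetic and purely illustrative.

```python
import numpy as np

def hfc_vd(pixels, threshold):
    """2nd-order HFC estimate of virtual dimensionality: count band indices
    where the correlation-matrix eigenvalue exceeds the covariance-matrix
    eigenvalue by more than `threshold` (signal energy beyond the noise)."""
    R = (pixels.T @ pixels) / pixels.shape[0]   # sample correlation matrix
    K = np.cov(pixels, rowvar=False)            # sample covariance matrix
    lam_r = np.sort(np.linalg.eigvalsh(R))[::-1]
    lam_k = np.sort(np.linalg.eigvalsh(K))[::-1]
    return int(np.sum(lam_r - lam_k > threshold))

# Synthetic pixels: 3 spectrally distinct signatures mixed over 30 bands + noise.
rng = np.random.default_rng(1)
bands, n_sig, n_pix = 30, 3, 5000
signatures = rng.uniform(0.5, 1.5, (n_sig, bands))
abundances = rng.dirichlet(np.ones(n_sig), size=n_pix)
pixels = abundances @ signatures + 0.01 * rng.standard_normal((n_pix, bands))
print(hfc_vd(pixels, threshold=0.01))
```

The paper's point is that low-probability targets barely perturb these 2nd-order eigenvalues, motivating the move to high-order statistics.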
Satellite Data Processing II
Remote sensing image restoration based on compressive sensing and two-step iteration shrinkage algorithm
This paper proposes a new regularization algorithm combining the wavelet-based and contourlet-based regularization
items based on the Compressive Sensing (CS) theorem. The new algorithm aims at gaining maximum benefit by
combining the multiscale and multiresolution properties common to both wavelet and contourlet schemes, while
simultaneously incorporating their individual properties of point singularity and line singularity, respectively. CS is applied here to remote sensing image deblurring, which has great practical significance owing to the savings in hardware cost and the faster transmission it enables. Experimental results show the method achieves improvements in peak signal-to-noise ratio (PSNR) and correlation as compared to traditional regularization algorithms.
Enhancement of spatial resolution of hyperspectral imagery using iterative back projection
Increasing the spatial resolution of panchromatic images and multispectral images is a classical problem in remote sensing. However, spatial resolution enhancement of hyperspectral imagery is still in its infancy. In this paper, we propose a new method for increasing the spatial resolution of a hyperspectral data cube using an iterative back projection (IBP) based method. We also develop a new metric to measure the visual quality of the enhanced images. This metric is suited to measuring the visual quality of an image for which no full-reference image is available but a low-spatial-resolution image is. Experimental results confirm the superiority of the proposed method.
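The core IBP loop can be sketched in a few lines (this toy version uses block-average downsampling and nearest-neighbour back projection on one synthetic band, not the authors' full pipeline or metric):

```python
import numpy as np

def downsample(img, f=2):
    """Simulate the low-resolution sensor: average f x f blocks."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f=2):
    """Back-project an error image by nearest-neighbour replication."""
    return np.kron(img, np.ones((f, f)))

def ibp(lr, f=2, iters=10, step=1.0):
    """Iterative back projection: refine an HR estimate until its simulated
    LR version matches the observed LR image."""
    hr = upsample(lr, f)                    # initial high-resolution guess
    for _ in range(iters):
        err = lr - downsample(hr, f)        # residual in LR space
        hr = hr + step * upsample(err, f)   # back-project the residual
    return hr

rng = np.random.default_rng(0)
truth = rng.uniform(0, 1, (16, 16))         # one band of an HR "cube"
lr = downsample(truth)                      # observed low-resolution band
hr = ibp(lr)
print(np.abs(downsample(hr) - lr).max())    # LR-consistency of the result
```

IBP guarantees consistency with the low-resolution observation; the choice of back-projection kernel determines how the missing high-frequency detail is distributed.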
GPU implementation of fully constrained linear spectral unmixing for remotely sensed hyperspectral data exploitation
Spectral unmixing is an important task for remotely sensed hyperspectral data exploitation. The spectral signatures
collected in natural environments are invariably a mixture of the pure signatures of the various materials found within the spatial extent of the ground instantaneous field of view of the imaging instrument. Spectral unmixing aims at inferring such pure spectral signatures, called endmembers, and the material fractions, called
fractional abundances, at each pixel of the scene. A standard technique for spectral mixture analysis is linear
spectral unmixing, which assumes that the collected spectra at the spectrometer can be expressed in the form
of a linear combination of endmembers weighted by their corresponding abundances, expected to obey two constraints,
i.e. all abundances should be non-negative, and the sum of abundances for a given pixel should be
unity. Several techniques have been developed in the literature for unconstrained, partially constrained and fully
constrained linear spectral unmixing, which can be computationally expensive (in particular, for complex high-dimensional scenes with a large number of endmembers). In this paper, we develop new parallel implementations
of unconstrained, partially constrained and fully constrained linear spectral unmixing algorithms. The implementations
have been developed in programmable graphics processing units (GPUs), an exciting development
in the field of commodity computing that fits very well the requirements of on-board data processing scenarios,
in which low-weight and low-power integrated components are mandatory to reduce mission payload. Our experiments,
conducted with a hyperspectral scene collected over the World Trade Center area in New York City,
indicate that the proposed implementations provide relevant speedups over the corresponding serial versions in
latest-generation Tesla C1060 GPU architectures.
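The unconstrained and sum-to-one constrained (partially constrained) variants have closed forms, sketched below on a noise-free synthetic pixel; the fully constrained case additionally enforces non-negativity and requires an iterative active-set solver, which is omitted here.

```python
import numpy as np

def unmix(M, x):
    """Linear unmixing of pixel x against endmember matrix M (bands x p):
    unconstrained least squares, then the closed-form sum-to-one correction."""
    G = np.linalg.inv(M.T @ M)
    a_ls = G @ M.T @ x                        # unconstrained abundances
    ones = np.ones(M.shape[1])
    lam = (ones @ a_ls - 1.0) / (ones @ G @ ones)
    return a_ls - lam * (G @ ones)            # sum-to-one constrained abundances

rng = np.random.default_rng(0)
bands, p = 50, 3
M = rng.uniform(0, 1, (bands, p))             # three hypothetical endmembers
a_true = np.array([0.6, 0.3, 0.1])            # non-negative, sums to one
x = M @ a_true                                # noise-free mixed pixel
a_hat = unmix(M, x)
print(a_hat.round(3), a_hat.sum())
```

On a GPU, this per-pixel computation is embarrassingly parallel: `G`, `G @ ones`, and the denominator are shared across pixels, and each thread handles one pixel.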
Satellite Data Compression II
Quality analysis in N-dimensional lossy compression of multispectral remote sensing time series images
This work aims to determine an efficient procedure (balanced between quality and compression ratio) for compressing
multispectral remote sensing time series images in a 4-dimensional domain (2 spatial, 1 spectral and 1 temporal
dimension). The main factors studied were: spectral and temporal aggregation, landscape type, compression ratio, cloud cover, thermal segregation, and no-data regions.
In this study, the three-dimensional Discrete Wavelet Transform (3d-DWT) was used as the compression methodology, implemented in the Kakadu software with the JPEG2000 standard. This methodology was applied to a
series of 2008 Landsat-5 TM images that covered three different landscapes, and to one scene (19-06-2007) from a
hyperspectral CASI sensor.
The results show that 3d-DWT significantly improves the quality of the results for the hyperspectral images; for
example, it obtains the same quality as independently compressed images at a double compression ratio. The differences
between the two compression methodologies are smaller in the Landsat spectral analysis than in the CASI analysis, and
the results are more irregular depending on the factor analyzed. The time dimensional analysis for the Landsat series
images shows that 3d-DWT does not improve on band-independent compression.
Weighted-based arithmetic coding for satellite image compression
To address the problems of context dilution and the complicated context quantization of high-order contexts, this paper proposes a new context arithmetic coder using weighted-based context modeling. By classifying the weights with non-uniform quantization, the conventional high-order context-based arithmetic coding method can be approximated as
low-order arithmetic coding. Compared with the existing high-order context modeling, the proposed method not only
decreases the complexity of computation but also effectively improves the performance of entropy coding. Experimental
results show that the algorithm with proposed weighted-based arithmetic coding method performs better than SPECK,
SPIHT and JPEG2000.
A novel VLSI architecture of arithmetic encoder with reduced memory in SPIHT
This paper presents the VLSI architecture of a reduced-memory context-based arithmetic coder used in SPIHT, targeting high-speed real-time applications. For hardware
implementation, a dedicated context model is proposed for the coder. Each context can be
processed in parallel and high speed operators are used for interval calculations. An embedded
register array is used for cumulative frequency update. As a result, the coder can consume one
symbol at each clock cycle. After FPGA synthesis and simulation, the throughput of our coder is
comparable with those of similar hardware architectures implemented in ASIC technology. In particular, the memory capacity of the coder is smaller than that of corresponding systems.
Clustered linear prediction for lossless compression of hyperspectral images using adaptive prediction length
This paper extends the clustered differential pulse code modulation (C-DPCM) lossless compression method for hyperspectral images. In the C-DPCM method the spectra of a hyperspectral image are clustered, and an optimized predictor is calculated for each cluster. Prediction is performed using a linear predictor. After prediction, the difference between the predicted and original values is computed. The difference is entropy-coded using an adaptive entropy coder for each cluster. The proposed use of adaptive prediction length is shown to have a lower bits-per-pixel value than the original C-DPCM method on the new AVIRIS test images. Both calibrated and uncalibrated images showed improvement over the fixed prediction length.
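The per-cluster linear prediction step can be sketched as follows (a single cluster and synthetic smooth spectra stand in for the clustering and entropy-coding stages, which are not shown):

```python
import numpy as np

def predict_band(spectra, band, length):
    """Least-squares linear predictor of one band from the previous
    `length` bands, shared by all spectra in a cluster (as in C-DPCM)."""
    ctx = spectra[:, band - length:band]      # previous bands (pixels x length)
    target = spectra[:, band]
    coef, *_ = np.linalg.lstsq(ctx, target, rcond=None)
    residual = target - ctx @ coef            # what the entropy coder would see
    return coef, residual

rng = np.random.default_rng(0)
n_pix, n_bands = 500, 32
# Smooth synthetic spectra: neighbouring bands are strongly correlated,
# which is what makes linear prediction pay off.
base = np.cumsum(rng.normal(0, 0.1, (n_pix, n_bands)), axis=1) + 5.0
coef, residual = predict_band(base, band=10, length=3)
print(residual.std(), base[:, 10].std())
```

Choosing `length` adaptively per cluster, rather than fixing it, is the extension the abstract describes.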
Satellite Data Communications II
An iterative detection method of MIMO over spatial correlated frequency selective channel: using list sphere decoding for simplification
Zhiping Shi, Bing Yan
In multiple-input multiple-output (MIMO) wireless systems, combining good channel codes (e.g., non-binary repeat-accumulate codes) with adaptive turbo equalization is a good option for obtaining better performance at lower complexity over spatially correlated frequency-selective (SCFS) channels. The key to this method is, after joint-antenna MMSE detection (JAD/MMSE) based on interference cancellation using soft information, to treat the detection result as the output of an equivalent Gaussian flat-fading channel and perform maximum-likelihood (ML) detection to obtain a more accurate estimate. However, ML detection brings a large increase in complexity, which is unacceptable. In this paper, a low-complexity method called list sphere decoding is introduced and applied to replace the ML detection in order to simplify the adaptive iterative turbo equalization system.
Efficient broadcasting for scalable video coding streaming using random linear network coding
In order to improve the reconstructed quality of video sequences, a random linear network coding (RLNC) based video transmission scheme for scalable video coding (SVC) is proposed for the wireless broadcast scenario. A packetization model for SVC streaming is introduced to transmit the scalable bit streams conveniently, on the basis of which an RLNC-based unequal error protection (RUEP) method is proposed to improve the efficiency of video transmission. RUEP's advantage lies in the fact that the redundancy protection of UEP can be efficiently determined by the capacity of the broadcast channel. Simulation results show that RUEP improves the reconstructed quality of video sequences compared with traditional store-and-forward (SF) based transmission schemes.
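A miniature RLNC round trip over GF(2) illustrates the mechanism (practical schemes typically use GF(2^8); the packets and coefficient vectors below are hand-picked toy data, with the coefficients chosen to be linearly independent):

```python
import numpy as np

def gf2_solve(coeffs, coded):
    """RLNC decoding: Gaussian elimination over GF(2) on [coeffs | coded]."""
    A = np.concatenate([coeffs, coded], axis=1) % 2
    k = coeffs.shape[1]
    row = 0
    for col in range(k):
        pivot = next(r for r in range(row, len(A)) if A[r, col])
        A[[row, pivot]] = A[[pivot, row]]        # bring a pivot into place
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2       # XOR-eliminate the column
        row += 1
    return A[:k, k:]                             # [I | decoded packets]

rng = np.random.default_rng(0)
k, plen = 4, 8
packets = rng.integers(0, 2, (k, plen))          # 4 source packets of 8 bits

# Coefficient vectors of the coded packets that reached the receiver,
# chosen here to be linearly independent over GF(2).
coeffs = np.array([[1, 0, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 1],
                   [1, 1, 0, 1]])
coded = coeffs @ packets % 2                     # each row: XOR-mix of sources
decoded = gf2_solve(coeffs, coded)
print((decoded == packets).all())
```

Any k coded packets with independent coefficient vectors suffice for decoding, which is what makes RLNC attractive for lossy broadcast channels.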
Satellite Data Processing III
GPU implementation of target and anomaly detection algorithms for remotely sensed hyperspectral image analysis
Automatic target and anomaly detection are considered very important tasks for hyperspectral data exploitation.
These techniques are now routinely applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, and forestry. Many of these applications require timely responses for
swift decisions which depend upon high computing performance of algorithm analysis. However, with the recent
explosion in the amount and dimensionality of hyperspectral imagery, this problem calls for the incorporation
of parallel computing techniques. In the past, clusters of computers have offered an attractive solution for fast
anomaly and target detection in hyperspectral data sets already transmitted to Earth. However, these systems
are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power
integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time,
i.e., at the same time as the data is collected by the sensor. An exciting new development in the field of
commodity computing is the emergence of commodity graphics processing units (GPUs), which can now bridge
the gap towards on-board processing of remotely sensed hyperspectral data. In this paper, we describe several
new GPU-based implementations of target and anomaly detection algorithms for hyperspectral data exploitation.
The parallel algorithms are implemented on latest-generation Tesla C1060 GPU architectures, and quantitatively
evaluated using hyperspectral data collected by NASA's AVIRIS system over the World Trade Center (WTC)
in New York, five days after the terrorist attacks that collapsed the two main towers in the WTC complex.
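As a compact reference point, the global RX detector (a standard anomaly detection baseline; the paper implements several such algorithms on GPUs) scores each pixel by its Mahalanobis distance from the scene background; the scene below is synthetic with one implanted anomaly.

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: Mahalanobis distance of every pixel
    spectrum from the background mean and covariance."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    # quadratic form d_i^T * cov_inv * d_i for every pixel i
    return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(cube.shape[:2])

# Synthetic scene: Gaussian background + one implanted anomalous pixel.
rng = np.random.default_rng(0)
h, w, bands = 32, 32, 20
cube = rng.normal(0.0, 1.0, (h, w, bands))
cube[10, 17] += 6.0                        # spectral anomaly at (10, 17)
scores = rx_scores(cube)
print(np.unravel_index(scores.argmax(), scores.shape))
```

The per-pixel quadratic form is independent across pixels, so on a GPU one thread per pixel evaluates it after the shared covariance inverse is computed once.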
Parallel K-dimensional tree classification based on semi-matroid structure for remote sensing applications
Satellite remote sensing images can be interpreted to provide important information of large-scale natural resources, such
as lands, oceans, mountains, rivers, forests and minerals for Earth observations. Recent advances of remote sensing
technologies have improved the availability of satellite imagery in a wide range of applications including high
dimensional remote sensing data sets (e.g. high spectral and high spatial resolution images). The information of high
dimensional remote sensing images obtained by state-of-the-art sensor technologies can be identified more accurately
than images acquired by conventional remote sensing techniques. However, the large volume of image data requires a huge amount of storage and computing time, and the computational complexity of data processing for high-dimensional remote sensing data analysis increases accordingly. Consequently, this paper proposes a novel classification
algorithm based on semi-matroid structure, known as the parallel k-dimensional tree semi-matroid (PKTSM)
classification, which adopts a new hybrid parallel approach to deal with high dimensional data sets. It is implemented by
combining the message passing interface (MPI) library, the open multi-processing (OpenMP) application programming
interface and the compute unified device architecture (CUDA) of graphics processing units (GPU) in a hybrid mode. The
effectiveness of the proposed PKTSM is evaluated by using MODIS/ASTER airborne simulator (MASTER) images and
airborne synthetic aperture radar (AIRSAR) images for land cover classification during the Pacrim II campaign. The
experimental results demonstrated that the proposed hybrid PKTSM can significantly improve the performance in terms
of both computational speed-up and classification accuracy.
Improved panchromatic sharpening
In this paper, we present a new panchromatic sharpening method based on quality parameter optimization. Traditionally,
quality metrics such as UIQI, CORR, and ERGAS have been used to assess the quality of panchromatic sharpening.
Generally, HPF (high pass filtering) based panchromatic sharpening methods produce good performance. However, one
problem with these methods is the peak noise that arises due to a small denominator value when the mean shift problem
is addressed. In order to address this problem, we introduce an offset value that was optimized based on a quality metric.
We assumed that the offset value was invariant with respect to the spatial scale, and it was used to enhance the resolution
of the original multispectral images by using a high-resolution panchromatic image. The experimental results
demonstrate that the proposed method showed better performance than some existing panchromatic sharpening methods.
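The role of the offset can be seen in a ratio-style HPF sharpening sketch (the filter, offset value, and synthetic bands below are illustrative, not the paper's optimized parameters):

```python
import numpy as np

def box_blur(img, r=2):
    """Simple separable box filter (stand-in for the HPF's low-pass stage)."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpf_sharpen(ms, pan, offset=1e-2):
    """Ratio-style HPF pansharpening: inject pan detail into the MS band.
    The offset keeps the denominator away from zero, which is the source
    of the 'peak noise' the abstract describes."""
    low = box_blur(pan)
    return ms * (pan + offset) / (low + offset)

rng = np.random.default_rng(0)
pan = rng.uniform(0.1, 1.0, (32, 32))      # high-resolution panchromatic band
ms = box_blur(pan) * 0.8                   # synthetic co-registered MS band
fused = hpf_sharpen(ms, pan)
print(float(np.abs(fused - 0.8 * pan).mean()))
```

Without the offset, near-zero values of the low-pass denominator would blow up individual pixels; the paper's contribution is optimizing this offset against a quality metric.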
Image deblurring by motion estimation for remote sensing
The imagery resolution of imaging systems for remote sensing is often limited by image degradation resulting from
unwanted motion disturbances of the platform during image exposures. Since the form of the platform vibration can be arbitrary, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A
deblurring method which combines motion estimation and image deconvolution both for area-array and TDI remote
sensing has been proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed
detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from estimated image motion vectors.
Eventually, the clear image can be recovered by the Richardson-Lucy (RL) iterative deconvolution algorithm from the
blurred image of the prime camera with the constructed PSF. The image deconvolution for the area-array detector is direct, while for the TDI-CCD detector an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed approach is convincing: blurred and distorted images can be properly recovered, not only for visual observation but also with significant gains in objective evaluation metrics.
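The Richardson-Lucy step can be sketched as follows once a PSF estimate is in hand (this toy version uses a known motion PSF, circular boundary handling via the FFT, and a synthetic scene, not the paper's estimated PSF or TDI handling):

```python
import numpy as np

def circ_blur(img, transfer):
    """Circular convolution implemented in the Fourier domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * transfer))

def richardson_lucy(blurred, psf, iters=200):
    """Richardson-Lucy iterative deconvolution with a known PSF:
    estimate <- estimate * correlate(psf, blurred / convolve(psf, estimate))."""
    transfer = np.fft.fft2(psf, blurred.shape)
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iters):
        reblurred = circ_blur(estimate, transfer)
        ratio = blurred / np.maximum(reblurred, 1e-12)
        # correlation = convolution with the conjugate transfer function
        estimate = estimate * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(transfer)))
    return estimate

psf = np.full((1, 5), 1.0 / 5.0)          # 5-pixel horizontal motion blur, sums to 1
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0                 # bright target on a dark scene
degraded = circ_blur(truth, np.fft.fft2(psf, truth.shape))
restored = richardson_lucy(degraded, psf)
print(np.abs(restored - truth).mean(), np.abs(degraded - truth).mean())
```

The multiplicative update preserves non-negativity, which is why RL is a common choice for imaging deconvolution once the PSF has been reconstructed from motion vectors.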
Satellite Data Compression III
Simple resiliency improvement of the CCSDS standard for lossless data compression
The Consultative Committee for Space Data Systems (CCSDS) recommends the use of a two-stage strategy
for lossless data compression in space. At the core of the second stage is the Rice coding method. The Rice
compression ratio rapidly decreases in the presence of noise and outliers, since this coder is specially conceived for noiseless data following geometric distributions. This, in turn, makes the CCSDS recommendation too sensitive to outliers in the data, leading to non-optimal ratios in realistic scenarios. In this paper we propose
to substitute the Rice coder of the CCSDS recommendation by a subexponential coder. We show that this
solution offers high compression ratios even when large amounts of noise are present in the data. This is done
by testing both compressors with synthetic and real data. The performance is actually similar to that obtained
with the FAPEC coder, although with slightly higher processing requirements. Therefore, this solution appears
as a simple improvement that can be done to the current CCSDS standard with an excellent return.
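The sensitivity difference shows up directly in the code lengths. Below, `rice_length` is the standard Rice/Golomb code length (unary quotient plus k remainder bits) and `subexp_length` follows the Howard-Vitter subexponential code (k+1 bits for small values, roughly 2*log2(n) - k + 2 bits for large ones); the parameter choice is illustrative.

```python
def rice_length(n, k):
    """Bits used by the Rice code with parameter k:
    unary-coded quotient (n >> k, plus a terminator) + k remainder bits."""
    return (n >> k) + 1 + k

def subexp_length(n, k):
    """Bits used by the subexponential code with parameter k:
    k+1 bits when n < 2**k, otherwise a unary-coded exponent plus
    the b = floor(log2 n) low-order bits of n."""
    if n < (1 << k):
        return k + 1
    b = n.bit_length() - 1            # floor(log2 n)
    return (b - k + 1) + 1 + b

k = 4
for n in (3, 15, 16, 1000, 100000):
    print(n, rice_length(n, k), subexp_length(n, k))
```

An outlier of 1000 costs 67 bits under Rice with k = 4 but only 16 bits under the subexponential code, which is exactly the resiliency gain the abstract reports.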
Implementation of CCSDS data compression for remote sensing image
The FORMOSAT-5 is an optical remote sensing satellite with PAN 2m and MS 4m resolution, which is under
development by the National Space Organization (NSPO) in Taiwan. The payload consists of one Panchromatic (PAN)
band with 12,000 pixels and four Multi-Spectrum (MS) bands with 6000 pixels in the remote sensing instrument. The
image data compression method complies with the Consultative Committee for Space Data Systems (CCSDS) standards [1,2]. The compression ratio can be 1.5 for lossless compression and 3.75 or 7.5 for lossy compression. The Xilinx Virtex-5Q FPGA, XQR5VFX130, is used to achieve near-real-time compression.
4D remote sensing image coding with JPEG2000
Multicomponent data have become popular in several scientific fields such as forest monitoring, environmental studies, and sea-water temperature detection. Nowadays, multicomponent data can be collected more than once per year for the same region. This generates different instances in time of multicomponent data, also called 4D-Data (1D Temporal + 1D Spectral + 2D Spatial).
For multicomponent data, it is important to take inter-band redundancy into account to produce a more compact representation of the image by packing the energy into fewer bands, thus enabling higher compression performance. The principal decorrelators used to remove inter-band redundancy are the Karhunen-Loève Transform (KLT) and the Discrete Wavelet Transform (DWT). Because of the added temporal dimension, the inter-band redundancy among different multicomponent images increases.
In this paper we analyze the influence of the Temporal Dimension (TD) and the Spectral Dimension (SD) of 4D-Data on coding performance with JPEG2000, since the standard supports applying different decorrelation stages and transforms to the components along the different dimensions. We evaluate the effect of applying different decorrelation techniques to the different dimensions, and we also assess the performance of the two main decorrelation techniques, KLT and DWT. Experimental results are provided, showing rate-distortion performance when encoding 4D-Data with KLT and DWT applied to the TD and SD.
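As an illustration of how a spectral KLT packs energy into few components, here is a minimal numpy sketch on a synthetic multicomponent cube (the data and dimensions are made up; JPEG2000 itself applies such decorrelation through its multicomponent transform stages):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic cube: 4 strongly correlated spectral bands of 8x8 pixels.
base = rng.normal(size=(8, 8))
cube = np.stack([(i + 1) * base + rng.normal(scale=0.05, size=(8, 8))
                 for i in range(4)])            # shape (bands, rows, cols)

# KLT along the spectral dimension: eigenvectors of the band covariance.
X = cube.reshape(4, -1)                         # one row per band
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / X.shape[1]
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
Y = eigvecs.T @ X                               # decorrelated components

# Nearly all of the signal energy lands in the last (largest-eigenvalue)
# component, which is what makes the subsequent coding stage more effective.
energy = (Y ** 2).sum(axis=1)
dominant_share = energy[-1] / energy.sum()
```

The same idea extends to the temporal dimension: the stronger the inter-band correlation, the more energy a single component captures.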
Fast multi-symmetry adaptive loop filter algorithm
Show abstract
In order to further improve the coding performance of Block-Based and Quadtree-Based Adaptive Loop Filter
(BQ_ALF), the Fast Multi-Symmetry Adaptive Loop Filter Algorithm (FMS_ALF) is proposed. Firstly, this algorithm
determines the optimal symmetry filter according to area symmetry and the average sum of absolute differences. Then the filter areas are obtained through the block-based and quadtree-based method in I frames, and through motion vectors and Rate Distortion Optimization model A (RDOA) in P or B frames. Finally, the obtained areas are filtered by the optimal symmetry filter. Simulation results show that, compared with BQ_ALF, the proposed algorithm greatly reduces coding time while retaining reconstructed picture quality.
Satellite Data Processing IV
Using remote sensing imagery to monitor sea surface pollution caused by an abandoned gold-copper mine
Show abstract
The Chinkuashih Benshen mine was the largest gold-copper mine in Taiwan before it was abandoned in 1987. However, even though the mine has been closed, minerals still interact with rain and groundwater and flow into the sea. The polluted sea surface appears yellow, green and even white, and the pollutants are carried by the coastal current. In this study, we used optical satellite images to monitor the sea surface. Several image processing algorithms are employed, especially the subpixel technique and the linear mixture model, to estimate the concentration of pollutants. The change detection approach is also applied to track the pollutants. We also conducted chemical analysis of the polluted water to provide ground truth validation. Through correlation analysis between the satellite observations and the ground truth chemical analysis, an effective approach to monitoring water pollution could be established.
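The linear mixture model used for concentration estimation can be sketched as a least-squares unmixing problem. The endmember spectra below are purely hypothetical stand-ins, not measurements from the Chinkuashih site:

```python
import numpy as np

# Hypothetical endmember spectra (reflectance in 5 bands); illustrative only.
water = np.array([0.02, 0.05, 0.10, 0.04, 0.01])
pollutant = np.array([0.20, 0.35, 0.40, 0.30, 0.25])
E = np.column_stack([water, pollutant])         # endmember matrix

# A mixed pixel: 70% clean water, 30% pollutant.
pixel = 0.7 * water + 0.3 * pollutant

# Linear unmixing: least-squares estimate of the abundance fractions,
# giving the per-pixel pollutant concentration.
abundances, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

In practice the abundances would be constrained to be non-negative and sum to one, and the recovered fraction maps would be what the change detection step tracks over time.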
A new system to perform unsupervised and supervised classification of satellite images from Google Maps
Show abstract
In this paper, we describe a new system for unsupervised and supervised classification of satellite images from
Google Maps. The system has been developed using the SwingX-WS library, and incorporates functionalities
such as unsupervised classification of image portions selected by the user (at the maximum zoom level) using
ISODATA and k-Means, and supervised classification using the Minimum Distance and Maximum Likelihood classifiers,
followed by spatial post-processing based on majority voting. Selected regions in the classified portion are used
to train a maximum likelihood classifier able to map larger image areas in a manner transparent to the user.
The system also retrieves areas containing regions similar to those already classified. An experimental validation
of the proposed system has been conducted by comparing the obtained classification results with those provided
by commercial software, such as the popular Research Systems ENVI package.
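A minimum distance classifier of the kind the system offers can be sketched in a few lines; the class names, features, and training samples below are illustrative assumptions:

```python
import numpy as np

# Hypothetical training samples (two spectral features) for two classes.
training = {
    "water": np.array([[0.10, 0.20], [0.12, 0.18], [0.09, 0.22]]),
    "vegetation": np.array([[0.60, 0.80], [0.58, 0.82], [0.63, 0.79]]),
}
means = {name: samples.mean(axis=0) for name, samples in training.items()}

def classify(pixel):
    """Minimum distance rule: assign the pixel to the nearest class mean."""
    return min(means, key=lambda name: np.linalg.norm(pixel - means[name]))

label = classify(np.array([0.55, 0.75]))   # lies near the vegetation mean
```

The maximum likelihood variant replaces the Euclidean distance with a Mahalanobis-style score that also uses each class's covariance.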
Unsupervised segmentation for hyperspectral images using mean shift segmentation
Show abstract
In this paper, we propose an unsupervised segmentation method for hyperspectral images using mean shift filtering.
One major problem of traditional mean shift algorithms is the difficulty of determining kernel bandwidths. We address
this problem by using efficient clustering methods. First, PCA (Principal Component Analysis) was applied to
hyperspectral images and the first three eigenimages were selected. Then, we applied mean shift filtering to the selected
images using a kernel with a small bandwidth. This procedure produced a large number of clusters. In order to merge the
homogeneous clusters, we used the Bhattacharyya distance. Experiments showed promising segmentation results
without requiring user input.
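The Bhattacharyya-distance merging step can be illustrated for 1-D Gaussian clusters (the paper operates on multi-dimensional clusters; the threshold below is a hypothetical choice):

```python
import math

def bhattacharyya_1d(m1, v1, m2, v2):
    """Bhattacharyya distance between two 1-D Gaussians (mean, variance)."""
    return (0.25 * math.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

# Two nearly identical clusters yield a small distance (merge them),
# while a distant cluster yields a large one (keep it separate).
d_close = bhattacharyya_1d(0.50, 0.01, 0.52, 0.01)   # 0.005
d_far = bhattacharyya_1d(0.50, 0.01, 0.90, 0.01)     # 2.0
merge_threshold = 0.05                               # hypothetical threshold
```

Merging by such a statistical distance is what lets a deliberately small mean shift bandwidth over-segment first and still converge to homogeneous regions.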
Accelerating the RTTOV-7 radiative transfer model on graphics processing units
Show abstract
We develop a Graphics Processing Unit (GPU)-based high-performance RTTOV-7 forward model. The RTTOV
forward model performs the fast computation of the radiances, brightness temperatures, overcast radiances, surface to
space transmittances, surface emissivities and pressure-level-to-space transmittances for a given profile vector. A specially optimized high-performance CUDA kernel was used for multi-profile processing. The difference between single-profile
and multi-profile kernels is that in a multi-profile kernel each thread is responsible for computing the results for a single
channel in several profiles. Multi-profile processing gave an over two-fold increase in processing speed compared to single-profile processing. Using GPU processing, we reached promising speedups of 170x and 334x for single-profile and multi-profile processing, respectively. The significant 334x speedup means that the proposed GPU-based high-performance forward model is able to compute one day's worth of 1,296,000 Infrared Atmospheric Sounding Interferometer (IASI) spectra in nearly 12 minutes, whereas the original CPU-based version would take an impractical 3 days.
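As a quick arithmetic check, a 334x speedup over a CPU run of roughly 3 days does land near the stated ~12 minutes:

```python
# Quoted figures: ~3 days on the CPU, 334x speedup on the GPU.
cpu_minutes = 3 * 24 * 60            # 4320 minutes
gpu_minutes = cpu_minutes / 334      # about 12.9 minutes
```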
Analysis of the effects of compression on extraction of features from images
Show abstract
Line features are very important in photogrammetry, and compression often causes a loss of line features in images. In
order to study the effects of compression on feature extraction, a SPOT5 image was selected as test data and the image
was compressed with seven compression ratios. Features of the original and compressed images were extracted with the
Canny operator. A change detection method was used to compare the features of the original and compressed images.
The results show that the features extracted vary with the compression ratio, and the change ratio for those features
increases with increasing compression ratio.
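A change ratio of the kind used here can be computed by counting disagreeing edge pixels between the two maps; the tiny edge maps below are illustrative, not derived from the SPOT5 test data:

```python
# Hypothetical binary edge maps (1 = edge pixel) extracted with the Canny
# operator from the original and a compressed image.
original = [[0, 1, 0],
            [0, 1, 0],
            [0, 1, 1]]
compressed = [[0, 1, 0],
              [0, 0, 0],
              [0, 1, 1]]

# Count pixels where the two edge maps disagree, relative to the
# number of edge pixels in the original.
changed = sum(o != c
              for orow, crow in zip(original, compressed)
              for o, c in zip(orow, crow))
edge_pixels = sum(v for row in original for v in row)
change_ratio = changed / edge_pixels
```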
Maximum orthogonal subspace projection approach to estimating the number of spectral signal sources in hyperspectral imagery
Show abstract
Estimating the number of spectral signal sources, denoted by p, in hyperspectral imagery is very challenging due to the
fact that many unknown material substances can be uncovered by very high spectral resolution hyperspectral sensors.
This paper investigates a recent approach, called maximum orthogonal complement algorithm (MOCA), for this
purpose. The MOCA was originally developed by Kuybeda et al. for estimating the rank of a rare vector space in a high-dimensional noisy data space. Interestingly, the idea of the MOCA is essentially derived from the automatic target
generation process (ATGP) developed by Ren and Chang. By appropriately interpreting the MOCA in the context of the ATGP, a potentially useful technique, called maximum orthogonal subspace projection (MOSP), can be further developed, where determining a stopping rule for the ATGP turns out to be equivalent to estimating the rank of a rare vector space by the MOCA, and the number of targets that the stopping rule allows the ATGP to generate is the desired value of the parameter p. Furthermore, a Neyman-Pearson detector version of MOCA, NPD-MOCA, can also be derived from the MOSP, as opposed to the MOCA, which can be considered a Bayes detector. Surprisingly, the NPD-MOCA has a design rationale very similar to that of the Harsanyi-Farrand-Chang method, which was developed to estimate the virtual dimensionality (VD), defined as the value of p.
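The ATGP iteration underlying MOSP can be sketched as repeated orthogonal subspace projection: each round projects the data onto the orthogonal complement of the targets found so far and picks the pixel with the largest residual norm. The toy scene below is an illustration, not the paper's data:

```python
import numpy as np

def atgp(pixels, num_targets):
    """Pick targets by repeatedly projecting onto the orthogonal complement
    of the targets found so far and taking the largest-residual pixel."""
    targets = []
    U = np.empty((pixels.shape[1], 0))           # selected target spectra
    for _ in range(num_targets):
        if U.shape[1]:
            P = np.eye(pixels.shape[1]) - U @ np.linalg.pinv(U)
        else:
            P = np.eye(pixels.shape[1])
        residual = pixels @ P.T
        idx = int(np.argmax((residual ** 2).sum(axis=1)))
        targets.append(idx)
        U = np.column_stack([U, pixels[idx]])
    return targets

# Toy scene: two mixed background pixels followed by three pure spectra.
pixels = np.array([[1.0, 1.0, 0.5],
                   [0.5, 1.0, 1.0],
                   [10.0, 0.0, 0.0],
                   [0.0, 8.0, 0.0],
                   [0.0, 0.0, 6.0]])
found = atgp(pixels, 3)    # recovers the three pure pixels in turn
```

Estimating p then amounts to deciding when the residual norms have dropped to the noise floor, which is where the MOCA-style stopping rule comes in.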
Satellite Data Compression IV
Optimizing GPS data transmission using entropy coding compression
Alberto G. Villafranca,
Iu Mora,
Patrícia Ruiz-Rodríguez,
et al.
Show abstract
The Global Positioning System (GPS) has long been used as a scientific tool, and it has turned into a very powerful
technique in domains like geophysics, where it is commonly used to study the dynamics of a large variety of
systems, like glaciers, tectonic plates and others. In these cases, the large distances between receivers as well
as their remote locations usually pose a challenge for data transmission. The standard format for scientific
applications is a compressed RINEX file, a raw data format that allows post-processing. Its associated compression algorithm is based on a pre-processing stage followed by a commercial data compressor. In this paper we present a new compression method which achieves better compression ratios with faster operation.
We have improved the pre-compression stage, split the resulting file into two, and applied the most appropriate
compressor to each file. FAPEC, a highly resilient entropy coder, is applied to the observables file. The results
obtained so far demonstrate that it is possible to obtain average compression gains of about 35% with respect
to the original compressor.
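The benefit of a pre-processing stage ahead of an entropy coder such as FAPEC can be illustrated with simple differencing (the sample stream below is invented, and real RINEX pre-processing is more elaborate):

```python
# Invented carrier-phase-like observable stream (smooth, slowly varying).
samples = [23456789, 23456795, 23456802, 23456808, 23456815]

# Pre-processing: second-order differences turn the smooth series into
# small residuals that an entropy coder handles efficiently.
d1 = [b - a for a, b in zip(samples, samples[1:])]   # [6, 7, 6, 7]
d2 = [b - a for a, b in zip(d1, d1[1:])]             # [1, -1, 1]

# The residuals need far fewer bits per sample than the raw values.
raw_bits = max(s.bit_length() for s in samples)           # 25 bits
residual_bits = max(abs(r).bit_length() + 1 for r in d2)  # 2 bits (with sign)
```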
Compression of hyperspectral imagery based on compressive sensing and interband prediction
Show abstract
An efficient compression algorithm for hyperspectral imagery based on compressive sensing and interband linear
prediction is proposed which has the advantages of high compression performance and low computational complexity by
exploiting the strong spectral correlation. At the encoder, the random measurements of each frame are made, quantized
and transmitted to the decoder independently. The prediction parameters between adjacent bands are also estimated
using the linear prediction algorithm and transmitted to the decoder. At the decoder, a new reconstruction algorithm with
the proposed initialization and stopping criterion is employed to reconstruct the current frames with the assistance of the
prediction frame, which is derived from the previous reconstructed neighboring frames and the received prediction
parameters using the same prediction algorithm. Experimental results show that the proposed algorithm not only obtains gains of about 1.1 dB but also greatly decreases decoding complexity. Furthermore, our algorithm offers low-complexity encoding and lends itself to hardware implementation.
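The interband linear prediction step can be sketched as a least-squares fit of scale and offset between adjacent bands; the synthetic bands below are an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic adjacent bands: band2 is roughly a scaled, shifted copy of band1.
band1 = rng.uniform(100.0, 200.0, size=256)
band2 = 0.9 * band1 + 12.0 + rng.normal(scale=0.5, size=256)

# Least-squares fit of (scale, offset); only these two parameters need to
# be transmitted as side information.
A = np.column_stack([band1, np.ones_like(band1)])
(scale, offset), *_ = np.linalg.lstsq(A, band2, rcond=None)
prediction = scale * band1 + offset

# The prediction residual is far smaller than the band itself, so the
# decoder starts its reconstruction from a very good initial estimate.
residual_rms = float(np.sqrt(np.mean((band2 - prediction) ** 2)))
band_rms = float(np.sqrt(np.mean(band2 ** 2)))
```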
A GPU-based implementation of predictive partitioned vector quantization for compression of ultraspectral sounder data
Show abstract
Recently there has been a surge in the use of graphics processing units (GPUs) to speed up scientific computations. By identifying the time-dominant portions of the code that can be executed in parallel, significant speedup can be achieved
by a GPU-based implementation. For the voluminous ultraspectral sounder data, lossless compression is desirable to
save storage space and transmission time without losing precision in retrieval of geophysical parameters. Predictive
partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression scheme for ultraspectral
sounder data. It consists of linear prediction, bit partition, vector quantization, and entropy coding. The two most time-consuming stages, linear prediction and vector quantization, are chosen for GPU-based implementation. By exploiting
the data parallel characteristics of these two stages, a speedup of 42x has been achieved in our GPU-based
implementation of the PPVQ compression scheme.
Hyperspectral image compression using distributed arithmetic coding and bit-plane coding
Show abstract
Hyperspectral images have very large data sizes and are highly correlated across neighboring bands; it is therefore necessary to achieve efficient compression under the constraint of low encoding complexity. In this paper, we propose a method based on both embedded block partitioning and lossless adaptive-distributed arithmetic coding (LADAC). Combined with the three-dimensional wavelet transform and the SW-SPECK algorithm, LADAC is applied according to the correlation between adjacent bit-planes. Experimental results show that our proposed algorithm outperforms 3D-SPECK; furthermore, our method does not need to take inter-band prediction or transforms into account, so its complexity is relatively low.