Proceedings Volume 10007

High-Performance Computing in Geoscience and Remote Sensing VI


Volume Details

Date Published: 19 December 2016
Contents: 6 Sessions, 20 Papers, 15 Presentations
Conference: SPIE Remote Sensing 2016
Volume Number: 10007

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10007
  • High Performance Computing I
  • High Performance Computing II
  • High Performance Computing III
  • High Performance Computing IV
  • Poster Session
Front Matter: Volume 10007
Front Matter: Volume 10007
This PDF file contains the front matter associated with SPIE Proceedings Volume 10007, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
High Performance Computing I
A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study
Agustín García-Flores, Abel Paz-Gallardo, Antonio Plaza, et al.
This paper describes a new web platform dedicated to the classification of satellite images called Hypergim. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms such as Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of the classes present in the images for use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm, based on a modification of the well-known CURFIL software package. The use of this type of algorithm for image classification is widespread today thanks to its precision and ease of training. The present implementation of Random Forest was developed on the CUDA platform, which enables us to exploit the potential of several models of NVIDIA graphics processing units, using them to execute general-purpose computing tasks such as image classification algorithms. In addition to CUDA, we use other parallel libraries, such as Intel Boost, to take advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed on a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool concurrently. The experimental results indicate that the new algorithm widely outperforms the previous unsupervised algorithms implemented in Hypergim, both in runtime and in the precision of the resulting classification.
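As an illustration of the kind of supervised pixel classification described in this abstract (not the authors' CUDA/CURFIL implementation), the following minimal sketch trains a Random Forest on user-supplied training pixels and labels an image with scikit-learn; the tile, samples, and class labels are hypothetical placeholders.

```python
# Minimal CPU sketch of supervised pixel classification with a Random Forest,
# analogous in spirit to the platform above (the paper's version is GPU/CUDA-based).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_image(image, train_pixels, train_labels, n_trees=100):
    """image: (rows, cols, bands); train_pixels: (n, bands); train_labels: (n,)."""
    rows, cols, bands = image.shape
    clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)  # multithreaded CPU training
    clf.fit(train_pixels, train_labels)
    flat = image.reshape(-1, bands)            # classify every pixel of the tile
    return clf.predict(flat).reshape(rows, cols)

# Hypothetical usage: an RGB map tile and user-marked training samples.
img = np.random.rand(64, 64, 3)
X = np.random.rand(200, 3)
y = np.random.randint(0, 4, 200)
label_map = classify_image(img, X, y)
```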
A study on computation optimization method for three-dimension scene light field radiation simulation in visible light band
Ligang Li, Wei Ni, Xiaoshan Ma, et al.
The simulation of high-accuracy three-dimensional (3D) scene optical field radiation distributions can provide the input for camera design, optimization of key parameters, and testing of various imaging models, and it helps reduce the strong coupling between the imaging models and the scene simulation. However, the computational load of the simulation is extremely large, and a non-optimized computing method cannot perform it efficiently. Therefore, a study was carried out on algorithm optimization and on the use of a high-performance platform to accelerate the computation. On the one hand, the scene visibility was pre-computed, including the visibility from the light source to each facet in the scene and the visibility between facets. A bounding-box acceleration algorithm was adopted, which avoids a large amount of time-consuming occlusion computation in the light field radiation simulation process. On the other hand, since the 3D scene light field radiation simulation is obtained by approximating a large number of light rays, the algorithm can be divided into blocks and processed in parallel. A GPU parallel framework was adopted to realize the simulation model of light field radiation in the visible band. Finally, experiments were performed. The results show that the proposed method is more efficient and effective than the non-optimized method.
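One plausible building block for the visibility pre-computation mentioned above is a bounding-box occlusion test; the sketch below implements the standard slab-method segment/AABB intersection in NumPy. It is a generic illustration under assumed facet and blocker geometry, not the paper's actual acceleration structure.

```python
import numpy as np

def segment_hits_aabb(p0, p1, box_min, box_max, eps=1e-12):
    """Slab test: does the segment p0 -> p1 intersect the axis-aligned bounding box?"""
    d = p1 - p0
    inv = 1.0 / np.where(np.abs(d) < eps, eps, d)   # avoid division by zero for axis-parallel rays
    t1 = (box_min - p0) * inv
    t2 = (box_max - p0) * inv
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return (t_far >= max(t_near, 0.0)) and (t_near <= 1.0)

# Hypothetical occlusion check between two facet centres against a blocker box:
a = np.array([0.0, 0.0, 0.0])
b = np.array([10.0, 0.0, 0.0])
blocker_min = np.array([4.0, -1.0, -1.0])
blocker_max = np.array([6.0, 1.0, 1.0])
occluded = segment_hits_aabb(a, b, blocker_min, blocker_max)   # True: the box blocks the path
```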
Increasing the object recognition distance of compact open air on board vision system
Sergey Kirillov, Ivan Kostkin, Valery Strotov, et al.
The aim of this work was to develop an algorithm that eliminates atmospheric distortion and improves image quality. The proposed algorithm is entirely software-based and does not use additional photographic hardware. It does not require preliminary calibration and works equally effectively with images obtained at distances from 1 to 500 meters. An open-air image enhancement algorithm designed for Raspberry Pi Model B on-board vision systems is proposed, and the results of its experimental examination are given.
Performance of the dot product function in radiative transfer code SORD
Sergey Korkin, Alexei Lyapustin, Aliaksandr Sinyuk, et al.
Successive orders of scattering radiative transfer (RT) codes frequently call the scalar (dot) product function. In this paper, we study the performance of several implementations of the dot product in the RT code SORD, using 50 scenarios for light scattering in the atmosphere-surface system. In the dot product function, we use the unrolled-loops technique with different unrolling factors; we also considered the intrinsic Fortran functions. We show results for two machines: the ifort compiler under Windows and pgf90 under Linux. The intrinsic DOT_PRODUCT function showed the best performance with ifort; with pgf90, the dot product implemented with an unrolling factor of 4 was the fastest.

The RT code SORD, together with the interface that runs all the mentioned tests, is publicly available from ftp://maiac.gsfc.nasa.gov/pub/skorkin/SORD_IP_16B (current release) or by email request from the corresponding (first) author.
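The paper benchmarks Fortran dot-product variants (intrinsic DOT_PRODUCT versus hand-unrolled loops). As a language-agnostic illustration of the same kind of experiment, the sketch below times a plain loop, a loop unrolled by a factor of 4, and a library dot product in Python; absolute timings on any machine will of course differ from the Fortran results reported above.

```python
import timeit
import numpy as np

def dot_plain(a, b):
    s = 0.0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

def dot_unroll4(a, b):
    # Unrolling factor 4: four partial sums reduce loop overhead.
    n = len(a)
    s0 = s1 = s2 = s3 = 0.0
    i = 0
    while i + 3 < n:
        s0 += a[i] * b[i]
        s1 += a[i + 1] * b[i + 1]
        s2 += a[i + 2] * b[i + 2]
        s3 += a[i + 3] * b[i + 3]
        i += 4
    while i < n:            # handle the remainder
        s0 += a[i] * b[i]
        i += 1
    return s0 + s1 + s2 + s3

a, b = np.random.rand(4096), np.random.rand(4096)
for name, fn in [("plain", dot_plain), ("unroll4", dot_unroll4), ("library", np.dot)]:
    print(name, timeit.timeit(lambda: fn(a, b), number=200))
```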
High Performance Computing II
Parallel hyperspectral image reconstruction using random projections
Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated to be an effective and very lightweight way to reduce the number of measurements in hyperspectral data, thus reducing the amount of data to be transmitted to the Earth station. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace, and it has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPUs) using the compute unified device architecture (CUDA).

Experimental results obtained using synthetic and real hyperspectral datasets on an NVIDIA GeForce GTX 980 GPU reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times compared with the processing time of SpeCA running on one core of an Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
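To illustrate the principle that SpeCA exploits (hyperspectral pixels lying close to a low-dimensional subspace can be recovered from a reduced number of random measurements), the sketch below compresses synthetic pixels with a random Gaussian matrix and reconstructs them by least squares once a subspace basis is available. This is a simplified CPU illustration under the assumption that the basis is known; the blind SpeCA algorithm and its CUDA implementation are not reproduced here.

```python
import numpy as np

bands, k, m, n_pix = 200, 10, 30, 1000        # spectral bands, subspace dim, measurements, pixels
rng = np.random.default_rng(0)

# Synthetic pixels lying in a k-dimensional subspace (mixtures of k signatures).
E = rng.random((bands, k))                    # subspace basis (endmember-like signatures)
X = E @ rng.random((k, n_pix))                # original hyperspectral pixels

A = rng.standard_normal((m, bands))           # random projection applied on board
Y = A @ X                                     # compressed measurements (m << bands)

# On the ground: with the basis E in hand (SpeCA estimates it blindly), recover the
# k coefficients of each pixel by least squares and re-expand to the full band set.
coeff, *_ = np.linalg.lstsq(A @ E, Y, rcond=None)
X_rec = E @ coeff
print("relative reconstruction error:", np.linalg.norm(X - X_rec) / np.linalg.norm(X))
```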
A new semi-supervised classification strategy combining active learning and spectral unmixing of hyperspectral data
Hyperspectral remote sensing allows for the detailed analysis of the surface of the Earth by providing high-dimensional images with hundreds of spectral bands. Hyperspectral image classification plays a significant role in hyperspectral image analysis and has been a very active research area in the last few years. In the context of hyperspectral image classification, supervised techniques (which have achieved wide acceptance) must address a difficult task due to the imbalance between the high dimensionality of the data and the limited availability of labeled training samples in real analysis scenarios. While the collection of labeled samples is generally difficult, expensive, and time-consuming, unlabeled samples can be generated in a much easier way. Semi-supervised learning offers an effective solution that can take advantage of both unlabeled samples and a small amount of labeled samples. Spectral unmixing is another widely used technique in hyperspectral image analysis, developed to retrieve pure spectral components and determine their abundance fractions in mixed pixels. In this work, we propose a method to perform semi-supervised hyperspectral image classification by combining the information retrieved through spectral unmixing and classification. Two kinds of samples that are highly mixed in nature are automatically selected, aiming at finding the most informative unlabeled samples. One kind is given by the samples minimizing the distance between the first two most probable classes, obtained by calculating the difference between the two highest abundances. The other kind is given by the samples minimizing the distance between the most probable class and the least probable class, obtained by calculating the difference between the highest and lowest abundances. The effectiveness of the proposed method is evaluated using a real hyperspectral data set collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Indian Pines region in northwestern Indiana. Techniques for efficient implementation of the considered method on high-performance computing architectures are also discussed.
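A minimal sketch of the two sample-selection rules described above, assuming a per-pixel abundance matrix has already been estimated by an unmixing step (all variable names are hypothetical):

```python
import numpy as np

def select_informative_samples(abundances, n_select=50):
    """abundances: (n_pixels, n_endmembers) fractional abundances per pixel.
    Returns indices of the two kinds of highly mixed pixels described above."""
    srt = np.sort(abundances, axis=1)            # ascending abundances per pixel
    gap_top2 = srt[:, -1] - srt[:, -2]           # highest minus second-highest abundance
    gap_maxmin = srt[:, -1] - srt[:, 0]          # highest minus lowest abundance
    kind1 = np.argsort(gap_top2)[:n_select]      # pixels minimizing the top-two gap
    kind2 = np.argsort(gap_maxmin)[:n_select]    # pixels minimizing the max-min gap
    return kind1, kind2

# Hypothetical usage with random abundances that sum to one per pixel:
A = np.random.dirichlet(np.ones(5), size=10000)
idx1, idx2 = select_informative_samples(A)
```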
Parallel implementation of a hyperspectral image linear SVM classifier using RVC-CAL
D. Madroñal, H. Fabelo, R. Lazcano, et al.
Hyperspectral Imaging (HI) collects high-resolution spectral information consisting of hundreds of bands across the electromagnetic spectrum, from the ultraviolet to the infrared range. Thanks to this huge amount of information, identification of the different elements that compose a hyperspectral image is feasible. Initially, HI was developed for remote sensing applications and, nowadays, its use has spread to research fields such as security and medicine. In all of them, new applications that demand real-time processing have appeared. In order to fulfill this requirement, the intrinsic parallelism of the algorithms needs to be explicitly exploited.

In this paper, a Support Vector Machine (SVM) classifier with a linear kernel has been implemented using a dataflow language called RVC-CAL. Specifically, RVC-CAL allows the scheduling of functional actors onto the target platform cores. Once the parallelism of the classifier has been extracted, a comparison of the SVM classifier implementation using LibSVM –a specific library for SVM applications– and RVC-CAL has been performed.

The speedup obtained for the image classifier depends on the number of blocks into which the image is divided; concretely, when 3 image blocks are processed in parallel, an average speedup above 2.50 with respect to the sequential RVC-CAL version is achieved.
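As a rough CPU analogue of the block-level parallelism described above (the paper uses RVC-CAL actors, not Python processes), the sketch below trains a linear SVM and classifies image blocks in parallel; the block count, band count, and data are placeholders.

```python
import numpy as np
from multiprocessing import Pool
from sklearn.svm import LinearSVC

def classify_block(args):
    coef, intercept, block = args                    # block: (pixels, bands)
    scores = block @ coef.T + intercept              # linear (one-vs-rest) decision function
    return np.argmax(scores, axis=1)

if __name__ == "__main__":
    bands, classes = 128, 4
    X_train = np.random.rand(500, bands)
    y_train = np.random.randint(0, classes, 500)
    svm = LinearSVC().fit(X_train, y_train)          # linear-kernel SVM training

    image = np.random.rand(90000, bands)             # flattened hyperspectral image (placeholder)
    blocks = np.array_split(image, 3)                # 3 blocks, as in the comparison above
    with Pool(3) as pool:
        parts = pool.map(classify_block,
                         [(svm.coef_, svm.intercept_, b) for b in blocks])
    labels = np.concatenate(parts)
```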
The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system
Boris Alpatov, Pavel Babayan, Maksim Ershov, et al.
This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The suggested algorithm is intended to estimate the orientation of a specific known 3D object based on its 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object: using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere, with the points distributed according to the geosphere principle. The gathered training image set is used to calculate descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between the observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies, and its real-time performance in the FPGA-based vision system was demonstrated.
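The sketch below illustrates the two-stage structure described above in a deliberately simplified way: viewpoints are spread quasi-evenly on a sphere (here with a Fibonacci lattice as a stand-in for the paper's geosphere construction), a placeholder descriptor is computed per training view, and the observed descriptor is matched by nearest neighbour. The rendering and descriptor functions are hypothetical stand-ins, not the contour-based descriptor of the paper.

```python
import numpy as np

def sphere_viewpoints(n):
    """Quasi-even viewpoints on a unit sphere (Fibonacci lattice stand-in for a geosphere)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def placeholder_descriptor(image):
    """Hypothetical stand-in for the contour-based descriptor used in the paper."""
    return np.histogram(image, bins=32, range=(0.0, 1.0))[0].astype(float)

# Learning stage: render the 3D model from each viewpoint and store descriptors
# (renders are faked with random images here).
views = sphere_viewpoints(256)
train_desc = np.stack([placeholder_descriptor(np.random.rand(64, 64)) for _ in views])

# Estimation stage: match an observed descriptor to the closest training view.
observed = placeholder_descriptor(np.random.rand(64, 64))
best = np.argmin(np.linalg.norm(train_desc - observed, axis=1))
estimated_orientation = views[best]                  # viewing direction of the best match
```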
OpenCL-library-based implementation of SCLSU algorithm for remotely sensed hyperspectral data exploitation: clMAGMA versus viennaCL
Sergio Bernabé, Guillermo Botella, Carlos Orueta, et al.
In the last decade, hyperspectral unmixing (HSU) analysis has been applied in many remote sensing applications. For this process, the linear mixture model (LMM) has been the most popular tool used to find pure spectral constituents, or endmembers, and their fractional abundances in each pixel of the data set. The unmixing process consists of three stages: (i) estimation of the number of pure spectral signatures or endmembers, (ii) automatic identification of the estimated endmembers, and (iii) estimation of the fractional abundance of each endmember in each pixel of the scene. However, unmixing algorithms can be computationally very expensive, a fact that compromises their use in applications under real-time constraints. This is mainly due to the last two stages of the unmixing process, which are the most time-consuming ones. In this work, we propose parallel OpenCL-library-based implementations of the sum-to-one constrained least squares unmixing (P-SCLSU) algorithm to estimate the per-pixel fractional abundances, using mathematical libraries such as clMAGMA or ViennaCL. To the best of our knowledge, this kind of analysis using OpenCL libraries has not been previously conducted in the hyperspectral imaging processing literature, and in our opinion it is very important for achieving efficient implementations based on parallel routines. The efficacy of our proposed implementations is demonstrated through Monte Carlo simulations in experiments with real data, using high-performance computing (HPC) platforms such as commodity graphics processing units (GPUs).
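For reference, the sum-to-one constrained least squares estimate used in the third unmixing stage has a closed form; the NumPy sketch below applies it per pixel. This is a serial illustration of the mathematics only, not the clMAGMA or ViennaCL implementations compared in the paper, and the endmember and pixel data are placeholders.

```python
import numpy as np

def sclsu(E, X):
    """Sum-to-one constrained least squares unmixing.
    E: (bands, p) endmember signatures; X: (bands, n_pixels). Returns (p, n_pixels) abundances."""
    G = np.linalg.inv(E.T @ E)                 # (p, p) Gram inverse
    A_ls = G @ E.T @ X                         # unconstrained least squares abundances
    ones = np.ones((E.shape[1], 1))
    denom = float(ones.T @ G @ ones)
    # Project each LS solution onto the sum-to-one hyperplane (Lagrange-multiplier form).
    correction = G @ ones @ (1.0 - ones.T @ A_ls) / denom
    return A_ls + correction

# Hypothetical usage with random endmembers and pixels:
E = np.random.rand(200, 5)
X = np.random.rand(200, 1000)
A = sclsu(E, X)
print(np.allclose(A.sum(axis=0), 1.0))         # abundances sum to one per pixel
```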
High Performance Computing III
A multiple criteria-based spectral partitioning method for remotely sensed hyperspectral image classification
Hyperspectral remote sensing offers a powerful tool in many different application contexts. The imbalance between the high dimensionality of the data and the limited availability of training samples calls for the need to perform dimensionality reduction in practice. Among traditional dimensionality reduction techniques, feature extraction is one of the most widely used approaches due to its flexibility to transform the original spectral information into a subspace. In turn, band selection is important when the application requires preserving the original spectral information (especially the physically meaningful information) for the interpretation of the hyperspectral scene. In the case of hyperspectral image classification, both techniques need to discard most of the original features/bands in order to perform the classification using a feature set with much lower dimensionality. However, the discriminative information that allows a classifier to provide good performance is usually class-dependent, and the relevant information may live in weak features/bands that are usually discarded or lost through subspace transformation or band selection. As a result, in practice, it is challenging to use either feature extraction or band selection for classification purposes. Relevant lines of attack to address this problem have focused on multiple feature selection, aiming at a suitable fusion of diverse features in order to provide relevant information to the classifier. In this paper, we present a new dimensionality reduction technique, called multiple criteria-based spectral partitioning, which is embedded in an ensemble learning framework to perform advanced hyperspectral image classification. Driven by the use of multiple band priority criteria derived from classic band selection techniques, we obtain multiple spectral partitions from the original hyperspectral data that correspond to several band subgroups with much lower spectral dimensionality than the original band set. An ensemble learning technique is then used to fuse the information from multiple features, taking advantage of the relevant information provided by each classifier. Our experimental results with two real hyperspectral images, collected by the Reflective Optics System Imaging Spectrometer (ROSIS) over the University of Pavia in Italy and by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Salinas scene, reveal that the presented method, driven by multiple band priority criteria, is able to obtain better classification results than classic band selection techniques. This paper also discusses several possibilities for computationally efficient implementation of the proposed technique using various high-performance computing architectures.
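A minimal sketch of the ensemble idea described above: the spectral bands are split into several subgroups (here contiguously, as a simple stand-in for the paper's multiple band-priority criteria), one classifier is trained per subgroup, and the predictions are fused by majority vote. The data, partitioning, and base classifier are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def majority_vote(votes):
    """votes: (n_classifiers, n_samples) integer labels -> fused labels by majority."""
    return np.array([np.bincount(col).argmax() for col in votes.T])

def ensemble_over_band_groups(X_train, y_train, X_test, n_groups=4):
    """Train one classifier per spectral band subgroup and fuse the predictions."""
    groups = np.array_split(np.arange(X_train.shape[1]), n_groups)   # stand-in partition
    votes = [SVC(kernel="rbf").fit(X_train[:, g], y_train).predict(X_test[:, g])
             for g in groups]
    return majority_vote(np.stack(votes))

# Hypothetical usage with random data in place of ROSIS/AVIRIS pixels:
X_tr, y_tr = np.random.rand(300, 200), np.random.randint(0, 5, 300)
X_te = np.random.rand(100, 200)
y_pred = ensemble_over_band_groups(X_tr, y_tr, X_te)
```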
A new comparison of hyperspectral anomaly detection algorithms for real-time applications
Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been an increasing interest in anomaly detection. The aim of anomaly detection is to single out pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure to compare the performance of different algorithms. ROC curves are graphical plots that illustrate the trade-off between false positive and true positive rates. However, they are of limited use for deeper comparisons because they discard relevant factors required in real-time applications, such as run times, the cost of misclassification, and the ability to mark anomalies with high scores. This last factor is fundamental in anomaly detection in order to distinguish anomalies easily from the background without any posterior processing.

An extensive set of simulations has been carried out using different anomaly detection algorithms, comparing their performance and efficiency using several extra metrics in order to complement the ROC curve analysis. The results support our proposal and demonstrate that ROC curves by themselves do not provide a good visualization of detection performance. Moreover, a figure of merit is proposed in this paper which encompasses in a single global metric all the measures yielded by the proposed additional metrics. This figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. The results demonstrate that the algorithms with the best detection performance according to ROC curves do not have the highest DE values. Consequently, the recommendation to use extra measures to properly evaluate performance is supported and justified by the conclusions drawn from the simulations.
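As a generic baseline for the kind of comparison discussed above (not one of the specific detectors or the DE metric evaluated in the paper), the sketch below computes the classical RX anomaly score and an ROC curve with scikit-learn on synthetic data.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def rx_scores(X):
    """Classical RX detector: Mahalanobis distance of each pixel to the global background.
    X: (n_pixels, bands). Returns one anomaly score per pixel."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Synthetic scene: background pixels plus a few shifted anomalies (hypothetical data).
rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(5000, 50))
anomalies = rng.normal(2.5, 1.0, size=(50, 50))
X = np.vstack([background, anomalies])
truth = np.r_[np.zeros(5000), np.ones(50)]

scores = rx_scores(X)
fpr, tpr, _ = roc_curve(truth, scores)
print("AUC:", auc(fpr, tpr))
```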
A new hyperspectral image compression paradigm based on fusion
Raúl Guerra, José Melián, Sebastián López, et al.
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results obtained corroborate the benefits of the proposed methodology.
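A toy sketch of the two on-board degradation steps described above: spatial downsampling to a low-resolution hyperspectral image and spectral band averaging to a high-resolution multispectral image. The ground-segment fusion algorithm that reconstructs the full image is not shown, and the scene size, downsampling factor and band grouping are hypothetical.

```python
import numpy as np

def spatial_degrade(hsi, factor=4):
    """Average non-overlapping factor x factor blocks -> low-resolution hyperspectral image."""
    r, c, b = hsi.shape
    r2, c2 = r // factor, c // factor
    return hsi[:r2 * factor, :c2 * factor].reshape(r2, factor, c2, factor, b).mean(axis=(1, 3))

def spectral_degrade(hsi, n_msi_bands=8):
    """Average groups of adjacent bands -> high-resolution multispectral image."""
    groups = np.array_split(np.arange(hsi.shape[2]), n_msi_bands)
    return np.stack([hsi[:, :, g].mean(axis=2) for g in groups], axis=2)

hsi = np.random.rand(256, 256, 224)                 # placeholder scene
lr_hsi = spatial_degrade(hsi)                       # transmitted product 1
hr_msi = spectral_degrade(hsi)                      # transmitted product 2
ratio = hsi.size / (lr_hsi.size + hr_msi.size)      # compression ratio fixed by the design
print(f"fixed compression ratio ≈ {ratio:.1f}x")
```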
Toward an optimisation technique for dynamically monitored environment
Orabi M. Shurrab
The data fusion community has introduced multiple procedures for situational assessment in order to facilitate timely responses to emerging situations. More directly, the process refinement level of the Joint Directors of Laboratories (JDL) model is a meta-process for assessing and improving the data fusion task during real-time operation. In other words, it is an optimisation technique to verify the overall data fusion performance and enhance it toward the top-level goals of the decision-making resources.

This paper discusses the theoretical concept of prioritisation, where the analyst team is required to keep up to date with a dynamically changing environment spanning different domains such as air, sea, land, space and cyberspace. Furthermore, it presents an illustrative example of how various tracking activities are ranked simultaneously into a predetermined order. Specifically, it presents a modelling scheme for a case-study-based scenario in which the real-time system reports different classes of prioritised events, followed by a performance metric for evaluating the prioritisation process in the situational awareness (SWA) domain. The proposed performance metric has been designed and evaluated using an analytical approach. The modelling scheme represents the situational awareness system outputs mathematically, in the form of a list of activities. Such methods allow the evaluation process to conduct a rigorous analysis of the prioritisation process, despite any constraints related to a domain-specific configuration.

After conducting three levels of assessment over three separate scenarios, the Prioritisation Capability Score (PCS) provided an appropriate scoring scheme for different ranking instances. Indeed, from the data fusion perspective, the proposed metric assessed real-time system performance adequately, and it is capable of supporting a verification process that directs the operator's attention to any issue concerning the prioritisation capability of the situational awareness domain.
High Performance Computing IV
Parallelism exploitation of a PCA algorithm for hyperspectral images using RVC-CAL
R. Lazcano, I. Sidrach-Cardona, D. Madroñal, et al.
Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. The tremendous development of this technology within the field of remote sensing has led to new research fields, such as automatic cancer detection or precision agriculture, but it has also increased the performance requirements of the applications. For instance, strong time constraints need to be respected, since many applications imply real-time responses. Achieving real-time operation is a challenge, as hyperspectral sensors generate high volumes of data to process. Thus, to meet this requirement, the initial image data first needs to be reduced by discarding redundancies and keeping only useful information; then, the intrinsic parallelism in the system specification must be explicitly highlighted.

In this paper, the PCA (Principal Component Analysis) algorithm is implemented using the RVC-CAL dataflow language, which specifies a system as a set of blocks or actors and allows its parallelization by scheduling the blocks over different processing units. Two implementations of PCA for hyperspectral images have been compared when aiming at obtaining the first few principal components: first, the algorithm has been implemented using the Jacobi approach for obtaining the eigenvectors; thereafter, the NIPALS-PCA algorithm, which approximates the principal components iteratively, has also been studied. Both implementations have been compared in terms of accuracy and computation time; then, the parallelization of both models has also been analyzed.

These comparisons show promising results in terms of computation time and parallelization: the performance of the NIPALS-PCA algorithm is clearly better when only the first principal component is required, while partitioning the algorithm execution over several cores shows an important speedup for PCA-Jacobi. Thus, the experimental results show the potential of RVC-CAL to automatically generate implementations that process the large volumes of information produced by hyperspectral sensors in real time, as it provides advanced semantics for exploiting system parallelization.
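For reference, the NIPALS iteration mentioned above for extracting the first principal component can be written compactly as follows. This is a NumPy sketch of the textbook algorithm, unrelated to the RVC-CAL actor network used in the paper, and the input data is a placeholder.

```python
import numpy as np

def nipals_first_pc(X, tol=1e-9, max_iter=500):
    """First principal component of X (samples x bands) by the NIPALS iteration."""
    Xc = X - X.mean(axis=0)                  # centre the data
    t = Xc[:, 0].copy()                      # initial score vector
    for _ in range(max_iter):
        p = Xc.T @ t / (t @ t)               # loading estimate
        p /= np.linalg.norm(p)
        t_new = Xc @ p                       # score estimate
        if np.linalg.norm(t_new - t) < tol:  # stop when the scores converge
            t = t_new
            break
        t = t_new
    return p, t                              # loading (eigenvector) and scores

X = np.random.rand(1000, 50)
p1, t1 = nipals_first_pc(X)
```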
Spatial-spectral preprocessing for endmember extraction on GPU's
Luis I. Jimenez, Javier Plaza, Antonio Plaza, et al.
Spectral unmixing focuses on the identification of spectrally pure signatures, called endmembers, and their corresponding abundances in each pixel of a hyperspectral image. While endmember extraction techniques have mainly relied on the spectral information contained in hyperspectral images, they have recently incorporated spatial information to achieve more accurate results. Several algorithms have been developed for automatic or semi-automatic identification of endmembers using spatial and spectral information, including spectral-spatial endmember extraction (SSEE), where, within a preprocessing step of the technique, both sources of information are extracted from the hyperspectral image and used equally for this purpose. Previous works have implemented the SSEE technique in four main steps: 1) calculation of local eigenvectors in each sub-region into which the original hyperspectral image is divided; 2) projection of all eigenvectors over the entire hyperspectral image and computation of the maxima and minima in order to obtain a set of candidate pixels; 3) expansion and averaging of the signatures of the candidate set; 4) ranking based on the spectral angle distance (SAD). The result of this method is a list of candidate signatures from which the endmembers can be extracted using various spectral-based techniques, such as orthogonal subspace projection (OSP), vertex component analysis (VCA) or N-FINDR. Considering the large volume of data and the complexity of the calculations, there is a need for efficient implementations. Latest-generation hardware accelerators such as commodity graphics processing units (GPUs) offer a good opportunity for improving the computational performance in this context. In this paper, we develop two different GPU implementations of the SSEE algorithm. Both are based on the eigenvector computation within each sub-region of the first step, one using the singular value decomposition (SVD) and the other using principal component analysis (PCA). Based on our experiments with hyperspectral data sets, high computational performance is observed in both cases.
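A compact sketch of the first SSEE step discussed above (local eigenvector calculation per spatial sub-region), here on the CPU with NumPy's SVD rather than the GPU kernels developed in the paper; the tile size and input data are placeholders.

```python
import numpy as np

def local_eigenvectors(hsi, tile=32, n_vec=5):
    """Top singular vectors (local spectral eigenvectors) of each tile x tile sub-region.
    hsi: (rows, cols, bands). Returns a list of (bands, n_vec) matrices, one per sub-region."""
    rows, cols, bands = hsi.shape
    out = []
    for r in range(0, rows - tile + 1, tile):
        for c in range(0, cols - tile + 1, tile):
            block = hsi[r:r + tile, c:c + tile].reshape(-1, bands)
            block = block - block.mean(axis=0)        # centre the pixels of the sub-region
            _, _, vt = np.linalg.svd(block, full_matrices=False)
            out.append(vt[:n_vec].T)                  # right singular vectors = spectral eigenvectors
    return out

hsi = np.random.rand(128, 128, 100)                   # placeholder scene
eig_sets = local_eigenvectors(hsi)
```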
Poster Session
Fast DPCM scheme for lossless compression of aurora spectral images
Auroras carry abundant information that needs to be stored. Aurora spectral images electronically preserve the spectral information and visual observations of the aurora over a period of time so that they can be studied later. These images are helpful for research on Earth-solar activity and for understanding the aurora phenomenon itself. However, the images are produced at quite a high sampling frequency, which leads to a challenging transmission load. In order to solve this problem, lossless compression is required.

Indeed, each frame of an aurora spectral image sequence differs from a classical natural image and also from a frame of a hyperspectral image, so existing lossless compression algorithms are not directly applicable. On the other hand, the key to compression is to decorrelate the pixels. We therefore exploit a DPCM-based scheme for lossless compression, because DPCM is effective for decorrelation. Such a scheme makes use of two-dimensional redundancy in both the spatial and spectral domains with relatively low complexity. In addition, we parallelize it for faster computation. The code is structured as nested for loops, in which the outer and inner loops perform spectral and spatial decorrelation, respectively, and the parallel version runs on a CPU platform using different numbers of cores.

Experimental results show that, compared to traditional lossless compression methods, the DPCM scheme has a clear advantage in compression gain and meets the requirement of real-time transmission. In addition, the parallel version achieves the expected computational performance with high CPU utilization.
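A simplified illustration of DPCM decorrelation on an image cube: a spectral pass subtracts the previous frame and a spatial pass subtracts the left neighbour, keeping only residuals. This is a generic, invertible sketch of the principle, not the paper's optimized prediction scheme or its parallel implementation, and the cube dimensions are placeholders.

```python
import numpy as np

def dpcm_residuals(cube):
    """Two-stage DPCM on an integer cube (frames, rows, cols): spectral pass subtracts the
    previous frame, spatial pass subtracts the left neighbour. Both passes are invertible,
    so the scheme stays lossless."""
    spec = cube.astype(np.int32).copy()
    spec[1:] -= cube[:-1].astype(np.int32)       # spectral decorrelation (outer loop)
    res = spec.copy()
    res[:, :, 1:] -= spec[:, :, :-1]             # spatial decorrelation (inner loop)
    return res

def dpcm_reconstruct(res):
    spec = np.cumsum(res, axis=2)                # undo the spatial pass
    return np.cumsum(spec, axis=0)               # undo the spectral pass

cube = np.random.randint(0, 4096, size=(16, 128, 128))    # placeholder 12-bit frames
res = dpcm_residuals(cube)
assert np.array_equal(dpcm_reconstruct(res), cube)         # lossless round trip
```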
Generation of OAM waves using metamaterials substrate antenna
In this paper, we demonstrate a single-layer X-band (8-12 GHz) metamaterial-substrate antenna for generating an orbital angular momentum (OAM) beam. The proposed design consists of a series of phase-shift unit cells that control the emergent phase, performing the same function as a spiral phase plate (SPP). The design uses the concept of a reflectarray antenna with a double-circular-ring unit cell to control the reflection phase and generate beams carrying OAM. The simulated phase patterns verify that vortex radio waves can be generated using a sub-wavelength reflective metamaterial-substrate antenna. From the perspective of the electromagnetic calculation, high-performance computing can be used to reduce the computation time.
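The phase profile that such a phase-shifting surface has to impose for an OAM mode of topological charge l is simply l times the azimuthal angle. The short sketch below computes that target phase map over a hypothetical aperture grid; the actual unit-cell design and full-wave simulation reported in the paper are outside its scope.

```python
import numpy as np

def oam_phase_map(n_cells=40, charge=1, cell_pitch_mm=12.0):
    """Target reflection phase (radians, wrapped to [0, 2*pi)) for an OAM beam of the given
    topological charge, sampled at the centre of each unit cell of the aperture."""
    coords = (np.arange(n_cells) - (n_cells - 1) / 2.0) * cell_pitch_mm
    x, y = np.meshgrid(coords, coords)
    return np.mod(charge * np.arctan2(y, x), 2.0 * np.pi)

phase = oam_phase_map(charge=2)      # spiral (vortex) phase for l = 2 across the aperture
```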
Hardware design and implementation of fast DOA estimation method based on multicore DSP
Rui Guo, Yingxiao Zhao, Yue Zhang, et al.
In this paper, we present a high-speed real-time signal processing hardware platform based on multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics including high performance computing, low power consumption, large-capacity data storage and high speed data transmission, which make it able to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation to different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time direction of arrival (DOA) estimation.
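As a point of reference for the DOA estimation step described above, the sketch below computes the standard (complex-valued) MUSIC pseudospectrum for a uniform linear array in NumPy. It is not the paper's real-valued MUSIC variant or its multicore DSP implementation, and the array geometry, source angles, and noise level are hypothetical.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d_over_lambda=0.5):
    """Standard MUSIC pseudospectrum for a uniform linear array.
    X: (n_antennas, n_snapshots) complex snapshots."""
    n_ant = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                     # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, : n_ant - n_sources]                 # noise subspace (smallest eigenvalues)
    k = np.arange(n_ant)[:, None]
    theta = np.deg2rad(angles_deg)[None, :]
    A = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(theta))   # steering vectors
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return 1.0 / denom                                  # peaks indicate source directions

# Hypothetical test: two sources at -10 and 25 degrees, 8-element array, 200 snapshots.
rng = np.random.default_rng(2)
n_ant, snaps = 8, 200
true = np.deg2rad([-10.0, 25.0])
A = np.exp(-2j * np.pi * 0.5 * np.arange(n_ant)[:, None] * np.sin(true)[None, :])
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
X = A @ S + 0.1 * (rng.standard_normal((n_ant, snaps)) + 1j * rng.standard_normal((n_ant, snaps)))
grid = np.arange(-90.0, 90.5, 0.5)
p = music_spectrum(X, n_sources=2, angles_deg=grid)
print(grid[np.argmax(p)])   # crude peak report; a full implementation would pick local maxima
```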
Coastline change mapping using a spectral band method and Sobel edge operator
Saeed Al-Mansoori, Fatima Al-Marzouqi
Coastline extraction has become an essential activity in the wake of natural disasters such as tsunamis and floods affecting some regions. A salient feature of such catastrophes is the lack of reaction time available for combating the emergency; thus it is the endeavor of any country to develop a mechanism for constant monitoring of its shorelines. This is a challenging task because of the magnitude of the changes regularly taking place along the coastline. Previous research findings highlight the need for an automation-driven methodology for timely and accurate detection of alterations in the coastline that impact the sustainability of activities in the coastal zone. In this study, we propose a new approach for automatic extraction of the coastline using remote sensing data. This approach is composed of three main stages. First, the pixels of the image are classified into two categories, land and water body, by applying two normalized difference indices, the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI). Then, the classified image is converted to binary form using a local threshold method. Finally, the coastline is extracted by applying the Sobel edge operator with a pair of 3×3 kernels. The approach is tested using 2.5 m DubaiSat-1 (DS1) and DubaiSat-2 (DS2) images captured to detect and monitor the changes occurring along the Dubai coastal zone over a period of six years, from 2009 to 2015. Experimental results prove that the approach is capable of extracting the coastlines from DS1 and DS2 images with moderate human interaction. The results of the study show an increase of 6% in the Dubai shoreline resulting from numerous man-made infrastructure development projects in tourism and allied sectors.
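The three-stage pipeline described above maps naturally onto a few array operations; the sketch below uses NumPy and SciPy for the indices, a simple threshold, and the Sobel kernels. The band arrays and threshold values are placeholders (the paper uses a local threshold method and DubaiSat-1/2 bands), so the actual processing chain may differ.

```python
import numpy as np
from scipy import ndimage

def coastline_mask(red, nir, green, eps=1e-6):
    """Classify land/water from NDVI and NDWI, then trace the coastline with Sobel kernels."""
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    water = (ndwi > 0.0) & (ndvi < 0.2)          # placeholder thresholds (paper uses a local method)
    binary = water.astype(float)
    gx = ndimage.sobel(binary, axis=1)           # 3x3 Sobel kernel pair
    gy = ndimage.sobel(binary, axis=0)
    return np.hypot(gx, gy) > 0.0                # non-zero gradient marks the land/water boundary

# Hypothetical bands standing in for DubaiSat-1/2 multispectral data:
red, nir, green = (np.random.rand(512, 512) for _ in range(3))
edges = coastline_mask(red, nir, green)
```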