
Journal of Applied Remote Sensing

Convolutional networks for vehicle track segmentation
Author(s): Tu-Thach Quach

Paper Abstract

Existing methods for detecting vehicle tracks in coherent change detection images — the product of combining two synthetic aperture radar images of the same scene taken at different times — rely on simple, fast models to label track pixels. These models, however, cannot capture natural track features such as continuity and parallelism. More powerful but computationally expensive models can be used in offline settings. We present an approach that uses dilated convolutional networks, consisting of a series of 3×3 convolutions, to segment vehicle tracks. Our network design reflects the fact that remote sensing applications tend to operate in low-power settings with limited training data. We therefore aim for small, efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our six-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from the 0.959 obtained by the current state-of-the-art method.
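The abstract's core building block is a stack of 3×3 convolutions with dilation, which enlarges the receptive field without adding parameters. The paper's exact dilation schedule and channel widths are not given in the abstract, so the following is only a minimal NumPy sketch of a single-channel dilated 3×3 convolution and of how the receptive field grows with a stack of such layers; `dilated_conv2d` and `receptive_field` are illustrative names, not functions from the paper.

```python
import numpy as np

def dilated_conv2d(x, w, dilation=1):
    """'Valid' 2-D convolution of a single-channel image x with a 3x3
    kernel w whose taps are spaced `dilation` pixels apart."""
    k = 3
    span = dilation * (k - 1)          # dilated kernel extent minus one
    H, W = x.shape
    out = np.zeros((H - span, W - span))
    for i in range(k):
        for j in range(k):
            # Each kernel tap reads a shifted window of the input.
            out += w[i, j] * x[i * dilation : i * dilation + H - span,
                               j * dilation : j * dilation + W - span]
    return out

def receptive_field(dilations):
    """Receptive field (in pixels, one axis) of a stack of 3x3 layers
    with the given per-layer dilation factors."""
    return 1 + 2 * sum(dilations)
```

For example, three 3×3 layers with dilations 1, 2, and 4 already see a 15×15 neighborhood — enough context to encode features like track continuity and parallelism — while using only 27 kernel taps per channel.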

Paper Details

Date Published: 19 August 2017
PDF: 10 pages
J. Appl. Rem. Sens. 11(4) 042603 doi: 10.1117/1.JRS.11.042603
Published in: Journal of Applied Remote Sensing Volume 11, Issue 4
Author Affiliations:
Tu-Thach Quach, Sandia National Labs. (United States)

© SPIE.