
Proceedings Paper

Vehicle classification in WAMI imagery using deep network
Author(s): Meng Yi; Fan Yang; Erik Blasch; Carolyn Sheaff; Kui Liu; Genshe Chen; Haibin Ling

Paper Abstract

Humans have always had a keen interest in understanding activities and the surrounding environment for mobility, communication, and survival. Thanks to recent progress in photography and breakthroughs in aviation, we are now able to capture tens of megapixels of ground imagery, namely Wide Area Motion Imagery (WAMI), at multiple frames per second from unmanned aerial vehicles (UAVs). WAMI serves as a rich source for many applications, including security, urban planning, and route planning. These applications require fast and accurate image understanding, which is time-consuming for humans due to the large data volume and city-scale area coverage. Automatic processing and understanding of WAMI imagery has therefore been gaining attention in both industry and the research community. This paper focuses on an essential step in WAMI analysis, namely vehicle classification: deciding whether a given image patch contains a vehicle. We collect a set of positive and negative sample image patches for training and testing the classifier. Positive samples are 64 × 64 image patches centered on annotated vehicles. We generate two sets of negative patches: the first is generated from positive patches by applying a location shift, and the second from randomly sampled patches, discarding any random patch in which a vehicle happens to lie at the center. Both positive and negative samples are randomly divided into 9000 training images and 3000 testing images. We train a deep convolutional network to classify these patches. The classifier is based on a pre-trained AlexNet model from the Caffe library, with the loss function adapted for binary vehicle classification. The performance of our classifier is compared to traditional image classifiers using a Support Vector Machine (SVM) with Histogram of Oriented Gradients (HOG) features. While the SVM+HOG method achieves an accuracy of 91.2%, our deep network-based classifier reaches 97.9%.
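A minimal sketch of the patch-sampling scheme the abstract describes, written in Python with NumPy. The helper names, the shift range, and the per-frame random-sample count are illustrative assumptions, not values taken from the paper; only the 64 × 64 patch size and the center-vehicle discard rule come from the abstract.

```python
import numpy as np

PATCH = 64          # patch size used in the paper
HALF = PATCH // 2

def crop(frame, cx, cy):
    """Crop a 64x64 patch centered at (cx, cy); returns None near borders."""
    x0, y0 = cx - HALF, cy - HALF
    if x0 < 0 or y0 < 0 or x0 + PATCH > frame.shape[1] or y0 + PATCH > frame.shape[0]:
        return None
    return frame[y0:y0 + PATCH, x0:x0 + PATCH]

def sample_patches(frame, vehicle_centers, rng, shift=24, n_random=100):
    """Build positive and negative 64x64 patches for one WAMI frame.

    vehicle_centers: non-empty array-like of annotated (x, y) vehicle centers.
    shift and n_random are illustrative parameters, not from the paper.
    """
    positives, negatives = [], []
    centers = np.asarray(vehicle_centers)
    for cx, cy in centers:
        p = crop(frame, cx, cy)
        if p is not None:
            positives.append(p)              # patch centered on a vehicle
        # Negative set 1: shifted copy of a positive location.
        dx, dy = rng.integers(shift // 2, shift, 2) * rng.choice([-1, 1], 2)
        n = crop(frame, cx + dx, cy + dy)
        if n is not None:
            negatives.append(n)
    # Negative set 2: randomly sampled patches.
    for _ in range(n_random):
        cx = rng.integers(HALF, frame.shape[1] - HALF)
        cy = rng.integers(HALF, frame.shape[0] - HALF)
        # Discard random patches whose center falls on an annotated vehicle.
        if np.all(np.hypot(centers[:, 0] - cx, centers[:, 1] - cy) > HALF):
            n = crop(frame, cx, cy)
            if n is not None:
                negatives.append(n)
    return positives, negatives
```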
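The fine-tuning workflow the abstract describes (adapting a pre-trained AlexNet in Caffe) could look roughly like the following pycaffe sketch. The prototxt and caffemodel filenames are placeholders, and replacing the final 1000-way layer with a 2-way output is an assumption consistent with binary vehicle classification, not a detail confirmed by the abstract.

```python
import caffe

# Run on GPU if available; use caffe.set_mode_cpu() otherwise.
caffe.set_mode_gpu()

# solver.prototxt (placeholder name) points at a train/val net definition
# derived from the AlexNet reference model, with the final 1000-way fc8
# layer replaced by a renamed 2-way vehicle/non-vehicle output.
solver = caffe.SGDSolver('solver.prototxt')

# Copy weights for all layers whose names match the reference model;
# the renamed final layer keeps its random initialization.
solver.net.copy_from('bvlc_alexnet.caffemodel')

# Fine-tune on the 9000-image training set.
solver.solve()
```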
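For the SVM+HOG baseline, a plausible reconstruction using scikit-image and scikit-learn is shown below. The HOG parameters (9 orientations, 8 × 8 cells, 2 × 2 blocks) are common defaults, not values reported in the paper, and the function names are illustrative.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def hog_features(patches):
    """9-orientation HOG over 8x8 cells with 2x2 block normalization."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def svm_hog_baseline(train_patches, train_labels, test_patches, test_labels):
    """Train a linear SVM on HOG descriptors of 64x64 grayscale patches
    (1 = vehicle, 0 = background) and return test accuracy."""
    clf = LinearSVC()
    clf.fit(hog_features(train_patches), train_labels)
    pred = clf.predict(hog_features(test_patches))
    return accuracy_score(test_labels, pred)
```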

Paper Details

Date Published: 13 May 2016
PDF: 8 pages
Proc. SPIE 9838, Sensors and Systems for Space Applications IX, 98380E (13 May 2016); doi: 10.1117/12.2224916
Author Affiliations:
Meng Yi, Temple Univ. (United States)
Fan Yang, Temple Univ. (United States)
Erik Blasch, Air Force Research Lab. (United States)
Carolyn Sheaff, Air Force Research Lab. (United States)
Kui Liu, Intelligent Fusion Technology, Inc. (United States)
Genshe Chen, Intelligent Fusion Technology, Inc. (United States)
Haibin Ling, Temple Univ. (United States)


Published in SPIE Proceedings Vol. 9838:
Sensors and Systems for Space Applications IX
Khanh D. Pham; Genshe Chen, Editor(s)

© SPIE.