
Proceedings Paper

Physically realizable adversarial examples for convolutional object detection algorithms
Author(s): David R. Chambers; H. Abe Garza

Paper Abstract

We make two primary contributions to the field of adversarial example generation for convolutional-neural-network-based perception technologies. First, we extend recent work on physically realizable adversarial examples to make them more robust to translation, rotation, and scale in real-world scenarios. Second, we attack object detection networks rather than considering only the simpler problem of classification, demonstrating the ability to force these networks to mislocalize as well as misclassify. We evaluate our method on multiple object detection frameworks, including Faster R-CNN, YOLO v3, and our own single-shot detection architecture.
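The abstract gives no implementation details, but the transformation-robust optimization it describes is commonly realized by averaging gradients over randomly sampled placements of the patch (in the spirit of Expectation over Transformation). A minimal sketch of that idea follows, with a toy linear "detector" standing in for a CNN; every name and the detector itself are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "detector": its confidence is a weighted sum of pixels.
# This linear stand-in replaces a real CNN so the gradient is analytic.
H = W = 8
PH = PW = 4  # patch size
weights = rng.normal(size=(H, W))

def detector_score(image):
    """Scalar detection confidence for an H x W image."""
    return float((weights * image).sum())

def paste(image, patch, top, left):
    """Return a copy of `image` with `patch` pasted at (top, left)."""
    out = image.copy()
    out[top:top + PH, left:left + PW] = patch
    return out

base = rng.uniform(0, 1, size=(H, W))        # background scene
patch = rng.uniform(0, 1, size=(PH, PW))     # adversarial patch (optimized)
init_patch = patch.copy()                    # kept for comparison

lr = 0.1
for step in range(200):
    grad = np.zeros_like(patch)
    for _ in range(16):  # sample random translations of the patch
        top = rng.integers(0, H - PH + 1)
        left = rng.integers(0, W - PW + 1)
        # For this linear toy detector, d(score)/d(patch) is simply the
        # weight window under the sampled patch location.
        grad += weights[top:top + PH, left:left + PW]
    # Descend on the *expected* score so the patch suppresses detection
    # regardless of where it lands; clip to keep pixel values printable.
    patch = np.clip(patch - lr * grad / 16, 0, 1)
```

Averaging over sampled translations (and, in a full implementation, rotations and scales) is what makes the resulting patch effective under real-world placement variation rather than at a single fixed position.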

Paper Details

Date Published: 14 May 2019
PDF: 11 pages
Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880R (14 May 2019); doi: 10.1117/12.2520166
Author Affiliations:
David R. Chambers, Southwest Research Institute (United States)
H. Abe Garza, Southwest Research Institute (United States)

Published in SPIE Proceedings Vol. 10988:
Automatic Target Recognition XXIX
Riad I. Hammoud; Timothy L. Overman, Editor(s)

© SPIE.