
Proceedings Paper
Physically realizable adversarial examples for convolutional object detection algorithms
Paper Abstract
In our work, we make two primary contributions to the field of adversarial example generation for convolutional-neural-network-based perception technologies. First, we extend recent work on physically realizable adversarial examples to make them more robust to translation, rotation, and scale in real-world scenarios. Second, we attack object detection neural networks rather than considering only the simpler problem of classification, showing that these networks can be forced to mislocalize as well as misclassify. We demonstrate our method on multiple object detection frameworks, including Faster R-CNN, YOLO v3, and our own single-shot detection architecture.
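The transformation-robust optimization the abstract describes can be illustrated with a minimal, numpy-only sketch. This is not the authors' pipeline: the "detector" here is a hypothetical linear score standing in for a detection confidence, and only integer translations are sampled (the paper also covers rotation and scale). The idea is the same: average the gradient of the score over random placements of the patch, then ascend, so the optimized patch works regardless of where it lands in the image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a detector: a linear score over an 8x8 image.
# w plays the role of the gradient of a detection confidence w.r.t. pixels.
w = rng.normal(size=(8, 8))

def score(img):
    """Detection-confidence surrogate: higher means 'more detected'."""
    return float(np.sum(w * img))

def random_shift(img, rng):
    """Sample a random translation of the patch (wrap-around for simplicity)."""
    dy, dx = rng.integers(-2, 3, size=2)
    return np.roll(img, (dy, dx), axis=(0, 1)), (dy, dx)

# Expectation-over-transformation-style update: estimate the expected
# gradient over random placements, then take a gradient-ascent step.
patch = np.zeros((8, 8))
lr = 0.1
for _ in range(50):
    grad = np.zeros_like(patch)
    for _ in range(16):
        _, (dy, dx) = random_shift(patch, rng)
        # d score(roll(p, d)) / d p = roll(w, -d): undo the shift on w.
        grad += np.roll(w, (-dy, -dx), axis=(0, 1))
    patch += lr * grad / 16
```

After optimization, the patch raises the expected score under fresh random shifts, which is the property that makes a printed patch tolerate imprecise placement. A real attack would replace the linear score with a detector's loss (e.g. objectness or localization terms) and backpropagate through a differentiable affine warp.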
Paper Details
Date Published: 14 May 2019
PDF: 11 pages
Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880R (14 May 2019); doi: 10.1117/12.2520166
Published in SPIE Proceedings Vol. 10988:
Automatic Target Recognition XXIX
Riad I. Hammoud; Timothy L. Overman, Editor(s)
Author Affiliations:
David R. Chambers, Southwest Research Institute (United States)
H. Abe Garza, Southwest Research Institute (United States)
© SPIE
