
Proceedings Paper

Detection of sticker based adversarial attacks
Author(s): András Horváth; Csanád Egervári

Paper Abstract

Adversarial examples revealed an important aspect of convolutional neural networks and are receiving more and more attention in machine learning. It was shown that not only small perturbations covering the whole image but also sticker-based attacks, concentrated on small regions of the image, can cause misclassification. While the first type of attack is mostly theoretical, the latter can be applied in practice and leads to misclassification in image-processing pipelines. In this paper we show how sticker-based adversarial samples can be detected by calculating the responses of the neurons in the last layers and estimating a measure of region-based classification consistency.
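The consistency idea sketched in the abstract can be illustrated with a minimal example. The sketch below is an assumption about the general approach, not the authors' implementation: it classifies sub-regions of an image with a caller-supplied classifier and reports the fraction that agree with the full-image prediction, on the intuition that a localized sticker flips the label of only a few regions while the rest still vote for the clean class. The function name `region_consistency` and the grid-based partitioning are hypothetical choices for illustration.

```python
import numpy as np

def region_consistency(image, classify, grid=(3, 3)):
    """Classify disjoint sub-regions of `image` and return the fraction
    whose predicted label agrees with the full-image prediction.
    A low score suggests a localized (sticker-like) inconsistency.

    `classify` is any callable mapping an array to a label.
    """
    h, w = image.shape[:2]
    global_label = classify(image)
    rows, cols = grid
    agree = 0
    for i in range(rows):
        for j in range(cols):
            # Slice one grid cell out of the image.
            patch = image[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            if classify(patch) == global_label:
                agree += 1
    return agree / (rows * cols)
```

With a toy mean-intensity classifier, a uniform image scores a consistency of 1.0, while an image with one bright "sticker" patch in a single grid cell scores 8/9, flagging the inconsistent region.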

Paper Details

Date Published: 9 August 2018
PDF: 5 pages
Proc. SPIE 10806, Tenth International Conference on Digital Image Processing (ICDIP 2018), 108066Y (9 August 2018); doi: 10.1117/12.2503219
Author Affiliations:
András Horváth, Pázmány Péter Catholic Univ. (Hungary)
Csanád Egervári, Pázmány Péter Catholic Univ. (Hungary)

Published in SPIE Proceedings Vol. 10806:
Tenth International Conference on Digital Image Processing (ICDIP 2018)
Xudong Jiang; Jenq-Neng Hwang, Editor(s)

© SPIE