
Proceedings Paper

Exploiting random perturbations to defend against adversarial attacks

Paper Abstract

Adversarial examples are deliberately crafted data points that aim to induce errors in machine learning models. This phenomenon has recently gained much attention, especially in the field of image classification, where many methods have been proposed to generate such malicious examples. In this paper, we focus on defending a trained model against such attacks by introducing randomness to its inputs.
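The abstract does not specify the perturbation scheme, so the sketch below is only a generic illustration of this class of defense, assuming Gaussian noise added to the input at inference time and prediction averaging over several noisy copies. The function name randomized_predict and the parameters sigma and n_samples are illustrative, not taken from the paper.

import torch
import torch.nn as nn

def randomized_predict(model: nn.Module, x: torch.Tensor,
                       sigma: float = 0.1, n_samples: int = 16) -> torch.Tensor:
    """Average softmax outputs over randomly perturbed copies of the input.
    Hypothetical scheme: the paper's exact perturbation is not given in
    the abstract."""
    model.eval()
    with torch.no_grad():
        # Draw n_samples noisy copies: x + N(0, sigma^2) noise per element.
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        logits = model(noisy)                # shape: (n_samples, n_classes)
        probs = torch.softmax(logits, dim=-1)
        return probs.mean(dim=0)             # averaged class probabilities

# Usage with a toy classifier on a flattened 28x28 input:
toy_model = nn.Sequential(nn.Linear(784, 10))
x = torch.rand(784)
print(randomized_predict(toy_model, x, sigma=0.1, n_samples=16))

The intuition behind such defenses is that gradient-based attacks exploit a precise, deterministic input-output mapping; randomizing the inputs makes the effective decision function stochastic, so a perturbation optimized against one realization of the noise need not transfer to another.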

Paper Details

Date Published: 1 October 2018
PDF: 7 pages
Proc. SPIE 10808, Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2018, 108082N (1 October 2018); doi: 10.1117/12.2501606
Author Affiliations:
Pawel Zawistowski, Warsaw Univ. of Technology (Poland)
Bartlomiej Twardowski, Warsaw Univ. of Technology (Poland)


Published in SPIE Proceedings Vol. 10808:
Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2018
Ryszard S. Romaniuk and Maciej Linczuk, Editors

© SPIE.