Paper 12033-122

Evaluation of the impact of physical adversarial attacks on deep learning models for classifying COVID cases

In person: 23 February 2022 • 5:30 PM - 7:00 PM PST

Abstract

This paper investigates the impact of adversarial examples (AEs) on convolutional neural network (CNN) models for classifying COVID-19 and normal cases in chest X-ray images. We evaluated the accuracy of several models with and without AEs. In an attack-free environment, the CNNs achieved an accuracy of 99%. However, when the CNNs were attacked with the Fast Gradient Sign Method (FGSM), their performance degraded: MobileNetV2 was the most affected model (its specificity dropped from 98.61% to 67.73%), while VGG16 was the least affected. Our findings show that FGSM is able to fool the models into misclassifying the labels.
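For reference, FGSM perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇_x J(θ, x, y)). The sketch below is a minimal illustration in PyTorch, not the authors' implementation; the model, the epsilon value, and the [0, 1] pixel range are assumptions for the example.

```python
# Illustrative FGSM sketch (not the paper's code): craft adversarial
# chest X-ray inputs against a CNN classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x_adv = x + epsilon * sign(grad_x loss), clamped to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # classification loss J(theta, x, y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Comparing a model's accuracy on x versus fgsm_attack(model, x, y) corresponds to the with/without-AE evaluation described in the abstract.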

Presenter

Erikson Júlio de Aguiar
Univ. de São Paulo (Brazil)
Erikson Júlio de Aguiar is a Ph.D. student in Computer Science at the Institute of Mathematics and Computer Science (ICMC), University of São Paulo (USP); his research focuses on the security and privacy of machine learning applied to medical imaging. He completed his Master's degree in Computer Science at the same institute in 2021, and his B.Sc. in Computer Science at the State University of Northern Paraná (UENP) in 2017. His main research interests include Security & Privacy, Machine Learning, and Medical Imaging.
Presenter/Author
Univ. de São Paulo (Brazil)
Author
Univ. de São Paulo (Brazil)
Author
Univ. de São Paulo (Brazil)
Author
Instituto do Coração do Hospital das Clínicas (Brazil), Univ. de São Paulo (Brazil)
Author
Univ. de São Paulo (Brazil)
Author
Univ. de São Paulo (Brazil)