
Proceedings Paper

Evaluating data distribution and drift vulnerabilities of machine learning algorithms in secure and adversarial environments
Author(s): Kevin Nelson; George Corbin; Misty Blowers

Paper Abstract

Machine learning continues to gain popularity due to its ability to solve problems that are difficult to model using conventional computer programming logic. Much of the current and past work has focused on algorithm development, data processing, and optimization. Lately, a subset of research has emerged which explores issues related to security. This research is gaining traction as systems employing these methods are applied to both secure and adversarial environments. One of machine learning’s biggest benefits, its data-driven rather than logic-driven approach, is also a weakness if the data on which the models rely are corrupted. Adversaries could maliciously influence systems that address drift and data distribution changes through re-training and online learning. Our work focuses on exploring the resilience of various machine learning algorithms to these data-driven attacks. In this paper, we present our initial findings, using Monte Carlo simulations and statistical analysis to explore the maximal achievable shift to a classification model, as well as the required amount of control over the data.
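To make the kind of experiment described above concrete, the following is a minimal sketch (not the authors' code) of a Monte Carlo poisoning simulation: an adversary who controls a fraction of the re-training data injects mislabeled points at an extreme feature value, and the resulting shift of a simple classifier's decision boundary is averaged over many trials. The classifier (scikit-learn logistic regression), the toy Gaussian data, the injection strategy, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: Monte Carlo estimate of decision-boundary shift under
# data poisoning. All names, distributions, and parameters are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_clean_data(n=500):
    """Two Gaussian classes on a single feature (assumed toy distribution)."""
    x0 = rng.normal(-1.0, 1.0, n)
    x1 = rng.normal(+1.0, 1.0, n)
    X = np.concatenate([x0, x1]).reshape(-1, 1)
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def decision_boundary(clf):
    """x where P(y=1 | x) = 0.5 for a 1-D logistic regression."""
    return -clf.intercept_[0] / clf.coef_[0, 0]

def poisoned_boundary_shift(control_frac, n_trials=100):
    """Average boundary shift when the adversary controls `control_frac`
    of the training set fed to a re-training / online-learning pipeline."""
    shifts = []
    for _ in range(n_trials):
        X, y = make_clean_data()
        b0 = decision_boundary(LogisticRegression().fit(X, y))

        # Adversary injects class-0 labels at a large positive feature value.
        n_poison = int(control_frac * len(y))
        Xp = np.vstack([X, rng.normal(3.0, 0.2, n_poison).reshape(-1, 1)])
        yp = np.concatenate([y, np.zeros(n_poison)])

        shifts.append(decision_boundary(LogisticRegression().fit(Xp, yp)) - b0)
    return np.mean(shifts), np.std(shifts)

if __name__ == "__main__":
    for frac in (0.01, 0.05, 0.10, 0.25):
        mean_shift, sd = poisoned_boundary_shift(frac)
        print(f"adversary controls {frac:.0%} -> "
              f"mean boundary shift {mean_shift:+.2f} (sd {sd:.2f})")
```

Sweeping the controlled fraction, as in the loop above, is one simple way to relate the achievable model shift to the amount of data the adversary must control.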

Paper Details

Date Published: 22 May 2014
PDF: 11 pages
Proc. SPIE 9119, Machine Intelligence and Bio-inspired Computation: Theory and Applications VIII, 911904 (22 May 2014); doi: 10.1117/12.2053045
Author Affiliations:
Kevin Nelson, BAE Systems (United States)
George Corbin, BAE Systems (United States)
Misty Blowers, Air Force Research Lab. (United States)


Published in SPIE Proceedings Vol. 9119:
Machine Intelligence and Bio-inspired Computation: Theory and Applications VIII
Misty Blowers; Jonathan Williams, Editor(s)

© SPIE.