
Proceedings Paper

Model poisoning attacks against distributed machine learning systems
Authors: Richard Tomsett; Kevin Chan; Supriyo Chakraborty

Paper Abstract

Future military coalition operations will increasingly rely on machine learning (ML) methods to improve situational awareness. The coalition context presents unique challenges for ML: the tactical environment imposes significant computing and communications limitations while also exposing systems to an adversarial presence. Further, coalition operations must be carried out in a distributed manner while coping with the constraints posed by the operational environment. Envisioned ML deployments in military assets must be resilient to these challenges. Here, we focus on the susceptibility of ML models to being poisoned (during training) or fooled (after training) by adversarial inputs. We review recent work on distributed adversarial ML and present new results from our own investigations into model poisoning attacks on distributed learning systems without a central parameter aggregation node.
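
To make the attack setting concrete, the following is a minimal illustrative sketch (not taken from the paper) of one decentralized averaging round in which a single malicious node submits a boosted parameter update, one common form of model poisoning. The ring topology, the boosting factor, the attacker's target model, and all variable names are assumptions chosen for illustration only.

# Toy model poisoning in a decentralized (serverless) averaging round.
# Illustrative only; all choices here are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

NUM_NODES = 5
DIM = 10            # toy model: a single parameter vector
MALICIOUS = 2       # index of the poisoning node (assumed)
BOOST = 10.0        # attacker scales its update to dominate the average

# Every node starts from a shared initial model.
global_init = rng.normal(size=DIM)
models = [global_init.copy() for _ in range(NUM_NODES)]

def local_update(theta, node):
    """Honest nodes take a small gradient-like step; the attacker instead
    moves toward an arbitrary target model and boosts the difference."""
    if node == MALICIOUS:
        target = np.ones(DIM)                      # attacker's desired model (assumed)
        return theta + BOOST * (target - theta)
    return theta - 0.1 * rng.normal(size=DIM)      # stand-in for an honest SGD step

# One communication round on a ring topology: each node averages its model
# with its two neighbours; no central parameter aggregation node is involved.
updated = [local_update(models[i], i) for i in range(NUM_NODES)]
new_models = []
for i in range(NUM_NODES):
    neighbours = [updated[(i - 1) % NUM_NODES],
                  updated[i],
                  updated[(i + 1) % NUM_NODES]]
    new_models.append(np.mean(neighbours, axis=0))

# The boosted update drags the attacker's neighbours far from the honest models.
for i, m in enumerate(new_models):
    print(f"node {i}: distance from initial model = {np.linalg.norm(m - global_init):.2f}")

Running the sketch shows the nodes adjacent to the malicious node drifting far from the honest consensus after a single round, which is the basic effect that poisoning defenses in decentralized learning aim to detect or bound.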

Paper Details

Date Published: 10 May 2019
PDF: 9 pages
Proc. SPIE 11006, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 110061D (10 May 2019); doi: 10.1117/12.2520275
Author Affiliations:
Richard Tomsett, IBM United Kingdom Ltd. (United Kingdom)
Kevin Chan, U.S. Army Research Lab. (United States)
Supriyo Chakraborty, IBM Thomas J. Watson Research Ctr. (United States)


Published in SPIE Proceedings Vol. 11006:
Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications
Tien Pham, Editor

© SPIE.