Less Than One Week Left: Comment on NIST’s New Adversarial Machine Learning Report


The National Cybersecurity Center of Excellence (NCCoE) recently published a draft of its latest cybersecurity guide, National Institute of Standards and Technology (NIST) Interagency/Internal Report (NISTIR) 8269, A Taxonomy and Terminology of Adversarial Machine Learning. The public comment period is currently open and will close on Monday, December 16, 2019.

This document was developed as a step toward securing applications of artificial intelligence (AI), especially against adversarial manipulations of machine learning (ML). Although AI also includes various knowledge-based systems, the data-driven approach of ML introduces additional security challenges during both the training and testing (inference) phases of system operation. Adversarial Machine Learning (AML) is concerned with the design of ML algorithms that can resist these security challenges, the study of attackers' capabilities, and the understanding of attack consequences.
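To make the inference-phase threat concrete, one well-known class of attack is the evasion attack, in which an attacker slightly perturbs an input so a trained model misclassifies it. Below is a minimal, self-contained sketch of a one-step gradient-sign (FGSM-style) perturbation against a simple logistic-regression classifier. The model weights, inputs, and epsilon value are illustrative assumptions, not drawn from NISTIR 8269.

```python
import math

def predict(w, b, x):
    """Logistic-regression probability of class 1 (illustrative model)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

def fgsm_perturb(w, b, x, y, eps):
    """One-step gradient-sign evasion: nudge each feature of x in the
    direction that increases the logistic loss for the true label y."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]  # d(loss)/dx for logistic loss
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# Hypothetical trained weights and a clean input of true class 1
w, b = [2.0, -1.5], 0.1
x, y = [0.8, 0.3], 1

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x))      # clean input: classified as class 1 (p > 0.5)
print(predict(w, b, x_adv))  # perturbed input: flips to class 0 (p < 0.5)
```

A small epsilon applied in the right direction is enough to flip the decision, which is why the taxonomy's treatment of inference-time attacks matters even for well-trained models.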

The public is invited to review and comment on the findings and considerations published in the draft NISTIR 8269. This document develops a taxonomy of concepts and defines terminology in the field of AML. The taxonomy, which builds on and integrates previous AML survey work, is arranged in a conceptual hierarchy that includes key types of attacks, defenses, and consequences. The terminology, arranged in an alphabetical glossary, defines key terms associated with the security of the ML components of an AI system. Taken together, the terminology and taxonomy are intended to inform future standards and best practices for assessing and managing the security of ML components by establishing a common language and understanding of the rapidly developing AML landscape.

The public comment period for this document closes on December 16, 2019. See the publication details for a copy of the document and instructions for submitting comments.

If you are interested in following the developments of this guide and future NIST AI research, please email ai-nccoe@nist.gov.