Adversarial Perturbation Defense on Deep Neural Networks


References (157)

Publisher
Association for Computing Machinery
Copyright
Copyright © 2021 Association for Computing Machinery.
ISSN
0360-0300
eISSN
1557-7341
DOI
10.1145/3465397

Abstract

Deep neural networks (DNNs) have been shown to be easily attacked by well-designed adversarial perturbations. Images carrying small perturbations that are imperceptible to the human eye can induce DNN-based image classifiers to make erroneous predictions with high probability. Adversarial perturbations can also fool real-world machine learning systems and transfer between different architectures and datasets. Defending against adversarial perturbations has recently become a hot topic and attracted much attention. A large number of works have been proposed to defend against adversarial perturbations, enhance DNN robustness against potential attacks, or explain the origin of adversarial perturbations. In this article, we provide a comprehensive survey of classical and state-of-the-art defense methods, illuminating their main concepts, underlying algorithms, and fundamental hypotheses about the origin of adversarial perturbations. We further discuss potential directions in this domain for future researchers.
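
As a minimal illustration of the kind of perturbation the abstract describes (not a method taken from the article itself), the sketch below implements the fast gradient sign method (FGSM) of Goodfellow et al. in PyTorch. The classifier model, input batch x (pixel values in [0, 1]), and labels y are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        # Fast gradient sign method: a one-step, L-infinity-bounded attack.
        # model: any differentiable classifier returning logits (placeholder).
        # x: image batch with pixels in [0, 1]; y: ground-truth labels.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Shift each pixel by +/- epsilon in the direction that increases the loss.
        x_adv = x + epsilon * x.grad.sign()
        # Clamp back to the valid pixel range so the change stays small.
        return x_adv.clamp(0.0, 1.0).detach()

With epsilon around 0.03 (roughly 8/255), the perturbed images are typically indistinguishable from the originals to a human observer yet can flip the predictions of an undefended classifier, which is the failure mode the surveyed defenses aim to mitigate.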

Journal

ACM Computing Surveys (CSUR), Association for Computing Machinery

Published: Oct 4, 2021

Keywords: Adversarial perturbation defense
