Deep neural networks (DNNs) have been shown to be vulnerable to well-designed adversarial perturbations. Images carrying small perturbations imperceptible to the human eye can induce DNN-based image classifiers to make erroneous predictions with high probability. Adversarial perturbations can also fool real-world machine learning systems and transfer across different architectures and datasets. Defending against adversarial perturbations has recently become an active research topic and attracted much attention. A large number of works have been proposed to defend against adversarial perturbations, to enhance DNN robustness against potential attacks, or to interpret the origin of adversarial perturbations. In this article, we provide a comprehensive survey of classical and state-of-the-art defense methods, illuminating their main concepts, underlying algorithms, and the fundamental hypotheses about the origin of adversarial perturbations. We also discuss promising directions for future research in this domain.
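The abstract's central premise, that a small input perturbation can flip a model's prediction, can be illustrated with the fast gradient sign method (FGSM) of Goodfellow et al., one of the classical attacks the survey covers. The sketch below is a toy example under stated assumptions: it attacks a hand-set logistic classifier rather than a deep network, and the weights, input, and step size are invented for illustration, not taken from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fgsm_perturb(x, w, b, y, eps):
    """Move x one step in the direction that increases the classifier's loss.

    For logistic cross-entropy, the gradient of the loss w.r.t. the input is
    (p - y) * w, so FGSM adds eps * sign(gradient) to each coordinate: an
    L-infinity-bounded perturbation of at most eps per coordinate.
    """
    p = sigmoid(dot(w, x) + b)                 # model's confidence for class 1
    grad = [(p - y) * wi for wi in w]          # d(loss)/dx, coordinate-wise
    return [xi + eps * ((g > 0) - (g < 0)) for xi, g in zip(x, grad)]

w, b = [1.0, -2.0, 0.5], 0.0                   # fixed "trained" linear classifier
x, y = [0.3, -0.2, 0.1], 1.0                   # clean input with true label 1

clean_score = sigmoid(dot(w, x) + b)           # > 0.5: correctly classified
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
adv_score = sigmoid(dot(w, x_adv) + b)         # < 0.5: prediction flipped
print(round(clean_score, 3), round(adv_score, 3))
```

Here eps = 0.5 is deliberately large so the flip is visible on a three-dimensional toy; image-domain attacks in the surveyed literature use much smaller per-pixel budgets (e.g. 8/255), which is what makes the perturbations imperceptible.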
ACM Computing Surveys (CSUR) – Association for Computing Machinery
Published: Oct 4, 2021
Keywords: Adversarial perturbation defense