The success of classification learning has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static: they make a deliberate effort to evade the classifiers. We investigate both the problem of modeling the objectives of such adversaries and the algorithmic problem of accounting for rational, objective-driven adversaries. We first present a general approach based on mixed-integer linear programming (MILP) with constraint generation, the first to compute an optimal solution to adversarial loss minimization for two general classes of adversarial evasion models over binary feature spaces. To further improve scalability and significantly broaden the scope of the MILP-based method, we propose a principled iterative retraining framework that can be used with arbitrary classifiers and essentially arbitrary attack models. We show that the retraining approach, when it converges, minimizes an upper bound on adversarial loss. Extensive experiments demonstrate that the mixed-integer programming approach significantly outperforms several state-of-the-art adversarial learning alternatives, and that the retraining framework performs nearly as well while scaling significantly better. Finally, we show that our approach is robust to misspecification of the adversarial model.
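The iterative retraining idea described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a binary feature space, a plain logistic-regression learner, and a greedy budget-constrained feature-flipping attacker. The function names (`train_logreg`, `evade`, `adversarial_retrain`) are illustrative, and the paper's actual evasion models and MILP formulation are considerably richer.

```python
import numpy as np

def train_logreg(X, y, epochs=300, lr=0.5):
    """Fit a plain logistic regression by batch gradient descent (labels in {0, 1})."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(malicious)
        g = p - y                                # gradient of the log loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def evade(x, w, b, budget):
    """Greedy evasion over binary features: flip up to `budget` features,
    largest score reduction first, stopping once x is classified benign."""
    x = x.copy()
    for _ in range(budget):
        if x @ w + b < 0:                 # already evades the classifier
            return x
        delta = (1 - 2 * x) * w           # score change from flipping each feature
        j = int(np.argmin(delta))
        if delta[j] >= 0:                 # no single flip lowers the score
            return x
        x[j] = 1 - x[j]
    return x

def adversarial_retrain(X, y, budget=3, rounds=20):
    """Iterative retraining: repeatedly add successful evasions of the
    malicious training points (still labeled malicious) and refit."""
    X_aug, y_aug = X.copy(), y.copy()
    w, b = train_logreg(X_aug, y_aug)
    for _ in range(rounds):
        evaded = [evade(x, w, b, budget) for x in X[y == 1]]
        evaded = [x for x in evaded if x @ w + b < 0]  # keep successful evasions
        if not evaded:                    # fixed point: attacker can no longer evade
            break
        X_aug = np.vstack([X_aug] + evaded)
        y_aug = np.concatenate([y_aug, np.ones(len(evaded))])
        w, b = train_logreg(X_aug, y_aug)
    return w, b

# Toy data (synthetic): malicious points carry indicative features 0-2,
# benign points are sparse random noise.
rng = np.random.default_rng(0)
d = 10
X_ben = (rng.random((40, d)) < 0.15).astype(float)
X_mal = (rng.random((40, d)) < 0.15).astype(float)
X_mal[:, :3] = 1.0
X = np.vstack([X_ben, X_mal])
y = np.concatenate([np.zeros(40), np.ones(40)])
w, b = adversarial_retrain(X, y, budget=2)
```

If the loop reaches its fixed point, no malicious training point can evade within the attacker's budget, which is the intuition behind the paper's result that the converged retrained classifier minimizes an upper bound on adversarial loss.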
ACM Transactions on Knowledge Discovery from Data (TKDD) – Association for Computing Machinery
Published: Jun 8, 2018
Keywords: Adversarial classification