Foundations of Adversarial Learning
Daniel Lowd, University of Washington
Christopher Meek, Microsoft Research
Pedro Domingos, University of Washington

Motivation
- Many adversarial problems: spam filtering, intrusion detection, malware detection, and new ones every year!
- We want general-purpose solutions.
- We can gain much insight by modeling adversarial situations mathematically.

Example: Spam Filtering
- Feature weights: cheap = 1.0, mortgage = 1.5
- Message: "Cheap mortgage now!!!"
- Total score = 2.5 > 1.0 (threshold), so the message is classified as spam.

Example: Spammers Adapt
- Feature weights: cheap = 1.0, mortgage = 1.5, Cagliari = -1.0, Sardinia = -1.0
- Message: "Cheap mortgage now!!! Cagliari Sardinia"
- Total score = 0.5 < 1.0 (threshold), so the message is classified as OK.

Example: Classifier Adapts
- Feature weights: cheap = 1.5, mortgage = 2.0, Cagliari = -0.5, Sardinia = -0.5
- Message: "Cheap mortgage now!!! Cagliari Sardinia"
- Total score = 2.5 > 1.0 (threshold), so the message is classified as spam again.
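
To make the running example concrete, here is a minimal Python sketch of the filter above. The weights and the 1.0 threshold come from the slides; the scoring function and word lists are my own illustration.

```python
THRESHOLD = 1.0  # from the example slides

def score(words, weights):
    """Sum the weights of the distinct words in the message."""
    return sum(weights.get(w, 0.0) for w in set(words))

def classify(words, weights):
    return "Spam" if score(words, weights) > THRESHOLD else "OK"

original = {"cheap": 1.0, "mortgage": 1.5, "cagliari": -1.0, "sardinia": -1.0}
adapted  = {"cheap": 1.5, "mortgage": 2.0, "cagliari": -0.5, "sardinia": -0.5}

spam         = "cheap mortgage now".split()
evasive_spam = "cheap mortgage now cagliari sardinia".split()

print(classify(spam, original))          # Spam (score 2.5 > 1.0)
print(classify(evasive_spam, original))  # OK   (score 0.5 < 1.0)
print(classify(evasive_spam, adapted))   # Spam (score 2.5 > 1.0)
```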

Outline
- Problem definitions
- Anticipating adversaries (Dalvi et al., 2004)
  - Goal: defeat adaptive adversary
  - Assume: perfect information, optimal short-term strategies
  - Results: vastly better classifier accuracy
- Reverse engineering classifiers (Lowd & Meek, 2005a,b)
  - Goal: assess classifier vulnerability
  - Assume: membership queries from adversary
  - Results: theoretical bounds, practical attacks
- Conclusion

Definitions
- Instance space: X = {X1, X2, …, Xn}, where each Xi is a feature; instances x ∈ X (e.g., emails)
- Classifier: c(x): X → {+, −}, with c ∈ C, a concept class (e.g., linear classifiers)
- Adversarial cost function: a(x): X → R, with a ∈ A (e.g., more legible spam is better)

Adversarial scenario
- Classifier's task: choose a new c'(x) to minimize (cost-sensitive) error.
- Adversary's task: choose x to minimize a(x) subject to c(x) = −.

This is a game!
- Adversary's actions: {x ∈ X}
- Classifier's actions: {c ∈ C}
- Assume perfect information.
- Finding a Nash equilibrium is triply exponential (at best)!
- Instead, we look at optimal myopic strategies: the best action assuming nothing else changes.

Initial classifier
- Set weights using cost-sensitive naïve Bayes.
- Assume: training data is untainted.
- Learned weights: cheap = 1.0, mortgage = 1.5, Cagliari = -1.0, Sardinia = -1.0

Adversary's strategy
- Use the cost function a(x) = Σi w(xi, bi).
- Solve the resulting knapsack-like problem with dynamic programming.
- Assume: the classifier will not modify c(x).
- Example: "Cheap mortgage now!!!" becomes "Cheap mortgage now!!! Cagliari Sardinia".
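
A minimal sketch of that dynamic program, under assumptions of my own: each candidate edit has a cost and an integer score reduction ("gain"), and the adversary wants the cheapest edit set that drops the filter score below the threshold.

```python
def cheapest_evasion(score, threshold, edits):
    """edits: list of (cost, gain) pairs with integer gains.
    Returns (min_cost, indices of the chosen edits)."""
    need = score - threshold + 1            # total gain required (integer scores)
    if need <= 0:
        return 0, []                        # already classified as OK
    INF = float("inf")
    n = len(edits)
    # dp[i][g] = min cost using the first i edits to reach total gain >= g
    dp = [[INF] * (need + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = 0
    for i, (cost, gain) in enumerate(edits, start=1):
        for g in range(1, need + 1):
            dp[i][g] = dp[i - 1][g]                      # skip edit i
            take = dp[i - 1][max(0, g - gain)] + cost    # apply edit i
            if take < dp[i][g]:
                dp[i][g] = take
    if dp[n][need] == INF:
        return INF, None                                 # evasion impossible
    chosen, g = [], need
    for i in range(n, 0, -1):                            # recover the edit set
        if dp[i][g] != dp[i - 1][g]:
            chosen.append(i - 1)
            g = max(0, g - edits[i - 1][1])
    return dp[n][need], chosen[::-1]

# Toy usage (scores scaled by 10 to make them integers): the example message
# scores 25 against a threshold of 10; adding "Cagliari" or "Sardinia"
# costs 1 each and removes 10 points.
print(cheapest_evasion(25, 10, [(1, 10), (1, 10)]))   # (2, [0, 1])
```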

Classifier's strategy
- For a given x, compute the probability that it was modified by the adversary.
- Assume: the adversary is using the optimal strategy.
- Weights before updating: cheap = 1.0, mortgage = 1.5, Cagliari = -1.0, Sardinia = -1.0

Classifier's strategy
- For a given x, compute the probability that it was modified by the adversary.
- Assume: the adversary is using the optimal strategy.
- Weights after updating: cheap = 1.5, mortgage = 2.0, Cagliari = -0.5, Sardinia = -0.5

Evaluation: spam
- Data: Email-Data
- Scenarios: Plain (PL), Add Words (AW), Synonyms (SYN), Add Length (AL)
- Similar results with Ling-Spam and with different classifier costs.
[Chart: classifier score under each scenario]

Repeated Game
- The adversary responds to the new classifier; the classifier predicts the adversary's revised response.
- Oscillations occur as adversaries switch strategies back and forth.

Outline
- Problem definitions
- Anticipating adversaries (Dalvi et al., 2004)
  - Goal: defeat adaptive adversary
  - Assume: perfect information, optimal short-term strategies
  - Results: vastly better classifier accuracy
- Reverse engineering classifiers (Lowd & Meek, 2005a,b)
  - Goal: assess classifier vulnerability
  - Assume: membership queries from adversary
  - Results: theoretical bounds, practical attacks
- Conclusion

Imperfect information
- What can an adversary accomplish with limited knowledge of the classifier?
- Goals:
  - Understand the classifier's vulnerabilities
  - Understand our adversary's likely strategies
"If you know the enemy and know yourself, you need not fear the result of a hundred battles." -- Sun Tzu, 500 BC

Adversarial Classification Reverse Engineering (ACRE)
- Adversary's task: minimize a(x) subject to c(x) = −.
- Problem: the adversary doesn't know c(x)!

Adversarial Classification Reverse Engineering (ACRE)
- Task: minimize a(x), within a factor of k, subject to c(x) = −.
- Given:
  - Full knowledge of a(x)
  - One positive and one negative instance, x+ and x−
  - A polynomial number of membership queries

Comparison to other theoretical learning methods
- Probably Approximately Correct (PAC) learning: requires accuracy over the same distribution.
- Membership-query learning: must recover the exact classifier.
- ACRE: needs only a single low-cost negative instance.

ACRE example
- Linear classifier: c(x) = + iff w · x > T
- Linear cost function: a(x) = Σi ai |xi − xi^a|, the weighted distance from the adversary's ideal instance x^a.

Linear classifiers with continuous features
- ACRE-learnable within a factor of (1 + ε) under linear cost functions.
- Proof sketch:
  - We only need to change the highest weight/cost feature.
  - We can efficiently find this feature using line searches in each dimension (see the sketch below).
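
The line search itself can be sketched as follows, assuming only query access is_spam(x), an ideal instance x_a that is currently classified as spam, and a cost a_i per unit change of feature i (all names here are mine): double the step size until the class flips, then binary-search the boundary.

```python
def boundary_along(is_spam, x_a, i, direction, max_mag=1e6, tol=1e-6):
    """Distance along +/- e_i from x_a to the decision boundary, or None."""
    lo, hi = 0.0, 1.0
    probe = list(x_a)
    while hi <= max_mag:                     # doubling search for a bracket
        probe[i] = x_a[i] + direction * hi
        if not is_spam(probe):
            break
        lo, hi = hi, hi * 2.0
    else:
        return None                          # the class never flips this way
    while hi - lo > tol:                     # binary search inside the bracket
        mid = (lo + hi) / 2.0
        probe[i] = x_a[i] + direction * mid
        if is_spam(probe):
            lo = mid
        else:
            hi = mid
    return hi

def cheapest_single_feature_evasion(is_spam, x_a, feature_costs):
    """Try every feature in both directions; keep the cheapest flip."""
    best = None                              # (cost, feature index, new value)
    for i, a_i in enumerate(feature_costs):
        for direction in (+1.0, -1.0):
            d = boundary_along(is_spam, x_a, i, direction)
            if d is not None:
                cand = (a_i * d, i, x_a[i] + direction * d)
                if best is None or cand[0] < best[0]:
                    best = cand
    return best

# Toy usage: w = (1.0, 1.5), threshold 1.0, and x_a = (1.0, 1.0) is spam.
w, T = (1.0, 1.5), 1.0
is_spam = lambda x: w[0] * x[0] + w[1] * x[1] > T
print(cheapest_single_feature_evasion(is_spam, [1.0, 1.0], [1.0, 1.0]))
# -> approximately (1.0, 1, 0.0): lowering feature 1 to 0 is the cheapest evasion
```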

Linear classifiers with Boolean features
- Harder problem: we can't do line searches.
- ACRE-learnable within a factor of 2 if the adversary has unit cost per change.
[Diagram: the weights wi, wj, wk, wl, wm of c(x), with the adversary's instance x^a and the negative instance x−]

Algorithm
Iteratively reduce the cost in two ways:
1. Remove any unnecessary change: O(n)
2. Replace any two changes with one: O(n^3)
[Diagram: a candidate y derived from x−, and an improved y′ that swaps two changes for one]
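
A sketch of this loop in Python, with a representation of my own: a candidate instance is the set of Boolean "changes" applied to the adversary's ideal instance x_a, each change has unit cost, and is_negative() is a membership query.

```python
from itertools import combinations

def reduce_cost(is_negative, initial_changes, all_features):
    y = set(initial_changes)                 # changes that evade the classifier
    assert is_negative(y), "starting point must already be classified negative"
    improved = True
    while improved:
        improved = False
        # Step 1: remove any unnecessary change -- O(n) candidates.
        for f in sorted(y):
            if is_negative(y - {f}):
                y.remove(f)
                improved = True
        # Step 2: replace any two changes with one -- O(n^3) candidates.
        for f1, f2 in combinations(sorted(y), 2):
            for g in all_features:
                if g not in y and is_negative((y - {f1, f2}) | {g}):
                    y = (y - {f1, f2}) | {g}
                    improved = True
                    break
            if improved:
                break
    return y

# Toy usage: a linear filter over Boolean word features, threshold 1.0.
w = {"cheap": 1.0, "mortgage": 1.5, "cagliari": -1.0, "sardinia": -1.0}
x_a = {"cheap", "mortgage"}                  # the adversary's ideal spam
def is_negative(changes):
    words = x_a ^ changes                    # apply the changes to x_a
    return sum(w[word] for word in words) <= 1.0
print(reduce_cost(is_negative, {"cagliari", "sardinia"}, w.keys()))
# -> {'mortgage'}: one change (dropping "mortgage") replaces two additions
```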

Evaluation
- Classifiers: naïve Bayes (NB), maxent (ME)
- Data: 500k Hotmail messages, 276k features
- Adversary feature sets: 23,000 words (Dict); 1,000 random words (Rand)

            Cost    Queries
  Dict NB     23    261,000
  Dict ME     10    119,000
  Rand NB     31     23,000
  Rand ME     12      9,000

Comparison of Filter Weights
[Chart: filter weights ranked from "spammy" to "good"]

Finding features
- We can find good features (words) instead of good instances (emails).
- Active attacks: test emails allowed.
- Passive attacks: no filter access.

Active Attacks
- Learn which words are best by sending test messages (queries) through the filter.
- First-N: find n good words using as few queries as possible.
- Best-N: find the best n words.

First-N Attack
Step 1: Find a "barely spam" message, just above the threshold.
[Diagram: a score line from legitimate to spam with the threshold marked; the original legitimate message ("Hi, mom!") and the original spam ("Cheap mortgage now!!!") sit at the ends, with "barely legit." and "barely spam" ("mortgage now!!!") on either side of the threshold]

First-N Attack
Step 2: Test each word.
[Diagram: good words move the "barely spam" message below the threshold into legitimate territory; less good words leave it above]
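
A simplified sketch of both steps, under assumptions of my own: messages are word lists, is_spam() is the filter query, and the "barely spam" message is built by appending words from a known legitimate message to a spam message, stopping one word before the class flips.

```python
def barely_spam(is_spam, spam_words, legit_words):
    """Append legitimate words to a spam message, stopping just before it flips."""
    msg = list(spam_words)
    for word in legit_words:
        if not is_spam(msg + [word]):
            return msg                      # one more word would make it legit
        msg.append(word)
    raise ValueError("never reached the spam/legitimate boundary")

def first_n_good_words(is_spam, spam_words, legit_words, candidates, n):
    base = barely_spam(is_spam, spam_words, legit_words)   # Step 1
    good = []
    for word in candidates:                                # Step 2: one query per word
        if len(good) == n:
            break
        if not is_spam(base + [word]):                     # flipping the barely-spam
            good.append(word)                              # message marks a good word
    return good

# Toy usage with an assumed linear filter (threshold 1.0):
w = {"cheap": 1.0, "mortgage": 1.5, "hi": -0.8, "mom": -0.8,
     "cagliari": -1.0, "sardinia": -1.0, "pisa": -0.1}
is_spam = lambda words: sum(w.get(x, 0.0) for x in set(words)) > 1.0
print(first_n_good_words(is_spam, ["cheap", "mortgage"], ["hi", "mom"],
                         ["pisa", "cagliari", "sardinia"], n=2))
# -> ['cagliari', 'sardinia']
```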

Best-N Attack
Key idea: use spammy words to sort the good words.
[Diagram: better good words push a message further below the threshold than worse ones]
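
One way to realize this idea (my own simplification, not necessarily the paper's exact procedure): starting from a message just below the threshold, score each candidate by how many known spammy words it can absorb while the message stays legitimate; absorbing more indicates a more negative weight.

```python
def goodness(is_spam, barely_legit, word, spammy_words):
    """How many spammy words the candidate can absorb while staying legit."""
    msg = barely_legit + [word]
    if is_spam(msg):
        return -1                            # not a good word at all
    k = 0
    while k < len(spammy_words) and not is_spam(msg + spammy_words[:k + 1]):
        k += 1                               # one more spammy word absorbed
    return k

def best_n_words(is_spam, barely_legit, candidates, spammy_words, n):
    key = lambda c: goodness(is_spam, barely_legit, c, spammy_words)
    return sorted(candidates, key=key, reverse=True)[:n]

# Toy usage, reusing the running example's filter plus a few spammy probes:
w = {"cheap": 1.0, "mortgage": 1.5, "offer": 0.9, "cagliari": -1.0,
     "sardinia": -1.0, "pisa": -0.1, "viagra": 0.3, "winner": 0.3, "prize": 0.3}
is_spam = lambda words: sum(w.get(x, 0.0) for x in set(words)) > 1.0
print(best_n_words(is_spam, ["offer"], ["pisa", "cagliari", "sardinia"],
                   ["viagra", "winner", "prize"], n=2))
# -> ['cagliari', 'sardinia']
```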

Results

  Attack type    Naïve Bayes: words (queries)    Maxent: words (queries)
  First-N         59 (3,100)                      20 (4,300)
  Best-N          29 (62,000)                      9 (69,000)
  ACRE (Rand)     31* (23,000)                    12* (9,000)

  * words added + words removed

Passive Attacks
- Heuristics:
  - Select random dictionary words (Dictionary)
  - Select the most frequent English words (Freq. Word)
  - Select words with the highest ratio of English frequency to spam frequency (Freq. Ratio)
- Spam corpus: spamarchive.org
- English corpora: Reuters news articles, written English, spoken English, 1992 USENET
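
A sketch of the Freq. Ratio heuristic; the corpora here are toy placeholders for the ones listed above, and the smoothing constant is just to avoid division by zero.

```python
from collections import Counter

def freq_ratio_words(english_tokens, spam_tokens, n, smoothing=1e-6):
    """Rank words by English frequency divided by spam frequency."""
    eng, spam = Counter(english_tokens), Counter(spam_tokens)
    eng_total, spam_total = sum(eng.values()), sum(spam.values())
    def ratio(word):
        eng_freq = eng[word] / eng_total
        spam_freq = (spam[word] + smoothing) / (spam_total + smoothing)
        return eng_freq / spam_freq
    return sorted(eng, key=ratio, reverse=True)[:n]

# Toy usage:
english = "the meeting minutes were approved by the committee".split()
spam = "cheap cheap mortgage winner winner prize money".split()
print(freq_ratio_words(english, spam, n=3))
# -> ['the', 'meeting', 'minutes']
```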

Passive Attack Results

Results

  Attack type    Naïve Bayes: words (queries)    Maxent: words (queries)
  First-N         59 (3,100)                      20 (4,300)
  Best-N          29 (62,000)                      9 (69,000)
  ACRE (Rand)     31* (23,000)                    12* (9,000)
  Passive        112 (0)                         149 (0)

  * words added + words removed

Conclusion
- Mathematical modeling is a powerful tool in adversarial situations.
  - Game theory lets us make classifiers aware of and resistant to adversaries.
  - Complexity arguments let us explore the vulnerabilities of our own systems.
- This is only the beginning…
  - Can we weaken our assumptions?
  - Can we expand our scenarios?

Proof sketch (Contradiction)
- Suppose there is some negative instance x with less than half the cost of y.
- Then x's average change is twice as good as y's.
- So we could replace y's two worst changes with x's single best change.
- But the algorithm already tried every such replacement, a contradiction.
[Diagram: the changes of y and x over the weights wi … wr of c(x)]