
1 Artificial Intelligence in Medicine HCA 590 (Topics in Health Sciences) Rohit Kate 8. Machine Learning, Support Vector Machines Many of the slides have been adapted from Ray Mooney’s Machine Learning course at UT Austin.

2 Reading Chapter 3, Computational Intelligence in Biomedical Engineering by Rezaul Begg, Daniel T.H. Lai, Marimuthu Palaniswami, CRC Press 2007.

3 What is Learning? Herbert Simon: “Learning is any process by which a system improves performance from experience.” What is the task? –Classification –Problem solving / planning / control

4 Classification Assign object/event to one of a given finite set of categories. –Medical diagnosis –Radiology images –Credit card applications or transactions –Fraud detection in e-commerce –Worm detection in network packets –Spam filtering in email –Recommended articles in a newspaper –Recommended books, movies, music, or jokes –Financial investments –DNA sequences –Spoken words –Handwritten letters –Astronomical images

5 Problem Solving / Planning / Control Performing actions in an environment in order to achieve a goal. –Solving calculus problems –Playing checkers, chess, or backgammon –Balancing a pole –Driving a car or a jeep –Flying a plane, helicopter, or rocket –Controlling an elevator –Controlling a character in a video game –Controlling a mobile robot

6 Measuring Performance –Classification Accuracy –Solution correctness –Solution quality (length, efficiency) –Speed of performance

7 Why Study Machine Learning? Engineering Better Computing Systems Develop systems that are too difficult/expensive to construct manually because they require specific detailed skills or knowledge tuned to a specific task (knowledge engineering bottleneck). Develop systems that can automatically adapt and customize themselves to individual users. –Personalized news or mail filter –Personalized tutoring Discover new knowledge from large databases (data mining). –Market basket analysis (e.g. diapers and beer) –Medical text mining (e.g. drugs and their adverse effects)

8 Why Study Machine Learning? Cognitive Science Computational studies of learning may help us understand learning in humans and other biological organisms. –Hebbian neural learning “Neurons that fire together, wire together.” –Humans’ relative difficulty of learning disjunctive concepts vs. conjunctive ones. –Power law of practice: performance time decreases as a power law of the number of training trials (a straight line on a plot of log(perf. time) against log(# training trials)).

9 Why Study Machine Learning? The Time is Ripe Many basic effective and efficient algorithms available. Large amounts of on-line data available. Large amounts of computational resources available.

10 Related Disciplines Artificial Intelligence Data Mining Probability and Statistics Information theory Numerical optimization Computational complexity theory Control theory (adaptive) Psychology (developmental, cognitive) Neurobiology Linguistics Philosophy

11 Defining the Learning Task Improve on task, T, with respect to performance metric, P, based on experience, E. T: Recognizing hand-written words P: Percentage of words correctly classified E: Database of human-labeled images of handwritten words T: Categorize email messages as spam or legitimate. P: Percentage of email messages correctly classified. E: Database of emails, some with human-given labels

12 Designing a Learning System Choose the training experience –Training examples –Features Choose exactly what is to be learned, i.e. the target function. Choose how to represent the target function. Choose a learning algorithm to infer the target function from the experience.

13 An Example of Learning Task Task: Predict the class of Iris plant (Iris Setosa, Iris Versicolor, Iris Virginica) from the dimensions of its sepals and petals –http://archive.ics.uci.edu/ml/datasets/Iris Features: –Sepal length in cm –Sepal width in cm –Petal length in cm –Petal width in cm Manually (expert) label some examples, which become the training examples
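
The data set at the UCI link above also ships with scikit-learn; a minimal sketch of loading it and inspecting one expert-labeled example (assuming scikit-learn is installed):

from sklearn.datasets import load_iris

iris = load_iris()
print(iris.feature_names)            # sepal length/width and petal length/width, in cm
print(iris.target_names)             # ['setosa' 'versicolor' 'virginica']
print(iris.data[0], iris.target[0])  # feature values and expert label of the first example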

14 An Example of Learning Task Training Examples [table with columns Example, Sepal length, Sepal width, Petal length, Petal width, Class; rows of measured feature values with class labels Setosa, Setosa, Versicolor, Virginica, Virginica, …]

15 An Example of Learning Task Training Examples [same table, with the four measurement columns marked as Features]

16 An Example of Learning Task Training Examples [same table, with the cell entries marked as Feature values]

17 An Example of Learning Task Training Examples [same table, with the Class column marked as Expert labels]

18 An Example of Learning Task Training Examples and Test Examples [the training table as above, plus a test-example table whose Class entries are “??” (unknown labels)]

19 An Example of Learning Task How to relate features to the class (target)? Decide a representation for the target function: f(SL,SW,PL,PW) → {Setosa, Versicolor, Virginica} An example: x = w0 + w1*SL + w2*SW + w3*PL + w4*PW y = v0 + v1*SL + v2*SW + v3*PL + v4*PW f(sl,sw,pl,pw) = Setosa if x < 0; Versicolor if x >= 0 and y < 0; Virginica if x >= 0 and y >= 0 Find the values of the parameters w0, w1, w2, w3, w4, v0, v1, v2, v3, v4 that fit the training data using a machine learning method. If the test examples are from the same distribution as the training examples (similar), then the learned function should predict classes for the test examples with good accuracy.
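
A minimal Python sketch of a target function of this form. The weight values below are hypothetical, hand-picked for illustration only; a machine learning method would estimate them from the training examples:

# Hypothetical parameters (NOT learned): x depends only on petal length, y only on petal width.
w = [-3.0, 0.0, 0.0, 1.0, 0.0]   # w0..w4, so x = PL - 3
v = [-1.7, 0.0, 0.0, 0.0, 1.0]   # v0..v4, so y = PW - 1.7

def predict(sl, sw, pl, pw):
    x = w[0] + w[1]*sl + w[2]*sw + w[3]*pl + w[4]*pw
    y = v[0] + v[1]*sl + v[2]*sw + v[3]*pl + v[4]*pw
    if x < 0:
        return "Setosa"
    return "Versicolor" if y < 0 else "Virginica"

print(predict(5.1, 3.5, 1.4, 0.2))   # small petal -> Setosa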

20 Classification and Regression Most learning tasks fall under two categories: Classification: The value to be predicted is a nominal value, for example, class of the plant, positive or negative diagnosis. Regression: The value to be predicted is a numerical value, for example, stock prices, energy expenditure. Most machine learning methods have both classification and regression versions.
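
For instance, many libraries expose the two versions of a method as a pair; a brief sketch with scikit-learn's decision trees (assuming scikit-learn is installed; the four training examples are made up):

from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

X = [[1.0, 2.0], [2.0, 1.0], [4.0, 5.0], [5.0, 4.0]]

# Classification: predict a nominal value (a class label).
clf = DecisionTreeClassifier().fit(X, ["negative", "negative", "positive", "positive"])
print(clf.predict([[4.5, 4.5]]))    # -> ['positive']

# Regression: predict a numerical value.
reg = DecisionTreeRegressor().fit(X, [1.1, 1.2, 4.8, 4.9])
print(reg.predict([[4.5, 4.5]]))    # -> a value near 4.8 or 4.9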

21 Feature Engineering Besides the machine learning method employed, performance depends largely on the features used. Coming up with the best features is a skill, called feature engineering. If the relevant features are not used, then the machine learning method will never be able to learn to predict the correct class. Extraneous features may confuse machine learning methods, although most methods are somewhat robust to them. Methods also exist to automatically search the space of possible features and select the best ones; these are called feature selection methods.
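
A short sketch of automatic feature selection with scikit-learn (assumed installed): SelectKBest scores each feature against the class labels and keeps only the top k.

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)              # 4 features: sepal/petal length and width
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)      # keep the 2 most informative features
print(selector.get_support())                  # boolean mask over the original 4 features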

22 Lessons Learned about Learning Learning can be viewed as using experience to approximate a chosen target function. Function approximation can be viewed as a search through a space of hypotheses (representations of functions) for one that best fits a set of training data. Different learning methods assume different hypothesis spaces (representation languages) and/or employ different search techniques.

23 Various Function Representations Numerical functions –Linear regression –Neural networks –Support vector machines Symbolic functions –Decision trees –Rules in propositional logic –Rules in first-order predicate logic Instance-based functions –Nearest-neighbor –Case-based Probabilistic Graphical Models –Naïve Bayes –Bayesian networks –Hidden-Markov Models (HMMs) –Probabilistic Context Free Grammars (PCFGs) –Markov networks

24 Various Search Algorithms Gradient descent –Perceptron –Backpropagation Dynamic Programming –HMM Learning –PCFG Learning Divide and Conquer –Decision tree induction –Rule learning Evolutionary Computation –Genetic Algorithms (GAs) –Genetic Programming (GP) –Neuro-evolution

25 Evaluation of Learning Systems Experimental –Conduct controlled cross-validation experiments to compare various methods on a variety of benchmark datasets. –Gather data on their performance, e.g. test accuracy, training-time, testing-time. –Analyze differences for statistical significance. Theoretical –Analyze algorithms mathematically and prove theorems about their: Computational complexity (how fast the algorithm runs) Ability to fit training data Sample complexity (number of training examples needed to learn an accurate function)
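
A small sketch of the experimental side with scikit-learn (assumed installed): 5-fold cross-validation of a classifier on the Iris data, reporting mean test accuracy and its variability.

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)   # one accuracy per fold
print(scores.mean(), scores.std())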

26 History of Machine Learning 1950s: –Samuel’s checker player –Selfridge’s Pandemonium 1960s: –Neural networks: Perceptron –Pattern recognition –Learning in the limit theory –Minsky and Papert prove limitations of Perceptron 1970s: –Symbolic concept induction –Winston’s arch learner –Expert systems and the knowledge acquisition bottleneck –Quinlan’s ID3 –Michalski’s AQ and soybean diagnosis –Scientific discovery with BACON –Mathematical discovery with AM

27 History of Machine Learning (cont.) 1980s: –Advanced decision tree and rule learning –Explanation-based Learning (EBL) –Learning and planning and problem solving –Utility problem –Analogy –Cognitive architectures –Resurgence of neural networks (connectionism, backpropagation) –Valiant’s PAC Learning Theory –Focus on experimental methodology 1990s: –Data mining –Adaptive software agents and web applications –Text learning –Reinforcement learning (RL) –Inductive Logic Programming (ILP) –Ensembles: Bagging, Boosting, and Stacking –Bayes Net learning

28 History of Machine Learning (cont.) 2000s: –Support vector machines –Kernel methods –Graphical models –Statistical relational learning –Transfer learning –Sequence labeling –Collective classification and structured outputs –Computer systems applications (compilers, debugging, graphics, security (intrusion, virus, and worm detection)) –Email management –Personalized assistants that learn –Learning in robotics and vision

29 Support Vector Machine (SVM) (slides from the University of Texas at Austin Machine Learning Group)

30 Linear Separators Binary classification can be viewed as the task of separating classes in feature space: the separating hyperplane is w^T x + b = 0, with w^T x + b > 0 on one side and w^T x + b < 0 on the other, and the classifier is f(x) = sign(w^T x + b).
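
A minimal numpy sketch of such a linear classifier (the weight vector w and bias b below are arbitrary illustrative values, not learned ones):

import numpy as np

w = np.array([1.0, -2.0])   # illustrative weight vector
b = 0.5                     # illustrative bias

def f(x):
    # Returns +1 on one side of the hyperplane w.x + b = 0 and -1 on the other.
    return np.sign(w @ x + b)

print(f(np.array([3.0, 1.0])), f(np.array([0.0, 2.0])))   # 1.0 -1.0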

31 Linear Separators Which of the linear separators is optimal?

32 Classification Margin Distance from example x_i to the separator is r = y_i (w^T x_i + b) / ||w||. Examples closest to the hyperplane are support vectors. Margin ρ of the separator is the distance between support vectors.
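
A numpy sketch of these quantities for a small made-up, linearly separable data set and a hand-chosen hyperplane (all values are illustrative):

import numpy as np

w, b = np.array([1.0, 1.0]), -5.0                    # hand-chosen hyperplane x1 + x2 = 5
X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
y = np.array([-1, -1, 1, 1])

r = y * (X @ w + b) / np.linalg.norm(w)   # distance of each example to the separator
print(r)                                  # the examples with the smallest r are the support vectors
print(2 * r.min())                        # margin: twice the distance of the closest examples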

33 Maximum Margin Classification Maximizing the margin is good according to intuition and PAC theory. Implies that only support vectors matter; other training examples are ignorable.

34 Linear SVMs Mathematically Formulate the optimization problem: Find w and b such that the margin ρ = 2/||w|| is maximized and for all (x_i, y_i), i=1..n: y_i(w^T x_i + b) ≥ 1. Which can be reformulated as: Find w and b such that Φ(w) = ||w||² = w^T w is minimized and for all (x_i, y_i), i=1..n: y_i(w^T x_i + b) ≥ 1.

35 Solving the Optimization Problem Find w and b such that Φ(w) = w^T w is minimized and for all (x_i, y_i), i=1..n: y_i(w^T x_i + b) ≥ 1. Need to optimize a quadratic function subject to linear constraints. Quadratic optimization problems are a well-known class of mathematical programming problems for which several (non-trivial) algorithms exist. The solution involves constructing a dual problem where a Lagrange multiplier α_i is associated with every inequality constraint in the primal (original) problem: Find α_1…α_n such that Q(α) = Σα_i - ½ ΣΣ α_i α_j y_i y_j x_i^T x_j is maximized and (1) Σα_i y_i = 0, (2) α_i ≥ 0 for all α_i.

36 Soft Margin Classification What if the training set is not linearly separable? Slack variables ξ_i can be added to allow misclassification of difficult or noisy examples; the resulting margin is called soft.

37 Soft Margin Classification Mathematically The old formulation: Find w and b such that Φ(w) = w^T w is minimized and for all (x_i, y_i), i=1..n: y_i(w^T x_i + b) ≥ 1. The modified formulation incorporates slack variables: Find w and b such that Φ(w) = w^T w + C Σξ_i is minimized and for all (x_i, y_i), i=1..n: y_i(w^T x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0. Parameter C can be viewed as a way to control overfitting: it “trades off” the relative importance of maximizing the margin and fitting the training data.
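
A brief sketch of how the C trade-off shows up in practice with scikit-learn's soft-margin SVM (assuming scikit-learn is installed); smaller C puts more weight on a wide margin, larger C on fitting the training data:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
for C in (0.01, 1.0, 100.0):
    acc = cross_val_score(SVC(kernel="linear", C=C), X, y, cv=5).mean()
    print(C, round(acc, 3))   # cross-validated accuracy for each setting of C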

38 Linear SVMs: Overview The classifier is a separating hyperplane. Most “important” training points are support vectors; they define the hyperplane. Quadratic optimization algorithms can identify which training points x_i are support vectors with non-zero Lagrange multipliers α_i. Both in the dual formulation of the problem and in the solution, training points appear only inside inner products: Find α_1…α_N such that Q(α) = Σα_i - ½ ΣΣ α_i α_j y_i y_j x_i^T x_j is maximized and (1) Σα_i y_i = 0, (2) 0 ≤ α_i ≤ C for all α_i. f(x) = Σα_i y_i x_i^T x + b
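
A hedged sketch of inspecting these quantities with scikit-learn (assumed installed): after fitting a linear SVM on a tiny made-up data set, the support vectors and the products α_i y_i are exposed as attributes of the model.

import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 1.5], [4.0, 4.0], [5.0, 4.5]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=10.0).fit(X, y)
print(clf.support_vectors_)          # the training points that define the hyperplane
print(clf.dual_coef_)                # alpha_i * y_i for each support vector
print(clf.coef_, clf.intercept_)     # w and b recovered from the dual solution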

39 Non-linear SVMs Datasets that are linearly separable with some noise work out great. But what are we going to do if the dataset is just too hard? How about… mapping the data to a higher-dimensional space, e.g. x → (x, x²)?
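
A tiny sketch of that idea: made-up 1-D data that no single threshold can separate become linearly separable after adding x² as a second dimension.

import numpy as np

x = np.array([-3.0, -2.0, -0.5, 0.5, 2.0, 3.0])
y = np.array([1, 1, -1, -1, 1, 1])          # the negative examples sit between the positives

phi = np.column_stack([x, x**2])            # map each point x to (x, x^2)
# In the mapped space the horizontal line x^2 = 2 separates the classes:
print(np.sign(phi[:, 1] - 2.0) == y)        # [ True  True  True  True  True  True ]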

40 Non-linear SVMs: Feature spaces General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x)

41 The “Kernel Trick” The linear classifier relies on the inner product between vectors, K(x_i, x_j) = x_i^T x_j. If every datapoint is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes K(x_i, x_j) = φ(x_i)^T φ(x_j). A kernel function is a function that is equivalent to an inner product in some feature space. Example: 2-dimensional vectors x = [x_1 x_2]; let K(x_i, x_j) = (1 + x_i^T x_j)². Need to show that K(x_i, x_j) = φ(x_i)^T φ(x_j): K(x_i, x_j) = (1 + x_i^T x_j)² = 1 + x_i1² x_j1² + 2 x_i1 x_j1 x_i2 x_j2 + x_i2² x_j2² + 2 x_i1 x_j1 + 2 x_i2 x_j2 = [1 x_i1² √2 x_i1 x_i2 x_i2² √2 x_i1 √2 x_i2]^T [1 x_j1² √2 x_j1 x_j2 x_j2² √2 x_j1 √2 x_j2] = φ(x_i)^T φ(x_j), where φ(x) = [1 x_1² √2 x_1 x_2 x_2² √2 x_1 √2 x_2]. Thus, a kernel function implicitly maps data to a high-dimensional space (without the need to compute each φ(x) explicitly).
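
A quick numerical check of this identity (a numpy sketch; the two example vectors are arbitrary):

import numpy as np

def phi(x):
    # Explicit feature map for the quadratic kernel (1 + x_i^T x_j)^2 in 2 dimensions.
    x1, x2 = x
    return np.array([1, x1**2, np.sqrt(2)*x1*x2, x2**2, np.sqrt(2)*x1, np.sqrt(2)*x2])

xi, xj = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print((1 + xi @ xj) ** 2)    # kernel computed directly: 4.0
print(phi(xi) @ phi(xj))     # same value via the explicit feature map: 4.0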

42 Examples of Kernel Functions Linear: K(x_i, x_j) = x_i^T x_j –Mapping Φ: x → φ(x), where φ(x) is x itself Polynomial of power p: K(x_i, x_j) = (1 + x_i^T x_j)^p –Mapping Φ: x → φ(x), where φ(x) has (d+p choose p) dimensions for d-dimensional input Gaussian (radial-basis function): K(x_i, x_j) = exp(-||x_i - x_j||² / (2σ²)) –Mapping Φ: x → φ(x), where φ(x) is infinite-dimensional: every point is mapped to a function (a Gaussian); a combination of functions for support vectors is the separator. Higher-dimensional space still has intrinsic dimensionality d, but linear separators in it correspond to non-linear separators in the original space.
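
The three kernels as plain functions, in a short numpy sketch (the σ value is an arbitrary illustration):

import numpy as np

def linear_kernel(xi, xj):
    return xi @ xj

def poly_kernel(xi, xj, p=2):
    return (1 + xi @ xj) ** p

def rbf_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2 * sigma ** 2))

xi, xj = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(linear_kernel(xi, xj), poly_kernel(xi, xj), rbf_kernel(xi, xj))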

43 Non-linear SVMs Mathematically Dual problem formulation: Find α_1…α_n such that Q(α) = Σα_i - ½ ΣΣ α_i α_j y_i y_j K(x_i, x_j) is maximized and (1) Σα_i y_i = 0, (2) α_i ≥ 0 for all α_i. The solution is: f(x) = Σα_i y_i K(x_i, x) + b. Optimization techniques for finding the α_i’s remain the same!
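
A numpy sketch of that decision function, assuming the support vectors, α_i, y_i, and b have already been produced by the optimizer (the values below are placeholders, not a solved SVM):

import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))

# Placeholder values standing in for the optimizer's output.
support_vectors = np.array([[1.0, 1.0], [3.0, 3.0]])
alphas = np.array([0.7, 0.7])
labels = np.array([-1, 1])
bias = 0.0

def f(x):
    # f(x) = sum_i alpha_i * y_i * K(x_i, x) + b
    return sum(a * y * rbf_kernel(sv, x)
               for a, y, sv in zip(alphas, labels, support_vectors)) + bias

print(np.sign(f(np.array([2.9, 2.9]))))   # closer to the positive support vector -> 1.0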

44 SVM applications SVMs were originally proposed by Boser, Guyon and Vapnik in 1992 and gained increasing popularity in the late 1990s. SVMs are currently among the best performers for a number of classification tasks ranging from text to genomic data. SVMs can be applied to complex data types beyond feature vectors (e.g. graphs, sequences, relational data) by designing kernel functions for such data. SVM techniques have been extended to a number of tasks such as regression [Vapnik et al. ’97], principal component analysis [Schölkopf et al. ’99], etc. Most popular optimization algorithms for SVMs use decomposition to hill-climb over a subset of the α_i’s at a time, e.g. SMO [Platt ’99] and [Joachims ’99]. Tuning SVMs remains a black art: selecting a specific kernel and parameters is usually done in a try-and-see manner.

