Machine Learning CUNY Graduate Center Lecture 2: Math Primer.


1 Machine Learning CUNY Graduate Center Lecture 2: Math Primer

2 Today
Probability and Statistics
– Naïve Bayes Classification
Linear Algebra
– Matrix Multiplication
– Matrix Inversion
Calculus
– Vector Calculus
– Optimization
– Lagrange Multipliers

3 Classical Artificial Intelligence
Expert Systems
Theorem Provers
Shakey
Chess
Largely characterized by determinism.

4 Modern Artificial Intelligence
Fingerprint ID
Internet Search
Vision – facial ID, object recognition
Speech Recognition
Asimo
Jeopardy!
Statistical modeling to generalize from data.

5 Two Caveats about Statistical Modeling
Black Swans
The Long Tail

6 Black Swans In the 17th century, all known swans were white. Based on the evidence, it was impossible for a swan to be anything other than white. In the 18th century, black swans were discovered in Western Australia. Black swans are rare, sometimes unpredictable events that have extreme impact. Almost all statistical models underestimate the likelihood of unseen events.

7 The Long Tail Many events follow an exponential distribution. These distributions have a very long “tail” – i.e., a large region with significant probability mass, but low likelihood at any particular point. Often, interesting events occur in the long tail, but it is difficult to accurately model behavior in this region.

8 Boxes and Balls Two boxes, one red and one blue. Each contains colored balls.

9 Boxes and Balls Suppose we randomly select a box, then randomly draw a ball from that box. The identity of the box is a random variable, B. The identity of the ball is a random variable, L. B can take 2 values, r or b. L can take 2 values, g or o.

10 Boxes and Balls Given some information about B and L, we want to ask questions about the likelihood of different events. What is the probability of selecting a green ball? If I chose an orange ball, what is the probability that I chose from the blue box?

11 Some basics The probability (or likelihood) of an event is the fraction of times that the event occurs out of n trials, as n approaches infinity. Probabilities lie in the range [0,1]. Mutually exclusive events are events that cannot simultaneously occur. – The sum of the likelihoods of a set of mutually exclusive and exhaustive events must equal 1. If two events are independent, then p(X, Y) = p(X)p(Y) and p(X|Y) = p(X).

12 Joint Probability – P(X,Y) A Joint Probability function defines the likelihood of two (or more) events occurring. Let n_ij be the number of times event i and event j simultaneously occur.
           Orange  Green  Total
Blue box      1      3      4
Red box       6      2      8
Total         7      5     12
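The estimate of the joint probability itself appears on the slide as an equation image; in the counting notation above it is presumably p(X = x_j, Y = y_i) = n_ij / N, where N = 12 is the total number of trials. For example, p(Orange, Blue box) = 1/12.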

13 Generalizing the Joint Probability

14 Marginalization Consider the probability of X irrespective of Y. The number of instances in column j is the sum of instances in each cell of that column. Therefore, we can marginalize or “sum over” Y:
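The marginalization formula on the slide is an image; with the counts above it is presumably
p(X = x_j) = c_j / N = Σ_i p(X = x_j, Y = y_i),
where c_j = Σ_i n_ij is the total for column j.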

15 Conditional Probability Consider only instances where X = x_j. The fraction of these instances where Y = y_i is the conditional probability – “the probability of y given x.”
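The conditional probability formula is an image on the slide; in the same counting notation it is presumably
p(Y = y_i | X = x_j) = n_ij / c_j.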

16 Relating the Joint, Conditional and Marginal
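The relationship shown on the slide (as an image) is presumably the factorization
p(X = x_j, Y = y_i) = n_ij / N = (n_ij / c_j)(c_j / N) = p(Y = y_i | X = x_j) p(X = x_j).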

17 Sum and Product Rules In general, we’ll refer to a distribution over a random variable as p(X) and a distribution evaluated at a particular value as p(x). Sum Rule. Product Rule.
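The two rules appear on the slide as equation images; in standard form:
Sum rule: p(X) = Σ_Y p(X, Y)
Product rule: p(X, Y) = p(Y | X) p(X)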

18 Bayes Rule
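The equation on this slide is an image; Bayes' rule in the notation of the previous slides is
p(Y | X) = p(X | Y) p(Y) / p(X), where p(X) = Σ_Y p(X | Y) p(Y).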

19 Interpretation of Bayes Rule Prior: information we have before observation. Posterior: the distribution of Y after observing X. Likelihood: the likelihood of observing X given Y.

20 Boxes and Balls with Bayes Rule Assume I’m inherently more likely to select the red box (66.6%) than the blue box (33.3%). If I selected an orange ball, what is the likelihood that I selected the red box? – The blue box?

21 Boxes and Balls
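The worked solution on this slide is an image; assuming the red box holds 6 orange and 2 green balls and the blue box holds 1 orange and 3 green balls (the per-box proportions implied by the earlier table), the computation is presumably:
p(o) = p(o | r) p(r) + p(o | b) p(b) = (3/4)(2/3) + (1/4)(1/3) = 7/12
p(r | o) = p(o | r) p(r) / p(o) = (1/2) / (7/12) = 6/7
p(b | o) = 1 - p(r | o) = 1/7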

22 Naïve Bayes Classification This is a simple example of a Bayesian classification approach. Here the box is the class, and the colored ball is a feature, or the observation. We can extend this Bayesian classification approach to incorporate more independent features.

23 Naïve Bayes Classification Some theory first.

24 Naïve Bayes Classification Assuming independent features simplifies the math.
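The equations on this slide are images; the standard naïve Bayes rule that the independence assumption yields is
p(y | x_1, …, x_n) ∝ p(y) Π_i p(x_i | y),
so we classify with ŷ = argmax_y p(y) Π_i p(x_i | y).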

25 Naïve Bayes Example Data
HOT   LIGHT  SOFT  RED
COLD  HEAVY  SOFT  RED
HOT   HEAVY  FIRM  RED
HOT   LIGHT  FIRM  RED
COLD  LIGHT  SOFT  BLUE
COLD  HEAVY  FIRM  BLUE
HOT   HEAVY  FIRM  BLUE
HOT   LIGHT  FIRM  BLUE
HOT   HEAVY  FIRM  ?????

26 Naïve Bayes Example Data
HOT   LIGHT  SOFT  RED
COLD  HEAVY  SOFT  RED
HOT   HEAVY  FIRM  RED
HOT   LIGHT  FIRM  RED
COLD  LIGHT  SOFT  BLUE
COLD  HEAVY  FIRM  BLUE
HOT   HEAVY  FIRM  BLUE
HOT   LIGHT  FIRM  BLUE
HOT   HEAVY  FIRM  ?????
Prior:
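The prior values themselves are shown as an image; counting class labels in the table gives
P(RED) = 4/8 = 1/2, P(BLUE) = 4/8 = 1/2.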

27 Naïve Bayes Example Data
HOT   LIGHT  SOFT  RED
COLD  HEAVY  SOFT  RED
HOT   HEAVY  FIRM  RED
HOT   LIGHT  FIRM  RED
COLD  LIGHT  SOFT  BLUE
COLD  HEAVY  SOFT  BLUE
HOT   HEAVY  FIRM  BLUE
HOT   LIGHT  FIRM  BLUE
HOT   HEAVY  FIRM  ?????

28 Naïve Bayes Example Data
HOT   LIGHT  SOFT  RED
COLD  HEAVY  SOFT  RED
HOT   HEAVY  FIRM  RED
HOT   LIGHT  FIRM  RED
COLD  LIGHT  SOFT  BLUE
COLD  HEAVY  SOFT  BLUE
HOT   HEAVY  FIRM  BLUE
HOT   LIGHT  FIRM  BLUE
HOT   HEAVY  FIRM  ?????
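The likelihoods and the final decision are shown as images; treating the three columns as conditionally independent features and estimating each probability by counting rows in the table as it appears above, the computation is presumably:
P(HOT | RED) = 3/4, P(HEAVY | RED) = 2/4, P(FIRM | RED) = 2/4
P(HOT | BLUE) = 2/4, P(HEAVY | BLUE) = 2/4, P(FIRM | BLUE) = 2/4
Score(RED) = (1/2)(3/4)(1/2)(1/2) = 3/32
Score(BLUE) = (1/2)(1/2)(1/2)(1/2) = 2/32
so the unlabeled HOT, HEAVY, FIRM example is classified as RED. (Note that slide 25 lists the sixth row as COLD HEAVY FIRM BLUE rather than COLD HEAVY SOFT BLUE; with that reading P(FIRM | BLUE) = 3/4 and the two scores tie.)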

29 Continuous Probabilities So far, X has been discrete: it can take one of M values. What if X is continuous? Now p(x) is a continuous probability density function. The probability that x will lie in an interval (a,b) is:
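The integral on the slide is an image; in standard form:
P(x ∈ (a, b)) = ∫_a^b p(x) dx.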

30 Continuous probability example

31 Properties of probability density functions Sum Rule. Product Rule.
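The properties and rules on this slide are images; for a continuous density they are presumably:
p(x) ≥ 0 and ∫ p(x) dx = 1
Sum rule: p(x) = ∫ p(x, y) dy
Product rule: p(x, y) = p(y | x) p(x)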

32 Expected Values Given a random variable with a distribution p(X), what is the expected value of X?
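The expectation formulas on the slide are images; in standard form:
E[x] = Σ_x x p(x) (discrete), E[x] = ∫ x p(x) dx (continuous)
and more generally E[f] = Σ_x p(x) f(x) for a function f of x.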

33 Multinomial Distribution If a variable, x, can take 1-of-K states, we represent the distribution of this variable as a multinomial distribution. The probability of x being in state k is μ_k.
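The distribution itself is shown as an image; with a 1-of-K coding x = (x_1, …, x_K), the standard form is
p(x | μ) = Π_{k=1}^K μ_k^{x_k}, with μ_k ≥ 0 and Σ_k μ_k = 1.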

34 Expected Value of a Multinomial The expected value is the vector of means.
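In symbols: E[x | μ] = Σ_x p(x | μ) x = (μ_1, …, μ_K)^T = μ.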

35 Gaussian Distribution One Dimension D-Dimensions
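The density formulas on this slide are images; the standard forms are:
One dimension: N(x | μ, σ²) = (1 / sqrt(2πσ²)) exp(-(x - μ)² / (2σ²))
D dimensions: N(x | μ, Σ) = (1 / ((2π)^{D/2} |Σ|^{1/2})) exp(-(1/2)(x - μ)^T Σ^{-1} (x - μ))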

36 Gaussians

37 How machine learning uses statistical modeling Expectation – the expected value of a function is the hypothesis. Variance – the variance is the confidence in that hypothesis.

38 Variance The variance of a random variable describes how much variability there is around the expected value. It is calculated as the expected squared error.
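In symbols: var[x] = E[(x - E[x])²] = E[x²] - E[x]².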

39 Covariance The covariance of two random variables expresses how they vary together. If two variables are independent, their covariance equals zero.
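In symbols: cov[x, y] = E[(x - E[x])(y - E[y])] = E[xy] - E[x]E[y].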

40 Linear Algebra
Vectors
– A one-dimensional array.
– If not specified, assume x is a column vector.
Matrices
– A two-dimensional array.
– Typically denoted with capital letters.
– n rows by m columns.

41 Transposition Transposing a matrix swaps columns and rows.

42 Transposition Transposing a matrix swaps columns and rows.
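The example on the slide is an image; in index notation (A^T)_ij = A_ji. For instance, the 2-by-3 matrix [[1, 2, 3], [4, 5, 6]] transposes to the 3-by-2 matrix [[1, 4], [2, 5], [3, 6]], and a column vector x transposes to a row vector x^T.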

43 Addition Matrices can be added to each other iff they have the same dimensions. – A and B are both n-by-m matrices.
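Element-wise: (A + B)_ij = A_ij + B_ij.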

44 Multiplication To multiply two matrices, the inner dimensions must be the same. – An n-by-m matrix can be multiplied by an m-by-k matrix.
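Element-wise, C = AB is the n-by-k matrix with C_ij = Σ_{l=1}^m A_il B_lj.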

45 Inversion The inverse of an n-by-n (square) matrix A is denoted A^-1, and has the following property, where I is the identity matrix: an n-by-n matrix with ones along the diagonal. – I_ij = 1 iff i = j, 0 otherwise.
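The property shown as an image is presumably A^{-1} A = A A^{-1} = I.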

46 Identity Matrix Matrices are invariant under multiplication by the identity matrix.
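In symbols: AI = IA = A.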

47 Helpful matrix inversion properties
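The identities on this slide are images; properties commonly listed here are
(AB)^{-1} = B^{-1} A^{-1}, (A^T)^{-1} = (A^{-1})^T, (A^{-1})^{-1} = A.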

48 Norm The norm of a vector x represents the Euclidean length of the vector.
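In symbols: ||x|| = sqrt(x^T x) = sqrt(Σ_i x_i²).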

49 Positive Definiteness Quadratic form – scalar – vector. Positive definite matrix M. Positive semi-definite.
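The definitions on this slide are images; in standard form the quadratic form is q = m x² in the scalar case and q = x^T M x in the vector case. M is positive definite if x^T M x > 0 for all x ≠ 0, and positive semi-definite if x^T M x ≥ 0 for all x.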

50 Calculus Derivatives and Integrals Optimization

51 Derivatives The derivative of a function defines the slope of the function at a point x.
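The definition on the slide is an image; in standard form
f'(x) = df/dx = lim_{h → 0} (f(x + h) - f(x)) / h.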

52 Derivative Example

53 Integrals Integration is the inverse operation of differentiation (up to a constant). Graphically, an integral can be considered the area under the curve defined by f(x).
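In symbols: ∫ f'(x) dx = f(x) + C, and ∫_a^b f(x) dx is the (signed) area under f between a and b.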

54 Integration Example

55 Vector Calculus Differentiation with respect to a matrix or vector. Gradient. Change of variables with a vector.

56 Derivative w.r.t. a vector Given a vector x, and a function f(x), how can we find f’(x)?

57 Derivative w.r.t. a vector Given a vector x, and a function f(x), how can we find f’(x)?
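The answer on the slide is an image; the standard definition is the gradient
∇_x f(x) = ∂f/∂x = [∂f/∂x_1, …, ∂f/∂x_n]^T.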

58 Example Derivation

59 Example Derivation Also referred to as the gradient of a function.
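The worked derivation on these slides is an image; an illustrative example of the same kind (not necessarily the one on the slide): for f(x) = x^T x = Σ_i x_i², the gradient is ∂f/∂x = 2x.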

60 Useful Vector Calculus identities Scalar Multiplication Product Rule
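The identities themselves are images; common forms under these headings are
Scalar multiplication: ∂(c f(x))/∂x = c ∂f(x)/∂x for a scalar constant c
Product rule (scalar-valued f and g): ∇(f(x) g(x)) = g(x) ∇f(x) + f(x) ∇g(x)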

61 Useful Vector Calculus identities Derivative of an inverse Change of Variable
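The identities are shown as images; standard forms are
Derivative of an inverse: ∂(A^{-1})/∂t = -A^{-1} (∂A/∂t) A^{-1}
Change of variable: with x = g(u), ∫ f(x) dx = ∫ f(g(u)) |det(∂g/∂u)| du.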

62 Optimization We have an objective function that we’d like to maximize or minimize, f(x). Set the first derivative to zero.
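An illustrative example (not from the slides): to minimize f(x) = (x - 3)², set f'(x) = 2(x - 3) = 0, giving x = 3; f''(x) = 2 > 0 confirms a minimum.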

63 Optimization with constraints What if I want to constrain the parameters of the model? – e.g., the mean is less than 10. Find the best likelihood, subject to a constraint. Two functions: – an objective function to maximize – an inequality that must be satisfied.

64 Lagrange Multipliers Find maxima of f(x,y) subject to a constraint.

65 General form Maximizing: Subject to: Introduce a new variable, and find a maximum.
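The general form on the slide is an image; in standard notation: maximize f(x) subject to g(x) = 0 by introducing a Lagrange multiplier λ and the Lagrangian
Λ(x, λ) = f(x) + λ g(x),
then find stationary points where ∇_x Λ = 0 and ∂Λ/∂λ = g(x) = 0.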

66 Example Maximizing: Subject to: Introduce a new variable, and find a maximum.

67 Example Now we have 3 equations with 3 unknowns.

68 Example Eliminate lambda. Substitute and solve.
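The specific example on these slides is shown as images; a worked example of the same shape (an assumption, not necessarily the slides' example): maximize f(x, y) = xy subject to x + y = 1.
Λ(x, y, λ) = xy + λ(x + y - 1)
∂Λ/∂x = y + λ = 0, ∂Λ/∂y = x + λ = 0, ∂Λ/∂λ = x + y - 1 = 0
Eliminating λ gives x = y; substituting into the constraint gives x = y = 1/2, so the constrained maximum is f = 1/4.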

69 Why does Machine Learning need these tools?
Optimization
– We need to identify the maximum likelihood, or minimum risk.
Calculus
– Integration allows the marginalization of continuous probability density functions.
Linear Algebra
– Many features lead to high-dimensional spaces.
– Vectors and matrices allow us to compactly describe and manipulate high-dimensional feature spaces.

70 Why does Machine Learning need these tools?
Vector Calculus
– All of the optimization needs to be performed in high-dimensional spaces.
– Optimization of multiple variables simultaneously – Gradient Descent.
– Want to take a marginal over high-dimensional distributions like Gaussians.

71 Next Time Linear Regression and Regularization. Read Chapters 1.1, 3.1, 3.3.

