Classification and risk prediction

Presentation on theme: "Classification and risk prediction"— Presentation transcript:

1 Classification and risk prediction
Usman Roshan

2 Disease risk prediction
What is the best method to predict disease risk? (This is the focus today.) Which SNPs and non-genetic variables best predict risk?

3 Review
Chi-square statistic for ranking SNPs. Logistic regression model for ranking SNPs and classifying disease risk. Supervised learning: empirical risk minimization, maximum likelihood, least squares.

4 Evaluating probabilities
Under the logistic regression model the probability of disease given genotype x is
P(disease|x) = 1/(1 + e^(-(w·x + b)))
Receiver Operating Characteristic (ROC) curve: plot the true positive rate against the false positive rate. The area under the ROC curve (AUC) is the probability that a classifier ranks a randomly chosen positive example higher than a randomly chosen negative one.

5 True and false positives
                        Actual P               Actual N
Predicted P'     True positive (TP)     False positive (FP)
Predicted N'     False negative (FN)    True negative (TN)
True positive rate (TPR) = TP/P
False positive rate (FPR) = FP/N
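To make the TPR/FPR and AUC definitions concrete, here is a minimal Python sketch (my own illustration, not from the slides) using scikit-learn on made-up labels and predicted probabilities:

```python
# A minimal sketch: ROC curve and AUC from made-up scores.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0])                # 1 = case, 0 = control
y_score = np.array([0.9, 0.7, 0.4, 0.6, 0.3, 0.1])   # e.g. P(disease | x)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
# 8 of the 9 positive/negative pairs are ranked correctly, so AUC = 8/9.
print(f"AUC = {auc:.3f}")
```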

6 Classification: Bayesian learning
Bayes rule: P(Mi|x) = P(x|Mi) P(Mi) / P(x). To classify a given datapoint x we select the model (class) Mi with the highest P(Mi|x). The denominator is a normalizing term and does not affect the classification. P(x|M) is called the likelihood and P(M) is the prior probability; to classify a given datapoint x we therefore only need to know the likelihood and the prior. If the priors P(M) are uniform (the same for every class) then finding the model that maximizes the posterior P(M|x) is the same as finding the M that maximizes the likelihood P(x|M).
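A minimal numeric sketch (my own illustration; the likelihoods and priors are made-up numbers) showing that the evidence P(x) cancels in the argmax:

```python
# Bayes-rule classification with two candidate models.
likelihood = {"M1": 0.20, "M2": 0.05}   # P(x | Mi), made up
prior      = {"M1": 0.30, "M2": 0.70}   # P(Mi), made up

# Unnormalized posteriors; dividing by P(x) would not change the winner.
posterior = {m: likelihood[m] * prior[m] for m in likelihood}
predicted = max(posterior, key=posterior.get)
print(predicted)  # "M1": 0.20*0.30 = 0.06 > 0.05*0.70 = 0.035
```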

7 Gaussian models
Assume that the class likelihood is a Gaussian distribution with parameters μ (mean) and σ (standard deviation):
P(x|C) = (1/(σ√(2π))) exp(-(x - μ)²/(2σ²))
We find the model (in other words the mean and variance) that maximizes the likelihood, or equivalently the log likelihood. Suppose we are given training points x1, x2, …, xn1 from class C1. Assuming that each datapoint is drawn independently from C1, the sample log likelihood is
L(μ1, σ1) = Σi log P(xi|C1) = -(n1/2) log(2πσ1²) - Σi (xi - μ1)²/(2σ1²)

8 Gaussian models
The log likelihood is given by L(μ1, σ1) = -(n1/2) log(2πσ1²) - Σi (xi - μ1)²/(2σ1²). Setting the first derivatives ∂L/∂μ1 and ∂L/∂σ1 to 0 gives the maximum likelihood estimates of μ1 and σ1 (denoted m1 and s1 respectively):
m1 = (1/n1) Σi xi
s1² = (1/n1) Σi (xi - m1)²
Similarly we determine m2 and s2 for class C2.
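A minimal sketch (my own illustration): the MLE of the mean is the sample mean and the MLE of the variance divides by n (NumPy's default ddof=0), not n-1.

```python
# Gaussian maximum likelihood estimates from a sample.
import numpy as np

x = np.array([1, 1, 2, 1, 0, 2], dtype=float)  # case genotypes from slide 10
m1 = x.mean()          # MLE of the mean
s1_sq = x.var(ddof=0)  # MLE of the variance (divide by n, not n-1)
print(m1, s1_sq)       # 1.1667 0.4722
```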

9 Gaussian models
After determining the class parameters for C1 and C2 we can classify a given datapoint x by evaluating P(x|C1) and P(x|C2) and assigning it to the class with the higher likelihood (or log likelihood). The negative log likelihood can also serve as a loss function, giving an equivalent formulation under empirical risk minimization.

10 Gaussian classification example
Consider one SNP genotype for case and control subjects. Case (class C1): 1, 1, 2, 1, 0, 2. Control (class C2): 0, 1, 0, 0, 1, 1. Under the Gaussian assumption the case and control classes are represented by Gaussian distributions with parameters (μ1, σ1) and (μ2, σ2) respectively. The maximum likelihood estimates of the means are m1 = 7/6 ≈ 1.17 and m2 = 3/6 = 0.5.

11 Gaussian classification example
The maximum likelihood estimates of the class variances are s1² = 17/36 ≈ 0.47 and, similarly, s2² = 0.25. Which class does x=1 belong to? What about x=0 and x=2? What happens if the class variances are equal?
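A minimal sketch of this exercise (my own illustration): fit a Gaussian to each class and assign each test point to the class with the higher likelihood.

```python
# One-dimensional Gaussian classification of the slide's genotype data.
import numpy as np
from scipy.stats import norm

case    = np.array([1, 1, 2, 1, 0, 2], dtype=float)  # class C1
control = np.array([0, 1, 0, 0, 1, 1], dtype=float)  # class C2

m1, s1 = case.mean(), case.std(ddof=0)        # m1 ≈ 1.17, s1 ≈ 0.69
m2, s2 = control.mean(), control.std(ddof=0)  # m2 = 0.5,  s2 = 0.5

for x in (1.0, 0.0, 2.0):
    p1 = norm.pdf(x, loc=m1, scale=s1)  # P(x | C1)
    p2 = norm.pdf(x, loc=m2, scale=s2)  # P(x | C2)
    print(x, "-> C1" if p1 > p2 else "-> C2")
# x=1 -> C1, x=0 -> C2, x=2 -> C1
```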

12 Multivariate Gaussian classification
Suppose each datapoint is an m-dimensional vector; in the previous example we would have m SNP genotypes instead of one. The class likelihood is given by
P(x|C1) = 1/((2π)^(m/2) |Σ1|^(1/2)) exp(-(1/2)(x - μ1)ᵀ Σ1⁻¹ (x - μ1))
where Σ1 is the class covariance matrix, of dimension m x m. The (i,j)th entry of Σ1 is the covariance of the ith and jth variables.

13 Multivariate Gaussian classification
The maximum likelihood estimates of μ1 and Σ1 are
m1 = (1/n1) Σi xi
S1 = (1/n1) Σi (xi - m1)(xi - m1)ᵀ
The class log likelihoods with the estimated parameters (ignoring constant terms) are
log P(x|C1) = -(1/2) log|S1| - (1/2)(x - m1)ᵀ S1⁻¹ (x - m1)
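A minimal sketch (my own illustration) of these estimates in NumPy/SciPy, reusing the case genotypes from slide 17:

```python
# MLE of a class mean vector and covariance matrix, and the log likelihood
# of a new point under the fitted multivariate Gaussian.
import numpy as np
from scipy.stats import multivariate_normal

# Case genotypes from slide 17, one row per subject (m = 3 SNPs).
X1 = np.array([[1, 2, 0], [2, 2, 0], [2, 2, 0],
               [2, 1, 1], [0, 2, 1], [2, 1, 0]], dtype=float)

m1 = X1.mean(axis=0)                       # MLE mean vector
S1 = np.cov(X1, rowvar=False, bias=True)   # MLE covariance (divide by n)

x = np.array([1, 2, 1], dtype=float)
print(m1, multivariate_normal.logpdf(x, mean=m1, cov=S1))
```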

14 Multivariate Gaussian classification
If S1 = S2 = S then the class log likelihoods with the estimated parameters (ignoring constant terms) reduce to
log P(x|Ci) = -(1/2)(x - mi)ᵀ S⁻¹ (x - mi)
so classification depends only on the (Mahalanobis) distance of x to the class means.
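A minimal sketch (my own illustration; the shared covariance and class means are made-up numbers) of classifying by squared Mahalanobis distance:

```python
# With a shared covariance S, classify x by its Mahalanobis distance
# to each class mean.
import numpy as np

S = np.array([[0.5, 0.1], [0.1, 0.3]])               # made-up shared covariance
S_inv = np.linalg.inv(S)
m1, m2 = np.array([1.0, 1.5]), np.array([0.5, 0.5])  # made-up class means

x = np.array([1.0, 1.0])
d1 = (x - m1) @ S_inv @ (x - m1)  # squared Mahalanobis distance to m1
d2 = (x - m2) @ S_inv @ (x - m2)  # squared Mahalanobis distance to m2
print("C1" if d1 < d2 else "C2")
```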

15 Naïve Bayes algorithm
If we assume that the variables are independent (no interaction between SNPs) then the off-diagonal entries of S are zero and the log likelihood becomes (ignoring constant terms)
log P(x|Ci) = -Σj ( log sij + (xj - mij)²/(2 sij²) )
where mij and sij² are the mean and variance of the jth variable in class Ci.
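As a sketch (my own illustration, not from the slides), scikit-learn's GaussianNB fits this kind of diagonal-covariance Gaussian model, one mean and variance per feature per class; the data below reuses the genotypes from slide 17:

```python
# Gaussian naive Bayes on the slide-17 genotype data.
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[1, 2, 0], [2, 2, 0], [2, 2, 0], [2, 1, 1], [0, 2, 1], [2, 1, 0],
              [0, 1, 2], [1, 1, 1], [1, 0, 2], [1, 0, 0], [0, 0, 2], [0, 1, 0]],
             dtype=float)
y = np.array([1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2])  # 1 = case, 2 = control

clf = GaussianNB().fit(X, y)
print(clf.predict([[1, 2, 1], [0, 0, 1]]))
```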

16 Nearest means classifier
If we further assume that all the variances sj are equal then (ignoring constant terms) we get
log P(x|Ci) = -(1/(2s²)) Σj (xj - mij)² = -(1/(2s²)) ||x - mi||²
so we assign x to the class with the nearest mean in Euclidean distance, as the next slide's example shows.

17 Gaussian classification example
Consider three SNP genotypes for case and control subjects. Case (class C1): (1,2,0), (2,2,0), (2,2,0), (2,1,1), (0,2,1), (2,1,0). Control (class C2): (0,1,2), (1,1,1), (1,0,2), (1,0,0), (0,0,2), (0,1,0). Classify (1,2,1) and (0,0,1) with the nearest means classifier.
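A minimal sketch of this exercise (my own illustration): compute each class mean and assign each point to the class whose mean is nearest.

```python
# Nearest means classification of the slide's exercise data.
import numpy as np

case = np.array([[1, 2, 0], [2, 2, 0], [2, 2, 0],
                 [2, 1, 1], [0, 2, 1], [2, 1, 0]], dtype=float)
control = np.array([[0, 1, 2], [1, 1, 1], [1, 0, 2],
                    [1, 0, 0], [0, 0, 2], [0, 1, 0]], dtype=float)

m1 = case.mean(axis=0)     # ≈ (1.50, 1.67, 0.33)
m2 = control.mean(axis=0)  # ≈ (0.50, 0.50, 1.17)

for x in (np.array([1, 2, 1]), np.array([0, 0, 1])):
    d1 = np.linalg.norm(x - m1)  # Euclidean distance to case mean
    d2 = np.linalg.norm(x - m2)  # Euclidean distance to control mean
    print(x, "-> C1 (case)" if d1 < d2 else "-> C2 (control)")
# (1,2,1) is nearer the case mean; (0,0,1) is nearer the control mean.
```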

18 Discriminative learning
Make no assumptions about the underlying model of the data. Support vector machine: find the optimally separating hyperplane between two sets of points.
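A minimal sketch (my own illustration; the slides do not prescribe a library or hyperparameters, and C=1.0 is an arbitrary choice) of a linear SVM on the slide-17 genotype data:

```python
# Linear SVM on the genotype data, via scikit-learn.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 2, 0], [2, 2, 0], [2, 2, 0], [2, 1, 1], [0, 2, 1], [2, 1, 0],
              [0, 1, 2], [1, 1, 1], [1, 0, 2], [1, 0, 0], [0, 0, 2], [0, 1, 0]],
             dtype=float)
y = np.array([1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2])  # 1 = case, 2 = control

clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.predict([[1, 2, 1], [0, 0, 1]]))
```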

19 Support vector machine: optimally separating hyperplane

20 SVMs

