
Slide 1: A Statistical Analysis of the Precision-Recall Graph
Ralf Herbrich, Microsoft Research UK
Joint work with Hugo Zaragoza and Simon Hill

Slide 2: Overview
- The Precision-Recall Graph
- A Stability Analysis
- Main Result
- Discussion and Applications
- Conclusions

Slide 3: Features of Ranking Learning
- We cannot take differences of ranks.
- We cannot ignore the order of ranks.
- Point-wise loss functions do not capture the ranking performance!
- ROC or precision-recall curves do capture the ranking performance.
- We need generalisation error bounds for ROC and precision-recall curves!

Slide 4: Precision and Recall
Given: a sample z = ((x_1, y_1), ..., (x_m, y_m)) ∈ (X × {0,1})^m with k positive labels y_i, together with a function f: X → ℝ.
Ranking the sample:
- Re-order the sample so that f(x_(1)) ≥ ··· ≥ f(x_(m)).
- Record the indices i_1, ..., i_k of the positive labels y_(j).
Precision p_j and recall r_j at the j-th positive example:
p_j = j / i_j,    r_j = j / k
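The ranking and the per-positive precision/recall values above can be sketched in Python (my own illustration, not the authors' code; the function and argument names are hypothetical):

```python
def precision_recall(scores, labels):
    """scores: list of f(x_i); labels: list of y_i in {0, 1}.
    Returns (precisions, recalls), one entry per positive example."""
    # Re-order the sample so that f(x_(1)) >= ... >= f(x_(m)).
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    k = sum(labels)                    # number of positive examples
    precisions, recalls = [], []
    seen_pos = 0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:             # rank is the index i_j of the j-th positive
            seen_pos += 1
            precisions.append(seen_pos / rank)   # p_j = j / i_j
            recalls.append(seen_pos / k)         # r_j = j / k
    return precisions, recalls
```

For example, with positives ranked first and third, `precision_recall([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])` gives precisions (1, 2/3) at recalls (1/2, 1).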

Slide 5: Precision-Recall: An Example
[Figure: a sample after re-ordering by f(x_(i)), illustrating the precision and recall values at each positive example.]

Slide 6: Break-Even Point
[Figure: a precision-recall curve with both axes running from 0 to 1; the break-even point is marked where precision equals recall.]
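A simple way to locate the break-even point is to take the curve point where precision and recall are closest; with the definitions p_j = j/i_j and r_j = j/k they coincide exactly when i_j = k. This is a sketch under that nearest-point assumption, not a definition from the talk:

```python
def break_even_point(precisions, recalls):
    """Approximate break-even point of a precision-recall curve:
    the precision at the curve point where |p_j - r_j| is smallest."""
    j = min(range(len(precisions)),
            key=lambda i: abs(precisions[i] - recalls[i]))
    return precisions[j]
```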

Slide 7: Average Precision
[Figure: a precision-recall curve with both axes running from 0 to 1.]
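The slide's displayed formula did not survive the transcript; average precision A(f, z) is standardly defined as the mean of the precisions p_j over the k positive examples. A minimal sketch under that assumption:

```python
def average_precision(scores, labels):
    """A(f, z): the mean of the precisions p_j over the k positives."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    k = sum(labels)
    total, seen_pos = 0.0, 0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            seen_pos += 1
            total += seen_pos / rank   # accumulate p_j = j / i_j
    return total / k
```

With positives at ranks 1 and 3 this averages 1 and 2/3, giving 5/6.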

Slide 8: A Stability Analysis: Questions
1. How much does A(f, z) change if we alter one example (x_i, y_i)?
2. How much does A(f, ·) change if we alter z?
We will assume that the number of positive examples, k, has to remain constant: we can only alter one x_i, or swap one positive label y_(i) with a negative one.
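Question 1 can be probed empirically with a small sketch (my illustration, not the paper's analysis): hold k constant, swap one positive label with a negative one, and measure how much the average precision moves.

```python
def avg_prec(scores, labels):
    # Average precision: mean of j / i_j over the positive examples.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    k = sum(labels)
    total, seen = 0.0, 0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            seen += 1
            total += seen / rank
    return total / k

scores = [0.9, 0.8, 0.7, 0.6, 0.5]
labels = [1, 0, 1, 0, 0]                 # k = 2 positives at ranks 1 and 3
base = avg_prec(scores, labels)          # (1 + 2/3) / 2 = 5/6
# "Rotate" one label pair, keeping k constant: positive moves to rank 2.
flipped = [1, 1, 0, 0, 0]
change = abs(avg_prec(scores, flipped) - base)   # 1 - 5/6 = 1/6
```

Even a single swapped label shifts A(f, z) by 1/6 here, which is why the stability constant must be controlled via the number of positives k.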

Slide 9: Stability Analysis
Case 1: y_i = 0
Case 2: y_i = 1

Slide 10: Proof
Case 1: y_i = 0
Case 2: y_i = 1

Slide 11: Main Result
Theorem: For all probability measures, for all α > 1/m, and for all f: X → ℝ, with probability at least 1 − δ over the IID draw of a training sample z and a test sample z', both of size m: if both samples contain at least ⌈αm⌉ positive examples, then the training and test average precisions differ by at most a quantity of order (1/α)·√(ln(1/δ)/m).

Slide 12: Proof
1. McDiarmid's inequality: for any function g: Z^n → ℝ with stability c, and for all probability measures P, with probability at least 1 − δ over the IID draw of Z, g(Z) deviates from its expectation by at most c·√((n/2)·ln(1/δ)).
2. Set n = 2m and call the two m-halves Z_1 and Z_2. Define g_i(Z) := A(f, Z_i). Then the result follows by the IID assumption.
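For reference, the standard form of McDiarmid's bounded-differences inequality invoked in step 1 (stated here in its usual two forms; not transcribed from the slide):

```latex
% McDiarmid's bounded-differences inequality. If g: Z^n -> R changes by
% at most c when any single coordinate of its argument is altered
% ("stability c"), then for every product measure and every eps > 0:
\Pr\left( g(Z) - \mathbb{E}[g(Z)] \geq \varepsilon \right)
  \leq \exp\left( -\frac{2\varepsilon^2}{n c^2} \right).
% Equivalently, with probability at least 1 - \delta over the IID draw of Z:
g(Z) - \mathbb{E}[g(Z)] \leq c \sqrt{\frac{n}{2} \ln\frac{1}{\delta}}.
```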

Slide 13: Discussion
- This is the first bound showing that, asymptotically (m → ∞), training and test set performance (in terms of average precision) converge!
- The effective sample size is only the number of positive examples; in fact, only α²m.
- The proof can be generalised to arbitrary test sample sizes.
- The constants can be improved.

Slide 14: Applications
- Cardinality bounds
- Compression bounds (TREC 2002)
- No VC bounds! No margin bounds!
- Union bound

Slide 15: Conclusions
- Ranking learning requires considering non-point-wise loss functions.
- To study the complexity of algorithms, we need large-deviation inequalities for ranking performance measures.
- McDiarmid's inequality is a powerful tool.
- Future work focuses on ROC curves.
