
1 Learning Ensembles of First-Order Clauses That Optimize Precision-Recall Curves
Mark Goadrich, Computer Sciences Department, University of Wisconsin - Madison
Ph.D. Defense, August 13th, 2007

2 Biomedical Information Extraction
[Figure: unstructured biomedical text mapped into a structured database; image courtesy of SEER Cancer Training Site]

3 Biomedical Information Extraction http://www.geneontology.org

4 Example sentences:
–NPL3 encodes a nuclear protein with an RNA recognition motif and similarities to a family of proteins involved in RNA metabolism.
–ykuD was transcribed by SigK RNA polymerase from T4 of sporulation.
–Mutations in the COL3A1 gene have been implicated as a cause of type IV Ehlers-Danlos syndrome, a disease leading to aortic rupture in early adult life.

5 Outline
Biomedical Information Extraction
Inductive Logic Programming
Gleaner
Extensions to Gleaner
–GleanerSRL
–Negative Salt
–F-Measure Search
–Clause Weighting (time permitting)

6 Inductive Logic Programming
Machine Learning
–Classify data into categories
–Divide data into train and test sets
–Generate hypotheses on the train set and then measure performance on the test set
In ILP, data are Objects …
–person, block, molecule, word, phrase, …
… and Relations between them
–grandfather, has_bond, is_member, …

7 Seeing Text as Relational Objects
[Figure: Words, Phrases, and Sentences as objects, linked by relations such as phrase_child(…, …) and phrase_parent(…, …), with properties such as alphanumeric(…), internal_caps(…), verb(…), noun_phrase(…), long_sentence(…)]

8 Protein Localization Clause
prot_loc(Protein,Location,Sentence) :-
  phrase_contains_some_alphanumeric(Protein,E),
  phrase_contains_some_internal_cap_word(Protein,E),
  phrase_next(Protein,_),
  different_phrases(Protein,Location),
  one_POS_in_phrase(Location,noun),
  phrase_contains_some_arg2_10x_word(Location,_),
  phrase_previous(Location,_),
  avg_length_sentence(Sentence).

9 ILP Background
Seed Example
–A positive example that our clause must cover
Bottom Clause
–All predicates which are true about the seed example
[Figure: search from the seed down the clause lattice:
  prot_loc(P,L,S)
  prot_loc(P,L,S) :- alphanumeric(P)
  prot_loc(P,L,S) :- alphanumeric(P), leading_cap(L)]

10 Clause Evaluation
Prediction vs Actual
[Table: confusion matrix; rows = prediction, columns = actual: TP, FP / FN, TN]
Focus on positive examples:
Recall = TP / (TP + FN)
Precision = TP / (TP + FP)
F1 Score = 2 x Precision x Recall / (Precision + Recall)
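These three formulas in a minimal Python sketch (illustrative only, not from the slides; the example counts are chosen to roughly reproduce the clause statistics on slide 11, given the 1,810 positives of the dataset on slide 23):

    def precision_recall_f1(tp, fp, fn):
        """Compute precision, recall, and F1 from confusion-matrix counts."""
        precision = tp / (tp + fp) if tp + fp > 0 else 0.0
        recall = tp / (tp + fn) if tp + fn > 0 else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall > 0 else 0.0)
        return precision, recall, f1

    # Roughly reproduces slide 11: recall 0.15, precision 0.51, F1 0.23
    # (assumed counts: 272 of 1,810 positives covered, 261 false positives)
    print(precision_recall_f1(tp=272, fp=261, fn=1538))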

11 Protein Localization Clause
prot_loc(Protein,Location,Sentence) :-
  phrase_contains_some_alphanumeric(Protein,E),
  phrase_contains_some_internal_cap_word(Protein,E),
  phrase_next(Protein,_),
  different_phrases(Protein,Location),
  one_POS_in_phrase(Location,noun),
  phrase_contains_some_arg2_10x_word(Location,_),
  phrase_previous(Location,_),
  avg_length_sentence(Sentence).
Recall = 0.15, Precision = 0.51, F1 Score = 0.23

12 Aleph (Srinivasan ‘03)
Aleph learns theories of clauses
–Pick positive seed example
–Use heuristic search to find best clause
–Pick new seed from uncovered positives and repeat until threshold of positives covered
Sequential learning is time-consuming
Can we reduce time with ensembles? And also increase quality?
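The sequential-covering loop described above, sketched in Python under stated assumptions: learn_best_clause and the clauses' covers method are hypothetical stand-ins for Aleph's heuristic search, not Aleph's actual API (Aleph itself is a Prolog system):

    def sequential_covering(positives, negatives, learn_best_clause,
                            coverage_threshold=0.95):
        """Aleph-style loop: learn a clause from a seed, remove the
        positives it covers, repeat until enough positives are covered."""
        theory = []
        uncovered = set(positives)
        stop_at = len(positives) * (1 - coverage_threshold)
        while len(uncovered) > stop_at:
            seed = next(iter(uncovered))           # pick a positive seed
            clause = learn_best_clause(seed, uncovered, negatives)
            covered = {p for p in uncovered if clause.covers(p)}
            if not covered:                        # no progress; give up
                break
            theory.append(clause)
            uncovered -= covered                   # sequential: remove covered
        return theory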

13 Outline
Biomedical Information Extraction
Inductive Logic Programming
Gleaner
Extensions to Gleaner
–GleanerSRL
–Negative Salt
–F-Measure Search
–Clause Weighting

14 Gleaner (Goadrich et al. ‘04, ‘06)
Definition of Gleaner
–One who gathers grain left behind by reapers
Key Ideas of Gleaner
–Use Aleph as underlying ILP clause engine
–Search clause space with Rapid Random Restart
–Keep wide range of clauses usually discarded
–Create separate theories for diverse recall

15 Gleaner - Learning
Create B Bins
Generate Clauses
Record Best per Bin
[Figure: precision-recall plot divided into recall bins, with the best clause recorded per bin]

16 Gleaner - Learning
[Figure: one row of recall bins per seed, Seed 1, Seed 2, Seed 3, … Seed K]

17 Gleaner - Ensemble
[Figure: the clauses from bin 5 vote on every example; e.g. ex1: prot_loc(…) scores 12, ex2: prot_loc(…) scores 47, ex3: prot_loc(…) scores 55, … ex601: prot_loc(…) scores 18; an example's score is the number of clauses that cover it]
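The voting step illustrated here reduces to counting matching clauses per example; a minimal sketch, assuming clause objects with a covers method:

    def lofk_scores(clauses, examples):
        """Score each example by how many of the bin's K clauses cover it;
        these counts are what the 'L of K' threshold is later applied to."""
        return {ex: sum(1 for c in clauses if c.covers(ex)) for ex in examples}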

18 Gleaner - Ensemble
[Figure: examples sorted by score (…, 55, 52, 47, …, 18, 17, 16, …), with the precision and recall obtained by thresholding at each rank, from recall 0.05 / precision 1.00 near the top of the ranking down to recall 0.90 / precision 0.12, tracing out a PR curve]

19 Gleaner - Overlap
For each bin, take the topmost curve
[Figure: overlapping per-bin PR curves]

20 How to Use Gleaner (Version 1)
Generate Tuneset Curve
User Selects Recall Bin
Return Testset Classifications Ordered By Their Score
[Figure: tuneset PR curve with the selected operating point, Recall = 0.50, Precision = 0.70]

21 Gleaner Algorithm
Divide space into B bins
For K positive seed examples
–Perform RRR search with precision x recall heuristic
–Save best clause found in each bin b
For each bin b
–Combine clauses in b to form theory b
–Find the 'L of K' threshold m for theory b which performs best in bin b on the tuneset
Evaluate thresholded theories on testset
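A sketch of the learning loop in Python; rrr_search (yielding the clauses visited during a Rapid Random Restart search from one seed) and evaluate (returning a clause's precision and recall on the train set) are assumed callables, not Gleaner's actual interface:

    def gleaner_learn(seeds, rrr_search, evaluate, num_bins=20):
        """Keep, per recall bin and per seed, the clause with the best
        precision x recall among all clauses RRR search visits."""
        best = [dict() for _ in range(num_bins)]   # bin -> {seed: (score, clause)}
        for seed in seeds:
            for clause in rrr_search(seed):
                p, r = evaluate(clause)
                b = min(int(r * num_bins), num_bins - 1)   # recall bin index
                if seed not in best[b] or p * r > best[b][seed][0]:
                    best[b][seed] = (p * r, clause)
        # each bin now holds up to K clauses, one per seed
        return [[clause for _, clause in bin_.values()] for bin_ in best]

The tuneset step then tries each threshold m in 1..K for a bin's combined theory and keeps the one whose performance falls best within that bin.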

22 Aleph Ensembles (Dutra et al ‘02)
Compare to ensembles of theories
Ensemble Algorithm
–Use K different initial seeds
–Learn K theories containing C rules
–Rank examples by the number of theories that cover them

23 YPD Protein Localization
Hand-labeled dataset (Ray & Craven ’01)
–7,245 sentences from 871 abstracts
–Examples are phrase-phrase combinations: 1,810 positive & 279,154 negative
1.6 GB of background knowledge
–Structural, Statistical, Lexical and Ontological
–In total, 200+ distinct background predicates
Performed five-fold cross-validation

24 Evaluation Metrics
Area Under Precision-Recall Curve (AUC-PR)
–All curves standardized to cover full recall range
–Averaged AUC-PR over 5 folds
Number of clauses considered
–Rough estimate of time
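AUC-PR in the average-precision style, as a sketch; note this step-wise sum is only an approximation, since interpolation in PR space is nonlinear (Goadrich et al. ‘04, ‘06, cited on slide 59):

    def auc_pr(scored):
        """scored: list of (score, is_positive); higher score = ranked earlier.
        Returns average precision, a common estimate of AUC-PR."""
        ranked = sorted(scored, key=lambda t: -t[0])
        total_pos = sum(1 for _, pos in ranked if pos)
        tp, auc = 0, 0.0
        for rank, (_, pos) in enumerate(ranked, start=1):
            if pos:
                tp += 1
                auc += (tp / rank) / total_pos   # precision at each recall step
        return auc

    print(auc_pr([(0.9, True), (0.8, False), (0.7, True)]))  # 0.833...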

25 PR Curves - 100,000 Clauses

26 Protein Localization Results

27 Other Relational Datasets
Genetic Disorder (Ray & Craven ’01)
–233 positive & 103,959 negative
Protein Interaction (Bunescu et al ‘04)
–799 positive & 76,678 negative
Advisor (Richardson and Domingos ‘04)
–Students, Professors, Courses, Papers, etc.
–113 positive & 2,711 negative

28 Genetic Disorder Results

29 Protein Interaction Results

30 Advisor Results

31 Gleaner Summary
Gleaner makes use of clauses that are not the highest scoring ones for improved speed and quality
Issues with Gleaner
–Output is PR curve, not probability
–Redundant clauses across seeds
–L of K clause combination

32 Outline
Biomedical Information Extraction
Inductive Logic Programming
Gleaner
Extensions to Gleaner
–GleanerSRL
–Negative Salt
–F-Measure Search
–Clause Weighting

33 Estimating Probabilities - SRL
Given highly skewed relational datasets
Produce accurate probability estimates
Gleaner only produces PR curves
[Figure: PR curve]

34 GleanerSRL Algorithm (Goadrich ‘07)
Divide space into B bins
For K positive seed examples
–Perform RRR search with precision x recall heuristic
–Save best clause found in each bin b
For each bin b
–Create propositional feature-vectors
Learn scores with SVM or other propositional learning algorithms
Calibrate scores into probabilities
Evaluate probabilities with Cross Entropy
(The feature-vector, SVM, and calibration steps replace Gleaner's 'L of K' theory combination and thresholded evaluation.)

35 GleanerSRL Algorithm

36 Learning with Gleaner
Create B Bins
Generate Clauses
Record Best per Bin
Repeat for K seeds
[Figure: PR plot with the best clause recorded per recall bin]

37 Creating Feature Vectors
[Figure: for ex1: prot_loc(…), the clauses from bin 5 yield one binned count feature (12) and K Boolean features (1 0 1 1 0 …)]
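A sketch of this propositionalization; the layout (a per-bin count plus per-clause Booleans) follows the slide's "1 Binned, K Boolean" labels, and covers is an assumed interface:

    def feature_vector(example, bins_of_clauses):
        """Build GleanerSRL features for one example: for every recall bin,
        the count of its clauses covering the example, plus one Boolean
        per clause indicating whether that clause covers it."""
        features = []
        for clauses in bins_of_clauses:
            matches = [1 if c.covers(example) else 0 for c in clauses]
            features.append(sum(matches))   # binned count feature
            features.extend(matches)        # K Boolean features
        return features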

38 Learning Scores via SVM

39 Calibrating Probabilities
Use Isotonic Regression (Zadrozny & Elkan ‘03) to transform SVM scores into probabilities
[Figure: nine examples with scores -2, -0.4, 0.2, 0.4, 0.5, 0.9, 1.3, 1.7, 15 and classes 0 0 1 0 1 1 0 1 1 map to calibrated probabilities 0.00, 0.50, 0.66, 1.00]
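This calibration, sketched with scikit-learn's isotonic regression on the nine score/class pairs from the slide (the library choice is an assumption; the slides do not name an implementation):

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    # Tuneset SVM scores and true classes, as on the slide
    scores = np.array([-2.0, -0.4, 0.2, 0.4, 0.5, 0.9, 1.3, 1.7, 15.0])
    labels = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1])

    # Fit a monotone map from score to probability of being positive
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    iso.fit(scores, labels)

    # Pooled adjacent violators yields the plateau values 0.00, 0.50, 0.66, 1.00
    print(iso.predict(scores))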

40 GleanerSRL Results for Advisor (Davis et al. ‘05) (Davis et al. ‘07)

41 Outline
Biomedical Information Extraction
Inductive Logic Programming
Gleaner
Extensions to Gleaner
–GleanerSRL
–Negative Salt
–F-Measure Search
–Clause Weighting

42 Diversity of Gleaner Clauses

43 Negative Salt
Seed Example
–A positive example that our clause must cover
Salt Example
–A negative example that our clause should avoid
[Figure: search lattice below prot_loc(P,L,S), covering the seed while steering away from the salt]

44 Gleaner Algorithm
Divide space into B bins
For K positive seed examples
–Select Negative Salt example
–Perform RRR search with salt-avoiding heuristic (replacing the precision x recall heuristic)
–Save best clause found in each bin b
For each bin b
–Combine clauses in b to form theory b
–Find the 'L of K' threshold m for theory b which performs best in bin b on the tuneset
Evaluate thresholded theories on testset
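How a salt-avoiding heuristic might adjust the RRR score; the slide does not give the exact formula, so the multiplicative penalty below is purely an assumed illustration:

    def salt_avoiding_score(clause, salt, precision, recall, penalty=0.5):
        """Hypothetical heuristic: the usual precision x recall score,
        discounted when the clause also covers the negative salt example."""
        score = precision * recall
        return score * penalty if clause.covers(salt) else score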

45 Diversity of Negative Salt

46 Effect of Salt on Theory m Choice

47 Negative Salt AUC-PR

48 Outline
Biomedical Information Extraction
Inductive Logic Programming
Gleaner
Extensions to Gleaner
–GleanerSRL
–Negative Salt
–F-Measure Search
–Clause Weighting

49 Gleaner Algorithm
Divide space into B bins
For K positive seed examples
–Perform RRR search with F Measure heuristic (replacing the precision x recall heuristic)
–Save best clause found in each bin b
For each bin b
–Combine clauses in b to form theory b
–Find the 'L of K' threshold m for theory b which performs best in bin b on the tuneset
Evaluate thresholded theories on testset

50 RRR Search Heuristic
Heuristic function directs RRR search
Can provide direction through the Fβ Measure:
Fβ = (1 + β²) x Precision x Recall / (β² x Precision + Recall)
Low values of β encourage Precision
High values of β encourage Recall
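The Fβ heuristic as a small Python function (standard Fβ definition; β = 1 recovers the F1 score of slide 10):

    def f_measure(precision, recall, beta):
        """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R).
        beta < 1 pushes RRR search toward precision; beta > 1 toward recall."""
        if precision == 0.0 and recall == 0.0:
            return 0.0
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # The next three slides use beta = 0.01, 1, and 100
    print(f_measure(0.51, 0.15, 0.01), f_measure(0.51, 0.15, 100))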

51 F0.01 Measure Search

52 F1 Measure Search

53 F100 Measure Search

54 F Measure AUC-PR Results
[Charts: Genetic Disorder and Protein Localization]

55 Weighting Clauses
Alter the L of K combination in Gleaner
Within a Single Theory
–Cumulative weighting schemes successful
–Precision is the highest-scoring scheme
Within Gleaner
–Precision beats Equal Weighted and Naïve Bayes
–Significant results on the genetic-disorder dataset

56 Weighting Clauses
[Figure: clauses from bin 5 matching ex1: prot_loc(…), with per-clause weights W1, 0, W3, W4, 0]
Cumulative
–∑(precision of each matching clause)
–∑(recall of each matching clause)
–∑(F1 measure of each matching clause)
Naïve Bayes and TAN learn a probability for the example
Ranked List
–max(precision of each matching clause)
Weighted Vote
–ave(precision of each matching clause)
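A sketch of these schemes, assuming hypothetical clause objects that carry their tuneset precision and a covers method:

    def weighted_score(example, clauses, scheme="cumulative"):
        """Score one example under the slide's weighting schemes."""
        weights = [c.precision for c in clauses if c.covers(example)]
        if not weights:
            return 0.0
        if scheme == "cumulative":      # e.g. sum of precisions
            return sum(weights)
        if scheme == "ranked_list":     # max precision of a matching clause
            return max(weights)
        if scheme == "weighted_vote":   # average precision of matching clauses
            return sum(weights) / len(weights)
        raise ValueError(f"unknown scheme: {scheme}")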

57 Dominance Results
[Table: cell (i, j) marks statistically significant dominance of scheme i over scheme j]
Precision is never dominated
Naïve Bayes is competitive with the cumulative schemes

58 Weighting Gleaner Results

59 Conclusions and Future Work
Gleaner is a flexible and fast ensemble algorithm for highly skewed ILP datasets
Other Work
–Proper interpolation of PR Space (Goadrich et al. ‘04, ‘06)
–Relationship of PR and ROC Curves (Davis and Goadrich ‘06)
Future Work
–Explore Gleaner on propositional datasets
–Learn heuristic function for diversity (Oliphant and Shavlik ‘07)

60 Acknowledgements
USA DARPA Grant F30602-01-2-0571
USA Air Force Grant F30602-01-2-0571
USA NLM Grant 5T15LM007359-02
USA NLM Grant 1R01LM07050-01
UW Condor Group
Jude Shavlik, Louis Oliphant, David Page, Vitor Santos Costa, Ines Dutra, Soumya Ray, Marios Skounakis, Mark Craven, Burr Settles, Patricia Brennan, AnHai Doan, Jesse Davis, Frank DiMaio, Ameet Soni, Irene Ong, Laura Goadrich, all 6th Floor MSCers

