
1 Structured Prediction: A Large Margin Approach Ben Taskar University of Pennsylvania Joint work with: V. Chatalbashev, M. Collins, C. Guestrin, M. Jordan, D. Klein, D. Koller, S. Lacoste-Julien, C. Manning

2 “Don’t worry, Howard. The big questions are multiple choice.”

3 Handwriting Recognition. Sequential structure: input x = image of a handwritten word, output y = "brace".

4 Object Segmentation. Spatial structure: input x = scene, output y = per-region labels.

5 Natural Language Parsing. Recursive structure: input x = sentence "The screen was a sea of red", output y = parse tree.

6 Bilingual Word Alignment. Combinatorial structure: input x = sentence pair "What is the anticipated cost of collecting fees under the new proposal?" / "En vertu des nouvelles propositions, quel est le coût prévu de perception des droits?", output y = word alignment.

7 Protein Structure and Disulfide Bridges. Protein 1IMT: AVITGACERDLQCGKGTCCAVSLWIKSVRVCTPVGTSGEDCHPASHKIPFSGQRMHHTCPCAPNLACVQTSPKKFKCLSK

8 Local Prediction. Classify each position using local information only; ignores correlations & constraints! (e.g., predicts "breac" instead of "brace")

9 Local Prediction (segmentation example; labels: building, tree, shrub, ground)

10 Structured Prediction. Use local information and exploit correlations.

11 Structured Prediction (segmentation example; labels: building, tree, shrub, ground)

12 Outline. Structured prediction models: sequences (CRFs), trees (CFGs), associative Markov networks (special MRFs), matchings. Structured large margin estimation: margins and structure, min-max formulation, linear programming inference, certificate formulation.

13 Structured Models. A scoring function over the space of feasible outputs; mild assumption: the score is a linear combination of features, s(x, y) = w · f(x, y).

14–15 Chain Markov Net (aka CRF*). Label variables y (each ranging over a–z) linked in a chain over the observed inputs x; the score decomposes into node and edge potentials along the chain. *Lafferty et al. 01
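For concreteness, here is a minimal sketch of MAP decoding in such a chain model (standard Viterbi dynamic programming). The node and edge score arrays are assumed inputs, e.g. w · f computed elsewhere; a single transition matrix shared across positions is a simplifying assumption.

```python
import numpy as np

def viterbi(node_score, edge_score):
    """MAP decoding for a chain-structured score (a minimal sketch).

    node_score: (n, K) per-position label scores (e.g. w . f(x_i, y_i));
    edge_score: (K, K) transition scores, shared across positions for simplicity.
    Returns the highest-scoring label sequence as a list of label indices."""
    n, K = node_score.shape
    best = np.zeros((n, K))          # best[i, b]: best score ending at position i with label b
    back = np.zeros((n, K), dtype=int)
    best[0] = node_score[0]
    for i in range(1, n):
        # cand[a, b] = best score of a path with label a at i-1 and label b at i
        cand = best[i - 1][:, None] + edge_score + node_score[i][None, :]
        back[i] = cand.argmax(axis=0)
        best[i] = cand.max(axis=0)
    y = [int(best[-1].argmax())]
    for i in range(n - 1, 0, -1):    # follow back-pointers from the end
        y.append(int(back[i][y[-1]]))
    return list(reversed(y))
```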

16 Associative Markov Nets. Point features: spin-images, point height. Edge features: length of edge, edge orientation. Node labels y_j, y_k with node potentials φ_j and edge potentials φ_jk; the "associative" restriction requires edge potentials to reward agreeing labels.

17 CFG Parsing. Features are rule counts: #(NP → DT NN) … #(PP → IN NP) … #(NN → ‘sea’)

18 Bilingual Word Alignment. Features on each candidate edge (j, k): position, orthography, association. Example pair: "What is the anticipated cost of collecting fees under the new proposal ?" / "En vertu de les nouvelles propositions , quel est le coût prévu de perception de les droits ?"

19 Disulfide Bonds: Non-bipartite Matching. Sequence RSCCPCYWGGCPWGQNCYPEGCSGPKV with six cysteines (1–6) paired by a perfect matching. Fariselli & Casadio '01, Baldi et al. '04

20 Scoring Function. Features for each candidate cysteine pair in RSCCPCYWGGCPWGQNCYPEGCSGPKV (cysteines 1–6): amino acid identities, physical/chemical properties.

21 Structured Models. A scoring function over the space of feasible outputs; mild assumptions: the score is a linear combination of features and a sum of part scores, s(x, y) = Σ_parts w · f(x, y_part).
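A minimal sketch of what these assumptions mean in code. The helpers `part_features` and `feasible_outputs` are hypothetical stand-ins for the problem-specific part decomposition and output space; in practice the argmax is computed by Viterbi, CKY, matching, etc., not by enumeration.

```python
import numpy as np

def score(w, x, y, part_features):
    """Linear, part-factored score: s(x, y) = sum over parts of w . f(x, y_part).

    part_features(x, y) yields one feature vector per part (hypothetical helper)."""
    return sum(float(w @ f) for f in part_features(x, y))

def predict(w, x, feasible_outputs, part_features):
    """argmax_y s(x, y) by brute-force enumeration over feasible_outputs(x);
    only viable for tiny output spaces, shown here to make the model concrete."""
    return max(feasible_outputs(x), key=lambda y: score(w, x, y, part_features))
```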

22 Supervised Structured Prediction. Learning: from data, estimate w; candidate criteria are likelihood (intractable), local prediction (ignores structure), and margin. Prediction: maximize the score over outputs; e.g. weighted matching, and generally a combinatorial optimization.

23 Outline. Structured prediction models: sequences (CRFs), trees (CFGs), associative Markov networks (special MRFs), matchings. Structured large margin estimation: margins and structure, min-max formulation, linear programming inference, certificate formulation.

24 OCR Example. We want: score(x, "brace") greater than score(x, y) for every other word y ("aaaaa", "aaaab", …, "zzzzz"); that is a lot of constraints! Equivalently: score(x, "brace") ≥ max over all y of score(x, y).

25 Parsing Example. We want: the score of the correct parse of 'It was red' greater than the score of every other tree; a lot of trees! Equivalently: the correct parse's score ≥ max over all trees.

26 Alignment Example. We want: the score of the correct alignment of 'What is the' / 'Quel est le' greater than the score of every other alignment; a lot of alignments! Equivalently: the correct alignment's score ≥ max over all alignments.

27 Structured Loss (Hamming distance to the truth). OCR: "brace" 0, "broce" 1, "brore" 2, "bcare" 2 mistakes; analogous per-edge losses for alignments (0, 1, 2, 2) and per-part losses for parse trees (0, 1, 2, 3).
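A small sketch of the Hamming loss used here; the counts match the OCR examples above.

```python
def hamming_loss(y_true, y_pred):
    """Structured (Hamming) loss: number of positions where the prediction differs.

    For OCR this counts wrong letters, e.g.
    hamming_loss("brace", "broce") == 1, hamming_loss("brace", "brore") == 2,
    hamming_loss("brace", "bcare") == 2, hamming_loss("brace", "brace") == 0."""
    assert len(y_true) == len(y_pred)
    return sum(a != b for a, b in zip(y_true, y_pred))
```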

28 Large Margin Estimation. Given training examples (x, y*), we want the true output to outscore all alternatives: maximize the margin γ subject to w·f(x, y*) ≥ w·f(x, y) + γ for all y ≠ y*. Mistake-weighted margin: require a margin of γ·ℓ(y*, y), where ℓ(y*, y) = # of mistakes in y. *Collins 02, Altun et al 03, Taskar 03

29 Large Margin Estimation. Eliminate the explicit margin variable by fixing the scale of w (minimize ‖w‖² instead); add slacks ξ for the inseparable case.
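Written out, this is the standard margin-rescaled QP; the following is a reconstruction consistent with the cited formulations (ℓ is the mistake count, C the slack penalty), not a verbatim copy of the slide.

```latex
\min_{w,\ \xi \ge 0} \ \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i
\quad \text{s.t.} \quad
w^\top f(x_i, y_i^*) \;\ge\; w^\top f(x_i, y) + \ell(y_i^*, y) - \xi_i
\qquad \forall i,\ \forall y .
```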

30 Large Margin Estimation. Two routes to handling the exponentially many constraints: brute-force enumeration, or the min-max formulation, which ‘plugs in’ the linear program used for inference.

31 Min-max Formulation. Loss-augmented inference with the structured (Hamming) loss is a discrete optimization; the key step is to replace it with its LP relaxation, a continuous optimization that can be folded into the learning problem.

32 Outline. Structured prediction models: sequences (CRFs), trees (CFGs), associative Markov networks (special MRFs), matchings. Structured large margin estimation: margins and structure, min-max formulation, linear programming inference, certificate formulation.

33 y → z Map for Markov Nets. Encode an assignment y as 0/1 indicator variables z: one indicator per (node, label a…z) and one per (edge, label pair).

34 Markov Net Inference LP. Maximize the linear score over indicator variables z subject to normalization (each node's indicators sum to 1) and agreement (edge indicators marginalize to the node indicators). Has integral solutions z for chains and trees; can be fractional for untriangulated networks.
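A small sketch of this inference LP for a chain, using scipy's linprog; the variable indexing and the normalization/agreement constraints follow the description above. For a chain the LP optimum is attained at an integral vertex, so the recovered labeling coincides with Viterbi.

```python
import numpy as np
from scipy.optimize import linprog

def chain_map_lp(node_score, edge_score):
    """MAP inference in a chain MRF as a linear program (a sketch).

    node_score: (n, K) array; edge_score: (n-1, K, K) array.
    Variables are nonnegative node/edge indicator 'marginals' mu with
    normalization and agreement constraints."""
    n, K = node_score.shape
    n_node = n * K

    def nv(i, a):                    # index of node indicator mu_i(a)
        return i * K + a

    def ev(i, a, b):                 # index of edge indicator mu_{i,i+1}(a, b)
        return n_node + i * K * K + a * K + b

    c = np.concatenate([-node_score.ravel(), -edge_score.ravel()])  # linprog minimizes
    A_eq, b_eq = [], []
    for i in range(n):               # normalization: sum_a mu_i(a) = 1
        row = np.zeros_like(c)
        for a in range(K):
            row[nv(i, a)] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    for i in range(n - 1):           # agreement with both endpoints of edge (i, i+1)
        for a in range(K):
            row = np.zeros_like(c)
            for b in range(K):
                row[ev(i, a, b)] = 1.0
            row[nv(i, a)] = -1.0
            A_eq.append(row); b_eq.append(0.0)
        for b in range(K):
            row = np.zeros_like(c)
            for a in range(K):
                row[ev(i, a, b)] = 1.0
            row[nv(i + 1, b)] = -1.0
            A_eq.append(row); b_eq.append(0.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    mu = res.x[:n_node].reshape(n, K)
    return mu.argmax(axis=1), -res.fun   # labels and optimal score
```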

35 Associative MN Inference LP. With the "associative" restriction (edge potentials favor agreeing labels): for K = 2 labels, solutions are always integral (optimal); for K > 2, within a factor of 2 of optimal.

36 CFG Chart. A CNF tree = a set of two types of parts: constituents (A, s, e) and CF-rules (A → B C, s, m, e).

37 CFG Inference LP. Constraints: the root constituent is present, and inside/outside consistency links constituents to the rules that build them. Has integral solutions z.

38 Matching Inference LP. For a sentence pair such as "What is the anticipated cost of collecting fees under the new proposal ?" / "En vertu de les nouvelles propositions , quel est le coût prévu de perception de les droits ?": maximize the edge scores over indicators z subject to degree constraints on each word j, k. Has integral solutions z.
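As a sketch, prediction with such a model can also be done with an off-the-shelf assignment solver instead of the LP. Note this variant forces every word on the smaller side to be aligned; allowing unaligned words (the "at most one" degree constraint) would require adding dummy nodes.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def predict_alignment(edge_score):
    """Predict a word alignment as a maximum-weight bipartite matching (a sketch).

    edge_score[j, k] = score of aligning source word j to target word k
    (e.g. w . f(x, j, k)). Returns a list of (j, k) pairs."""
    rows, cols = linear_sum_assignment(edge_score, maximize=True)
    return list(zip(rows.tolist(), cols.tolist()))

# e.g. predict_alignment(np.array([[2.0, 0.1, 0.0],
#                                  [0.0, 1.5, 0.2],
#                                  [0.1, 0.0, 1.0]]))  -> [(0, 0), (1, 1), (2, 2)]
```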

39 LP Duality. Linear programming duality: variables ↔ constraints, constraints ↔ variables; the optimal values are the same when both feasible regions are bounded.

40 Min-max Formulation. Apply LP duality to the inner maximization: the inference LP's max is replaced by its dual min, turning the min-max problem into a single joint convex program over w and the dual variables.

41 Min-max Formulation Summary. The formulation produces a concise QP for: low-treewidth Markov networks, associative MNs (K = 2), context-free grammars, bipartite matchings. Approximate for untriangulated MNs and AMNs with K > 2. *Taskar et al 04

42 Unfactored Primal/Dual. By QP duality: the primal has exponentially many constraints, so the dual has exponentially many variables.

43 Factored Primal/Dual. By QP duality, the dual inherits structure from the problem-specific inference LP: its variables correspond to a decomposition of the exponentially many dual variables of the flat (unfactored) case.

44 The Connection. The flat dual puts weights on whole outputs ("brace", "broce", "brore", "bcare", with losses 0, 1, 2, 2) that sum to one; the factored dual keeps only their per-position marginal weights on individual letters.

45 Duals and Kernels. The kernel trick works in the factored dual: local functions (log-potentials) can use kernels.

46 Alternatives: Perceptron [Freund & Schapire 99, Collins 02]. A simple iterative method, but unstable for structured output: fewer instances, big updates; may not converge if non-separable; noisy. Voted / averaged perceptron: regularize / reduce variance by aggregating over iterations.
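A minimal sketch of the averaged structured perceptron in the spirit of Collins 02; the `feat` and `decode` callables are assumed to be supplied by the problem (e.g. Viterbi decoding for chains), and outputs are assumed comparable with `!=` (e.g. tuples of labels).

```python
import numpy as np

def structured_perceptron(data, feat, decode, dim, epochs=10):
    """Averaged structured perceptron (a sketch).

    data: list of (x, y_true); feat(x, y) -> feature vector of length dim;
    decode(w, x) -> argmax_y w . feat(x, y). Averaging the weights over all
    updates is the 'voted/averaged' variance-reduction trick."""
    w = np.zeros(dim)
    w_sum = np.zeros(dim)
    for _ in range(epochs):
        for x, y_true in data:
            y_hat = decode(w, x)
            if y_hat != y_true:
                w += feat(x, y_true) - feat(x, y_hat)   # big, structured update
            w_sum += w                                   # accumulate for averaging
    return w_sum / (epochs * len(data))
```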

47 Alternatives: Constraint Generation [Collins 02; Altun et al 03; Tsochantaridis et al 04]. Repeatedly add the most violated constraint. Handles more general loss functions, and only a polynomial # of constraints is needed; but the QP must be re-solved many times, and the worst-case # of constraints is larger than in the factored formulation.
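A rough sketch of the cutting-plane loop. Using cvxpy for the restricted QP and brute-force enumeration as a stand-in for loss-augmented inference are both illustrative assumptions, not the original implementation; the slack term is omitted from the violation test for brevity.

```python
import numpy as np
import cvxpy as cp

def most_violated(w, x, y_true, feat, loss, outputs):
    """Loss-augmented inference by enumeration (toy stand-in for Viterbi/matching)."""
    return max(outputs(x), key=lambda y: float(w @ feat(x, y)) + loss(y_true, y))

def cutting_plane_train(data, feat, loss, outputs, dim, C=1.0, eps=1e-3, max_rounds=50):
    """Constraint generation for the structured max-margin QP (a sketch).

    data: list of (x, y_true) with hashable outputs (e.g. tuples of labels);
    feat(x, y): joint feature vector of length dim; loss: e.g. Hamming."""
    working = [set() for _ in data]          # per-example working sets of outputs
    w_val = np.zeros(dim)
    for _ in range(max_rounds):
        added = 0
        for i, (x, y_true) in enumerate(data):
            y_hat = most_violated(w_val, x, y_true, feat, loss, outputs)
            margin = float(w_val @ (feat(x, y_true) - feat(x, y_hat)))
            if loss(y_true, y_hat) - margin > eps and y_hat not in working[i]:
                working[i].add(y_hat)        # add the most violated constraint
                added += 1
        if added == 0:
            break                            # no constraint violated by more than eps
        # Re-solve the QP restricted to the current working sets.
        w = cp.Variable(dim)
        xi = cp.Variable(len(data), nonneg=True)
        cons = [w @ (feat(x, y_true) - feat(x, y)) >= loss(y_true, y) - xi[i]
                for i, (x, y_true) in enumerate(data) for y in working[i]]
        cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), cons).solve()
        w_val = np.asarray(w.value).ravel()
    return w_val
```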

48 Handwriting Recognition. Length: ~8 chars per word; letters: 16x8 pixels; 10-fold train/test with 5000/50000 letters (600/6000 words). Models: multiclass SVMs*, CRFs, M3 nets (*Crammer & Singer 01). [Bar chart: average per-character test error for raw pixels, quadratic kernel, cubic kernel.] 45% error reduction over linear CRFs; 33% error reduction over multiclass SVMs.

49 Hypertext Classification. WebKB dataset: four CS department websites, 1300 pages / 3500 links. Classify each page as faculty, course, student, project, or other; train on three universities, test on the fourth. Results (relaxed dual): 53% error reduction over SVMs; 38% error reduction over RMNs trained with loopy belief propagation (*Taskar et al 02).

50 3D Mapping. Sensors: laser range finder, GPS, IMU (data provided by Michael Montemerlo & Sebastian Thrun). Labels: ground, building, tree, shrub. Training: 30 thousand points; testing: 3 million points.

51–54 [Image-only slides]

55 Segmentation Results (hand-labeled 180K test points). Model / accuracy: SVM 68%; V-SVM 73%; M3N 93%.

56 Fly-through

57 Word Alignment Results. Data: Hansards (Canadian Parliament); features induced on ~1 million unsupervised sentences; trained on 100 sentences (10,000 edges); tested on 350 sentences (35,000 edges). *Error: weighted combination of precision/recall. Model / error* [Taskar+al 05]: local learning + matching 10.0; our approach 8.5. [Lacoste-Julien+Taskar+al 06]: GIZA/IBM4 [Och & Ney 03] 6.5; + local learning + matching 5.4; + our approach 4.9; + our approach + QAP 4.5.

58 Outline. Structured prediction models: sequences (CRFs), trees (CFGs), associative Markov networks (special MRFs), matchings. Structured large margin estimation: margins and structure, min-max formulation, linear programming inference, certificate formulation.

59 Certificate Formulation. Non-bipartite matchings: an O(n³) combinatorial algorithm exists, but no polynomial-size LP is known. Spanning trees: no polynomial-size LP known either, but there is a simple certificate of optimality. Intuition: verifying optimality is easier than optimizing; use a compact optimality condition of the prediction with respect to the learned scores.

60 Certificate for Non-bipartite Matching. Alternating cycle: every other edge is in the matching. Augmenting alternating cycle: the score of the edges not in the matching is greater than that of the edges in the matching. Negate the score of edges not in the matching; then an augmenting alternating cycle = a negative-length alternating cycle, and the matching is optimal ⇔ there are no negative alternating cycles. Edmonds '65

61 Certificate for Non-bipartite Matching. Pick any node r as root and let d_j = the length of the shortest alternating path from r to j; these distances satisfy a triangle inequality along alternating edges. Theorem: no negative-length cycle ⇔ such a distance function d exists. This can be expressed as linear constraints: O(n) distance variables, O(n²) constraints.

62 Certificate Formulation. The formulation produces a compact QP for: spanning trees, non-bipartite matchings, and any problem with a compact optimality condition. *Taskar et al. '05

63 Disulfide Bonding Prediction. Data: Swiss-Prot 39, 450 sequences (4–10 cysteines). Features: windows around each C-C pair, physical/chemical properties [Taskar+al 05]. Model / accuracy*: local learning + matching 41%; recursive neural net [Baldi+al '04] 52%; our approach (certificate) 55%. *Accuracy: % of proteins with all bonds correct.

64 Formulation Summary. Brute-force enumeration. Min-max formulation: ‘plug in’ a convex program for inference. Certificate formulation: directly guarantee optimality of the target output via compact conditions.

65 Omissions. Kernels: non-parametric models. Structured generalization bounds: bounds on Hamming loss. Scalable algorithms (no QP solver needed): structured SMO (works for chains, trees) [Taskar 04]; structured ExpGrad (works for chains, trees) [Bartlett+al 04]; structured ExtraGrad (works for matchings, AMNs) [Taskar+al 06].

66 Open Questions. Statistical consistency: hinge loss is not consistent for non-binary output [see Tewari & Bartlett 05, McAllester 07]. Learning with approximate inference: does constant-factor approximate inference guarantee anything about learning? No [see Kulesza & Pereira 07]; perhaps other assumptions are needed. Discriminative structure learning: using sparsifying priors.

67 Conclusion. Two general techniques for structured large-margin estimation: exact, compact, convex formulations; allow efficient use of kernels; tractable when other estimation methods are not; efficient learning algorithms; empirical success on many domains.

68 References. Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. ICML 2003. M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. EMNLP 2002. K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR 2001. J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. ICML 2001. More papers at http://www.cis.upenn.edu/~taskar

69 [Image-only slide]

70 Modeling First-Order Effects: monotonicity, local inversion, local fertility. The resulting problem is a QAP (NP-complete); sentences (≤ 30 words, ~1k variables) solve in a few seconds (Mosek). Learning: use the LP relaxation. Testing: using the LP, 83.5% of sentences and 99.85% of edges are integral.

71 Segmentation Model ⇒ Min-Cut. Local evidence (per-node scores for labels 0/1) plus spatial smoothness. Computing the MAP is hard in general, but if the edge potentials are attractive ⇒ min-cut algorithm; multiway cut for the multiclass case ⇒ use the LP relaxation. [Greig+al 89, Boykov+al 99, Kolmogorov & Zabih 02, Taskar+al 04]
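A minimal sketch of the binary case as an s-t min-cut, using networkx; the per-node costs and the single attractive edge weight are assumed inputs, and the graph construction is the standard one for binary attractive MRFs, not the original implementation.

```python
import networkx as nx

def binary_mrf_mincut(unary, pairs, pairwise_weight):
    """MAP for a binary MRF with attractive pairwise terms via s-t min-cut (a sketch).

    unary[i] = (cost of label 0, cost of label 1) for node i;
    pairs = list of neighbor pairs (i, j); pairwise_weight >= 0 is the penalty
    for neighbors taking different labels. Returns {node: label}."""
    G = nx.DiGraph()
    s, t = "source", "sink"
    for i, (c0, c1) in enumerate(unary):
        G.add_edge(s, i, capacity=c1)   # cut s->i  <=> node i ends on sink side (label 1), pays c1
        G.add_edge(i, t, capacity=c0)   # cut i->t  <=> node i ends on source side (label 0), pays c0
    for i, j in pairs:
        G.add_edge(i, j, capacity=pairwise_weight)  # pay the penalty only if labels disagree
        G.add_edge(j, i, capacity=pairwise_weight)
    _, (source_side, _) = nx.minimum_cut(G, s, t)
    return {i: (0 if i in source_side else 1) for i in range(len(unary))}

# e.g. binary_mrf_mincut([(0.2, 1.0), (0.9, 0.1), (0.8, 0.3)], [(0, 1), (1, 2)], 0.5)
```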

72 Scalable Algorithms. Batch and online; linear in the size of the data. Iterate until convergence: for each example in the training sample, run inference using the current parameters (varies by method); online: update parameters using the computed example values; batch: update parameters using the computed sample values. Structured SMO (Taskar et al 03; Taskar 04), structured exponentiated gradient (Bartlett et al 04), structured extragradient (Taskar et al 05).

73 Experimental Setup. Standard Penn Treebank split (sections 2–21 / 22 / 23). Generative baselines: Klein & Manning 03 and Collins 99. Discriminative: Basic = max-margin version of K&M 03; Lexical and Lexical + Aux. Lexical features (on constituent parts only): the predicted tags t_{s-1} [t_s … t_e] t_{e+1} and the words x_{s-1} [x_s … x_e] x_{e+1} around each span. Auxiliary features: a flat classifier using the same features, and the prediction of K&M 03 on each span.

74 Results for sentences ≤ 40 words (*Taskar et al 04). Model / LP / LR / F1: Generative 86.37 / 85.27 / 85.82; Lexical+Aux (trained only on sentences ≤ 20 words) 87.56 / 86.85 / 87.20; Collins 99 (trained only on sentences ≤ 20 words) 85.33 / 85.94 / 85.73.

75 Example: "The Egyptian president said he would visit Libya today to resume the talks." Generative model: "Libya today" is a base NP. Lexical model: "today" is a one-word constituent.

