
Page 1 SRL via Generalized Inference
Vasin Punyakanok, Dan Roth, Wen-tau Yih, Dav Zimak, Yuancheng Tu
Department of Computer Science, University of Illinois at Urbana-Champaign

Page 2 Semantic Role Labeling
For each verb in a sentence:
1. Identify all constituents that fill a semantic role.
2. Determine their roles (Agent, Patient, Instrument, ...) and their adjuncts (e.g., Locative, Temporal, Manner).
The PropBank project [Kingsbury & Palmer 02] provides a large human-annotated corpus of semantic verb-argument relations. This work addresses the CoNLL-2004 shared task [Carreras & Marquez 04].

Page 3 Example
- A0 represents the leaver.
- A1 represents the thing left.
- A2 represents the benefactor.
- AM-LOC is an adjunct indicating the location of the action.
- V determines the verb.

Page 4 Argument Types
- A0-A5 and AA have different semantics for each verb, as specified in the PropBank frame files.
- 13 types of adjuncts are labeled AM-XXX, where XXX specifies the adjunct type.
- C-XXX specifies the continuation of argument XXX.
- In some cases the actual agent is labeled with the appropriate argument type XXX, while the relative pronoun referring to it is instead labeled R-XXX.

Page 5 Examples of C-XXX and R-XXX arguments.

Page 6 Outline
- Find potential argument candidates
- Classify arguments into types
- Inference for argument structure: cost function, constraints, integer linear programming (ILP)
- Results & discussion

Page 7 Find Potential Arguments
An argument can be any consecutive sequence of words, e.g. over "I left my nice pearls to her". To restrict the set of potential arguments, learn two word-level predicates:
- BEGIN(word) = 1 means "word begins an argument".
- END(word) = 1 means "word ends an argument".
A span (w_i, ..., w_j) is a potential argument iff BEGIN(w_i) = 1 and END(w_j) = 1. This reduces the set of potential arguments.
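The candidate-generation step on this slide can be sketched as follows. The sentence is the slide's running example, but the BEGIN/END flags below are hypothetical illustrations, not the actual classifiers' output:

```python
def potential_arguments(words, begin, end):
    """Return all spans (i, j) with BEGIN(w_i) = 1 and END(w_j) = 1, i <= j."""
    return [(i, j)
            for i in range(len(words)) if begin[i]
            for j in range(i, len(words)) if end[j]]

words = ["I", "left", "my", "nice", "pearls", "to", "her"]
begin = [1, 0, 1, 0, 0, 1, 1]   # hypothetical BEGIN predictions
end   = [1, 0, 0, 0, 1, 0, 1]   # hypothetical END predictions

cands = potential_arguments(words, begin, end)
# e.g. (2, 4) is the candidate span "my nice pearls"
```

With n begin positions and m end positions this yields at most n*m candidates, far fewer than all O(len(words)^2) spans.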

Page 8 Details – Word-Level Classifiers
- BEGIN(word): learn a function Phi_B(word, context, structure) -> {0, 1}.
- END(word): learn a function Phi_E(word, context, structure) -> {0, 1}.
- PotArg = {arg | BEGIN(first(arg)) and END(last(arg))}.

Page 9 Argument Type Likelihood
Assign a type likelihood to each candidate: how likely is it that argument a is of type t? For all a in PotArg and t in T, estimate P(argument a = type t). (The slide illustrates this over "I left my nice pearls to her", showing candidate spans with example probabilities for the types A0, A1, C-A1, and the null label Ø.)

Page 10 Details – Phrase-Level Classifier
- Learn a classifier ARGTYPE(arg): extract features Phi_P(arg) and predict a label in {A0, A1, ..., C-A0, ..., AM-LOC, ...} as argmax_t w_t · Phi_P(arg).
- Estimate probabilities with a softmax: P(a = t) = exp(w_t · Phi_P(a)) / Z, where Z = sum_t' exp(w_t' · Phi_P(a)).
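The softmax step above can be sketched directly. The per-type scores stand in for the linear scores w_t · Phi_P(a) and are invented for illustration:

```python
import math

def softmax_scores(scores):
    """Turn per-type linear scores w_t . Phi_P(a) into probabilities P(a = t)."""
    z = sum(math.exp(s) for s in scores.values())   # the normalizer Z
    return {t: math.exp(s) / z for t, s in scores.items()}

# Hypothetical scores for one candidate argument.
scores = {"A0": 2.0, "A1": 1.0, "AM-LOC": 0.0, "NULL": -1.0}
probs = softmax_scores(scores)
```

The resulting distribution sums to one, and the argmax of the probabilities is the same label the raw argmax_t w_t · Phi_P(a) rule would pick.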

Page 11 What Is a Good Assignment?
- Likelihood of being correct: P(a = t), where t is the correct type for argument a.
- For a set of arguments a_1, a_2, ..., a_n, the expected number of correct arguments is sum_i P(a_i = t_i).
- We search for the assignment with the maximum expected number of correct arguments.

Page 12 Inference
Maximize the expected number of correct arguments, T* = argmax_T sum_i P(a_i = t_i), subject to structural and linguistic constraints (e.g., R-A1 implies an A1). On the slide's example over "I left my nice pearls to her": the unconstrained independent maximum has cost 0.3 + 0.6 + 0.5 + 0.4 = 1.8, adding the non-overlapping constraint gives 0.3 + 0.4 + 0.5 + 0.4 = 1.6, and adding the linguistic constraint as well gives 0.3 + 0.4 + 0.3 + 0.4 = 1.4.

Page 13 LP Formulation – Linear Cost
- Cost function: sum over a in PotArg, t in T of P(a = t) · x_{a=t}.
- Indicator variables: x_{a1=A0}, x_{a1=A1}, ..., x_{a4=AM-LOC}, x_{a4=Ø} in {0, 1}.
- Total cost = p(a1=A0) · x(a1=A0) + p(a1=Ø) · x(a1=Ø) + ... + p(a4=Ø) · x(a4=Ø).

Page 14 Linear Constraints (1/2)
- Binary values: for all a in PotArg and t in T, x_{a=t} in {0, 1}.
- Unique labels: for all a in PotArg, sum over t in T of x_{a=t} = 1.
- No overlapping or embedding: if a1 and a2 overlap, then x_{a1=Ø} + x_{a2=Ø} >= 1.

Page 15 Linear Constraints (2/2)
- No duplicate argument classes: sum over a in PotArg of x_{a=A0} <= 1.
- R-XXX: for all a2 in PotArg, sum over a in PotArg of x_{a=A0} >= x_{a2=R-A0}.
- C-XXX: for all a2 in PotArg, sum over a in PotArg with a before a2 of x_{a=A0} >= x_{a2=C-A0}.
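The objective and two of these constraints (no overlap, no duplicate classes) can be illustrated with a tiny brute-force stand-in for the ILP solver: enumerate all labelings of the candidate arguments, keep the feasible ones, and return the one maximizing the expected number of correct arguments. The spans, type set, and probabilities below are invented for illustration:

```python
from itertools import product

cands = [(0, 0), (2, 4), (5, 6)]        # candidate word spans (i, j)
types = ["A0", "A1", "A2", "NULL"]      # NULL plays the role of Ø
P = {                                    # hypothetical P(a = t)
    (0, 0): {"A0": 0.6, "A1": 0.1, "A2": 0.1, "NULL": 0.2},
    (2, 4): {"A0": 0.3, "A1": 0.5, "A2": 0.1, "NULL": 0.1},
    (5, 6): {"A0": 0.2, "A1": 0.3, "A2": 0.4, "NULL": 0.1},
}

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def feasible(assign):
    labelled = [(a, t) for a, t in assign.items() if t != "NULL"]
    # No overlapping or embedded arguments.
    for i, (a, _) in enumerate(labelled):
        if any(overlaps(a, b) for b, _ in labelled[i + 1:]):
            return False
    # No duplicate argument classes.
    labels = [t for _, t in labelled]
    return len(labels) == len(set(labels))

# Exhaustive search over all |types|^|cands| labelings; infeasible
# assignments are scored below any feasible one.
best = max(
    (dict(zip(cands, ts)) for ts in product(types, repeat=len(cands))),
    key=lambda a: sum(P[c][a[c]] for c in cands) if feasible(a) else -1.0,
)
```

A real solver would express the same feasible set with the linear inequalities on this slide and hand them to an ILP package; exhaustive search is only viable because this toy instance has 4^3 labelings.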

Page 16 Results on Perfect Boundaries
Assume the boundaries of arguments (in both training and testing) are given. Development set:

                    Precision   Recall   F1
without inference     86.95     87.24   87.10
with inference        88.03     88.23   88.13

Page 17 Results
Overall F1 on the test set: 66.39.

Page 18 Discussion
- Data analysis is important: F1 went from ~45% to ~65% through feature engineering, parameter tuning, etc.
- Global inference helps: using all constraints gains more than 1% F1 over using only the non-overlapping constraint, and it is easy and fast (15-20 minutes).
- Performance difference? It does not come from word-based vs. chunk-based processing.

Page 19 Thank you. yih@uiuc.edu

