
1 Information Extraction Yunyao Li EECS/SI 767 03/29/2006

2 The Problem Given a seminar announcement, extract the fields: Date, Time (Start - End), Speaker (a Person), Location.

3 What is "Information Extraction" Filling slots in a database from sub-segments of text. As a task: October 14, 2002, 4:00 a.m. PT For years, Microsoft Corporation CEO Bill Gates railed against the economic philosophy of open-source software with Orwellian fervor, denouncing its communal licensing as a "cancer" that stifled technological innovation. Today, Microsoft claims to "love" the open-source concept, by which software code is made public to encourage improvement and development by outside programmers. Gates himself says Microsoft will gladly disclose its crown jewels--the coveted code behind the Windows operating system--to select customers. "We can be open source. We love the concept of shared source," said Bill Veghte, a Microsoft VP. "That's a super-important shift for us in terms of code access." Richard Stallman, founder of the Free Software Foundation, countered saying… The slots to fill: NAME, TITLE, ORGANIZATION. Courtesy of William W. Cohen

4 What is "Information Extraction" Filling slots in a database from sub-segments of text. As a task: IE applied to the same passage (slide 3) fills the table: NAME, TITLE, ORGANIZATION: Bill Gates, CEO, Microsoft; Bill Veghte, VP, Microsoft; Richard Stallman, founder, Free Soft.. Courtesy of William W. Cohen

5 What is "Information Extraction" Information Extraction = segmentation + classification + association + clustering. On the same passage, the segmentation step (aka "named entity extraction") finds the mentions: Microsoft Corporation, CEO, Bill Gates, Microsoft, Gates, Microsoft, Bill Veghte, Microsoft, VP, Richard Stallman, founder, Free Software Foundation. Courtesy of William W. Cohen

6-7 What is "Information Extraction" (Slides 6 and 7 repeat the passage and the extracted mentions, stepping through the remaining parts of the same decomposition: Information Extraction = segmentation + classification + association + clustering.) Courtesy of William W. Cohen

8 What is "Information Extraction" The final step merges the classified and associated mentions into the table: NAME, TITLE, ORGANIZATION: Bill Gates, CEO, Microsoft; Bill Veghte, VP, Microsoft; Richard Stallman, founder, Free Soft.. Courtesy of William W. Cohen

9 Live Example: Seminar

10 Landscape of IE Techniques Running example: "Abraham Lincoln was born in Kentucky."
–Lexicons: is a token a member of a list (Alabama, Alaska, …, Wisconsin, Wyoming)?
–Classify pre-segmented candidates: a classifier assigns a class to each candidate segment.
–Sliding window: classify each window of tokens, trying alternate window sizes.
–Boundary models: classifiers detect BEGIN and END boundaries.
–Context-free grammars: find the most likely parse (NNP, V, NP, PP, VP, S).
–Finite state machines: find the most likely state sequence. Our focus today!
Courtesy of William W. Cohen

11 Markov Property States: S1 (rain), S2 (cloud), S3 (sun), with transition edges labeled 1/2, 1/3, 2/3, 1 in the diagram. The state of a system at time t+1, q_{t+1}, is conditionally independent of {q_{t-1}, q_{t-2}, …, q_1, q_0} given q_t. In other words, the current state determines the probability distribution of the next state.

12 Markov Property States S1 (rain), S2 (cloud), S3 (sun); state-transition probabilities A = [a_ij], where a_ij = P(q_{t+1} = S_j | q_t = S_i) (the slide shows the full matrix; only the edge labels 1/2, 1/3, 2/3, 1 survive in this transcript). Q: given today is sunny (i.e., q_1 = S3), what is the probability of "sun-cloud" for the next two days under the model?
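As a quick sketch of the computation this question asks for: under a first-order Markov chain, the probability of a state sequence is just a product of transition-matrix entries. The matrix values below are assumed for illustration, since the slide's full matrix is not preserved:

```python
import numpy as np

# States: 0 = rain, 1 = cloud, 2 = sun. These transition values are
# assumed for illustration, not taken from the slide.
A = np.array([
    [0.5, 0.5, 0.0],   # rain  -> rain, cloud, sun
    [1/3, 1/3, 1/3],   # cloud -> rain, cloud, sun
    [0.0, 0.5, 0.5],   # sun   -> rain, cloud, sun
])

def sequence_prob(states, A):
    """P(q_2, ..., q_T | q_1) for a first-order Markov chain."""
    p = 1.0
    for prev, nxt in zip(states, states[1:]):
        p *= A[prev, nxt]
    return p

# The slide's question: given today is sunny, P("sun, cloud" next):
print(sequence_prob([2, 2, 1], A))  # P(sun->sun) * P(sun->cloud) = 0.25
```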

13 Hidden Markov Model States S1 (rain), S2 (cloud), S3 (sun) with the same transition probabilities (1/2, 1/3, 2/3, 1); each state now emits observations (emission probabilities on the diagram: 4/5, 1/10, 7/10, 1/5, 3/10, 9/10). The state sequence is hidden; only the observations O_1 O_2 O_3 O_4 O_5 are seen.

14 IE with Hidden Markov Model Given a sequence of observations: SI/EECS 767 is held weekly at SIN2. and a trained HMM with states {course name, location name, background}, find the most likely state sequence (Viterbi). Any words said to be generated by the designated "course name" state are extracted as a course name: Course name: SI/EECS 767
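A minimal Viterbi decoder sketch (the deck gives no implementation; the toy model at the bottom, with a "background" and a "course name" state, uses assumed numbers):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state sequence for observation indices `obs`.
    pi: initial probs (K,); A: transitions (K, K); B: emissions (K, V)."""
    K, T = len(pi), len(obs)
    with np.errstate(divide="ignore"):
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    delta = np.zeros((T, K))           # best log-prob of a path ending in state k at t
    psi = np.zeros((T, K), dtype=int)  # back-pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # scores[from, to]
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):      # follow back-pointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy model (assumed numbers): state 0 = background, 1 = course name;
# observation 0 = generic word, 1 = course-like token ("SI/EECS 767").
pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2], [0.5, 0.5]])
B = np.array([[0.7, 0.3], [0.1, 0.9]])
print(viterbi([1, 0, 0, 0], pi, A, B))  # -> [1, 0, 0, 0]
```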

15 Named Entity Extraction [Bikel et al., 1998] Hidden states: Person, Org, and five other name classes, plus special start-of-sentence and end-of-sentence states.

16 Named Entity Extraction Transition probabilities: P(s_t | s_{t-1}, o_{t-1}). Observation probabilities: P(o_t | s_t, s_{t-1}) or P(o_t | s_t, o_{t-1}), used for (1) generating the first word of a name class, (2) generating the rest of the words in the name class, and (3) generating "+end+" in a name class.

17 Training: Estimating Probabilities The transition and observation probabilities are estimated from counts of events in labeled training data (maximum likelihood).

18 Back-Off To handle "unknown words" and insufficient training data, back off to less specific distributions: transition probabilities P(s_t | s_{t-1}, o_{t-1}) → P(s_t | s_{t-1}) → P(s_t); observation probabilities P(o_t | s_t, s_{t-1}) → P(o_t | s_t) → P(o_t).
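Bikel et al. compute back-off weights from training counts; the sketch below substitutes a simpler fixed linear interpolation between P(o_t | s_t) and the unigram P(o_t), with made-up counts, just to show the mechanics:

```python
from collections import Counter

# Toy (state, word) training pairs, made up for illustration.
pairs = [("course", "SI/EECS"), ("course", "767"), ("bg", "is"),
         ("bg", "held"), ("bg", "weekly"), ("loc", "SIN2")]

state_word = Counter(pairs)
state = Counter(s for s, _ in pairs)
word = Counter(w for _, w in pairs)
total = len(pairs)

def p_obs(w, s, lam=0.8):
    """Back off from P(o_t | s_t) to the unigram P(o_t)."""
    p_given_state = state_word[(s, w)] / state[s] if state[s] else 0.0
    return lam * p_given_state + (1 - lam) * (word[w] / total)

print(p_obs("767", "course"))  # seen pairing: dominated by P(o|s)
print(p_obs("767", "loc"))     # unseen pairing: falls back to P(o)
```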

19 HMM Experimental Results Trained on ~500k words of newswire text. (Results table shown on the slide.)

20 Learning HMM for IE [Seymore, 1999] Consider labeled, unlabeled, and distantly-labeled data

21 Some Issues with HMM –Need to enumerate all possible observation sequences –Not practical to represent multiple interacting features or long-range dependencies of the observations –Very strict independence assumptions on the observations

22 Maximum Entropy Markov Models [Lafferty, 2001] Idea: replace the generative model in the HMM with a maxent model, where the state depends on observations. (Diagram: states S_{t-1}, S_t, S_{t+1} with observations O_{t-1}, O_t, O_{t+1}.) Example observation features: identity of word; ends in "-ski"; is capitalized; is part of a noun phrase; is in a list of city names; is under node X in WordNet; is in bold font; is indented; is in hyperlink anchor; … For the word "Wisniewski": part of noun phrase, ends in "-ski". Courtesy of William W. Cohen

23 MEMM Idea: replace the generative model in the HMM with a maxent model, where the state depends on observations and the previous state history. (Same diagram and feature set as slide 22.) Courtesy of William W. Cohen
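A sketch of the MEMM's local model: P(s_t | s_{t-1}, o_t) as a per-state maxent (softmax) distribution over Boolean features. The feature names, weights, and state set are invented for illustration, not taken from the slides:

```python
import math

STATES = ["bg", "course", "loc"]

def features(s, s_prev, word):
    """Boolean features pairing a candidate state with observation cues."""
    return {
        f"state={s}": 1.0,
        f"prev={s_prev},state={s}": 1.0,
        f"caps,state={s}": float(word[0].isupper()),
        f"digit,state={s}": float(any(c.isdigit() for c in word)),
    }

weights = {"digit,state=course": 2.0, "caps,state=loc": 1.5, "state=bg": 0.5}

def p_next_state(s_prev, word):
    scores = {s: sum(weights.get(name, 0.0) * value
                     for name, value in features(s, s_prev, word).items())
              for s in STATES}
    z = sum(math.exp(v) for v in scores.values())  # per-source-state normalization
    return {s: math.exp(v) / z for s, v in scores.items()}

print(p_next_state("bg", "767"))  # digits push mass toward "course"
```

Note the per-state normalization in p_next_state: each source state's outgoing scores are normalized separately, which is exactly what slide 25 exploits.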

24 HMM vs. MEMM (Graphical models: the HMM is generative, with arrows from each state S_t to its observation O_t and to the next state S_{t+1}; the MEMM is conditional, with arrows from the previous state S_{t-1} and the observation O_t into the state S_t.)

25 Label Bias Problem with MEMM Consider this MEMM: Pr(1→2 | ro) = Pr(2 | 1, ro) Pr(1 | r) = Pr(2 | 1, o) Pr(1 | r). Because state 1 has only one outgoing transition, per-state normalization forces Pr(2 | 1, o) = Pr(2 | 1, i) = 1. Similarly, Pr(1→2 | ri) = Pr(2 | 1, i) Pr(1 | r). So Pr(1→2 | ro) = Pr(1→2 | ri). But it should be Pr(1→2 | ro) < Pr(1→2 | ri)!
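Numerically, the problem looks like this (a sketch after Lafferty et al.'s rib/rob illustration; the exact topology of the slide's MEMM is assumed):

```python
# Two competing paths:
#   path A: 0 -r-> 1 -i-> 2        path B: 0 -r-> 4 -o-> 5
# Per-state normalization gives state 1's single outgoing arc
# probability 1 no matter what is observed.
P = {
    (0, "r"): {1: 0.6, 4: 0.4},              # set by training frequencies
    (1, "i"): {2: 1.0}, (1, "o"): {2: 1.0},  # one successor => prob 1
    (4, "i"): {5: 1.0}, (4, "o"): {5: 1.0},
}

def path_prob(path, obs):
    p = 1.0
    for s, (o, nxt) in zip(path, zip(obs, path[1:])):
        p *= P[(s, o)][nxt]
    return p

print(path_prob([0, 1, 2], "ri"))  # 0.6
print(path_prob([0, 1, 2], "ro"))  # also 0.6: the observation is ignored
```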

26 Solve the Label Bias Problem Change the state-transition structure of the model –Not always practical to change the set of states Start with a fully-connected model and let the training procedure figure out a good structure –Precludes the use of a prior, which is very valuable (e.g., in information extraction)

27 Random Field Courtesy of Rongkun Shen

28 Conditional Random Field Courtesy of Rongkun Shen

29 Conditional Distribution If the graph G = (V, E) of Y is a tree, then by the fundamental theorem of random fields the conditional distribution over the label sequence Y = y, given X = x, is

p_θ(y | x) ∝ exp( Σ_{e∈E, k} λ_k f_k(e, y|_e, x) + Σ_{v∈V, k} μ_k g_k(v, y|_v, x) )

where x is a data sequence; y is a label sequence; v is a vertex from the vertex set V (the set of label random variables); e is an edge from the edge set E over V; f_k and g_k are given and fixed Boolean features (f_k an edge feature, g_k a vertex feature); k indexes the features; θ = (λ_1, λ_2, …; μ_1, μ_2, …) are the parameters to be estimated; y|_e is the set of components of y defined by edge e; and y|_v is the set of components of y defined by vertex v.

30 Conditional Distribution CRFs use the observation-dependent normalization Z(x) for the conditional distributions:

p_θ(y | x) = (1 / Z(x)) exp( Σ_{e∈E, k} λ_k f_k(e, y|_e, x) + Σ_{v∈V, k} μ_k g_k(v, y|_v, x) )

where Z(x) is a normalization constant over the data sequence x.

31 HMM-like CRF To train a CRF that mimics an HMM, use a single feature for each state-state pair (y′, y) and each state-observation pair (y, x) in the data:

f_{y′,y}(<u,v>, y|_{<u,v>}, x) = 1 if y_u = y′ and y_v = y, 0 otherwise
g_{y,x}(v, y|_v, x) = 1 if y_v = y and x_v = x, 0 otherwise

The corresponding parameters λ_{y′,y} and μ_{y,x} are then equivalent to the logarithms of the HMM transition probability Pr(y′ | y) and observation probability Pr(x | y).
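As a sketch of the correspondence, with assumed toy probabilities: set the CRF weights to the log transition and emission probabilities, and the unnormalized CRF score of a label sequence reproduces the HMM's log-joint score:

```python
import math

# Assumed toy HMM probabilities for two states and two words.
A = {("bg", "bg"): 0.8, ("bg", "course"): 0.2,
     ("course", "bg"): 0.5, ("course", "course"): 0.5}
B = {("bg", "is"): 0.7, ("bg", "767"): 0.3,
     ("course", "is"): 0.1, ("course", "767"): 0.9}

# CRF weights: one edge weight per state pair, one vertex weight per
# (state, word) pair, set to the log HMM probabilities.
lam = {pair: math.log(p) for pair, p in A.items()}  # lambda_{y', y}
mu = {pair: math.log(p) for pair, p in B.items()}   # mu_{y, x}

def crf_score(labels, words):
    """Unnormalized log-score; matches the HMM log-joint up to the
    initial-state prior, which is omitted here."""
    s = sum(mu[(y, x)] for y, x in zip(labels, words))
    s += sum(lam[pair] for pair in zip(labels, labels[1:]))
    return s

print(crf_score(["course", "bg"], ["767", "is"]))  # log(0.9 * 0.5 * 0.7)
```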

32 HMM-like CRF For a chain structure, the conditional probability of a label sequence can be expressed in matrix form. For each position i in the observed sequence x, define the matrix

M_i(y′, y | x) = exp( Σ_k λ_k f_k(e_i, (y′, y), x) + Σ_k μ_k g_k(v_i, y, x) )

where e_i is the edge with labels (y_{i-1}, y_i) and v_i is the vertex with label y_i.

33 HMM-like CRF The normalization function Z(x) is the (start, stop) entry of the product of these matrices:

Z(x) = ( M_1(x) M_2(x) ⋯ M_{n+1}(x) )_{start, stop}

The conditional probability of a label sequence y is

p_θ(y | x) = ( Π_{i=1}^{n+1} M_i(y_{i-1}, y_i | x) ) / Z(x)

where y_0 = start and y_{n+1} = stop.
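A sketch of this computation with random stand-in matrices (start/stop handling as on the slide; the M_i here are arbitrary positive matrices rather than exponentiated feature scores):

```python
from itertools import product

import numpy as np

# Labels are 0..K-1, plus start (index K) and stop (index K+1).
K, n = 2, 3
rng = np.random.default_rng(0)

def make_M():
    M = np.zeros((K + 2, K + 2))
    M[:K + 1, :K] = rng.uniform(0.1, 1.0, size=(K + 1, K))  # into real labels
    M[:K + 1, K + 1] = 1.0  # any label (or start) may step to stop
    return M

Ms = [make_M() for _ in range(n + 1)]  # M_1 ... M_{n+1}

# Z(x) is the (start, stop) entry of the product of the matrices.
Z = np.linalg.multi_dot(Ms)[K, K + 1]

def p(y):
    """p(y | x) for a label sequence y of length n."""
    path = [K] + list(y) + [K + 1]  # pad with start and stop
    num = np.prod([Ms[i][path[i], path[i + 1]] for i in range(n + 1)])
    return num / Z

# The probabilities of all K**n label sequences sum to 1:
print(sum(p(y) for y in product(range(K), repeat=n)))  # -> 1.0 (up to float error)
```

Summing p(y | x) over all K^n label sequences returns 1, confirming that the (start, stop) entry of the matrix product is the right normalizer.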

34 Parameter Estimation The problem: determine the parameters θ = (λ_1, λ_2, …; μ_1, μ_2, …) from training data D = {(x^(i), y^(i))} with empirical distribution p̃(x, y). The goal: maximize the log-likelihood objective function

O(θ) = Σ_i log p_θ(y^(i) | x^(i)) = Σ_{x,y} p̃(x, y) log p_θ(y | x)

35 Parameter Estimation – Iterative Scaling Algorithms Update the weights as λ_k ← λ_k + δλ_k and μ_k ← μ_k + δμ_k. The appropriately chosen δλ_k for edge feature f_k is the solution of

Ẽ[f_k] := Σ_{x,y} p̃(x, y) Σ_{i=1}^{n+1} f_k(e_i, y|_{e_i}, x) = Σ_{x,y} p̃(x) p_θ(y | x) Σ_{i=1}^{n+1} f_k(e_i, y|_{e_i}, x) exp(δλ_k T(x, y))

where T(x, y) = Σ_{i,k} f_k(e_i, y|_{e_i}, x) + Σ_{i,k} g_k(v_i, y|_{v_i}, x) is the total feature count. T(x, y) is a global property of (x, y), and efficiently computing the right-hand side of the equation is a problem.

36 Algorithm S Define the slack feature

s(x, y) = S − Σ_i Σ_k f_k(e_i, y|_{e_i}, x) − Σ_i Σ_k g_k(v_i, y|_{v_i}, x)

where S is a constant large enough that s(x, y) ≥ 0 for all (x, y). For each index i = 0, …, n+1, define the forward vectors α_i(x), with base case α_0(y | x) = 1 if y = start and 0 otherwise, and recurrence α_i(x) = α_{i-1}(x) M_i(x); and the backward vectors β_i(x), with β_{n+1}(y | x) = 1 if y = stop and 0 otherwise, and β_i(x) = M_{i+1}(x) β_{i+1}(x).
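The recurrences in code, reusing the same stand-in matrices as the slide-33 sketch; the check at the end confirms that α_i · β_i recovers Z(x) at every position:

```python
import numpy as np

# Same construction as the slide-33 sketch: K real labels, start index K,
# stop index K+1, arbitrary positive matrices Ms standing in for the M_i.
K, n = 2, 3
rng = np.random.default_rng(0)
Ms = []
for _ in range(n + 1):
    M = np.zeros((K + 2, K + 2))
    M[:K + 1, :K] = rng.uniform(0.1, 1.0, size=(K + 1, K))
    M[:K + 1, K + 1] = 1.0
    Ms.append(M)

alphas = [np.eye(K + 2)[K]]          # alpha_0 = indicator of start
for M in Ms:
    alphas.append(alphas[-1] @ M)    # alpha_i = alpha_{i-1} M_i

betas = [np.eye(K + 2)[K + 1]]       # beta_{n+1} = indicator of stop
for M in reversed(Ms):
    betas.append(M @ betas[-1])      # beta_i = M_{i+1} beta_{i+1}
betas.reverse()

# alpha_i . beta_i equals Z(x) at every position i:
print([round(float(a @ b), 6) for a, b in zip(alphas, betas)])
```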

37 Algorithm S With the slack feature, T(x, y) is the constant S, and the updates have a closed form:

δλ_k = (1/S) log( Ẽ[f_k] / E[f_k] ),  δμ_k = (1/S) log( Ẽ[g_k] / E[g_k] )

where the model expectations are computed from the forward and backward vectors, e.g.

E[f_k] = Σ_x p̃(x) Σ_{i=1}^{n+1} Σ_{y′,y} f_k(e_i, (y′, y), x) α_{i-1}(y′ | x) M_i(y′, y | x) β_i(y | x) / Z(x)

38 The rate of convergence is governed by the step size, which is inversely proportional to the constant S; but S is generally quite large, resulting in slow convergence.

39 Algorithm T Keeps track of partial T totals: it accumulates feature expectations into counters indexed by T(x). Uses forward-backward recurrences to compute the expectations a_{k,t} of feature f_k and b_{k,t} of feature g_k given that T(x) = t.

40 Experiments Modeling the label bias problem –2000 training and 500 test samples generated by an HMM –CRF error: 4.6% –MEMM error: 42% The CRF solves the label bias problem.

41 Experiments Modeling mixed-order sources –CRF converges in 500 iterations –MEMM converges in 100 iterations

42 MEMM vs. HMM The HMM outperforms the MEMM

43 CRF vs. MEMM CRF usually outperforms the MEMM

44 CRF vs. HMM Each open square represents a data set with α < ½, and a solid square indicates a data set with α ≥ ½. When the data is mostly second order (α ≥ ½), the discriminatively trained CRF usually outperforms the HMM.

45 POS Tagging Experiments First-order HMM, MEMM, and CRF models. Data set: Penn Treebank, with a 50%-50% train-test split. Uses the MEMM parameter vector as a starting point for training the corresponding CRF to accelerate convergence.

46 Interactive IE using CRF An interactive parser updates IE results according to the user's changes. Color coding is used to alert the user to ambiguity in the IE results.

47 Some Available IE Tools MALLET (UMass) –statistical natural language processing –document classification –clustering –information extraction –other machine learning applications to text Sample application: GeneTaggerCRF, a gene-entity tagger based on MALLET (MAchine Learning for LanguagE Toolkit). It uses conditional random fields to find genes in a text file.

48 MinorThird http://minorthird.sourceforge.net/ "a collection of Java classes for storing text, annotating text, and learning to extract entities and categorize text" Stored documents can be annotated in independent files using TextLabels (denoting, say, part-of-speech and semantic information).

49 GATE http://gate.ac.uk/ie/annie.html A leading toolkit for text mining, distributed with an information extraction component set called ANNIE (demo available on the site). Used in many research projects –a long list can be found on its website –being integrated with IBM UIMA

50 Sunita Sarawagi's CRF package http://crf.sourceforge.net/ A Java implementation of conditional random fields for sequential labeling.

51 UIMA (IBM) Unstructured Information Management Architecture –A platform for unstructured information management solutions built from combinations of semantic analysis (IE) and search components.

52 Some Interesting Websites Based on IE ZoomInfo; CiteSeer.org (some of us use it every day!); Google Local; Google Scholar; and many more…

