
1 A Survey on Information Extraction from Documents Using Structures of Sentences. Chikayama-Taura Lab. M1 Mitsuharu Kurita

2 Introduction. Current search systems are based on two assumptions: 1. users send words, not sentences; 2. the aim is to find documents related to the query words. We unconsciously learn to choose query words that are likely to appear near the target information, but in some cases this clue does not work well.

3 Introduction. For more convenient access to information we need two kinds of analysis: analysis of the details of the question, to know what information is targeted, and analysis of the information in the retrieved documents, to find the requested information. This is Information Extraction.

4 Outline: Introduction; Overview of Information Extraction (IE); IE with pattern matching; IE with sentence structures (frequent substructures, shortest path between two words, applying the kernel method to structured data); Conclusion.

5 Information Extraction. What is Information Extraction? A task in natural language processing that addresses extracting information from texts, rather than retrieving documents. It originated with an international conference, the Message Understanding Conference (MUC), a competition in IE among research groups that set information extraction tasks every year between 1987 and 1997.

6 MUC Competition. An example MUC task: the MUC-3 terrorism domain. Input: news articles (some of which describe terrorism events). Output: the instances involved in each incident.

7 MUC Competition. Pattern matching or linguistic analysis? At that time (1987-1997) advanced natural language processing was difficult to use, so most competitors adopted pattern matching to find instances.

8 Outline: Introduction; Overview of Information Extraction (IE); IE with pattern matching; IE with sentence structures (frequent substructures, shortest path between two words, applying the kernel method to structured data); Conclusion.

9 Example of Pattern Matching. CIRCUS [Lehnert et al., 1992]. Each pattern consists of a trigger word and a linguistic pattern. Pattern: kidnap-passive; trigger: "kidnap"; linguistic pattern: passive-verb; variable: target. Example: "The mayor was kidnapped by terrorists." 1. "kidnap" activates the pattern; 2. "was kidnapped" is a passive verb phrase; 3. the subject "mayor" is the target.
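
As a rough illustration of how such a trigger/linguistic-pattern rule could be applied, here is a minimal Python sketch; the pattern dictionary and the crude regex-based passive-voice test are assumptions for illustration, not the original CIRCUS machinery.

```python
import re

# A CIRCUS-style pattern: a trigger word, a linguistic pattern to check,
# and the syntactic slot whose filler becomes the extracted variable.
# (Illustrative reconstruction, not the original CIRCUS rule format.)
KIDNAP_PASSIVE = {
    "trigger": "kidnap",
    "linguistic_pattern": "passive-verb",
    "variable": "target",
}

def apply_pattern(sentence, pattern):
    """Return the extracted slot filler if the pattern fires on the sentence."""
    # 1. The trigger word activates the pattern.
    if pattern["trigger"] not in sentence.lower():
        return None
    # 2. Check the linguistic pattern: a crude passive-voice test
    #    ("<subject> was/were kidnapped") standing in for a real parser.
    m = re.search(r"(\w+) (?:was|were) kidnapped", sentence.lower())
    if m is None:
        return None
    # 3. The grammatical subject fills the "target" variable.
    return {pattern["variable"]: m.group(1)}

print(apply_pattern("The mayor was kidnapped by terrorists.", KIDNAP_PASSIVE))
# {'target': 'mayor'}
```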

10 Problems of Pattern Matching. Creating patterns takes a huge amount of time, and in many cases they were handwritten. The patterns also depend heavily on the target domain, so it is difficult to adapt them to a new task. This motivates automatic construction of patterns.

11 The Earliest Automatic Pattern Generation. AutoSlog [Riloff et al., 1993] creates the patterns for CIRCUS automatically. Training data: articles in which the target word is tagged. It created 1237 patterns from 1500 tagged texts, of which only 450 were judged valid by a human. Example: from "The mayor was kidnapped by terrorists." with "mayor" tagged, it generates the pattern kidnap-passive (trigger: "kidnap"; linguistic pattern: passive-verb; variable: target).
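
Below is a hedged sketch of how a pattern might be generated automatically from one training sentence whose answer word is tagged, using spaCy as a stand-in syntactic analyzer (AutoSlog itself was built on CIRCUS's analysis); the single passive-subject heuristic and the model name are illustrative assumptions.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def generate_pattern(sentence, tagged_target):
    """Propose an extraction pattern from one training sentence
    whose answer (the tagged target word) is already known."""
    doc = nlp(sentence)
    for token in doc:
        if token.text.lower() != tagged_target.lower():
            continue
        head = token.head
        # Passive subject of a verb -> a "<verb>-passive" pattern
        # (assumes spaCy labels the passive subject as nsubjpass).
        if token.dep_ == "nsubjpass" and head.pos_ == "VERB":
            return {
                "pattern": f"{head.lemma_}-passive",
                "trigger": head.lemma_,
                "linguistic_pattern": "passive-verb",
                "variable": "target",
            }
    return None  # no heuristic matched; a human would have to write the rule

print(generate_pattern("The mayor was kidnapped by terrorists.", "mayor"))
```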

12 Recently it has become possible to use deeper linguistic analysis. Some studies address new IE tasks using these linguistic resources together with machine learning approaches.

13 Outline: Introduction; Overview of Information Extraction (IE); IE with pattern matching; IE with sentence structures (frequent substructures, shortest path between two words, applying the kernel method to structured data); Conclusion.

14 Sentence Structures. Dependency structure: describes modification relations between words; one sentence forms a tree structure. Predicate-argument structure: describes the semantic relations between predicates and their arguments; one sentence forms a graph structure.
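
As a small illustration of a dependency structure, the following sketch prints the head of each word using spaCy (an assumed tool; the survey does not tie the structures to any particular parser).

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed to be installed
doc = nlp("The mayor was kidnapped by terrorists.")

# Each token points to its head, so the sentence forms a tree
# rooted at the main verb.
for token in doc:
    print(f"{token.text:12} --{token.dep_:>10}--> {token.head.text}")
```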

15 Difficulties of Using Structured Data. Most machine learning algorithms treat data as feature vectors, but it is difficult to express structured data (e.g. trees, graphs) as vectors. Three ways to use sentence structures for IE: frequent substructures; shortest paths between two words; applying the kernel method to structured data.

16 Outline: Introduction; Overview of Information Extraction (IE); IE with pattern matching; IE with sentence structures (frequent substructures, shortest path between two words, applying the kernel method to structured data); Conclusion.

17 IE with Subgraphs of Sentence Structures. On-Demand Information Extraction (ODIE) [Sekine et al., 2006] creates extraction patterns on demand and extracts information with them. Pipeline: the query retrieves relevant articles from the article database; a dependency analyzer produces dependency trees; frequent subtree mining yields subtree patterns; the patterns are used to fill a table of information. A simplified sketch follows.
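
A full frequent-subtree miner is beyond a few lines, so the sketch below only counts single dependency edges (verb lemma plus relation) as degenerate "subtrees" over a toy set of retrieved sentences; spaCy, the toy articles, and the support threshold are assumptions, not part of ODIE.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed available

def frequent_patterns(sentences, min_support=2):
    """Count (verb lemma, dependency relation) pairs across the retrieved
    sentences; frequent pairs act as crude one-edge extraction patterns
    whose child word is the slot filler."""
    counts = Counter()
    for sent in sentences:
        for token in nlp(sent):
            if token.head.pos_ == "VERB" and token.dep_ in ("nsubj", "dobj"):
                counts[(token.head.lemma_, token.dep_)] += 1
    return [(p, c) for p, c in counts.items() if c >= min_support]

articles = [
    "Google acquired YouTube in 2006.",
    "Oracle acquired Sun Microsystems.",
]
for pattern, support in frequent_patterns(articles):
    print(support, pattern)  # e.g. ('acquire', 'nsubj') and ('acquire', 'dobj')
```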

18 Experimental Results. Generated patterns: patterns were found for the query "merger and acquisition" (M&A). Extracted information: a table of results for the query "acquire, acquisition, merger, buy, purchase" (shown as tables on the slide).

19 Experimental Results. Very quick construction of patterns: MUC participants were allowed one month, while ODIE takes only a few minutes to return a result. No training corpus is needed, since ODIE learns extraction patterns from the retrieved data. Information about recurring events, such as mergers and acquisitions or Nobel prize winners, is extracted well.

20 Outline: Introduction; Overview of Information Extraction (IE); IE with pattern matching; IE with sentence structures (frequent substructures, shortest path between two words, applying the kernel method to structured data); Conclusion.

21 IE with the Shortest Path between Words. Extraction of interacting protein pairs [Yakushiji et al., 2006]: extract interacting protein pairs from biomedical articles, focusing on the shortest path between the two protein names in the predicate-argument structure, and discriminate with a Support Vector Machine (SVM). Example sentence: "Entity1 is interacted with a hydrophilic loop region of Entity2." The path runs entity1 - interact - with - region - of - entity2.
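
The sketch below illustrates the shortest-path idea with networkx over a hand-written word graph for the example sentence; the edges are transcribed from the slide rather than produced by an actual deep parser.

```python
import networkx as nx

# Hand-written (undirected) predicate-argument edges for the sentence
# "Entity1 is interacted with a hydrophilic loop region of Entity2."
edges = [
    ("entity1", "interact"),
    ("interact", "with"),
    ("with", "region"),
    ("region", "of"),
    ("of", "entity2"),
    ("region", "loop"),
    ("loop", "hydrophilic"),
]

G = nx.Graph()
G.add_edges_from(edges)

# The path between the two protein names becomes the candidate pattern
# that is later fed to the SVM classifier.
print(nx.shortest_path(G, "entity1", "entity2"))
# ['entity1', 'interact', 'with', 'region', 'of', 'entity2']
```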

22 Pattern Generation. Variation of patterns: the extracted patterns alone are not enough, so each pattern is divided into parts (main, preposition, entity; e.g. main "X interact Y", preposition "with", entity "protein region of Y") and the parts are recombined into new patterns, as sketched below.
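
A minimal sketch of the recombination step, assuming the patterns have already been split into main, preposition, and entity fragments; the fragment lists are made-up examples, not the paper's data.

```python
from itertools import product

# Fragments taken from extracted patterns (illustrative values only).
mains = ["X interact Y", "X bind Y"]
preps = ["with", "to"]
entities = ["Y", "region of Y", "domain of Y"]

# Recombine the parts to cover variations the raw patterns missed.
candidates = list(product(mains, preps, entities))
for c in candidates[:5]:
    print(c)
print(len(candidates), "candidate patterns")  # 2 * 2 * 3 = 12
```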

23 Pattern Generation. Validation of patterns: some of the generated patterns are inappropriate, so each pattern is scored by its adequacy to the learning data and encoded in a feature vector (TP: true positive, FP: false positive).
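
A minimal sketch of scoring one pattern against the learning data; using precision TP/(TP+FP) as the adequacy score is an assumption standing in for the paper's actual scoring, and the protein pairs are made up.

```python
def score_pattern(pattern_matches, gold_pairs):
    """Score a pattern by its precision on the learning data:
    TP / (TP + FP), where a match is a true positive if the
    protein pair it extracts is annotated as interacting."""
    tp = sum(1 for pair in pattern_matches if pair in gold_pairs)
    fp = len(pattern_matches) - tp
    return tp / (tp + fp) if pattern_matches else 0.0

gold = {("p53", "MDM2"), ("RAD51", "BRCA2")}          # annotated interactions
matches = [("p53", "MDM2"), ("p53", "actin"), ("RAD51", "BRCA2")]
print(score_pattern(matches, gold))  # 2 / 3 ≈ 0.67
```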

24 Support Vector Machine (SVM). A two-class linear classifier that divides the data space with a hyperplane chosen by margin maximization.
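
A minimal scikit-learn sketch of a linear, margin-maximizing SVM on toy two-dimensional data (the data are made up for illustration).

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D, two-class data.
X = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0],
              [3.0, 3.0], [4.0, 3.5], [3.5, 4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM finds the separating hyperplane with the largest margin.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print(clf.coef_, clf.intercept_)              # the hyperplane w.x + b = 0
print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))  # [0 1]
```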

25 Experimental Results. Learning: the AImed corpus, 225 abstracts of biomedical papers annotated with protein names and interactions. Extraction: MEDLINE, 14 million titles and 8 million abstracts. Extracted data: 7775 protein pairs, with 64.0% precision and 83.8% recall.

26 Outline: Introduction; Overview of Information Extraction (IE); IE with pattern matching; IE with sentence structures (frequent substructures, shortest path between two words, applying the kernel method to structured data); Conclusion.

27 IE with the Kernel Method on Sentence Structures. In kernel methods (e.g. SVM), the data are used only in the form of dot products. If you can calculate the dot product directly, you do not have to construct the feature vectors, and you can even use other similarity functions as long as they satisfy certain conditions: the kernel function maps the raw data into a vector space implicitly, and the classifier works in that space.
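
The sketch below makes this concrete with scikit-learn: SVC accepts a precomputed Gram matrix, so only pairwise kernel values are ever needed and the explicit feature vectors are never built. The toy kernel and data are assumptions standing in for a kernel on structured data.

```python
import numpy as np
from sklearn.svm import SVC

# Pretend these similarities came from a kernel on structured data
# (e.g. a tree kernel); the explicit feature vectors are never materialized.
def toy_kernel(a, b):
    return float(np.dot(a, b)) ** 2  # a valid polynomial kernel

data = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])

# Gram matrix of pairwise kernel values between all training items.
gram = np.array([[toy_kernel(a, b) for b in data] for a in data])

clf = SVC(kernel="precomputed")
clf.fit(gram, labels)

# At prediction time we again need only kernel values against the training set.
test = np.array([[0.8, 0.2]])
gram_test = np.array([[toy_kernel(t, x) for x in data] for t in test])
print(clf.predict(gram_test))  # [0]
```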

28 Relation Extraction. Relation extraction with a tree kernel [Culotta et al., 2004]: classify the relation between two entities, given 5 entity types (person, organization, geo-political entity, location, facility) and 5 major relation types (at, near, part, role, social), by classifying the smallest subtree of the dependency tree that includes both entities.

29 Tree Kernel. Represents the similarity between two tree-shaped data items, calculated as the sum of node similarities. The computation maintains a queue of node pairs: enqueue the root node pair; while the queue is not empty, dequeue a node pair, add its similarity, find all child node sequence pairs whose main features are common, and enqueue those child node pairs; when the queue is empty, return the accumulated similarity.

30 Calculation of the Tree Kernel. Each node carries a set of features, some of which are designated main features. The similarity between two nodes is defined as the number of features they have in common, excluding the main features.

31 Calculation of the Tree Kernel. Worked example (figure on the slide) comparing two trees with nodes A-E and A'-F', where X and X' denote node pairs whose main features are common; the kernel accumulates similarity over these matching pairs.
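
A simplified recursive sketch of such a tree kernel: it only recurses into child pairs whose main features match, rather than summing over all common child subsequences as the full kernel of Culotta et al. does, and the dictionary-based node representation is an assumption.

```python
def node_sim(a, b):
    """Similarity of two nodes = number of shared non-main features,
    counted only if their main features (e.g. word and POS) agree."""
    if a["main"] != b["main"]:
        return 0
    return len(set(a["features"]) & set(b["features"]))

def tree_kernel(a, b):
    """Simplified tree kernel: node similarity plus the kernel of every
    child pair whose main features match (the full kernel instead sums
    over common child *subsequences*)."""
    sim = node_sim(a, b)
    if sim == 0:
        return 0
    for ca in a["children"]:
        for cb in b["children"]:
            if ca["main"] == cb["main"]:
                sim += tree_kernel(ca, cb)
    return sim

t1 = {"main": ("kidnap", "VERB"), "features": ["passive", "past"],
      "children": [{"main": ("mayor", "NOUN"), "features": ["person", "subj"],
                    "children": []}]}
t2 = {"main": ("kidnap", "VERB"), "features": ["passive", "present"],
      "children": [{"main": ("mayor", "NOUN"), "features": ["person", "obj"],
                    "children": []}]}
print(tree_kernel(t1, t2))  # 1 (root) + 1 (matching child) = 2
```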

32 Experimental Results. Data set: the ACE corpus, 800 annotated documents gathered from newspapers and broadcasts, with 5 entity types (person, organization, geo-political entity, location, facility) and 5 major relation types (at, near, part, role, social). Results: bag-of-words kernel, 47.0% precision and 10.0% recall; tree kernel, 69.6% precision and 25.3% recall.

33 Outline: Introduction; Overview of Information Extraction (IE); IE with pattern matching; IE with sentence structures (frequent substructures, shortest path between two words, applying the kernel method to structured data); Conclusion.

34 Conclusion. Overview of information extraction: the aim of information extraction and the recent movement toward using deep linguistic resources. Ways to use sentence structures for IE: the difficulties of handling structured data in machine learning, and three different approaches to exploiting such structures.

