
Querying Text Databases for Efficient Information Extraction Eugene Agichtein Luis Gravano Columbia University.




1 Querying Text Databases for Efficient Information Extraction Eugene Agichtein Luis Gravano Columbia University

2 Extracting Structured Information “Buried” in Text Documents

Sample text: Apple's programmers "think different" on a "campus" in Cupertino, Cal. Nike employees "just do it" at what the company refers to as its "World Campus," near Portland, Ore. Microsoft's central headquarters in Redmond is home to almost every product group and division. Brent Barlow, 27, a software analyst and beta-tester at Apple Computer’s headquarters in Cupertino, was fired Monday for "thinking a little too different."

Extracted relation:
Organization    | Location
Microsoft       | Redmond
Apple Computer  | Cupertino
Nike            | Portland

3 Information Extraction Applications
- Over a corporation’s customer report or email complaint database: enabling sophisticated querying and analysis
- Over biomedical literature: identifying drug/condition interactions
- Over newspaper archives: tracking disease outbreaks, terrorist attacks; intelligence
- Significant progress over the last decade [MUC]

4 Information Extraction Example: Organizations’ Headquarters
Pipeline: Input (Documents) → Named-Entity Tagging → Pattern Matching → Output (Tuples)

5 Goal: Extract All Tuples of a Relation from a Document Database
- One approach: feed every document to the information extraction system and collect the extracted tuples
- Problem: efficiency!

6 Information Extraction is Expensive
- Efficiency is a problem even after training the information extraction system
- Example: NYU’s Proteus extraction system takes around 9 seconds per document; over 15 days to process 135,000 news articles
- “Filtering” before further processing a document might help
- Can’t afford to “scan the web” to process each page!
- “Hidden-Web” databases don’t allow crawling
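The arithmetic behind that estimate is easy to check with the figures from the slide (variable names are mine):

```python
# Cost of scanning the whole archive with an extractor that takes
# ~9 seconds per document (the Proteus figure from the slide).
SECONDS_PER_DOC = 9
NUM_DOCS = 135_000

total_seconds = SECONDS_PER_DOC * NUM_DOCS
total_days = total_seconds / (60 * 60 * 24)
print(f"{total_days:.1f} days of pure extraction time")
# ~14 days of raw extraction, before any I/O or per-document overhead
```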

7 Information Extraction Without Processing All Documents
- Observation: often only a small fraction of the database is relevant for an extraction task
- Our approach: exploit the database’s search engine to retrieve and process only “promising” documents

8 Architecture of our QXtract System
User-provided seed tuples (e.g., Microsoft | Redmond, Apple | Cupertino) feed Query Generation; the resulting queries retrieve promising documents; Information Extraction over just those documents produces the extracted relation (e.g., Microsoft | Redmond, Apple | Cupertino, Exxon | Irving, IBM | Armonk, Intel | Santa Clara).
Key problem: learn queries to retrieve “promising” documents.

9 Generating Queries to Retrieve Promising Documents
1. Get a document sample with “likely negative” and “likely positive” examples (seed sampling from the user-provided seed tuples).
2. Label the sample documents using the information extraction system as “oracle.”
3. Train classifiers to “recognize” useful documents.
4. Generate queries from the classifier models/rules.
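The four steps above can be sketched end to end. This is a toy stand-in, not QXtract's code: the search engine, extraction "oracle", and word-scoring classifier below are trivial placeholders for the real components (Ripper, SVM, Okapi):

```python
# Toy sketch of QXtract's query-generation loop over a 4-document "database".
from collections import Counter

DOCS = [
    "Microsoft is based in Redmond near Seattle",
    "Apple is based in Cupertino",
    "The spokesperson reported strong earnings",
    "The team hit a homerun at the sponsored event",
]

def search(query):
    if query is None:                     # crude stand-in for a "random" query
        return list(DOCS)
    terms = query.lower().split(" and ")
    return [d for d in DOCS if all(t in d.lower() for t in terms)]

def extract(doc):
    # Oracle stand-in: a document is "useful" if the extraction
    # pattern "based in" matches it.
    return [("<org>", "<loc>")] if "based in" in doc else []

def generate_queries(seed_tuples, n_queries=3):
    # 1. Sample: seed-tuple queries -> likely positive; random -> likely negative.
    sample = [d for t in seed_tuples for d in search(" AND ".join(t))]
    sample += search(None)
    # 2. Label the sample with the extraction system as oracle.
    labeled = [(doc, bool(extract(doc))) for doc in sample]
    # 3. "Train" a classifier: score each word by how strongly it
    #    indicates a useful document.
    score = Counter()
    for doc, useful in labeled:
        for word in set(doc.lower().split()):
            score[word] += 1 if useful else -1
    # 4. Emit the highest-scoring words as queries.
    return [word for word, _ in score.most_common(n_queries)]

print(generate_queries([("Microsoft", "Redmond")]))
```

On this tiny sample the top-scoring words all come from the "based in" documents, which is the intended behavior: the learned queries retrieve documents that look like the useful ones.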

10 Getting a Training Document Sample
From the user-provided seed tuples, issue seed queries (e.g., "Microsoft AND Redmond", "Apple AND Cupertino") to get “likely positive” examples, plus “random” queries to get “likely negative” examples.
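A seed tuple becomes a conjunctive query by AND-ing its attribute values; a one-line sketch (the AND syntax is whatever the search engine accepts):

```python
# Build the slide's seed queries from seed tuples.
seed_tuples = [("Microsoft", "Redmond"), ("Apple", "Cupertino")]
seed_queries = [" AND ".join(t) for t in seed_tuples]
print(seed_queries)  # ['Microsoft AND Redmond', 'Apple AND Cupertino']
```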

11 Labeling the Training Document Sample
Use the information extraction system as “oracle”: sample documents that yield tuples (e.g., Microsoft | Redmond, Apple | Cupertino, IBM | Armonk) are labeled “true positive,” the rest “true negative.”

12 Training Classifiers to Recognize “Useful” Documents
Document features: words (e.g., “is”, “based”, “in”, “near”, “city”, “spokesperson”, “reported”, “news”, “earnings”, “release”, “products”, “made”, “used”, “exported”, “far”, “past”, “old”, “homerun”, “sponsored”, “event”).
Three classifiers are trained on the labeled (+/−) sample:
- Ripper (rule learner), e.g.: based AND near => Useful
- SVM (weighted terms), e.g.: based: 3, spokesperson: 2, …
- Okapi (IR; ranked term list), e.g.: based, near, spokesperson, earnings, …
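A rule like “based AND near => Useful” can be found by a toy version of a covering rule learner in the spirit of Ripper: try word conjunctions and keep one that fires only on useful documents while covering as many of them as possible. This is a heavily simplified sketch, not Ripper itself, and the sample rows just mirror the slide's illustration:

```python
# Toy covering-rule learner over bag-of-words documents.
from itertools import combinations

def learn_rule(labeled_docs):
    """labeled_docs: list of (set_of_words, is_useful) pairs."""
    vocab = sorted(set().union(*(words for words, _ in labeled_docs)))
    best_rule, best_cover = None, 0
    for pair in combinations(vocab, 2):
        covered = [useful for words, useful in labeled_docs
                   if set(pair) <= words]
        # Keep conjunctions that only ever fire on useful documents.
        if covered and all(covered) and len(covered) > best_cover:
            best_rule, best_cover = pair, len(covered)
    return best_rule

sample = [
    ({"is", "based", "in", "near", "city"}, True),
    ({"spokesperson", "reported", "news", "earnings", "release"}, True),
    ({"products", "made", "used", "exported", "far"}, False),
    ({"past", "old", "homerun", "sponsored", "event"}, False),
]
rule = learn_rule(sample)
# A word pair drawn from a useful document, never matching a useless one.
```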

13 Generating Queries from Classifiers
- Ripper: rules become conjunctive queries, e.g., based AND near
- SVM: top positively weighted terms become queries, e.g., based; spokesperson
- Okapi (IR): top-ranked terms become queries, e.g., based; spokesperson; earnings
- QCombined: merges the queries from all three classifiers
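Turning classifier output into queries is then mechanical; a sketch (the term weights below just mirror the slide's illustration):

```python
def rule_to_query(rule_terms):
    # Ripper rule ("based", "near") -> conjunctive query "based AND near"
    return " AND ".join(rule_terms)

def weights_to_queries(term_weights, k=2):
    # SVM/Okapi style: the k terms with the largest positive weight,
    # each used as a single-term query.
    ranked = sorted(term_weights.items(), key=lambda kv: kv[1], reverse=True)
    return [term for term, weight in ranked[:k] if weight > 0]

print(rule_to_query(("based", "near")))                              # based AND near
print(weights_to_queries({"based": 3, "spokesperson": 2, "far": -1}))
# ['based', 'spokesperson']
```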

14 Architecture of our QXtract System
The learned queries retrieve promising documents from the database; Information Extraction over just those documents produces the extracted relation (Microsoft | Redmond, Apple | Cupertino, Exxon | Irving, IBM | Armonk, Intel | Santa Clara).

15 Experimental Evaluation: Data
- Training set: 1996 New York Times archive of 137,000 newspaper articles; used to tune QXtract parameters
- Test set: 1995 New York Times archive of 135,000 newspaper articles

16 Final Configuration of QXtract, from Training

17 Experimental Evaluation: Information Extraction Systems and Associated Relations
- DIPRE [Brin 1998]: Headquarters(Organization, Location)
- Snowball [Agichtein and Gravano 2000]: Headquarters(Organization, Location)
- Proteus [Grishman et al. 2002]: DiseaseOutbreaks(DiseaseName, Location, Country, Date, …)

18 Experimental Evaluation: Seed Tuples

Headquarters:
Organization | Location
Microsoft    | Redmond
Exxon        | Irving
Boeing       | Seattle
IBM          | Armonk
Intel        | Santa Clara

DiseaseOutbreaks:
DiseaseName     | Location
Malaria         | Ethiopia
Typhus          | Bergen-Belsen
Flu             | The Midwest
Mad Cow Disease | The U.K.
Pneumonia       | The U.S.

19 Experimental Evaluation: Metrics
- Gold standard: relation R_all, obtained by running the information extraction system over every document in the database D_all
- Recall: % of R_all captured in the approximation extracted from the retrieved documents
- Precision: % of retrieved documents that are “useful” (i.e., produced tuples)
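The two metrics, in code, as a direct transcription of the definitions above (the tuples and document identifiers are illustrative):

```python
def recall(extracted, r_all):
    # % of the gold-standard relation captured by the approximation
    return 100 * len(set(extracted) & set(r_all)) / len(set(r_all))

def precision(retrieved, useful):
    # % of retrieved documents that produced at least one tuple
    return 100 * len(set(retrieved) & set(useful)) / len(set(retrieved))

r_all = {("Microsoft", "Redmond"), ("Apple", "Cupertino"),
         ("IBM", "Armonk"), ("Exxon", "Irving")}
extracted = {("Microsoft", "Redmond"), ("IBM", "Armonk"),
             ("Intel", "Santa Clara")}      # one spurious tuple: no recall credit
print(recall(extracted, r_all))             # 50.0

retrieved, useful = {"d1", "d2", "d3", "d4"}, {"d1", "d4", "d9"}
print(precision(retrieved, useful))         # 50.0
```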

20 Experimental Evaluation: Relation Statistics

Relation and Extraction System | |D_all|  | % Useful | |R_all|
Headquarters: Snowball         | 135,000 | 23       | 24,536
Headquarters: DIPRE            | 135,000 | 22       | 20,952
DiseaseOutbreaks: Proteus      | 135,000 | 4        | 8,859

21 Alternative Query Generation Strategies
- QXtract, with the final configuration from training
- Tuples: keep deriving queries from extracted tuples. Problem: “disconnected” databases
- Patterns: derive queries from the extraction patterns of the information extraction system (e.g., a pattern “<ORGANIZATION>, based in <LOCATION>” yields the query “based in”). Problems: pattern features are often not suitable for querying, or not visible from a “black-box” extraction system
- Manual: construct queries manually [MUC]; obtained for Proteus from its developers; not available for DIPRE and Snowball
- Plus a simple additional baseline: retrieve a random document sample of the appropriate size

22 Recall and Precision: Headquarters Relation; Snowball Extraction System
[Graphs of recall and precision for each query-generation strategy; not transcribed]

23 Recall and Precision: Headquarters Relation; DIPRE Extraction System
[Graphs of recall and precision for each query-generation strategy; not transcribed]

24 Extraction Efficiency and Recall: DiseaseOutbreaks Relation; Proteus Extraction System
60% of the relation extracted from just 10% of the documents of the 135,000 newspaper article database.

25 Snowball/Headquarters Queries

26 DIPRE/Headquarters Queries

27 Proteus/DiseaseOutbreaks Queries

28 Current Work: Characterizing Databases for an Extraction Task
Decision tree: Is the database sparse? If yes → Scan. If no → is it connected? If yes → Tuples; if no → QXtract.

29 Related Work
- Information Extraction: focus on quality of extracted relations [MUC]; most relevant sub-task: text filtering
  - Filters derived from extraction patterns, or consisting of words (manually created or from supervised learning)
  - Grishman et al.’s manual pattern-based filters for disease outbreaks
  - Related to the Manual and Patterns strategies in our experiments
  - Focus not on querying via a simple search interface
- Information Retrieval: focus on relevant documents for queries; in our scenario, relevance is determined by the “extraction task” and the associated information extraction system
- Automatic Query Generation: several efforts for different tasks:
  - Minority language corpora construction [Ghani et al. 2001]
  - Topic-specific document search (e.g., [Cohen & Singer 1996])

30 Contributions: An Unsupervised Query-Based Technique for Efficient Information Extraction
- Adapts to “arbitrary” underlying information extraction system and document database
- Can work over non-crawlable “Hidden-Web” databases
- Minimal user input required: a handful of example tuples
- Can trade off relation completeness and extraction efficiency
- Particularly interesting in conjunction with unsupervised/bootstrapping-based information extraction systems (e.g., DIPRE, Snowball)

31 Questions?

32 Overflow Slides

33 Related Work (II)
- Focused Crawling (e.g., [Chakrabarti et al. 2002]): uses link and page classification to crawl pages on a topic
- Hidden-Web Crawling [Raghavan & Garcia-Molina 2001]: retrieves pages from non-crawlable Hidden-Web databases
  - Needs a rich query interface, with distinguishable attributes
  - Related to the Tuples strategy, but “tuples” are derived from pull-down menus, etc., on search interfaces as found
  - Our goal: retrieve as few documents as possible from one database to extract the relation
- Question-Answering Systems

34 Related Work (III)
- [Mitchell, Riloff, et al. 1998] use “linguistic phrases” derived from information extraction patterns as features for text categorization; related to the Patterns strategy, but requires document parsing, so it can’t directly generate simple queries
- [Gaizauskas & Robertson 1997] use 9 manually generated keywords to search for documents relevant to a MUC extraction task

35 Recall and Precision: DiseaseOutbreaks Relation; Proteus Extraction System
[Graphs of recall and precision for each query-generation strategy; not transcribed]

36 Running Times

37 Extracting Relations from Text: Snowball [ACM DL’00]
- Exploits redundancy on the web to focus on “easy” instances
- Requires only minimal training (a handful of seed tuples)
Bootstrapping loop: Initial Seed Tuples → Find Occurrences of Seed Tuples → Tag Entities → Generate Extraction Patterns → Generate New Seed Tuples → Augment Table → (repeat)

