
1 Integration of Heterogeneous Databases without Common Domains Using Queries Based on Textual Similarity
William W. Cohen
Machine Learning Dept. and Language Technologies Inst., School of Computer Science, Carnegie Mellon University
Embodied Cognition and Knowledge

2 What was that paper, and who is this guy talking?
Representation languages: DBs, KR
Human languages: NLP, IR
Machine Learning
WHIRL: Word-Based Heterogeneous Information Representation Language

3 History
1982/1984: Ehud Shapiro's thesis:
– MIS: learning logic programs as debugging an empty Prolog program
– The thesis contained 17 figures and a 25-page appendix that were a full implementation of MIS in Prolog
– Incredibly elegant work
"Computer science has a great advantage over other experimental sciences: the world we investigate is, to a large extent, our own creation, and we are the ones to determine if it is simple or messy."

4 History
Grad school in AI at Rutgers
MTS at AT&T Bell Labs, in a group doing KR, DB, learning, information retrieval, …
My work: learning logical (description-logic-like, Prolog-like, rule-based) representations that model large noisy real-world datasets.

5 History
AT&T Bell Labs becomes AT&T Labs Research
The web takes off
– as predicted by Vinge and Gibson
IR folks start looking at retrieval and question-answering with the Web
Alon Halevy starts the Information Manifold project to integrate data on the web
– VLDB 2006 10-year Best Paper Award for the 1996 paper on IM
I started thinking about the same problem in a different way…

6 History: WHIRL motivation 1
As the world of computer science gets richer and more complex, computer science can no longer limit itself to studying "our own creation".
Tension exists between
– elegant theories of representation
– the not-so-elegant real world that is being represented

7 History: WHIRL motivation 1
The beauty of the real world is its complexity…

8 History: integration by mediation
Mediator translates between the knowledge in multiple separate KBs
Each KB is a separate "symbol system"
– no formal connection between them except via the mediator

9 WHIRL Motivation 2: Web KBs are embodied
WHIRL idea: exploit linguistic properties of the HTML "veneer" of web-accessible DBs, via TFIDF similarity

10 Link items as needed by Q
Query Q:
SELECT R.a, S.a, S.b, T.b FROM R, S, T WHERE R.a=S.a and S.b=T.b
Example links (columns R.a | S.a | S.b | T.b):
Strongest links, those agreeable to most users:
– William | Will | Cohen | Cohn
– Steve | Steven | Minton | Mitton
Weaker links, those agreeable to some users:
– William | David | Cohen | Cohn
…even weaker links…
(figure also labels: Anhai Doan, Dan Weld)

11 Link items as needed by Q – WHIRL approach
Query Q:
SELECT R.a, S.a, S.b, T.b FROM R, S, T WHERE R.a~S.a and S.b~T.b   (~ means TFIDF-similar)
Incrementally produce a ranked list of possible links, with "best matches" first. The user (or downstream process) decides how much of the list to generate and examine.
Ranked links (columns R.a | S.a | S.b | T.b):
– William | Will | Cohen | Cohn
– Steve | Steven | Minton | Mitton
– William | David | Cohen | Cohn
(figure also labels: Anhai Doan, Dan Weld)

12

13 WHIRL queries
Assume two relations:
review(movieTitle, reviewText): archive of reviews, e.g.
– The Hitchhiker's Guide to the Galaxy, 2005 | "This is a faithful re-creation of the original radio series – not surprisingly, as Adams wrote the screenplay …"
– Men in Black, 1997 | "Will Smith does an excellent job in this …"
– Space Balls, 1987 | "Only a die-hard Mel Brooks fan could claim to enjoy …"
– …
listing(theatre, movieTitle, showTimes, …): now showing, e.g.
– Star Wars Episode III | The Senator Theater | 1:00, 4:15, & 7:30pm
– Cinderella Man | The Rotunda Cinema | 1:00, 4:30, & 7:30pm
– …

14 WHIRL queries
"Find reviews of sci-fi comedies" [movie domain]:
FROM review as r SELECT * WHERE r.text~'sci fi comedy'
(like standard ranked retrieval of "sci-fi comedy")
"Where is [that sci-fi comedy] playing?":
FROM review as r, listing as s SELECT * WHERE r.title~s.title and r.text~'sci fi comedy'
(best answers: the titles are similar to each other – e.g., "Hitchhiker's Guide to the Galaxy" and "The Hitchhiker's Guide to the Galaxy, 2005" – and the review text is similar to "sci-fi comedy")
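
As a rough illustration of the ranked similarity join above, here is a minimal Python sketch that scores every review/listing title pair by cosine similarity over TFIDF-weighted bags of words and returns the pairs best-first. The helper names (tokenize, tfidf_vectors, similarity_join) and the scoring details are assumptions for illustration only; WHIRL's actual algorithm uses inverted indices and a best-first search rather than scoring all pairs.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a simplification of WHIRL-style term extraction.
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_vectors(docs):
    # docs: list of strings -> list of {term: tfidf weight} dicts (L2-normalized).
    tokenized = [tokenize(d) for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vec = {t: (1 + math.log(c)) * math.log(n / df[t]) for t, c in tf.items()}
        norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
        vectors.append({t: w / norm for t, w in vec.items()})
    return vectors

def cosine(u, v):
    return sum(w * v.get(t, 0.0) for t, w in u.items())

def similarity_join(left, right, k=5):
    # Score every pair and return the k best, mimicking the ranked output of a '~' join.
    vecs = tfidf_vectors(left + right)
    lvecs, rvecs = vecs[:len(left)], vecs[len(left):]
    scored = [(cosine(lv, rv), l, r)
              for lv, l in zip(lvecs, left)
              for rv, r in zip(rvecs, right)]
    return sorted(scored, reverse=True)[:k]

reviews = ["The Hitchhiker's Guide to the Galaxy, 2005",
           "Men in Black, 1997", "Space Balls, 1987"]
listings = ["Star Wars Episode III", "Cinderella Man",
            "Hitchhiker's Guide to the Galaxy"]
for score, r, s in similarity_join(reviews, listings):
    print(f"{score:.3f}  {r}  ~  {s}")
```

The matching Hitchhiker's Guide pair comes out on top because the two titles share many rare terms, which is exactly the behavior the '~' join is meant to produce.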

15 WHIRL queries
Similarity is based on TFIDF → rare words are most important. Search for high-ranking answers uses inverted indices…
(figure: the review titles – The Hitchhiker's Guide to the Galaxy, 2005; Men in Black, 1997; Space Balls, 1987; … – and the listing titles – Star Wars Episode III; Hitchhiker's Guide to the Galaxy; Cinderella Man; … – with an inverted index: the rare term "hitchhiker" has a short posting list (movie001, …), while the common term "the" points to movie001, movie003, movie007, movie008, movie013, movie018, movie023, movie0031, …; years are common in the review archive, so they have low weight)
– It is easy to find the (few) items that match on "important" terms
– Search for strong matches can prune "unimportant" terms
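
The pruning idea can be sketched as follows: build an inverted index over the listing titles, drop the lowest-IDF (longest-posting-list) query terms, and only consider documents that share a remaining term. The helper names (build_inverted_index, candidates, keep_fraction) are illustrative assumptions, and this is a rough stand-in for WHIRL's index-based search, not its actual algorithm.

```python
import re
from collections import defaultdict

def tokenize(text):
    # Same simple tokenizer as in the previous sketch.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_inverted_index(docs):
    # term -> set of document ids containing that term
    index = defaultdict(set)
    for doc_id, doc in enumerate(docs):
        for term in set(tokenize(doc)):
            index[term].add(doc_id)
    return index

def candidates(query, docs, index, keep_fraction=0.5):
    # Keep only the rarer half of the query terms (short posting lists ~ high IDF)
    # and union their posting lists; common terms such as "the" or a year are
    # pruned and never touch the index.
    terms = sorted((t for t in set(tokenize(query)) if t in index),
                   key=lambda t: len(index[t]))        # rarest first
    kept = terms[:max(1, int(len(terms) * keep_fraction))] if terms else []
    ids = set().union(*(index[t] for t in kept)) if kept else set()
    return [docs[i] for i in sorted(ids)]

listings = ["Star Wars Episode III", "Cinderella Man",
            "Hitchhiker's Guide to the Galaxy"]
index = build_inverted_index(listings)
print(candidates("The Hitchhiker's Guide to the Galaxy, 2005", listings, index))
```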

16 After WHIRL
Efficient text joins
On-the-fly, best-effort, imprecise integration
Interactions between information-extraction quality and the results of queries on extracted data
Keyword search on databases
Use of statistics on text corpora to build intelligent "embodied" systems
– Turney: solving SAT analogies with PMI over word pairs
– Mitchell & Just: predicting fMRI brain images resulting from reading a common noun ("hammer") from co-occurrence information between nouns and verbs

17 Recent work: non-textual similarity
(figure: strings such as "William W. Cohen, CMU", "Dr. W. W. Cohen", "Christos Faloutsos, CMU", "George W. Bush", and "George H. W. Bush" linked to the word nodes they contain – cohen, william, w, dr, cmu – forming a graph of names and terms)

18 Recent Work
Personalized PageRank, aka Random Walk with Restart:
– a similarity measure for nodes in a graph, analogous to TFIDF for text in a WHIRL database
– a natural extension of PageRank
– amenable to learning the parameters of the walk (gradient search, with various optimization metrics): Toutanova, Manning & Ng, ICML 2004; Nie et al., WWW 2005; Xi et al., SIGIR 2005
– various speedup techniques exist
– queries: given a type t* and a node x, find y such that T(y)=t* and y~x
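
A minimal sketch of Random Walk with Restart by power iteration, assuming a small hand-built adjacency-list graph; the function name, the toy graph, the restart value, and the convergence test are illustrative choices, not the tuned walk parameters discussed on the slide.

```python
def random_walk_with_restart(graph, start, restart=0.2, iters=100, tol=1e-9):
    """graph: {node: [neighbor, ...]}; returns {node: probability}, the stationary
    distribution of a walk that restarts at `start`, used as similarity to `start`."""
    nodes = list(graph)
    p = {n: 0.0 for n in nodes}
    p[start] = 1.0
    for _ in range(iters):
        spread = {n: 0.0 for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:                     # dangling node: send its mass home
                spread[start] += p[n]
                continue
            share = p[n] / len(out)
            for m in out:
                spread[m] += share
        nxt = {n: (1 - restart) * w for n, w in spread.items()}
        nxt[start] += restart               # restart probability toward the query node
        if sum(abs(nxt[n] - p[n]) for n in nodes) < tol:
            return nxt
        p = nxt
    return p

# toy graph: term and document nodes connected by containment edges (both directions)
graph = {
    "cohen": ["doc1", "doc2"], "william": ["doc1"], "faloutsos": ["doc3"],
    "doc1": ["cohen", "william"], "doc2": ["cohen"], "doc3": ["faloutsos"],
}
scores = random_walk_with_restart(graph, "cohen")
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```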

19 Learning to Search Email
[SIGIR 2006, CEAS 2006, WebKDD/SNA 2007]
Einat Minkov, CMU; Andrew Ng, Stanford
(figure: a CMU CALO email graph linking a proposal, the person "William", the address einat@cs.cmu.edu, the dates 6/17/07 and 6/18/07, and messages via edges such as "Sent To" and "Term In Subject")

20 Tasks that are like similarity queries
Person name disambiguation: [term "andy", file msgId] → "person"
Threading: [file msgId] → "file"
– What are the adjacent messages in this thread?
– A proxy for finding "more messages like this one"
Alias finding: what are the email-addresses of Jason? [term "Jason"] → "email-address"
Meeting attendees finder: which email-addresses (persons) should I notify about this meeting? [meeting mtgId] → "email-address"
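
Continuing the random-walk sketch given after slide 18 (and reusing its random_walk_with_restart function and toy graph), here is one way such typed queries could be answered: run the walk from the query nodes and keep only answers of the requested node type. The typed_query helper and the node_types labeling are made-up illustrations, not the actual system.

```python
def typed_query(graph, node_types, query_nodes, target_type, restart=0.2):
    # Average the RWR scores from each query node, then keep only nodes whose
    # type matches the requested answer type, ranked best-first.
    totals = {n: 0.0 for n in graph}
    for q in query_nodes:
        for n, s in random_walk_with_restart(graph, q, restart=restart).items():
            totals[n] += s / len(query_nodes)
    answers = [(s, n) for n, s in totals.items()
               if node_types.get(n) == target_type and n not in query_nodes]
    return sorted(answers, reverse=True)

node_types = {"cohen": "term", "william": "term", "faloutsos": "term",
              "doc1": "file", "doc2": "file", "doc3": "file"}
print(typed_query(graph, node_types, ["cohen"], "file"))   # [term "cohen"] -> "file"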

21 Results on one task: person name disambiguation on the Mgmt. game corpus
(figure: results chart)

22 Results on several tasks (MAP): name disambiguation, threading, alias finding
(figure: MAP results chart)

23 Set Expansion using the Web (Richard Wang, CMU)
Fetcher: download web pages from the Web
Extractor: learn wrappers from web pages
Ranker: rank entities extracted by wrappers
(example ranked list: 1. Canon, 2. Nikon, 3. Olympus, 4. Pentax, 5. Sony, 6. Kodak, 7. Minolta, 8. Panasonic, 9. Casio, 10. Leica, 11. Fuji, 12. Samsung, …)

24 The Extractor
Learn wrappers from web documents and seeds on the fly
– Utilize semi-structured documents
– Wrappers are defined at the character level
No tokenization required; thus language-independent
However, very specific; thus page-dependent
– Wrappers derived from document d are applied to d only
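
A minimal sketch of the character-level idea, under the assumption that a wrapper is simply the longest left and right character contexts shared by all seed occurrences on one page; the helper names (learn_wrapper, apply_wrapper) and the example page are made up, and the real SEAL extractor is more sophisticated, so treat this as an illustration only.

```python
import re

def longest_common_suffix(strings):
    s = strings[0]
    while s and not all(x.endswith(s) for x in strings):
        s = s[1:]
    return s

def longest_common_prefix(strings):
    s = strings[0]
    while s and not all(x.startswith(s) for x in strings):
        s = s[:-1]
    return s

def learn_wrapper(page, seeds):
    # A "wrapper" here is the longest left context (common suffix of the text
    # before each seed) and right context (common prefix of the text after it).
    lefts, rights = [], []
    for seed in seeds:
        i = page.find(seed)
        if i < 0:
            return None
        lefts.append(page[:i])
        rights.append(page[i + len(seed):])
    return longest_common_suffix(lefts), longest_common_prefix(rights)

def apply_wrapper(page, wrapper):
    left, right = wrapper
    # The lookahead keeps the right context available for the next match.
    return re.findall(re.escape(left) + r"(.+?)(?=" + re.escape(right) + r")", page)

page = ("<li>ford</li> <li>nissan</li> <li>toyota</li> "
        "<li>honda</li> <li>acura</li> <li>volvo</li>")
wrapper = learn_wrapper(page, ["ford", "toyota"])
print(wrapper)                       # ('<li>', '</li> <li>')
print(apply_wrapper(page, wrapper))  # ['ford', 'nissan', 'toyota', 'honda', 'acura']
```

Note that the last list item (volvo) is missed because its right context is not followed by another "<li>"; that is a limitation of this simplified sketch, not a claim about SEAL.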

25

26 Ranking Extractions
A graph consists of a fixed set of…
– node types: {seeds, document, wrapper, mention}
– labeled directed edges: {find, derive, extract}
Each edge asserts that a binary relation r holds; each edge has an inverse relation r⁻¹ (so the graph is cyclic).
(figure: the seeds "ford", "nissan", "toyota" find documents such as curryauto.com and northpointcars.com; the documents derive Wrapper #1–#4; the wrappers extract mentions such as "honda" 26.1%, "acura" 34.6%, "chevrolet" 22.5%, "bmw pittsburgh" 8.4%, "volvo chicago" 8.4%)
Minkov et al. Contextual Search and Name Disambiguation in Email using Graphs. SIGIR 2006
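
Here is a small sketch of how such a graph might be assembled (every edge paired with its inverse) and the extracted mentions ranked by the random walk from the sketch after slide 18, whose random_walk_with_restart function is reused; the node names come from the example figure, but the wiring and the single-seed walk are illustrative assumptions rather than SEAL's actual ranker.

```python
def add_edge(graph, src, dst):
    # Every relation r also gets its inverse r^-1, so the walk can go both ways.
    graph.setdefault(src, []).append(dst)
    graph.setdefault(dst, []).append(src)

def build_seal_graph(find, derive, extract):
    # find: seed -> documents, derive: document -> wrappers,
    # extract: wrapper -> mentions (all illustrative dictionaries).
    g = {}
    for relation in (find, derive, extract):
        for src, dsts in relation.items():
            for dst in dsts:
                add_edge(g, src, dst)
    return g

find = {"ford": ["curryauto.com", "northpointcars.com"],
        "nissan": ["curryauto.com"], "toyota": ["curryauto.com"]}
derive = {"curryauto.com": ["wrapper1", "wrapper2"],
          "northpointcars.com": ["wrapper3", "wrapper4"]}
extract = {"wrapper1": ["honda", "acura"], "wrapper2": ["chevrolet"],
           "wrapper3": ["bmw pittsburgh"], "wrapper4": ["volvo chicago"]}

seal_graph = build_seal_graph(find, derive, extract)
mentions = {m for ms in extract.values() for m in ms}

# Rank candidate mentions by their RWR score from one seed node.
scores = random_walk_with_restart(seal_graph, "ford")
print(sorted(((scores[m], m) for m in mentions), reverse=True))
```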

27 Evaluation Method
Mean Average Precision (MAP)
– commonly used for evaluating ranked lists in IR
– contains recall- and precision-oriented aspects
– sensitive to the entire ranking
– mean of the average precisions for each ranked list
For a ranked list L of extracted mentions:
AvgPrec(L) = (1 / #TrueEntities) × Σ_r Prec(r) × NewEntity(r)
where Prec(r) is the precision at rank r; NewEntity(r) = 1 iff (a) the extracted mention at rank r matches some true mention and (b) no extracted mention at a rank less than r is of the same entity as the one at r; and #TrueEntities is the total number of true entities in the dataset.
Evaluation: average over 36 datasets in three languages (Chinese, Japanese, English)
1. Average over several 2- or 3-seed queries for each dataset
2. MAP performance: high 80s to mid 90s
3. Google Sets: MAP in the 40s, English only
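
A small sketch of that average-precision computation, assuming each extracted mention has already been mapped to the true entity it names (or None if it matches no true mention); the function names are illustrative and this is a straightforward reading of the definition above, not SEAL's evaluation code.

```python
def average_precision(ranked_entities, num_true_entities):
    """ranked_entities[r-1] is the true entity named by the extracted mention at
    rank r, or None if that mention matches no true mention."""
    seen, correct_so_far, ap = set(), 0, 0.0
    for r, entity in enumerate(ranked_entities, start=1):
        if entity is not None:
            correct_so_far += 1                  # mention matches some true mention
        if entity is not None and entity not in seen:
            seen.add(entity)                     # first time this entity appears
            ap += correct_so_far / r             # Prec(r) contributes at this rank
    return ap / num_true_entities

def mean_average_precision(runs):
    # runs: list of (ranked_entities, num_true_entities), one per query or dataset
    return sum(average_precision(l, n) for l, n in runs) / len(runs)

# toy list: two of three true entities found, plus a duplicate and a spurious mention
print(average_precision(["canon", "canon", None, "nikon"], 3))  # (1/1 + 3/4)/3 ≈ 0.583
```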

28 Evaluation Datasets

29 The top three mentions are the seeds. Try it out at http://rcwang.com/seal

30 Relational Set Expansion (figure: seeds)

31 Future?
Representation languages: DBs, KR
Human languages: NLP, IR
Machine Learning

