Presentation on theme: "Creating and Exploiting a Web of (Semantic) Data, Tim Finin Zareen Syed and Anupam Joshi University of Maryland, Baltimore County James Mayfield, Paul."— Presentation transcript:

1 Creating and Exploiting a Web of (Semantic) Data, Tim Finin Zareen Syed and Anupam Joshi University of Maryland, Baltimore County James Mayfield, Paul McNamee and Christine Piatko JHU Human Language Technology Center of Excellence http://ebiquity.umbc.edu/get/a/resource/267.ppt

2 Overview Introduction Recent Semantic Web trends Leveraging Linked Data on the Web Applications to human language understanding Conclusion

3 The Age of Big Data Massive amounts of data are available today. Advances in human language processing have been driven by the availability of unstructured data, text and speech. Increasingly, large amounts of structured and semi-structured data are also online. Much of this is available in the Semantic Web language RDF, fostering integration. We can exploit this data to enhance language understanding systems.

4 Twenty years ago… Tim Berners-Lee’s 1989 WWW proposal described a web of relationships among named objects unifying many information management tasks. Capsule history: Guha’s MCF (~94), XML+MCF=>RDF (~96), RDF+OO=>RDFS (~99), RDFS+KR=>DAML+OIL (00), W3C’s SW activity (01), W3C’s OWL (03), SPARQL, RDFa (08) http://www.w3.org/History/1989/proposal.html

5 Ten years ago …. The W3C started developing standards for the Semantic Web The vision, technology and use cases are still evolving Moving from a web of documents to a web of data

6 Web of documents

7 Web of (Linked) Data

8 One month ago …. 4.5 billion facts in RDF in the Linked Data Collection

9 A Linked Data story Wikipedia as a source of knowledge – Wikis are a great way to collaborate on building up knowledge resources Wikipedia as an ontology – Every Wikipedia page is a concept or object Wikipedia as RDF data – Map this ontology into RDF DBpedia as the lynchpin for Linked Data – Exploit its breadth of coverage to integrate things

10 Populating Freebase KB

11 Underlying Powerset’s KB

12 Mined by TrueKnowledge

13 Wikipedia as an ontology Using Wikipedia as an ontology – each article (~3M) is an ontology concept or instance – terms linked via category system (~200k), infobox template use, inter-article links, infobox links – article history contains metadata for trust, provenance, etc. It’s a consensus ontology with broad coverage, created and maintained by a diverse community for free! Multilingual, very current, and overall content quality is high.

14 Wikipedia as an ontology But there are problems: uncategorized and miscategorized articles; many ‘administrative’ categories (articles needing revision) and useless ones (1949 births); multiple infobox templates for the same class; multiple infobox attribute names for the same property; no datatypes or domains for infobox attribute values; etc.

15 http://lookup.dbpedia.org/


22 4.5 billion triples for free The full public LOD dataset has about 4.5 billion triples as of March 2009. Linking assertions are spotty, but probably include on the order of 10M equivalences. Availability: – download the data in RDF – query it via public SPARQL servers – load it as an Amazon EC2 public dataset – launch it and the required software as an Amazon public AMI image
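As the slide notes, the public SPARQL servers let you query the data without downloading it. A minimal illustrative query (the endpoint URL and `dbr:` prefix are DBpedia conventions; the resource name is only an example):

```sparql
# Run against a public endpoint such as http://dbpedia.org/sparql
PREFIX dbr: <http://dbpedia.org/resource/>

SELECT ?property ?value
WHERE {
  dbr:Tim_Berners-Lee ?property ?value .
}
LIMIT 10
```

A query like this returns the RDF triples describing a resource, including `owl:sameAs` links that connect it to the same entity in other LOD datasets.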

23 Wikitology We’ve been exploring a different approach to derive an ontology from Wikipedia through a series of use cases: – Identifying user context in a collaboration system from documents viewed (2006) – Improve IR accuracy by adding Wikitology tags to documents (2007) – ACE: cross document co-reference resolution for named entities in text (2008) – TAC KBP: Knowledge Base population from text (2009) – Improve Web search engine by tagging documents and queries (2009)

24 Wikitology 2.0 (2008) Sources: WordNet, Yago, human input & editing, databases, Freebase KB. Representations: RDF, text, graphs.

25 ACE entity co-reference task In 2008 we participated in the NIST ACE task with the JHU Human Language Technology Center of Excellence Given 10K English and 10K Arabic documents, find all ‘named entities’ (people, organizations) Cluster into sets that refer to the same entity – “Dr. Rice” mentioned in doc 18397 is the same as “Secretary of State” in doc 46281 – Distinguish Michael Jordan of the Bulls from Michael Jordan of Berkeley

26 HLTCOE ACE approach BBN’s Serif system produces text annotated with named entities (people or organizations): Dr. Rice, Ms. Rice, the secretary, she, secretary Rice. Featurizers score pairs of entities for co-reference, e.g., (CNN-264772-E32, AFP-7373726-E19, 0.6543). A machine learning system combines the evidence, and a simple clustering algorithm identifies clusters. Pipeline: documents -> NLP -> featurizers -> ML -> clustering -> KB entities
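The final clustering stage can be sketched as follows. This is a minimal single-link illustration, not the HLTCOE system; the entity IDs and scores are illustrative, with only the first triple taken from the slide.

```python
# Single-link clustering of entity mentions from pairwise co-reference
# scores, using union-find. Pairs scoring at or above the threshold are
# merged into the same cluster.

def cluster_entities(pair_scores, threshold=0.5):
    """pair_scores: iterable of (entity_a, entity_b, score) triples."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b, score in pair_scores:
        find(a)
        find(b)  # register both entities even if they never merge
        if score >= threshold:
            union(a, b)

    clusters = {}
    for x in parent:
        clusters.setdefault(find(x), set()).add(x)
    return list(clusters.values())

pairs = [("CNN-264772-E32", "AFP-7373726-E19", 0.6543),
         ("AFP-7373726-E19", "NYT-111-E4", 0.71),
         ("CNN-264772-E32", "XIN-9-E2", 0.12)]
print(cluster_entities(pairs))
```

Single-link merging is transitive: the first two pairs chain three mentions into one cluster, while the low-scoring mention stays a singleton.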

27 Wikitology tagging Using BBN’s Serif system, we produced an entity document for each entity. Each included the entity’s name, nominal and pronominal mentions, APF type and subtype, and words in a window around the mentions. We tagged entity documents using Wikitology, producing vectors of (1) terms and (2) categories for the entity. We used the vectors to compute features measuring entity pair similarity/dissimilarity.
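The similarity features over these tag vectors can be sketched as cosine similarity between sparse tag-to-weight maps. The tags and weights below are illustrative (loosely modeled on the Hubbell example on the next slide), not output of the actual system.

```python
import math

# Cosine similarity between two sparse tag vectors represented as
# dicts mapping a Wikitology tag to its weight.

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = (math.sqrt(sum(w * w for w in u.values())) *
            math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

e1 = {"Webster_Hubbell": 1.0, "Whitewater_controversy": 0.22}
e2 = {"Webster_Hubbell": 0.9, "United_States_v._Hubbell": 0.38}
print(round(cosine(e1, e2), 3))
```

Two entities that share a dominant article tag score near 1.0, which is why the top-article-tag match also works as a cheap boolean feature.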

28 Wikitology Entity Document & Tags Wikitology entity document (name; type & subtype; mention heads; words surrounding mentions): ABC19980430.1830.0091.LDC2000T44-E2 Webb Hubbell PER Individual NAM: "Hubbell" "Hubbells" "Webb Hubbell" "Webb_Hubbell" PRO: "he" "him" "his" abc's accountant after again ago all alleges alone also and arranged attorney avoid been before being betray but came can cat charges cheating circle clearly close concluded conspiracy cooperate counsel counsel's department did disgrace do dog dollars earned eightynine enough evasion feel financial firm first four friend friends going got grand happening has he help him his hope house hubbell hubbells hundred hush income increase independent indict indicted indictment inner investigating jackie jackie_judd jail jordan judd jury justice kantor ken knew lady late law left lie little make many mickey mid money mr my nineteen nineties ninetyfour not nothing now office other others paying peter_jennings president's pressure pressured probe prosecutors questions reported reveal rock saddened said schemed seen seven since starr statement such tax taxes tell them they thousand time today ultimately vernon washington webb webb_hubbell were what's whether which white whitewater why wife years Wikitology article tag vector: Webster_Hubbell 1.000 Hubbell_Trading_Post_National_Historic_Site 0.379 United_States_v._Hubbell 0.377 Hubbell_Center 0.226 Whitewater_controversy 0.222 Wikitology category tag vector: Clinton_administration_controversies 0.204 American_political_scandals 0.204 Living_people 0.201 1949_births 0.167 People_from_Arkansas 0.167 Arkansas_politicians 0.167 American_tax_evaders 0.167 Arkansas_lawyers 0.167

29 Top ten features (by F1)
Prec. / Recall / F1 — Feature description
90.8% / 76.6% / 83.1% — some NAM mention has an exact match
92.9% / 71.6% / 80.9% — Dice score of NAM strings (based on the intersection of NAM strings, not words or n-grams of NAM strings)
95.1% / 65.0% / 77.2% — the/a longest NAM mention is an exact match
86.9% / 66.2% / 75.1% — similarity based on cosine similarity of Wikitology Article Medium article tag vector
86.1% / 65.4% / 74.3% — similarity based on cosine similarity of Wikitology Article Long article tag vector
64.8% / 82.9% / 72.8% — Dice score of character bigrams from the 'longest' NAM string
95.9% / 56.2% / 70.9% — all NAM mentions have an exact match in the other pair
85.3% / 52.5% / 65.0% — similarity based on a match of entities' top Wikitology article tag
85.3% / 52.3% / 64.8% — similarity based on a match of entities' top Wikitology article tag
85.7% / 32.9% / 47.5% — pair has a known alias
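Two of the string-match features above are Dice scores, one over whole NAM strings and one over character bigrams of the longest NAM string. A minimal sketch, with illustrative mention sets (not data from the evaluation):

```python
# Dice coefficient over two sets: 2|A ∩ B| / (|A| + |B|).

def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

def char_bigrams(s):
    return {s[i:i + 2] for i in range(len(s) - 1)}

e1_nams = {"Hubbell", "Webb Hubbell"}
e2_nams = {"Webb Hubbell", "Webster Hubbell"}

# Feature 2: Dice over whole NAM strings.
nam_dice = dice(e1_nams, e2_nams)

# Feature 6: Dice over character bigrams of the longest NAM string.
longest1 = max(e1_nams, key=len)
longest2 = max(e2_nams, key=len)
bigram_dice = dice(char_bigrams(longest1), char_bigrams(longest2))
print(nam_dice, bigram_dice)
```

The bigram variant is more recall-oriented, as in the table: near-matches like "Webb Hubbell" vs. "Webster Hubbell" share no whole string but many bigrams.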

30 Knowledge Base Population The 2009 NIST Text Analysis Conference (TAC) includes a Knowledge Base Population track Goal: discover information about named entities (people, organizations, places) and incorporate it into a KB TAC KBP has two related tasks: – Entity linking: doc. entity mention -> KB entity – Slot filling: given a document entity mention, find missing slot values in large corpus

31 Wikitology Planned Extensions Make greater use of data from Linked Open Data (LOD) resources: DBpedia, Geonames, Freebase. Replace ad hoc processing of RDF data in Lucene with a triple store. Add additional graphs (e.g., derived from infobox links) and develop algorithms to exploit them. Develop better hybrid query creation tools.

32 Wikitology 3.0 (2009) Components: articles (IR collection), relational database, triple store with RDF reasoner, page link graph, category links graph, infobox graph, and linked Semantic Web data & ontologies, all exposed through the Wikitology code to application-specific algorithms.

33 Wikipedia’s social network Wikipedia has an implicit ‘social network’ that can help disambiguate mentions of a person entity This provides evidence useful in disambiguating entity mentions and mapping them to known KB entities The same can be done for entities that are organizations or places

34 WSN Data We extracted 213K people from DBpedia’s Infobox dataset, ~30K of which participate in an infobox link to another person. We extracted 875K people from Freebase, 616K of which were linked to Wikipedia pages, 431K of which are in one of 4.8M person-person article links. Consider a document that mentions two people: George Bush and Mr. Quayle

35 Which Bush & which Quayle? Six George Bushes, nine male Quayles

36 A simple closeness metric Let N(i) = {two-hop neighbors of entity i} Assoc(i,j) = |intersection(N(i),N(j))| / |union(N(i),N(j))| Assoc(i,j) > 0 for six of the 56 possible pairs: 0.43 George_H._W._Bush -- Dan_Quayle 0.24 George_W._Bush -- Dan_Quayle 0.18 George_Bush_(biblical_scholar) -- Dan_Quayle 0.02 George_Bush_(biblical_scholar) -- James_C._Quayle 0.02 George_H._W._Bush -- Anthony_Quayle 0.01 George_H._W._Bush -- James_C._Quayle
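The metric above can be sketched directly: N(i) is the set of nodes within two hops of i in the article link graph, and Assoc(i,j) is the Jaccard overlap of those sets. The tiny adjacency dict below is illustrative, not the real Wikipedia link graph.

```python
# Jaccard similarity of two-hop neighborhoods in a link graph,
# represented as a dict mapping a node to its list of linked nodes.

def two_hop(graph, node):
    hop1 = set(graph.get(node, ()))
    hop2 = set()
    for n in hop1:
        hop2.update(graph.get(n, ()))
    return hop1 | hop2

def assoc(graph, i, j):
    ni, nj = two_hop(graph, i), two_hop(graph, j)
    union = ni | nj
    return len(ni & nj) / len(union) if union else 0.0

graph = {
    "George_H._W._Bush": ["Dan_Quayle", "Ronald_Reagan"],
    "Dan_Quayle": ["George_H._W._Bush", "Indiana"],
    "Anthony_Quayle": ["Lawrence_of_Arabia_(film)"],
}
print(assoc(graph, "George_H._W._Bush", "Dan_Quayle"))
print(assoc(graph, "George_H._W._Bush", "Anthony_Quayle"))
```

Entities with no shared neighborhood score 0, so a document mentioning both Bush and Quayle strongly favors the politically connected pair, matching the ranking on the slide.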

37 Application to TAC KBP Using entity network data extracted from DBpedia and Wikipedia provides evidence to support KBP tasks: – Mapping document mentions into infobox entities – Mapping potential slot fillers into infobox entities – Evaluating the coherence of entities as potential slot fillers

38 Conclusion Wikipedia is increasingly being used as a knowledge source of choice. Useful KBs can be extracted from it and related resources (e.g., DBpedia, Freebase). Linked Open Data significantly enriches the KBs. Hybrid systems like Wikitology combining IR, RDF, and custom graph algorithms are promising. Wikitology performed well in the 2008 ACE task; the 2009 TAC KBP task will be a good evaluation opportunity.

