
Natural Language Questions for the Web of Data Mohamed Yahya 1, Klaus Berberich 1, Shady Elbassuoni 2 Maya Ramanath 3, Volker Tresp 4, Gerhard Weikum 1.


1 Natural Language Questions for the Web of Data
Mohamed Yahya (1), Klaus Berberich (1), Shady Elbassuoni (2), Maya Ramanath (3), Volker Tresp (4), Gerhard Weikum (1)
(1) Max Planck Institute for Informatics, Germany; (2) Qatar Computing Research Institute; (3) Dept. of CSE, IIT-Delhi, India; (4) Siemens AG, Corporate Technology, Munich, Germany
EMNLP 2012

2 Example question: “Which actress from Casablanca is married to a writer from Rome?” (intended meaning: which female actor played in Casablanca and is married to a writer who was born in Rome?)
Translation to SPARQL:
– ?x hasGender female
– ?x isa actor
– ?x actedIn Casablanca_(film)
– ?x marriedTo ?w
– ?w isa writer
– ?w bornIn Rome
Characteristics of the SPARQL query:
– Complex query, but good results
– Difficult for the user to write by hand
Goal of the authors: automatically create such structured queries by mapping the user’s question into this representation.
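The triple patterns above can be assembled into a SPARQL string mechanically. The sketch below is illustrative only; the helper to_sparql and the exact serialization are assumptions, not the paper's code:

```python
# Triple patterns from the slide; to_sparql is a hypothetical helper,
# not part of DEANNA itself.
triples = [
    ("?x", "hasGender", "female"),
    ("?x", "isa", "actor"),
    ("?x", "actedIn", "Casablanca_(film)"),
    ("?x", "marriedTo", "?w"),
    ("?w", "isa", "writer"),
    ("?w", "bornIn", "Rome"),
]

def to_sparql(triples, select_var="?x"):
    """Serialize triple patterns as a SPARQL 1.0 SELECT query string."""
    body = " . ".join(f"{s} {p} {o}" for s, p, o in triples)
    return f"SELECT {select_var} WHERE {{ {body} }}"

query = to_sparql(triples)
```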

3 Translate q_NL to q_FL: q_NL → q_FL
– q_NL: natural language question
– q_FL: formal language query
– KB: knowledge base
The target language for q_FL is SPARQL 1.0.

4 Yago2 YAGO2 is a huge semantic knowledge base, derived from Wikipedia, WordNet, and GeoNames.

5 Sample facts from Yago2
Examples of relations: type, subclassOf, and actedIn.
Examples of classes: person and film.
Examples of entities:
– Entities are represented in canonical form, such as ‘Ingrid_Bergman’ and ‘Casablanca_(film)’.
– Special types of entities: strings, numbers, and dates.

6 DEANNA DEANNA (DEep Answers for maNy Naturally Asked questions)

7 A question sentence is q_NL = (t_0, t_1, ..., t_n). A phrase is a contiguous subsequence (t_i, t_{i+1}, ..., t_{i+l}) ⊆ q_NL, with 0 ≤ i and i + l ≤ n.
Phrases of interest denote entities, classes, and relations
– e.g., “Which actress from Casablanca is married to a writer from Rome?”
entities: Casablanca, …
classes: actresses, …
relations: marriedTo, …
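The phrase definition above can be sketched as enumerating contiguous token spans; the max_len cutoff is an assumption for illustration, since the slide does not specify one:

```python
def candidate_phrases(tokens, max_len=3):
    """Enumerate contiguous spans (t_i, ..., t_{i+l-1}) up to max_len tokens."""
    spans = []
    for i in range(len(tokens)):
        for l in range(1, max_len + 1):
            if i + l <= len(tokens):
                spans.append(tuple(tokens[i:i + l]))
    return spans

spans = candidate_phrases(["Which", "actress", "from", "Casablanca"])
```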

8 Phrase detection Phrases that potentially correspond to semantic items are detected, such as ‘Who’, ‘played in’, ‘movie’, and ‘Casablanca’.

9 Phrase detection
A detected phrase p is a pair ⟨Toks, l⟩
– Toks: the phrase’s token sequence
– l: a label (l ∈ {concept, relation})
P_r: the set of all detected relation phrases.
P_c: the set of all detected concept phrases.
Null phrase:
– The null phrase is a special type of detected relation phrase
– e.g., for adjectives, as in ‘Australian movie’

10 Concept detection works against a phrase-concept dictionary
– Phrase-concept dictionary: instances of the means relation in Yago2
– e.g.,
{‘Rome’, ‘eternal city’} → Rome
{‘Casablanca’} → Casablanca_(film)
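Concept detection can be sketched as a lookup in such a dictionary. The entries below are toy examples in the spirit of the means relation, not taken from the real resource:

```python
# Toy phrase-concept dictionary (illustrative entries only, hand-written;
# the real dictionary is built from Yago2's "means" relation).
means = {
    "rome": {"Rome"},
    "eternal city": {"Rome"},
    "casablanca": {"Casablanca_(film)", "Casablanca,_Morocco"},
}

def concept_candidates(phrase):
    """Return the set of candidate concepts for a phrase (empty if unknown)."""
    return means.get(phrase.lower(), set())
```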

11 Relation detection relies on a relation detector based on ReVerb (Fader et al., 2011) extended with additional POS-tag patterns, together with the authors’ own detector, which looks for patterns in dependency parses.

12 Phrase Mapping

13 Each phrase is mapped to a set of semantic items.
Mapping concept phrases:
– also relies on the phrase-concept dictionary.
Mapping relation phrases:
– relies on a corpus of textual-pattern-to-relation mappings of the form
{‘play’, ‘star in’, ‘act’, ‘leading role’} → actedIn
{‘married’, ‘spouse’, ‘wife’} → marriedTo
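Relation-phrase mapping can be sketched the same way, over the pattern corpus shown on the slide; the lowercase normalization is an assumption:

```python
# Pattern-to-relation corpus mirroring the slide's examples; DEANNA mines
# these mappings from text, this dict is hand-written for illustration.
relation_patterns = {
    "play": {"actedIn"},
    "star in": {"actedIn"},
    "act": {"actedIn"},
    "leading role": {"actedIn"},
    "married": {"marriedTo"},
    "spouse": {"marriedTo"},
    "wife": {"marriedTo"},
}

def relation_candidates(phrase):
    """Candidate semantic relations for a (normalized) relation phrase."""
    return relation_patterns.get(phrase.lower(), set())
```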

14 Example of phrase-mapping ambiguity: ‘played in’ can refer either to the semantic relation actedIn or to playedForTeam, and ‘Casablanca’ can refer to Casablanca_(film) or to Casablanca,_Morocco.

15 Dependency Parsing & Q-Unit Generation

16 Dependency parsing
Dependency parsing identifies triples of tokens, or triploids, ⟨t_rel, t_arg1, t_arg2⟩
– where t_rel, t_arg1, t_arg2 ∈ q_NL
– t_rel: the seed for the relation phrase
– t_arg1, t_arg2: seeds for the concept phrases
– There is no attempt to assign subject/object roles to the arguments.

17 Q-Unit Generation
By combining triploids with detected phrases, we obtain q-units.
A q-unit is a triple of sets of phrases ⟨{p_rel}, {p_arg1}, {p_arg2}⟩
– with t_rel ∈ p_rel, t_arg1 ∈ p_arg1, and t_arg2 ∈ p_arg2.
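The combination step can be sketched as grouping, for each triploid, the detected phrases that contain the corresponding seed token (the simple containment test is a simplification of the actual system):

```python
def build_qunits(triploids, phrases):
    """For each triploid (t_rel, t_arg1, t_arg2), collect the detected
    phrases (token tuples) containing each seed token."""
    units = []
    for t_rel, t_arg1, t_arg2 in triploids:
        units.append((
            [p for p in phrases if t_rel in p],   # candidate relation phrases
            [p for p in phrases if t_arg1 in p],  # candidate arg1 phrases
            [p for p in phrases if t_arg2 in p],  # candidate arg2 phrases
        ))
    return units

phrases = [("married",), ("is", "married", "to"), ("actress",), ("writer",)]
units = build_qunits([("married", "actress", "writer")], phrases)
```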

18 Joint Disambiguation

19

20 Goal of the disambiguation step
– Each phrase is assigned to at most one semantic item
– Phrase-boundary ambiguity is resolved (only non-overlapping phrases are mapped)
– All phrases are disambiguated jointly, in one big disambiguation task

21 Resulting subgraph of the disambiguation graph in Figure 3

22 Disambiguation Graph Joint disambiguation takes place over a disambiguation graph DG = (V, E), – V = V s ∪ V p ∪ V q – E = E sim ∪ E coh ∪ E q

23 Types of vertices: V = V_s ∪ V_p ∪ V_q
– V_s: the set of semantic items; v_s ∈ V_s is called an s-node
– V_p: the set of phrases; v_p ∈ V_p is called a p-node
V_rp: relation phrases
V_rc: concept phrases
– V_q: a set of placeholder nodes for q-units

24 Types of edges
E_sim ⊆ V_p × V_s
– a set of weighted similarity edges
E_coh ⊆ V_s × V_s
– a set of weighted coherence edges
E_q ⊆ V_q × V_p × {rel, arg1, arg2}
– called q-edges

25 Coh_sem (semantic coherence)
Define the semantic coherence Coh_sem between two semantic items s1 and s2 as the Jaccard coefficient of their sets of inlinks.
For an entity e:
– InLinks(e): the set of Yago2 entities whose corresponding Wikipedia pages link to the entity.
For a class c with entities e:
– InLinks(c) = ∪_{e ∈ c} InLinks(e)
For a relation r:
– InLinks(r) = ∪_{(e1, e2) ∈ r} (InLinks(e1) ∩ InLinks(e2))
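The Jaccard coherence and the three inlink definitions can be sketched directly; the set contents in the test data are made up for illustration:

```python
def coh_sem(inlinks_a, inlinks_b):
    """Jaccard coefficient of two inlink sets (0.0 when both are empty)."""
    union = inlinks_a | inlinks_b
    return len(inlinks_a & inlinks_b) / len(union) if union else 0.0

def class_inlinks(entity_inlinks, members):
    """InLinks(c) = union of InLinks(e) over the class's member entities."""
    out = set()
    for e in members:
        out |= entity_inlinks.get(e, set())
    return out

def relation_inlinks(entity_inlinks, pairs):
    """InLinks(r) = union over (e1, e2) in r of InLinks(e1) ∩ InLinks(e2)."""
    out = set()
    for e1, e2 in pairs:
        out |= entity_inlinks.get(e1, set()) & entity_inlinks.get(e2, set())
    return out
```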

26 Similarity weights
For entities: how often a phrase refers to a certain entity in Wikipedia.
For classes: reflects the number of members in the class.
For relations: reflects the maximum n-gram similarity between the phrase and any of the relation’s surface forms.
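One way to realize the relation similarity is maximum Jaccard overlap of character n-grams over the relation's surface forms; the choice of character trigrams is an assumption here, since the slide does not fix the exact n-gram measure:

```python
def ngrams(s, n=3):
    """Character n-grams of a lowercased string (trigrams assumed here)."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def relation_similarity(phrase, surface_forms, n=3):
    """Maximum n-gram Jaccard similarity between the phrase and any
    surface form of the relation (an illustrative choice of measure)."""
    best = 0.0
    for form in surface_forms:
        a, b = ngrams(phrase, n), ngrams(form, n)
        if a | b:
            best = max(best, len(a & b) / len(a | b))
    return best
```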

27 Disambiguation Graph Processing The result of disambiguation is a subgraph of the disambiguation graph, yielding the most coherent mappings. An ILP (integer linear program) is employed to this end.
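The objective can be illustrated with a brute-force stand-in for the ILP: choose one semantic item per phrase so that summed similarity plus pairwise coherence is maximized. The real system solves this with an ILP (the slides mention Gurobi); exhaustive search is only feasible for tiny toy inputs like the one below, whose weights are invented for illustration:

```python
from itertools import product

def best_assignment(candidates, sim, coh):
    """Brute-force stand-in for DEANNA's ILP: pick one semantic item per
    phrase, maximizing summed similarity plus pairwise coherence."""
    phrases = list(candidates)
    best, best_score = None, float("-inf")
    for combo in product(*(candidates[p] for p in phrases)):
        score = sum(sim[(p, s)] for p, s in zip(phrases, combo))
        score += sum(coh.get(frozenset((a, b)), 0.0)
                     for i, a in enumerate(combo) for b in combo[i + 1:])
        if score > best_score:
            best, best_score = dict(zip(phrases, combo)), score
    return best

candidates = {
    "played in": ["actedIn", "playedForTeam"],
    "Casablanca": ["Casablanca_(film)", "Casablanca,_Morocco"],
}
sim = {("played in", "actedIn"): 0.5, ("played in", "playedForTeam"): 0.5,
       ("Casablanca", "Casablanca_(film)"): 0.5,
       ("Casablanca", "Casablanca,_Morocco"): 0.5}
coh = {frozenset(("actedIn", "Casablanca_(film)")): 0.9}  # invented weight
choice = best_assignment(candidates, sim, coh)
```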

28 Definitions (part1)

29 Definitions (part2)

30 Objective function

31 Constraints (1–3)

32 Constraints (4–7)

33 Constraints (8)

34 Constraints (9)

35 Resulting subgraph of the disambiguation graph in Figure 3

36 Query Generation
Subject/object roles were not assigned in triploids and q-units; query generation resolves them and introduces variables.
Example:
– “Which singer is married to a singer?”
?x type singer, ?x marriedTo ?y, and ?y type singer
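The variable-introduction step can be sketched as follows; the 'class:' prefix for marking class items and the variable naming are assumptions made for this illustration, not DEANNA's actual representation:

```python
from itertools import count

def generate_patterns(qunit_items):
    """qunit_items: list of (arg1, relation, arg2) semantic items, with
    class items prefixed 'class:' (an assumed encoding).  Each class
    occurrence gets its own fresh variable, so the repeated class in
    "Which singer is married to a singer?" yields two distinct variables."""
    fresh = (f"?x{i}" for i in count())
    patterns = []

    def term(item):
        if item.startswith("class:"):
            var = next(fresh)
            patterns.append((var, "type", item[len("class:"):]))
            return var
        return item  # entities stay as constants

    for a1, rel, a2 in qunit_items:
        s, o = term(a1), term(a2)
        patterns.append((s, rel, o))
    return patterns

patterns = generate_patterns([("class:singer", "marriedTo", "class:singer")])
```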

37 Evaluation
– Datasets
– Evaluation Metrics
– Results & Discussion

38 Datasets
QALD-1
– 1st Workshop on Question Answering over Linked Data (QALD-1)
NAGA collection
– The NAGA collection is based on data from the Yago2 knowledge base (context of the NAGA project)
Training set
– 23 QALD-1 questions
– 43 NAGA questions
– used to obtain the hyperparameters (α, β, γ) of the ILP objective function
Test set
– 27 QALD-1 questions
– 44 NAGA questions
– 19 QALD-1 questions in the test set

39 Evaluation Metrics
The authors evaluated the output of DEANNA at three stages:
– 1. after the disambiguation of phrases
– 2. after the generation of the SPARQL query
– 3. after obtaining answers from the underlying linked-data sources
Judgment
– Two human assessors judged whether each output item was good or not.
– If the two were in disagreement, a third person resolved the judgment.

40 Disambiguation stage
The judges’ task:
– look at each q-node/s-node pair, in the context of the question and the underlying data schemas,
– determine whether the mapping is correct,
– determine whether any expected mappings are missing.

41 Query-generation stage
The judges’ task:
– look at each triple pattern,
– determine whether the pattern is meaningful for the question,
– determine whether any expected triple pattern is missing.

42 Query-answering stage The judges were asked to determine whether the result sets for the generated queries were satisfactory.

43 Micro- vs. macro-averaging
Micro-averaging aggregates over all assessed items, regardless of the questions to which they belong. Macro-averaging first aggregates the items for the same question, and then averages the quality measure over all questions.
For a question q and item set s in one of the stages of evaluation:
– correct(q, s): the number of correct items in s
– ideal(q): the size of the ideal item set
– retrieved(q, s): the number of retrieved items
Coverage and precision are defined as:
– cov(q, s) = correct(q, s) / ideal(q)
– prec(q, s) = correct(q, s) / retrieved(q, s)
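These metric definitions translate directly into code; a minimal sketch, with the helper names chosen here for readability:

```python
def cov(correct, ideal):
    """Coverage: correct items relative to the size of the ideal item set."""
    return correct / ideal

def prec(correct, retrieved):
    """Precision: correct items relative to the number of retrieved items."""
    return correct / retrieved

def macro_average(scores):
    """Macro-averaging: average the per-question quality scores."""
    return sum(scores) / len(scores)

def micro_average(corrects, denominators):
    """Micro-averaging: pool item counts across all questions, then divide."""
    return sum(corrects) / sum(denominators)
```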

44

45 Conclusions The authors presented a method for translating natural language questions into structured queries. Although the model, in principle, leads to high combinatorial complexity, they observed that the Gurobi solver could handle their judiciously designed ILP very efficiently. Their experimental studies showed very high precision and good coverage of the query translation, and good results for the actual question answering.

46 q_NL focuses on entities, classes, and relations
– e.g., “Which actress from Casablanca is married to a writer from Rome?”
entities: Casablanca, …
classes: actresses, …
relations: marriedTo, …

