
1
An F-Measure for Context-Based Information Retrieval
Michael Kandefer and Stuart C. Shapiro
University at Buffalo
Department of Computer Science and Engineering
Center for Multisource Information Fusion
Center for Cognitive Science
{mwk3,shapiro}@cse.buffalo.edu

2
Introduction
Commonsense 2009
– One of the major long-term goals of AI is to endow computers with common sense
– One challenge is the accumulation of large amounts of knowledge about our everyday world
– Managing a large-scale knowledge store is necessary

3
Introduction
Building commonsense reasoners requires access to large amounts of information
– Deductive reasoners suffer performance issues when working with large KBs
Optimal solution:
– Use only the information that is needed for reasoning; this is considered the relevant information
– Not practical: finding it can take as long as the reasoning itself

4
Introduction
Solution: Context-Based Information Retrieval (CBIR)
– Use context (the environment and other constraints) to help establish information that is likely to be relevant; not "context" in the KR sense
– A heuristic that sacrifices precision for rapid retrieval
– Useful for many applications: HCI devices, embodied acting agents
Problem: which CBIR techniques are better?
– How do you measure CBIR output?

5
CBIR Process
[Diagram: Input (I) and Query (Q) feed the CBIR process, which draws on the Background Knowledge Sources (BKS) and passes Retrieved Propositions to the Reasoning Engine.]

6
F-Measure
[Diagram: Retrieved Propositions (RET) vs. Relevant Propositions (REL); their overlap determines Recall (r), Precision (p), and the F-Measure.]
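The quantities on this slide have their standard IR definitions: r = |RET ∩ REL| / |REL|, p = |RET ∩ REL| / |RET|, and F = 2pr / (p + r). A minimal Python sketch (the function name is mine, not from the paper):

```python
def precision_recall_f(ret, rel):
    """Precision, recall, and F-measure of a retrieved set RET
    against a relevant set REL (both plain Python sets)."""
    overlap = len(ret & rel)
    p = overlap / len(ret) if ret else 0.0     # precision: fraction of RET that is relevant
    r = overlap / len(rel) if rel else 0.0     # recall: fraction of REL that was retrieved
    f = 2 * p * r / (p + r) if p + r else 0.0  # harmonic mean of p and r
    return p, r, f
```

For example, retrieving {A1, A2, A3, A4} when {A1, A2, A3, A4, A7} is relevant gives p = 1.0, r = 0.8, F ≈ 0.889.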

7
Establishing Relevant Propositions
Relevant propositions are only those needed for performing the required reasoning
– Establish what is really relevant
Can be generated manually
– Not practical for large KBs
Automatic procedures are desirable
– Run prior to use of the CBIR procedure, so runtime is not a huge issue
Two will be discussed:
– Relevance-theoretic
– Distance from the optimal

8
CBIR
[Diagram: Input (I) and Query (Q) feed the CBIR process, which draws on the Background Knowledge Sources (BKS) and passes Retrieved Propositions to the Reasoning Engine.]

9
Relevant Proposition Tagging
[Diagram: Input (I) and Query (Q) are processed against the Background Knowledge Sources (BKS) by Relevant Proposition Tagging (RPT), producing Relevant Propositions for comparison against Retrieved Propositions.]

10
Relevance-Theoretic
Sperber and Wilson's Relevance Theory
– A model of utterance interpretation: receives an input utterance and determines how relevant it is to an agent's beliefs
– Can be used for other cognitive processes
Proposed for measuring relevance in IR
– Establishing the set of relevant propositions

11
S & W Relevance
After {I ∪ Q} is inserted into the BKS, a proposition p ∈ BKS is relevant if it causes a positive cognitive effect:
– ¬p ∈ {I ∪ Q},
– p helps strengthen some q ∈ {I ∪ Q}, or
– p contributes to a contextual implication: {I ∪ Q} ∪ BKS non-trivially derives, using p, some proposition q, where {I ∪ Q} alone does not non-trivially derive q and BKS alone does not non-trivially derive q
p strengthens q if:
– q was already derived in {I ∪ Q} and BKS can non-trivially derive q using p, i.e., q is independently derived
Non-trivial derivations are not easy to formalize
– No formalization provided by S & W
– Here, propositions used in forward chaining are considered non-trivial
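A rough executable reading of these conditions (all names are mine; condition 2, strengthening, is omitted, and "non-trivially derives using p" is approximated by checking that the derivation fails once p is removed):

```python
def relevant(p, iq, bks, derives, candidates):
    """Sketch of the S & W positive-cognitive-effect test for a proposition
    p in BKS after I ∪ Q has been asserted.  `derives(premises, q)` is a
    stand-in derivability oracle; `candidates` is a closed list of possible
    conclusions q.  This is illustrative, not the authors' implementation."""
    # Condition 1: p contradicts something already in I ∪ Q.
    if "¬" + p in iq or (p.startswith("¬") and p[1:] in iq):
        return True
    # Condition 3: contextual implication -- some q follows from I ∪ Q
    # together with BKS, but from neither alone, and the derivation needs p.
    for q in candidates:
        if (derives(iq | bks, q)
                and not derives(iq, q)
                and not derives(bks, q)
                and not derives((iq | bks) - {p}, q)):
            return True
    return False  # condition 2 (strengthening) is omitted in this sketch
```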

12
Example
BKS
A1: ∀(x, y)(Blunt(x) ∧ Conical(x) ∧ Drawer(y) ∧ ConnectedByTip(x, y) → Handle(x))
A2: ∀(x)(Handle(x) → CanBePulled(x))
A3: Blunt(h1)
A4: Conical(h1)
A5: ∀(x, y)(Rope(x) ∧ Light(y) ∧ Connected(x, y) → CanBePulled(x))
A6: ∀(x, y)(Blunt(x) ∧ Conical(y) ∧ ConnectedByBase(x, y) → ¬Handle(x))
A7: ∀(x)(Drawer(x) → ContainsItems(x))
{I ∪ Q}: {Drawer(d1) ∧ ConnectedByTip(h1, d1) ∧ CanBePulled(h1)}
rel: {A1, A2, A3, A4, A7}
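The quantified rules can be hand-instantiated for h1 and d1 and run through a tiny forward chainer, showing how CanBePulled(h1) follows (an illustrative sketch, not the authors' system):

```python
def forward_chain(facts, rules):
    """Saturate `facts` under `rules`; each rule is
    (frozenset_of_premises, conclusion) over ground atoms."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Rules A1, A2, A7 instantiated by hand for h1 and d1:
rules = [
    (frozenset({"Blunt(h1)", "Conical(h1)", "Drawer(d1)",
                "ConnectedByTip(h1,d1)"}), "Handle(h1)"),   # A1
    (frozenset({"Handle(h1)"}), "CanBePulled(h1)"),         # A2
    (frozenset({"Drawer(d1)"}), "ContainsItems(d1)"),       # A7
]
facts = {"Blunt(h1)", "Conical(h1)",                        # A3, A4
         "Drawer(d1)", "ConnectedByTip(h1,d1)"}             # from I
closure = forward_chain(facts, rules)
```

The closure contains Handle(h1) and CanBePulled(h1), and every axiom used along the way is one of rel = {A1, A2, A3, A4, A7}.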

13
Example
rel: {A1, A2, A3, A4, A7}

CBIR results:
Name    Retrieved Propositions
CBIR1   {A1, A2, A3, A4}
CBIR2   {A1, A2, A3, A4, A6}
CBIR3   {A2, A3, A4, A5, A7}

Name    |REL|  |RET|  |RET ∩ REL|  Recall  Precision  F-Measure
CBIR1     5      4         4        0.8      1.00       0.889
CBIR2     5      5         4        0.8      0.80       0.800
CBIR3     5      5         4        0.8      0.80       0.800
BKS       5      7         5        1.0      0.71       0.833
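The table's numbers can be checked mechanically from rel and the retrieved sets (variable names are mine):

```python
# Reproduce the slide's scores from rel and the retrieved sets.
rel = {"A1", "A2", "A3", "A4", "A7"}
retrieved = {
    "CBIR1": {"A1", "A2", "A3", "A4"},
    "CBIR2": {"A1", "A2", "A3", "A4", "A6"},
    "CBIR3": {"A2", "A3", "A4", "A5", "A7"},
    "BKS":   {"A1", "A2", "A3", "A4", "A5", "A6", "A7"},
}

scores = {}
for name, ret in retrieved.items():
    overlap = len(ret & rel)
    r = overlap / len(rel)   # recall
    p = overlap / len(ret)   # precision
    f = 2 * p * r / (p + r)  # F-measure
    scores[name] = (r, p, f)
    print(f"{name}: |RET∩REL|={overlap} r={r:.2f} p={p:.2f} F={f:.3f}")
```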

14
Distance from the Optimal
Uses I, Q, and BKS and some reasoner capable of maintaining origin sets
Origin sets
– A product of relevance logic/ATMS
– The propositions required for deriving some proposition
Procedure:
– Generate the origin set required for deriving Q
– Use the origin set as the relevant propositions
– Compare CBIR results to the optimal solution

15
Finding the Optimal Solution
1. Given: Q, BKS, and I
2. Load the BKS into a reasoner
3. Add I to the BKS
4. Query the reasoner on Q
5. Examine the set of origin sets for Q, defined as:
   {A − I | A ⊆ {BKS ∪ I} ∧ A ⊢ Q ∧ ¬∃A′((A′ ⊂ A) ∧ A′ ⊢ Q)}
6. Select the sets of minimal cardinality; this new set of origin sets is denoted min(·)
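Steps 5 and 6 can be sketched by brute force for small knowledge bases, with `derives` standing in for the reasoner's derivability test (all names here are assumptions, not the authors' code):

```python
from itertools import combinations

def minimal_origin_sets(bks, i, q, derives):
    """Enumerate antecedent sets A ⊆ BKS ∪ I that derive q, keep only the
    subset-minimal ones (step 5), and return A − I for those of minimum
    cardinality (step 6).  Exponential in |BKS ∪ I|; a sketch only."""
    universe = sorted(bks | i)
    minimal = []
    for k in range(len(universe) + 1):  # smallest antecedent sets first
        for combo in combinations(universe, k):
            a = set(combo)
            # keep A only if it derives q and no proper subset already does
            if derives(a, q) and not any(m < a for m in minimal):
                minimal.append(a)
    origin = [frozenset(a - i) for a in minimal]
    if not origin:
        return set()
    smallest = min(len(s) for s in origin)
    return {s for s in origin if len(s) == smallest}
```

With a derivability oracle over the example BKS, this returns the single origin set {A1, A2, A3, A4} for Q = CanBePulled(h1), matching the rel set on the next slides.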

16
Example
BKS
A1: ∀(x, y)(Blunt(x) ∧ Conical(x) ∧ Drawer(y) ∧ ConnectedByTip(x, y) → Handle(x))
A2: ∀(x)(Handle(x) → CanBePulled(x))
A3: Blunt(h1)
A4: Conical(h1)
A5: ∀(x, y)(Rope(x) ∧ Light(y) ∧ Connected(x, y) → CanBePulled(x))
A6: ∀(x, y)(Blunt(x) ∧ Conical(y) ∧ ConnectedByBase(x, y) → ¬Handle(x))
A7: ∀(x)(Drawer(x) → ContainsItems(x))
I: {Drawer(d1) ∧ ConnectedByTip(h1, d1)}
Q: {CanBePulled(h1)}
rel: {A1, A2, A3, A4}

17
Example
rel: {A1, A2, A3, A4}

CBIR results:
Name    Retrieved Propositions
CBIR1   {A1, A2, A3, A4}
CBIR2   {A1, A2, A3, A4, A6}
CBIR3   {A2, A3, A4, A5, A7}

Name    |REL|  |RET|  |RET ∩ REL|  Recall  Precision  F-Measure
CBIR1     4      4         4        1.00     1.00       1.000
CBIR2     4      5         4        1.00     0.80       0.889
CBIR3     4      5         3        0.75     0.60       0.667
BKS       4      7         4        1.00     0.57       0.727

18
Relevance-Theoretic vs. Distance from the Optimal
Similarities
– Both use rules of inference to create the relevant proposition set
Differences
– Distance from the optimal generates relevant proposition sets that precisely match the original definition
– Relevance-theoretic values CBIR outputs with multiple paths of inference to a solution
– Relevance-theoretic requires a formalization of the non-trivial derivation concept

19
Conclusions and Future Work
Conclusions
– The relevance-theoretic approach is less successful at measuring some CBIR results than distance from the optimal
Uses
– Comparing different CBIR algorithms
– Improving CBIR procedures: many CBIR procedures have parameters that can be modified to change their performance
Future Work
– Use the theoretical discussion to help construct comparisons of CBIR results
