1. Search strategy & tactics: governed by effectiveness & feedback
© Tefko Saracevic, Rutgers University

2. Some definitions
- Search statement (query): a set of search terms with logical connectors and attributes; file- and system-dependent.
- Search strategy (big picture): the overall approach to searching a question: selection of systems, files, search statements and tactics, their sequence, output formats, and cost and time aspects.
- Search tactics (action choices): choices and variations in search statements: terms, connectors, attributes.
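As an illustration of these definitions, a search statement can be sketched as set operations over a toy inverted index. This is a minimal sketch only; the terms and document IDs below are hypothetical, not from the study.

```python
# Toy inverted index: term -> set of document IDs containing it.
index = {
    "retrieval": {1, 2, 3},
    "evaluation": {2, 3, 4},
    "relevance": {3, 4, 5},
}

def posting(term):
    """Documents containing the term (empty set if the term is unknown)."""
    return index.get(term, set())

# Search statement: retrieval AND (evaluation OR relevance)
# AND maps to set intersection (&), OR to set union (|).
result = posting("retrieval") & (posting("evaluation") | posting("relevance"))
print(sorted(result))  # [2, 3]
```

Changing the terms, connectors, or attributes of such a statement is exactly what the tactics below vary.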

3. Some definitions (cont.)
- Cycle: a set of commands from the start (BEGIN) to viewing (TYPE) results, or from one viewing command to the next.
- Move: a modification of search strategy or tactics aimed at improving the results.

4. Some definitions (cont.)
- Effectiveness: performance with respect to objectives. To what degree did a search accomplish what was desired? How well did it do in terms of relevance?
- Efficiency: performance with respect to costs. At what cost, effort, and time?
- Both are KEY concepts and criteria for the selection of strategy and tactics, and for evaluation.

5. Effectiveness criteria
- Search tactics are chosen and changed following some criterion of accomplishment:
  - none (no thought given)
  - relevance
  - magnitude
  - output attributes
  - topic/strategy
- Tactics are altered interactively: the role and types of feedback.
- Knowing what tactics may produce what results is key to the professional searcher.

6. Relevance: key concept in IR
- An attribute/criterion reflecting the effectiveness of the exchange of information between people (users) and IR systems in communication contacts, based on valuation by people.
- Some attributes:
  - in IR, user-dependent
  - multidimensional or faceted
  - dynamic
  - somewhat measurable
  - intuitively well understood

7. Types of relevance
Several types are considered:
- Systems or algorithmic relevance: the relation between a query as entered and the objects in the file of a system as retrieved, or failed to be retrieved, by a given procedure or algorithm. Comparative effectiveness.
- Topical or subject relevance: the relation between the topic of the query and the topic covered by the retrieved objects, or objects in the file(s) of the system, or even in existence. Aboutness.

8. Types of relevance (cont.)
- Cognitive relevance or pertinence: the relation between the state of knowledge and cognitive information need of a user and the objects provided or in the file(s). Informativeness, novelty.
- Motivational or affective relevance: the relation between the intents, goals, and motivations of a user and the objects retrieved by a system, or in the file, or even in existence. Satisfaction.
- Situational relevance or utility: the relation between the task or problem at hand and the objects retrieved (or in the files). Relates to usefulness in decision-making, reduction of uncertainty.

9. Effectiveness measures
- Precision: the probability that, given that an object is retrieved, it is relevant; or the ratio of relevant items retrieved to all items retrieved.
- Recall: the probability that, given that an object is relevant, it is retrieved; or the ratio of relevant items retrieved to all relevant items in a file.
- Precision is easy to establish; recall is not. The union of several retrievals can serve as a "trick" to estimate recall.

10. Calculation
Let a = relevant items retrieved, b = non-relevant items retrieved, c = relevant items not retrieved. Then:

  Precision = a / (a + b)
  Recall = a / (a + c)

High precision: maximize a, minimize b. High recall: maximize a, minimize c.
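The calculation above can be sketched in Python; the retrieved set and relevance judgments below are hypothetical.

```python
def precision(retrieved, relevant):
    """a / (a + b): fraction of retrieved items that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    """a / (a + c): fraction of relevant items that are retrieved."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

retrieved = {1, 2, 3, 4}     # hypothetical search output
relevant = {2, 3, 5, 6, 7}   # hypothetical relevance judgments

# a = 2 (items 2, 3), b = 2 (items 1, 4), c = 3 (items 5, 6, 7)
print(precision(retrieved, relevant))  # 2 / (2 + 2) = 0.5
print(recall(retrieved, relevant))     # 2 / (2 + 3) = 0.4
```

Note that recall requires knowing all relevant items in the file, which is exactly why it is hard to establish in practice.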

11. Precision-recall trade-off
- USUALLY, precision and recall are inversely related: higher recall usually means lower precision, and vice versa.
- [Figure: precision vs. recall plot (0 to 100%), showing the ideal curve, the usual curve, and the direction of improvements.]

12. Search tactics
What variations are possible? Several 'things' in a query can be selected or changed that affect effectiveness:
1. LOGIC: choice of connectors among terms (AND, OR, NOT, W, ...).
2. SCOPE: number of concepts linked with ANDs (A AND B vs. A AND B AND C).
3. EXHAUSTIVITY: for each concept, the number of related terms in OR connections (A OR B vs. A OR B OR C).

13. Search tactics (cont.)
4. TERM SPECIFICITY: for each concept, the level in the hierarchy (broader vs. narrower terms).
5. SEARCHABLE FIELDS: choice of text terms and non-text attributes (titles only, limits).
6. FILE- OR SYSTEM-SPECIFIC CAPABILITIES: ranking, target, sorting.

14. Effectiveness "laws"
- SCOPE (more ANDs): output size down, recall down, precision up.
- EXHAUSTIVITY (more ORs): output size up, recall up, precision down.
- USE OF NOTs: output size down, recall down, precision up.
- BROAD TERM USE (low specificity): output size up, recall up, precision down.
- PHRASE USE (high specificity): output size down, recall down, precision up.
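These "laws" can be illustrated on toy data; the index, terms, and relevance judgments below are hypothetical and chosen only so the trends show through. Adding an AND (scope) narrows the output, lowering recall and raising precision; adding an OR (exhaustivity) broadens it, with the opposite effect.

```python
# Toy inverted index and relevance judgments (hypothetical).
index = {
    "cats":    {1, 2, 3, 4},
    "diet":    {2, 3, 4, 7, 9, 10},
    "health":  {3},
    "felines": {7, 9, 10},
}
relevant = {2, 3, 7}

base = index["cats"] & index["diet"]                        # cats AND diet
narrow = base & index["health"]                             # one more AND (scope up)
broad = (index["cats"] | index["felines"]) & index["diet"]  # one more OR (exhaustivity up)

def precision(result):
    return len(result & relevant) / len(result)

def recall(result):
    return len(result & relevant) / len(relevant)

# Output size: narrow < base < broad
print(len(narrow), len(base), len(broad))     # 1 3 6
# Precision falls as the search broadens; recall rises.
print(precision(narrow), precision(base), precision(broad))
print(recall(narrow), recall(base), recall(broad))
```

On this data the trends are monotone; on real files the "laws" hold only "usually", as the trade-off slide notes.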

15. Recall, precision devices
BROADENING (higher recall): fewer ANDs, more ORs, fewer NOTs, more free text, fewer controlled terms, more synonyms, broader terms, less specific, more truncation, fewer qualifiers, fewer LIMITs, citation growing.
NARROWING (higher precision): more ANDs, fewer ORs, more NOTs, less free text, more controlled terms, fewer synonyms, narrower terms, more specific, less truncation, more qualifiers, more LIMITs, building blocks.

16. Examples from a study
- 40 users, one question each; 4 intermediaries; triadic HCI in a regular setting, videotaped and logged.
- 48 hrs of tape (72 min. average per question): presearch 16 min. average, online 56 min. average.
- User judgments: 6225 items; 3565 relevant or partially relevant, 2660 not relevant.
- Many variables, measures, and analyses.

17. Feedback
Relevance feedback loops:
- Content relevance feedback: judging the relevance of items.
- Term relevance feedback: looking for new terms.
- Magnitude feedback: number of postings.
Strategy feedback loops:
- Tactical review feedback: review of strategy (DS).
- Terminology review feedback: term review and evaluation.

18. Data on feedback types
Total feedback loops: 885 (in 40 questions)

  Type                           Count (%)   Rank
  Content relevance feedback     354 (40%)   2
  Term relevance feedback        67 (8%)     3
  Magnitude feedback             396 (45%)   1
  Tactical review feedback       56 (6%)     4
  Terminology review feedback    12 (1%)     5

Feedbacks initiated by: user 351 (40%), intermediary 534 (60%, mostly magnitude).

19. DIALOG commands
Total number: 1677 (in 40 questions)

  Command        Count (%)    In no. of questions
  Select         1057 (63%)   40
  Type           462 (28%)    40
  Change db      67 (4%)      24
  Display sets   57 (3%)      22
  Limit          19 (1%)      11
  Expand         6 (1%)       6

