Evaluating ASSOCIATIVE BROWSING for Personal Information. Jinyoung Kim / W. Bruce Croft / David Smith.

Similar presentations
Pseudo-Relevance Feedback For Multimedia Retrieval By Rong Yan, Alexander G. and Rong Jin Mwangi S. Kariuki

Evaluation. Rong Jin. Evaluation is key to building effective and efficient search engines; it is usually carried out in controlled experiments.
Modelling Relevance and User Behaviour in Sponsored Search using Click-Data Adarsh Prasad, IIT Delhi Advisors: Dinesh Govindaraj SVN Vishwanathan* Group:
Developing and Evaluating a Query Recommendation Feature to Assist Users with Online Information Seeking & Retrieval With graduate students: Karl Gyllstrom,
COMP423 Intelligent Agents. Recommender systems Two approaches – Collaborative Filtering Based on feedback from other users who have rated a similar set.
GENERATING AUTOMATIC SEMANTIC ANNOTATIONS FOR RESEARCH DATASETS AYUSH SINGHAL AND JAIDEEP SRIVASTAVA CS DEPT., UNIVERSITY OF MINNESOTA, MN, USA.
Learning User Interaction Models for Predicting Web Search Result Preferences. Eugene Agichtein, Eric Brill, Susan Dumais, Robert Ragno. Microsoft Research.
Dialogue – Driven Intranet Search Suma Adindla School of Computer Science & Electronic Engineering 8th LANGUAGE & COMPUTATION DAY 2009.
Explorations in Tag Suggestion and Query Expansion Jian Wang and Brian D. Davison Lehigh University, USA SSM 2008 (Workshop on Search in Social Media)
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning Presented by Pinar Donmez joint work with Jaime G. Carbonell Language Technologies.
Question Answering in Biomedicine. Student: Andreea Tutos. Supervisor: Diego Molla.
Evaluating Search Engine
A Web of Concepts Dalvi, et al. Presented by Andrew Zitzelberger.
CS 430 / INFO 430 Information Retrieval. Lecture 8: Query Refinement: Relevance Feedback; Information Filtering.
Presented by Li-Tal Mashiach Learning to Rank: A Machine Learning Approach to Static Ranking Algorithms for Large Data Sets Student Symposium.
Modern Information Retrieval
INFO 624 Week 3 Retrieval System Evaluation
Retrieval Evaluation. Brief Review Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
FACT: A Learning Based Web Query Processing System Hongjun Lu, Yanlei Diao Hong Kong U. of Science & Technology Songting Chen, Zengping Tian Fudan University.
Retrieval Evaluation: Precision and Recall. Introduction Evaluation of implementations in computer science often is in terms of time and space complexity.
Recall: Query Reformulation Approaches 1. Relevance feedback based vector model (Rocchio …) probabilistic model (Robertson & Sparck Jones, Croft…) 2. Cluster.
CS 430 / INFO 430 Information Retrieval. Lecture 24: Usability 2.
Digital Library Service Integration (DLSI) --> Looking for Collections and Services to be DLSI Testbeds
J. Chen, O. R. Zaiane and R. Goebel An Unsupervised Approach to Cluster Web Search Results based on Word Sense Communities.
Retrieval Evaluation. Introduction Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
Online Learning for Web Query Generation: Finding Documents Matching a Minority Concept on the Web Rayid Ghani Accenture Technology Labs, USA Rosie Jones.
Finding Advertising Keywords on Web Pages. Scott Wen-tau Yih, Joshua Goodman (Microsoft Research); Vitor R. Carvalho (Carnegie Mellon University).
Query session guided multi- document summarization THESIS PRESENTATION BY TAL BAUMEL ADVISOR: PROF. MICHAEL ELHADAD.
Retrieval and Evaluation Techniques for Personal Information. Jin Young Kim, 7/26 Ph.D. Dissertation Seminar.
HUMANE INFORMATION SEEKING: GOING BEYOND THE IR WAY. JIN YOUNG KIM, IBM RESEARCH.
Retrieval Model and Evaluation Jinyoung Kim UMass Amherst CS646 Lecture 1.
CS344: Introduction to Artificial Intelligence Vishal Vachhani M.Tech, CSE Lecture 34-35: CLIR and Ranking in IR.
Result presentation. Search Interface Input and output functionality – helping the user to formulate complex queries – presenting the results in an intelligent.
CS598CXZ Course Summary ChengXiang Zhai Department of Computer Science University of Illinois, Urbana-Champaign.
Slide Image Retrieval: A Preliminary Study Guo Min Liew and Min-Yen Kan National University of Singapore Web IR / NLP Group (WING)
©2008 Srikanth Kallurkar, Quantum Leap Innovations, Inc. All rights reserved. Apollo – Automated Content Management System Srikanth Kallurkar Quantum Leap.
A Simple Unsupervised Query Categorizer for Web Search Engines Prashant Ullegaddi and Vasudeva Varma Search and Information Extraction Lab Language Technologies.
Ontology Based Personalized Search. Zhang Tao, The University of Seoul (UOS).
Mining the Web to Create Minority Language Corpora Rayid Ghani Accenture Technology Labs - Research Rosie Jones Carnegie Mellon University Dunja Mladenic.
Markup and Validation Agents in Vijjana – A Pragmatic model for Self- Organizing, Collaborative, Domain- Centric Knowledge Networks S. Devalapalli, R.
Implicit User Feedback. Hongning Wang. Explicit relevance feedback: the query is updated from the user's feedback judgments (d1+, d2-, d3+, ..., dk-).
Relevance Detection Approach to Gene Annotation Aid to automatic annotation of databases Annotation flow –Extraction of molecular function of a gene from.
Assessing The Retrieval. A.I. Lab, 박동훈. Contents: 4.1 Personal Assessment of Relevance; 4.2 Extending the Dialog with RelFbk; 4.3 Aggregated Assessment.
Contextual Ranking of Keywords Using Click Data. ICDE'09. Utku Irmak, Vadim von Brzeski, Reiner Kraft.
Modeling term relevancies in information retrieval using Graph Laplacian Kernels Shuguang Wang Joint work with Saeed Amizadeh and Milos Hauskrecht.
Qi Guo Emory University Ryen White, Susan Dumais, Jue Wang, Blake Anderson Microsoft Presented by Tetsuya Sakai, Microsoft Research.
Information Retrieval
Post-Ranking query suggestion by diversifying search Chao Wang.
Final Project. Mei-Chen Yeh, May 15. General: in-class presentation, June 12 and June 19, 2012; 15 minutes, in English; 30% of the overall grade.
Finding the Right Facts in the Crowd: Factoid Question Answering over Social Media J. Bian, Y. Liu, E. Agichtein, and H. Zha ACM WWW, 2008.
Ranking Categories for Faceted Search. Gianluca Demartini. L3S Research Seminars, Hannover, 09 June 2006.
Michael Bendersky, W. Bruce Croft Dept. of Computer Science Univ. of Massachusetts Amherst Amherst, MA SIGIR
Chapter. 3: Retrieval Evaluation 1/2/2016Dr. Almetwally Mostafa 1.
Identifying “Best Bet” Web Search Results by Mining Past User Behavior Author: Eugene Agichtein, Zijian Zheng (Microsoft Research) Source: KDD2006 Reporter:
The Development of a search engine & Comparison according to algorithms Sung-soo Kim The final report.
Learning to Rank: From Pairwise Approach to Listwise Approach Authors: Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li Presenter: Davidson Date:
CS 430 / INFO 430 Information Retrieval. Lecture 12: Query Refinement and Relevance Feedback.
ASSOCIATIVE BROWSING Evaluating 1 Jin Y. Kim / W. Bruce Croft / David Smith by Simulation.
WHIM, Spring '10. By Enza Desai. What is HCIR? The study of IR techniques that bring human intelligence into the search process. Coined by Gary Marchionini.
Image Retrieval and Ranking using L.S.I and Cross View Learning. Sumit Kumar, Vivek Gupta.
Personalized Ontology for Web Search Personalization S. Sendhilkumar, T.V. Geetha Anna University, Chennai India 1st ACM Bangalore annual Compute conference,
User Modeling for Personal Assistant
Large-Scale Content-Based Audio Retrieval from Text Queries
Learning to Rank Shubhra kanti karmaker (Santu)
Combining Keyword and Semantic Search for Best Effort Information Retrieval  Andrew Zitzelberger 1.
Jonathan Elsas LTI Student Research Symposium Sept. 14, 2007
Learning to Rank with Ties
Lab 2: Information Retrieval
Ranking using Multiple Document Types in Desktop Search
Presentation transcript:

Evaluating ASSOCIATIVE BROWSING for Personal Information. Jinyoung Kim / W. Bruce Croft / David Smith

* What do you remember about your documents? (example document shown: "Registration", from James) Use search if you recall keywords!

* What if keyword search is not enough? Search first, then browse through documents!

* But I don't remember any keywords! You might remember a related concept (e.g., James, William)! *Concept: entities and terms of interest to the user.

* How can we build associations? Manually? Participants wouldn't create associations beyond simple tagging operations (Sauermann et al.). Automatically? Then how would the links match the user's preferences?

* Building the Associative Browsing Model
1. Document Collection
2. Concept Extraction
3. Link Extraction: term similarity, temporal similarity, co-occurrence
4. Link Refinement: click-based training
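
To make the four stages concrete, here is a toy, self-contained sketch of the pipeline. Everything in it is an illustrative assumption: concept extraction is reduced to spotting capitalized terms, and the only link feature is document co-occurrence, whereas the actual system combines several similarity features and click-based training.

```python
# Toy sketch of the pipeline above (all names and heuristics are
# illustrative): 1. collection -> 2. concept extraction ->
# 3. link extraction -> 4. refinement (stubbed).
import re
from collections import defaultdict
from itertools import combinations

# 1. Document collection (toy stand-in for a personal archive)
docs = {
    "d1": "Meeting with James about the Search Engine project registration.",
    "d2": "Notes on Search Engine evaluation and the DocTrack game.",
    "d3": "DocTrack registration opens; James will send the announcement.",
}

# 2. Concept extraction: here, just capitalized terms of interest
concept_docs = defaultdict(set)
for doc_id, text in docs.items():
    for term in re.findall(r"(?:[A-Z][a-zA-Z]+ ?)+", text):
        concept_docs[term.strip()].add(doc_id)

# 3. Link extraction: score concept pairs by raw co-occurrence
links = {}
for c1, c2 in combinations(concept_docs, 2):
    shared = len(concept_docs[c1] & concept_docs[c2])
    if shared:
        links[(c1, c2)] = shared

# 4. Link refinement would re-weight these scores from user clicks
for pair, score in sorted(links.items(), key=lambda kv: -kv[1]):
    print(pair, "->", score)
```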

* Link Extraction and Refinement (screenshot example, concept: "Search Engine")
Link Scoring: a linear combination of link type scores, S(c1, c2) = Σ_i [ w_i × Link_i(c1, c2) ].
Link Presentation: a ranked list of suggested items; users click on them to browse.
Link Refinement (training the w_i): maximize click-based relevance.
- Grid Search: maximize retrieval effectiveness (MRR).
- RankSVM: minimize error in pairwise preferences.
Link type features:
- Shared by both: Term Vector Similarity, Temporal Similarity, Tag Similarity
- Between concepts: String Similarity, Co-occurrence
- Between documents: Path / Type Similarity, Concept Similarity
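
Below is a minimal sketch of this scoring-and-training scheme: the linear link score plus a brute-force grid search for the weights w_i that maximize MRR over clicks. The feature names, grid resolution, and toy click log are assumptions for illustration; a RankSVM variant would instead minimize pairwise preference errors between clicked and skipped suggestions.

```python
# Sketch: S(c1, c2) = sum_i w_i * Link_i(c1, c2), with a coarse grid
# search for weights w_i that maximize MRR on click data.
from itertools import product

FEATURES = ["term_sim", "temporal_sim", "cooccurrence"]  # illustrative

def score(features, weights):
    """Linear combination of link type scores."""
    return sum(w * features[f] for f, w in zip(FEATURES, weights))

def mrr(weights, click_log):
    """click_log: (candidate feature dicts, index of clicked candidate)."""
    total = 0.0
    for candidates, clicked in click_log:
        ranked = sorted(range(len(candidates)),
                        key=lambda i: -score(candidates[i], weights))
        total += 1.0 / (ranked.index(clicked) + 1)
    return total / len(click_log)

def grid_search(click_log, steps=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Brute-force search over the weight grid, maximizing MRR."""
    return max(product(steps, repeat=len(FEATURES)),
               key=lambda w: mrr(w, click_log))

# toy click log: two suggestion lists; the user clicked item 0 in each
click_log = [
    ([{"term_sim": 0.9, "temporal_sim": 0.1, "cooccurrence": 0.2},
      {"term_sim": 0.3, "temporal_sim": 0.8, "cooccurrence": 0.1}], 0),
    ([{"term_sim": 0.2, "temporal_sim": 0.4, "cooccurrence": 0.9},
      {"term_sim": 0.6, "temporal_sim": 0.2, "cooccurrence": 0.3}], 0),
]
w = grid_search(click_log)
print("learned weights:", w, "MRR:", round(mrr(w, click_log), 3))
```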

EVALUATION

* Evaluation based on Known-item Finding
Data Collection: public documents collected from the UMass CS department. CS dept. people competed in known-item finding tasks: 30 participants, 53 search sessions in total, over two rounds of user study.
Metrics:
- Value of browsing: % of sessions in which browsing was used; % of sessions in which browsing was used and led to success.
- Quality of browsing suggestions: Mean Reciprocal Rank (MRR), using clicks as judgments, with 10-fold cross-validation over the collected click data.
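
For concreteness, here is a small self-contained sketch of the MRR metric and the 10-fold split, assuming each click is logged with the rank its suggestion occupied; the per-fold training step is stubbed out, since any of the weight-learning methods above would slot in there.

```python
# Sketch: MRR with clicks as relevance judgments, plus 10-fold
# cross-validation bookkeeping over a (synthetic) click log.
import random

random.seed(0)
# toy click log: the 0-based rank at which each clicked suggestion appeared
clicked_ranks = [random.randrange(10) for _ in range(200)]

def mrr(ranks):
    """Mean Reciprocal Rank of the clicked suggestions."""
    return sum(1.0 / (r + 1) for r in ranks) / len(ranks)

k = 10
folds = [clicked_ranks[i::k] for i in range(k)]
fold_scores = []
for i in range(k):
    held_out = folds[i]
    # in the real setup, link weights are trained on the other 9 folds here
    fold_scores.append(mrr(held_out))

print("mean MRR over folds:", round(sum(fold_scores) / k, 3))
```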

* DocTrack [Kim&Croft10] (screenshot of the DocTrack known-item finding game, showing a target concept, "Computer Architecture", and its relevant documents)

* Evaluation Results
Comparison with simulation results [Kim&Croft&Smith11]: the user studies roughly match the simulation in overall usage and success ratio.
The value of browsing: browsing was used in 30% of all sessions, and when used it led to success in roughly 75% of those sessions (32/43 in the second user study).

Evaluation Type      Total (#sessions)   Browsing used    Successful outcome
Simulation [KCS11]   63,260              9,410 (14.8%)    3,957 (42.0%)
User Study (1)       290                 42 (14.5%)       15 (35.7%)
User Study (2)       142                 43 (30.2%)       32 (74.4%)

(chart legend: Document Only vs. Document + Concept)

* Quality of Browsing Suggestions – CS Collection (charts: Concept Browsing MRR and Document Browsing MRR)

* Summary
Associative Browsing Model for Personal Information.
Evaluation based on a user study.
Any questions?

* Evaluation by Simulated User Model [KCS11]
- Query generation model [Kim&Croft09]: select terms from a target document.
- State transition model: use browsing when the result list looks only marginally relevant.
- Link selection model: click on browsing suggestions based on perceived relevance.
(A sketch of this simulated user appears below.)
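
Here is a sketch of the three components of the simulated user; the thresholds and the term-overlap stand-in for perceived relevance are illustrative guesses, not the actual models from [KCS11].

```python
# Sketch of the simulated user: query generation, state transition,
# and link selection. All heuristics below are illustrative.
import random

random.seed(42)

def generate_query(target_doc, n_terms=2):
    """Query generation: sample terms from the target document."""
    return random.sample(target_doc["terms"], n_terms)

def should_browse(target_rank):
    """State transition: browse when the result list is only marginally
    relevant (here: target visible in the top 10 but not the top 3)."""
    return 3 <= target_rank < 10

def select_link(suggestions, target_doc):
    """Link selection: click the suggestion with the highest perceived
    relevance (here: term overlap with the target)."""
    return max(suggestions,
               key=lambda s: len(set(s["terms"]) & set(target_doc["terms"])))

# toy session
target = {"id": "d7", "terms": ["doctrack", "registration", "deadline"]}
suggestions = [
    {"id": "d2", "terms": ["meeting", "agenda"]},
    {"id": "d9", "terms": ["doctrack", "registration", "announcement"]},
]
print("query:", generate_query(target))
if should_browse(target_rank=5):
    print("browse to:", select_link(suggestions, target)["id"])
```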

* Community Efforts based on the Datasets

* Future Work
- User Interface: Concept Map Visualization, Query-based Concept Generation, Combine with Faceted Search
- Evaluation: Exploratory Search, Large-scale User Study
- Learning Method: More Features, Active Learning

OPTIONAL SLIDES

* Role of Personalization: using one person's click data for training results in much higher learning effectiveness.

* Quality of Browsing Suggestions – Person 1/2 (charts: Concept Browsing MRR and Document Browsing MRR)

* Building the Associative Browsing Model: Link Types
- Links between concepts: Term Vector Similarity, Temporal Similarity, Tag Similarity, String Similarity, Co-occurrence
- Links between documents: Term Vector Similarity, Temporal Similarity, Tag Similarity, Path / Type Similarity, Concept Similarity

* Quality of Browsing Suggestions (optional) (charts: concept browsing and document browsing on the CS Collection, measured in MRR)

* Evaluation Results (optional) (charts: success ratio of browsing; lengths of successful sessions)