CS466-251 Future Direction: Collaborative Filtering

Presentation transcript:

Future Direction: Collaborative Filtering
Motivating Observations: Relevance feedback is useful, but expensive.
a) Humans don't often have time to give positive/negative judgments on a long list of returned web pages to improve individual searches.
b) Effort is used once, then wasted. We want pooling and re-use of efforts across individuals.

Collaborative Filtering
Motivating Observations (continued): Relevance ≠ Quality
Queries: bootleg CD's, NAFTA, Medical School Admissions, Simulated Annealing, REM, Alzheimer's
Many web pages can be "about" a topic, but there are great differences in quality of presentation, detail, professionalism, substance, etc.

Possible Solution: build a supervised learner for quality, NOT topic matter.
Train on examples of each; learn the distinguishing properties.

One Solution: a Supervised Learner for the "Quality" of a Page
Estimate P(Quality | Features), in addition to topic similarity. Salient features may include:
- # of links
- Size
- How often cited
- Variety of content
- "Top 5% of the Web" awards, etc.
- Assessment of usage counter (hit count)
- Complexity of graphics (as a proxy for quality??)
- Prior quality rating of the server
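A minimal sketch of such a quality learner, assuming a handful of hand-labeled pages and hypothetical feature values; any classifier with probabilistic output would serve:

# Sketch: estimate P(Quality | Features), independent of topic.
# Feature values and labels below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [# of links, page size (KB), citation count, hit count,
#            # of graphics, prior quality rating of server]
X_train = np.array([
    [120, 35, 14, 5000, 10, 0.9],   # labeled high quality
    [  3,  2,  0,   40,  1, 0.2],   # labeled low quality
    [ 45, 20,  5, 1200,  4, 0.7],   # labeled high quality
    [  1,  1,  0,   10,  0, 0.1],   # labeled low quality
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

new_page = np.array([[60, 25, 8, 2000, 6, 0.8]])
p_quality = model.predict_proba(new_page)[0, 1]   # P(Quality | Features)
print(f"P(quality) = {p_quality:.2f}")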

Collaborative Filtering
Problem: different humans have different profiles of relevance/quality.
Query: Alzheimer's disease. The same document or web page may be:
- Relevant (high quality) for a 6th grader
- Appropriate for a care giver
- Appropriate for a medical researcher

One Solution: pool collective wisdom and compute a weighted average of page rankings across multiple users in an affinity group (taking into account topic relevance, quality, and other intangibles).
Hypothesis: humans have a better idea than machines of what other humans will find interesting.
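A minimal sketch of the pooling step, assuming each user in the affinity group has already rated the page and carries a hypothetical affinity weight:

# Sketch: pool one page's rankings across an affinity group.
# User names, ratings, and affinity weights are hypothetical placeholders.
ratings = {"alice": 4.0, "bob": 3.0, "carol": 5.0}   # each user's rating of the page
weights = {"alice": 0.9, "bob": 0.4, "carol": 0.7}   # affinity of each user to me

pooled = sum(weights[u] * ratings[u] for u in ratings) / sum(weights.values())
print(f"pooled group score = {pooled:.2f}")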

Collaborative Filtering Idea: instead of trying to model (often intangible) quality judgments, keep a record of previous human relevance and quality judgments.
Query: Alzheimer's
[Table: each user's ranking of web pages A-G for the query.]
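One minimal way to keep such a record, sketched with hypothetical user IDs, page labels, and ratings, is a per-query table mapping (user, page) pairs to stored judgments:

# Sketch: a record of past relevance/quality judgments, keyed by query.
# User IDs, page labels, and ratings below are hypothetical placeholders.
from collections import defaultdict

judgments = defaultdict(dict)          # query -> {(user, page): rating}

def record(query, user, page, rating):
    judgments[query][(user, page)] = rating

record("alzheimer's", "user1", "A", 5)
record("alzheimer's", "user1", "B", 2)
record("alzheimer's", "user2", "A", 1)

# Later queries can consult the stored table instead of asking for new judgments.
print(judgments["alzheimer's"])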

Solution 1: Identify individuals with similar tastes (high Pearson correlation coefficient on shared ranking judgments).
Instead of computing P(relevant to me | Page_i content) directly, compute:
P(relevant to me | relevant to you) * P(relevant to you | Page_i content)
where P(relevant to me | relevant to you) is estimated from my similarity to you, and P(relevant to you | Page_i content) comes from your judgments.
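A minimal sketch of Solution 1, assuming a small hypothetical rating table (np.nan marks pages a user has not judged): compute the Pearson correlation between my judgments and each other user's, then predict my interest in an unseen page from positively correlated users' judgments.

# Sketch: user-user collaborative filtering with Pearson similarity.
# The rating table, user names, and page labels are hypothetical placeholders.
import numpy as np

pages = ["A", "B", "C", "D", "E"]
ratings = {                              # each user's ratings of pages A-E
    "me":    np.array([5.0, 4.0, np.nan, 1.0, np.nan]),
    "user1": np.array([4.0, 5.0, 4.0,    2.0, 5.0]),
    "user2": np.array([1.0, 2.0, 5.0,    5.0, 1.0]),
}

def pearson(a, b):
    """Pearson correlation over the pages both users have judged."""
    both = ~np.isnan(a) & ~np.isnan(b)
    if both.sum() < 2:
        return 0.0
    return float(np.corrcoef(a[both], b[both])[0, 1])

me = ratings["me"]
target = pages.index("C")                # a page I have not judged yet

num = den = 0.0
for user, r in ratings.items():
    if user == "me" or np.isnan(r[target]):
        continue
    sim = pearson(me, r)                 # ~ P(relevant to me | relevant to you)
    if sim <= 0:
        continue                         # only pool users with similar tastes
    num += sim * r[target]               # your judgment of the page, weighted
    den += sim

predicted = num / den if den else 0.0
print(f"predicted rating of page C for me: {predicted:.2f}")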

Solution 2: Model group profiles for relevance judgments (e.g., Junior High School students vs. Medical Researchers).
Compute:
P(relevant to me | relevant to group g) * P(relevant to group g | Page_i content)
where P(relevant to me | relevant to group g) is estimated from my similarity to the group, and P(relevant to group g | Page_i content) comes from the group's collective (average) relevance judgments, which can be learned by supervised learning.
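A minimal sketch of Solution 2, with hypothetical group labels, member judgments, and similarity weights (in practice, my similarity to each group would be estimated from my past judgments, e.g. by a supervised model):

# Sketch: group-profile collaborative filtering.
# Group names, judgments, and my similarities to the groups are hypothetical.
import numpy as np

# Relevance judgments of one page by members of each group (1 = relevant, 0 = not).
group_judgments = {
    "junior_high":        [1, 1, 0, 1],
    "medical_researcher": [0, 0, 1, 0],
}

# My similarity to each group (placeholder values; weights sum to 1).
my_similarity = {"junior_high": 0.2, "medical_researcher": 0.8}

# P(relevant to me | page) ~ sum_g my_similarity[g] * P(relevant to group g | page)
score = sum(
    my_similarity[g] * np.mean(judgments)
    for g, judgments in group_judgments.items()
)
print(f"estimated relevance of the page to me: {score:.2f}")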