1
Information Retrieval Effectiveness of Folksonomies on the World Wide Web
P. Jason Morrison
2
Information retrieval (IR) on the Web
Traditionally, there are two options:
1. Search Engines – documents are added to the collection automatically, with full-text searching using some algorithm.
2. Subject Directories – documents are collected and organized into a hierarchy or taxonomy by experts.
Many sites now use a new system:
3. Folksonomies – documents are collected and tagged with keywords by all users, brought together into a loose organizational system.
3
Folksonomies
Very little empirical study has been done on folksonomies. They are used by social bookmarking sites like Del.icio.us, photography sites like Flickr, and video sites like YouTube. Even large, established retailers like Amazon are starting to experiment with tagging.
4
Research Questions:
1. Do web sites that employ folksonomies return relevant results to users performing information retrieval tasks, specifically searching?
2. Do folksonomies perform as well as subject directories and search engines?
5
Hypotheses:
1. Despite different index sizes and categorization strategies, the top results from search engines, directories, and folksonomies will show some overlap. Items that show up in the results of more than one system will be more likely to be judged relevant.
2. There will be a significant difference between the IR effectiveness of search engines, expert-maintained directories, and folksonomies.
3. Folksonomies will perform as well as or better than search engines and directories for information needs that fall into entertainment or current-event categories. They will perform less well for factual or specific-document searches.
6
Gordon and Pathak’s (1999) Seven Features:
1. Searches should use real information needs.
2. Studies should try to capture the information need, not just the query used, if possible.
3. A large enough number of searches must be done to allow a meaningful evaluation.
4. Most major search engines should be included.
5. The special features of each engine should be utilized.
6. Relevance should be judged by the person with the information need.
7
Gordon and Pathak’s Seven Features, cont.:
7. Experiments need to be conducted so they provide meaningful measures:
- Good experimental design, such as returning results in a random order;
- Use of accepted IR measurements like recall and precision;
- Use of appropriate statistical tests.
8
Hawking et al.’s (2001) additional feature:
8. Search topics should include different types of information needs.
Four different types, based on the desired results:
1. A short factual statement that directly answers a question;
2. A specific document or web site that the user knows or suspects exists;
3. A selection of documents that pertain to an area of interest; or
4. An exhaustive list of every document that meets their need.
9
|  | Leighton and Srivastava (1997) | Gordon and Pathak (1999) | Hawking et al. (2001) | Can et al. (2003) | The Present Study |
|---|---|---|---|---|---|
| Information needs provided by | Library reference desk, other studies | Faculty members | Queries from web logs | Computer Science students and professors | Graduate students |
| Queries created by | The researchers | Skilled searchers | Queries from web logs | Same | |
| Relevance judged by | The researchers (by consensus) | Same faculty members | Research assistants | Same | |
| Participants | 2 | 33 faculty members | 6 | 19 | 34 |
| Total queries | 15 | 33 | 54 | 25 | 103 |
10
|  | Leighton and Srivastava (1997) | Gordon and Pathak (1999) | Hawking et al. (2001) | Can et al. (2003) | The Present Study |
|---|---|---|---|---|---|
| Engines tested | 5 | 8 | 20 | 8 | 8 |
| Results evaluated per engine | 20 | | | | |
| Total results evaluated / evaluator | 1500 | 160 | 3600 | 160 or 320 | About 160 |
| Relevancy scale | 4 categories | 4-point scale | Binary | | |
| Precision measures | P(20), weighted groups by rank | P(1-5), P(1-10), P(5-10), P(15-20) | P(1), P(1-5), P(5), P(20) | P(10), P(20) | P(20), P(1-5) |
| Recall measures | None | Relative recall; R(15-20), R(15-25), R(40-60), R(90-110), R(180-200) | None | Relative recall: R(10), R(20) | Relative recall: R(20), R(1-5) |
11
IR systems studied
- Two directories: Open Directory and Yahoo.
- Three search engines: Alta Vista, Live (Microsoft), and Google.
- Three social bookmarking systems representing the folksonomies: Del.icio.us, Furl, and Reddit.
12
General results
34 users, 103 queries, and 9,266 total results returned. The queries generated by participants were generally similar to those in previous studies in terms of word count and use of operators. Previous studies of search engine logs have shown that users rarely try multiple searches and rarely look past the first set of results; this fits the current study. For many queries, some IR systems did not return the full 20 results; in fact, there were many queries for which some IR systems returned 0 results.
13
Hypothesis 1: Overlap in results

| Number of engines returning the URL | Number of unique results | Relevancy rate | SD |
|---|---|---|---|
| 1 | 7223 | .1631 | .36947 |
| 2 | 617 | .2950 | .45640 |
| 3 | 176 | .3580 | .48077 |
| 4 | 43 | .4884 | .50578 |
| 5 | 15 | .4667 | .51640 |
| 6 | 2 | .0000 | .00000 |
| Total | 8076 | .1797 | .38393 |
14
IR system type combination
Engine types returning the same URL:

| Directory | Folksonomy | Search Engine | N | Mean (relevancy rate) |
|---|---|---|---|---|
| no | no | yes | 4801 | .2350 |
| no | yes | no | 2484 | .0676 |
| yes | no | no | 592 | .1419 |
| no | yes | yes | 94 | .3191 |
| yes | no | yes | 67 | .4179 |
| yes | yes | no | 12 | .1667 |
| yes | yes | yes | 26 | .4231 |
| Total | | | 8076 | .1797 |
15
Overlap of results: findings
- Almost 90% of results were returned by just one engine – fits well with previous studies.
- Results found by both search engines and folksonomies were significantly more likely to be relevant.
- The directory/search engine group had a higher relevancy rate than the folksonomy/search engine group, but the difference was not significant.
- Allowing tagging or meta-searching a folksonomy could improve search engine performance.
- Hypothesis 1 is supported.
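The overlap figures above amount to grouping each unique (query, URL) pair by how many systems returned it and then averaging the relevance judgments within each group. A minimal sketch of that computation, assuming a simple record layout that is not taken from the study itself:

```python
from collections import defaultdict

# Hypothetical layout: one record per (query, system, url, judged_relevant).
# The names and sample rows are illustrative, not the study's actual data.
results = [
    ("query 1", "Google", "http://example.com/a", True),
    ("query 1", "Del.icio.us", "http://example.com/a", True),
    ("query 1", "Reddit", "http://example.com/b", False),
]

# Count how many distinct systems returned each (query, URL) pair,
# then compute the relevancy rate at each overlap level.
systems_per_url = defaultdict(set)
relevant = set()
for query, system, url, judged_relevant in results:
    systems_per_url[(query, url)].add(system)
    if judged_relevant:
        relevant.add((query, url))

by_overlap = defaultdict(list)
for key, systems in systems_per_url.items():
    by_overlap[len(systems)].append(1 if key in relevant else 0)

for n_engines, judgments in sorted(by_overlap.items()):
    rate = sum(judgments) / len(judgments)
    print(f"{n_engines} engine(s): {len(judgments)} unique results, relevancy rate {rate:.4f}")
```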
16
Hypothesis 2: Performance differences
Performance measures:
- Precision
- Relative recall
- Retrieval rate (also calculated)
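The slides name these measures but not their formulas. A minimal sketch, assuming the usual definitions for a study like this (precision over the results actually returned up to the cutoff, relative recall against the pool of relevant results found by any compared system for the same query, and retrieval rate as the share of the requested 20 results that were returned):

```python
def precision_at_k(results, relevant, k=20):
    """Share of the (up to) top-k returned results judged relevant.
    Undefined (None) when the system returned nothing for the query."""
    top = results[:k]
    if not top:
        return None
    return sum(1 for url in top if url in relevant) / len(top)

def relative_recall(results, relevant, pooled_relevant, k=20):
    """Relevant results this system returned, divided by all relevant
    results returned by any of the compared systems for the same query."""
    if not pooled_relevant:
        return None
    hits = sum(1 for url in results[:k] if url in relevant)
    return hits / len(pooled_relevant)

def retrieval_rate(results, k=20):
    """Share of the requested k results the system actually returned."""
    return min(len(results), k) / k
```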
17
Performance (dcv 20)

| IR System | | Precision | Recall | Retrieval Rate |
|---|---|---|---|---|
| Open Directory | Mean | .172297 | .023934 | 0.1806 |
| | N | 37 | 98 | 103 |
| Yahoo Directory | Mean | .270558 | .063767 | 0.1709 |
| | N | 36 | 98 | 103 |
| Del.icio.us | Mean | .210853 | .041239 | 0.1908 |
| | N | 43 | 98 | 103 |
| Furl | Mean | .093840 | .044975 | 0.5311 |
| | N | 75 | 98 | 103 |
| Reddit | Mean | .041315 | .042003 | 0.5617 |
| | N | 62 | 98 | 103 |
| Google | Mean | .286022 | .351736 | 0.8942 |
| | N | 93 | 98 | 103 |
| Live | Mean | .235437 | .341294 | 0.9845 |
| | N | 103 | 98 | 103 |
| Alta Vista | Mean | .262990 | .431267 | 0.9845 |
| | N | 102 | 98 | 103 |
| Total | Mean | .204095 | .167527 | 0.5623 |
| | N | 551 | 784 | 824 |
18
Precision at positions 1-20
19
Recall at positions 1-20
20
Average performance at dcv 1-5

| IR System Type | | Avg Precision | Avg Recall | Avg Retrieval Rate |
|---|---|---|---|---|
| Directory | Mean | 0.2647 | 0.0183 | 0.2899 |
| | N | 73 | 196 | 206 |
| Folksonomy | Mean | 0.1214 | 0.0119 | 0.5290 |
| | N | 180 | 294 | 309 |
| Search Engine | Mean | 0.4194 | 0.1294 | 0.9631 |
| | N | 298 | 294 | 309 |
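Averages like these can be reproduced from a per-(query, system) score table; a sketch using pandas, with invented column names and sample values rather than the study's data:

```python
import pandas as pd

# Invented sample rows: one row per (query, system) pair.
scores = pd.DataFrame({
    "system_type": ["Directory", "Folksonomy", "Search Engine", "Search Engine"],
    "precision":   [0.25, 0.10, 0.45, None],   # None: system returned no results
    "recall":      [0.02, 0.01, 0.15, 0.10],
    "retrieval_rate": [0.30, 0.55, 1.00, 0.90],
})

# Mean and N per system type; missing scores are skipped automatically,
# which is one reason the Ns differ across measures in the tables above.
summary = scores.groupby("system_type")[["precision", "recall", "retrieval_rate"]].agg(["mean", "count"])
print(summary)
```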
21
Performance differences: findings
- There are statistically significant differences among individual IR systems and among IR system types.
- Search engines had the best performance by all measures.
- In general, directories had better precision than folksonomies, but the difference was not usually statistically significant.
- Del.icio.us performed as well as or better than the directories.
- Hypothesis 2 is supported.
22
Hypothesis 3: Performance for different needs
Do folksonomies perform better than the other IR systems for some information needs, and worse for others?
23
Comparing information need categories

| Info Need Category | IR System Type | | Avg Precision | Avg Recall | Avg Retrieval Rate |
|---|---|---|---|---|---|
| Short Factual Answer | Directory | Mean | .218610 | .009491 | .349404 |
| | | N | 12 | | 28 |
| | Folksonomy | Mean | .060118 | .007089 | .601270 |
| | | N | 28 | | 42 |
| | Search Engine | Mean | .440501 | .095157 | .952381 |
| | | N | 40 | | 42 |
| Specific Item | Directory | Mean | .193333 | .033187 | .332540 |
| | | N | 17 | 38 | 42 |
| | Folksonomy | Mean | .027187 | .008421 | .447513 |
| | | N | 32 | 57 | 63 |
| | Search Engine | Mean | .353550 | .268214 | .968254 |
| | | N | 61 | 57 | 63 |
| Selection of Relevant Items | Directory | Mean | .304849 | .015789 | .264510 |
| | | N | 44 | 130 | 136 |
| | Folksonomy | Mean | .160805 | .013932 | .539314 |
| | | N | 120 | 195 | 204 |
| | Search Engine | Mean | .435465 | .096227 | .963644 |
| | | N | 197 | 195 | 204 |
24
News and entertainment searches

| Information Need | IR System Type | | Avg Precision | Avg Recall | Retrieval Rate |
|---|---|---|---|---|---|
| News | Directory | Mean | .000000 | .000000 | .069365 |
| | | N | 4 | 40 | 42 |
| | Folksonomy | Mean | .154666 | .016822 | .573439 |
| | | N | 40 | 60 | 63 |
| | Search Engine | Mean | .372350 | .096911 | .961640 |
| | | N | 61 | 60 | 63 |
| Entertainment | Directory | Mean | .302223 | .021324 | .241111 |
| | | N | 6 | 16 | 18 |
| | Folksonomy | Mean | .136221 | .016272 | .483457 |
| | | N | 15 | 24 | 27 |
| | Search Engine | Mean | .299065 | .127569 | .925926 |
| | | N | 25 | 24 | 27 |
25
Factual and exact site searches

| Information Need | IR System Type | | Avg Precision | Avg Recall | Retrieval Rate |
|---|---|---|---|---|---|
| Factual | Directory | Mean | .218610 | .009491 | .349404 |
| | | N | 12 | | 28 |
| | Folksonomy | Mean | .060118 | .007089 | .601270 |
| | | N | 28 | | 42 |
| | Search Engine | Mean | .440501 | .095157 | .952381 |
| | | N | 40 | | 42 |
| Exact Site | Directory | Mean | .193333 | .033187 | .332540 |
| | | N | 17 | 38 | 42 |
| | Folksonomy | Mean | .027187 | .008421 | .447513 |
| | | N | 32 | 57 | 63 |
| | Search Engine | Mean | .353550 | .268214 | .968254 |
| | | N | 61 | 57 | 63 |
26
Performance for different info needs: findings
- Significant differences were found among folksonomies, search engines, and directories for the three info need categories.
- When comparing within info need categories, the search engines had significantly better precision. Recall scores showed a similar pattern, but the differences were not significant.
- Folksonomies did not perform significantly better for news and entertainment searches, but they did perform significantly worse than search engines for factual and exact-site searches.
- Hypothesis 3 is only partly supported.
27
What other factors impacted performance?
- For the study as a whole, the use of query operators correlated negatively with recall and retrieval rate. Non-Boolean operators correlated negatively with precision scores.
- When looking at just folksonomy searches, query operator use led to even lower recall and retrieval scores.
- Some specific cases were not handled by the folksonomies. A search for movie showtimes in a certain ZIP code (“showtimes 45248 borat”) had zero results on all folksonomies.
- Queries limited by geography and queries on obscure topics can perform poorly in folksonomies because users might not have added or tagged relevant items yet.
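The slide does not say which statistic was used for these correlations; purely as an illustration, a relationship between operator use and a performance score could be checked like this (all values are made up):

```python
from scipy.stats import pearsonr

# Made-up per-query values: 1 if the query used any operator, else 0,
# paired with that query's relative recall score.
used_operator = [1, 0, 0, 1, 0, 0, 1, 0]
recall_score  = [0.02, 0.10, 0.08, 0.00, 0.12, 0.09, 0.01, 0.11]

r, p_value = pearsonr(used_operator, recall_score)
print(f"r = {r:.3f}, p = {p_value:.3f}")  # a negative r would match the slide's finding
```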
28
User factors
- For the most part, user experience did not correlate significantly with performance measures.
- Expert users were more likely to have lower precision scores; the same correlation was found when controlling for query factors.
- Experienced users are probably less likely to deem a result relevant.
29
Recommendations
- Further research is needed; additional folksonomies should be studied as well.
- It might be useful to collect additional types of data, such as whether or not participants clicked through to look at sites before judging.
- Additional analysis on ranking would be interesting.
- Any similar study must also deal with difficult technical issues like server and browser timeouts.
30
Conclusions
- The overlap between folksonomy results and search engine results could be used to improve Web IR performance.
- The search engines, with their much larger collections, performed better than directories and folksonomies in almost every case.
- Folksonomies may be better than directories for some needs, but more data is required.
- Folksonomies are particularly bad at finding a factual answer or one specific site.
31
Conclusions (cont.)
Although search engines had better performance across the board, folksonomies are promising because:
1. They are relatively new and may improve with time and additional users;
2. Search results could be improved with relatively small changes to the way query operators and search terms are used;
3. There are many variations in organization to be tried.
32
Future research
- Look at the difference between systems that primarily use tagging (Del.icio.us, Furl) and those that use ranking (Reddit, Digg). Which variations are more successful?
- Tags, titles, categories, descriptions, comments, and even full text are collected by various folksonomies. Where should weight be placed?
- Should a document that matches the query closely rank higher than one with many votes, or vice versa? (See the sketch below.)
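The slide leaves the weighting question open. A hypothetical scoring function, just to make the tag-match vs. popularity trade-off concrete (the function, weights, and normalization are all invented, not drawn from any of the studied systems):

```python
import math

def folksonomy_score(query_terms, doc_tags, votes, tag_weight=0.7, vote_weight=0.3):
    """Hypothetical ranking score mixing tag match against vote count.

    tag_match: share of query terms found among the document's tags.
    vote_signal: log-damped vote count, so very popular items do not
    automatically outrank close tag matches.
    """
    tag_match = sum(1 for t in query_terms if t in doc_tags) / max(len(query_terms), 1)
    vote_signal = math.log1p(max(votes, 0)) / 10.0  # rough normalization
    return tag_weight * tag_match + vote_weight * vote_signal

# A close tag match with few votes vs. a weak match with many votes:
print(folksonomy_score(["borat", "showtimes"], {"borat", "movies", "showtimes"}, votes=3))
print(folksonomy_score(["borat", "showtimes"], {"movies"}, votes=5000))
```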
33
Future research (cont.)
- Artificial situations could be set up to study absolute recall and searches for an exhaustive list of items.
- Similar studies should be done on IR systems covering smaller domains, like video. Blog search systems in particular would be interesting.
- What about other IR behaviors, such as browsing?
- There are many other fascinating topics, such as the social networks in some folksonomies and what motivates users to tag items.