
Evaluation of Google Coop and Social Bookmarking at the Overseas Development Institute By Paul Matthews and Arne Wunder


1 Evaluation of Google Coop and Social Bookmarking at the Overseas Development Institute By Paul Matthews (p.matthews@odi.org.uk) and Arne Wunder (a.wunder@lse.ac.uk)

2 Background
Web 2.0 approaches: Communities of Practice share recommended sources and bookmarks
Focuss.eu: initiative of European development think tanks
Growing popularity of social bookmarking; interest in usage within organisations
Folksonomy over taxonomy; serendipity in addition to traditional search and retrieval

3 Objective 1 Comparative relevance assessment of specialised international development search engine Focuss.eu (using Google Coop) against Google web search

4 Objective 2 Investigate how staff use bookmarking and test a pilot intranet-based bookmarking system

5 Overseas Development Institute ODI is a registered charity and Britain's leading independent think tank on international development and humanitarian issues. Main task: policy-focused research and dissemination, mainly for the Department for International Development (DFID). 127 staff members, most of them researchers.

6 Search engines: research design
No. of search engines compared: 2 (Google.com, Focuss.eu)
Features of evaluation: blind relevance judgement of top eight live results; following hyperlinks was possible
Queries: user-defined; range of subjects: development policy
Basic population: 127
No. of jurors (= sample): 14
No. of queries: 30
Average no. of search terms: 2.66
Impressions; items judged: 447
Originators, jurors: ODI staff (research, support)
Qualitative dimension: semi-structured expert interviews to capture user narratives and general internet research behaviour

7 Search engines: application

8 Search engines: analysis
(1) Mean relevance: total score for relevance ratings divided by number of ratings
(2) Term-sensitive relevance: relevance comparison for searches using strictly development-related terms vs. “ambiguous” terms
(3) Direct case-by-case comparison: comparison of relevance scores for each query: which search engine “wins”?
(4) High relevance per search: number of high-quality results (relevance of 4 or 5) per search
(5) User narratives: what role do search engines play in individual research strategies?
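Measures (1), (3) and (4) above can be sketched in a few lines of code. The field names and sample judgments below are purely illustrative, not the study's actual dataset, which recorded 447 judged items across 30 queries.

```python
from collections import defaultdict

# Each judgment: (query, engine, relevance score on a 1-5 scale).
# Sample data is illustrative only, not the study's dataset.
judgments = [
    ("food security", "focuss", 5), ("food security", "focuss", 4),
    ("food security", "google", 3), ("food security", "google", 2),
    ("governance", "focuss", 4), ("governance", "google", 4),
    ("governance", "google", 4),
]

def mean_relevance(judgments, engine):
    """(1) Total of relevance ratings divided by number of ratings."""
    scores = [s for _, e, s in judgments if e == engine]
    return sum(scores) / len(scores)

def case_by_case_wins(judgments, engine_a, engine_b):
    """(3) For each query, which engine has the higher mean score?

    Assumes both engines were judged on every query.
    """
    per_query = defaultdict(lambda: defaultdict(list))
    for q, e, s in judgments:
        per_query[q][e].append(s)
    wins = {engine_a: 0, engine_b: 0, "tie": 0}
    for by_engine in per_query.values():
        a = sum(by_engine[engine_a]) / len(by_engine[engine_a])
        b = sum(by_engine[engine_b]) / len(by_engine[engine_b])
        if a > b:
            wins[engine_a] += 1
        elif b > a:
            wins[engine_b] += 1
        else:
            wins["tie"] += 1
    return wins

def high_relevance_searches(judgments, engine, threshold=4):
    """(4) Number of queries with at least one result scoring 4 or 5."""
    hits = {q for q, e, s in judgments if e == engine and s >= threshold}
    return len(hits)
```

On this toy data, Focuss wins the first query on mean score, the second is a tie, and both engines produce at least one highly relevant result for some queries; the study's actual figures come from the blind judgments described on the previous slide.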

9 Search engines: findings: (1) Mean overall relevance Interpretation: Globally, Focuss outperforms Google web search significantly

10 Search engines: findings: (2) Term-sensitive relevance Interpretation: The true strength of Focuss lies in dealing with relatively ambiguous terms. In other words: It succeeds in avoiding the noise of unrelated ambiguous results

11 Search engines: findings: (3) Direct case-by-case comparison Interpretation: Focuss outperforms Google web search in a significant number of searches, although this advantage is less clear in searches using strictly development-related terms

12 Search engines: findings: (4) High relevance per search Interpretation: Focuss is slightly more likely to produce at least one highly relevant result for each search than Google web search.

13 Search engines: findings: (5) Interviews Search engines used for less complex research tasks or for getting quick results. Search engines criticised for failing to include the most relevant and authoritative knowledge contained in databases and books. Google Scholar praised for including some relevant scholarly journals but criticised for its weak coverage and degree of noise. For more complex research tasks, online journals and library catalogues are the preferred research sources. Interpretation: even specialised search engines are far from being a panacea, as they do not solve the “invisible web” issue.

14 Search engines: Conclusion Focuss’s strength is its context-specificity: it achieves a better overall relevance and a better likelihood of producing at least one highly relevant result per search. However, both engines still have structural limitations. Doing good development research is therefore not about choosing the “right search engine” but about choosing the right tools for each individual research task.

15 Bookmarking: Design Survey of user requirements and behaviour Creation of bookmarking module for intranet (MS SharePoint) Usability testing Preliminary analysis

16 Bookmarking: Survey (n=18)


18 Bookmarking: Application

19 Bookmarking: testing, task completion
1) Manual add (100%)
2) Favourites upload (60%): non-standard characters in links; wrong destination URL
3) Bookmarklet (46%): pop-up blockers; IE security zones

20 Bookmarking: testing, feedback What are the incentives for and advantages of sharing? Preference for structured over free tagging. Public vs. private bookmarking: tedious to sort out which bookmarks to share.

21 Bookmarking: analysis Emergence of a long-tail folksonomy
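A long-tail folksonomy shows up as a tag-frequency distribution in which a handful of tags are applied heavily while most tags appear only once or twice. A minimal sketch of how such a distribution could be derived from bookmark records (the URLs and tags here are illustrative, not the pilot system's data):

```python
from collections import Counter

# Illustrative bookmark records: each bookmark carries free-form user tags.
bookmarks = [
    {"url": "http://example.org/a", "tags": ["development", "policy"]},
    {"url": "http://example.org/b", "tags": ["development", "aid"]},
    {"url": "http://example.org/c", "tags": ["development", "water"]},
    {"url": "http://example.org/d", "tags": ["policy", "trade"]},
]

# Count how often each tag is used across all bookmarks.
tag_counts = Counter(tag for bm in bookmarks for tag in bm["tags"])

# Rank tags by frequency; a long tail means most tags sit at count 1.
ranked = tag_counts.most_common()
singletons = sum(1 for _, n in ranked if n == 1)
```

Plotting `ranked` as frequency against rank would give the characteristic long-tail curve: a steep head of popular shared tags and a long flat tail of one-off tags.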

22 Bookmarking: conclusions Use of implicit taxonomy useful & time-saving. User base unsophisticated. Users want both order (taxonomy) and flexibility (free tagging). We need to prove the value of sharing & reuse (maybe by harnessing interest in RSS).

23 References
Brophy, J. and D. Bawden (2005) ‘Is Google enough? Comparison of an internet search engine with academic library resources’. Aslib Proceedings 57(6): 498-512.
Kesselman, M. and S.B. Watstein (2005) ‘Google Scholar™ and libraries: point/counterpoint’. Reference Services Review 33(4): 380-387.
Mathes, A. (2004) ‘Folksonomies - Cooperative Classification and Communication Through Shared Metadata’.
Millen, D., J. Feinberg and B. Kerr (2005) ‘Social bookmarking in the enterprise’. ACM Queue 3(9): 28-35.

