Tutorial: User Interfaces & Visualization for Information Access Prof. Marti Hearst University of California, Berkeley SIGIR 2000

Outline
Search Interfaces Today
HCI Foundations
The Information Seeking Process
Visualizing Text Collections
Incorporating Context and Tasks
Promising Future Directions

Introductory Remarks
Much of HCI is art, still not science
In this tutorial, I discuss user studies whenever available
I do not have time to do justice to most of the topics

What do current search interfaces do well?

Web search interface solutions
Single-word queries
– Standard IR assumed long queries
– Web searches average only one or two words
Problems:
– One word can have many meanings
– What context is the word used in?
– Which of many articles to retrieve?

Web search interface solutions
Single-word queries
Solutions:
– Incorporation of manually-created categories
  Provides useful starting points
  Disambiguates the term(s)
  Development of the sideways hierarchy representation
– Ranking that emphasizes starting points
  Link analysis finds server home pages, etc.
– Use of the behavior of other users
  Suggests related pages
  Popularity of pages

Web search interfaces are interesting in terms of what they do NOT do
Current Web interfaces
– Are the results of each site experimenting
– Only those ideas that work for most people survive
– Only very simple ideas remain
Abandoned strategies
– Scores and graphical bars that show degree of match
– Associated term graph (AltaVista)
– Suggested terms for expansion (Excite)
Why did these die?

What is lacking in Web search?

What is lacking?
Support for complex information needs
– Info on the construction on highway 80
– Research chemo vs. surgery
– How does the 6th circuit tend to rule on intellectual property cases?
– What is the prior art for this invention?

What is lacking?
Integration of search and analysis
– Support for a series of searches
– Backing up and moving forward
– Suggested next steps
– Comparisons and contrasts
– Personal prior history and interests
More generally:
– CONTEXT
– INTERACTIVITY

What is lacking?
Question Answering
– Answers, not documents!
– Active area of research and industry
– Not always appropriate
  Should I have chemo or surgery?
  Who will win the election?

Question Answering State-of-the-Art
Kupiec SIGIR 93, Srihari & Li NAACL 00, Cardie et al. NAACL 00
Goal: Find a paragraph, phrase, or sentence that (hopefully) answers the question.
Approach:
– Identify certain types of noun phrases
  People
  Dates
– Hook these up with question types
  Who
  When
– Match keywords in the question to keywords in candidate answers that contain the right kind of NP
– Use syntactic or simple frame semantics to help with matching (optional)
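As a rough illustration of the approach above, here is a toy sketch that maps question words to expected entity types and ranks candidate sentences by type match plus keyword overlap. The tiny tag_entities() stand-in is a crude placeholder assumption; a real system would use a named-entity recognizer.

```python
# Map question words to the kind of noun phrase expected in the answer.
QUESTION_TYPES = {"who": "PERSON", "when": "DATE", "where": "LOCATION"}

def tag_entities(sentence):
    """Placeholder entity tagger: guesses which entity types appear."""
    types = set()
    tokens = sentence.split()
    if any(tok.istitle() for tok in tokens[1:]):   # skip sentence-initial cap
        types.add("PERSON")
    if any(tok.isdigit() and len(tok) == 4 for tok in tokens):
        types.add("DATE")
    return types

def answer(question, candidate_sentences):
    q_tokens = question.lower().rstrip("?").split()
    wanted = QUESTION_TYPES.get(q_tokens[0])
    keywords = set(q_tokens[1:])
    def score(sent):
        overlap = len(keywords & set(sent.lower().split()))
        has_type = 1 if wanted in tag_entities(sent) else 0
        return (has_type, overlap)   # right NP type first, then overlap
    return max(candidate_sentences, key=score)

print(answer("Who won the Pulitzer Prize?",
             ["In 1968 Norman Mailer won the Pulitzer Prize.",
              "The prize is awarded annually."]))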

Question Answering Example
Q: What Pulitzer prize-winning author ran unsuccessfully for mayor of New York?
Relevant sentences from an encyclopedia:
In 1968 Norman Mailer won the Pulitzer Prize for ….
In 1975, Mailer ran for the office of New York City mayor. …

The future of search tools: A Prediction of a Dichotomy
Information Intensive
– Business analysis
– Scientific research
– Planning & design
Quick lookup
– Question answering
– Context-dependent info (location, time)

Human-Computer Interaction

What is HCI?
HCI: Human-Computer Interaction
A discipline concerned with the
– design
– evaluation
– implementation
of interactive computing systems for human use
The study of major phenomena surrounding the interaction of humans with computers

Shneiderman on HCI
Well-designed interactive computer systems:
– Promote positive feelings of success, competence, and mastery
– Allow users to concentrate on their work, rather than on the system

What is HCI?
[Diagram: design mediates among humans, technology, the task, and organizational & social issues]

User-centered Design
Focus first on what people need to do, not what the system needs to do
Formulate typical scenarios of use
Take into account
– Cognitive constraints
– Organizational/social constraints
Keep users involved throughout the project

Waterfall Design Model (from Software Engineering) (slide by James Landay)
[Diagram: Initiation → Analysis → Design → Implementation, producing in turn an application description, a requirements specification, a system design, and a product — then a question mark]

UI Design = Iteration
[Cycle: Design → Prototype → Evaluate → Design …]

Comparing Design Processes
Waterfall model
– The customer is not the user
User-centered design
– Assess what the user needs
– Design for this
– Redesign if user needs are not met

Steps in Standard UI Design
Needs Assessment / Task Analysis
Low-fidelity Prototype & Evaluation
Redesign
Interactive Prototype
Heuristic Evaluation
Redesign
Revised Interactive Prototype
Pilot User Study
Redesign
Revised Interactive Prototype
Larger User Study

Task Analysis
Observe existing work practices
Create examples and scenarios of actual use
Try out new ideas before building software

Rapid Prototyping (slide by James Landay)
Build a mock-up of the design
Low-fidelity techniques
– paper sketches
– cut, copy, paste
– video segments
Interactive prototyping tools
– Visual Basic, HyperCard, Director, etc.
UI builders
– NeXT, etc.

Usability Evaluation: Standard Techniques
User studies
– Have people use the interface to complete some tasks
– Requires an implemented interface
– "Discount" vs. scientific results
Heuristic evaluation
– A usability expert assesses the interface against guidelines

Cognitive Considerations: Norman’s Action Cycle
Human action has two aspects: execution and evaluation
Execution: doing something
Evaluation: comparing what happened to what was desired

Action Cycle
[Diagram: goals connected to the world via execution and evaluation]

Action Cycle
[Diagram: from Goals, execution proceeds through the intention to act, a sequence of actions, and execution of that sequence upon The World; evaluation proceeds from perceiving the state of the world, through interpreting the perception, to evaluating the interpretations against the goals]

Norman’s Action Cycle
Execution has three stages:
– Start with a goal
– Translate it into an intention
– Translate that into a sequence of actions
Now execute the actions
Evaluation has three stages:
– Perceive the world
– Interpret what was perceived
– Compare with the original intentions

Gulf of Evaluation
The amount of effort a person must exert to interpret
– the physical state of the system
– how well their expectations and intentions have been met
We want a small gulf!

Mental Models (based on slide by Saul Greenberg)
People have mental models of how things work:
– how does your car start?
– how does an ATM work?
– how does your computer boot?
These models allow people to make predictions about how things will work

Strategy for Design (based on slide by Saul Greenberg)
Provide a good conceptual model
– allows users to predict the consequences of actions
– communicated through the image of the system
– relations between the user’s intentions, the required actions, and the results should be
  sensible
  consistent
  meaningful (non-arbitrary)

Design Guidelines
Shneiderman’s 8 design rules:
– Consistency
– Shortcuts (for experts)
– Feedback
– Closure
– Error prevention
– Easy reversal of actions
– User control
– Low memory burden
There are hundreds of design guideline listings!

Design Guidelines for Search UIs
I think the most important are:
– Reduce memory burden / provide feedback
  Previews
  History
  Context
– User control
  Query modification
  Flexible manipulation of results
– Easy reversal of actions

Designing for Error
Norman on designing for error:
– Understand the causes of error and design to minimize those causes
– Make it possible to reverse actions
– Make it hard to perform non-reversible actions
– Make it easy to discover the errors that do occur
– Change the attitude towards errors: a user is attempting to do a task, getting there by imperfect approximations; actions are approximations to what is actually desired

HCI Intro Summary
UI design involves users
UI design is iterative
– An art, not a science
– Evaluation is key
Design guidelines
– are useful
– but applying them to information-centric systems can be difficult

Recommended HCI Books
Alan Dix et al., Human-Computer Interaction, 2nd edition, Prentice Hall, 1998.
Ben Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, 3rd ed., Addison-Wesley, 1998.
Jakob Nielsen, Usability Engineering, Morgan Kaufmann, 1994.
Holtzblatt and Beyer, Making Customer-Centered Design Work for Teams, CACM 36(10), October 1993.
world.std.com/~uieweb
usableweb.com

Supporting the Information Seeking Process
Two parts to the process:
– search and retrieval
– analysis and synthesis of search results

Standard IR Model
Assumptions:
– Maximizing precision and recall simultaneously
– The information need remains static
– The value is in the resulting document set

Problems with the Standard Model
Users learn during the search process:
– Scanning titles of retrieved documents
– Reading retrieved documents
– Viewing lists of related topics/thesaurus terms
– Navigating hyperlinks
Some users don’t like long disorganized lists of documents

A sketch of a searcher…
“moving through many actions towards a general goal of satisfactory completion of research related to an information need.” (after Bates 89)
[Diagram: a query evolving through successive reformulations Q0 → Q1 → Q2 → Q3 → Q4 → Q5]

Berry-picking model (Bates 90)
The query is continually shifting
Users may move through a variety of sources
New information may yield new ideas and new directions
The query is not satisfied by a single, final retrieved set, but rather by a series of selections and bits of information found along the way

Implications
Interfaces should make it easy to store intermediate results
Interfaces should make it easy to follow trails with unanticipated results
This makes evaluation more difficult

Orienteering (O’Day & Jeffries 93)
Interconnected but diverse searches on a single, problem-based theme
Focus on information delivery rather than search performance
Classifications resulting from an extended observational study:
– 15 clients of professional intermediaries
– financial analyst, venture capitalist, product marketing engineer, statistician, etc.

Orienteering (O’Day & Jeffries 93)
Identified three main search types:
– Monitoring
– Following a plan
– Exploratory
A series of interconnected but diverse searches on one problem-based theme
– Changes in direction caused by “triggers”
Each stage followed by reading, assimilation, and analysis of the resulting material

Orienteering (O’Day & Jeffries 93)
Defined three main search types
– monitoring a well-known topic over time
  e.g., research four competitors every quarter
– following a plan
  a typical approach to the task at hand
  e.g., improve business process X
– exploratory
  explore a topic in an undirected fashion
  get to know an unfamiliar industry

Orienteering (O’Day & Jeffries 93)
Trends:
– A series of interconnected but diverse searches on one problem-based theme
– This happened in all three search modes
– Each analyst did at least two search types
Each stage followed by reading, assimilation, and analysis of the resulting material

Orienteering (O’Day & Jeffries 93)
Searches tended to trigger new directions
– Overview, then detail, repeat
– The information need shifted between search requests
– The context of the problem and previous searches was carried to the next stage of search
The value was contained in the accumulation of search results, not the final result set
– These observations verified Bates’ predictions

Orienteering (O’Day & Jeffries 93)
Triggers: motivation to switch from one strategy to another
– next logical step in a plan
– encountering something interesting
– explaining change
– finding missing pieces

Stop Conditions (O’Day & Jeffries 93)
Stopping conditions were not as clear-cut as triggers
People stopped searching when
– there were no more compelling triggers
– they had finished an appropriate amount of searching for the task
– a specific inhibiting factor arose
  e.g., learning the market was too small
– returns stopped increasing
  80/20 rule
Missing information/inferences were OK
– the business world is different than scholarship

After the Search: Analyzing and Synthesizing Search Results
Orienteering post-search behaviors:
– Read and annotate
– Analyze: 80% fell into six main types

Post-Search Analysis Types (O’Day & Jeffries 93)
Trends
Comparisons
Aggregation and scaling
Identifying a critical subset
Assessing
Interpreting
The rest:
– cross-reference
– summarize
– find evocative visualizations
– miscellaneous

SenseMaking (Russell et al. 93)
The process of encoding retrieved information to answer task-specific questions
Combine
– internal cognitive resources
– external retrieved resources
Create a good representation
– an iterative process
– contend with a cost/benefit tradeoff

UIs for Supporting the Search Process

Infogrid (design mockup) (Rao et al. 92)

InfoGrid/Protofoil (Rao et al. 92) (slide by Shankar Raman)
A general search interface architecture
– ItemStash: store retrieved docs
– Search Event: current query
– History: history of queries
– Result Item: view selected docs + metadata

Infogrid Design Mockups (Rao et al. 92)

DLITE (Cousins 97) (slide by Shankar Raman)
Drag-and-drop interface
Reifies queries, sources, retrieval results
Animation to keep track of activity

DLITE (slide by Shankar Raman)
UI to a digital library
Direct manipulation interface
Workcenter approach
– experts create workcenters
– tools specialized for the workcenter’s task
– contents persistent
Web browser used to display document or collection metadata

Interaction (slide by Shankar Raman)
Pointing at an object brings up a tooltip with metadata
Activating an object triggers a component-specific action
– 5 types for the result set component
Drag-and-drop data onto programs
Animation used to show what happens with drag-and-drop (e.g., “waggling”)

User Reaction to DLITE
Two participant pools
– 7 Stanford CS
– 11 NASA researchers & librarians
Requires learning; initially unfamiliar
– Many requested help pages
– After the model was understood, few errors
Overall positive attitude, even stronger after a two-week delay
Successfully remembered most features after the two-week lag

Keeping Track of History
Techniques
– List of prior queries and results (standard)
– “Slide sorter” view: snapshots of earlier interactions
– Graphical hierarchy for web browsing

Keeping Track of History (slide by Shankar Raman)
PadPrints (Hightower et al. 98)
– Tree-based history of recently visited web pages
– History map placed to the left of the browser window
– Zoomable; can shrink sub-hierarchies
– Node = title + thumbnail

PadPrints (Hightower et al. 98)

Initial User Study of Browser History Mechanisms (slide by Shankar Raman)
13.4% of users were unable to find recently visited pages
Only 0.1% of page accesses use the History button; 42% use Back
Problems with the history list (according to the authors):
– incomplete: loses pages on every branch
– textual (not necessarily a problem!)
– pull-down menu is cumbersome: cannot see the history along with the current document

User Study of PadPrints (slide by Shankar Raman)
Changed the task to involve revisiting web pages
– CHI database, National Park Service website
Only correctly answered questions were considered
– 20-30% fewer pages accessed
– faster response time for tasks that involve revisiting pages
– slightly better user satisfaction ratings

Info Seeking Summary
The standard model (issue query, get results, repeat) is not fully adequate
The “berry-picking” view offers an alternative to the standard IR model
Interfaces can be devised to support the interactive process over time
More work needs to be done

Interactive Query Modification

Query Modification
Problem: how to reformulate the query?
– Thesaurus expansion: suggest terms similar to the query terms
– Relevance feedback: suggest terms (and documents) similar to retrieved documents that have been judged relevant

Relevance Feedback
Usually do both:
– expand the query with new terms
– re-weight the terms in the query
There are many variations
– usually positive weights for terms from relevant docs
– sometimes negative weights for terms from non-relevant docs
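The slide does not name it, but the standard way to write this expand-and-reweight step is Rocchio’s formula, where α, β, γ are tuning weights and D_r and D_nr are the sets of relevant and non-relevant documents:

```latex
\vec{q}_{new} \;=\; \alpha\,\vec{q}_{orig}
  \;+\; \frac{\beta}{|D_r|}\sum_{\vec{d}\in D_r}\vec{d}
  \;-\; \frac{\gamma}{|D_{nr}|}\sum_{\vec{d}\in D_{nr}}\vec{d}
```

Setting γ = 0 gives the positive-weights-only variant mentioned above.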

Using Relevance Feedback
Known to improve results
– in TREC-like conditions (no user involved)
What about with a user in the loop?
– How might you measure this?
– Let’s examine a user study of relevance feedback by Koenemann & Belkin 1996.

Questions Being Investigated (Koenemann & Belkin 96)
How well do users work with statistical ranking on full text?
Does relevance feedback improve results?
Is user control over the operation of relevance feedback helpful?
How do different levels of user control affect results?

How much of the guts should the user see?
Opaque (black box)
– (like web search engines)
Transparent
– (see available terms after the relevance feedback)
Penetrable
– (see suggested terms before the relevance feedback)
Which do you think worked best?

Terms available for relevance feedback made visible (from Koenemann & Belkin)

Details on the User Study (Koenemann & Belkin 96)
Subjects had a tutorial session to learn the system
Their goal was to keep modifying the query until they had developed one that gets high precision
This is an example of a routing query (as opposed to ad hoc)
Reweighting:
– They did not reweight query terms
– Instead, only term expansion:
  pool all terms in the relevant docs
  take the top N terms, where N = 3 + (number of marked-relevant docs × 2)
  (the more marked docs, the more terms added to the query)
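A minimal sketch of that expansion rule; choosing the “top” terms by raw frequency is my assumption here, since the slide does not say which term-scoring function the study used:

```python
from collections import Counter

def expansion_terms(relevant_docs, stopwords=frozenset()):
    """Pool terms from marked-relevant docs; keep the top N,
    where N = 3 + 2 * (number of marked-relevant docs)."""
    pool = Counter()
    for doc in relevant_docs:
        pool.update(t for t in doc.lower().split() if t not in stopwords)
    n = 3 + 2 * len(relevant_docs)
    return [term for term, _ in pool.most_common(n)]
```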

Details on the User Study (Koenemann & Belkin 96)
64 novice searchers
– 43 female, 21 male, native English speakers
TREC test bed
– Wall Street Journal subset
Two search topics
– Automobile Recalls
– Tobacco Advertising and the Young
Relevance judgements from TREC and the experimenter
System was INQUERY (vector space with some bells and whistles)

Sample TREC query

Evaluation
Precision at 30 documents
Baseline (Trial 1)
– How well does the initial search go?
– One topic has more relevant docs than the other
Experimental condition (Trial 2)
– Subjects get a tutorial on relevance feedback
– Modify the query in one of four modes:
  no r.f., opaque, transparent, penetrable
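Precision at k is simple to state in code; a minimal sketch with k = 30, counting any unfilled slots in a short ranking as misses (the usual convention):

```python
def precision_at_k(ranked_doc_ids, relevant_ids, k=30):
    """Fraction of the top-k retrieved documents that are relevant."""
    hits = sum(1 for d in ranked_doc_ids[:k] if d in relevant_ids)
    return hits / k
```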

Precision vs. RF condition (from Koenemann & Belkin 96)

Effectiveness Results
Subjects with relevance feedback performed 17-34% better than those without
Subjects in the penetrable case did 15% better as a group than those in the opaque and transparent cases

Number of iterations in formulating queries (from Koenemann & Belkin 96)

Behavior Results
Search times were approximately equal
Precision increased in the first few iterations
The penetrable case required fewer iterations to make a good query than the transparent and opaque cases
R.F. queries were much longer
– but had fewer terms in the penetrable case: users were more selective about which terms were added

Relevance Feedback Summary
Iterative query modification can improve precision and recall for a standing query
In at least one study, users were able to make good choices by seeing which terms were suggested for relevance feedback and selecting among them
So … “more like this” can be useful!
But … it usually requires more than one document, unlike how web versions work

Alternative Notions of Relevance Feedback

Social and Implicit Relevance Feedback
Find people whose taste is “similar” to yours. Will you like what they like?
Follow a user’s actions in the background. Can this be used to predict what the user will want to see next?
Track what lots of people are doing. Does this implicitly indicate what they think is good and not good?

Collaborative Filtering (social filtering) (slide by Michael Pazzani)
If Pam liked the paper, I’ll like the paper
If you liked Star Wars, you’ll like Independence Day
Rating based on the ratings of similar people
– Ignores the text, so it works on text, sound, pictures, etc.
– But: initial users can bias the ratings of future users
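A minimal user-based collaborative-filtering sketch of the “rating based on ratings of similar people” idea; the similarity measure (cosine over co-rated items) and the toy data are illustrative assumptions, not any particular system’s method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two {item: rating} dicts."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm = (math.sqrt(sum(x * x for x in u.values())) *
            math.sqrt(sum(x * x for x in v.values())))
    return dot / norm

def predict(ratings, user, item):
    """Similarity-weighted average of other users' ratings for item."""
    num = den = 0.0
    for other, their in ratings.items():
        if other == user or item not in their:
            continue
        sim = cosine(ratings[user], their)
        num += sim * their[item]
        den += abs(sim)
    return num / den if den else None

ratings = {"pam": {"star_wars": 5, "id4": 4},
           "me":  {"star_wars": 5}}
print(predict(ratings, "me", "id4"))  # leans toward Pam's rating of 4
```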

Social Filtering
Ignores the content; only looks at who judges things similarly
Works well on data relating to “taste”
– something that people are good at predicting about each other too
Does it work for topic?
– GroupLens results suggest otherwise (preliminary)
– Perhaps for quality assessments
– What about for assessing whether a document is about a topic?

Learning Interface “Agents” (slide by Brent Chun)
Use machine learning to improve performance
– learn user behavior, preferences
Useful when:
– 1) past behavior is a useful predictor of the future
– 2) there is a wide variety of behaviors amongst users
Examples:
– mail clerk: sort incoming messages into the right mailboxes
– calendar manager: automatically schedule meeting times?

Example Systems
– WebWatcher
– Letizia
They vary according to whether
– the user states a topic or not
– the user rates pages or not

WebWatcher (Freitag et al.) (slide by Michael Pazzani)
A “tour guide” agent for the WWW
– The user tells it what kind of information is wanted
– The system tracks web actions
– It highlights hyperlinks that it computes will be of interest
The strategy for giving advice is learned from feedback from earlier tours
– Uses WINNOW as the learning algorithm

Slide by Michael Pazzani

Letizia (Lieberman 95) (slide by Brent Chun)
Recommends web pages during browsing based on a user profile
Learns the user profile using simple heuristics
Passive observation; recommends on request
Provides a relative ordering of link interestingness
Assumes recommendations “near” the current page are more valuable than others
[Diagram: user → Letizia → user profile → heuristics → recommendations]

Letizia (Lieberman 95)
Infers user preferences from behavior
Interesting pages
– record in hot list
– save as a file
– follow several links from the page
– return several times to a document
Not interesting
– spend a short time on a document
– return to the previous document without following links
– pass over a link to a document (selecting links above and below it)
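A hedged sketch of how behaviors like these might be turned into an interest score; the event names, weights, and signs below are illustrative assumptions, not Lieberman’s actual heuristics:

```python
# Positive signals from the "interesting" list, negative from the other.
INTEREST_SIGNALS = {
    "bookmarked": +3, "saved_as_file": +3,
    "followed_links": +2, "revisited": +2,
    "short_dwell": -2, "backed_out": -2, "link_passed_over": -1,
}

def interest_score(events):
    """Sum the signal weights for the observed browsing events."""
    return sum(INTEREST_SIGNALS.get(e, 0) for e in events)

print(interest_score(["followed_links", "revisited"]))   # 4  -> interesting
print(interest_score(["short_dwell", "backed_out"]))     # -4 -> not
```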

Consequences of Passive Observation (slide by Brent Chun)
No ability to fine-tune the profile or express interest without visiting “appropriate” pages
Weak heuristics
– Must click through multiple uninteresting pages en route to interestingness
– Hierarchies tend to get more hits near the root
– But page “read” time does seem to robustly indicate interest (across many pages and many users)

MARS (Rui et al. 97) (slide by Michael Pazzani)
Relevance feedback based on image similarity

Slide by Michael Pazzani Time Series R.F. (Keogh & Pazzani 98)

Social and Implicit Relevance Feedback
Several different criteria to consider:
– Implicit vs. explicit judgements
– Individual vs. group judgements
– Standing vs. dynamic topics
– Similarity of the items being judged vs. similarity of the judges themselves

Classifying R.F. Systems: Amazon.com
– Books on related topics
– Books bought by others who bought this one
– Community, implicit, standing, judges + items, similar items

Classifying R.F. Systems
Standard relevance feedback
– Individual, explicit, dynamic, item comparison
Standard filtering (NewsWeeder)
– Individual, explicit, standing profile, item comparison
Standard routing
– “Community” (gold standard), explicit, standing profile, item comparison

Classifying R.F. Systems
Letizia and WebWatcher
– Individual, implicit, dynamic, item comparison
Ringo and GroupLens
– Group, explicit, standing query, judge-based comparison

Query Modification Summary
Relevance feedback is an effective means for user-directed query modification
Modification can be done with either direct or indirect user input
Modification can be done based on an individual’s or a group’s past input

Information Visualization

Visualization Success Stories (images from yahoo.com)

Illustration of John Snow’s deduction that a cholera epidemic was caused by a bad water pump, circa 1854. Horizontal lines indicate the locations of deaths. (From Visual Explanations by Edward Tufte, Graphics Press, 1997.)

Visualizing Text Collections
Some Visualization Principles
Why Text is Tough
Visualizing Collection Overviews
Evaluations Involving Users

Preattentive Processing
A limited set of visual properties are processed preattentively
– (without the need for focused attention)
This is important for the design of visualizations
– what can be perceived immediately
– which properties are good discriminators
– what can mislead viewers
All preattentive processing figures are from Healey 97 (on the web)

Example: Color Selection Viewer can rapidly and accurately determine whether the target (red circle) is present or absent. Difference detected in color.

Example: Shape Selection Viewer can rapidly and accurately determine whether the target (red circle) is present or absent. Difference detected in form (curvature)

Pre-attentive Processing
Processing in under roughly 200-250 ms qualifies as pre-attentive
– eye movements take at least 200 ms
– yet certain processing can be done very quickly, implying low-level processing in parallel

Example: Conjunction of Features
Viewer cannot rapidly and accurately determine whether the target (red circle) is present or absent when the target has two or more features, each of which is present in the distractors. The viewer must search sequentially.

SUBJECT PUNCHED QUICKLY OXIDIZED TCEJBUS DEHCNUP YLKCIUQ DEZIDIXO CERTAIN QUICKLY PUNCHED METHODS NIATREC YLKCIUQ DEHCNUP SDOHTEM SCIENCE ENGLISH RECORDS COLUMNS ECNEICS HSILGNE SDROCER SNMULOC GOVERNS PRECISE EXAMPLE MERCURY SNREVOG ESICERP ELPMAXE YRUCREM CERTAIN QUICKLY PUNCHED METHODS NIATREC YLKCIUQ DEHCNUP SDOHTEM GOVERNS PRECISE EXAMPLE MERCURY SNREVOG ESICERP ELPMAXE YRUCREM SCIENCE ENGLISH RECORDS COLUMNS ECNEICS HSILGNE SDROCER SNMULOC SUBJECT PUNCHED QUICKLY OXIDIZED TCEJBUS DEHCNUP YLKCIUQ DEZIDIXO CERTAIN QUICKLY PUNCHED METHODS NIATREC YLKCIUQ DEHCNUP SDOHTEM SCIENCE ENGLISH RECORDS COLUMNS ECNEICS HSILGNE SDROCER SNMULOC

Accuracy Ranking of Quantitative Perceptual Tasks (Mackinlay 88, from Cleveland & McGill)
More accurate → less accurate:
Position > Length > Angle/Slope > Area > Volume > Color/Density

Why Text is Tough to Visualize
Text is not pre-attentive
Text consists of abstract concepts
Text represents similar concepts in many different ways
– space ship, flying saucer, UFO, figment of imagination
Text has very high dimensionality
– Tens or hundreds of thousands of features
– Many subsets can be combined together

Why Text is Tough The Dog.

Why Text is Tough The Dog. The dog cavorts. The dog cavorted.

Why Text is Tough The man. The man walks.

Why Text is Tough The man walks the cavorting dog. So far, we can sort of show this in pictures.

Why Text is Tough
As the man walks the cavorting dog, thoughts arrive unbidden of the previous spring, so unlike this one, in which walking was marching and dogs were baleful sentinels outside unjust halls.
How do we visualize this?

Why Text is Tough
Abstract concepts are difficult to visualize
Combinations of abstract concepts are even more difficult to visualize
– time
– shades of meaning
– social and psychological concepts
– causal relationships

Why Text is Tough
Language only hints at meaning
Most of the meaning of text lies within our minds and common understanding
– “How much is that doggy in the window?”
  “how much”: social system of barter and trade (not the size of the dog)
  “doggy” implies childlike, plaintive, probably cannot do the purchasing on their own
  “in the window” implies behind a store window, not really inside a window; requires the notion of window shopping

Why Text is Tough
General categories have no standard ordering (nominal data)
Categorization of documents by single topics misses important distinctions
Consider an article about
– NAFTA
– The effects of NAFTA on truck manufacture
– The effects of NAFTA on the productivity of truck manufacture in the neighboring cities of El Paso and Juarez

Why Text is Tough
I saw Pathfinder on Mars with a telescope.
Pathfinder photographed Mars.
The Pathfinder photograph mars our perception of a lifeless planet.
The Pathfinder photograph from Ford has arrived.
The Pathfinder forded the river without marring its paint job.

Why Text is Easy
Text is highly redundant
– When you have lots of it
– Pretty much any simple technique can pull out phrases that seem to characterize a document
Instant summary:
– Extract the most frequent words from a text
– Remove the most common English words
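The “instant summary” recipe fits in a few lines; the abbreviated stopword list is a placeholder for a real one:

```python
from collections import Counter
import re

# Abbreviated stopword list; a real one would be much longer.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "that",
             "is", "for", "on", "with", "was", "he", "said"}

def instant_summary(text, n=15):
    """Most frequent non-stopword terms, as (count, word) pairs."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [(c, w) for w, c in counts.most_common(n)]
```

Run on two famous texts, this produces lists like those on the next slide.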

Guess the Texts
Text 1: 64 president, 38 jones, 38 information, 32 evidence, 31 lewinsky, 28 oic, 28 investigation, 26 court, 26 clinton, 22 office, 21 discovery, 20 sexual, 20 case, 17 testimony, 16 judge
Text 2: 478 said, 233 god, 201 father, 187 land, 181 jacob, 160 son, 157 joseph, 134 abraham, 121 earth, 119 man, 118 behold, 113 years, 104 wife, 101 name, 94 pharaoh

Text Collection Overviews
How can we show an overview of the contents of a text collection?
– Show info external to the docs
  e.g., date, author, source, number of inlinks
  does not show what they are about
– Show the meanings or topics in the docs
  a list of titles
  results of clustering words or documents
  organize according to categories (next time)

Visualizing Collection Clusters
– Scatter/Gather: show main themes as groups of text summaries
– Scatter plots: show docs as points; closeness indicates nearness in cluster space; show main themes of docs as visual clumps or mountains
– Kohonen feature maps: show main themes as adjacent polygons
– BEAD: show main themes as links within a force-directed placement network

Text Clustering
Finds overall similarities among groups of documents
Finds overall similarities among groups of tokens
Picks out some themes, ignores others

Clustering for Collection Overviews
Two main steps
– cluster the documents according to the words they have in common
– map the cluster representation onto an (interactive) 2D or 3D representation
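A sketch of those two steps, assuming a scikit-learn-style toolkit; the TF-IDF vectorizer, k-means, and SVD-based 2D projection are illustrative choices, not the pipeline of any specific system in this tutorial:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

def cluster_and_project(docs, k=5):
    """Step 1: cluster docs by shared vocabulary.
    Step 2: project the high-dimensional vectors to 2D for display."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)
    coords = TruncatedSVD(n_components=2).fit_transform(vectors)
    return labels, coords   # cluster id + (x, y) per document
```

The projection step is exactly where the information loss discussed later comes from: tens of thousands of dimensions collapse into two.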

Scatter/Gather (Cutting, Pedersen, Tukey & Karger 92, 93; Hearst & Pedersen 95)
First use of text clustering in the interface
– Showing clusters to users had not been done before
– Focus on interaction
  Show topical terms and typical titles
  Allow users to change the views
– Did not emphasize visualization

Scatter/Gather

S/G Example: query on “star” (encyclopedia text)
Clusters: 14 sports; 8 symbols; 47 film, tv; 68 film, tv (p); 7 music; 97 astrophysics; 67 astronomy (p); 12 stellar phenomena; 10 flora/fauna; 49 galaxies, stars; 29 constellations; 7 miscellaneous
Clustering and re-clustering is entirely automated

Northern Light: used to cluster exclusively. Now combines categorization with clustering

Northern Light second-level clusters: are these really about NLP? Note that the next level corresponds to URLs

Scatter Plot of Clusters (Chen et al. 97)

BEAD (Chalmers 97)

BEAD (Chalmers 96)
An example layout produced by Bead, seen in overview, of 831 bibliography entries; the dimensionality is the number of unique words in the set. A search for ‘cscw or collaborative’ shows the pattern of occurrences coloured dark blue, mostly to the right. The central rectangle is the visualizer’s motion control.

Example: Themescapes (Wise et al. 95)

Clustering for Collection Overviews
Since text has tens of thousands of features
– the mapping to 2D loses a tremendous amount of information
– only very coarse themes are detected

Galaxy of News Rennison 95

Galaxy of News Rennison 95

Kohonen Feature Maps (Lin 92, Chen et al. 97) (594 docs)

How Useful is Collection Cluster Visualization for Search? Three studies find negative results

Study 1
Kleiboemer, Lazear, and Pedersen. Tailoring a retrieval system for naive users. In Proc. of the 5th Annual Symposium on Document Analysis and Information Retrieval, 1996.
This study compared
– a system with 2D graphical clusters
– a system with 3D graphical clusters
– a system that shows textual clusters
Novice users
Only textual clusters were helpful (and they were difficult to use well)

Study 2: Kohonen Feature Maps
H. Chen, A. Houston, R. Sewell, and B. Schatz, JASIS 49(7)
Comparison: Kohonen map and Yahoo
Task:
– “Window shop” for an interesting home page
– Repeat with the other interface
Results:
– Starting with the map, most could repeat in Yahoo (8/11)
– Starting with Yahoo, most were unable to repeat in the map (2/14)

Study 2 (cont.)
Participants liked:
– Correspondence of region size to # documents
– Overview (but also wanted zoom)
– Ease of jumping from one topic to another
– Multiple routes to topics
– Use of category and subcategory labels

Study 2 (cont.)
Participants wanted:
– hierarchical organization
– other orderings of concepts (alphabetical)
– integration of browsing and search
– correspondence of color to meaning
– more meaningful labels
– labels at the same level of abstraction
– more labels fit into the given space
– combined keyword and category search
– multiple category assignment (sports + entertainment)

Study 3: NIRVE
NIRVE interface by Cugini et al. 96. Each rectangle is a cluster. Larger clusters are closer to the “pole”. Similar clusters are near one another. Opening a cluster causes a projection that shows the titles.

Study 3
Sebrechts, Cugini, Laskowski, Vasilakis, and Miller. Visualization of search results: a comparative evaluation of text, 2D, and 3D interfaces. Proceedings of SIGIR 99, Berkeley, CA, 1999.
This study compared:
– 3D graphical clusters
– 2D graphical clusters
– textual clusters
15 participants, between-subject design
Tasks
– Locate a particular document
– Locate and mark a particular document
– Locate a previously marked document
– Locate all clusters that discuss some topic
– List the more frequently represented topics

Study 3
Results (time to locate targets)
– Text clusters fastest
– 2D next
– 3D last
– With practice (6 sessions) 2D neared the text results; 3D was still slower
– Computer experts were just as fast with 3D
Certain tasks were equally fast with 2D & text
– Find a particular cluster
– Find an already-marked document
But anything involving text (e.g., find a title) was much faster with text
– The spatial layout rotated, so users lost context
Helpful visualization features
– Color coding (helped text too)
– Relative vertical locations

Visualizing Clusters
Huge 2D maps may be an inappropriate focus for information retrieval
– cannot see what the documents are about
– the space is difficult to browse for IR purposes
– (tough to visualize abstract concepts)
Perhaps more suited for pattern discovery and gist-like overviews

Co-Citation Analysis
Has been around since the 50’s (Small, Garfield, White & McCain)
Used to identify core sets of
– authors, journals, articles for particular fields
– Not for general search
Main idea:
– Find pairs of papers that cite third papers
– Look for commonalities
A nice demonstration by Eugene Garfield is available online

Co-citation analysis (From Garfield 98)

Context

Types of Context
Personal situation
– Where you are
– What time it is
– Your general preferences
Context of other documents
Context of what you have done so far in the search process

Putting Results in Context
Visualizations of query term distribution
– KWIC, TileBars, SeeSoft
Table of contents as context
– Superbook, Cha-Cha, DynaCat
Visualizing shared subsets of query terms
– InfoCrystal, VIBE, lattice views
Dynamic metadata as query previews

KWIC (Keyword in Context)
An old standard, ignored by internet search engines
– used in some intranet engines, e.g., Cha-Cha

Table-of-Contents Views
Superbook (Remde et al. 87)
Functions:
– Word lookup: show a list of query words, stems, and word combinations
– Table of contents: dynamic fisheye view of the hierarchical topic list
  Search words can be highlighted here too
– Page of text: show the selected page with the search terms highlighted
See the UI/IR textbook chapter for information on an interesting user study

Superbook

Egan et al. Study
Goal: compare Superbook with the paper book
Tasks:
– structured search: find the answer to a specific question using an unfamiliar reference text
– open-book essay: synthesize material from different places in the document
– incidental learning: how much useful information about the document is acquired while doing other tasks
– subjective ratings: user reactions to the form and content

Egan et al. Study
Factors for structured search:
– Does the user’s question correspond to the author’s organization of the material?
  Half the study’s search questions contained cues as to which topic heading to use; half did not
– Does the user’s query as stated contain some of the same words as those used by the author?
  Half the questions contained words taken from the text surrounding the target text; half did not

Egan et al. Study
Example search questions:
– Find the section discussing the basic concept that the value of any expression, however complicated, is a data structure.
– The dataset ‘murder’ contains murder rates per 100,000 population. Find the section that says which states are included in this dataset.
– Find the section that describes pie charts and states whether or not they are a good means for analyzing data.
– Find the section that describes the first thing you have to do to get S to print pictorial output.
(Blue boldface: terms taken from the text. Pink italics: terms taken from topic headings.)

Egan et al. Study
Hypotheses:
– The conventional document would require good cues from the topic headings, but Superbook would not.
  The word lookup function was hypothesized to allow circumvention of the author’s organization scheme.
– Superbook’s search facility would result in open-book essays that include more information.

Egan et al. Study
Source text: statistics package manual (562 pp.)
Compare:
– Superbook vs. paper versions
Four sets of search questions of mixed type
20 university students with a stats background
Superbook training tutorial
15 minutes per structured query
One open-book essay retained

Egan et al. Study
Results: Superbook had an advantage in:
– overall average accuracy (75% vs. 62%)
  Superbook did better on questions with words from the text but not in topic headings
  The print version did better on questions with no search hits
– speed (5.4 vs. 5.6 min/query on average)
  Superbook was faster for text-only cues
  Paper was faster for questions with no hits
– essay creation
  average score of 5.8 vs. 3.6 points out of 7
  average of 8.8 facts vs. 6.0 out of 15

Egan et al. Study
Results:
– Subjective ratings:
  Superbook users rated it easier than paper (5.8 vs. 3.1 out of 7)
  Superbook users gave higher ratings to the stat system
– Incidental learning:
  Superbook users recalled more chapter headings
  (maybe because these were continually displayed)
  No other differences were significant
Problems with the study:
– Did not compare against a non-hypertext computerized version
– Did not show if/how hyperlinks affected results

Cha-Cha (Chen & Hearst 98)
Shows a “table-of-contents”-like view, like Superbook
Takes advantage of human-created structure within hyperlinks to create the TOC

DynaCat (Pratt, Hearst, & Fagan 99)
Decide on important question types in advance
– What are the adverse effects of drug D?
– What is the prognosis for treatment T?
Make use of MeSH categories
Retain only those types of categories known to be useful for this type of query
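A hedged sketch of the idea as described on this slide: recognize the query type, keep only category types known to be useful for it, and group results accordingly. The mappings and labels below are invented for illustration; they are not DynaCat’s actual rules or MeSH terms:

```python
# Illustrative mapping from recognized query types to useful category types.
USEFUL_CATEGORY_TYPES = {
    "adverse_effects": {"Diseases", "Symptoms"},
    "prognosis": {"Outcomes", "Survival"},
}

def organize(results, query_type):
    """Group result titles under category labels whose type is useful
    for this query type; results are (title, {category_type: label})."""
    keep = USEFUL_CATEGORY_TYPES.get(query_type, set())
    grouped = {}
    for title, cats in results:
        for ctype, label in cats.items():
            if ctype in keep:
                grouped.setdefault(label, []).append(title)
    return grouped
```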

DynaCat (Pratt, Hearst, & Fagan 99)

DynaCat Study
Design
– Three queries
– 24 cancer patients
– Compared three interfaces: ranked list, clusters, categories
Results
– Participants strongly preferred categories
– Participants found more answers using categories
– Participants took the same amount of time with all three interfaces
Similar results have been verified by another study, by Chen and Dumais (CHI 2000)

Cat-a-Cone: Multiple Simultaneous Categories
Key ideas:
– Separate documents from category labels
– Show both simultaneously
– Link the two for iterative feedback
Distinguish between:
– Searching for documents vs.
– Searching for categories

Cat-a-Cone Interface

[Diagram: query terms are used to search the collection, yielding retrieved documents, while the category hierarchy is browsed alongside]

Proposed Advantages
Integrate category selection with viewing of categories
Show all categories + context
Show the relationship of retrieved documents to the category structure
But was not evaluated with a user study

Our New Project: FLAMENCO
FLexible Access using MEtadata in Novel COmbinations
Main idea:
– Preview and postview information
– Determined dynamically and (semi-)automatically, based on the current task

Recap
Search Interfaces Today
HCI Foundations
The Information Seeking Process
Visualizing Text Collections
Incorporating Context and Tasks
Promising Future Directions

The future of search tools: A Prediction of a Dichotomy
Information Intensive
– Business analysis
– Scientific research
– Planning & design
Quick lookup
– Question answering
– Context-dependent info (location, time)

My Predictions of Future Trends in Search Interfaces
Specialization
– Single-topic search (“vortals”)
– Task-oriented search
Personalization
Question answering
Visualization???
Our new project: FLAMENCO

References
See the bibliography of Chapter 10 of Modern Information Retrieval, Ricardo Baeza-Yates and Berthier Ribeiro-Neto (Eds.). The chapter, “User Interfaces and Visualization,” is by Marti Hearst.