

1 User Centered Design and Evaluation

2 Overview
Why involve users at all?
What is a user-centered approach?
Evaluation strategies
Examples from the “Snap-Together Visualization” paper

3 Why involve users?

4 Understand the users and their problems
– Visualization users are experts
– We do not understand their tasks and information needs
– Intuition is not good enough
Expectation management & ownership
– Ensure users have realistic expectations
– Make the users active stakeholders

5 How to involve users?
Workshops, consultation
Co-authored papers
– Visualization papers
– Domain-specific papers
User becomes a member of the design team
Vis expert becomes a member of the user team

6 What is a user-centered approach?
More a philosophy than an approach. Based on:
– Early focus on users and tasks: directly studying information needs and tasks
– Empirical measurement: users’ reactions and performance with prototypes
– Iterative design

7 Focus on Tasks
Users’ tasks / goals are the driving force
– Different tasks require very different visualizations
– Lists of common visualization tasks can help (e.g., Shneiderman’s “Task by Data Type Taxonomy”: overview, zoom, filter, details-on-demand, relate, history, extract)
– But user-specific tasks are still the best

8 Focus on Users
Users’ characteristics and context of use need to be supported
Users have varied needs and experience
– E.g., radiologists vs. GPs vs. patients

9 Understanding users’ work
Ethnography
– Observers immerse themselves in the workplace culture for days/weeks/months
Structured observation / contextual inquiry
– Much shorter than ethnography (a few hours)
– Often at the user’s workplace
Meetings / collaboration

10 Design cycle
Design should be iterative
– Prototype, test, prototype, test, …
– Test with users!
Design may be participatory

11 Key point
Visualizations must support specific users doing specific tasks
“Showing the data” is not enough!

12 Evaluation

13 Types of Evaluation Studies
Compare design elements
– E.g., coordination vs. no coordination (North & Shneiderman)
Compare systems
– E.g., Spotfire vs. TableLens
Usability evaluation of a system
– E.g., Snap system (N & S)
Case studies
– E.g., bioinformatics, e-commerce, security

14 How to evaluate with users?
Quantitative experiments
– Controlled laboratory studies
– Clear conclusions, but limited realism
Other methods
– Observations
– Contextual inquiry
– Field studies
– More realistic, but conclusions less precise

15 How to evaluate without users?
Heuristic evaluation
Cognitive walkthrough?
– Hard: tasks are ill-defined & may be accomplished many ways
GOMS / user modeling?
– Hard: tasks are ill-defined & not repetitive

16 Snap-Together Vis Custom coordinated views

17 Questions
Is this system usable?
– Usability testing
Is coordination important? Does it improve performance?
– Experiment to compare coordination vs. no coordination

18 Usability testing vs. Experiment
Usability testing
– Aim: improve products
– Few participants
– Results inform design
– Not perfectly replicable
– Partially controlled conditions
– Results reported to developers
Quantitative experiment
– Aim: discover knowledge
– Many participants
– Results validated statistically
– Replicable
– Strongly controlled conditions
– Results reported to the community in a scientific paper

19 Usability of Snap-Together Vis
Can people use the Snap system to construct a coordinated visualization?
– Not really a research question
– But necessary if we want to use the system to answer research questions
How would you test this?

20 Summary: Usability testing
Goals focus on how well users perform tasks with the prototype
May compare products or prototypes
Major parts of usability testing
– Time to complete task & number & type of errors (quantitative performance data)
– Qualitative methods (questionnaires, observations, interviews)
– Informed by video

21 Usability testing conditions
Major emphasis on
– selecting representative users
– developing representative tasks
5-12 users typically selected
Test conditions are the same for every participant

22 Controlled experiments
Strives for:
– Testable hypothesis
– Internal validity: control of variables and conditions; the experiment is replicable; no experimenter bias
– External validity: results are generalizable
– Confidence in results: statistics

23 Testable hypothesis
State a testable hypothesis
– This is a precise problem statement
Example: Searching for a graphic item among 100 randomly placed similar items will take longer with a 3D perspective display than with a 2D display.

24 Controlled conditions
Purpose: internal validity
– Knowing the cause of a difference found in an experiment
– No difference between conditions except the ideas being studied
Trade-off between internal validity (control) and external validity (generalizable results)

25 Confounding Factors
Group 1: Visualization A, in a room with windows
Group 2: Visualization B, in a room without windows
What can you conclude if Group 2 performs the task faster?

26 What is controlled
Who gets what condition
– Subjects randomly assigned to groups
When & where each condition is given
How the condition is given
– Instructions, actions, etc.
– Avoid actions that bias results (e.g., “Here is the system I developed. I think you’ll find it much better than the one you just tried.”)
Order effects
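Random assignment of subjects to groups, as above, can be sketched in a few lines of Python; the participant IDs, group count, and seed here are hypothetical:

```python
import random

def assign_to_groups(participants, n_groups=2, seed=42):
    """Randomly assign participants to n_groups of (near-)equal size."""
    pool = list(participants)
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    rng.shuffle(pool)
    # deal the shuffled participants round-robin into the groups
    return [pool[i::n_groups] for i in range(n_groups)]

groups = assign_to_groups(range(1, 13))
print(groups)  # two groups of six; every participant appears exactly once
```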

27 Order Effects
Example: search for XX with Visualizations A and B with varied numbers of distractors
1. Randomization
– E.g., number of distractors: 3, 15, 6, 12, 9, 6, 3, 15, 9, 12, …
2. Counter-balancing
– E.g., half use Vis A first, half use Vis B first
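Both remedies can be sketched in code; the condition names and distractor counts below mirror the slide’s example and are otherwise hypothetical:

```python
import random
from itertools import permutations

DISTRACTOR_COUNTS = [3, 6, 9, 12, 15]

def randomized_trial_order(repetitions=2, seed=0):
    """Randomization: shuffle the trial sequence so distractor counts
    appear in an unpredictable order."""
    trials = DISTRACTOR_COUNTS * repetitions
    random.Random(seed).shuffle(trials)
    return trials

def counterbalanced_orders(conditions=("Vis A", "Vis B")):
    """Counter-balancing: every presentation order of the conditions,
    so equal numbers of participants can be assigned to each."""
    return list(permutations(conditions))

print(randomized_trial_order())  # a shuffled sequence of the ten trials
print(counterbalanced_orders())  # [('Vis A', 'Vis B'), ('Vis B', 'Vis A')]
```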

28 Experimental designs
Between-subjects design: each participant tries one condition
– No order effects
– Participants cannot compare conditions
– Need more participants
Within-subjects design: all participants try all conditions
– Must deal with order effects (e.g., learning or fatigue)
– Participants can compare conditions
– Fewer participants

29 Statistical analysis
Apply statistical methods to data analysis
– Confidence limits: the confidence that your conclusion is correct
What “p = 0.05” means:
– If there were no true difference, a result this extreme would occur by chance only 5% of the time
– Note: it does not mean a 95% probability that a true difference exists

30 Types of statistical tests
T-tests (compare 2 conditions)
ANOVA (compare >2 conditions)
Correlation and regression
Many others
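To illustrate what such a test estimates, here is a simple permutation test in plain Python: it measures how often a mean difference at least as large as the observed one arises when condition labels are shuffled at random. The task-completion times are invented for the example:

```python
import random
from statistics import mean

def permutation_test(a, b, n_iter=10_000, seed=1):
    """Estimate the probability of a mean difference at least as large
    as the observed one if there were no true difference (the p-value)."""
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # reassign condition labels at random
        shuffled_diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if shuffled_diff >= observed:
            extreme += 1
    return extreme / n_iter

# hypothetical task-completion times (seconds) for two conditions
coordinated   = [41, 38, 45, 36, 40, 39]
uncoordinated = [52, 49, 55, 47, 51, 50]
print(permutation_test(coordinated, uncoordinated))  # small p: unlikely by chance
```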

31 Snap-Together Vis Experiment
Is coordination important? Does it improve performance?
How would you test this?

32 Critique of Snap-Together Vis Experiment
Statistical reporting is incomplete
– Should look like (F(x, y) = ___, p = ___), i.e., provide the F value and degrees of freedom
– Provide exact p values (not just p < 0.05)
Limited generalizability
– Would we get the same result with non-text data? Expert users? Other types of coordination? Complex displays?
Unexciting hypothesis
– We were fairly sure what the answer would be

33 Take-home messages
Talk to real users!
If you plan to do visualization research (especially evaluation), you should learn more about HCI and statistics