Preparing a User Test Alfred Kobsa University of California, Irvine.
Specifying Global Test Goals
Specify:
- (intended) purpose of the product
- product development status
- test goals (with priorities)
- user profiles (personae)

Selecting Tasks
Test subjects should not merely "try out the system for n minutes", but rather carry out selected tasks with the system. One cannot test every possible user task; rather, usability tests must focus on representative and/or important tasks. Select tasks that
- may be fraught with usability problems, as suggested by earlier concerns and usage experience;
- will be frequently carried out by users (20% of the functionality is used 80% of the time);
- are mission-critical;
- are performed under time pressure; or
- are new or have been modified compared with a previous version or a competing program.
☞ Brainstorm tasks and then filter them using these criteria.
Test the comprehensibility of the task descriptions.
Specify a timeout for each task (possibly without revealing it to subjects).
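The per-task timeout can be tracked by the experimenter without the subject's knowledge. As a minimal sketch (the task list, field names, and `perform` callback are hypothetical, not from the slides):

```python
import time

# Hypothetical task list: each task has a description shown to the subject
# and a timeout (in seconds) that the experimenter tracks but need not reveal.
TASKS = [
    {"id": 1, "description": "Find the opening hours of the library", "timeout": 180},
    {"id": 2, "description": "Reserve a copy of a given book", "timeout": 300},
]

def run_task(task, perform):
    """Time one task and report completion and whether the timeout was exceeded.

    `perform` stands in for the subject's attempt; it returns True if the
    task was completed successfully.
    """
    start = time.monotonic()          # monotonic clock: immune to wall-clock changes
    completed = perform(task)
    elapsed = time.monotonic() - start
    return {
        "id": task["id"],
        "completed": completed,
        "elapsed": elapsed,
        "timed_out": elapsed > task["timeout"],
    }
```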

Creating Scenarios
Scenarios are created to contextualize user experiments, which in general yields more representative test results. Scenario descriptions should
- be short;
- be formulated in the words of the user / task domain;
- be unambiguous;
- contain enough information for test subjects to carry out the tasks;
- be directly linked to tasks and concerns.
☛ Subjects should read the scenario descriptions (and experimenters should possibly read them aloud at the same time).
☛ Scenario descriptions should be tested (in the pilot test or even earlier).

Deciding how to measure usability
Performance measures
- time needed to carry out a task
- error rate
- task completion rate
- time spent on "unproductive" activities (navigation, looking up help, recovery after an error)
- frequency of "unproductive" activities
- counting keystrokes / mouse clicks
- etc. (see Dumas & Redish)
Measures of satisfaction
- observed(?): frustration / confusion / surprise / satisfaction
- user-provided:
  - user ratings (e.g., SUS – System Usability Scale)
  - comparisons with a previous version / competitors' software / the current way of doing it
  - behavioral intentions (use, buy(?), recommend to a friend)
  - free comments (during and after the experiment)
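The SUS mentioned above has a fixed scoring rule: ten Likert items rated 1–5, where odd-numbered (positively worded) items contribute response − 1, even-numbered (negatively worded) items contribute 5 − response, and the sum is scaled by 2.5 to a 0–100 range. A minimal sketch of that computation:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten Likert
    responses (1 = strongly disagree ... 5 = strongly agree).

    Odd-numbered items are positively worded (contribution = response - 1),
    even-numbered items negatively worded (contribution = 5 - response).
    The summed contributions (0-40) are scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, a subject who strongly agrees with every positive item and strongly disagrees with every negative one scores 100; all-neutral responses (all 3s) score 50.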

Preparing the Test Materials
Legal documents
- informed consent form
- non-disclosure form
- waiver of liability form
- permissions form (e.g., for video recordings)
Instruction- and training-related materials
- software / PowerPoint slides / video to be shown
- summary of the software's functionality
- write-up of the oral instructions
- (guided) training tasks
Task-related materials
- scenario description
- task descriptions (☛ one task per page, large font for task/page number)
- pre-test and post-test questionnaires
Experiment-related materials
- "screener" with participant selection criteria
- experimental time sheet / log book
- to-do list for all experimenters
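A screener is simply a filter that keeps only candidates matching the target user profile. A minimal sketch, with entirely hypothetical criteria and field names (the slides do not prescribe any):

```python
# Hypothetical selection criteria for illustration only.
CRITERIA = {"min_age": 18, "max_age": 65, "required_experience": "web"}

def passes_screener(candidate, criteria=CRITERIA):
    """Return True if the candidate matches the target user profile."""
    return (
        criteria["min_age"] <= candidate["age"] <= criteria["max_age"]
        and criteria["required_experience"] in candidate["experience"]
    )

candidates = [
    {"name": "A", "age": 30, "experience": ["web", "mobile"]},
    {"name": "B", "age": 16, "experience": ["web"]},
]
recruited = [c for c in candidates if passes_screener(c)]
```

In practice the screener is usually a short questionnaire administered during recruiting; encoding it this way makes the selection criteria explicit and repeatable.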

Preparing the testing environment
- Hardware equipment ☛ cater to users' normal equipment; remove all potentially distracting programs
- Sample data ☛ make it look real
- Voice recording ☛ take care of ambient noise (including exceptional noise) and the direction of the microphone
- Screen recording ☛ mind a possible slowdown of the tested program
- Video recording ☛ take care of the camera angle, blocked views, glare, changing sunlight over the day, …
- Time taking ☛ avoid races (between participants, or participants against a stopwatch)
- Lab layout (see Courage & Baxter) ☛ participants should not influence each other
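One way to avoid stopwatch races is to log timestamps per participant and task instead of sharing a manual stopwatch. A minimal sketch of such an experimental time sheet (the class and its CSV layout are an assumption, not part of the slides):

```python
import csv
import time

class ExperimentLog:
    """Minimal time sheet: independent start/stop timestamps per
    (participant, task), so parallel participants never race against
    a single shared stopwatch."""

    def __init__(self):
        self.rows = []
        self._starts = {}

    def start(self, participant, task):
        # time.monotonic() cannot jump backwards, unlike wall-clock time.
        self._starts[(participant, task)] = time.monotonic()

    def stop(self, participant, task, note=""):
        elapsed = time.monotonic() - self._starts.pop((participant, task))
        self.rows.append({"participant": participant, "task": task,
                          "seconds": round(elapsed, 2), "note": note})

    def save(self, path):
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["participant", "task", "seconds", "note"])
            writer.writeheader()
            writer.writerows(self.rows)
```

Usage: call `start("P1", 1)` when the subject begins task 1, `stop("P1", 1)` when they finish or time out, and `save("timesheet.csv")` at the end of the session.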

Setting up a test team
Typical roles:
- Greeter: welcomes subjects, puts them at ease, bridges waiting time
- Briefer: informs subjects about the study, has them sign the forms
- Instructor / trainer: instructs them in the use of the software
- Test administrator: tells subjects what to do
- Note taker
- Video operator
- Backup technician for emergencies
Many of these roles can be combined in a single person. To ensure comparability, there should be no role-switching over the duration of an experiment. The number of team members also depends on the number of parallel / overlapping subjects and on the experimental design; teams of 2–3 are typical. Tests are typically carried out by the UI design team and/or outside usability specialists. Developers, managers, and user representatives should be able to watch (invisible to the subjects, or in the background).