Copyright © 2005, Pearson Education, Inc. An Instructor’s Outline of Designing the User Interface 4th Edition by Ben Shneiderman & Catherine Plaisant Slides developed by Roger J. Chapman

Chapter 4: Expert Reviews, Usability Testing, Surveys, and Continuing Assessment

Introduction
Designers often fail to evaluate their designs adequately; experienced designers know that extensive testing is a necessity.
An evaluation plan addresses:
– stage of design (early, middle, late)
– novelty of the project (well defined vs. exploratory)
– number of expected users
– criticality of the interface (life-critical medical system vs. museum-exhibit support)
– costs of the product and finances allocated for testing
– time available
– experience of the design and evaluation team
The length of time spent on evaluation varies with the nature of the project; its cost typically ranges from 5% to 20% of the project budget.

Expert Reviews
Formal expert reviews have proven effective; they entail one-half day to one week of effort.
– a lengthy training period may be required to explain the task domain or operational procedures
Expert review methods:
1. Heuristic evaluation – e.g., against the Eight Golden Rules
2. Guidelines review – reviewers must first master the guidelines
3. Consistency inspection – terminology, color, fonts, schemes, layouts
4. Cognitive walkthrough – individual work as well as group discussion
5. Formal usability inspection – present the interface and discuss its merits and weaknesses
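To make the findings of an expert review actionable, reviewers commonly record each problem with the heuristic it violates and a severity rating. The following is a minimal sketch of collating such findings; the 0-4 severity scale follows Nielsen's common convention, and the findings themselves are invented for illustration.

```python
# Collate heuristic-evaluation findings by heuristic and severity.
# Severity scale (a common convention): 0 = not a problem ... 4 = catastrophe.
from collections import defaultdict

findings = [
    {"heuristic": "consistency", "severity": 3, "screen": "checkout"},
    {"heuristic": "error prevention", "severity": 4, "screen": "checkout"},
    {"heuristic": "consistency", "severity": 2, "screen": "search"},
]

def summarize(findings):
    """Group findings by heuristic, keeping the worst severity seen."""
    worst = defaultdict(int)
    for f in findings:
        worst[f["heuristic"]] = max(worst[f["heuristic"]], f["severity"])
    # Sort so the worst problems come first for the design team.
    return sorted(worst.items(), key=lambda kv: -kv[1])

print(summarize(findings))
# prints [('error prevention', 4), ('consistency', 3)]
```

A summary like this helps the design team prioritize fixes when several reviewers report overlapping problems.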

Expert Reviews (cont.)
Reviews are scheduled at several points in the development process:
– when experts are available
– when the design team is ready for feedback
Three to five expert reviewers are typically needed.
Caveats:
– experts may not have an adequate understanding of the task domain or user communities
– even experienced expert reviewers may be unaware of problems encountered by typical users, especially first-time users

Usability Testing and Laboratories
Usability testing is designed to find flaws in user interfaces.

Usability Testing and Laboratories (cont.)
The emergence of usability testing and laboratories:
– since the early 1980s, reflecting a shift toward user needs; usability became an important concern
Usability-testing benefits:
– sped up many projects
– produced dramatic cost savings
A typical modest usability lab has two 10-by-10-foot areas: one for the participants to do their work, and another, separated by a half-silvered mirror, for the testers and observers.
Participants are chosen to represent the intended users, considering their:
– background in computing
– experience with the task
– motivation, education, and ability with the natural language used in the interface

Usability Testing and Laboratories (cont.)
Videotaping participants performing tasks is often valuable for later review and for showing designers or managers the problems that users encounter.
Forms of usability testing:
– Paper mockups
– Discount usability testing
– Competitive usability testing
– Universal usability testing
– Field tests and portable labs
– Remote usability testing
– Can-you-break-this tests
Limitations:
– emphasizes first-time usage
– provides limited coverage of interface features

Ethics
Testing can be a distressing experience for participants:
– pressure to perform, while errors are inevitable
– feelings of inadequacy
– competition with other subjects
Golden rule: subjects should always be treated with respect.

Managing Subjects Ethically
Before the test:
– Don't waste the user's time
  - Use pilot tests to debug experiments, questionnaires, etc.
  - Have everything ready before the user shows up
– Make users feel comfortable
  - Emphasize that it is the system that is being tested, not the user
  - Acknowledge that the software may have problems
  - Let users know they can stop at any time
– Maintain privacy
  - Tell users that individual test results will be kept completely confidential
– Inform the user
  - Explain any monitoring that is being used
  - Answer all of the user's questions (but avoid introducing bias)
– Use only volunteers
  - The user must sign an informed-consent form

Managing Subjects Ethically (cont.)
During the test:
– Don't waste the user's time
  - Never have the user perform unnecessary tasks
– Make users comfortable
  - Try to give the user an early success experience
  - Keep a relaxed atmosphere in the room (coffee, breaks, etc.)
  - Hand out test tasks one at a time
  - Never indicate displeasure with the user's performance
  - Avoid disruptions
  - Stop the test if it becomes too unpleasant
– Maintain privacy
  - Do not allow the user's management to observe the test

Managing Subjects Ethically (cont.)
After the test:
– Make the user feel comfortable
  - State that the user has helped you find areas for improvement
– Inform the user
  - Answer any questions about the experiment that could have biased the results had they been answered earlier
– Maintain privacy
  - Never report results in a way that allows individual users to be identified
  - Show videotapes outside the research group only with the user's permission

Survey Instruments
Written user surveys complement usability tests and expert reviews; they are familiar, inexpensive, and generally acceptable. Perform a pretest with a pilot sample to improve results.
Keys to success:
1. Clear, obtainable goals
  - tied to components of the Objects and Actions Interface model
  - users can be asked for subjective impressions about specific aspects of the interface, such as task-domain objects and actions, the syntax of inputs, and the design of displays
2. Focused items that help attain those goals

Survey Instruments (cont.)
Other information that can be collected:
– user background: age, gender, origins, education, income
– experience with computers: specific applications or software packages, length of time, depth of knowledge
– job responsibilities: decision-making influence, managerial roles, motivation
– personality style: introvert vs. extrovert, risk-taking vs. risk-averse, early vs. late adopter, systematic vs. opportunistic
– reasons for not using an interface: inadequate services, too complex, too slow
– familiarity with features: printing, macros, shortcuts, tutorials
– feeling state after using an interface: confused vs. clear, frustrated vs. in-control, bored vs. excited
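Feeling-state items like those above are typically administered as bipolar ("semantic differential") scales and then averaged across respondents. A minimal scoring sketch follows; the item names, scale anchors, and responses are invented for illustration.

```python
# Score bipolar survey items (1-7 scale; 7 = the right-hand anchor).
from statistics import mean

items = {
    "clarity":    ("confused", "clear"),
    "control":    ("frustrated", "in-control"),
    "engagement": ("bored", "excited"),
}

# One dict per respondent (invented data).
responses = [
    {"clarity": 6, "control": 5, "engagement": 4},
    {"clarity": 5, "control": 3, "engagement": 6},
    {"clarity": 7, "control": 4, "engagement": 5},
]

for item, (low, high) in items.items():
    scores = [r[item] for r in responses]
    print(f"{low} vs. {high}: mean {mean(scores):.2f} (n={len(scores)})")
```

Reporting a mean per item (with the sample size) makes it easy to compare releases of the interface over time.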

Surveys (cont.)
Online surveys:
– avoid the cost of printing and the extra effort needed for distribution and collection of paper forms
– can remind those who have not yet filled out the survey
Many people prefer to answer a brief survey displayed on a screen rather than fill in and return a printed form, but online administration risks a biased sample.

Acceptance Test
The customer or manager sets objective and measurable goals for hardware and software performance. If the completed product fails to meet these acceptance criteria, the system must be reworked until success is demonstrated.
Specific, measurable criteria are needed for the user interface:
– time to learn specific functions
– speed of task performance
– rate of errors by users
– human retention of commands over time
– subjective user satisfaction
The objective is to uncover as many problems as possible in the prerelease phases.

Acceptance Test (cont.)
In a large system, there may be eight or ten such tests to carry out on different components of the interface and with different user communities.
Once acceptance testing has been successful, there may be a period of field testing before national or international distribution.
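Because acceptance criteria must be explicit and measurable, checking a set of measured results against them can be mechanical. The sketch below illustrates this; the thresholds and measurements are invented, since in practice the criteria come from the customer's contract.

```python
# Check measured usability results against explicit acceptance criteria.
# Thresholds and measurements below are hypothetical.

criteria = {
    "time_to_learn_min":  ("<=", 30),    # minutes to learn key functions
    "task_time_sec":      ("<=", 120),   # benchmark task-completion time
    "error_rate":         ("<=", 0.05),  # errors per task
    "satisfaction_score": (">=", 4.0),   # 1-5 subjective scale
}

measured = {
    "time_to_learn_min": 24,
    "task_time_sec": 131,
    "error_rate": 0.03,
    "satisfaction_score": 4.2,
}

def evaluate(criteria, measured):
    """Return the criteria the product failed to meet."""
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return [name for name, (op, limit) in criteria.items()
            if not ops[op](measured[name], limit)]

failures = evaluate(criteria, measured)
print("rework needed for:", failures)
# prints rework needed for: ['task_time_sec']
```

Any non-empty failure list means the system must be reworked and retested before acceptance, matching the process described above.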

Evaluation During Active Use
Successful active use requires constant attention from dedicated managers, user-services personnel, and maintenance staff.
Methods:
1. Interviews and focus-group discussions
  - individual interviews: pursue specific issues of concern
  - group discussions: determine the universality of comments
2. Continuous user-performance data logging, revealing:
  - patterns of system usage
  - speed of user performance
  - rate of errors
  - frequency of requests for online assistance
3. Online or telephone consultants
4. Online suggestion box or trouble reporting
5. Discussion groups and newsgroups
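Continuous data logging produces an event stream from which the measures listed above can be derived. A minimal sketch of mining such a log follows; the event schema and the entries are invented for illustration.

```python
# Derive usage measures from a hypothetical user-performance event log.

log = [
    {"user": "u1", "event": "task_done", "seconds": 40},
    {"user": "u1", "event": "error"},
    {"user": "u2", "event": "task_done", "seconds": 55},
    {"user": "u2", "event": "help_request"},
    {"user": "u2", "event": "task_done", "seconds": 35},
]

tasks  = [e for e in log if e["event"] == "task_done"]
errors = sum(e["event"] == "error" for e in log)
helps  = sum(e["event"] == "help_request" for e in log)

# Speed of user performance: mean task-completion time.
mean_time = sum(e["seconds"] for e in tasks) / len(tasks)

print(f"tasks: {len(tasks)}, mean time: {mean_time:.1f}s, "
      f"errors: {errors}, help requests: {helps}")
```

Tracking these measures over weeks of active use highlights where online assistance or redesign is most needed.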

Controlled Psychologically Oriented Experiments
The scientific method as applied to human-computer interaction:
– Deal with a practical problem and consider the theoretical framework
– State a lucid and testable hypothesis
– Identify a small number of independent variables
– Choose the dependent variables that will be measured
– Judiciously select subjects, and carefully or randomly assign subjects to groups
– Control for biasing factors (non-representative samples of subjects or tasks, inconsistent testing procedures)
– Apply statistical methods to data analysis
– Resolve the practical problem, refine the theory, and give advice to future researchers

Controlled Psychologically Oriented Experiments (cont.)
Effective measurements and techniques for evaluation still need to be developed.
Controlled experiments can help fine-tune the human-computer interface of actively used systems: performance can be compared against a control group, and dependent measures can include performance times, subjective user satisfaction, error rates, and user retention over time.
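The "apply statistical methods" step above often comes down to comparing a dependent measure, such as task-completion time, between a treatment group and a control group. The sketch below computes Welch's t statistic for two independent samples; the timing data is invented, and a real analysis would also derive a p-value from the t distribution.

```python
# Compare task-completion times (seconds) between two interface variants
# using Welch's t statistic. Data is hypothetical.
from math import sqrt
from statistics import mean, variance

control   = [52, 48, 61, 55, 50]   # old interface (control group)
treatment = [41, 45, 38, 47, 40]   # new interface (treatment group)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(control, treatment)
print(f"control mean {mean(control):.1f}s vs. treatment {mean(treatment):.1f}s, "
      f"t = {t:.2f}")
```

A large positive t here would support the hypothesis that the new interface is faster; the practical problem is then resolved and the result feeds back into theory, as the slide's final step describes.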

Summary
– There is a definite need to evaluate user interfaces
– Expert reviews provide comprehensive evaluation
– Usability labs provide practical observation of user interaction
– Survey instruments provide much information but should be administered ethically
– Acceptance tests should contain explicit, measurable criteria to ensure project requirements have been satisfied
– Evaluation of interfaces during active use ensures a higher level of user satisfaction
– Psychologically oriented experiments provide empirical evidence for evaluating user interfaces