

Slide 2
- Designers can become so entranced with their creations that they may fail to evaluate them adequately.
- Experienced designers have attained the wisdom and humility to know that extensive testing is a necessity.
- The determinants of the evaluation plan include:
  - stage of design (early, middle, late)
  - novelty of the project (well defined vs. exploratory)
  - number of expected users
  - criticality of the interface (life-critical medical system vs. museum-exhibit support)
  - costs of the product and finances allocated for testing
  - time available
  - experience of the design and evaluation team

Slide 3
- Recommendations need to be based on observational findings.
- The design team needs to be involved in research on the drawbacks of the current system design.
- Tools and techniques are evolving.
- Evaluation plans range from an ambitious two-year test with multiple phases for a new national air-traffic-control system to a three-day test with six users for a small internal web site.
- Costs range from 20% of a project's budget down to 5%.
- Usability testing has become an established and accepted part of the design process.
- Two methods:
  - Analytic: theory, models, guidelines (experts)
  - Empirical: observations, surveys (users)

Slide 4: Expert reviews
- While informal demos to colleagues or customers can provide some useful feedback, more formal expert reviews have proven to be effective.
- Expert reviews entail one-half day to one week of effort, although a lengthy training period may sometimes be required to explain the task domain or operational procedures.
- There is a variety of expert-review methods to choose from:
  - Heuristic evaluation
  - Guidelines review
  - Consistency inspection
  - Cognitive walkthrough
  - Metaphors of human thinking
  - Formal usability inspection

Slide 5
- Expert reviews can be scheduled at several points in the development process, when experts are available and the design team is ready for feedback.
- Different experts tend to find different problems in an interface, so three to five expert reviewers can be highly productive, as can complementary usability testing.
- The danger with expert reviews is that the experts may not have an adequate understanding of the task domain or user communities.
- Even experienced expert reviewers have great difficulty knowing how typical users, especially first-time users, will really behave.

Slide 6: Heuristic Evaluation
- A heuristic is a guideline, general principle, or rule of thumb that can guide a design decision or be used to critique a decision that has already been made.
- Heuristic evaluation, developed by Jakob Nielsen and Rolf Molich, is a method for structuring the critique of a system using a set of relatively simple and general heuristics.
- It can be performed on a design specification, so it is useful for evaluating early designs; but it can also be used on prototypes, storyboards, and fully functioning systems. It is therefore a flexible, relatively cheap approach, and is often considered a discount usability technique.

Slide 7: Heuristic Evaluation
- The general idea behind heuristic evaluation is that several evaluators independently critique a system to come up with potential usability problems.
- It is important that there be several of these evaluators and that the evaluations be done independently.
- To aid the evaluators in discovering usability problems, a set of 10 heuristics is provided.

Slide 8: Heuristic Evaluation
- Each evaluator assesses the system and notes violations of any of these heuristics that would indicate a potential usability problem.
- The evaluator also assesses the severity of each usability problem, based on four factors: how common the problem is, how easy it is for the user to overcome, whether it is a one-off or a persistent problem, and how seriously the problem will be perceived.
- These can be combined into an overall severity rating on a scale of 0–4:
  - 0 = I don't agree that this is a usability problem at all
  - 1 = Cosmetic problem only: need not be fixed unless extra time is available on the project
  - 2 = Minor usability problem: fixing this should be given low priority
  - 3 = Major usability problem: important to fix, so should be given high priority
  - 4 = Usability catastrophe: imperative to fix this before the product can be released

Slide 9: Heuristic Evaluation
Nielsen's ten heuristics are:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose and recover from errors
10. Help and documentation
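A minimal sketch (not part of Nielsen and Molich's method itself) of how the ten heuristics and the 0–4 severity scale from slide 8 could be recorded during an evaluation; the Finding class and the median-based aggregation are illustrative assumptions.

```python
from dataclasses import dataclass
from statistics import median

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose and recover from errors",
    "Help and documentation",
]

@dataclass
class Finding:
    heuristic: int    # 1-10, index into NIELSEN_HEURISTICS
    description: str  # what the evaluator observed
    severity: int     # the 0-4 scale from slide 8

# Several evaluators rate the same problem independently (hypothetical
# data); the median resists a single outlying rating.
ratings = [3, 2, 3]
print(median(ratings))  # -> 3: major problem, fix with high priority
```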

Slide 10: Cognitive Walkthrough Evaluation
- Cognitive walkthrough was originally proposed, and later revised, by Polson and colleagues [294, 376] as an attempt to introduce psychological theory into the informal and subjective walkthrough technique.
- Walkthroughs require a detailed review of a sequence of actions.
- In a code walkthrough, the sequence represents a segment of program code that is stepped through by the reviewers to check certain characteristics.
- In a cognitive walkthrough, the sequence of actions refers to the steps that an interface will require a user to perform in order to accomplish some known task.

Slide 11: Cognitive Walkthrough Evaluation
- The evaluators then 'step through' that action sequence to check it for potential usability problems.
- Usually, the main focus of the cognitive walkthrough is to establish how easy a system is to learn.
- Experience shows that many users prefer to learn a system by exploring its functionality hands-on, rather than through training or study of a user's manual, so the checks made during the walkthrough ask questions that address this exploratory learning.
- To do this, the evaluators go through each step in the task and provide a 'story' about why that step is or is not good for a new user.

Slide 12: Cognitive Walkthrough Evaluation
To do a walkthrough you need four things:
1. A specification or prototype of the system. It doesn't have to be complete, but it should be fairly detailed. Details such as the location and wording of a menu can make a big difference.
2. A description of the task the user is to perform on the system. This should be a representative task that most users will want to do.
3. A complete, written list of the actions needed to complete the task with the proposed system.
4. An indication of who the users are and what kind of experience and knowledge the evaluators can assume about them.

Slide 13: Cognitive Walkthrough Evaluation
- Given this information, the evaluators step through the action sequence (identified in item 3) to critique the system and tell a believable story about its usability.
- To do this, the evaluators try to answer the following four questions for each step in the action sequence:
  1. Is the effect of the action the same as the user's goal at that point?
  2. Will users see that the action is available?
  3. Once users have found the correct action, will they know it is the one they need?
  4. After the action is taken, will users understand the feedback they get?
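The four questions lend themselves to a per-step walkthrough form. The sketch below is one possible encoding, with hypothetical class and field names; it is not a standard tool.

```python
from dataclasses import dataclass

# The four questions asked of every step in the action sequence.
QUESTIONS = [
    "Is the effect of the action the same as the user's goal at that point?",
    "Will users see that the action is available?",
    "Once users have found the correct action, will they know it is the one they need?",
    "After the action is taken, will users understand the feedback they get?",
]

@dataclass
class WalkthroughStep:
    user_action: str     # UA: what the user does
    system_display: str  # SD: how the system responds
    answers: list        # one bool per question above
    story: str = ""      # why this step is or is not good for a new user

    def problems(self):
        """Questions this step fails, i.e. potential usability problems."""
        return [q for q, ok in zip(QUESTIONS, self.answers) if not ok]

step = WalkthroughStep("Press the 'timed record' button",
                       "Display switches to timed-record mode",
                       answers=[True, True, False, True],
                       story="The icon on the button may not be recognized.")
print(step.problems())
```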

Slide 14: Cognitive Walkthrough Evaluation: Example: programming a video recorder by remote control
- Imagine we are designing a remote control for a video recorder (VCR) and are interested in the task of programming the VCR to do timed recordings.
- The picture on the left illustrates the handset in normal use; the picture on the right, after the timed-record button has been pressed.
- The VCR allows the user to program up to three timed recordings in different 'streams'. The next available stream number is automatically assigned.
- We want to know whether our design supports the user's task.
- We begin by identifying a representative task: program the video to time-record a program starting at 18.00 and finishing at 19.15 on channel 4 on 24 February 2005.

Slide 15: Cognitive Walkthrough Evaluation: Example (continued) [figure: the remote-control handset]

Slide 16: Cognitive Walkthrough Evaluation: Example (continued)
- We will assume that the user is familiar with VCRs but not with this particular design.
- The next step in the walkthrough is to identify the action sequence for this task. We specify this in terms of the user's action (UA) and the system's display or response (SD). The initial display is as in the left-hand picture.
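To make the UA/SD notation concrete, here is one plausible action list for the timed-recording task; the specific steps are assumptions for illustration, not the sequence from the original slides.

```python
# Hypothetical UA/SD action sequence for: record 18.00-19.15,
# channel 4, 24 February 2005.
action_sequence = [
    ("UA 1: press the 'timed record' button",
     "SD 1: display switches to timed-record mode, stream 1 assigned"),
    ("UA 2: enter the start time, 18.00",
     "SD 2: start time shown on the display"),
    ("UA 3: enter the finish time, 19.15",
     "SD 3: finish time shown on the display"),
    ("UA 4: select channel 4",
     "SD 4: channel number shown on the display"),
    ("UA 5: enter the date, 24 February 2005",
     "SD 5: date shown on the display"),
    ("UA 6: confirm and return to normal mode",
     "SD 6: display returns to normal use, recording armed"),
]

for ua, sd in action_sequence:
    print(ua)
    print("  ", sd)
```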

Slide 17: Cognitive Walkthrough Evaluation: Example (continued) [figure: the UA/SD action sequence for the task]

Slide 18: Cognitive Walkthrough Evaluation: Example (continued)
- Having determined our action list, we are in a position to proceed with the walkthrough. For each action (1–10) we must answer the four questions and tell a story about the usability of the system.

Slide 19: Cognitive Walkthrough Evaluation: Example (continued)
- So we find we have a potential usability problem relating to the icon used on the 'timed record' button.
- We would now have to establish whether our target user group could correctly distinguish this icon from others on the remote.
- The analysis proceeds in this fashion, with a walkthrough form completed for each action.
- A number of experimental conditions are considered which differ only in the values of certain controlled variables.


Slide 21
- Usability testing and usability laboratories have been emerging since the early 1980s.
- Usability testing not only sped up many projects but also produced dramatic cost savings.
- The movement towards usability testing stimulated the construction of usability laboratories.
- A typical modest usability lab has two 10-by-10-foot areas: one for the participants to do their work and another, separated by a half-silvered mirror, for the testers and observers.
- Participants should be chosen to represent the intended user communities, with attention to background in computing, experience with the task, motivation, education, and ability with the natural language of the interface.

Slide 22
- Participation should always be voluntary, and informed consent should be obtained. Professional practice is to ask all subjects to read and sign a statement like this one:
  - I have freely volunteered to participate in this experiment.
  - I have been informed in advance what my task(s) will be and what procedures will be followed.
  - I have been given the opportunity to ask questions, and have had my questions answered to my satisfaction.
  - I am aware that I have the right to withdraw consent and to discontinue participation at any time, without prejudice to my future treatment.
  - My signature below may be taken as affirmation of all the above statements; it was given prior to my participation in this study.
- Institutional Review Boards (IRBs) often govern the human-subjects testing process.

Slide 23
- Videotaping participants performing tasks is often valuable for later review and for showing designers or managers the problems that users encounter.
- Take care not to interfere with participants.
- Invite users to think aloud (sometimes referred to as concurrent think-aloud) about what they are doing as they perform the task.

Slide 24
Many variant forms of usability testing have been tried:
- Paper mockups
- Discount usability testing
- Competitive usability testing
- Universal usability testing
- Field tests and portable labs
- Remote usability testing
- Can-you-break-this tests

Slide 25: [figure: in this eye-tracking setup, the participant wears a helmet that monitors and records where on the screen the participant is looking]

Slide 26: [figure: more portable eye-tracking devices]


Slide 28
- Written user surveys are a familiar, inexpensive, and generally acceptable companion to usability tests and expert reviews.
- Keys to successful surveys:
  - clear goals in advance
  - development of focused items that help attain the goals
- Survey goals can be tied to the components of the Objects and Actions Interface (OAI) model of interface design. Users could be asked for their subjective impressions about specific aspects of the interface, such as the representation of:
  - task-domain objects and actions
  - syntax of inputs and design of displays

Slide 29
Other goals would be to ascertain:
- users' background (age, gender, origins, education, income)
- experience with computers (specific applications or software packages, length of time, depth of knowledge)
- job responsibilities (decision-making influence, managerial roles, motivation)
- personality style (introvert vs. extrovert, risk-taking vs. risk-averse, early vs. late adopter, systematic vs. opportunistic)
- reasons for not using an interface (inadequate services, too complex, too slow)
- familiarity with features (printing, macros, shortcuts, tutorials)
- their feeling state after using an interface (confused vs. clear, frustrated vs. in control, bored vs. excited)

Slide 30
- Online surveys avoid the cost of printing and the extra effort needed for distribution and collection of paper forms.
- Many people prefer to answer a brief survey displayed on a screen instead of filling in and returning a printed form.

Slide 31
- An alternative method of querying the user is to administer a questionnaire.
- It can be used to reach a wider participant group, it takes less time to administer, and it can be analyzed more rigorously.
- It can be administered at various points in the design process, including during requirements capture, task analysis, and evaluation, in order to get information on the user's needs, preferences, and experience.
- There are a number of styles of question that can be included in the questionnaire (see the sketch below):
  - general
  - open-ended
  - scalar
  - multi-choice
  - ranked
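As a sketch, the five question styles could be modelled as one small data type; the class name, field names, and sample items are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    style: str           # 'general' | 'open-ended' | 'scalar' | 'multi-choice' | 'ranked'
    options: tuple = ()  # used by the scalar, multi-choice and ranked styles

questionnaire = [
    Question("How old are you?", "general"),
    Question("What would you most like to change about the interface?", "open-ended"),
    Question("The system was easy to learn.", "scalar",
             options=(1, 2, 3, 4, 5)),  # e.g. 1 = disagree ... 5 = agree
    Question("Which input device do you use most?", "multi-choice",
             options=("mouse", "keyboard", "touch screen")),
    Question("Rank these features by usefulness.", "ranked",
             options=("search", "online help", "macros")),
]
```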


Slide 33
- Successful active use requires constant attention from dedicated managers, user-services personnel, and maintenance staff.
- Perfection is not attainable, but percentage improvements are possible.
- Interviews and focus-group discussions:
  - Interviews with individual users can be productive because the interviewer can pursue specific issues of concern.
  - Group discussions are valuable for ascertaining the universality of comments.

Slide 34
- Continuous user-performance data logging: the software architecture should make it easy for system managers to collect data about
  - patterns of system usage
  - speed of user performance
  - rate of errors
  - frequency of requests for online assistance
- A major benefit is guidance to system maintainers in optimizing performance and reducing costs for all participants.
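A minimal sketch of what such logging could look like; the event names, log path, and JSON-lines format are assumptions, not a standard API.

```python
import json
import time

LOG_PATH = "usage_log.jsonl"  # hypothetical log file

def log_event(kind, **details):
    """Append one usage event as a JSON line for later analysis."""
    record = {"t": time.time(), "kind": kind, **details}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# The four measures from the slide:
log_event("command", name="save", duration_ms=180)  # usage patterns + speed
log_event("error", name="invalid_date")             # rate of errors
log_event("help", topic="timed_record")             # requests for assistance
```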

Slide 35
- Online or telephone consultants, e-mail, and online suggestion boxes:
  - Many users feel reassured if they know human assistance is available.
  - On some network systems, the consultants can monitor the user's computer and see the same displays that the user sees.
- Online suggestion boxes or e-mail trouble reporting:
  - Electronic mail to the maintainers or designers.
  - For some users, writing a letter may be seen as requiring too much effort.

Slide 36
- Discussion groups, wikis, and newsgroups:
  - permit postings of open messages and questions
  - some are independent, e.g. America Online and Yahoo!
  - topic lists
  - sometimes moderators
  - social systems
- Comments and suggestions should be encouraged.

Slide 37: [figure: bug report using Google's Chrome browser (http://www.google.com/chrome/)]

Slide 38
- design: what it is, interventions, goals, constraints
- the design process: what happens when
- users: who they are, what they are like ...
- scenarios: rich stories of design
- navigation: finding your way around a system
- iteration and prototypes: never get it right first time!

Slide 39: WHAT IS DESIGN?
- A simple definition: achieving goals within constraints.
- Golden rule of design: understand your materials.

Slide 40: WHAT IS DESIGN?
Understand your materials:
- understand computers: limitations, capacities, tools, platforms
- understand people: psychological and social aspects, human error
- and their interaction

Slide 41: THE PROCESS OF DESIGN
[figure: the design process as a flow: requirements (what is there vs. what is wanted) → analysis (scenarios, task analysis) → design (guidelines, principles) → iterate and prototype (dialogue notations, precise specification, evaluation heuristics) → implement and deploy (architectures, documentation, help)]

Slide 42: THE PROCESS OF DESIGN
- Requirements: what is there and what is wanted ...
- Analysis: ordering and understanding
- Design: what to do and how to decide
- Iteration and prototyping: getting it right ... and finding what is really needed!
- Implementation and deployment: making it and getting it out there

Slide 43
- widget choice: menus, buttons etc.
- screen design
- application navigation design
- environment: other apps, O/S

Slide 44 (the same levels, for the web)
- widget choice: elements and tags
- screen design: page design
- navigation design: site structure
- environment: the web, browser, external links

Slide 45 (the same levels, for physical devices)
- widget choice: controls (buttons, knobs, dials)
- screen design: physical layout
- navigation design: modes of device
- environment: the real world

Slide 46: think about structure
- within a screen
- local: looking from this screen out
- global: structure of site, movement between screens
- relationship with other applications

Slide 47: Local structure [figure: a map with a start point and a goal]

Slide 48: Local structure [figure: progress with local knowledge only ...]

Slide 49: Local structure [figure: ... but can get to the goal]

Slide 50: Local structure [figure: ... try to avoid these bits!]

Slide 51: Local structure
- knowing where you are
- knowing what you can do
- knowing where you are going, or what will happen
- knowing where you've been, or what you've done

Slide 52: Global structure [figure: hierarchy: the system → info and help, management, messages; management → add user, remove user]

Slide 53: Global structure [figure: screen network: main screen ↔ add user, remove user; remove user → confirm]

Slide 54: Wider still ...
- style issues: platform standards, consistency
- functional issues: cut and paste
- navigation issues: embedded applications, links to other apps ... the web

Slide 55
- grouping of items
- order of items
- decoration: fonts, boxes etc.
- alignment of items
- white space between items

Slide 56: grouping of items
- logically together → physically together
- Billing details: name, address, credit card no.
- Delivery details: name, address, delivery time
- Order details:
  item                      quantity   cost/item   cost
  size 10 screws (boxes)    7          3.71        25.97
  ...                       ...        ...         ...

Slide 57: order of items
- think! what is the natural order?
- order should match screen order!
- use boxes, space etc.
- set up tabbing right!

Slide 58: decoration
- use boxes to group logical items
- use fonts for emphasis and headings, but not too many!!
[figure: the alphabet rendered in contrasting fonts]

Slide 59: alignment of text
- You read from left to right (English and European) → align the left-hand side.
[figure: the same list of titles (Willy Wonka and the Chocolate Factory; Winston Churchill - A Biography; Wizard of Oz; Xena - Warrior Princess) shown right-aligned, "fine for special effects but hard to scan", and left-aligned, "boring but readable!"]

Slide 60: alignment of names
- Usually scanning for surnames → make it easy!
[figure: "Alan Dix / Janet Finlay / Gregory Abowd / Russell Beale" is hard to scan; "Dix, Alan / Finlay, Janet / Abowd, Gregory / Beale, Russell" puts the surnames where the eye scans]
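A small sketch of the transformation the slide recommends: reorder "First Last" names surname-first and sort them, so a scanning eye meets the surnames in order. The helper name is illustrative.

```python
names = ["Alan Dix", "Janet Finlay", "Gregory Abowd", "Russell Beale"]

def surname_first(full_name):
    # "Alan Dix" -> "Dix, Alan"; splits on the last space.
    first, _, last = full_name.rpartition(" ")
    return f"{last}, {first}"

for entry in sorted(surname_first(n) for n in names):
    print(entry)
# Abowd, Gregory
# Beale, Russell
# Dix, Alan
# Finlay, Janet
```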

Slide 61: alignment of numbers
- Think purpose! Which is biggest?
[figure: an unaligned column of numbers: 532.56, 179.3, 256.317, 15, 73.948, 1035, 3.142, 497.6256]

Slide 62: alignment of numbers
- Visually: long number = big number.
- Align decimal points, or right-align integers.
[figure: the numbers 627.865, 1.005763, 382.583, 2502.56, 432.935, 2.0175, 652.87, 56.34 aligned on the decimal point]
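A sketch of the alignment rule in code: split each number at the decimal point and right-align the integer part, so the decimal points line up and longer numbers look bigger.

```python
values = [627.865, 1.005763, 382.583, 2502.56, 432.935, 2.0175, 652.87, 56.34]

# The widest integer part determines the alignment column.
int_width = max(len(str(int(v))) for v in values)

for v in values:
    whole, _, frac = str(v).partition(".")
    print(f"{whole:>{int_width}}.{frac}")
#  627.865
#    1.005763
#  382.583
# 2502.56
# ...
```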

Slide 63: multiple columns
- Scanning across gaps is hard (often hard to avoid with large database fields):
  sherbert             75
  toffee              120
  chocolate            35
  fruit gums           27
  coconut dreams       85

Slide 64: multiple columns
- Use leaders:
  sherbert ......... 75
  toffee ........... 120
  chocolate ......... 35
  fruit gums ........ 27
  coconut dreams .... 85
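Leader dots are easy to generate with padded string formatting; the field widths below are arbitrary choices.

```python
items = [("sherbert", 75), ("toffee", 120), ("chocolate", 35),
         ("fruit gums", 27), ("coconut dreams", 85)]

for name, price in items:
    # '.' fill carries the eye from the name across to the price.
    print(f"{name:.<20}{price:>5}")
# sherbert............   75
# toffee..............  120
```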

Slide 65: multiple columns
- Or greying:
[figure: the same item/price list with grey bands guiding the eye across the gap]

Slide 66: multiple columns
- Or even (with care!) 'bad' alignment:
[figure: the same item/price list with the names set right-aligned against the prices]

Slide 67: white space – space to separate

Slide 68: white space – space to separate
[figure: letters grouped purely by spacing: A, BC, D, EF]

Slide 69: white space – space to separate [figure]

Slide 70: physical controls
- grouping of items
[figure: microwave control panel with grouped controls: defrost settings, type of food, time to cook]

Slide 71: physical controls
- grouping of items
- order of items
[figure: controls numbered in the natural order of use: 1) type of heating, 2) temperature, 3) time to cook, 4) start]

Slide 72: physical controls
- grouping of items
- order of items
- decoration: different colours for different functions; lines around related buttons (temp up/down)

Slide 73: physical controls
- grouping of items
- order of items
- decoration
- alignment: centred text in buttons – easy to scan?

Slide 74: physical controls
- grouping of items
- order of items
- decoration
- alignment
- white space: gaps to aid grouping

Slide 75
- entering information
- knowing what to do
- affordances

Slide 76: entering information
- forms, dialogue boxes
- presentation + data input: similar layout issues
- alignment: N.B. different label lengths
- logical layout: use task analysis; groupings
- natural order for entering information: top-to-bottom, left-to-right (depending on culture)
[figure: three Name:/Address: form layouts with different label and field alignment – which works best?]

Slide 77: knowing what to do
- What is active and what is passive? Where do you click? Where do you type?
- Consistent style helps, e.g. underlined links on the web.
- Labels and icons: standards for common actions; language, e.g. bold = current state or action.

Slide 78: affordances
- Psychological term for physical objects: shape and size suggest actions (pick up, twist, throw); also cultural – buttons 'afford' pushing, a mug handle 'affords' grasping.
- For screen objects: a button-like object 'affords' a mouse click; physical-like objects suggest use.
- Culture of computer use: icons 'afford' clicking, or even double clicking ... not like real buttons!

Slide 79
- presenting information
- aesthetics and utility
- colour and 3D
- localisation & internationalisation

Slide 80: presenting information
- Purpose matters: sort order (which column? numeric or alphabetic?); text vs. diagram; scatter graph vs. histogram.
- Use paper presentation principles!
- But add interactivity: it softens design choices, e.g. re-ordering columns (see the sketch below).
[figure: the same file table sorted by name (chap1, chap10, chap11, chap12, ...) and by size (chap10, chap5, chap1, chap14, ...)]
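A sketch of interactive re-ordering, including a "natural" sort key so that chap10 does not sort before chap5 when ordering by name; the file data echoes the slide's table.

```python
import re

files = [("chap10", 12), ("chap5", 16), ("chap1", 17),
         ("chap14", 22), ("chap20", 27), ("chap8", 32)]

def natural_key(name):
    # Split into text and number runs: "chap10" -> ["chap", 10, ""]
    return [int(p) if p.isdigit() else p for p in re.split(r"(\d+)", name)]

by_name = sorted(files, key=lambda f: natural_key(f[0]))
by_size = sorted(files, key=lambda f: f[1])
print([n for n, _ in by_name])  # chap1, chap5, chap8, chap10, chap14, chap20
print([n for n, _ in by_size])  # chap10, chap5, chap1, chap14, chap20, chap8
```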

Slide 81: aesthetics and utility
- Aesthetically pleasing designs increase user satisfaction and improve productivity.
- Beauty and utility may conflict:
  - mixed-up visual styles → easy to distinguish
  - clean design with little differentiation → confusing
  - backgrounds behind text ... good to look at, but hard to read
- But they can work together.

Slide 82: colour and 3D
- Both are often used very badly!
- Colour:
  - older monitors have a limited palette
  - beware the colour blind!
  - use sparingly, to reinforce other information
- 3D effects:
  - good for physical information and some graphs

Slide 83
- over-use of colour, without very good reason (e.g. a kids' site)
- colour blindness
- poor use of contrast
- do adjust your set!
  - adjust your monitor to greys only
  - can you still read your screen?
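The "greys only" test can be approximated numerically: convert colours to perceived luminance (ITU-R BT.601 weights) and compare. The example colours are hypothetical.

```python
def luminance(r, g, b):
    """Perceived brightness (0-255) of an sRGB colour, BT.601 weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b

text = (200, 40, 40)        # red text ...
background = (40, 160, 40)  # ... on a green background

print(round(luminance(*text)), round(luminance(*background)))  # 88 110
# The hues look distinct, but the luminances are close: on a grey-only
# monitor (or for many colour-blind users) this text barely contrasts.
```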

Slide 84
- Localisation & internationalisation: changing interfaces for particular cultures and languages.
- Globalisation: try to choose symbols etc. that work everywhere.
- Simply change the language? Use a 'resource' database instead of literal text (see the sketch below) ... but translation changes sizes, left-right order etc.
- Deeper issues:
  - cultural assumptions and values
  - meanings of symbols, e.g. tick and cross: positive and negative in some cultures, but meaning the same thing (mark this) in others
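A minimal sketch of the 'resource database' idea: the interface looks text up by key rather than hard-coding literal strings. The keys, locales, and translations here are illustrative assumptions.

```python
RESOURCES = {
    "en": {"greeting": "Welcome", "quit": "Quit"},
    "fr": {"greeting": "Bienvenue", "quit": "Quitter"},
    "de": {"greeting": "Willkommen", "quit": "Beenden"},
}

def tr(key, locale="en"):
    """Look up a UI string for a locale, falling back to English."""
    table = RESOURCES.get(locale, RESOURCES["en"])
    return table.get(key, RESOURCES["en"][key])

print(tr("greeting", "fr"))  # Bienvenue
# As the slide warns, translated strings change length (and sometimes
# direction), so layout must adapt along with the text.
```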

