1 Usability Evaluation (Howell Istance)

2 Why Evaluate?
- discovering user problems
- testing whether a usability-related specification has been met
- comparing designs
- testing conformance to standards

3 Overview of approaches to evaluation
- user testing: observation and monitoring of usage
- expert reviewing
- usability engineering
- contextual enquiry
- experimental techniques

4 Usability Requirements
Usability of a system is usually related to one of the following areas:
- learnability: the time and effort required to reach a specified level of performance
- throughput: the speed of execution and the errors made by experienced users carrying out tasks
- flexibility: the extent to which the system can accommodate changes to tasks and environments beyond those first specified
- attitude: the extent of the positive attitude engendered in users by the system

5 Definition - ISO 9241-11
Usability: the degree to which specified users can achieve specified goals in a particular environment with effectiveness, efficiency and satisfaction.
- effectiveness: measures of the accuracy and completeness of the goals achieved
- efficiency: measures of the resources (e.g. time, money, effort) used to achieve the goals
- satisfaction: measures of the physical comfort and subjective acceptability of the product to its users
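A minimal sketch (not from the slides) of how these three ISO 9241-11 measures might be computed from test-session data; the session fields, the 1-7 satisfaction rating and the sample figures are illustrative assumptions.

```python
# Illustrative only: effectiveness, efficiency and satisfaction from
# hypothetical session records.
sessions = [
    # goal_achieved, task_time in seconds, satisfaction rating on a 1-7 scale
    {"goal_achieved": True,  "task_time": 95,  "satisfaction": 6},
    {"goal_achieved": True,  "task_time": 120, "satisfaction": 5},
    {"goal_achieved": False, "task_time": 240, "satisfaction": 2},
]

# Effectiveness: proportion of specified goals achieved (accuracy/completeness)
effectiveness = sum(s["goal_achieved"] for s in sessions) / len(sessions)

# Efficiency: resource spent (here, time) per goal achieved
completed = [s for s in sessions if s["goal_achieved"]]
efficiency = sum(s["task_time"] for s in completed) / len(completed)

# Satisfaction: mean subjective rating
satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

print(f"effectiveness={effectiveness:.0%}, "
      f"efficiency={efficiency:.0f}s per completed task, "
      f"satisfaction={satisfaction:.1f}/7")
```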

6 Components of a usability specification
- statement of the usability attribute
- statement of how it will be measured
- statement of the criteria that will represent attainment of the specification
- statement of the subset of users to which the specification applies
- statement of the pre-conditions of measurement (e.g. period of training)

Example (a library reference-finding system):
- usability attribute: learnability of the reference-finding system
- how it will be measured: time to retrieve the class number of a book, given its title and author
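The five components can be captured as a simple record. This is a sketch using the slide's reference-finding example for the first two fields; the criteria, user group and pre-conditions are filled with hypothetical values purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class UsabilitySpec:
    attribute: str      # which usability attribute is being specified
    measure: str        # how it will be measured
    criteria: str       # what counts as attaining the specification
    users: str          # subset of users the specification applies to
    preconditions: str  # e.g. amount of training before measurement

# The attribute and measure come from the slide's example; the remaining
# three values are hypothetical.
spec = UsabilitySpec(
    attribute="learnability of the reference-finding system",
    measure="time to retrieve the class number of a book, given its title and author",
    criteria="planned case: under 2 minutes (hypothetical)",
    users="first-year students with no prior use of the system (hypothetical)",
    preconditions="after a 10-minute demonstration (hypothetical)",
)
```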

7 Criteria of attainment
- worst case: the lowest acceptable level
- planned case: the target level of attainment
- best case: the agreed state-of-the-art level of the attribute
- now level: the present level of attainment in the current system
Performance below the planned case may still be acceptable if other aspects of the interface are good enough to compensate.
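A sketch of how a measured value could be checked against the four attainment levels; the attribute is a time measure (lower is better) and the numbers are invented for illustration.

```python
# Hypothetical attainment levels for "time to retrieve a class number", in seconds
levels = {"best": 45, "planned": 90, "worst": 180, "now": 150}

def classify(measured_seconds: float) -> str:
    """Report which attainment band a measured value falls into."""
    if measured_seconds <= levels["best"]:
        return "at or better than best case"
    if measured_seconds <= levels["planned"]:
        return "meets planned case"
    if measured_seconds <= levels["worst"]:
        return "between planned and worst case (may be acceptable)"
    return "below worst case: not acceptable"

print(classify(110))  # -> between planned and worst case (may be acceptable)
```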

8 Observation and monitoring usage
- direct and indirect observation
- verbal protocols
- user opinions
- software logging (see the sketch below)
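Software logging can be as simple as time-stamping interaction events to a file for later analysis. This is a minimal sketch; the file name and event names are assumptions, not part of the lecture material.

```python
import json
import time

LOG_PATH = "usage_log.jsonl"  # hypothetical log file, one JSON record per line

def log_event(event: str, **details) -> None:
    """Append a time-stamped interaction event for later analysis."""
    record = {"t": time.time(), "event": event, **details}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example calls the application code might make (event names are hypothetical):
log_event("task_started", task="edit_report")
log_event("menu_opened", menu="Format")
log_event("error_dialog_shown", message="No printer selected")
log_event("task_completed", task="edit_report")
```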

9 Direct Observation
- give subjects a series of standard tasks to complete using a prototype
- observe subjects completing the tasks under standardised conditions
- data collection is aimed at ensuring that qualitative descriptions of problems arising during task completion are captured
- what problems are likely in data recording?

10 Standard tasks in direct observation
- structure tasks in order of increasing difficulty (easy ones first)
- have a clear policy on subjects becoming stuck and on providing help
- have a reason for including each task (avoid unnecessary duplication)
- ensure (all) functional areas of interface usage are covered
- ensure tasks of sufficient complexity are included

11 Indirect observation - video
- enables post-session debriefing 'talk-through' (post-event protocols)
- enables quantitative data to be extracted, e.g. part-task timings (see the sketch below)
- serves as a diary and visual record of problems
- usually very time-consuming to analyse
- usability laboratories
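One way to extract part-task timings is to annotate the recording with time-stamped markers and take the differences between consecutive markers; a small sketch with invented timestamps and task labels.

```python
# Hypothetical annotations made while reviewing a recording: (seconds, marker)
annotations = [
    (0.0,   "start: locate file"),
    (42.5,  "start: edit heading"),
    (118.0, "start: save and print"),
    (176.5, "end of task"),
]

# Part-task timing = time between consecutive markers
for (t0, label), (t1, _) in zip(annotations, annotations[1:]):
    print(f"{label:<25} {t1 - t0:6.1f} s")
```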

12 Verbal protocols
A means of enhancing direct observation:
- the user articulates what they are thinking during task completion (think-aloud protocols)
- but doing this can alter normal behaviour
- the subject is likely to stop talking when undertaking complex cognitive activities
- the user may rationalise their behaviour in post-event protocols
- getting subjects to work in pairs (co-discovery) can overcome some of these problems

Notes: Think about driving a car: when the task of driving is not demanding, the driver can normally hold a conversation with a passenger. As soon as something happens that demands the driver's attention, conversation automatically stops while the driver attends to the driving task, and it resumes once the situation has passed. Post-event protocols occur when a user, say, watches a recording of an interaction session and talks through what they were thinking during the session; this needs to take place immediately after the session. A variation on this is where the investigator selects parts of the recorded session that appear to have caused the user problems, and the reasons for the apparent difficulty are talked through. Post-event analysis can add considerable time to an evaluation session.

13 Collecting user opinions
- interviews and questionnaires are suited to both qualitative and quantitative data collection
- interviews:
  - structured (fixed sequence of questions)
  - semi-structured (allows digressions, but all questions are covered)
  - flexible (exploration of the topic is governed by the user's views)
- what are the advantages and disadvantages of these?

Notes: If the emphasis is on qualitative data from a small number of users, rather than quantitative data, then an interview is likely to elicit more useful information. Questionnaires are useful when a relatively large number of opinions is collected and where the investigator wants to be confident that the questions have been presented to each user in the same way. This enables summaries of opinions to be compiled ('60% thought x').

14 Questionnaires
- contain closed questions (attitude scales) and open questions
- pre- and post-questionnaires obtain ratings on an issue before and after a design change
- can be used to standardise attitude measurement of single subjects following direct observation
- can be used to survey large user groups

Notes: Questionnaires are often badly designed, because designing them is perceived as being trivial.

15 Types of rating scales
A simple checklist:

  Can you use the following edit commands?
               yes   no   don't know
    duplicate
    paste

16 Multipoint checklist
Rate the usefulness of the duplicate command on the following scale:

  very useful <---------------> of no use

17 Likert Scale
A statement of opinion to which the subject expresses their level of agreement:

  "Computers can simplify complex problems"
  very much agree | agree | slightly agree | neutral | slightly disagree | disagree | strongly disagree
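Closed Likert items are usually coded onto a numeric scale before being summarised. A sketch assuming the 7-point scale above is coded from 1 ('strongly disagree') to 7 ('very much agree'); the response data are invented.

```python
# Hypothetical coding of the 7-point scale shown above
CODES = {
    "strongly disagree": 1, "disagree": 2, "slightly disagree": 3,
    "neutral": 4, "slightly agree": 5, "agree": 6, "very much agree": 7,
}

responses = ["agree", "neutral", "very much agree", "slightly disagree", "agree"]
scores = [CODES[r] for r in responses]

mean = sum(scores) / len(scores)
median = sorted(scores)[len(scores) // 2]  # adequate for an odd number of responses
print(f"mean={mean:.1f}, median={median}")
```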

18 Caution!
  "The help facility in system A is much better than the help facility in system B"
  very much agree | agree | slightly agree | neutral | slightly disagree | disagree | strongly disagree

What does 'strongly disagree' mean?

Notes: The response 'very much agree' is clear: A is much better than B. 'Strongly disagree' could mean 'I think B is much better than A', but it could also mean 'I think there is no difference between A and B, and so I strongly disagree with the opinion stated in the question'.

19 Semantic differential scale
Uses a series of bipolar adjectives and obtains a rating with respect to each.

  Rate the Beauxarts drawing package on the following dimensions
  (extremely / quite / slightly / neutral / slightly / quite / extremely):
    easy  ...  difficult
    clear ...  confusing
    fun   ...  dreary

Notes: This type of question needs to be followed by an open-ended question where the user can explain any negative responses that are given. Simply knowing that the package is 'extremely difficult', without knowing why, is of limited value.

20 Rank order
Place the following commands in order of usefulness (rank the most useful as 1, the least useful as 4):
  paste
  duplicate
  group
  clear

Notes: This question lacks task context, i.e. 'usefulness' for what? This doesn't matter if the question is asked after the user has completed a specific task, say as part of a user trial. The question would be fairly meaningless if it formed part of a general survey of user opinion of the interface to a word-processing package, for example: there are many document-preparation tasks the word processor could be used for, and the usefulness of a command would depend on which tasks were being considered.

21 Dos and Don'ts of questionnaire evaluation
- do be clear about the information you want to obtain: have a clear idea of what specifically you want information about, and ensure there are questions that directly address those issues
- don't risk subjects becoming demotivated, either because they are not interested in the questionnaire or because it is too long
- don't be lazy: focus questions on the specific interface, make sure every question applies to the interface being evaluated, and avoid 'not applicable' responses
- do provide a specific task reference for questions: if questions ask for opinions about particular details of using the interface, ensure the task context is clear
- don't assume that responses will be positive: although you may think the interface is very good, the questionnaire has to be objective and allow for as many negative comments as positive ones, with sufficient opportunity for users to justify negative attitudes as well as positive ones
- do pilot the questionnaire first

22 Structured Expert Reviews
- use 'experts' in HCI and the task domain to review the design, rather than subject-based testing
- methods vary according to how the review is structured
- two popular methods:
  - heuristic evaluation
  - cognitive walkthrough

23 Nielsen's Usability Heuristics (1-5)
- Visibility of system status: the system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
- Match between system and the real world: the system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
- User control and freedom: users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
- Consistency and standards: users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
- Error prevention: even better than good error messages is a careful design which prevents a problem from occurring in the first place.

24 Nielsen's Usability Heuristics (6-10)
- Recognition rather than recall: make objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
- Flexibility and efficiency of use: accelerators, unseen by the novice user, may often speed up the interaction for the expert user, so that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
- Aesthetic and minimalist design: dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
- Help users recognize, diagnose, and recover from errors: error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
- Help and documentation: even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
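Findings from a heuristic evaluation are often recorded as problems tagged with the heuristic they violate and a severity rating. A sketch with hypothetical problems, using a 0-4 severity scale (a common convention, not prescribed by the slides).

```python
# Each problem notes which heuristic it violates and a severity rating
# (0 = not a problem ... 4 = usability catastrophe).
problems = [
    {"heuristic": "Visibility of system status",
     "description": "No progress indicator while the catalogue search runs",
     "severity": 3},
    {"heuristic": "Error prevention",
     "description": "Delete command has no confirmation and no undo",
     "severity": 4},
    {"heuristic": "Recognition rather than recall",
     "description": "User must remember the stream number from a previous screen",
     "severity": 2},
]

# Report the most serious problems first
for p in sorted(problems, key=lambda p: p["severity"], reverse=True):
    print(f"[{p['severity']}] {p['heuristic']}: {p['description']}")
```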

25 Cognitive Walkthrough
- developed on the basis of the Cognitive Theory of Initial Learning
- intended for systems where the user 'guesses their way' through an interaction sequence
- the task is decomposed into paths to successful task completion, consisting of individual actions
- the method provides a set of guidelines to support development procedures and a checklist for evaluation

Notes: To do a walkthrough you need four things:
1. A description of the prototype of the system. It doesn't have to be complete, but it should be fairly detailed; details such as the location and wording of a menu can make a big difference.
2. A description of the task the user is to perform with the system. This should be a representative task that most users will want to do.
3. A complete, written list of the actions needed to complete the task with the given prototype.
4. An indication of who the users are and what kind of experience and knowledge the evaluators can assume about them.

26 Cognitive Walkthrough Checklist
- Will the correct action be sufficiently evident to the user?
- Will the user connect the correct action's description with what he or she is trying to do?
- Will the user interpret the system's response to the chosen action correctly; that is, will the user know whether he or she has made a right or a wrong choice?

Notes: The technique focuses on problems associated with the user breaking down the overall task into an appropriate structure of sub-tasks, each with its own goal:
- problems forming correct goals: failure to add goals, failure to drop goals, addition of spurious goals, premature loss of goals
- problems identifying actions: the correct action does not match the goal, incorrect actions match goals
- problems performing actions: physical difficulties, timeouts
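The walkthrough steps through each action on the correct path and records an answer to each checklist question. A sketch of one way to structure that record; the actions and notes are hypothetical, loosely based on the video-recorder example on the next slide.

```python
# Each entry answers the three checklist questions for one action on the
# correct path; 'notes' records the evaluator's reasoning (all hypothetical).
walkthrough = [
    {"action": "press 'timed recording' button",
     "action_evident": True,          # will the correct action be evident?
     "matches_user_goal": True,       # does its description match the user's goal?
     "feedback_interpretable": False, # will the user know it was right or wrong?
     "notes": "display changes to a blank clock with no label"},
    {"action": "type stream number",
     "action_evident": False,
     "matches_user_goal": True,
     "feedback_interpretable": True,
     "notes": "nothing prompts for a stream number at this point"},
]

# Any 'False' answer flags a predicted learning problem
for step in walkthrough:
    failed = [k for k, v in step.items() if v is False]
    if failed:
        print(f"Problem at '{step['action']}': {', '.join(failed)}")
```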

27 Example of correct goal structure
Program video for timed recording
  press timed recording button
  Set stream
    type stream number
    press 'timed recording' button
  Set start time
    type start time (24-hour clock)
  ...
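The same goal/action decomposition can be written down as a nested data structure. A sketch following the video-recorder example above; the exact nesting is an interpretation of the slide, and the trailing sub-goals are omitted as they are there.

```python
# Goals map to lists of sub-goals or primitive actions; plain strings are actions.
goal_structure = {
    "Program video for timed recording": [
        "press timed recording button",
        {"Set stream": ["type stream number",
                        "press 'timed recording' button"]},
        {"Set start time": ["type start time (24-hour clock)"]},
        # ... further sub-goals omitted, as on the slide
    ]
}

def print_goals(node, depth=0):
    """Print the goal hierarchy with indentation."""
    if isinstance(node, str):
        print("  " * depth + "- " + node)
    else:
        for goal, children in node.items():
            print("  " * depth + goal)
            for child in children:
                print_goals(child, depth + 1)

print_goals(goal_structure)
```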

28 Usability Engineering
- extends the principles underlying usability specifications
- provides techniques to direct resources towards improving the system with respect to individual usability attributes
- defines usability goals through metrics
- sets planned levels of usability that have to be achieved
- analyses the impact of possible design solutions
- incorporates user-derived feedback in product design, iterating through the 'design-evaluate-design' loop until the planned levels are achieved

29 Impact Analysis
- partition the observed time: total time = task time + error time (see the sketch below)
- error time represents the potential saving if time spent in error can be removed by designing out the cause of the error
- determine which errors contribute the most to the error-time component
- allocating resources to designing out individual errors then gives a known saving in task completion time

Notes: Difficulty arises when deciding how to apportion total time into the components shown. What is an error? If a user selects a menu out of curiosity to see which items it contains, this is clearly not an error. If a user browses through menus searching for a command, this is not really an error either. If the user mistakenly selects a menu when searching for an item because they think it is there, then this is an error. Once an action has been classified as an error, it remains to determine how much time is actually spent in the error state.
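A sketch of the impact-analysis arithmetic: once observed time has been apportioned into task time and per-cause error time, the causes can be ranked by the saving available from designing each one out. All figures and error descriptions are invented for illustration.

```python
# Hypothetical timings (seconds) aggregated over a set of observed sessions
task_time = 1400.0          # time spent productively on the task
error_time_by_cause = {     # time spent in error states, by cause of error
    "wrong menu chosen when searching for 'duplicate'": 260.0,
    "mis-typed stream number accepted without a check": 180.0,
    "dialog dismissed before the message was read": 60.0,
}

error_time = sum(error_time_by_cause.values())
total_time = task_time + error_time
print(f"total={total_time:.0f}s, of which error time={error_time:.0f}s "
      f"({error_time / total_time:.0%})")

# Rank causes by the saving available if each were designed out
for cause, t in sorted(error_time_by_cause.items(),
                       key=lambda kv: kv[1], reverse=True):
    print(f"{t:5.0f}s potential saving: {cause}")
```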

30 Strengths of Usability Engineering
- agreeing on a definition of usability
- setting this definition in terms of metrics and usability goals
- putting usability on a par with other design goals
- providing a method for prioritising usability problems

31 ... and Weaknesses
- assumption that usability can be operationalised
- requirement that the practitioner is familiar with laboratory methods
- cost of conducting usability tests
- unnaturalness of the testing environment

Notes: The unnaturalness of the testing environment is exemplified by the differences between 'laboratory' and 'real world' conditions:
- work context: a lab test may use a cut-down version of a task to fit the available time, say editing a 6-page report; in the real world, reports of over 30 pages may be normal
- time context: in a lab test the time when the task will be completed is prescribed; in the real world the individual has some control over scheduling
- motivational context: clear differences between the lab and the real world
- social context: a lab test offers no support; in the real world there is a social network of support ('ask my friends')

32 Contextual Enquiry
- a technique devised to overcome the unnaturalness of the test environment
- researcher and user work together to identify and understand usability problems in the natural environment of the user
- focuses on:
  - the structure and language used in the work
  - individual and group actions and intentions
  - the culture affecting the work
  - explicit and implicit aspects of the work

Notes: This technique enables usability issues with existing systems to be investigated. It requires a large amount of support from management, to allow users to be taken off-line to assist in the investigation, or to use prototype systems which, being at an early stage of development, may not be as effective as the current systems. In-situ testing is a variant on this: a user is given a prototype system to use for an extended period in the course of their normal work. The tasks for which the system is used are recorded in a diary or log, together with any observations of usability issues that arise. This provides a realistic test environment with realistic tasks. A high level of investigator involvement is needed to sustain the user's motivation to complete the self-reporting log, and the system has to be sufficiently robust to be useful to the user in the course of their normal work.

