Usability
The maxim of HCI designers
Know Thy Users, For They Are Not You.
Who are your users? How old are they? What do they know? What do they want? How do they work?
Key pillars of design:
Guidelines
Prototyping (software tools)
Reviews and usability testing
Interaction, Tasks, and Users
Computers are good at remembering; people are not.
Command languages are not good for occasional users or novices.
Do what people expect: model the interface after real-world objects or follow user-interface guidelines. Both approaches are better for users and more consistent.
Experts do remember, so give them shortcuts, even command languages.
Give people individual options; avoid global modes.
Scenario development
Study the range and distribution of task frequencies and sequences.
Build a matrix of user communities x tasks.
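As a minimal sketch of that matrix, the snippet below tallies task frequencies per user community from hypothetical observation records; the community and task names are invented for illustration, not taken from the slides.

```python
from collections import Counter, defaultdict

# Hypothetical observation log: (user community, task) pairs gathered
# during task analysis. All names are illustrative only.
observations = [
    ("novice", "search"), ("novice", "search"), ("novice", "print"),
    ("clerk", "data entry"), ("clerk", "data entry"), ("clerk", "search"),
    ("manager", "report"), ("manager", "search"),
]

# Build the user-communities x tasks frequency matrix.
matrix = defaultdict(Counter)
for community, task in observations:
    matrix[community][task] += 1

tasks = sorted({task for _, task in observations})
print("community".ljust(10), *(t.ljust(10) for t in tasks))
for community, counts in sorted(matrix.items()):
    print(community.ljust(10), *(str(counts[t]).ljust(10) for t in tasks))
```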
Ethnographic observation
Guidelines:
Preparation
Field study
Analysis
Reporting
Participatory design = having end users participate in design
Pros:
More accurate information about tasks
Opportunity for users to influence design decisions
Increased user acceptance
Cons:
Very costly
Lengthens the implementation process
Builds antagonism with users whose ideas are rejected
Forces designers to compromise designs to satisfy incompetent participants
Usability
A. Expert reviews:
Guidelines review
Consistency inspection
Cognitive walkthrough
Formal usability inspection
B. Usability testing
Typical excuse: nice idea, but time and resources are limited.
Steps: determine tasks and users, design and test activities, develop and test prototypes, collect data, analyze data, repeat.
Cognitive walkthrough
The cognitive walkthrough approach to evaluation originates in the code walkthrough familiar from software engineering. Walkthroughs require a detailed review of a sequence of actions, i.e., the steps an interface requires a user to perform in order to accomplish some task. The evaluators then step through that action sequence to check it for potential usability problems. Usually, the focus is on learning through exploration: experience shows that many users prefer to learn how to use a system by exploring its functionality hands-on, rather than after sufficient training or study of a user's manual. To do this, the evaluators go through each step in the task and provide a story about why that step is or is not good for a new user.
Walkthrough requirements
1. A description of the prototype of the system. It doesn't have to be complete, but it should be fairly detailed; details such as the location and wording of a menu can make a big difference.
2. A description of the task the user is to perform on the system. This should be a representative task that most users will want to do.
3. A complete, written list of the actions needed to complete the task with the given prototype.
4. An indication of who the users are and what kind of experience and knowledge the evaluators can assume about them.
Given this information, the evaluators step through the action sequence (item 3 above) to critique the system.
The iteration cycle
For each action, the evaluators try to answer the following four questions:
A. Will the users be trying to produce whatever effect the action has? Are the assumptions about what task the action is supporting correct, given the user's experience and knowledge up to this point in the interaction?
B. Will users be able to notice that the correct action is available? Will users see the button or menu item, for example, through which the next action is actually achieved by the system? This is not asking whether they will know that the button is the one they want; it is merely asking whether it is visible to them at the time when they need to invoke it. For example, a VCR remote control may have a hidden panel of buttons that are not obvious to a new user.
C. Once users find the correct action at the interface, will they know that it is the right one for the effect they are trying to produce? This complements the previous question.
D. After the action is taken, will users understand the feedback they get?
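A minimal sketch of how the per-action answers could be recorded, assuming a simple in-memory structure; the field names and the example action are illustrative, not a prescribed form layout.

```python
from dataclasses import dataclass

@dataclass
class ActionReview:
    """Answers to the four walkthrough questions for one action."""
    action: str              # the action from the written action list (item 3)
    will_try_effect: bool    # A: will users try to produce this effect?
    action_visible: bool     # B: will users notice the action is available?
    action_recognized: bool  # C: will users know it is the right action?
    feedback_understood: bool  # D: will users understand the feedback?
    notes: str = ""          # the evaluator's story for this step

    def has_problem(self) -> bool:
        # Any negative answer flags a potential usability problem.
        return not (self.will_try_effect and self.action_visible
                    and self.action_recognized and self.feedback_understood)

review = ActionReview(
    action="Select 'Print...' from the File menu",
    will_try_effect=True, action_visible=True,
    action_recognized=False, feedback_understood=True,
    notes="Novices may look for a toolbar icon instead.",
)
print(review.has_problem())  # True: question C was answered negatively
```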
Recording what is good and what needs improvement in the design
Record the results on standard evaluation forms. Then, for each action (from item 3 on the cover form), a separate standard form is filled out that answers each of questions A-D above. Any negative answer for any question for any particular action should be documented on a separate usability problem report sheet. This problem report sheet should indicate the system being built (with the version, if necessary), the date, the evaluators, and a detailed description of the usability problem. It should also record the severity of the problem, that is, whether the evaluators think the problem will occur often, and an impression of how serious it will be for users. This information helps the designers decide priorities for correcting the design.
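Continuing the sketch above, a problem report entry might look like the following; the severity scale, field names, and sample data are assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProblemReport:
    """One usability problem found during the walkthrough."""
    system: str           # system being built
    version: str          # version, if necessary
    report_date: date
    evaluators: list[str]
    description: str      # detailed description of the usability problem
    frequency: str        # how often evaluators expect it, e.g. "often"
    severity: int         # impression of seriousness, e.g. 1 (minor) to 5 (severe)

reports = [
    ProblemReport("PhotoApp", "0.3", date(2024, 5, 1), ["RM", "JC"],
                  "Print command not discoverable from the toolbar",
                  frequency="often", severity=4),
]
# Sorting by severity helps designers decide correction priorities.
reports.sort(key=lambda r: r.severity, reverse=True)
```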
Evaluation during active use
Interviews
Focus-group discussions
Data logging (a sketch follows this list)
Online consultants
Newsgroups
Newsletters
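As a minimal sketch of data logging during active use, the snippet below timestamps user-interface events with Python's standard logging module; the log file name and event vocabulary are illustrative assumptions.

```python
import logging

# Log UI events with timestamps to a file for later analysis.
logging.basicConfig(
    filename="usage.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

def log_event(user_id: str, event: str, detail: str = "") -> None:
    """Record one user action, e.g. a command invocation or an error."""
    logging.info("user=%s event=%s %s", user_id, event, detail)

log_event("u42", "command", "Print")
log_event("u42", "error", "printer not found")
```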
Experimental evaluation
Subjects: match the expected user population; use actual users if possible; choose a sample size that yields statistically significant results.
Variables: independent variables are those that are manipulated (interface style, number of items); dependent variables are those that are measured (speed, errors).
Hypotheses: predictions of the outcome in terms of the variables.
Experimental design: between-groups (subjects are assigned to different conditions) or within-groups (all subjects use all conditions); see the sketch after this list.
Statistical measures, with two rules: look at the data, and save the data.
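As a minimal sketch of the statistics for a between-groups design, the snippet below compares hypothetical task-completion times for two interface conditions with an independent-samples t-test; the data is invented and scipy is assumed to be available.

```python
from scipy import stats

# Hypothetical task-completion times (seconds) from a between-groups
# design: each subject used only one interface condition.
menu_times = [34.1, 29.8, 41.2, 36.5, 33.0, 38.7]
command_times = [27.3, 31.9, 25.4, 29.0, 26.8, 30.2]

# Independent-samples t-test: is mean completion time different
# between the two conditions?
t_stat, p_value = stats.ttest_ind(menu_times, command_times)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A within-groups design would instead use stats.ttest_rel on
# paired measurements from the same subjects.
```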
Observational techniques
Think-aloud: the user describes what they believe is happening, why they act, and what they want to do. Simple, requires little expertise, and yields useful insight.
Protocol analysis: paper and pencil, audio recording, video recording, computer logging, user notebooks, automatic protocol analysis tools.
Post-task walkthroughs: discuss alternative (but not pursued) actions; reflect back on actions.
Query techniques
A. Interviews
The level of questioning can be varied to suit the context, and the evaluator can probe the user on interesting issues.
Pros: high-level evaluation, information about preferences, reveals problems.
Cons: hard to plan; needs skilled interviewers and willing participants.
B. Questionnaires: general, open-ended, scalar, multi-choice, ranked
General: establish background, gender, experience, personality.
Open-ended: unprompted opinion on a question ("Can you suggest...?", "How would you...?"); often results in brief answers that cannot be summarized statistically.
Scalar: judge a statement on a numeric scale (1-5, 1-7, -2 to 2); scale granularity is chosen to balance coarse and fine judgments; anchor words may be negative (hostile, vague, misleading) or positive (friendly, specific, beneficial). A sketch for summarizing scalar responses follows this list.
Multi-choice: choose one or more from a list of explicit responses (e.g., "How do you get help with the system: manual, online, colleague?"); useful for gathering information on users' previous experience.
Ranked: place an ordering on a list to indicate the user's preferences (e.g., "Please rank the usefulness of these methods: menu, command line, accelerator").
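As a sketch of summarizing scalar (Likert-style) responses, assuming ratings on a 1-5 scale; the statement and data are invented for illustration.

```python
from collections import Counter
from statistics import mean, median

# Hypothetical 1-5 ratings for the statement
# "The system's help messages are friendly."
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

print(f"mean = {mean(ratings):.2f}, median = {median(ratings)}")
# Distribution across the scale, useful for spotting polarized answers.
for value, count in sorted(Counter(ratings).items()):
    print(f"{value}: {'*' * count}")
```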