
1 Heuristic Evaluation HCC 729, 2/13/14 ☃

2 We'll follow up next time
– Inspirations, reading feedback
– Your HTAs and personas

3 How to conduct a Heuristic Evaluation
Read this: http://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/
Adapted from slides by Karen Tang and Ryan Baker

4 What is an evaluation?
Gather data about the usability of a product or design by a particular group of users, for a particular activity or task, within a particular environment or context.
Evaluation goals:
– Assess the extent of the system's functionality
– Assess the effect of the interface on the user
– Identify specific problems with the system

5 HE vs. user testing
When we can, we want to test with real users; HE is a "discount" usability technique.
When it's useful:
– When real users are unavailable
– Very early in the design
– As a sanity check (but not a replacement for user testing)

6 Why HE is great
– Cheap: doesn't "spend" users
– Fast: 1-2 days (instead of 1 week)
– Good: proven effective, and the more careful you are, the better it gets
– Easy to use: relatively easy to learn, and can be taught

7 Heuristic Evaluation
– A type of discount usability testing
– A rational method: an expert mentally applies "heuristics" (theories or rules) to the design and checks whether each rule's advice is being followed
Key idea: multiple expert evaluators independently apply a set of heuristics to an interface, produce Usability Action Reports (UARs), then combine and prioritize their findings.

8 What can you evaluate with an HE?
Any interface that has been "developed," whether fully implemented or existing only as a sketch:
– A pre-existing webpage
– A sketch of a future interface
The method can be applied to your own interface or a competitor's. You will evaluate the interface against a standard set of 10 heuristics.

9 How many evaluators are needed?
Nielsen recommends at least 3, but go for 5!
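
The "at least 3, go for 5" advice traces to Nielsen and Landauer's model: a single evaluator finds roughly 31% of the problems on average, and i evaluators together find about 1 - (1 - 0.31)^i of them. A minimal Python sketch of that curve (0.31 is their published average detection rate, not a universal constant; your project's rate will differ):

```python
# Nielsen & Landauer: proportion of usability problems found by i evaluators,
# found(i) = 1 - (1 - lam)**i, where lam is the average per-evaluator
# detection rate (~0.31 across the projects they studied).
def proportion_found(i: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** i

for i in (1, 3, 5, 10):
    print(f"{i:>2} evaluators -> ~{proportion_found(i):.0%} of problems found")
# ~31%, ~67%, ~84%, ~98%: diminishing returns set in quickly after 5.
```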

10 Who should do the HE?
Anyone who knows the appropriate heuristics can do an HE. However:
– Heuristic evaluation experts find almost twice as many problems as novices
– HE experts who are also domain experts find almost three times as many problems as novices

11 Phases of Heuristic Evaluation
0) Pre-evaluation training (optional): give evaluators needed domain knowledge and information on the scenario
1) Evaluate the interface to find usability problems
2) Record each problem
3) Aggregate problems
4) Assign severity ratings
5) Assign solution complexity ratings

12 #1: evaluate the interface

13 Which heuristics to use?
– There are many possible heuristic sets
– Some sets are standard (e.g., Nielsen's usability heuristics)
– You might create your own heuristics, e.g., for specific applications
We'll focus on Nielsen's, which cover a range of general usability issues.

14 Find which heuristic is violated
Nielsen's 10 Heuristics (http://www.nngroup.com/articles/ten-usability-heuristics/):
1. Simple & Natural Dialog
2. Speak the User's Language
3. Minimize the User's Memory Load
4. Consistency
5. Feedback
6. Clearly Marked Exits
7. Shortcuts
8. Good Error Messages
9. Prevent Errors
10. Help & Documentation

15 Examples of applying the heuristics
http://www.slideshare.net/sacsprasath/ten-usability-heuristics-with-example

16 #2: record the problem

17 Record the problem
– Each evaluator writes a Usability Action Report (UAR) describing each usability problem they encounter
– HEs are typically used to report problems, though in other usability evaluations UARs can report both the good and bad qualities of an interface
– I have posted a template UAR on the blog with the assignment

18 Sample UAR
EVALUATOR: XXXXX
ID NUMBER: XXX
NAME: Descriptive name for the problem
EVIDENCE: Describe the violation, and why you wrote this report
EXPLANATION: Your interpretation: what heuristic was violated, and why
SEVERITY: Write up at the end of the evaluation
FIXABILITY: Write up at the end of the evaluation
POSSIBLE FIX: Write up at the end of the evaluation
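
If you want your UARs in machine-readable form (handy for the aggregation step later), a small record type mirroring the template's fields works. This is a sketch only; the field names follow the template above, but the class itself is hypothetical, not part of the course materials:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UAR:
    """One Usability Action Report. The last three fields mirror the
    bottom half of the template and stay None until the end of the pass."""
    evaluator: str
    uar_id: str
    name: str          # descriptive name for the problem
    evidence: str      # the violation, and why you wrote this report
    explanation: str   # your interpretation: which heuristic, and why
    severity: Optional[int] = None      # 1-4, Nielsen's scale
    fixability: Optional[int] = None    # 1-4, solution complexity scale
    possible_fix: Optional[str] = None  # suggestion to developers

# Example (the contents are made up for illustration):
report = UAR(evaluator="XXXXX", uar_id="001",
             name="No feedback after submitting a search",
             evidence="Clicking Search gives no progress indication",
             explanation="Violates heuristic 5 (Feedback): users cannot "
                         "tell whether their query was submitted")
```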

19 Keep looking for problems!
– The evaluation usually takes a few hours
– A shorter time may not find important problems
– A longer time will exhaust the evaluator, who may become less productive
– For very large interfaces, it is good to break the heuristic evaluation into several sessions

20 What about multiple problems?
This happens a lot; record them separately. This is not busywork:
– It may be possible to fix some of the problems, but not all of them
– The problems might not always be linked to each other; one may show up in other situations too

21 You are not done yet…
You still need to address the bottom half of the UAR:
– Severity
– Solution complexity
– Possible fix
You may want to take a break before finishing these UARs.

22 #3: aggregate the problems

23 Aggregate Problems
Wait until all UARs are in; you are aggregating across all evaluators.
– Combine duplicate problems by consensus
– Gain a sense of relative importance after you've seen a few problems
– At this point, decide which entries are and aren't problems (but keep the original version of each report somewhere)
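
The consensus judgment is human work, but the bookkeeping around it can be scripted. A sketch building on the hypothetical UAR class above: it assumes the group has already agreed on a label for each distinct problem (the same_problem_key callback stands in for that consensus) and averages severity across evaluators, which Nielsen considers a reasonable way to combine independent ratings:

```python
from collections import defaultdict
from statistics import mean

def aggregate(uars, same_problem_key):
    """Group UARs from all evaluators by their consensus label and
    average the independently assigned severity ratings."""
    groups = defaultdict(list)
    for u in uars:
        groups[same_problem_key(u)].append(u)
    merged = []
    for label, dupes in groups.items():
        ratings = [u.severity for u in dupes if u.severity is not None]
        merged.append({
            "problem": label,
            "reports": dupes,  # keep the original reports somewhere!
            "severity": round(mean(ratings), 1) if ratings else None,
        })
    return merged
```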

24 #4: assign each problem a severity rating

25 Assign Severity Rating to UARs
– Severity ratings help project leads determine which problems should be given more developer time
– Not all problems can be fixed
– Some problems will have more severe consequences than others
– Each evaluator should assign severity separately

26 Assign Severity Rating to UARs
Base the rating on a combination of:
– Frequency: how common or rare is the problem?
– Impact: how easy is it to overcome the problem? How disastrous might it be?
– Persistence: how repeatedly will users experience the problem? Are workarounds learnable?

27 Assign an Overall Severity Rating
A single severity rating per problem helps developers allocate resources. Evaluators therefore need to combine their opinions of a problem's frequency, impact, and persistence into one severity rating.
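
Nielsen gives no formula for folding the three factors into one number, so the rule below is purely an assumption for illustration: score frequency, impact, and persistence on the same 1-4 scale and round the mean. Any monotone combination the team agrees on works; the point is to end up with a single rating per problem:

```python
# Hypothetical combination rule, not Nielsen's: equal-weight mean of the
# three factor scores (each 1-4), rounded to the nearest whole rating.
def overall_severity(frequency: int, impact: int, persistence: int) -> int:
    return round((frequency + impact + persistence) / 3)

# A rare (2) but disastrous (4) and recurring (3) problem rates as major (3).
print(overall_severity(frequency=2, impact=4, persistence=3))  # -> 3
```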

28 Nielsen's Severity Ratings
1. Usability blemish: mild annoyance or cosmetic problem; easily avoidable.
2. Minor usability problem: annoying, misleading, unclear, or confusing; can be avoided or easily learned; may occur only once.
3. Major usability problem: prevents users from completing tasks; highly confusing or unclear; difficult to avoid; likely to occur more than once.
4. Critical usability problem: users won't be able to accomplish their goals, and may quit using the system.

29 False positives
– There's no virtue in finding 6,233 problems if very few of them actually cause problems for a user
– Every problem reported in a heuristic evaluation takes time for the developers to consider
– Some interface aspects that seem like problems at first might not be problems at all

30 #5: assign each solution a complexity rating

31 #5: Solution Complexity Rating
Some problems take more time to fix than others, so it's important to allocate developers' time well.
Ideally, this rating is made by a developer or by someone familiar with development on the target platform.

32 Solution Complexity Rating
1. Trivial to fix: textual and cosmetic changes; minor code tweaking.
2. Easy to fix: minimal redesign and straightforward code changes; solution known and understood.
3. Difficult to fix: redesign and re-engineering required; significant code changes; solution identifiable but details not fully understood.
4. Nearly impossible to fix: requires massive re-engineering or use of new technology; solution not known or understood at all.
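
Severity and solution complexity together suggest a fix order: most severe problems first, with the cheaper fix winning ties. A sketch, assuming each aggregated problem record carries numeric severity and complexity fields (the field names are illustrative, continuing the earlier sketches):

```python
def fix_order(problems):
    """Sort aggregated problems for developers: highest severity first;
    among equally severe problems, the easier fix comes first."""
    return sorted(problems, key=lambda p: (-p["severity"], p["complexity"]))
```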

33 Record Possible Fixes
While evaluating solution complexity, the evaluator may have thought about how the problem could be fixed.
– Record these possible fixes as suggestions to developers
– Don't focus on the feasibility of solutions (that is the developers' job)
– Your suggestions may be thought-provoking

34 Phases of Heuristic Evaluation
0) Pre-evaluation training (optional): give evaluators needed domain knowledge and information on the scenario
1) Evaluate the interface to find usability problems
2) Record each problem
3) Aggregate problems
4) Assign severity ratings
5) Assign solution complexity ratings

35 Why HE?
– Heuristic evaluations find a reasonably large set of problems
– They are one of the easiest, quickest, and cheapest methods available

36 HE vs. User Testing
– User tests are more effective at revealing when a system's manifest model or metaphor is confusing
– User tests are less effective at finding obscure problems
– User tests are also much more expensive
Advice: use HE first to find the obvious problems, then user test.

37 For next week
– Assignment
– Readings

38 In-class assignment
– Perform an HE on the UMBC class search in PeopleSoft
– Use the template form from the blog

40 HW: Perform an HE on your sites
– Go through the 5 stages of HE for your website
– If on a team, each member goes through the HE individually, then combine later
– Turn in 1 completed form for each incident
– Come up with at least 8 UARs for each webpage
– Aggregate, then finish filling out the template!
– Everyone writes 100-200 words describing what they learned

41 HW extra credit
Perform an HE on another student's website (same process as the previous slide).

42 Reading
– Required: Usability Engineering, Chapter 6 (focus on 6.1-6.6 and 6.8)
– Optional: think-aloud method

