
1 PostPC Computing Heuristic Evaluation Prof. Scott Kirkpatrick, HUJI
Amnon Dekel, Bezalel

2 Heuristic Evaluation A usability engineering technique developed by Jakob Nielsen.

3 Heuristic Evaluation HE is done by looking at an interface and trying to form an opinion about what is good and bad about it. Most people probably perform a personal heuristic evaluation based on their own common sense and intuition. HE as described by Nielsen is a systematic inspection of a user interface design for usability: a small number of evaluators examine the interface and judge its compliance with recognized usability principles.

4 How Many Evaluators are Needed?
Nielsen shows that any single evaluator will miss most usability problems in an interface: single evaluators found only about 35% of the problems. Different evaluators tend to find different problems, so aggregating the results of several evaluators gives much better coverage. Nielsen recommends using at least three evaluators; five evaluators can find close to 75% of the problems in an interface.
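A quick back-of-the-envelope sketch (an idealized assumption of ours, not from the slides): if each evaluator independently found a fixed proportion p of all problems, n evaluators together would be expected to find 1 - (1 - p)^n of them. With p = 0.35 this model predicts about 88% for five evaluators; the observed figure is closer to 75% because different evaluators' findings overlap.

```python
# Idealized coverage curve for aggregating evaluators.
# Assumption (ours, not from the slides): each evaluator independently
# finds a fixed proportion p of all usability problems.

def expected_coverage(p: float, n: int) -> float:
    """Expected fraction of problems found by n independent evaluators."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 0.35  # single-evaluator hit rate reported by Nielsen
    for n in range(1, 6):
        print(f"{n} evaluator(s): {expected_coverage(p, n):.0%}")
    # Prints 35%, 58%, 73%, 82%, 88% -- an upper bound, since real
    # evaluators' findings overlap and coverage grows more slowly
    # (Nielsen reports ~75% for five evaluators).
```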

5 Performing HE – Method 1 In this method each evaluator inspects the interface alone. This is important in order to ensure unbiased evaluations. Results can be either written evaluator reports, or observer reports of the evaluation sessions (with the evaluator thinking out loud). Using an observer adds to the overhead involved, but reduces the evaluator's workload and can help make results available fairly quickly.

6 Performing HE – Method 1 Observers can also:
Assist evaluators with an unstable interface. Answer evaluators' questions (only after the evaluators are clearly in trouble and have commented on the usability problem). In a user test, the observer analyzes the interface design by observing a user, and is responsible for reaching conclusions about the design's interface problems. In a HE, the evaluator is responsible for analyzing the interface; the observer just records the evaluator's comments (and helps if the need arises, under the constraints above).

7 Performing HE – Method 1 In a HE session, the evaluator goes through the interface several times. During these passes, they inspect the design and dialog elements and compare them with a list of recognized usability principles (e.g., Nielsen's 10 Usability Heuristics). It is generally recommended (Nielsen 93, p. 158) that evaluators go through the design at least twice: the first pass lets them get a feel for the flow of the interaction and the general scope of the system; the second pass lets them focus on specific interface elements, knowing how they fit into the larger picture.

8 10 Usability Heuristics
Visibility of system status
Match between system and the real world
User control and freedom
Consistency and standards
Error prevention
Recognition rather than recall
Flexibility and efficiency of use
Aesthetic and minimalist design
Help users recognize, diagnose, and recover from errors
Help and documentation

9 Performing HE – Helping?
If the system is intended as a walk-up-and-use interface, or if the evaluators are domain experts, it is possible to let them use the system without any further help. If the system is domain dependent and the evaluators lack domain expertise, they will need help in using the system. One successful approach has been to supply the evaluators with a typical usage scenario, listing the steps a user would take to perform a few realistic tasks (Nielsen 93, p. 159).

10 Results of HE Once finished, a HE should provide the team with a list of usability problems in the interface. A HE does not in itself provide a systematic way to generate fixes. But because the method links each usability problem to an established usability principle, it is often fairly simple to generate a revised design following the guidelines of the principle that was violated. Additionally, many usability problems have obvious fixes once they are identified.
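As an illustration of such a problem list, here is a minimal sketch, with hypothetical names and invented example findings of our own, of how HE findings might be recorded and grouped by the violated heuristic, so that each group points to the guideline to consult for a fix.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical record structure for HE findings; field names are ours.
@dataclass
class Finding:
    evaluator: str
    location: str       # where in the interface the problem appears
    description: str
    heuristic: str      # which of the 10 heuristics is violated

# Invented example findings for illustration only.
findings = [
    Finding("eval-1", "checkout", "No progress indicator during payment",
            "Visibility of system status"),
    Finding("eval-2", "checkout", "Spinner missing while card is charged",
            "Visibility of system status"),
    Finding("eval-3", "settings", "'Purge cache' uses internal jargon",
            "Match between system and the real world"),
]

# Group findings by the violated heuristic: the guideline behind each
# heuristic then suggests the direction for a revised design.
by_heuristic = defaultdict(list)
for f in findings:
    by_heuristic[f.heuristic].append(f)

for heuristic, items in by_heuristic.items():
    print(f"{heuristic}: {len(items)} finding(s)")
    for f in items:
        print(f"  [{f.evaluator}] {f.location}: {f.description}")
```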

11 Debriefing A debriefing can be held after the last evaluation session; it extends the HE method to provide some design advice. Participants: evaluators, observers, and members of the design team. Method: brainstorming.

