
1 Chapter 10 Evaluation

2 Objectives
Define the role of evaluation.
Understand the importance of evaluation.
Discuss how developers cope with real-world constraints.
Explain the concepts and terms used to discuss evaluation.
Examine how different techniques are used at different stages of development.

3 What is Evaluation? Evaluation is a process by which the interface is tested against the needs and practices of the users. Evaluation should occur throughout the design life cycle, and its results feed into modifications of the design. This ensures that problems are discovered early in the life cycle, when they are easier and cheaper to fix.

4 What is Evaluation? Evaluation activities differ by development phase:
Analysis phase: analyse work that has been done in similar fields and get feedback from users of previously designed systems; changes made here are the cheapest.
Design phase: parts of the system are simulated and tested using prototypes: partial systems, mock-ups, storyboards, paper systems.
Pre-production phase (Develop/Implement): develop the details of the system and measure user performance.

5 Goals of Evaluation There are three main goals:
To assess the extent of the system's functionality.
To assess the effect of the interface on the user.
To identify any specific problems with the system.

6 Goals of Evaluation The system's functionality must accord with the user's task requirements; in other words, the design of the system should enable the user to perform the intended tasks more easily. The system's functionality must also be reachable by the user; this involves matching the use of the system to the user's expectations of the task. It is also important to measure the impact of the design on the user.

7 Star Lifecycle [Figure: the Star Life Cycle, adapted from Hix and Hartson, 1993. Evaluation sits at the centre of the star, connected to task analysis/functional analysis, requirement specification, conceptual design/formal design, prototyping, and implementation.]

8 Styles of Evaluation There are two main styles:
Evaluation performed under laboratory conditions.
Evaluation performed in the work environment, or 'in the field'.

9 Styles of Evaluation Laboratory studies take place in a lab equipped specifically for evaluation; in some cases they may not involve any users.
Advantage: suitable for systems that will be located in dangerous or remote places, and for single-user tasks.
Disadvantages: the system is not tested in its real environment, and users do not handle the system in a realistic way.

10 Styles of Evaluation Field studies take the designer or evaluator out into the user's work environment.
Advantage: the evaluator can observe interactions between systems and individuals that would be missed in a laboratory study.
Disadvantage: interruptions from noise (e.g. phone calls) and movement around the workplace.

11 Why do you need to evaluate? The Nielsen Norman Group point out: "'User experience' encompasses all aspects of the end-user's interaction… The first requirement for an exemplary user experience is to meet the exact needs of the customer, without fuss or bother. Next comes simplicity and elegance that produce products that are a joy to own, a joy to use." Evaluation is needed to check that users can use the product and like it.

12 Why do you need to evaluate? Good reasons for investing in user testing and evaluation include:
Problems are fixed before the product is delivered, not after.
The team can concentrate on real problems, not imaginary ones.
Time to market is sharply reduced.

13 When to evaluate? The product can be either a brand new product or an upgrade of an existing one. If the product is new, time is usually invested in market research, and designers may support this process by developing mock-ups of the potential product to gain an understanding of users' needs and early requirements.

14 When to evaluate? For an upgrade, the focus is on improving the overall product, and evaluations will compare user performance with, and attitudes towards, the previous version and the new one. Evaluations done during design to check that the product continues to meet users' needs are known as formative evaluations. Evaluations done to assess the success of a finished product, or to check that a standard is upheld, are known as summative evaluations.

15 Evaluating the design Evaluation should occur throughout the design process. The first evaluation of a system should ideally be performed before any implementation work has started. Four possible approaches:
Cognitive walkthrough
Heuristic evaluation
Review-based evaluation
Model-based evaluation

16 Cognitive Walkthrough A review technique in which expert evaluators construct task scenarios from a specification or early prototype and then role-play the part of a user working with that interface, "walking through" it as if it were actually built. This technique evaluates how well the interface supports "exploratory learning", i.e., first-time use without formal training. It can be performed by the system's designers in the early stages of design, before empirical user testing is possible.

17 Cognitive Walkthrough The early versions relied on a detailed series of questions, to be answered on paper or electronic forms. The cognitive walkthrough was developed as an additional tool in usability engineering, to give design teams a chance to evaluate early mockups of designs quickly. It does not require a fully functioning prototype, or the involvement of users. Instead, it helps designers to take on a potential user’s perspective, and therefore to identify some of the problems that might arise in interactions with the system.

18 Cognitive Walkthrough To do the cognitive walkthrough, you need:
A description of the prototype of the system.
A description of the task the user needs to perform on the system.
A complete, written list of the actions needed to complete the task.
An indication of who the users are, and what kind of knowledge and experience the evaluators can assume about them.
A sketch of how these inputs might be organised follows.
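
To make the inputs concrete, here is a minimal Python sketch that bundles the four inputs into one record and pairs each action step with the standard walkthrough questions (after Wharton et al.). The library-catalogue system and its action sequence are invented for illustration.

```python
# Minimal sketch: the four inputs to a cognitive walkthrough, with the
# standard questions asked at each action step (after Wharton et al.).
# The catalogue system and its action sequence are hypothetical.
walkthrough = {
    "prototype": "paper mock-up of a library catalogue search screen",
    "task": "find a book by author and reserve it",
    "actions": [
        "select the Search field",
        "type the author's name",
        "press the Search button",
        "click Reserve next to the matching result",
    ],
    "users": "first-year students with no formal training on the system",
}

QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the effect they want?",
    "If the action is performed, will the user see progress toward the goal?",
]

# The evaluator answers every question for every step, recording failures.
for step in walkthrough["actions"]:
    print(f"Step: {step}")
    for question in QUESTIONS:
        print(f"  {question}")
```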

19 Cognitive Walkthrough: Example For example, operating a car begins with the goals of opening the door, sitting down in the driver's seat with the controls easily accessible, and starting the car. And we're not even driving yet! This example shows the granularity that some walkthroughs attain. The goal of "opening the door" could be broken down into sub-goals: find the key, orient the key, unlock the door, open the door. Each of these goals requires cognitive (thinking) and physical actions. To open the door, do I orient my hand with the palm up or with the palm down? What affordances are provided for opening the door?

20 Heuristic Evaluation A method for quick, cheap, and easy evaluation of a user interface design. A heuristic is a guideline, general principle, or rule of thumb that can guide a design decision or be used to critique a decision that has already been made. Heuristic evaluation is the most popular of the usability inspection methods. The goal is to find the usability problems in the design so that they can be attended to as part of an iterative design process. A small set of evaluators examine the interface and judge its compliance with recognized usability principles (the "heuristics").

21 Heuristic Evaluation The heuristics are related to principles and guidelines. The list (Nielsen's heuristics) is as follows:
Visibility of system status: the system should always keep users informed about what is going on.
Match between system and real world: the system should speak the user's language.
User control and freedom: support undo and redo; make the user feel in control.
Consistency and standards: follow platform conventions.
Error prevention: prevent a problem from occurring in the first place.

22 Heuristic Evaluation
Recognition rather than recall: make objects, actions and options visible.
Flexibility and efficiency: allow users to tailor frequent actions.
Aesthetic and minimalist design: dialogs should not contain information that is irrelevant or rarely needed.
Help users recognize, diagnose and recover from errors: error messages should be expressed in plain language, be precise, and constructively suggest a solution.
Help and documentation: provide help and documentation.
A sketch of how findings against these heuristics might be recorded follows.
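
In practice, evaluators record each violation together with a severity rating. The sketch below assumes Nielsen's 0-4 severity scale (0 = not a problem, 4 = usability catastrophe); the three findings themselves are invented examples.

```python
# Minimal sketch: recording heuristic-evaluation findings with Nielsen's
# 0-4 severity scale (0 = not a problem ... 4 = usability catastrophe).
# The three findings below are invented examples.
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str   # which heuristic is violated
    location: str    # where in the interface the problem appears
    severity: int    # 0-4; in practice, averaged across several evaluators

findings = [
    Finding("Visibility of system status", "file upload shows no progress bar", 3),
    Finding("Error prevention", "delete has no confirmation dialog", 4),
    Finding("Consistency and standards", "OK/Cancel order differs between dialogs", 2),
]

# Report the worst problems first, so they are fixed in the next iteration.
for finding in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[severity {finding.severity}] {finding.heuristic}: {finding.location}")
```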

23 Review-based evaluation Experimental results reported in the literature are reviewed for a specific domain. For example, on usability issues, the review might cover menu designs, the recall of command names, and the choice of icons. The output may be used to support or refute aspects of a design. The reviewer must therefore select evidence carefully, noting the experimental design chosen, the population of subjects used, the analyses performed, and the assumptions made.

24 Model-based evaluation May use cognitive or design models to perform the evaluation. For example, the GOMS (Goals, Operators, Methods and Selection rules) model can be used to predict user performance with a particular interface, and also to filter particular design options. Design methodologies also have a role in evaluation, providing a framework in which design options can be evaluated. A small worked sketch of a GOMS-style prediction follows.
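
As an illustration, here is a minimal Python sketch of the Keystroke-Level Model (KLM), a simplified member of the GOMS family. The operator times are commonly cited Card, Moran and Newell estimates; the two delete-a-file action sequences are hypothetical.

```python
# Minimal Keystroke-Level Model (KLM) sketch, a simplified GOMS variant.
# Operator times (seconds) are commonly cited Card, Moran & Newell estimates.
OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point at a target on screen with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
    "B": 0.10,  # press or release a mouse button
}

def predict_time(operators):
    """Predicted task time is the sum of the operator times in the sequence."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical comparison of two designs for deleting a file:
menu_delete = ["H", "M", "P", "B", "B", "M", "P", "B", "B"]  # via menus
shortcut_delete = ["H", "M", "K", "K"]                       # via shortcut keys

print(f"menu:     {predict_time(menu_delete):.2f} s")
print(f"shortcut: {predict_time(shortcut_delete):.2f} s")
```

A lower predicted time suggests the design option is faster for expert, error-free performance, which is the scope of KLM predictions.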

25 Evaluating the implementation This type of evaluation involves users and an actual implementation of the system. The approaches are:
Experimental methods
Observational methods
Query techniques

26 Empirical methods: Experimental evaluation One of the most powerful methods of evaluating a design is to use a controlled experiment. The evaluator chooses a hypothesis to test, which can be determined by measuring some attribute of subject behavior. The main components of experimental evaluation are as follows:
Subjects: subjects should be chosen to match the expected user population, and may include the actual users. The sample size must also be decided.

27 Empirical methods: Experimental evaluation
Variables: experiments manipulate and measure variables under controlled conditions in order to test the hypothesis. There are two main types: independent variables, which are manipulated, and dependent variables, which are measured.
Hypothesis: a prediction of the outcome of an experiment. The aim of the experiment is to show that this prediction is correct.
Experimental design: in order to produce reliable and generalizable results, an experiment must be carefully designed.
Statistical measures: two rules apply, look at the data and save the data. Statistics are then used to interpret the data.
A sketch of a simple two-condition analysis follows.
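
As a minimal sketch of these components in code: the hypothesis below is that a new interface is faster than the old one, the interface version is the independent (manipulated) variable, completion time is the dependent (measured) variable, and an independent-samples t-test is one common statistical measure. The timing data are invented for illustration.

```python
# Sketch: testing the hypothesis "interface B is faster than interface A".
# Completion times (seconds) for two groups of subjects; the data are invented.
from scipy import stats

times_a = [41.2, 38.5, 44.0, 39.8, 42.7, 40.1]  # interface A (old design)
times_b = [35.4, 33.9, 37.2, 34.8, 36.1, 33.0]  # interface B (new design)

# Independent variable: interface version (manipulated by the evaluator).
# Dependent variable: task completion time (measured).
t_stat, p_value = stats.ttest_ind(times_a, times_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in mean completion time is statistically significant.")
```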

28 Observational techniques A popular way to gather information about actual use of a system is to observe users interacting with it. Users are usually asked to complete a set of predetermined tasks, which they may perform in their own place of work. Two ways to perform observational techniques:
Think aloud
Cooperative evaluation

29 Observational: Think aloud May be used to observe how the system is actually used. The user is observed performing a task and is asked to describe what he is doing and why, what he thinks is happening, and so on.
Advantages: simplicity (requires little expertise); can provide useful insight; can show how the system is actually used.
Disadvantages: subjective; selective, depending on the tasks provided.

30 Observational: Cooperative evaluation A variation on think aloud. The user is encouraged to see himself as a collaborator in the evaluation, and not simply as an experimental subject; both user and evaluator can ask each other questions.
Additional advantages: less constrained and easier to use; the user is encouraged to criticize the system; clarification is possible.

31 Query Techniques Usually based on prepared questions. Informal, subjective and relatively cheap, they can be useful in eliciting details of the user's view of a system: the best way to find out how a system meets user requirements is to ask the user.
Advantages: can be varied to suit the context; issues can be explored more fully; can elicit user views and identify unanticipated problems; simple and cheap.

32 Query Techniques Disadvantages: very subjective; time consuming; difficult to get accurate feedback.
Two ways to perform query techniques:
Interviews
Questionnaires
A sketch of scoring one common questionnaire follows.
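
One widely used standard questionnaire is the System Usability Scale (SUS): ten statements answered on a 1-5 agreement scale, with odd items worded positively and even items negatively. The sketch below implements the standard SUS scoring rule; the responses are invented.

```python
# Sketch: scoring the System Usability Scale (SUS), a standard 10-item
# questionnaire. Each response is 1-5 (strongly disagree .. strongly agree).
def sus_score(responses):
    """Standard SUS scoring: odd items contribute (r - 1), even items (5 - r);
    the total is scaled by 2.5 to give a score between 0 and 100."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Invented responses from a single respondent.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```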

33 Choosing an Evaluation Method Factors to consider:
What style of evaluation is required? Laboratory vs. field.
How objective should the technique be? Subjective vs. objective.
What type of measures are required? Qualitative vs. quantitative.

34 Choosing an Evaluation Method
What level of information is required? High level vs. low level.
What level of interference is acceptable? Obtrusive vs. unobtrusive.
What resources are available? Time, subjects, equipment, expertise.

35 Example Choose an appropriate evaluation method for each of the situations on the following pages. In each case identify:
The subjects.
The technique used.
Representative tasks to be examined.
Measurements that would be appropriate.
An outline plan for carrying out the evaluation.

36 Example 1 You are at an early stage in the design of a spreadsheet package and you wish to test what type of icons will be easiest to learn.

37 Spreadsheet Package
Subjects: typical users: secretaries, academics, students, accountants, home users, schoolchildren.
Technique: heuristic evaluation.
Representative tasks: sorting data, printing the spreadsheet, formatting cells, adding functions, producing graphs.
Measurements: speed of recognition, accuracy of recognition, user-perceived clarity.
Outline plan: test the subjects with examples of each icon in various styles, noting responses.

38 Example 2 You have developed a group decision support system for a solicitors' office.

39 Group decision support system
Subjects: solicitors, legal assistants, possibly clients.
Technique: cognitive walkthrough.
Representative tasks: anything requiring shared decision making: compensation claims, plea bargaining, complex issues needing a diverse range of expertise.
Measurements: accuracy of the information presented and accessible, veracity of the audit trail of discussion, screen clutter and confusion, confusion owing to turn-taking protocols.
Outline plan: evaluate by having experts walk through the system performing tasks, commenting as necessary.

40 Example 3 You have been asked to develop a system to store and manage student exam results and would like to test two different designs prior to implementation or prototyping.

41 Exam Management System
Subjects: exams officer, secretaries, academics.
Technique: think aloud, questionnaires.
Representative tasks: storing marks, altering marks, deleting marks, collating information, security protection.
Measurements: ease of use, levels of security and error correction provided, user accuracy.
Outline plan: users perform the set tasks with a running verbal commentary on immediate thoughts; considered views are gathered by questionnaire at the end.

42 Summary Evaluation is an important part of the design process and should take place throughout the design life cycle. The aim is to test the functionality and usability of the design and to identify and rectify any problems.

