Evaluation of User Interface Design (presentation transcript)

1 Evaluation of User Interface Design
Evaluation is essential in user interface design; it is generally agreed that there is no way around it. Evaluation is the gathering of data about the usability of a design.

2 Evaluation of User Interface Design
Why is evaluation important? Possible reasons: Without evaluation… …it cannot be said whether the designers have properly understood the users. …one would not know whether the computer systems are intolerant of minor errors.

3 Evaluation of User Interface Design
Why is evaluation important? Possible reasons continued: Without evaluation… …one would not know whether the systems cause disruption, frustration, unacceptable changes or conflict in organisations. …one would not know how to improve the systems in order to fit the users' needs better.

4 Evaluation of User Interface Design
Why is evaluation important? Possible reasons continued: Without evaluation… …alternative designs could not be compared. …one would not know whether systems cause a cognitive overload in users (e.g. whether they require users to learn, attend or memorise too much).

5 Evaluation of User Interface Design
Why is evaluation important? Possible reasons continued: Without evaluation… …it would be hard to assess whether computer systems force users to perform tasks in undesirable ways. …it would be hard to check conformance to a standard.

6 Evaluation of User Interface Design
Why is evaluation important? Possible reasons continued: Without evaluation… …engineering towards a target (often expressed as some form of metric) would be hard to achieve.

7 Evaluation of User Interface Design
The two main kinds of evaluation in interaction design are: 1. Formative Evaluation, 2. Summative Evaluation.

8 Evaluation of User Interface Design
1. Formative Evaluation = Evaluation of the design as it is being developed. It begins as early as possible in the development cycle and is repeated continually throughout the design process, so that usability problems are discovered while there is still plenty of time to make modifications.

9 Evaluation of User Interface Design
2. Summative Evaluation = Evaluation of the design after it is complete, or nearly so. It is often used to compare different design products with each other. It is usually performed only once, near the end of the user interface design process.

10 Evaluation of User Interface Design
The role these forms of evaluation play in usability testing: Formative Evaluation is far more common in usability testing than Summative Evaluation; in practice, summative evaluation is hardly ever used for usability testing.

11 Evaluation of User Interface Design
People generally agree that evaluation should not be thought of as a single phase in the design process. Rather, it should occur throughout the entire design life cycle, with the results of evaluation feeding back into modifications. In the star life cycle model, for instance, evaluation sits at the centre of the process.

12 Evaluation of User Interface Design
Overview of different kinds of evaluation: 1. Observations, monitoring, users' opinions (direct/indirect observation, protocols, software logging, interviews and surveys, questionnaires). 2. Experiments and usability engineering. 3. Interpretative evaluation. 4. Predictive evaluation (inspection methods, usage simulations, heuristic evaluation, discount usability evaluation, walkthroughs, modelling).

13 Evaluation of User Interface Design
1. Observations, monitoring, users' opinions. Direct observation is the cheapest form of observation. In direct observation, the observer makes notes while observing the individual interacting with the system.

14 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. Direct observation may alter the users' behaviour because they are constantly aware of being observed (the Hawthorne effect). However, if the purpose is to get informal feedback from the users, direct observation is often considered a useful method.

15 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. Indirect observation uses some form of automatic recording, e.g. videotaping the interaction of the user with the system. This provides a permanent record of the interaction. Moreover, the cameras can be arranged in several ways (e.g. one focusing on the keyboard, others on screen and user).

16 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. Indirect observation often couples video and audio, which is called a verbal protocol. It is sometimes also synchronised with automatic keystroke recording. Think aloud protocols require the users to think aloud whilst they are interacting with the system.

17 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. Think aloud protocols in indirect observation can be awkward for users, because they are asked to verbalise their thoughts during difficult tasks in which they would not normally talk (such talking might overload their working memory).

18 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. Post-event protocols in indirect observation are obtained after the tasks have been completed. In this case, users often watch videos of their own interaction with the system and are asked to comment upon their actions. However, hindsight might affect the overall results.

19 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. Software logging is a form of analysis for which the researcher does not need to be present during the interaction. For example, time-stamped keypress logging records each key press of the user together with the exact time at which it occurred, and interaction logging lets the observer follow the user's interaction in real time.
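A minimal sketch of what time-stamped keypress logging could look like in practice. The class, file format and field names below are illustrative assumptions, not part of any particular logging tool.

```python
# Sketch of time-stamped keypress logging: each key press is recorded together
# with the exact time it occurred, so the session can be analysed later without
# the researcher being present. All names and the CSV format are assumptions.
import csv
import time

class KeypressLogger:
    """Collects (timestamp, key) pairs and writes them to a CSV log."""

    def __init__(self, path):
        self.path = path
        self.events = []

    def on_key_press(self, key):
        # Called by the user interface for every key event.
        self.events.append((time.time(), key))

    def save(self):
        # Persist the log so it can be analysed after the session.
        with open(self.path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "key"])
            writer.writerows(self.events)

# Hypothetical usage: the UI toolkit would call on_key_press for each key event.
logger = KeypressLogger("session01_keys.csv")
logger.on_key_press("h")
logger.on_key_press("i")
logger.save()
```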

20 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. In practical applications, it often happens that video, audio, protocols, software logging, interviews, questionnaires etc. are combined with each other.

21 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. In interviews and surveys, data on users' preferences can be gathered. The data collected in interviews are mostly qualitative (e.g. what would you improve in the layout?), whilst the data gathered in surveys are mostly quantitative (e.g. how many points from 1 to 10 would you give the layout?).

22 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. There are two kinds of interviews: structured and unstructured (flexible). Structured interviews consist of exactly specified questions for which answers are sought, whilst unstructured interviews cover several aspects that will be asked about, but not in exactly the wording or sequential order they would have in a structured interview.

23 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. Structured and unstructured (flexible) interviews: in unstructured interviews, the interviewer is free to follow up the individual's replies and to explore the individual's attitudes. Unstructured interviews have often been applied to find out about users' understanding of interfaces. There are also mixtures of structured and unstructured interviews, e.g. semi-structured interviews.

24 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. Questionnaires: questionnaire design focuses particularly on preparing unambiguous questions. There are two types of question structure: closed questions (a choice between given alternatives) and open questions (the respondent is free to answer in his or her own words).

25 Evaluation of User Interface Design
Observations, monitoring, users' opinions continued. Questionnaires: closed questions are often in the form of a rating scale. Once the questionnaires are returned, the researcher can evaluate the open questions qualitatively and the closed questions quantitatively, i.e. with statistical data analyses.
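As a minimal sketch of the quantitative side, closed rating-scale answers can be summarised with simple descriptive statistics. The questions, the 1-to-5 scale and the ratings below are made up for illustration.

```python
# Sketch: quantitative summary of closed questions answered on a 1-to-5 rating scale.
# The questions and ratings are made-up illustration data, not results from a real study.
from statistics import mean, median, stdev

ratings = {
    "The layout is clear":        [4, 5, 3, 4, 4, 5, 2, 4],
    "Error messages are helpful": [2, 3, 2, 1, 3, 2, 2, 3],
}

for question, scores in ratings.items():
    print(f"{question}: mean={mean(scores):.2f}, median={median(scores)}, "
          f"sd={stdev(scores):.2f}, n={len(scores)}")
```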

26 Evaluation of User Interface Design
2. Experiments and usability engineering Before an experiment is carried out, a careful planning process needs to take place. This planning stage usually consists of 3 main aspects (see next slide):

27 Evaluation of User Interface Design
2. Experiments and usability engineering continued The planning stage covers three main aspects: (1) the purpose of the experiment, i.e. what is being changed, what is kept constant and what is being measured; (2) the hypothesis, which needs to be stated in such a way that it can be tested; (3) the statistical tests that will be applied so that the collected data can be interpreted in a meaningful way.

28 Evaluation of User Interface Design
2. Experiments and usability engineering continued The main purpose of an experiment is to test a hypothesis. There will be specific variables of interest that need to be tested, and all other variables need to be controlled so that they do not confound the results. The variable that the experimenter manipulates is called the Independent Variable (e.g. age group, experience with the World Wide Web).

29 Evaluation of User Interface Design
2. Experiments and usability engineering continued The variable on which the effect is measured is called the Dependent Variable (e.g. reaction times, number of errors, etc.). For example, the hypothesis could be that a particular interface improves user interaction among teenagers compared with an older interface.

30 Evaluation of User Interface Design
2. Experiments and usability engineering continued How can such a hypothesis be tested? One could draw a sample of teenagers who are tested with the new interface and compare them with a sample of teenagers who are tested with the old interface.

31 Evaluation of User Interface Design
2. Experiments and usability engineering continued The dependent variables could be related to usability dimensions, but they could also be reaction times, number of errors, ratings by the teenagers etc. (whatever aspect one is interested in). A sketch of how such a comparison might be analysed is shown below.
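A minimal sketch of how such a two-group comparison might be analysed, here with an independent-samples t-test on task completion times. The data are invented for illustration, and the choice of test assumes the usual conditions (e.g. roughly normally distributed times) hold.

```python
# Sketch: comparing a dependent variable (task completion time, in seconds)
# between two groups of participants, one group per interface version.
# The numbers are made-up illustration data.
from scipy import stats

old_interface = [48.2, 52.1, 45.9, 60.3, 55.0, 49.7, 58.4, 51.2]
new_interface = [41.5, 38.9, 44.2, 40.1, 43.7, 39.8, 42.6, 45.0]

t_stat, p_value = stats.ttest_ind(old_interface, new_interface)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in completion times is statistically significant.")
else:
    print("No statistically significant difference was found.")
```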

32 Evaluation of User Interface Design
2. Experiments and usability engineering continued The experimental approach has been criticised for a number of reasons, for example: the laboratory is not like the real world; it is not possible to control for all variables that affect human behaviour; no account is taken of context or environment; participants do artificial tasks that need to be completed in a short time; and little or no attention is given to the participants' ideas, thoughts and beliefs.

33 Evaluation of User Interface Design
2. Experiments and usability engineering continued What is usability engineering? It is a process that specifies the usability of a product quantitatively and a priori. The product is then implemented and tested to see whether it reaches the required level of usability.

34 Evaluation of User Interface Design
2. Experiments and usability engineering continued This process of usability engineering consists of several steps: Defining usability goals through metrics. Setting usability levels that need to be achieved. Analysing the impact of possible design solutions. Incorporating user-derived feedback in product design. Iterating through the design-evaluate-design loop until the planned levels are achieved.
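A minimal sketch of what defining usability goals through metrics and checking measured values against planned levels might look like. The metric names and target values below are invented assumptions, not standard figures.

```python
# Sketch: usability goals expressed as metrics with planned levels, and a check
# of measured values against them. Metric names and thresholds are assumptions
# made up for illustration.

usability_goals = {
    # metric name: (planned level, "max" = measured value must not exceed it,
    #               "min" = measured value must reach it)
    "mean_task_time_s":    (30.0, "max"),
    "error_rate_per_task": (0.10, "max"),
    "satisfaction_1_to_5": (4.0, "min"),
}

measured = {
    "mean_task_time_s":    27.4,
    "error_rate_per_task": 0.15,
    "satisfaction_1_to_5": 4.2,
}

for metric, (target, kind) in usability_goals.items():
    value = measured[metric]
    met = value <= target if kind == "max" else value >= target
    status = "met" if met else "NOT met: iterate the design-evaluate-design loop"
    sign = "<=" if kind == "max" else ">="
    print(f"{metric}: measured {value} (target {sign} {target}) -> {status}")
```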

35 Evaluation of User Interface Design
2. Experiments and usability engineering continued Usability engineering is often carried out in a controlled and standardised laboratory setting. Recordings of the individuals' behaviour are often made via video or keystroke logging equipment.

36 Evaluation of User Interface Design
2. Experiments and usability engineering continued Advantages of usability engineering: Agreeing on a definition of usability. Setting this definition in terms of metrics and usability goals. Considering usability important alongside other engineering goals. Providing a method for prioritising usability problems.

37 Evaluation of User Interface Design
2. Experiments and usability engineering continued Weaknesses of usability engineering: The assumption that usability can be operationalised (i.e. defined in a measurable way). The experimenter needs to be familiar with laboratory methods. The cost of conducting usability tests. The testing environment (the lab) is an unnatural environment in which users do not operate in daily life.

38 Evaluation of User Interface Design
3. Interpretative Evaluation Purpose: To enable designers to understand better how users use systems in their natural environments. Therefore, the data are collected in an informal and naturalistic way, with the aim of causing as little disturbance as possible whilst the user interacts with the system.

39 Evaluation of User Interface Design
3. Interpretative Evaluation continued It is quite common for users also to participate in analysing and interpreting the data. One evaluation approach in which users and researchers work together to understand usability problems within the user's normal working environment is called contextual inquiry.

40 Evaluation of User Interface Design
3. Interpretative Evaluation continued In contextual inquiry, usability issues are identified by the users, or jointly by the users and the evaluators, whilst the users work in their natural working environments (e.g. at home). The term contextual inquiry has been used to describe the discussions that drive this evaluation process.

41 Evaluation of User Interface Design
3. Interpretative Evaluation continued The following points are considered important: * The structure and language used in the working environment * Individual and group actions/intentions * The work culture * Aspects of the work. In interpretative evaluation there are no metrics (in contrast to usability engineering).

42 Evaluation of User Interface Design
4. Predictive Evaluation Aim of predictive evaluation: To predict the sort of problems that users will experience when using a system without actually testing the system with the users, e.g. getting experts to predict the problems that typical users will experience with the user interface design.
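The overview earlier also lists modelling as a form of predictive evaluation. As a small illustration only: a keystroke-level-model style estimate predicts expert task time by summing approximate operator times. The operator values used here are commonly cited approximations, and the task breakdown is a made-up assumption.

```python
# Sketch: a keystroke-level-model style prediction of expert task time.
# Operator times (in seconds) are commonly cited approximations; the task
# breakdown below is an assumption invented for illustration.
OPERATOR_TIME = {
    "K": 0.2,   # press a key or button
    "P": 1.1,   # point with the mouse
    "H": 0.4,   # move hand between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators):
    """Sum the estimated times of the operators in a task sequence."""
    return sum(OPERATOR_TIME[op] for op in operators)

# Hypothetical task: save a file via a menu (mentally prepare, move hand to
# mouse, point at the File menu, click, point at Save, click).
task = ["M", "H", "P", "K", "P", "K"]
print(f"Predicted task time: {predict_time(task):.2f} s")
```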

