
1 COM531 Multimedia Technologies
Lecture 10 – HCI Guidelines & Evaluation

2 The User Interface All those parts of the system we come into contact with… Physically, we might interact with a device by pressing buttons or moving levers, and the device might respond by providing feedback through the pressure of the button or lever. Perceptually, the device displays things on a screen that we can see, or makes noises that we can hear. Conceptually, we interact with a device by trying to work out what it does and what we should be doing; the device provides messages and other displays designed to help us do this.

3 The User Interface
Input: methods are needed to enter commands (tell the system what we want it to do), to navigate through the commands and the content of the system, and to enter data or other content into the system.
Output: so the system can tell us what is happening (provide feedback), and so the system can display the content to us.

4 Key Issues Accessibility Usability Acceptability Engagement

5 Accessibility Removal of the barriers that would otherwise exclude some people from using the system at all. Legislation requires software to be accessible, e.g. the UK's Equality Act 2010 and Section 508 in the US. Web Accessibility Initiative: the Web Content Accessibility Guidelines (WCAG) 2.0 explain how to make Web content accessible to people with disabilities. ISO 9241 Part 171: Guidance on Software Accessibility.
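WCAG's requirements are concrete enough to check in code. As a minimal sketch (not the W3C's own implementation), the colour-contrast check behind WCAG 2.0 success criterion 1.4.3 can be computed directly from sRGB values using the relative-luminance formula defined in the guidelines; the 4.5:1 threshold below is the AA level for normal-size text.

```python
# Sketch: WCAG 2.0 contrast-ratio check (success criterion 1.4.3).
# Luminance formula follows the WCAG 2.0 definition of "relative luminance".

def relative_luminance(rgb):
    """Relative luminance of an sRGB colour given as (r, g, b) in 0-255."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))              # 21.0 (max contrast)
print(contrast_ratio((119, 119, 119), (255, 255, 255)) >= 4.5) # grey #777 on white
                                                               # is ~4.48:1 -> False,
                                                               # just below AA
```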

6 Principles of Universal Design
Equitable Use: The design does not disadvantage or stigmatize any group of users. Flexibility in Use: The design accommodates a wide range of individual preferences and abilities. Simple, Intuitive Use: Use of the design is easy to understand, regardless of the user's experience, knowledge, language skills, or current concentration level. Perceptible Information: The design communicates necessary information effectively to the user, regardless of ambient conditions or the user's sensory abilities.

7 Principles of Universal Design
Tolerance for Error: The design minimizes hazards and the adverse consequences of accidental or unintended actions. Low Physical Effort: The design can be used efficiently and comfortably, and with a minimum of fatigue. Size and Space for Approach & Use: Appropriate size and space is provided for approach, reach, manipulation, and use, regardless of the user's body size, posture, or mobility.

8 Acceptability and Engagement
Acceptability refers to fitness for purpose in the context of use; for example, use of a mobile phone may be acceptable in one context and unacceptable in another. Engagement is concerned with all the qualities of an experience that really pull people in; the sense of immersion that one feels.

9 Design Principles, Guidelines and Rules
International Standards sit at the top of a hierarchy.
Design Principles: universally applicable, high-level design goals, based on international standards; open to broad interpretation. E.g. 'Design for human cognitive limitations'.
Design Guidelines: principles are interpreted and guidelines produced to assist with design situations; these must be interpreted within the context of the task, since usability includes task-context-dependent features. E.g. 'Recognition rather than recall'; 'Make it obvious which menu items are/are not active at any time'.
Design Rules: highly specific, low-level design rules, found in corporate style guides and design manuals. E.g. in menu design, 'Maximum of 10 items per panel; inactive items should be greyed out'.

10 HCI Guidelines “Broad brush” design rules
A useful checklist for good design: better design results from using these than from using nothing! Different collections: Benyon and Turner's 12 Principles; Nielsen's 10 Heuristics; Shneiderman's 8 Golden Rules.

11 Benyon and Turner’s 12 Principles
1. Visibility; 2. Consistency; 3. Familiarity; 4. Affordance; 5. Navigation; 6. Control; 7. Feedback; 8. Recovery; 9. Constraints; 10. Flexibility; 11. Style; 12. Conviviality

12 1. Visibility Computers are good at remembering; people are not!
Try to ensure that things are visible, so that people can see what functions are available and what the system is currently doing. This is an important psychological principle: it is easier to recognize things than to have to recall them. If it is not possible to make something visible, make it observable. Consider making things 'visible' through the use of sound and touch. Prefer menus, icons and dialog boxes to typed commands.

13 1. Visibility The common commands and defaults are made visible
Other commands are observable by using the drop-down menus. Visibility and sensible grouping make people aware of other options.

14 2. Consistency Be consistent in the use of design features
Be consistent with similar systems and standard ways of working: consistent use of commands, sequences of tasks, terminology, layout and structure. Internal consistency within the system covers interaction style and presentation; external consistency between packages makes it easier to move from one application to another ('look and feel', though this alone does not guarantee usability).

15 3. Familiarity Use language and symbols that users are familiar with
Where this is not possible because the concepts are quite different from those people know about, provide a suitable metaphor to help them transfer similar and related knowledge from a more familiar domain.
User: "I just got a message Rstrd Info. What does it mean?" Designer: "That's restricted information." User: "But surely you can tell me!!!" Designer: "No, no… Rstrd Info stands for 'Restricted Information'." User: "Hmm… but what does it mean???" Designer: "It means the program is too busy to let you log on." User: "OK. I'm taking a break."

16 4. Affordance Design things so that it is clear what they are for
Affordance refers to the properties of objects and how these relate to how the objects are used. For example, make buttons look like buttons so people will press them. Use textboxes for data entry and labels for displaying output.

17 5. Navigation Provide support to enable people to move around the system Maps, directional signs, informational signs Menus are often used for navigation, signs (labels) indicate where else you can go in the system

18 6. Control Make it clear who or what is in control
Allow the user to take control of a system which responds to their actions. Interleaving modes: Normal View/Print Preview; Design View/Code View. Tailor the system to individual needs (accessibility): window size, font size, colour, toolbars. Provide good help and documentation: tooltips, context-sensitive help, user guides, online help.
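As an illustration of the 'tooltips' item above, here is a minimal sketch using Python's standard-library tkinter; the widget names, layout and tooltip text are invented for the example.

```python
# Sketch: a simple hover tooltip, one of the help aids listed above.
import tkinter as tk

def add_tooltip(widget, text):
    tip = None
    def show(_event):
        nonlocal tip
        if tip is not None:
            return
        tip = tk.Toplevel(widget)
        tip.overrideredirect(True)  # borderless popup window
        x = widget.winfo_rootx() + 20
        y = widget.winfo_rooty() + widget.winfo_height() + 4
        tip.geometry(f"+{x}+{y}")
        tk.Label(tip, text=text, background="#ffffe0",
                 relief="solid", borderwidth=1).pack()
    def hide(_event):
        nonlocal tip
        if tip is not None:
            tip.destroy()
            tip = None
    widget.bind("<Enter>", show)
    widget.bind("<Leave>", hide)

root = tk.Tk()
button = tk.Button(root, text="Save")
button.pack(padx=40, pady=40)
add_tooltip(button, "Save the current document (Ctrl+S)")
root.mainloop()
```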

19 7. Feedback Feed information back from the system to the user so that they know what effect their actions have had. Continuously inform the user about what the system is doing and how it is interpreting the user's input. Constant and consistent feedback will enhance the feeling of control, e.g. cursor style, status bar. The user should know when completion is successful: a direct view as changes happen, or a message box.

20 8. Recovery Enable recovery from actions and errors quickly and effectively. Let users return to the previous state by providing Undo and Cancel options. If they make a mistake, offer clear and informative instructions to enable them to recover, e.g. message boxes giving instructions. Users don't like to feel trapped by the computer! Provide clearly marked exits: offer an easy way out of as many situations as possible.
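A common way to implement Undo is a history stack of reversible commands (the command pattern). The sketch below is a minimal illustration with an invented one-operation 'document'; a real editor would also need a redo stack.

```python
# Sketch: an undo stack supporting the Recovery principle.
class Command:
    def do(self, doc): ...
    def undo(self, doc): ...

class Append(Command):
    """Append a line to the document; undo removes it again."""
    def __init__(self, line):
        self.line = line
    def do(self, doc):
        doc.append(self.line)
    def undo(self, doc):
        doc.pop()

class Editor:
    def __init__(self):
        self.doc, self.history = [], []
    def execute(self, command):
        command.do(self.doc)
        self.history.append(command)
    def undo(self):
        if self.history:  # nothing to undo is not an error
            self.history.pop().undo(self.doc)

editor = Editor()
editor.execute(Append("first line"))
editor.execute(Append("second line"))
editor.undo()
print(editor.doc)  # ['first line'] -- back to the previous state
```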

21 9. Constraints Provide constraints so that people do not try to do things that are inappropriate. Define the allowable actions, and ask for confirmation of dangerous operations. Prevent mistakes by limiting the amount of typing required and by disabling menu commands that do not currently apply (grey = inactive).
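As a small illustration of 'grey = inactive', the tkinter sketch below keeps an action disabled until the user's input makes it valid; the widgets and labels are invented for the example.

```python
# Sketch: constraining the interface by disabling an invalid action.
import tkinter as tk

root = tk.Tk()
var = tk.StringVar()
entry = tk.Entry(root, textvariable=var)
entry.pack(padx=20, pady=10)
submit = tk.Button(root, text="Submit", state="disabled")  # greyed out initially
submit.pack(pady=10)

def on_change(*_args):
    # The button is only active when there is something to submit.
    submit.config(state="normal" if var.get().strip() else "disabled")

var.trace_add("write", on_change)
root.mainloop()
```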

22 10. Flexibility Provide multiple ways of doing things to accommodate the different levels and needs of a range of users. Provide shortcuts through hot keys, e.g. Ctrl+C for 'Copy', Ctrl+V for 'Paste', so that regular, familiar actions can be performed more quickly. Let users personalise the system, e.g. by viewing or removing toolbars as needed.
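The sketch below illustrates the two routes the flexibility principle asks for: a visible menu item for newcomers and a hot-key shortcut for frequent users. It uses standard-library tkinter; 'Save' and Ctrl+S are invented examples. Note that tkinter's accelerator option is only a label, so the key must be bound separately.

```python
# Sketch: the same action reachable via a menu (novices) or a hot key (experts).
import tkinter as tk

root = tk.Tk()

def save(_event=None):
    print("saved")  # stand-in for the real action

menubar = tk.Menu(root)
filemenu = tk.Menu(menubar, tearoff=0)
filemenu.add_command(label="Save", accelerator="Ctrl+S", command=save)
menubar.add_cascade(label="File", menu=filemenu)
root.config(menu=menubar)
root.bind("<Control-s>", save)  # the accelerator label alone does not bind the key
root.mainloop()
```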

23 11. Style 12. Conviviality
Style: stylish, attractive designs. Conviviality: polite, friendly and pleasant designs.

24 Nielsen’s 10 Heuristics Visibility of system status
Match between system and the real world User control and freedom Consistency and standards Error prevention Helping users recognise, diagnose and recover from errors Recognition rather than recall Flexibility and efficiency of use Aesthetic and minimalist design Help and documentation

25 Shneiderman’s 8 Golden Rules
Strive for consistency Enable frequent users to use shortcuts Offer informative feedback Design dialogs to yield closure Offer error prevention and simple error handling Permit easy reversal of actions Support internal locus of control Reduce short-term memory load

26 International Standards
ISO 9126 Software Engineering: international standard for the evaluation of the quality of software, in 4 parts. Part 1 (Software Quality) defines six quality characteristics: Functionality, Reliability, Usability, Efficiency, Maintainability, Portability.

27 International Standards
ISO 14915:2002 Software ergonomics for multimedia user interfaces. Part 1: Design principles and framework; Part 2: Multimedia navigation and control; Part 3: Media selection and combination.
ISO 20282-1:2006 Ease of operation of everyday products. Part 1: Design requirements for context of use and user characteristics.
ISO 6385:2004 Ergonomic principles in the design of work systems.
ISO 9355:1999 Parts 1 and 2: "Ergonomic requirements for the design of displays and control actuators".

28 International Standards
ISO/IEC 25051:2005 Software engineering -- Software product Quality Requirements and Evaluation (SQuaRE): requirements for quality of Commercial Off-The-Shelf (COTS) software products and instructions for testing.
ISO/IEC 25062:2006 Software engineering -- Software product Quality Requirements and Evaluation (SQuaRE): Common Industry Format (CIF) for usability test reports.
ISO/TR 16982:2002 Ergonomics of human-system interaction: usability methods supporting human-centred design.

29 International Standards
Specific to icons: ISO/IEC 11581-1 Icon symbols and functions - General; ISO/IEC 11581-2 Object icons; ISO/IEC 11581-3 Pointer icons; ISO/IEC 11581-4 Control icons; ISO/IEC 11581-5 Tool icons; ISO/IEC 11581-6 Action icons; ISO/IEC 18035 Icons for controlling multimedia software; ISO/IEC 18036 Icons for World Wide Web browser toolbars.

30 International Standards
ISO 9241 Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs): 32 parts covering all aspects of usability (hardware, software, processes). Part 11: Guidance on Usability. Read the ISO 9241 'Bluffer's Guide' for more information on standards.

31 Evaluation

32 User-Centred System Design
Problem Statement; Observation of existing systems; Task Analysis; Requirements Gathering; Requirements Statement (functional and non-functional); Usability Guidelines & Heuristics; Design & Storyboarding; Storyboard; Prototype Implementation; Prototype; Evaluation; Transcript & Evaluation Report; Installation; Final Implementation

33 The Star Method

34 Evaluation ‘Evaluation is concerned with gathering data about the usability of a design or product by a specified group of users for a particular activity within a specified environment or work context’ (Preece, 1994) Evaluation is central to designing interactive systems Everything gets evaluated at every step of the process For example, requirements are evaluated, storyboards evaluated and a prototype built. The prototype is then evaluated and some aspect of a physical design identified and implemented; this is then evaluated again and so the iteration continues until a final product is complete

35 Why do we Evaluate? 'Users will evaluate your interface sooner or later.'
To suggest improvements to the design; to confirm that the software meets all of the functional and usability specifications; to confirm acceptability of the interface and/or supporting materials; to compare alternative designs; to ensure that it meets the expectations of users; to match or exceed the usability of competitors' products; to ensure that it complies with standards and any statutory requirements.
Some of the good reasons why evaluation should be done include: 1. To suggest improvements to the design so that it fits better with the tasks it is to perform, the work environment in which it will be used, and the preferred ways of working of the target users. 2. To confirm that the software meets all of the functional and usability specifications that were made at the beginning of (and throughout) the development process. 3. To compare alternative designs to determine which is 'best'. 4. To ensure that it meets the expectations of your customers and will not damage your competitive reputation in the marketplace. 5. To see that it matches, or exceeds, the usability of your competitors' products and/or of any earlier version of the same package. 6. To ensure that it complies with any statutory requirements (such as those incorporated into EU legislation).

36 Evaluation is Often Performed Badly
Designers assume their own personal behaviour is 'representative' of that of an average user. Designers make assumptions about how people are able to operate the software, but these assumptions might well be unfounded. Acceptance of traditional/standard interface design: it is assumed that following style guides ensures good software design. Evaluation may be postponed until 'a more convenient time' when functionality is complete. Poor knowledge of evaluation techniques and lack of expertise in analysing experiments.
Reasons for poor evaluation have been identified as: 1. The designers assume that their own personal behaviour in using the software is representative of that of the average user; thus, if they can easily and effectively use the software, they think anyone will be able to do so. 2. Designers make assumptions about how people are able to perform with software, but these assumptions might well be unfounded. 3. Some companies develop a standard style and layout (style guidelines) for their software; it is then assumed that if these guidelines are followed, good software will result. However, a style developed for one kind of product and one set of users will not necessarily be successful in a different product for a different user base. 4. Evaluation is often put off while developers concentrate on achieving the functional requirements of the software. Since this process usually takes longer than planned, by the time they get round to thinking of evaluation, time and budget are running out and the job is skimped. 5. A number of software designers have poor knowledge of evaluation techniques, due to a poor level of training in this area.

37 What Do We Evaluate?
Usability (against criteria); initial designs (pre-implementation); interfaces/interaction (against heuristics); the prototype at various stages; the final implementation of the software system; documentation.

38 Types of Evaluation
Formative evaluation: evaluation within the design process; produces good usability through a process of evolution, forming and re-forming the product; informal or structured.
Summative evaluation: takes the finished system and assesses it for aspects of usability; experiments are carried out after implementation; its purpose is quality control; formal, costly and time-consuming.

39 Formative Evaluation
Ask the experts (no users): Cognitive Walkthrough; Heuristic Evaluation.
Ask the users: focus groups; questionnaires; interviews.
User interaction: Think Aloud; Co-operative Think Aloud.
(In the original slide the methods are arranged along a cost axis, from cheap to expensive.)

40 Types of Data
Quantitative data: 'objective' measures of certain factors by direct observation, e.g. time to complete certain tasks, accuracy of recall, number of errors made; user performance or attitudes can be recorded in numerical form.
Qualitative data: 'subjective' responses; opinions rather than measurements; reports and opinions that may be categorized in some way but not reduced to numerical values.
Depending on the method used, evaluation can result in the collection of either quantitative or qualitative data. Quantitative data: 'objective' measurement of certain factors by direct observation, for example the time to complete certain tasks, accuracy in recalling how to do some function, or the number of errors made in certain tasks. More 'subjective' measures can be quantified by using some form of rating scale. Qualitative data: more subjective in nature, opinions rather than measurements, so obviously more difficult to analyse. It may in some cases be possible to place sets of opinions into defined categories, though they cannot be reduced to numerical form. Despite the lack of objective measures, these kinds of data are just as useful to the design team in developing software that people will actually want to use.
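A small sketch of how quantitative measures like those above might be summarised; the participant data are invented for illustration.

```python
# Sketch: summarising quantitative usability data (task times in seconds
# and error counts), one value per participant.
from statistics import mean, median, stdev

task_times = [34.2, 41.0, 28.7, 52.3, 39.8]  # invented sample data
errors = [0, 2, 1, 3, 1]

print(f"time:   mean {mean(task_times):.1f}s, median {median(task_times):.1f}s, "
      f"sd {stdev(task_times):.1f}s")
print(f"errors: mean {mean(errors):.1f} per participant")
```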

41 Recording Methods
Paper and pencil: cheap, but limited to writing speed. Audio: good for think aloud, but difficult to match with other protocols. Video: accurate and realistic, but needs special equipment. Computer logging: automatic and unobtrusive, but the large amounts of data are difficult to analyse. User notebooks: coarse and subjective, but give useful insights; good for longitudinal studies. In practice these are used in combination. Audio/video transcription is difficult and requires skill; some automatic support tools are available.
There are a number of different methods by which data can be collected. Most of them require data gathering from at least one representative user at a time, and so they would properly be used for what we termed empirical evaluation. Some of these tools are outlined above; an explanation of each will be provided in class.
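As a sketch of the 'computer logging' method, the class below timestamps each user event into a CSV file so that task times and error counts can be derived afterwards; the event names and file name are invented.

```python
# Sketch: automatic, unobtrusive logging of interaction events.
import csv
import time

class InteractionLogger:
    def __init__(self, path):
        self.file = open(path, "w", newline="")
        self.writer = csv.writer(self.file)
        self.writer.writerow(["timestamp", "event", "detail"])

    def log(self, event, detail=""):
        # One row per event; timestamps let us compute task durations later.
        self.writer.writerow([f"{time.time():.3f}", event, detail])

    def close(self):
        self.file.close()

log = InteractionLogger("session.csv")
log.log("task_start", "buy ticket")
log.log("click", "destination=Belfast")
log.log("error", "invalid coin")
log.log("task_end", "buy ticket")
log.close()
```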

42 1. Cognitive Walkthrough
An 'expert' simulates the user's actions and thoughts and steps through the action sequence needed to complete the task. They have a rough plan and explore the system for possible actions, interpreting the system's responses and assessing whether each step is or is not good for a new user: are the actions appropriate and visible? Is the feedback adequate? It suits systems primarily learned by exploration, e.g. walk-up-and-use systems such as ATMs and ticket machines. Overall question: how successfully does this design guide the unfamiliar user through the performance of the task?
This is another 'analytic' evaluation technique in that it is carried out by the designer, or an 'expert' user, who attempts to simulate the way an average user would react to the system. It is particularly useful for the design of 'walk-up-and-use' systems, such as cash machines, where the user will have to learn how to operate it by 'exploring' the interface. That is, the designer must make sure that it is going to be obvious, just from looking at the interface, what action should be taken by the user at each stage, and that sufficient and clear feedback is given to the user to show that they have done the correct thing. So, the main question to be answered by the technique is: 'How well does the design guide the unfamiliar user through the performance of some task?' There is no concern to measure, for example, the speed with which the user can perform the action, though the designer might want to predict, say, likely error rates and their consequences, since these could greatly affect the user's 'satisfaction' with the system. To apply the cognitive walkthrough technique the designer selects a particular task and breaks it down into its constituent subtasks. For example, in an automated ticket machine, the task might be described as 'Purchase a ticket (single or return) to a particular destination', which breaks down into subtasks such as: 1. choose destination; 2. indicate journey type (single/return); 3. deposit money; 4. get ticket. The designer 'walks through' these stages in sequence and for each one asks: (a) Will it be obvious to the user what action to take next? (b) Will the user correctly interpret the description given for the correct action, and connect it with what they are trying to achieve? (c) Will the system give feedback that the user will interpret correctly? This process of asking the three questions is carried out through all the stages of the task; any 'no' answers indicate where the designer needs to improve the interface design.
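The walkthrough lends itself to a simple checklist structure. The sketch below encodes the ticket-machine subtasks and the three questions from the notes above; the yes/no answers are invented purely to show how 'no' answers surface design problems.

```python
# Sketch: the cognitive walkthrough as data; any False answer flags a problem.
QUESTIONS = [
    "Will it be obvious to the user what action to take next?",
    "Will the user connect the action's description with their goal?",
    "Will the user correctly interpret the system's feedback?",
]

SUBTASKS = ["choose destination", "indicate journey type",
            "deposit money", "get ticket"]

# One True/False per question per subtask (invented illustration).
answers = {
    "choose destination":    [True, True, True],
    "indicate journey type": [True, False, True],   # label unclear
    "deposit money":         [True, True, False],   # no feedback on coin rejection
    "get ticket":            [True, True, True],
}

for task in SUBTASKS:
    for question, ok in zip(QUESTIONS, answers[task]):
        if not ok:
            print(f"problem in '{task}': {question}")
```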

43 2. Heuristic Evaluation A 'heuristic' can be defined as a 'rule-of-thumb' or general rule. The idea is to assess the design against known criteria, e.g. Nielsen's 10 Heuristics. A number of reviewers (3-5) go through the product, screen by screen, and note any violations of these principles, each with a severity rating (0-4). All responses are collected and aggregated. About 5 reviewers can find about 75% of the problems.
A 'heuristic' can be defined as a 'rule-of-thumb', or general rule. For example, in evaluating an interface, heuristics might be derived from the general guidelines that were used in the system's design; these might take the form of statements like 'prevent errors', 'provide feedback', etc. A number of reviewers then go through the product, screen by screen, and make note of any problems, using the list of heuristics to focus their thoughts. Some studies have shown that about five reviewers can find about 75% of the problems that are found by 15 reviewers. So this method can be very economical in discovering many of the major problems with an interface, especially if it is used in conjunction with other evaluation methods that would detect problems missed by the heuristic evaluation. Refer to Preece et al. (1994), p. 676 for further explanation.
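A sketch of the aggregation step: each reviewer's findings carry a severity rating, and duplicate reports of the same problem are merged, here by keeping the highest severity (one reasonable policy among several). The findings themselves are invented.

```python
# Sketch: collecting and aggregating heuristic-evaluation findings.
from collections import defaultdict

findings = [  # (reviewer, screen, heuristic violated, severity 0-4) -- invented
    ("A", "checkout", "Visibility of system status", 3),
    ("B", "checkout", "Visibility of system status", 2),
    ("B", "search",   "Error prevention", 4),
    ("C", "search",   "Recognition rather than recall", 1),
]

merged = defaultdict(int)
for _reviewer, screen, heuristic, severity in findings:
    key = (screen, heuristic)
    merged[key] = max(merged[key], severity)  # keep the worst rating reported

for (screen, heuristic), severity in sorted(merged.items(), key=lambda kv: -kv[1]):
    print(f"severity {severity}: {heuristic} ({screen})")
```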

44 3. Focus Groups A group of users and an evaluator
A structured but flexible group interview that allows interaction between users. Typically used for requirements gathering rather than for studying system use, but it can also be used post-task.

45 4. Questionnaire Questions fixed in advance
Completed independently of the evaluator. The purpose of the questionnaire and of the information gathered must be clear. Question types: general, open-ended, scalar, multi-choice, ranked.
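For scalar (Likert-style) items, responses can be summarised numerically; below is a minimal sketch with invented responses. Reporting the median alongside the mean is usual for ordinal data of this kind.

```python
# Sketch: summarising a scalar questionnaire item (1 = disagree, 5 = agree).
from statistics import mean, median

responses = [4, 5, 3, 4, 2, 5, 4]  # invented: "The system was easy to use"
print(f"n={len(responses)}, median={median(responses)}, mean={mean(responses):.2f}")
agree = sum(r >= 4 for r in responses) / len(responses)
print(f"{agree:.0%} agreed or strongly agreed")
```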

46 5. Interviews Asking users about their experience with a system
General questions first, followed by more detailed ones. Needs careful planning, structured around some central questions. Can be structured, semi-structured or unstructured. Requires both consistency and flexibility.

47 6. Think Aloud User participation
The user is asked to talk through what he/she is doing whilst being observed: describing what is happening, why an action is taken, what the user is trying to do, what the user is thinking, the goal. The evaluator documents actions and problems found with the interface; actions and comments are recorded by the observer using paper notes, video or audio recording.
As the name suggests, this involves someone observing a small group (usually between 3 and 6 people) of (articulate) users as they work through specified tasks or benchmark tests. The users talk aloud as they work, describing not only what they are doing but also what they are thinking while they are doing it. Their actions and their comments are recorded by the observer using paper notes, video or audio recording. Advantages of the method are that any difficulties with the system are quickly highlighted, and the users' comments are a very valuable source of information for the developers, particularly about qualitative aspects of the interface. It can be used at any stage of development and gives rapid feedback which can be quickly incorporated into the next refinement of the system. Some disadvantages are that the very fact of being observed can affect one's performance (up or down), so the test subjects might not be acting like 'normal' users (this is known as the 'Hawthorne Effect'). The technique can also be quite cumbersome in that many notes, or long recordings, are produced, and analysing these can take a long time. The analysis might even involve more expensive 'experts', such as psychologists to analyse, say, body language observed on video tape. In addition, the designer is depending on the test subjects being truthful and not holding anything back for fear of embarrassing either themselves or the designer.

48 6. Think Aloud To increase the quality of the research, we must avoid the following effects during an observational study. Hawthorne Effect: the user increases performance to please the observer. Observer Bias: the observer only sees and records what they want to see. Halo Effect: the observer's judgement is influenced by another, separate, positive judgement.

49 7. Co-operative Think Aloud
A variation on think-aloud: user and evaluator co-operate in identifying problems. The evaluator asks questions during the session, and the user can ask for clarification.

50 General Points Evaluation is relevant throughout all stages of development. Different methods are best suited to different stages; as a rule of thumb: early design - analytical; prototype development - observational/experimental; late development - survey. A mix of objective and subjective measures is desirable.
Remember, we said earlier that evaluation can have two main purposes: formative, done throughout the development cycle in order to test ideas and find problems, with a view to improving the design; and summative, when the system is more-or-less complete, to check that it is at last ready for 'the market'. An example of a late survey method might be to include a questionnaire, or a contact address, along with a 'beta version' of the software and then invite comment, over a period of months, from people using the beta version. This sort of method is often used with software such as Web browsers and other programs that can be downloaded from the Internet.

