2 The User Interface
All those parts of the system we come into contact with.
Physically we might interact with a device by pressing buttons or moving levers, and the interactive device might respond by providing feedback through the pressure of the button or lever.
Perceptually the device displays things on a screen which we can see, or makes noises which we can hear.
Conceptually we interact with a device by trying to work out what it does and what we should be doing. The device provides messages and other displays designed to help us do this.
3 The User Interface
Input
Methods are needed to enter commands (tell the system what we want it to do).
We also need to be able to navigate through the commands and the content of the system.
We need to enter data or other content into the system.
Output
So the system can tell us what is happening, i.e. provide feedback.
So the system can display the content to us.
5 Accessibility
Removal of the barriers that would otherwise exclude some people from using the system at all.
Legislation requires software to be accessible: the UK’s Equality Act 2010 and Section 508 in the US.
Web Accessibility Initiative: the Web Content Accessibility Guidelines (WCAG) 2.0 explain how to make Web content accessible to people with disabilities.
ISO 9241 Part 171: Guidance on Software Accessibility.
6 Principles of Universal Design
Equitable Use: the design does not disadvantage or stigmatize any group of users.
Flexibility in Use: the design accommodates a wide range of individual preferences and abilities.
Simple, Intuitive Use: use of the design is easy to understand, regardless of the user's experience, knowledge, language skills, or current concentration level.
Perceptible Information: the design communicates necessary information effectively to the user, regardless of ambient conditions or the user's sensory abilities.
7 Principles of Universal Design
Tolerance for Error: the design minimizes hazards and the adverse consequences of accidental or unintended actions.
Low Physical Effort: the design can be used efficiently and comfortably, with a minimum of fatigue.
Size and Space for Approach and Use: appropriate size and space is provided for approach, reach, manipulation and use, regardless of the user's body size, posture, or mobility.
8 Acceptability and Engagement
Acceptability refers to fitness for purpose in the context of use; consider, for example, unacceptable uses of mobile phones.
Engagement is concerned with all the qualities of an experience that really pull people in: the sense of immersion that one feels.
9 Design Principles, Guidelines and Rules
Design Principles: universally applicable, high-level design goals based on International Standards. They are open to broad interpretation, e.g. ‘Design for Human Cognitive Limitations’.
Design Guidelines: produced by interpreting the principles to assist with particular design situations. They must in turn be interpreted within the context of the task, since usability includes task-context-dependent features, e.g. ‘Recognition rather than recall’; ‘Make it obvious which menu items are/are not active at any time’.
Design Rules: highly specific, low-level rules found in corporate style guides and design manuals, e.g. in menu design, ‘Maximum of 10 items per panel; inactive items should be greyed out’.
10 HCI Guidelines
“Broad brush” design rules: a useful checklist for good design. Better designs result from using these than from using nothing!
Different collections:
Benyon and Turner’s 12 Principles
Nielsen’s 10 Heuristics
Shneiderman’s 8 Golden Rules
11 Benyon and Turner’s 12 Principles
1. Visibility
2. Consistency
3. Familiarity
4. Affordance
5. Navigation
6. Control
7. Feedback
8. Recovery
9. Constraints
10. Flexibility
11. Style
12. Conviviality
12 1. Visibility
Computers are good at remembering; people are not! Try to ensure that things are visible, so that people can see what functions are available and what the system is currently doing.
This reflects the psychological principle that it is easier to recognize things than to recall them.
If it is not possible to make something visible, make it observable. Consider making things ‘visible’ through the use of sound and touch.
Use menus, icons and dialog boxes rather than typed commands.
13 1. Visibility
The common commands and defaults are made visible; other commands are observable via the drop-down menus.
Visibility and sensible grouping make people aware of the other options.
14 2. Consistency
Be consistent in the use of design features, and be consistent with similar systems and standard ways of working: consistent use of commands, sequences of tasks, terminology, layout and structure.
Internal consistency: within the system, in interaction style and presentation.
External consistency: between packages, making it easier to move from one application to another; a shared ‘look and feel’ (though this does not guarantee usability).
15 3. Familiarity
Use language and symbols that users are familiar with. Where this is not possible, because the concepts are quite different from those people know about, provide a suitable metaphor to help them transfer similar and related knowledge from a more familiar domain.
User: “I just got a message Rstrd Info. What does it mean?”
Designer: “That’s restricted information.”
User: “But surely you can tell me!!!”
Designer: “No, no… Rstrd Info stands for ‘Restricted Information’.”
User: “Hmm… but what does it mean???”
Designer: “It means the program is too busy to let you log on.”
User: “OK. I’m taking a break.”
16 4. Affordance
Design things so that it is clear what they are for: the properties of objects should relate to how the objects are used.
For example, make buttons look like buttons so that people will press them; use textboxes for data entry and labels for displaying output.
17 5. Navigation
Provide support to enable people to move around the system: maps, directional signs and informational signs.
Menus are often used for navigation; signs (labels) indicate where else you can go in the system.
18 6. Control
Make it clear who or what is in control. Allow the user to take control of a system which responds to their actions.
Interleaving modes, e.g. Normal View and Print Preview, or Design View and Code View.
Tailor the system to individual needs (accessibility): window size, font size, colour, toolbars.
Provide good help and documentation: tooltips, context-sensitive help, user guides, online help.
19 7. Feedback
Feed information back from the system to the user, so that they know what effect their actions have had.
Continuously inform the user about what the system is doing and how it is interpreting the user’s input.
Constant and consistent feedback will enhance the feeling of control, e.g. cursor style, status bar.
The user should know when an action has completed successfully: a direct view as changes happen, or a message box.
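The status-bar idea above can be sketched in code. This is a minimal, illustrative example only; the function name and message format are invented and not taken from any real toolkit:

```python
# Sketch: continuous feedback during a long-running task, shown as a
# status-bar style message. Names and format are invented for illustration.

def render_status(done: int, total: int) -> str:
    """Return a progress message so the user always knows what is happening."""
    percent = int(100 * done / total)
    filled = percent // 10
    bar = "#" * filled + "-" * (10 - filled)
    return f"[{bar}] {percent}% - copying file {done} of {total}"

print(render_status(3, 10))
```

Updating such a message after every unit of work keeps the feedback constant and consistent, which the slide argues enhances the feeling of control.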
20 8. Recovery
Enable recovery from actions and errors quickly and effectively.
Let users know that they can return to the previous state by providing Undo and Cancel options.
If they make a mistake, offer clear and informative instructions to enable them to recover, e.g. message boxes giving instructions.
Users don’t like to feel trapped by the computer! Provide clearly marked exits: the system should offer an easy way out of as many situations as possible.
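Undo is commonly implemented with a history stack of previous states. The sketch below is a hypothetical minimal editor, not any real application’s code:

```python
# Minimal undo-stack sketch: each change saves the previous state, so the
# user can always return to it. Class and method names are illustrative.

class Editor:
    def __init__(self):
        self.text = ""
        self._history = []               # stack of snapshots for Undo

    def type(self, s: str):
        self._history.append(self.text)  # save state *before* the change
        self.text += s

    def undo(self):
        if self._history:                # a clearly marked, always-safe exit
            self.text = self._history.pop()

ed = Editor()
ed.type("Hello")
ed.type(" world")
ed.undo()                                # back to the previous state
```

Note that `undo` on an empty history simply does nothing, so the user is never trapped by an error from the recovery mechanism itself.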
21 9. Constraints
Provide constraints so that people do not try to do things that are inappropriate.
Define the allowable actions, and ask for confirmation of dangerous operations.
Users are prevented from making mistakes by limiting the amount of typing required and by disabling menu commands (grey = inactive).
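The three constraint mechanisms above (allowable actions, confirmation of dangerous operations, disabled commands) can be sketched together; the action names are invented for illustration:

```python
# Sketch: constraining what the user can do. Actions outside the allowable
# set behave like greyed-out menu items; dangerous ones need confirmation.

ACTIONS = {"open", "save", "delete"}      # the allowable actions
DANGEROUS = {"delete"}                    # operations needing confirmation

def dispatch(action: str, confirmed: bool = False) -> str:
    if action not in ACTIONS:
        return "disabled"                 # greyed-out / inactive command
    if action in DANGEROUS and not confirmed:
        return "confirm?"                 # ask before a dangerous operation
    return f"ran {action}"
```

Because inappropriate actions are rejected before they run, the user cannot make that class of mistake at all.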
22 10. Flexibility
Provide multiple ways of doing things, to accommodate the different levels and needs of a range of users.
Provide shortcuts through hot keys, e.g. Ctrl+C for ‘Copy’ and Ctrl+V for ‘Paste’, so that regular, familiar actions can be performed more quickly.
Let users personalise the system, e.g. by viewing/removing toolbars as needed.
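A common way to offer multiple routes to the same command is a dispatch table mapping both hot keys and menu names onto one command set. This is an illustrative sketch with invented names:

```python
# Sketch: the same command is reachable via a hot key or its menu name,
# giving novices and frequent users their preferred route.

COMMANDS = {"copy": "copied", "paste": "pasted"}
HOTKEYS = {"Ctrl+C": "copy", "Ctrl+V": "paste"}

def invoke(key_or_name: str) -> str:
    name = HOTKEYS.get(key_or_name, key_or_name)  # translate a shortcut, if any
    return COMMANDS.get(name, "unknown")
```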
23 11. Style and 12. Conviviality
Style: stylish, attractive designs.
Conviviality: polite, friendly and pleasant designs.
24 Nielsen’s 10 Heuristics
Visibility of system status
Match between system and the real world
User control and freedom
Consistency and standards
Error prevention
Help users recognise, diagnose and recover from errors
Recognition rather than recall
Flexibility and efficiency of use
Aesthetic and minimalist design
Help and documentation
25 Shneiderman’s 8 Golden Rules
Strive for consistency
Enable frequent users to use shortcuts
Offer informative feedback
Design dialogs to yield closure
Offer error prevention and simple error handling
Permit easy reversal of actions
Support internal locus of control
Reduce short-term memory load
26 International Standards
ISO 9126 Software Engineering: the international standard for the evaluation of the quality of software, in 4 parts.
Part 1, Software Quality, covers:
Functionality
Reliability
Usability
Efficiency
Maintainability
Portability
27 International Standards
ISO :2002 Software ergonomics for multimedia user interfaces
Part 1: Design principles and framework
Part 2: Multimedia navigation and control
Part 3: Media selection and combination
ISO :2006 Ease of operation of everyday products
Part 1: Design requirements for context of use and user characteristics
ISO 6385:2004 Ergonomic principles in the design of work systems
ISO :1999 Parts 1 and 2: Ergonomic requirements for the design of displays and control actuators
28 International Standards
ISO/IEC 25051:2005 Software engineering -- Software product Quality Requirements and Evaluation (SQuaRE): requirements for quality of Commercial Off-The-Shelf (COTS) software products and instructions for testing.
ISO/IEC 25062:2006 Software engineering -- Software product Quality Requirements and Evaluation (SQuaRE): Common Industry Format (CIF) for usability test reports.
ISO/TR 16982:2002 Ergonomics of human-system interaction: usability methods supporting human-centred design.
29 International Standards Specific for Icons
ISO Icon Symbols & Functions - General
ISO Object Icons
ISO Pointer Icons
ISO Control Icons
ISO Tool Icons
ISO Action Icons
ISO Icons for Controlling Multimedia Software
ISO Icons for World Wide Web Browser Toolbars
30 International Standards
ISO 9241 Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs): 32 parts covering all aspects of usability (hardware, software, processes).
Part 11: Guidance on Usability.
Read the ISO 9241 Bluffer Guide for more information on standards.
34 Evaluation
‘Evaluation is concerned with gathering data about the usability of a design or product by a specified group of users for a particular activity within a specified environment or work context’ (Preece, 1994).
Evaluation is central to designing interactive systems: everything gets evaluated at every step of the process.
For example, requirements are evaluated, storyboards are evaluated and a prototype is built. The prototype is then evaluated, some aspect of a physical design is identified and implemented, and this is evaluated again; the iteration continues until the final product is complete.
35 Why do we Evaluate?
‘Users will evaluate your interface sooner or later.’
Some of the good reasons why evaluation should be done:
1. To suggest improvements to the design, so that it fits better with the tasks it is to perform, the work environment in which it will be used, and the preferred ways of working of the target users.
2. To confirm that the software meets all of the functional and usability specifications that were made at the beginning of (and throughout) the development process.
3. To confirm acceptability of the interface and/or supporting materials.
4. To compare alternative designs to determine which is ‘best’.
5. To ensure that it meets the expectations of users and customers, and will not damage your competitive reputation in the marketplace.
6. To see that it matches, or exceeds, the usability of competitors’ products and/or of any earlier version of the same package.
7. To ensure that it complies with standards and any statutory requirements (such as those incorporated into EU legislation).
36 Evaluation is Often Performed Badly
Reasons for poor evaluation include:
1. Designers assume that their own personal behaviour in using the software is representative of that of the average user; if they can use the software easily and effectively, they think anyone will be able to do so.
2. Designers make assumptions about how people are able to perform with the software, but these assumptions may well be unfounded.
3. Some companies develop a standard style and layout (style guidelines) for their software and assume that following these guidelines ensures good design. However, a style developed for one kind of product and one set of users will not necessarily succeed in a different product for a different user base.
4. Evaluation is often postponed until ‘a more convenient time’ while developers concentrate on achieving the functional requirements. Since this usually takes longer than planned, by the time they think of evaluation, time and budget are running out and the job is skimped.
5. Many software designers have poor knowledge of evaluation techniques and lack expertise in analysing experiments, due to a poor level of training in this area.
37 What Do We Evaluate?
Usability (against criteria)
Initial designs (pre-implementation)
Interfaces/interaction (against heuristics)
Prototypes at various stages
The final implementation of the software system
Documentation
38 Types of Evaluation
Formative Evaluation
Evaluation within the design process.
Produces good usability through a process of evolution, forming and re-forming the product.
Informal, structured.
Summative Evaluation
Takes the finished system and assesses it for aspects of usability.
Experiments are carried out after implementation; the purpose is quality control.
Formal, costly and time-consuming.
39 Formative Evaluation
The methods range along a cost axis, from cheap to expensive:
Ask the experts (no users): Cognitive Walkthrough, Heuristic Evaluation (cheap).
Ask the users: focus groups, questionnaires, interviews.
User interaction: Think Aloud, Co-operative Think Aloud (expensive).
40 Types of Data
Depending on the method used, evaluation can result in the collection of either quantitative or qualitative data.
Quantitative data: ‘objective’ measures of certain factors obtained by direct observation, e.g. the time to complete certain tasks, accuracy of recall, or the number of errors made. User performance, and even more ‘subjective’ attitudes, can be recorded in numerical form, e.g. using rating scales.
Qualitative data: ‘subjective’ responses; opinions rather than measurements. Reports and opinions may be categorized in some way but cannot be reduced to numerical values, which makes them more difficult to analyse. Despite the lack of objective measurement, such data are just as useful to the design team in developing software that people will actually want to use.
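Quantitative measures of the kind named above reduce to simple arithmetic over the raw observations. The data values below are invented purely for illustration:

```python
# Sketch: reducing raw observations to quantitative usability measures
# (mean completion time, mean error count). All values are invented.

times = [42.0, 55.5, 38.2, 61.3]    # seconds to complete each task
errors = [0, 2, 1, 3]               # errors made in each task

mean_time = sum(times) / len(times)
error_rate = sum(errors) / len(errors)
```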
41 Recording Methods
Paper and pencil: cheap, but limited by writing speed.
Audio: good for think aloud, but difficult to match with other protocols.
Video: accurate and realistic, but needs special equipment.
Computer logging: automatic and unobtrusive, but produces large amounts of data that are difficult to analyse.
User notebooks: coarse and subjective, but give useful insights; good for longitudinal studies.
In practice a mix of methods is used. Audio/video transcription is difficult and requires skill, although some automatic support tools are available.
Most of these methods require data gathering from at least one representative user at a time, so they are properly used for what we termed empirical evaluation.
Explanation of each of the tools will be provided in class.
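Computer logging, mentioned above, amounts to timestamping each user event and storing it for later analysis. The sketch below is illustrative only; the event names and timings are invented:

```python
# Sketch: unobtrusive computer logging. Each user event is timestamped
# and appended to a log for later analysis. Event names are invented.

import time

log = []

def record(event, t=None):
    """Append (timestamp, event); defaults to the current monotonic clock."""
    log.append((t if t is not None else time.monotonic(), event))

record("menu_open", t=0.0)
record("item_select", t=1.2)
record("dialog_ok", t=3.5)

# A simple analysis: total time from first to last event.
elapsed = log[-1][0] - log[0][0]
```

Even this trivial log already yields a quantitative measure (task duration); the difficulty noted in the slide is that real logs contain vastly more events than this.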
42 Cognitive Walkthrough
An ‘expert’ simulates the user’s actions and thoughts, stepping through the action sequence needed to complete a task. The expert has a rough plan and explores the system for possible actions, interpreting the system’s responses and assessing whether each step is good for a new user: are the actions appropriate and visible? Is the feedback adequate?
It suits systems primarily learned by exploration, i.e. ‘walk-up-and-use’ systems such as ATMs and ticket machines.
The overall question: how successfully does this design guide the unfamiliar user through the performance of the task?
This is an ‘analytic’ evaluation technique, carried out by the designer, or an ‘expert’ user, who attempts to simulate the way an average user would react to the system. The designer must make sure that it is obvious, just from looking at the interface, what action should be taken at each stage, and that sufficient, clear feedback shows the user that they have done the correct thing.
There is no concern to measure, for example, the speed with which the user can perform the action, though the designer might want to predict likely error rates and their consequences, since these could greatly affect the user’s ‘satisfaction’ with the system.
To apply the technique, the designer selects a particular task and breaks it down into its constituent subtasks. For example, in an automated ticket machine the task might be described as ‘Purchase a ticket (single or return) to a particular destination’, which breaks down into:
1. Choose destination
2. Indicate journey type (single/return)
3. Deposit money
4. Get ticket
The designer ‘walks through’ these stages in sequence, and at each one asks:
(a) Will it be obvious to the user what action to take next?
(b) Will the user correctly interpret the description given for the correct action, and connect it with what they are trying to achieve?
(c) Will the system give feedback that the user will interpret correctly?
These three questions are asked at every stage of the task; any ‘No’ answer indicates where the designer needs to improve the interface design.
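The walkthrough record for the ticket-machine example can be sketched as a small data structure; the Yes/No answers below are invented for illustration:

```python
# Sketch: a cognitive-walkthrough record. Each subtask gets a (Yes/No)
# answer to the three questions; any "No" flags a design problem.
# The answers here are invented for illustration.

QUESTIONS = ("obvious next action?",
             "action matches user goal?",
             "feedback interpretable?")

subtasks = {
    "choose destination": (True, True, True),
    "indicate journey type": (True, False, True),   # button label unclear
    "deposit money": (True, True, False),           # no feedback on coins
    "get ticket": (True, True, True),
}

# Collect every (subtask, question) pair that received a "No".
problems = [(task, QUESTIONS[i])
            for task, answers in subtasks.items()
            for i, ok in enumerate(answers) if not ok]
```

The `problems` list is exactly the walkthrough output the notes describe: the places where the design must be improved.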
43 Heuristic Evaluation
A ‘heuristic’ can be defined as a ‘rule of thumb’ or general rule; the idea is to assess the design against known criteria, such as Nielsen’s 10 Heuristics. In evaluating an interface, heuristics might be derived from the general guidelines used in the system’s design, taking the form of statements like ‘prevent errors’ or ‘provide feedback’.
A number of reviewers (3-5) go through the product, screen by screen, note any violations of these principles, and assign each a severity rating (0-4). All responses are then collected and aggregated.
Some studies have shown that about five reviewers can find about 75% of the problems found by 15 reviewers, so the method is very economical for discovering many of the major problems with an interface, especially when used in conjunction with other evaluation methods that can detect the problems it misses.
Refer to Preece et al. (1994), p. 676, for further explanation.
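The collect-and-aggregate step can be sketched as follows. The findings and severity scores here are invented, and merging duplicates to the maximum severity is one plausible aggregation policy, not the only one:

```python
# Sketch: aggregating heuristic-evaluation findings. Each reviewer reports
# (heuristic violated, severity 0-4); duplicate findings across reviewers
# merge to the highest reported severity. All data invented.

reviews = [
    [("visibility of system status", 3), ("error prevention", 2)],
    [("visibility of system status", 4)],
    [("help and documentation", 1), ("error prevention", 2)],
]

aggregated = {}
for reviewer in reviews:
    for heuristic, severity in reviewer:
        aggregated[heuristic] = max(severity, aggregated.get(heuristic, 0))
```

Sorting `aggregated` by severity then gives the design team a prioritised problem list.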
44 3. Focus Groups
A group of users and an evaluator.
A structured but flexible group interview that allows interaction between users.
Typically used for requirements gathering rather than for system use, but can also be used post-task.
45 4. Questionnaire
Questions are fixed in advance and completed independently of the evaluator.
The purpose of the questionnaire, and the purpose of the information gathered, must be clear.
Question types: general, open-ended, scalar, multi-choice and ranked.
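Scalar questions are the easiest to analyse because each response is already a number. The sketch below scores a single 5-point scalar question; the responses are invented:

```python
# Sketch: scoring one 5-point scalar (rating) question from a usability
# questionnaire. Responses are invented for illustration.
# Scale: 1 = strongly disagree .. 5 = strongly agree.

responses = [5, 4, 4, 3, 5, 2]

mean_score = sum(responses) / len(responses)
# Share of respondents who agreed (rated 4 or 5).
agree_share = sum(1 for r in responses if r >= 4) / len(responses)
```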
46 5. Interviews
Asking users about their experience with a system: general questions first, followed by more detailed ones.
Needs careful planning, structured around some central questions.
May be structured, semi-structured or unstructured; requires both consistency and flexibility.
47 6. Think Aloud
User participation: the user is asked to talk through what he or she is doing whilst being observed, describing what is happening, why an action is taken, what they are trying to do, what they are thinking, and what their goal is.
The evaluator documents the actions and any problems found with the interface; actions and comments are recorded by the observer using paper notes, video or audio recording.
Typically an observer watches a small group (usually between 3 and 6) of articulate users as they work through specified tasks or benchmark tests, talking aloud as they work.
Advantages: any difficulties with the system are quickly highlighted, and the users’ comments are a very valuable source of information for the developers, particularly about qualitative aspects of the interface. The method can be used at any stage of development and gives rapid feedback which can quickly be incorporated into the next refinement of the system.
Disadvantages: the very fact of being observed can affect performance (up or down), so the test subjects might not act like ‘normal’ users (the ‘Hawthorne Effect’). The technique can also be cumbersome: many notes or long recordings are produced, and analysing them can take a long time and may involve more expensive experts, such as psychologists analysing body language observed on video. In addition, the designer depends on the test subjects being truthful and not holding anything back for fear of embarrassing either themselves or the designer.
48 6. Think Aloud
To increase the quality of the research, we must avoid the following effects during an observational study:
Hawthorne Effect: the user increases performance to please the observer.
Observer Bias: the observer only sees and records what they want to see.
Halo Effect: the observer’s judgement is influenced by another, separate, positive judgement.
49 7. Co-operative Think Aloud
A variation on think aloud in which the user and evaluator co-operate in identifying problems: the evaluator asks questions during the session, and the user can ask for clarification.
50 General Points
Evaluation is relevant throughout all stages of development, and different methods suit different stages. As a rule of thumb:
Early design: analytical methods.
Prototype development: observational/experimental methods.
Late development: survey methods.
A mix of objective and subjective measures is desirable.
Remember, we said earlier that evaluation can have two main purposes: formative evaluation is done throughout the development cycle to test ideas and find problems, with a view to improving the design; summative evaluation is done when the system is more or less complete, to check that it is at last ready for the market.
An example of a late survey method is to include a questionnaire, or a contact address, with a ‘beta version’ of the software and then invite comment, over a period of months, from people using the beta. This sort of method is often used with software such as Web browsers and other programs that can be downloaded from the Internet.