Week: 13 Human-Computer Interaction


Week: 13 Human-Computer Interaction
Waseem Iqbal, Assistant Professor, PhD Scholar (Adaptive Interface for Mobile Devices in User's Context)

Acknowledgement
Dr. Ibrar Hussain (Assistant Professor / HEC Approved Supervisor), PhD in Computer Science, Pervasive Computing Research Lab, Zhejiang University, China. Carried out six months of collaborative research with the HCI & SE group at the University of Colorado, Boulder, USA.
Based on Human-Computer Interaction (3rd Edition) by Alan Dix et al.

Design rules
Designing for maximum usability is the goal of interaction design.
- Principles of usability: general understanding
- Standards and guidelines: direction for design
- Design patterns: capture and reuse design knowledge

Design rules
Design rules are rules a designer can follow to increase the usability of the eventual software product. They can be classified along two dimensions: authority and generality. Authority indicates whether the rule must be followed in design or is only suggested. Generality indicates whether the rule can be applied to many design situations or is focused on a more limited application situation.

Types of design rules
- Principles: abstract design rules; low authority, high generality
- Standards: specific design rules; high authority, limited application
- Guidelines: lower authority than standards, more general application
(Design rules can be placed along the two axes of increasing authority and increasing generality.)

Principles to support usability
- Learnability: the ease with which new users can begin effective interaction and achieve maximal performance.
- Flexibility: the multiplicity of ways in which the user and system exchange information.
- Robustness: the level of support provided to the user in determining successful achievement and assessment of goal-directed behaviour.

Principles of learnability
- Predictability: determining the effect of future actions based on past interaction history (operation visibility)
- Synthesizability: assessing the effect of past actions (immediate vs. eventual honesty)

Principles of learnability (continued)
- Familiarity: how prior knowledge applies to the new system (guessability; affordance)
- Generalizability: extending specific interaction knowledge to new situations
- Consistency: likeness in input/output behaviour arising from similar situations or task objectives

Principles of flexibility
- Dialogue initiative: freedom from system-imposed constraints on input dialogue (system vs. user pre-emptiveness)
- Multithreading: ability of the system to support user interaction for more than one task at a time (concurrent vs. interleaving; multimodality)
- Task migratability: passing responsibility for task execution between user and system

Principles of flexibility (continued)
- Substitutivity: allowing equivalent values of input and output to be substituted for each other (representation multiplicity; equal opportunity)
- Customizability: modifiability of the user interface by the user (adaptability) or by the system (adaptivity)

Principles of robustness
- Observability: ability of the user to evaluate the internal state of the system from its perceivable representation (browsability; defaults; reachability; persistence; operation visibility)
- Recoverability: ability of the user to take corrective action once an error has been recognized (reachability; forward/backward recovery; commensurate effort)

Principles of robustness (continued)
- Responsiveness: how the user perceives the rate of communication with the system (stability)
- Task conformance: degree to which system services support all of the user's tasks (task completeness; task adequacy)

Using design rules
Design rules suggest how to increase usability; they differ in generality and authority (the two axes of increasing authority and increasing generality).

Standards
- set by national or international bodies to ensure compliance by a large community of designers
- require sound underlying theory and slowly changing technology
- hardware standards are more common than software standards
- high authority and low level of detail
- ISO 9241 defines usability as the effectiveness, efficiency and satisfaction with which users accomplish tasks
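The three components of the ISO 9241 definition of usability (effectiveness, efficiency, satisfaction) can be computed from simple session records. A minimal sketch; the session data, field layout and helper names are hypothetical, and a real study would define success criteria, timing and satisfaction scales per task:

```python
# Sketch of ISO 9241-style usability metrics (illustrative only).
# The session records and helper names below are hypothetical.

sessions = [
    # (task_completed, time_taken_seconds, satisfaction_rating_1_to_5)
    (True, 42.0, 4),
    (True, 55.0, 3),
    (False, 90.0, 2),
    (True, 38.0, 5),
]

def effectiveness(records):
    """Proportion of tasks completed successfully."""
    return sum(1 for done, _, _ in records if done) / len(records)

def efficiency(records):
    """Mean completion time over successful tasks only."""
    times = [t for done, t, _ in records if done]
    return sum(times) / len(times)

def satisfaction(records):
    """Mean satisfaction rating (e.g. one 1-5 scale item)."""
    return sum(s for _, _, s in records) / len(records)

print(effectiveness(sessions))  # 0.75
print(efficiency(sessions))     # 45.0
print(satisfaction(sessions))   # 3.5
```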

Guidelines
- more suggestive and general than standards
- many textbooks and reports are full of guidelines
- abstract guidelines are applicable during early life-cycle activities
- detailed guidelines (style guides) are applicable during later life-cycle activities
- understanding the justification for guidelines aids in resolving conflicts

Golden rules and heuristics
- "broad brush" design rules
- a useful checklist for good design
- better design using these than using nothing!
- different collections exist, e.g. Nielsen's 10 heuristics, Shneiderman's 8 golden rules, Norman's 7 principles

Shneiderman's 8 Golden Rules
1. Strive for consistency
2. Enable frequent users to use shortcuts
3. Offer informative feedback
4. Design dialogs to yield closure
5. Offer error prevention and simple error handling
6. Permit easy reversal of actions
7. Support internal locus of control
8. Reduce short-term memory load

Norman's 7 Principles
1. Use both knowledge in the world and knowledge in the head.
2. Simplify the structure of tasks.
3. Make things visible: bridge the gulfs of execution and evaluation.
4. Get the mappings right.
5. Exploit the power of constraints, both natural and artificial.
6. Design for error.
7. When all else fails, standardize.

HCI design patterns
- an approach to reusing knowledge about successful design solutions
- originated in architecture (Alexander)
- a pattern is an invariant solution to a recurrent problem within a specific context
- examples: Light on Two Sides of Every Room (architecture); Go Back to a Safe Place (HCI)
- patterns do not exist in isolation but are linked to other patterns in languages that enable complete designs to be generated

HCI design patterns (continued)
Characteristics of patterns:
- capture design practice, not theory
- capture the essential common properties of good examples of design
- represent design knowledge at varying levels: social, organisational, conceptual, detailed
- embody values and can express what is humane in interface design
- are intuitive and readable, and can therefore be used for communication between all stakeholders
- a pattern language should be generative and assist in the development of complete designs

Summary
- Principles for usability: repeatable design for usability relies on maximizing the benefit of one good design by abstracting out the general properties that can direct purposeful design.
- Success in designing for usability requires both creative insight (new paradigms) and purposeful, principled practice.
- Using design rules: standards and guidelines direct design activity.

Chapter 9: Evaluation techniques

Evaluation Techniques
Evaluation:
- tests the usability and functionality of a system
- occurs in the laboratory, in the field, and/or in collaboration with users
- evaluates both design and implementation
- should be considered at all stages of the design life cycle

Goals of Evaluation
- assess the extent of system functionality
- assess the effect of the interface on the user
- identify specific problems

Evaluating Designs
- Cognitive walkthrough
- Heuristic evaluation
- Review-based evaluation

Cognitive Walkthrough
The cognitive walkthrough is a usability evaluation method in which one or more evaluators work through a series of tasks and ask a set of questions from the perspective of the user. Its focus is on the system's learnability for new or infrequent users.
- proposed by Polson et al.
- evaluates a design on how well it supports the user in learning a task
- usually performed by an expert in cognitive psychology
- the expert 'walks through' the design to identify potential problems, using psychological principles
- forms are used to guide the analysis

Cognitive Walkthrough (continued)
For each task, the walkthrough considers:
- what impact will the interaction have on the user?
- what cognitive processes are required?
- what learning problems may occur?
The analysis focuses on goals and knowledge: does the design lead the user to generate the correct goals?

Heuristic Evaluation
- proposed by Nielsen and Molich
- usability criteria (heuristics) are identified
- the design is examined by experts to see if these are violated
Example heuristics:
- system behaviour is predictable
- system behaviour is consistent
- feedback is provided
Heuristic evaluation 'debugs' the design.
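In practice, evaluators in a heuristic evaluation commonly rate each violation for severity and the ratings are then aggregated across evaluators. A hypothetical sketch of that aggregation step; the findings data, severity scale usage and `mean_severity` helper are illustrative, not a prescribed procedure:

```python
# Sketch of aggregating heuristic-evaluation findings (illustrative).
# Each finding is (evaluator, heuristic violated, severity 0-4).

findings = [
    ("E1", "visibility of system status", 3),
    ("E2", "visibility of system status", 4),
    ("E1", "consistency and standards", 2),
    ("E3", "error prevention", 3),
]

def mean_severity(records):
    """Average the severity ratings per heuristic across evaluators."""
    by_heuristic = {}
    for _, heuristic, severity in records:
        by_heuristic.setdefault(heuristic, []).append(severity)
    return {h: sum(s) / len(s) for h, s in by_heuristic.items()}

for heuristic, score in sorted(mean_severity(findings).items()):
    print(f"{heuristic}: {score:.1f}")
```

Sorting by mean severity gives a simple priority order for fixing the violations found.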

Review-based evaluation
- results from the literature are used to support or refute parts of a design
- care is needed to ensure the results are transferable to the new design
Model-based evaluation
- cognitive models are used to filter design options, e.g. GOMS prediction of user performance
- design rationale can also provide useful evaluation information
- GOMS is a specialized human-information-processor model for human-computer interaction that describes a user's cognitive structure in four components: a set of Goals, a set of Operators, a set of Methods for achieving the goals, and a set of Selection rules for choosing among competing methods.
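One widely used member of the GOMS family is the Keystroke-Level Model, which predicts expert execution time by summing standard operator times. The operator values below are the classic Card, Moran and Newell approximations; the task decomposition itself is a hypothetical example:

```python
# Keystroke-Level Model (KLM) sketch -- one member of the GOMS family.
# Operator times are the classic Card, Moran & Newell approximations;
# the task decomposition below is a hypothetical example.

OPERATOR_TIME = {
    "K": 0.2,   # press a key or button (good typist)
    "P": 1.1,   # point with a mouse at a target
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operators):
    """Predicted execution time (seconds) for a sequence of KLM operators."""
    return sum(OPERATOR_TIME[op] for op in operators)

# e.g. "delete a file": think, point at icon, click, think, press Delete
task = ["M", "P", "K", "M", "K"]
print(round(klm_estimate(task), 2))  # 4.2
```

Such predictions let alternative designs be compared before anything is built, which is exactly the filtering role model-based evaluation plays.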

Evaluating through User Participation

Laboratory Studies
Advantages:
- specialist equipment available
- uninterrupted environment
Disadvantages:
- lack of context
- difficult to observe several users cooperating
Appropriate:
- if the system location is dangerous or impractical
- for constrained single-user systems
- to allow controlled manipulation of use

Field Studies
Advantages:
- natural environment
- context retained (though observation may alter it)
- longitudinal studies possible
Disadvantages:
- distractions
- noise
Appropriate:
- where context is crucial
- for longitudinal studies
A longitudinal study is an observational research method in which data are gathered for the same subjects repeatedly over a period of time; such projects can extend over years or even decades.

Evaluating Implementations
Requires an artefact: a simulation, a prototype, or a full implementation.

Experimental Evaluation
- controlled evaluation of specific aspects of interactive behaviour
- the evaluator chooses a hypothesis to be tested
- a number of experimental conditions are considered that differ only in the value of some controlled variable
- changes in a behavioural measure are attributed to the different conditions

Experimental Factors
- Subjects: who (representative, sufficient sample)
- Variables: things to modify and measure
- Hypothesis: what you'd like to show
- Experimental design: how you are going to do it

Variables
- Independent variable (IV): characteristic changed to produce different conditions, e.g. interface style, number of menu items
- Dependent variable (DV): characteristic measured in the experiment, e.g. time taken, number of errors

Hypothesis
- a prediction of the outcome, framed in terms of the IV and DV, e.g. "error rate will increase as font size decreases"
- null hypothesis: states there is no difference between conditions; the aim is to disprove this, e.g. null hypothesis = "error rate does not change with font size"

Experimental Design
Within-groups design:
- each subject performs the experiment under each condition
- transfer of learning is possible
- less costly and less likely to suffer from user variation
Between-groups design:
- each subject performs under only one condition
- no transfer of learning
- more users required
- user variation can bias results
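The transfer-of-learning problem in within-groups designs is usually countered by counterbalancing the order of conditions, for instance with a Latin square. A minimal sketch using a simple cyclic square; the condition names are hypothetical, and a fully balanced Latin square would additionally control first-order carry-over effects:

```python
# Counterbalancing sketch for a within-groups design: each subject group
# sees every condition, in a different (cyclically rotated) order, so
# transfer-of-learning effects are spread across conditions.

def latin_square_orders(conditions):
    """One order per subject group: row i is the condition list rotated by i."""
    n = len(conditions)
    return [conditions[i:] + conditions[:i] for i in range(n)]

orders = latin_square_orders(["menu", "toolbar", "shortcut"])
for row in orders:
    print(row)
# each condition appears once per row and once per position (column)
```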

Analysis of Data
Before you start any statistics:
- look at the data
- save the original data
The choice of statistical technique depends on the type of data and the information required.
Types of data:
- discrete: a finite number of values
- continuous: any value

Analysis: Types of Test
- parametric: assume a normal distribution; robust; powerful
- non-parametric: do not assume a normal distribution; less powerful; more reliable
- contingency table: classify data by discrete attributes; count the number of data items in each group
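When a normal distribution cannot be assumed, a randomization (permutation) test is a simple non-parametric option: repeatedly relabel the pooled data and count how often chance alone produces a mean difference as large as the one observed. A sketch with hypothetical task-completion times for two interface conditions:

```python
import random

# Sketch of a non-parametric randomization (permutation) test on the
# difference of means between two conditions. The completion times
# (seconds) under two interface styles are hypothetical.

group_a = [12.1, 10.4, 11.8, 13.0, 12.5]
group_b = [14.2, 15.1, 13.8, 14.9, 15.5]

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(a, b, trials=2000, seed=0):
    """Estimate how often a random relabelling of the pooled data gives a
    mean difference at least as extreme as the observed one."""
    rng = random.Random(seed)   # seeded for reproducibility
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / trials

p = permutation_test(group_a, group_b)
print(p)  # a small value suggests rejecting the null hypothesis
```

Because it makes no distributional assumption, this is a reasonable fallback when a parametric test's conditions are in doubt.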

Analysis of Data (continued)
What information is required?
- is there a difference?
- how big is the difference?
- how accurate is the estimate?
Parametric and non-parametric tests mainly address the first of these.

Experimental Studies on Groups
More difficult than single-user experiments. Problems with:
- subject groups
- choice of task
- data gathering
- analysis

Subject Groups
- a larger number of subjects means greater expense
- groups take longer to 'settle down' ... and show even more variation!
- difficult to timetable
- so: often only three or four groups are used

The Task
- must encourage cooperation
- perhaps involve multiple channels
Options:
- creative task, e.g. 'write a short report on ...'
- decision games, e.g. the desert survival task
- control task, e.g. the ARKola bottling plant

Data Gathering
- several video cameras plus direct logging of the application
Problems:
- synchronisation
- sheer volume!
One solution: record from each user's perspective.

Analysis
N.B. there is vast variation between groups.
Solutions:
- within-groups experiments
- micro-analysis (e.g. gaps in speech)
- anecdotal and qualitative analysis
- look at interactions between group and media
Controlled experiments may 'waste' resources!

Field Studies
Experiments are dominated by group formation. Field studies are more realistic:
- distributed cognition: work is studied in context
- real action is situated action
- the physical and social environment are both crucial
Contrast:
- psychology: controlled experiments
- sociology and anthropology: open study and rich data

Observational Methods
- Think aloud
- Cooperative evaluation
- Protocol analysis
- Automated analysis
- Post-task walkthroughs

Think Aloud
- the user is observed performing a task
- the user is asked to describe what he or she is doing and why, what he or she thinks is happening, etc.
Advantages:
- simplicity: requires little expertise
- can provide useful insight
- can show how the system is actually used
Disadvantages:
- subjective
- selective
- the act of describing may alter task performance

Cooperative Evaluation
- a variation on think aloud
- the user collaborates in the evaluation
- both user and evaluator can ask each other questions throughout
Additional advantages:
- less constrained and easier to use
- the user is encouraged to criticize the system
- clarification is possible

Protocol Analysis
- paper and pencil: cheap, but limited to writing speed
- audio: good for think aloud; difficult to match with other protocols
- video: accurate and realistic, but needs special equipment and is obtrusive
- computer logging: automatic and unobtrusive, but the large amounts of data are difficult to analyze
- user notebooks: coarse and subjective, but give useful insights; good for longitudinal studies
Mixed use in practice. Audio/video transcription is difficult and requires skill; some automatic support tools are available.
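Computer logging can be as simple as timestamping each interface event so the session can be reconstructed and analysed later. A minimal, hypothetical sketch (the event names are invented, and a real logger would persist events to disk and capture far more detail):

```python
import time

# Minimal sketch of computer logging for protocol analysis: timestamp
# each user-interface event so the session can be replayed or analysed
# later. The event names below are hypothetical.

class EventLog:
    def __init__(self):
        self.events = []

    def record(self, event, detail=""):
        """Append a (timestamp, event, detail) triple."""
        self.events.append((time.time(), event, detail))

    def duration(self):
        """Elapsed time between the first and last logged event."""
        return self.events[-1][0] - self.events[0][0] if self.events else 0.0

log = EventLog()
log.record("menu_open", "File")
log.record("menu_select", "Save As")
log.record("dialog_close")
print(len(log.events), "events over", round(log.duration(), 3), "seconds")
```

The "sheer volume" problem mentioned for logging shows up quickly: even this toy logger produces one record per interaction, so real analyses filter or aggregate before inspection.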

Automated analysis: EVA (Workplace project)
Post-task Walkthrough
- the user reflects on actions after the event
- used to fill in intention
Advantages:
- the analyst has time to focus on relevant incidents
- avoids excessive interruption of the task
Disadvantages:
- lack of freshness
- may be a post-hoc interpretation of events

Post-task Walkthroughs (continued)
- the transcript is played back to the participant for comment
- immediately: fresh in the mind
- delayed: the evaluator has time to identify questions
- useful for identifying reasons for actions and alternatives considered
- necessary in cases where think aloud is not possible

Query Techniques
- Interviews
- Questionnaires

Interviews
- the analyst questions the user on a one-to-one basis
- usually based on prepared questions
- informal, subjective and relatively cheap
Advantages:
- can be varied to suit the context
- issues can be explored more fully
- can elicit user views and identify unanticipated problems
Disadvantages:
- very subjective
- time consuming

Questionnaires
- a set of fixed questions given to users
Advantages:
- quick, and reaches a large user group
- can be analyzed more rigorously
Disadvantages:
- less flexible
- less probing

Questionnaires (continued)
Need careful design:
- what information is required?
- how are the answers to be analyzed?
Styles of question:
- general
- open-ended
- scalar
- multi-choice
- ranked
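Scalar (e.g. Likert) responses are ordinal data, so the median and the response distribution are usually safer summaries than the mean. A small sketch with hypothetical ratings for one questionnaire item:

```python
# Sketch of analysing a scalar (Likert-style) questionnaire item.
# Ratings are ordinal, so the median and the response distribution are
# usually safer summaries than the mean. The responses are hypothetical.

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]  # 1 = strongly disagree .. 5 = strongly agree

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def distribution(xs, scale=range(1, 6)):
    """Count of each rating point on the scale."""
    return {point: xs.count(point) for point in scale}

print(median(responses))        # 4.0
print(distribution(responses))  # {1: 0, 2: 1, 3: 2, 4: 4, 5: 3}
```

Reporting the full distribution also makes "can be analyzed more rigorously" concrete: it shows polarisation that a single average would hide.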

Physiological Methods
- Eye tracking
- Physiological measurement

Eye Tracking
- head- or desk-mounted equipment tracks the position of the eye
- eye movement reflects the amount of cognitive processing a display requires
Measurements include:
- fixations: the eye maintains a stable position; their number and duration indicate the level of difficulty with a display
- saccades: rapid eye movements from one point of interest to another
- scan paths: moving straight to a target with a short fixation at the target is optimal
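Fixations are typically recovered from raw gaze samples by a dispersion-based method in the spirit of the I-DT algorithm: consecutive samples that stay within a small spatial spread are grouped into one fixation. A simplified sketch; the gaze coordinates, thresholds and units are hypothetical:

```python
# Simplified dispersion-based fixation detection (in the spirit of the
# I-DT algorithm): consecutive gaze samples that stay within a small
# spatial dispersion are grouped into one fixation. The gaze samples
# and thresholds below are hypothetical.

def dispersion(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=10.0, min_length=3):
    """Return (start_index, end_index) pairs of fixation windows."""
    fixations = []
    i = 0
    while i <= len(samples) - min_length:
        j = i + min_length
        if dispersion(samples[i:j]) <= max_dispersion:
            # grow the window while the samples stay tightly clustered
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

gaze = [(100, 100), (102, 101), (101, 99), (103, 100),   # first cluster
        (300, 200), (301, 202), (299, 201), (300, 199)]  # second cluster after a saccade-like jump
print(detect_fixations(gaze))  # [(0, 3), (4, 7)]
```

The count and length of the detected windows correspond directly to the "number and duration" of fixations mentioned above.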

Physiological Measurements
- emotional response is linked to physical changes
- these may help determine a user's reaction to an interface
Measurements include:
- heart activity, including blood pressure, volume and pulse
- activity of the sweat glands: galvanic skin response (GSR)
- electrical activity in muscle: electromyogram (EMG)
- electrical activity in the brain: electroencephalogram (EEG)
There is some difficulty in interpreting these physiological responses; more research is needed.

Choosing an Evaluation Method
- when in the process: design vs. implementation
- style of evaluation: laboratory vs. field
- how objective: subjective vs. objective
- type of measures: qualitative vs. quantitative
- level of information: high level vs. low level
- level of interference: obtrusive vs. unobtrusive
- resources available: time, subjects, equipment, expertise