User Centered Design: Intro to UCD
- Explain its motivation
- Discuss key stages in the process
- Present basic methods and techniques


UCD is about designing interactive technologies to meet users' needs. It has distinct stages:
- understanding user needs
- establishing requirements
- prototyping alternative designs
- evaluating designs

Key characteristics of any UCD process:
- Focus on users early in the design and evaluation of the artefact
- Identify, document and agree specific usability and user experience goals
- Iteration is inevitable: designers never get it right the first time

Why involve users? Around 63% of software projects exceed their cost estimates. The top four reasons are:
- Frequent requests for changes from users
- Overlooked tasks
- Users' lack of understanding of their own requirements
- Insufficient user analysis, communication, and understanding

If you involve end users in the design process, you are more likely to design and build something useful! Involving users can:
- Improve productivity
- Reduce human error
- Reduce maintenance
- Reduce employee turnover
- Manage expectations
- Encourage ownership of the solution
- Build understanding of shortcomings/tradeoffs
- Increase satisfaction

Principles of the UCD approach
- Users' behaviour and context of use are studied, and the product is designed to support them
- User needs and pain points are understood as opportunities for design
- User characteristics are captured and designed for

- Users are consulted from early concept phases, throughout design, to the final product
- Responses to concepts, prototypes, etc. are taken seriously
- All design decisions are taken within the context of the user, their work and their environment
- All design iterations can be traced back to user goals

- Not just what users say, but what they do
- How action and interaction are achieved
- Interest in the 'mundane', taken-for-granted, moment-by-moment interactions of people
Outputs: rich descriptions, which need interpreting through conceptual frameworks, models, etc.

A range of user research methods:
- observation
- interview
- questionnaire
- focus groups
- participant analysis

Interviews
- A forum for talking to people
- Structured, unstructured or semi-structured
- Props (e.g. sample scenarios of use, prototypes) can be used in interviews
- Good for exploring specific issues
- But time consuming, and it may be unfeasible to visit everyone

Questionnaires
- A series of questions designed to elicit specific information
- Questions may require different kinds of answers: a simple YES/NO, a choice of pre-supplied answers, or a comment
- Often used in conjunction with other techniques
- Can give quantitative or qualitative data
- Good for answering specific questions from a large, dispersed group of people
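The quantitative side of questionnaire data can be summarised with basic descriptive statistics. A minimal Python sketch, assuming hypothetical 5-point Likert-scale responses to a single question (all data invented for illustration):

```python
from statistics import mean, median

# Hypothetical responses on a 5-point Likert scale
# (1 = strongly disagree, 5 = strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4, 4, 3, 5]

summary = {
    "n": len(responses),
    "mean": mean(responses),
    "median": median(responses),
    # Proportion agreeing (rating 4 or 5), a common way to report Likert data.
    "agree_pct": 100 * sum(1 for r in responses if r >= 4) / len(responses),
}
print(summary)  # {'n': 10, 'mean': 3.9, 'median': 4.0, 'agree_pct': 70.0}
```

Qualitative (comment) answers need the same free-text coding as interview notes.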

Focus groups (or workshops)
- Group interviews
- Good at gaining a consensus view and/or highlighting areas of conflict

Why 'establish' requirements?
- Requirements arise from understanding users' needs
- Requirements should be justified and related to data

Establishing requirements What do users want? What do users 'need'? Requirements need clarification, refinement and completion over several iterations of the design process. A focused problem definition, established by analysing the user data, will lead to a stable list of requirements.

Types of requirements:
- Users: who are they?
- Usability and user experience qualities
- Environment or context of use
- Functional
- Data

Users
- Characteristics: abilities, physical attributes, background, attitude to computers, etc.
- System use:
  - Novice: step-by-step, constrained, clear information
  - Expert: flexibility, access/power
  - Frequent: short cuts
  - Casual/infrequent: clear instructions, e.g. menu paths

Usability and User Experience Requirements Effectiveness, efficiency, safety, privacy, utility, learnability, memorability (and fun, helpfulness), etc.

Environment or context of use:
- Physical: dusty? noisy? vibration? light? heat? humidity? On the move? Layout of workspace?
- Social: sharing of files and displays, on paper, across great distances; working individually; privacy for clients; and informal information distribution among users
- Organisational: hierarchy, the IT department's attitude and remit, user support, communications structure and infrastructure, availability of training

Functional Historically the main focus of requirements activities: what should the system do? For example, train a new employee in how to carry out a task. Also covers memory size, response time, platform constraints...

Data What data or input is required to make the system function for the user, and how is it accessed?

Self-service cafeteria system at UL Functional: the system will calculate the cost of purchases without the help of cafeteria staff. Data: the system must have access to the prices of products.
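The cafeteria example can be made concrete with a tiny sketch: the functional requirement (calculate the cost of purchases) driven by the data requirement (access to product prices). Product names and prices are invented for illustration:

```python
# Data requirement: the system must have access to the price of products.
PRICES = {"soup": 2.50, "sandwich": 3.20, "coffee": 1.80}

def total_cost(items):
    """Functional requirement: calculate the cost of a tray of
    purchases without the help of cafeteria staff."""
    return round(sum(PRICES[item] for item in items), 2)

print(total_cost(["soup", "coffee"]))  # 4.3
```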

Environmental: most users will be carrying a tray, in a rush, in a noisy environment, talking/distracted, queueing, etc. User: most users are comfortable with technology. Usability: simple for new users, memorable for frequent users, quick to use, no waiting around for processing.

Evaluation Of either A: an existing system, or B: a new design. A continuous, iterative process examining early prototypes of the new system, then later, more complete prototypes and the product.

Looking for: extent of functionality; effect of the interface on the user; specific problems/issues; usability, user experience, and other objectives, e.g. performance. Designers need to check that they understand users' requirements and are meeting key objectives.

When to evaluate At all stages throughout design, from the first descriptions, sketches, etc. of users' needs through to the final product. Iterative cycles of 'design - EVALUATE - redesign'.

Two main types of evaluation, reflecting different stages and goals: formative and summative. Techniques involving users: 'quick' feedback, usability testing, field studies. Techniques involving experts: predictive evaluation.

Quick What is it? Informally asking users/consultants for feedback. Advantages: can be done any time, anywhere; emphasis on fast input to the design process; sense-checking whether early design ideas make sense; early concept testing.

Quick Disadvantages/issues: not necessarily accurate or extensive; no careful documentation of findings.

Usability Testing Recording typical users' performance on typical tasks, in a controlled setting; can be in the lab or in the field. Data: write-ups, video, logs of key presses, time to complete tasks, etc.
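The logged data mentioned here (timestamped events and time on task) needs only a very simple recorder. A sketch, assuming the test harness reports each notable user action to it (class and method names are invented):

```python
import time

class SessionLog:
    """Minimal usability-test log: timestamped events plus time-on-task."""

    def __init__(self):
        self.start = time.monotonic()
        self.events = []  # list of (seconds_since_start, description) pairs

    def record(self, description):
        self.events.append((time.monotonic() - self.start, description))

    def time_on_task(self):
        """Elapsed seconds from session start to the last recorded event."""
        return self.events[-1][0] if self.events else 0.0

log = SessionLog()
log.record("task started")
log.record("pressed 'Search'")
log.record("task completed")
print(len(log.events), round(log.time_on_task(), 3))
```

In a real session this would sit alongside video and written notes rather than replace them.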

Usability Testing Advantages – Uninterrupted; can assess performance, identify errors and help explain why users did what they did – Can be used with satisfaction questionnaires and interviews to elicit user opinions

Usability Testing Disadvantages/issues: lack of context; skill is needed to determine typical users and typical tasks; time to set up tests, recruit participants, and run tests; need access to resources/equipment.

Field Studies What is it? Observations and interviews in natural settings. Advantages: helps understand what users do naturally and how technology impacts them in context. For product design: identify opportunities; determine design requirements; decide how best to introduce technology; evaluate it in use.

Field Studies Disadvantages/issues: access to settings; lack of control (noise, distractions, timetabling, etc.).

Think-Aloud and Cooperative Evaluation

Think-Aloud Evaluation "Think-aloud" is an evaluation technique in which the user performs a number of tasks and is asked to think aloud, explaining what they are doing at each stage, and why.

The evaluator records the user's actions using tape recordings, video, computer logging, and user notes.

Advantages Think-aloud has the advantage of simplicity: it requires little expertise to perform, and can provide useful insight into any problems with an interface.

Disadvantages The information is necessarily subjective, and can be selective depending on the tasks chosen. Being observed and having to describe what you are doing can also affect the way in which you do it: ask a juggler to describe how she juggles…

Cooperative evaluation "Cooperative evaluation" is a variant of think-aloud in which the user is encouraged to see themselves as a collaborator in the evaluation rather than just a subject. As well as getting the user to think aloud, the evaluator can ask such questions as "Why?" and "What if…?"; likewise, the user can ask the evaluator for clarification if problems arise.

Advantages It is less constrained and therefore easier for the evaluator, who is not forced to sit in solemn silence; the user is encouraged to actively criticise the system rather than simply suffer it; and the evaluator can clarify points of confusion, thus maximising the effectiveness of the approach. Note that it is often not the designer who is the evaluator, but an independent person.

One problem with both of these techniques is that they generate a large volume of information, which has to be painstakingly and time-consumingly analysed. The record of an evaluation session is known as a protocol, and there are a number of ways to capture one: pen and paper, audio and video recording, computer logging, and user diaries, e.g. a 'pen and paper' protocol…

How to run a session As the evaluator, spend a few minutes thinking of some scenarios and tasks for the user to perform. Include some complex tasks as well as some simple ones. Decide whether you are going to use think-aloud or cooperative evaluation. Then run the evaluation, keeping full notes of the user's actions, behaviour, and any problems. The fuller the detail in the protocol, the better.

Recruit Users Define the target group, and recruit users who are similar to it. Describe the users: background, age, sex, familiarity with computers, etc.

Prepare tasks Are the tasks specific? Will the task focus the user on the part of the system you are most interested in? How do you know that the task you have chosen represents the work the product is designed to support? Have you written the task down in a way that a novice user can understand?

Before the user arrives… Have you got the evaluation space prepared: system/device, tasks, debriefing questions, etc.? Have you worked through the tasks yourself?

When the user arrives… Put the user at ease. Explain the cooperative design process. Explain the user's role as a collaborator CLEARLY.

While the user is using the system… Keep them talking! Make sure you know what they are doing and why they are doing it. Do not contradict them. Make notes.

People tend to say less when they are unsure what to do, but this is the time when the evaluator needs to know most. You must encourage the user to keep explaining which buttons they would press and when, showing you what, if anything, would happen to the display and what they expect should happen. (If you are using cooperative evaluation, you can discuss it with them, but for think-aloud you have to just accept what is presented.)

Debriefing the users… Ask the users what the best and worst features are. Engage the user in a discussion of the system; remember that they are your collaborator in evaluating it. Ask the users what they thought of the evaluation session.

Summarise your observations 1. Collate your notes; the evaluation team should do this together. 2. Focus on unexpected behaviour and comments on the usability of the system. 3. Think of users' problems as symptoms of bad design. 4. Decide which problems should be fixed. 5. Make recommendations.
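The collation step amounts to tallying how often each problem recurred across sessions; problems that recur in several sessions are the strongest candidates to fix. A sketch with invented notes:

```python
from collections import Counter

# Hypothetical problem notes collated from three evaluation sessions.
notes = [
    "missed the 'Save' button",
    "unsure which menu to open",
    "missed the 'Save' button",
    "label text too small",
]

# Tally recurring problems: symptoms seen in several sessions usually
# point at the same underlying design flaw.
counts = Counter(notes)
for problem, n in counts.most_common():
    print(f"{n}x {problem}")
```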

Real-world situations Sometimes it is necessary to set up evaluations in a context similar to the one in which the system is designed to be used, e.g. mobile technology. Where appropriate, involve two or three users in one session, if this reflects a real-world situation, e.g. watching a DVD.

Project Four persons per group. Due date: WK 9 (Monday 22nd Nov, 5pm). An Evaluation Report ( words), worth 40% of the overall module mark.

Submit group names and topic to me by email (once per group), subject heading 'Co-Operative Evaluation', by Friday 5th November. Anyone without a group, please attend the tutorial on Thursday 4th Nov at 3pm in SG20.

Co-Operative Evaluation of an Interactive System or Object Write a report outlining; 3 evaluation sessions, evaluating one system. Describe; The System or Object to be evaluated The evaluation technique The Users The Session Set-up The Tasks The Observations The Recommendations.

Included in the Appendix… I want a brief outline from each student on their role in the evaluation sessions and write-up, and an example of the material from an evaluation.

Try to evaluate something relevant to your FYP! (Just a suggestion!)

PLEASE READ THE EXAMPLE SUPPLIED BEFORE STARTING YOUR PROJECT. Nokia “N-Gage” Game Deck posted at