© De Montfort University, 2001
Questionnaires

- Questionnaires contain closed questions (attitude scales) and open questions.
- Pre- and post-questionnaires obtain ratings on an issue before and after a design change.
- They can be used to standardise attitude measurement of single subjects following direct observation.
- They can be used to survey large user groups.
- Questionnaires are often badly designed, as their design is perceived as being trivial.

Types of rating scales

A simple checklist:

  Can you use the following edit commands?

               yes    no    don't know
  duplicate    ___    ___    ___
  paste        ___    ___    ___

Multipoint checklist

  Rate the usefulness of the duplicate command on the following scale:

  very useful  __  __  __  __  __  of no use

Likert scale

A statement of opinion to which the subject expresses their level of agreement:

  "Computers can simplify complex problems"

  very much agree | agree | slightly agree | neutral | slightly disagree | disagree | strongly disagree
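Before analysis, responses on an agreement scale like this are usually coded numerically. A minimal sketch in Python; the 1–7 coding direction and the sample responses are illustrative assumptions, not part of the lecture:

```python
# Map the seven agreement labels to numeric scores
# (1 = strongly disagree ... 7 = very much agree; the direction is a design choice).
LIKERT_SCORES = {
    "strongly disagree": 1,
    "disagree": 2,
    "slightly disagree": 3,
    "neutral": 4,
    "slightly agree": 5,
    "agree": 6,
    "very much agree": 7,
}

def mean_likert(responses):
    """Average the numeric scores for one statement across respondents."""
    scores = [LIKERT_SCORES[r] for r in responses]
    return sum(scores) / len(scores)

# "Computers can simplify complex problems"
responses = ["agree", "neutral", "very much agree", "slightly disagree"]
print(mean_likert(responses))  # 5.0
```

Treating the labels as equally spaced numbers is itself a modelling assumption; strictly, Likert responses are ordinal.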

Caution! What does 'strongly disagree' mean?

  "The help facility in system A is much better than the help facility in system B"

  very much agree | agree | slightly agree | neutral | slightly disagree | disagree | strongly disagree

The response 'very much agree' is clear: A is much better than B. 'Strongly disagree' could mean 'I think B is much better than A', but it could also mean 'I think there is no difference between A and B, and so I strongly disagree with the opinion stated in the question'.

Semantic differential scale

Uses a series of bipolar adjectives and obtains ratings with respect to each.

  Rate the Beauxarts drawing package on the following dimensions:

          extremely  quite  slightly  neutral  slightly  quite  extremely
  easy                                                             difficult
  clear                                                            confusing
  fun                                                              dreary

This type of question needs to be followed by an open-ended question where the user can explain any negative responses given. Simply knowing that the package is 'extremely difficult', without knowing why, is of limited value.

Rank order

  Place the following commands in order of usefulness (rank the most useful as 1, the least useful as 4):

  paste
  duplicate
  group
  clear

This question lacks task context, i.e. 'usefulness' for what? This doesn't matter if the question is asked after the user has completed a specific task, say as part of a user trial. The question would be fairly meaningless if it formed part of a general survey of user opinion of the interface to, say, a word-processing package: there are many tasks associated with document preparation that the word processor could be used for, and the usefulness of a command would depend on which tasks were being considered.
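Rank-order responses from several users are commonly summarised by the mean rank of each command. A small Python sketch; the sample rankings are invented for illustration:

```python
# Each respondent ranks the four commands 1 (most useful) to 4 (least useful).
rankings = [
    {"paste": 1, "duplicate": 2, "group": 3, "clear": 4},
    {"paste": 2, "duplicate": 1, "group": 4, "clear": 3},
    {"paste": 1, "duplicate": 3, "group": 2, "clear": 4},
]

def mean_ranks(rankings):
    """Average rank per command across respondents; lower = judged more useful."""
    commands = rankings[0].keys()
    return {c: sum(r[c] for r in rankings) / len(rankings) for c in commands}

# List commands from most to least useful on average.
for command, rank in sorted(mean_ranks(rankings).items(), key=lambda kv: kv[1]):
    print(f"{command}: {rank:.2f}")
```

With ordinal data like ranks, the mean is a convenient summary but a rank-based test (e.g. Friedman's) would be the more defensible analysis.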

Dos and don'ts of questionnaire evaluation

- Do be clear about the information you want to obtain: have a clear idea of what specifically you want information about, and ensure there are questions that directly address those issues.
- Don't risk subjects becoming demotivated, whether because they are not interested in the questionnaire or because it is too long.
- Don't be lazy: focus questions on the specific interface. Make sure that all questions apply to the interface being evaluated, and avoid 'not applicable' responses.
- Do provide specific task reference for questions: if questions ask for opinions about particular details of the use of the interface, ensure that the task context is clear.
- Don't assume that responses will be positive: although you may think that the interface is very good, the questionnaire has to be objective and allow for as many negative comments as positive. Ensure that there is sufficient opportunity for users to justify negative attitudes as well as positive ones.
- Do pilot the questionnaire first.

Planning and logistics of questionnaire design

- Quantitative or qualitative?
- Legal requirements: the Data Protection Act
- Confidentiality and anonymity
- Sample size
- Volunteer respondents
- Identifying subject areas
- Determining appropriate length
- Typical time scale
- Main components of questionnaires

Content of items

- Avoiding response set
- Components of attitudes
- Common types of faulty items:
  - leading questions
  - context effects
  - double-barrelled questions
  - vague and ambiguous terminology
  - hidden assumptions
  - social desirability

Leading questions and context effects

  "Would you agree that the government's policies on health are unfair?"

Item wordings should not contain value judgements.

  "How many pints of beer did you drink last night?"

Think about how the context of the study would affect the response, say in a survey of young people's lifestyles versus a survey of health behaviour and heart disease.

Double-barrelled questions

  "Do you believe the training programme was a good one and effective in teaching you new skills?"

Avoid questions that involve multiple premises.

Vague and ambiguous terminology

  "How often do you clean your teeth?"  frequently / often / infrequently / never

What does 'frequently' mean? Give quantifiers to ensure all respondents understand the same thing by the response categories.

Hidden assumptions, social desirability

  "When did you last borrow a video tape?"

Avoid hidden assumptions. What are these? Here, that the respondent ever borrows video tapes.

  "Do you ever give to charity?"

This may lead to a positive response, as otherwise something negative about the respondent is being conveyed.

User diaries

- Used with early releases of complete systems.
- People use the system as part of their normal work and keep a log of the tasks they have used the system for, and whether or not they were successful.
- Has the advantage of using real tasks, not contrived standard tasks.
- Requires that the system can support enough tasks to be useful to the person doing the evaluation.
- Requires input from the evaluator to maintain the person's motivation to keep the diary.

Observation and monitoring usage

- User trials
  - direct and indirect observation
  - verbal protocols
- Collecting user opinions
  - user diaries over periods of extended use
  - surveys
- Software logging
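Software logging, the last technique listed, can be as simple as appending a timestamped record of each user action to a file for later analysis. A minimal sketch; the event names and the line-delimited JSON format are assumptions for illustration, not prescribed by the lecture:

```python
import json
import time

class UsageLogger:
    """Append one timestamped JSON record per user action to a log file."""

    def __init__(self, path):
        self.path = path

    def log(self, event, **details):
        record = {"t": time.time(), "event": event, **details}
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

# Hypothetical instrumentation calls sprinkled through the interface code:
logger = UsageLogger("usage.log")
logger.log("command", name="duplicate")
logger.log("error", message="nothing selected")
```

One record per line keeps the log easy to parse incrementally; frequencies of commands and errors can then be tallied without re-running any trials.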

User trials - duration

- Aimed at observing people typical of the intended user group using the interface (often a prototype).
- People are usually volunteers; this limits the time available for each trial and is a serious constraint on what can be done. How much of your time would you give to help someone test a piece of software?
- Assume a total trial length of 30-45 minutes; this has to include introduction, demonstration, data collection and de-brief.
- If subjects are paid, then longer trials are possible.

User trials: structured tasks

One approach is to give subjects a series of standard tasks to complete using a prototype:

- observe subjects completing tasks under standardised conditions
- aim data collection at capturing qualitative descriptions of problems during task completion

The intention is to see whether different people encounter similar problems when using the interface. What problems are likely to arise in data recording?

Standard tasks in user trials

- Structure tasks in order of incremental difficulty (easy ones first).
- Have a clear policy on subjects becoming stuck and on providing help.
- Have a reason for including each task (avoid unnecessary duplication).
- Ensure (all) functional areas of interface usage are covered.
- Ensure tasks of sufficient complexity are included.

Example of standard tasks

- 'Find the time of the latest train service leaving Leicester that I can take next Tuesday to arrive in Dundee before 8.00 pm.'
- 'Find the cost of a return ticket for 2 adults and 2 children for the journey from Leicester to Bristol with no discounts such as saver or supersaver.'
- 'Find how many copies of Preece, "Human-Computer Interaction", the library currently holds.'

Note: each task has a definite end point; the user can provide the answer to the question, which is either correct or incorrect.

Unstructured user trials

- Another approach to user trials is to ask the user to browse through information; this is more appropriate for websites or multimedia presentations.
- Browsing behaviour is directed by the user's interest rather than by a request to retrieve a specific piece of information.
- There is no guarantee that the subject visits all parts of the application or site; how much of the site they visit is often useful information in itself.
- Requires that the subject is actually interested in the application or site.

Indirect observation - video

- Enables post-session debriefing 'talk-through' (post-event protocols).
- Enables quantitative data to be extracted, e.g. part-task timings.
- Serves as a diary and visual record of problems.
- Usually very time-consuming to analyse.
- Usability laboratories provide facilities to administer standard tasks, record data and analyse them.
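Extracting part-task timings from a recorded session amounts to differencing the timestamps at which each task starts and ends on the video. A small sketch; the task names and timestamps are invented for illustration:

```python
# (task, start_seconds, end_seconds) noted while reviewing the recording.
segments = [
    ("find latest train to Dundee", 12.0, 95.5),
    ("find return ticket cost", 101.0, 230.0),
]

def task_timings(segments):
    """Return the duration of each part task in seconds."""
    return {task: end - start for task, start, end in segments}

for task, secs in task_timings(segments).items():
    print(f"{task}: {secs:.1f} s")
```

Timings like these make the qualitative video record comparable across subjects, at the cost of the analysis time the slide warns about.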

Verbal protocols

- A means of enhancing direct observation: the user articulates what they are thinking during task completion (think-aloud protocols).
- But doing this can alter normal behaviour: the subject is likely to stop talking when undertaking complex cognitive activities, and the user may rationalise behaviour in post-event protocols.
- Getting subjects working in pairs (co-discovery) can overcome some of these problems.

Think about driving a car: when the task of driving is not demanding, the driver can normally hold a conversation with a passenger. As soon as something happens which demands the driver's attention, conversation automatically stops while the driver attends to the driving task, and resumes when the situation has passed.

Post-event protocols occur when a user, say, watches a recording of an interaction session and talks through what they were thinking during the session. This needs to take place immediately after the session. A variation on this is where the investigator selects parts of the recorded session which appear to have caused the user problems, and the reasons for the apparent difficulty are talked through. Post-event analysis can add considerable time to an evaluation session.